diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..664870ca63b6231637089dd6e90266e1d29cf156 Binary files /dev/null and b/.DS_Store differ diff --git a/.gitattributes b/.gitattributes index 446f525834d468502f01f4dba832c763454a686f..2e1ac0060e8d29d47a5cba444504e4c55acdac51 100644 --- a/.gitattributes +++ b/.gitattributes @@ -762,3 +762,4 @@ zeroshot[[:space:]]and[[:space:]]fewshot[[:space:]]learning[[:space:]]for[[:spac zeroshot[[:space:]]and[[:space:]]fewshot[[:space:]]video[[:space:]]question[[:space:]]answering[[:space:]]with[[:space:]]multimodal[[:space:]]prompts.pdf filter=lfs diff=lfs merge=lfs -text zeroshot[[:space:]]information[[:space:]]extraction[[:space:]]via[[:space:]]chatting[[:space:]]with[[:space:]]chatgpt.pdf filter=lfs diff=lfs merge=lfs -text zeroshot[[:space:]]nuclei[[:space:]]detection[[:space:]]via[[:space:]]visuallanguage[[:space:]]pretrained[[:space:]]models.pdf filter=lfs diff=lfs merge=lfs -text +*.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/arxiv_papers_for_human_review.csv b/arxiv_papers_for_human_review.csv new file mode 100644 index 0000000000000000000000000000000000000000..241cf5830b7eec20314621cbd49ee1396ccb77ea --- /dev/null +++ b/arxiv_papers_for_human_review.csv @@ -0,0 +1,30294 @@ +title,firstAuthor,url,dateSubmitted,keywords,pdf_titles,abstract +"""Do Anything Now"": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models",Xinyue Shen,http://arxiv.org/pdf/2308.03825v1.pdf,2023-08-07,"['cs.cr', 'cs.lg']",2308.03825v1.pdf," The misuse of large language models (LLMs) has garnered significant attention +from the general public and LLM vendors. In response, efforts have been made to +align LLMs with human values and intent use. However, a particular type of +adversarial prompts, known as jailbreak prompt, has emerged and continuously +evolved to bypass the safeguards and elicit harmful content from LLMs. In this +paper, we conduct the first measurement study on jailbreak prompts in the wild, +with 6,387 prompts collected from four platforms over six months. Leveraging +natural language processing technologies and graph-based community detection +methods, we discover unique characteristics of jailbreak prompts and their +major attack strategies, such as prompt injection and privilege escalation. We +also observe that jailbreak prompts increasingly shift from public platforms to +private ones, posing new challenges for LLM vendors in proactive detection. To +assess the potential harm caused by jailbreak prompts, we create a question set +comprising 46,800 samples across 13 forbidden scenarios. Our experiments show +that current LLMs and safeguards cannot adequately defend jailbreak prompts in +all scenarios. Particularly, we identify two highly effective jailbreak prompts +which achieve 0.99 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and +they have persisted online for over 100 days. Our work sheds light on the +severe and evolving threat landscape of jailbreak prompts. We hope our study +can facilitate the research community and LLM vendors in promoting safer and +regulated LLMs. +" +Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study,Yi Liu,http://arxiv.org/pdf/2305.13860v1.pdf,2023-05-23,"['cs.se', 'cs.ai', 'cs.cl']",2305.13860v1.pdf," Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential +but also introduce challenges related to content constraints and potential +misuse. 
Our study investigates three key research questions: (1) the number of +different prompt types that can jailbreak LLMs, (2) the effectiveness of +jailbreak prompts in circumventing LLM constraints, and (3) the resilience of +ChatGPT against these jailbreak prompts. Initially, we develop a classification +model to analyze the distribution of existing prompts, identifying ten distinct +patterns and three categories of jailbreak prompts. Subsequently, we assess the +jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a +dataset of 3,120 jailbreak questions across eight prohibited scenarios. +Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, +finding that the prompts can consistently evade the restrictions in 40 use-case +scenarios. The study underscores the importance of prompt structures in +jailbreaking LLMs and discusses the challenges of robust jailbreak prompt +generation and prevention. +" +AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models,Xiaogeng Liu,http://arxiv.org/pdf/2310.04451v1.pdf,2023-10-03,"['cs.cl', 'cs.ai']",2310.04451v1.pdf," The aligned Large Language Models (LLMs) are powerful language understanding +and decision-making tools that are created through extensive alignment with +human feedback. However, these large models remain susceptible to jailbreak +attacks, where adversaries manipulate prompts to elicit malicious outputs that +should not be given by aligned LLMs. Investigating jailbreak prompts can lead +us to delve into the limitations of LLMs and further guide us to secure them. +Unfortunately, existing jailbreak techniques suffer from either (1) scalability +issues, where attacks heavily rely on manual crafting of prompts, or (2) +stealthiness problems, as attacks depend on token-based algorithms to generate +prompts that are often semantically meaningless, making them susceptible to +detection through basic perplexity testing. In light of these challenges, we +intend to answer this question: Can we develop an approach that can +automatically generate stealthy jailbreak prompts? In this paper, we introduce +AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can +automatically generate stealthy jailbreak prompts by the carefully designed +hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN +not only automates the process while preserving semantic meaningfulness, but +also demonstrates superior attack strength in cross-model transferability, and +cross-sample universality compared with the baseline. Moreover, we also compare +AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass +them effectively. +" +Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM,Bochuan Cao,http://arxiv.org/pdf/2309.14348v1.pdf,2023-09-18,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2309.14348v1.pdf," Recently, Large Language Models (LLMs) have made significant advancements and +are now widely used across various domains. Unfortunately, there has been a +rising concern that LLMs can be misused to generate harmful or malicious +content. Though a line of research has focused on aligning LLMs with human +values and preventing them from producing inappropriate content, such +alignments are usually vulnerable and can be bypassed by alignment-breaking +attacks via adversarially optimized or handcrafted jailbreaking prompts. In +this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against +potential alignment-breaking attacks. 
RA-LLM can be directly constructed upon +an existing aligned LLM with a robust alignment checking function, without +requiring any expensive retraining or fine-tuning process of the original LLM. +Furthermore, we also provide a theoretical analysis for RA-LLM to verify its +effectiveness in defending against alignment-breaking attacks. Through +real-world experiments on open-source large language models, we demonstrate +that RA-LLM can successfully defend against both state-of-the-art adversarial +prompts and popular handcrafted jailbreaking prompts by reducing their attack +success rates from nearly 100\% to around 10\% or less. +" +FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models,Dongyu Yao,http://arxiv.org/pdf/2309.05274v1.pdf,2023-09-11,['cs.cr'],2309.05274v1.pdf," Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit +meticulously crafted prompts to elicit content that violates service +guidelines, have captured the attention of research communities. While model +owners can defend against individual jailbreak prompts through safety training +strategies, this relatively passive approach struggles to handle the broader +category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an +automated fuzzing framework designed to proactively test and discover jailbreak +vulnerabilities in LLMs. We utilize templates to capture the structural +integrity of a prompt and isolate key features of a jailbreak class as +constraints. By integrating different base classes into powerful combo attacks +and varying the elements of constraints and prohibited questions, FuzzLLM +enables efficient testing with reduced manual effort. Extensive experiments +demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability +discovery across various LLMs. +" +Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation,Rusheb Shah,http://arxiv.org/pdf/2311.03348v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'cs.lg']",2311.03348v1.pdf," Despite efforts to align large language models to produce harmless responses, +they are still vulnerable to jailbreak prompts that elicit unrestricted +behaviour. In this work, we investigate persona modulation as a black-box +jailbreaking method to steer a target model to take on personalities that are +willing to comply with harmful instructions. Rather than manually crafting +prompts for each persona, we automate the generation of jailbreaks using a +language model assistant. We demonstrate a range of harmful completions made +possible by persona modulation, including detailed instructions for +synthesising methamphetamine, building a bomb, and laundering money. These +automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is +185 times larger than before modulation (0.23%). These prompts also transfer to +Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, +respectively. Our work reveals yet another vulnerability in commercial large +language models and highlights the need for more comprehensive safeguards. +" +Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models,Huachuan Qiu,http://arxiv.org/pdf/2307.08487v3.pdf,2023-07-17,['cs.cl'],2307.08487v3.pdf," Considerable research efforts have been devoted to ensuring that large +language models (LLMs) align with human values and generate safe text. 
However, +an excessive focus on sensitivity to certain topics can compromise the model's +robustness in following instructions, thereby impacting its overall performance +in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily +focused on evaluating the safety of the models without considering their +robustness. In this paper, we propose a benchmark that assesses both the safety +and robustness of LLMs, emphasizing the need for a balanced approach. To +comprehensively study text safety and output robustness, we introduce a latent +jailbreak prompt dataset, each involving malicious instruction embedding. +Specifically, we instruct the model to complete a regular task, such as +translation, with the text to be translated containing malicious instructions. +To further analyze safety and robustness, we design a hierarchical annotation +framework. We present a systematic analysis of the safety and robustness of +LLMs regarding the position of explicit normal instructions, word replacements +(verbs in explicit normal instructions, target groups in malicious +instructions, cue words for explicit normal instructions), and instruction +replacements (different explicit normal instructions). Our results demonstrate +that current LLMs not only prioritize certain instruction verbs but also +exhibit varying jailbreak rates for different instruction verbs in explicit +normal instructions. Code and data are available at +https://github.com/qiuhuachuan/latent-jailbreak. +" +MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots,Gelei Deng,http://arxiv.org/pdf/2307.08715v2.pdf,2023-07-16,['cs.cr'],2307.08715v2.pdf," Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) +services due to their exceptional proficiency in understanding and generating +human-like text. LLM chatbots, in particular, have seen widespread adoption, +transforming human-machine interactions. However, these LLM chatbots are +susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to +elicit inappropriate or sensitive responses, contravening service policies. +Despite existing attempts to mitigate such threats, our research reveals a +substantial gap in our understanding of these vulnerabilities, largely due to +the undisclosed defensive measures implemented by LLM service providers. + In this paper, we present Jailbreaker, a comprehensive framework that offers +an in-depth understanding of jailbreak attacks and countermeasures. Our work +makes a dual contribution. First, we propose an innovative methodology inspired +by time-based SQL injection techniques to reverse-engineer the defensive +strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. +This time-sensitive approach uncovers intricate details about these services' +defenses, facilitating a proof-of-concept attack that successfully bypasses +their mechanisms. Second, we introduce an automatic generation method for +jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of +automated jailbreak generation across various commercial LLM chatbots. Our +method achieves a promising average success rate of 21.58%, significantly +outperforming the effectiveness of existing techniques. We have responsibly +disclosed our findings to the concerned service providers, underscoring the +urgent need for more robust defenses. Jailbreaker thus marks a significant step +towards understanding and mitigating jailbreak threats in the realm of LLM +chatbots. 
+" +Using Large Language Models for Cybersecurity Capture-The-Flag Challenges and Certification Questions,Wesley Tann,http://arxiv.org/pdf/2308.10443v1.pdf,2023-08-21,"['cs.ai', 'cs.cl', 'cs.cy']",2308.10443v1.pdf," The assessment of cybersecurity Capture-The-Flag (CTF) exercises involves +participants finding text strings or ``flags'' by exploiting system +vulnerabilities. Large Language Models (LLMs) are natural-language models +trained on vast amounts of words to understand and generate text; they can +perform well on many CTF challenges. Such LLMs are freely available to +students. In the context of CTF exercises in the classroom, this raises +concerns about academic integrity. Educators must understand LLMs' capabilities +to modify their teaching to accommodate generative AI assistance. This research +investigates the effectiveness of LLMs, particularly in the realm of CTF +challenges and questions. Here we evaluate three popular LLMs, OpenAI ChatGPT, +Google Bard, and Microsoft Bing. First, we assess the LLMs' question-answering +performance on five Cisco certifications with varying difficulty levels. Next, +we qualitatively study the LLMs' abilities in solving CTF challenges to +understand their limitations. We report on the experience of using the LLMs for +seven test cases in all five types of CTF challenges. In addition, we +demonstrate how jailbreak prompts can bypass and break LLMs' ethical +safeguards. The paper concludes by discussing LLM's impact on CTF exercises and +its implications. +" +Baseline Defenses for Adversarial Attacks Against Aligned Language Models,Neel Jain,http://arxiv.org/pdf/2309.00614v2.pdf,2023-09-01,"['cs.lg', 'cs.cl', 'cs.cr']",2309.00614v2.pdf," As Large Language Models quickly become ubiquitous, it becomes critical to +understand their security vulnerabilities. Recent work shows that text +optimizers can produce jailbreaking prompts that bypass moderation and +alignment. Drawing from the rich body of work on adversarial machine learning, +we approach these attacks with three questions: What threat models are +practically useful in this domain? How do baseline defense techniques perform +in this new domain? How does LLM security differ from computer vision? + We evaluate several baseline defense strategies against leading adversarial +attacks on LLMs, discussing the various settings in which each is feasible and +effective. Particularly, we look at three types of defenses: detection +(perplexity based), input preprocessing (paraphrase and retokenization), and +adversarial training. We discuss white-box and gray-box settings and discuss +the robustness-performance trade-off for each of the defenses considered. We +find that the weakness of existing discrete optimizers for text, combined with +the relatively high costs of optimization, makes standard adaptive attacks more +challenging for LLMs. Future research will be needed to uncover whether more +powerful optimizers can be developed, or whether the strength of filtering and +preprocessing defenses is greater in the LLMs domain than it has been in +computer vision. +" +GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts,Jiahao Yu,http://arxiv.org/pdf/2309.10253v2.pdf,2023-09-19,['cs.ai'],2309.10253v2.pdf," Large language models (LLMs) have recently experienced tremendous popularity +and are widely used from casual conversations to AI-driven programming. 
+However, despite their considerable success, LLMs are not entirely reliable and +can give detailed guidance on how to conduct harmful or illegal activities. +While safety measures can reduce the risk of such outputs, adversarial +jailbreak attacks can still exploit LLMs to produce harmful content. These +jailbreak templates are typically manually crafted, making large-scale testing +challenging. + In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing +framework inspired by the AFL fuzzing framework. Instead of manual engineering, +GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. +At its core, GPTFuzz starts with human-written templates as initial seeds, then +mutates them to produce new templates. We detail three key components of +GPTFuzz: a seed selection strategy for balancing efficiency and variability, +mutate operators for creating semantically equivalent or similar sentences, and +a judgment model to assess the success of a jailbreak attack. + We evaluate GPTFuzz against various commercial and open-source LLMs, +including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our +results indicate that GPTFuzz consistently produces jailbreak templates with a +high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz +achieves over 90% attack success rates against ChatGPT and Llama-2 models, even +with suboptimal initial seed templates. We anticipate that GPTFuzz will be +instrumental for researchers and practitioners in examining LLM robustness and +will encourage further exploration into enhancing LLM safety. +" +Probing LLMs for hate speech detection: strengths and vulnerabilities,Sarthak Roy,http://arxiv.org/pdf/2310.12860v2.pdf,2023-10-19,"['cs.cl', 'cs.cy']",2310.12860v2.pdf," Recently efforts have been made by social media platforms as well as +researchers to detect hateful or toxic language using large language models. +However, none of these works aim to use explanation, additional context and +victim community information in the detection process. We utilise different +prompt variation, input information and evaluate large language models in zero +shot setting (without adding any in-context examples). We select three large +language models (GPT-3.5, text-davinci and Flan-T5) and three datasets - +HateXplain, implicit hate and ToxicSpans. We find that on average including the +target information in the pipeline improves the model performance substantially +(~20-30%) over the baseline across the datasets. There is also a considerable +effect of adding the rationales/explanations into the pipeline (~10-20%) over +the baseline across the datasets. In addition, we further provide a typology of +the error cases where these large language models fail to (i) classify and (ii) +explain the reason for the decisions they take. Such vulnerable points +automatically constitute 'jailbreak' prompts for these models and industry +scale safeguard techniques need to be developed to make the models robust +against such prompts. +" +Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction,Martin Josifoski,http://arxiv.org/pdf/2303.04132v2.pdf,2023-03-07,"['cs.cl', 'cs.ai', 'cs.lg']",2303.04132v2.pdf," Large language models (LLMs) have great potential for synthetic data +generation. 
This work shows that useful data can be synthetically generated +even for tasks that cannot be solved directly by LLMs: for problems with +structured outputs, it is possible to prompt an LLM to perform the task in the +reverse direction, by generating plausible input text for a target output +structure. Leveraging this asymmetry in task difficulty makes it possible to +produce large-scale, high-quality data for complex tasks. We demonstrate the +effectiveness of this approach on closed information extraction, where +collecting ground-truth data is challenging, and no satisfactory dataset exists +to date. We synthetically generate a dataset of 1.8M data points, establish its +superior quality compared to existing datasets in a human evaluation, and use +it to finetune small models (220M and 770M parameters), termed SynthIE, that +outperform the prior state of the art (with equal model size) by a substantial +margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data, +and models are available at https://github.com/epfl-dlab/SynthIE. +" +Small Language Models Improve Giants by Rewriting Their Outputs,Giorgos Vernikos,http://arxiv.org/pdf/2305.13514v1.pdf,2023-05-22,"['cs.cl', 'cs.lg']",2305.13514v1.pdf," Large language models (LLMs) have demonstrated impressive few-shot learning +capabilities, but they often underperform compared to fine-tuned models on +challenging tasks. Furthermore, their large size and restricted access only +through APIs make task-specific fine-tuning impractical. Moreover, LLMs are +sensitive to different aspects of prompts (e.g., the selection and order of +demonstrations) and can thus require time-consuming prompt engineering. In this +light, we propose a method to correct LLM outputs without relying on their +weights. First, we generate a pool of candidates by few-shot prompting an LLM. +Second, we refine the LLM-generated outputs using a smaller model, the +LM-corrector (LMCor), which is trained to rank, combine and rewrite the +candidates to produce the final target output. Our experiments demonstrate that +even a small LMCor model (250M) substantially improves the few-shot performance +of LLMs (62B) across diverse tasks. Moreover, we illustrate that the LMCor +exhibits robustness against different prompts, thereby minimizing the need for +extensive prompt engineering. Finally, we showcase that the LMCor can be +seamlessly integrated with different LLMs at inference time, serving as a +plug-and-play module to improve their performance. +" +Aligning Language Models to User Opinions,EunJeong Hwang,http://arxiv.org/pdf/2305.14929v1.pdf,2023-05-24,['cs.cl'],2305.14929v1.pdf," An important aspect of developing LLMs that interact with humans is to align +models' behavior to their users. It is possible to prompt an LLM into behaving +as a certain persona, especially a user group or ideological persona the model +captured during its pertaining stage. But, how to best align an LLM with a +specific user and not a demographic or ideological group remains an open +question. Mining public opinion surveys (by Pew Research), we find that the +opinions of a user and their demographics and ideologies are not mutual +predictors. We use this insight to align LLMs by modeling both user opinions as +well as user demographics and ideology, achieving up to 7 points accuracy gains +in predicting public opinions from survey questions across a broad set of +topics. 
In addition to the typical approach of prompting LLMs with demographics +and ideology, we discover that utilizing the most relevant past opinions from +individual users enables the model to predict user opinions more accurately. +" +Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models,Myra Cheng,http://arxiv.org/pdf/2305.18189v1.pdf,2023-05-29,"['cs.cl', 'cs.ai', 'cs.cy']",2305.18189v1.pdf," To recognize and mitigate harms from large language models (LLMs), we need to +understand the prevalence and nuances of stereotypes in LLM outputs. Toward +this end, we present Marked Personas, a prompt-based method to measure +stereotypes in LLMs for intersectional demographic groups without any lexicon +or data labeling. Grounded in the sociolinguistic concept of markedness (which +characterizes explicitly linguistically marked categories versus unmarked +defaults), our proposed method is twofold: 1) prompting an LLM to generate +personas, i.e., natural language descriptions, of the target demographic group +alongside personas of unmarked, default groups; 2) identifying the words that +significantly distinguish personas of the target group from corresponding +unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4 +contain higher rates of racial stereotypes than human-written portrayals using +the same prompts. The words distinguishing personas of marked (non-white, +non-male) groups reflect patterns of othering and exoticizing these +demographics. An intersectional lens further reveals tropes that dominate +portrayals of marginalized groups, such as tropicalism and the +hypersexualization of minoritized women. These representational harms have +concerning implications for downstream applications like story generation. +" +Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models,Levon Haroutunian,http://arxiv.org/pdf/2309.12294v1.pdf,2023-09-21,['cs.cl'],2309.12294v1.pdf," Large language models (LLMs) have demonstrated impressive capabilities in +natural language generation. However, their output quality can be inconsistent, +posing challenges for generating natural language from logical forms (LFs). +This task requires the generated outputs to embody the exact semantics of LFs, +without missing any LF semantics or creating any hallucinations. In this work, +we tackle this issue by proposing a novel generate-and-rerank approach. Our +approach involves initially generating a set of candidate outputs by prompting +an LLM and subsequently reranking them using a task-specific reranker model. In +addition, we curate a manually collected dataset to evaluate the alignment +between different ranking metrics and human judgements. The chosen ranking +metrics are utilized to enhance the training and evaluation of the reranker +model. By conducting extensive experiments on three diverse datasets, we +demonstrate that the candidates selected by our reranker outperform those +selected by baseline methods in terms of semantic consistency and fluency, as +measured by three comprehensive metrics. Our findings provide strong evidence +for the effectiveness of our approach in improving the quality of generated +outputs. 
+" +Query Rewriting for Retrieval-Augmented Large Language Models,Xinbei Ma,http://arxiv.org/pdf/2305.14283v3.pdf,2023-05-23,['cs.cl'],2305.14283v3.pdf," Large Language Models (LLMs) play powerful, black-box readers in the +retrieve-then-read pipeline, making remarkable progress in knowledge-intensive +tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of +the previous retrieve-then-read for the retrieval-augmented LLMs from the +perspective of the query rewriting. Unlike prior studies focusing on adapting +either the retriever or the reader, our approach pays attention to the +adaptation of the search query itself, for there is inevitably a gap between +the input text and the needed knowledge in retrieval. We first prompt an LLM to +generate the query, then use a web search engine to retrieve contexts. +Furthermore, to better align the query to the frozen modules, we propose a +trainable scheme for our pipeline. A small language model is adopted as a +trainable rewriter to cater to the black-box LLM reader. The rewriter is +trained using the feedback of the LLM reader by reinforcement learning. +Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice +QA. Experiments results show consistent performance improvement, indicating +that our framework is proven effective and scalable, and brings a new framework +for retrieval-augmented LLM. +" +ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers,Kexun Zhang,http://arxiv.org/pdf/2305.14591v2.pdf,2023-05-24,"['cs.cl', 'cs.se']",2305.14591v2.pdf," Large language models (LLMs) excel at implementing code from functionality +descriptions but struggle with algorithmic problems that require not only +implementation but also identification of the suitable algorithm. Moreover, +LLM-generated programs lack guaranteed correctness and require human +verification. To address these challenges, we propose ALGO, a framework that +synthesizes Algorithmic programs with LLM-Generated Oracles to guide the +generation and verify their correctness. ALGO first generates a reference +oracle by prompting an LLM to exhaustively enumerate all the combinations of +relevant variables. This oracle is then utilized to guide an arbitrary search +strategy in exploring the algorithm space and to verify the synthesized +algorithms. Our study shows that the LLM-generated oracles are correct for 88% +of the cases. With the oracles as verifiers, ALGO can be integrated with any +existing code generation model in a model-agnostic manner to enhance its +performance. Experiments show that when equipped with ALGO, we achieve an 8x +better one-submission pass rate over the Codex model and a 2.6x better +one-submission pass rate over CodeT, the current state-of-the-art model on +CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code +Interpreter on unseen problems. The problem set we used for testing, the +prompts we used, the verifier and solution programs, and the test cases +generated by ALGO are available at https://github.com/zkx06111/ALGO. +" +PromptNER: Prompting For Named Entity Recognition,Dhananjay Ashok,http://arxiv.org/pdf/2305.15444v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.15444v2.pdf," In a surprising turn, Large Language Models (LLMs) together with a growing +arsenal of prompt-based heuristics now offer powerful off-the-shelf approaches +providing few-shot solutions to myriad classic NLP problems. 
However, despite +promising early results, these LLM-based few-shot methods remain far from the +state of the art in Named Entity Recognition (NER), where prevailing methods +include learning representations via end-to-end structural understanding and +fine-tuning on standard labeled corpora. In this paper, we introduce PromptNER, +a new state-of-the-art algorithm for few-Shot and cross-domain NER. To adapt to +any new NER task PromptNER requires a set of entity definitions in addition to +the standard few-shot examples. Given a sentence, PromptNER prompts an LLM to +produce a list of potential entities along with corresponding explanations +justifying their compatibility with the provided entity type definitions. +Remarkably, PromptNER achieves state-of-the-art performance on few-shot NER, +achieving a 4% (absolute) improvement in F1 score on the ConLL dataset, a 9% +(absolute) improvement on the GENIA dataset, and a 4% (absolute) improvement on +the FewNERD dataset. PromptNER also moves the state of the art on Cross Domain +NER, outperforming prior methods (including those not limited to the few-shot +setting), setting a new mark on 3/5 CrossNER target domains, with an average F1 +gain of 3%, despite using less than 2% of the available data. +" +Dcc --help: Generating Context-Aware Compiler Error Explanations with Large Language Models,Andrew Taylor,http://arxiv.org/pdf/2308.11873v2.pdf,2023-08-23,"['cs.se', 'cs.lg', 'cs.pl']",2308.11873v2.pdf," In the challenging field of introductory programming, high enrollments and +failure rates drive us to explore tools and systems to enhance student +outcomes, especially automated tools that scale to large cohorts. This paper +presents and evaluates the dcc --help tool, an integration of a Large Language +Model (LLM) into the Debugging C Compiler (DCC) to generate unique, +novice-focused explanations tailored to each error. dcc --help prompts an LLM +with contextual information of compile- and run-time error occurrences, +including the source code, error location and standard compiler error message. +The LLM is instructed to generate novice-focused, actionable error explanations +and guidance, designed to help students understand and resolve problems without +providing solutions. dcc --help was deployed to our CS1 and CS2 courses, with +2,565 students using the tool over 64,000 times in ten weeks. We analysed a +subset of these error/explanation pairs to evaluate their properties, including +conceptual correctness, relevancy, and overall quality. We found that the +LLM-generated explanations were conceptually accurate in 90% of compile-time +and 75% of run-time cases, but often disregarded the instruction not to provide +solutions in code. Our findings, observations and reflections following +deployment indicate that dcc-help provides novel opportunities for scaffolding +students' introduction to programming. +" +BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing,Chen Wang,http://arxiv.org/pdf/2309.00916v1.pdf,2023-09-02,"['cs.cl', 'cs.sd', 'eess.as']",2309.00916v1.pdf," The emergence of large language models (LLMs) has sparked significant +interest in extending their remarkable language capabilities to speech. +However, modality alignment between speech and text still remains an open +problem. Current solutions can be categorized into two strategies. 
One is a +cascaded approach where outputs (tokens or states) of a separately trained +speech recognition system are used as inputs for LLMs, which limits their +potential in modeling alignment between speech and text. The other is an +end-to-end approach that relies on speech instruction data, which is very +difficult to collect in large quantities. In this paper, we address these +issues and propose the BLSP approach that Bootstraps Language-Speech +Pre-training via behavior alignment of continuation writing. We achieve this by +learning a lightweight modality adapter between a frozen speech encoder and an +LLM, ensuring that the LLM exhibits the same generation behavior regardless of +the modality of input: a speech segment or its transcript. The training process +can be divided into two steps. The first step prompts an LLM to generate texts +with speech transcripts as prefixes, obtaining text continuations. In the +second step, these continuations are used as supervised signals to train the +modality adapter in an end-to-end manner. We demonstrate that this +straightforward process can extend the capabilities of LLMs to speech, enabling +speech recognition, speech translation, spoken language understanding, and +speech conversation, even in zero-shot cross-lingual scenarios. +" +Balanced and Explainable Social Media Analysis for Public Health with Large Language Models,Yan Jiang,http://arxiv.org/pdf/2309.05951v1.pdf,2023-09-12,['cs.cl'],2309.05951v1.pdf," As social media becomes increasingly popular, more and more public health +activities emerge, which is worth noting for pandemic monitoring and government +decision-making. Current techniques for public health analysis involve popular +models such as BERT and large language models (LLMs). Although recent progress +in LLMs has shown a strong ability to comprehend knowledge by being fine-tuned +on specific domain datasets, the costs of training an in-domain LLM for every +specific public health task are especially expensive. Furthermore, such kinds +of in-domain datasets from social media are generally highly imbalanced, which +will hinder the efficiency of LLMs tuning. To tackle these challenges, the data +imbalance issue can be overcome by sophisticated data augmentation methods for +social media datasets. In addition, the ability of the LLMs can be effectively +utilised by prompting the model properly. In light of the above discussion, in +this paper, a novel ALEX framework is proposed for social media analysis on +public health. Specifically, an augmentation pipeline is developed to resolve +the data imbalance issue. Furthermore, an LLMs explanation mechanism is +proposed by prompting an LLM with the predicted results from BERT models. +Extensive experiments conducted on three tasks at the Social Media Mining for +Health 2023 (SMM4H) competition with the first ranking in two tasks demonstrate +the superior performance of the proposed ALEX method. Our code has been +released in https://github.com/YanJiangJerry/ALEX. +" +HowToCaption: Prompting LLMs to Transform Video Annotations at Scale,Nina Shvetsova,http://arxiv.org/pdf/2310.04900v1.pdf,2023-10-07,['cs.cv'],2310.04900v1.pdf," Instructional videos are an excellent source for learning multimodal +representations by leveraging video-subtitle pairs extracted with automatic +speech recognition systems (ASR) from the audio signal in the videos. 
However, +in contrast to human-annotated captions, both speech and subtitles naturally +differ from the visual content of the videos and thus provide only noisy +supervision for multimodal learning. As a result, large-scale annotation-free +web video training data remains sub-optimal for training text-video models. In +this work, we propose to leverage the capability of large language models +(LLMs) to obtain fine-grained video descriptions aligned with videos. +Specifically, we prompt an LLM to create plausible video descriptions based on +ASR narrations of the video for a large-scale instructional video dataset. To +this end, we introduce a prompting method that is able to take into account a +longer text of subtitles, allowing us to capture context beyond a single +sentence. To align the captions to the video temporally, we prompt the LLM to +generate timestamps for each produced caption based on the subtitles. In this +way, we obtain human-style video captions at scale without human supervision. +We apply our method to the subtitles of the HowTo100M dataset, creating a new +large-scale dataset, HowToCaption. Our evaluation shows that the resulting +captions not only significantly improve the performance over many different +benchmark datasets for text-video retrieval but also lead to a disentangling of +textual narration from the audio, boosting performance in text-video-audio +tasks. +" +ClarifyGPT: Empowering LLM-based Code Generation with Intention Clarification,Fangwen Mu,http://arxiv.org/pdf/2310.10996v1.pdf,2023-10-17,['cs.se'],2310.10996v1.pdf," We introduce a novel framework named ClarifyGPT, which aims to enhance code +generation by empowering LLMs with the ability to identify ambiguous +requirements and ask targeted clarifying questions. In particular, ClarifyGPT +first detects whether a given requirement is ambiguous by performing a code +consistency check. If it is ambiguous, ClarifyGPT prompts an LLM to generate +targeted clarifying questions. After receiving question responses, ClarifyGPT +refines the ambiguous requirement and inputs it into the same LLM to generate a +final code solution. To evaluate our ClarifyGPT, we first conduct a human +evaluation involving ten participants who use ClarifyGPT for code generation on +two publicly available benchmarks: MBPP-sanitized and MBPP-ET. The results show +that ClarifyGPT elevates the performance (Pass@1) of GPT-4 from 70.96% to +80.80% on MBPP-sanitized. Furthermore, to perform large-scale automated +evaluations of ClarifyGPT across different LLMs and benchmarks without +requiring user participation, we introduce a high-fidelity simulation method to +simulate user responses. The automated evaluation results also demonstrate that +ClarifyGPT can significantly enhance code generation performance compared to +the baselines. In particular, ClarifyGPT improves the average performance of +GPT-4 and ChatGPT across four benchmarks from 68.02% to 75.75% and from 58.55% +to 67.22%, respectively. We believe that ClarifyGPT can effectively facilitate +the practical application of LLMs in real-world development environments. +" +Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning,Xiaoxin He,http://arxiv.org/pdf/2305.19523v3.pdf,2023-05-31,['cs.lg'],2305.19523v3.pdf," Representation learning on text-attributed graphs (TAGs) has become a +critical research problem in recent years. 
A typical example of a TAG is a +paper citation graph, where the text of each paper serves as node attributes. +Initial graph neural network (GNN) pipelines handled these text attributes by +transforming them into shallow or hand-crafted features, such as skip-gram or +bag-of-words features. Recent efforts have focused on enhancing these pipelines +with language models (LMs), which typically demand intricate designs and +substantial computational resources. With the advent of powerful large language +models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and +to utilize general knowledge, there is a growing need for techniques which +combine the textual modelling abilities of LLMs with the structural learning +capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to +capture textual information as features, which can be used to boost GNN +performance on downstream tasks. A key innovation is our use of explanations as +features: we prompt an LLM to perform zero-shot classification, request textual +explanations for its decision-making process, and design an LLM-to-LM +interpreter to translate these explanations into informative features that +enhance downstream GNNs. Our experiments demonstrate that our method achieves +state-of-the-art results on well-established TAG datasets, including Cora, +PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023. +Furthermore, our method significantly speeds up training, achieving a 2.88 +times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe +the versatility of the proposed method extends beyond TAGs and holds the +potential to enhance other tasks involving graph-text data~\footnote{Our codes +and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}. +" +LEGO-Prover: Neural Theorem Proving with Growing Libraries,Haiming Wang,http://arxiv.org/pdf/2310.00656v3.pdf,2023-10-01,['cs.ai'],2310.00656v3.pdf," Despite the success of large language models (LLMs), the task of theorem +proving still remains one of the hardest reasoning tasks that is far from being +fully solved. Prior methods using language models have demonstrated promising +results, but they still struggle to prove even middle school level theorems. +One common limitation of these methods is that they assume a fixed theorem +library during the whole theorem proving process. However, as we all know, +creating new useful theorems or even new theories is not only helpful but +crucial and necessary for advancing mathematics and proving harder and deeper +results. In this work, we present LEGO-Prover, which employs a growing skill +library containing verified lemmas as skills to augment the capability of LLMs +used in theorem proving. By constructing the proof modularly, LEGO-Prover +enables LLMs to utilize existing skills retrieved from the library and to +create new skills during the proving process. These skills are further evolved +(by prompting an LLM) to enrich the library on another scale. Modular and +reusable skills are constantly added to the library to enable tackling +increasingly intricate mathematical problems. Moreover, the learned library +further bridges the gap between human proofs and formal proofs by making it +easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass +rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%). 
+During the proving process, LEGO-Prover also manages to generate over 20,000 +skills (theorems/lemmas) and adds them to the growing library. Our ablation +study indicates that these newly added skills are indeed helpful for proving +theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We +also release our code and all the generated skills. +" +BooookScore: A systematic exploration of book-length summarization in the era of LLMs,Yapei Chang,http://arxiv.org/pdf/2310.00785v2.pdf,2023-10-01,"['cs.cl', 'cs.ai', 'cs.lg']",2310.00785v2.pdf," Summarizing book-length documents (>100K tokens) that exceed the context +window size of large language models (LLMs) requires first breaking the input +document into smaller chunks and then prompting an LLM to merge, update, and +compress chunk-level summaries. Despite the complexity and importance of this +task, it has yet to be meaningfully studied due to the challenges of +evaluation: existing book-length summarization datasets (e.g., BookSum) are in +the pretraining data of most public LLMs, and existing evaluation methods +struggle to capture errors made by modern LLM summarizers. In this paper, we +present the first study of the coherence of LLM-based book-length summarizers +implemented via two prompting workflows: (1) hierarchically merging chunk-level +summaries, and (2) incrementally updating a running summary. We obtain 1193 +fine-grained human annotations on GPT-4 generated summaries of 100 +recently-published books and identify eight common types of coherence errors +made by LLMs. Because human evaluation is expensive and time-consuming, we +develop an automatic metric, BooookScore, that measures the proportion of +sentences in a summary that do not contain any of the identified error types. +BooookScore has high agreement with human annotations and allows us to +systematically evaluate the impact of many other critical parameters (e.g., +chunk size, base LLM) while saving $15K and 500 hours in human evaluation +costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce +summaries with higher BooookScore than the oft-repetitive ones generated by +LLaMA 2. Incremental updating yields lower BooookScore but higher level of +detail than hierarchical merging, a trade-off sometimes preferred by human +annotators. We release code and annotations after blind review to spur more +principled research on book-length summarization. +" +The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning,Xi Ye,http://arxiv.org/pdf/2205.03401v2.pdf,2022-05-06,['cs.cl'],2205.03401v2.pdf," Does prompting a large language model (LLM) like GPT-3 with explanations +improve in-context learning? We study this question on two NLP tasks that +involve reasoning over text, namely question answering and natural language +inference. We test the performance of four LLMs on three textual reasoning +datasets using prompts that include explanations in multiple different styles. +For these tasks, we find that including explanations in the prompts for OPT, +GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to +moderate accuracy improvements over standard few-show learning. However, +text-davinci-002 is able to benefit more substantially. + We further show that explanations generated by the LLMs may not entail the +models' predictions nor be factually grounded in the input, even on simple +tasks with extractive explanations. 
However, these flawed explanations can +still be useful as a way to verify LLMs' predictions post-hoc. Through analysis +in our three settings, we show that explanations judged by humans to be +good--logically consistent with the input and the prediction--more likely +cooccur with accurate predictions. Following these observations, we train +calibrators using automatically extracted scores that assess the reliability of +explanations, allowing us to improve performance post-hoc across all of our +datasets. +" +Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large Language Models,Albert Xu,http://arxiv.org/pdf/2211.15718v2.pdf,2022-11-28,['cs.cl'],2211.15718v2.pdf," In many task settings, text classification models are likely to encounter +examples from novel classes on which they cannot predict correctly. Selective +prediction, in which models abstain on low-confidence examples, provides a +possible solution, but existing models are often overly confident on unseen +classes. To remedy this overconfidence, we introduce Contrastive +Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD +examples representative of novel classes, then trains to decrease confidence on +them. First, we generate OOD examples by prompting a large language model +twice: we prompt it to enumerate relevant novel classes, then generate examples +from each novel class matching the task format. Second, we train a classifier +with a novel contrastive objective that encourages lower confidence on +generated OOD examples than training examples. When trained with CoNAL, +classifiers improve in their ability to detect and abstain on novel class +examples over prior methods by an average of 2.3% in terms of accuracy under +the accuracy-coverage curve (AUAC) and 5.5% AUROC across 4 NLP datasets, with +no cost to in-distribution accuracy. +" +Extensible Prompts for Language Models,Tao Ge,http://arxiv.org/pdf/2212.00616v1.pdf,2022-12-01,['cs.cl'],2212.00616v1.pdf," We propose eXtensible Prompt (X-Prompt) for prompting a large language model +(LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL +but also an extensible vocabulary of imaginary words that are introduced to +help represent what NL words hardly describe, allowing a prompt to be more +descriptive. Like NL prompts, X-Prompt is out-of-distribution (OOD) robust, for +which we propose context-guided learning with prompt augmentation to learn its +imaginary words for general usability, enabling them to use in different prompt +contexts for fine-grain specifications. The promising results of X-Prompt +demonstrate its potential of approaching advanced interaction between humans +and LLMs to bridge their communication gap. +" +Reward Design with Language Models,Minae Kwon,http://arxiv.org/pdf/2303.00001v1.pdf,2023-02-27,"['cs.lg', 'cs.ai', 'cs.cl']",2303.00001v1.pdf," Reward design in reinforcement learning (RL) is challenging since specifying +human notions of desired behavior may be difficult via reward functions or +require many expert demonstrations. Can we instead cheaply design rewards using +a natural language interface? This paper explores how to simplify reward design +by prompting a large language model (LLM) such as GPT-3 as a proxy reward +function, where the user provides a textual prompt containing a few examples +(few-shot) or a description (zero-shot) of the desired behavior. Our approach +leverages this proxy reward function in an RL framework. 
Specifically, users +specify a prompt once at the beginning of training. During training, the LLM +evaluates an RL agent's behavior against the desired behavior described by the +prompt and outputs a corresponding reward signal. The RL agent then uses this +reward to update its behavior. We evaluate whether our approach can train +agents aligned with user objectives in the Ultimatum Game, matrix games, and +the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents +trained with our framework are well-aligned with the user's objectives and +outperform RL agents trained with reward functions learned via supervised +learning +" +Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy Planning,Xiao Yu,http://arxiv.org/pdf/2305.13660v2.pdf,2023-05-23,['cs.cl'],2305.13660v2.pdf," Planning for goal-oriented dialogue often requires simulating future dialogue +interactions and estimating task progress. Many approaches thus consider +training neural networks to perform look-ahead search algorithms such as A* +search and Monte Carlo Tree Search (MCTS). However, this training often +requires abundant annotated data, which creates challenges when faced with +noisy annotations or low-resource settings. We introduce GDP-Zero, an approach +using Open-Loop MCTS to perform goal-oriented dialogue policy planning without +any model training. GDP-Zero prompts a large language model to act as a policy +prior, value function, user simulator, and system model during the tree search. +We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that +its responses are preferred over ChatGPT up to 59.32% of the time, and are +rated more persuasive than ChatGPT during interactive evaluations. +" +IDAS: Intent Discovery with Abstractive Summarization,Maarten De Raedt,http://arxiv.org/pdf/2305.19783v1.pdf,2023-05-31,['cs.cl'],2305.19783v1.pdf," Intent discovery is the task of inferring latent intents from a set of +unlabeled utterances, and is a useful step towards the efficient creation of +new conversational agents. We show that recent competitive methods in intent +discovery can be outperformed by clustering utterances based on abstractive +summaries, i.e., ""labels"", that retain the core elements while removing +non-essential information. We contribute the IDAS approach, which collects a +set of descriptive utterance labels by prompting a Large Language Model, +starting from a well-chosen seed set of prototypical utterances, to bootstrap +an In-Context Learning procedure to generate labels for non-prototypical +utterances. The utterances and their resulting noisy labels are then encoded by +a frozen pre-trained encoder, and subsequently clustered to recover the latent +intents. For the unsupervised task (without any intent labels) IDAS outperforms +the state-of-the-art by up to +7.42% in standard cluster metrics for the +Banking, StackOverflow, and Transport datasets. For the semi-supervised task +(with labels for a subset of intents) IDAS surpasses 2 recent methods on the +CLINC benchmark without even using labeled data. +" +Prompting a Large Language Model to Generate Diverse Motivational Messages: A Comparison with Human-Written Messages,Samuel Rhys Cox,http://arxiv.org/pdf/2308.13479v1.pdf,2023-08-25,"['cs.cl', 'cs.hc']",2308.13479v1.pdf," Large language models (LLMs) are increasingly capable and prevalent, and can +be used to produce creative content. 
The quality of content is influenced by +the prompt used, with more specific prompts that incorporate examples generally +producing better results. On from this, it could be seen that using +instructions written for crowdsourcing tasks (that are specific and include +examples to guide workers) could prove effective LLM prompts. To explore this, +we used a previous crowdsourcing pipeline that gave examples to people to help +them generate a collectively diverse corpus of motivational messages. We then +used this same pipeline to generate messages using GPT-4, and compared the +collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the +pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts +using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages +than the two baseline prompts. We also discuss implications from messages +generated by both human writers and LLMs. +" +Social Simulacra: Creating Populated Prototypes for Social Computing Systems,Joon Sung Park,http://arxiv.org/pdf/2208.04024v1.pdf,2022-08-08,['cs.hc'],2208.04024v1.pdf," Social computing prototypes probe the social behaviors that may arise in an +envisioned system design. This prototyping practice is currently limited to +recruiting small groups of people. Unfortunately, many challenges do not arise +until a system is populated at a larger scale. Can a designer understand how a +social system might behave when populated, and make adjustments to the design +before the system falls prey to such challenges? We introduce social simulacra, +a prototyping technique that generates a breadth of realistic social +interactions that may emerge when a social computing system is populated. +Social simulacra take as input the designer's description of a community's +design -- goal, rules, and member personas -- and produce as output an instance +of that design with simulated behavior, including posts, replies, and +anti-social behaviors. We demonstrate that social simulacra shift the behaviors +that they generate appropriately in response to design changes, and that they +enable exploration of ""what if?"" scenarios where community members or +moderators intervene. To power social simulacra, we contribute techniques for +prompting a large language model to generate thousands of distinct community +members and their social interactions with each other; these techniques are +enabled by the observation that large language models' training data already +includes a wide variety of positive and negative behavior on social media +platforms. In evaluations, we show that participants are often unable to +distinguish social simulacra from actual community behavior and that social +computing designers successfully refine their social computing designs when +using social simulacra. +" +Generate rather than Retrieve: Large Language Models are Strong Context Generators,Wenhao Yu,http://arxiv.org/pdf/2209.10063v3.pdf,2022-09-21,"['cs.cl', 'cs.ai']",2209.10063v3.pdf," Knowledge-intensive tasks, such as open-domain question answering (QA), +require access to a large amount of world or domain knowledge. A common +approach for knowledge-intensive tasks is to employ a retrieve-then-read +pipeline that first retrieves a handful of relevant contextual documents from +an external corpus such as Wikipedia and then predicts an answer conditioned on +the retrieved documents. 
In this paper, we present a novel perspective for
solving knowledge-intensive tasks by replacing document retrievers with large
language model generators. We call our method generate-then-read (GenRead),
which first prompts a large language model to generate contextual documents
based on a given question, and then reads the generated documents to produce
the final answer. Furthermore, we propose a novel clustering-based prompting
method that selects distinct prompts, resulting in the generated documents that
cover different perspectives, leading to better recall over acceptable answers.
We conduct extensive experiments on three different knowledge-intensive tasks,
including open-domain QA, fact checking, and dialogue systems. Notably, GenRead
achieves 71.6 and 54.4 exact match scores on TriviaQA and WebQ, significantly
outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0
and +3.9, without retrieving any documents from any external knowledge source.
Lastly, we demonstrate the model performance can be further improved by
combining retrieval and generation. Our code and generated documents can be
found at https://github.com/wyu97/GenRead.
"
q2d: Turning Questions into Dialogs to Teach Models How to Search,Yonatan Bitton,http://arxiv.org/pdf/2304.14318v1.pdf,2023-04-27,['cs.cl'],2304.14318v1.pdf," One of the exciting capabilities of recent language models for dialog is
their ability to independently search for relevant information to ground a
given dialog response. However, obtaining training data to teach models how to
issue search queries is time and resource consuming. In this work, we propose
q2d: an automatic data generation pipeline that generates information-seeking
dialogs from questions. We prompt a large language model (PaLM) to create
conversational versions of question answering datasets, and use it to improve
query generation models that communicate with external search APIs to ground
dialog responses. Unlike previous approaches which relied on human-written
dialogs with search queries, our method allows us to automatically generate
query-based grounded dialogs with better control and scale. Our experiments
demonstrate that: (1) For query generation on the QReCC dataset, models trained
on our synthetically-generated data achieve 90%--97% of the performance of
models trained on the human-generated data; (2) We can successfully generate
data for training dialog models in new domains without any existing dialog data
as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets. (3) We
perform a thorough analysis of the generated dialogs showing that humans find
them of high quality and struggle to distinguish them from human-written
dialogs.
"
Multi-Modal Classifiers for Open-Vocabulary Object Detection,Prannay Kaul,http://arxiv.org/pdf/2306.05493v1.pdf,2023-06-08,"['cs.cv', 'cs.ai', 'cs.lg', 'i.4.6; i.4.8; i.4.9; i.2.10']",2306.05493v1.pdf," The goal of this paper is open-vocabulary object detection (OVOD)
$\unicode{x2013}$ building a model that can detect objects beyond the set of
categories seen at training, thus enabling the user to specify categories of
interest at inference without the need for model retraining. We adopt a
standard two-stage object detector architecture, and explore three ways for
specifying novel categories: via language descriptions, via image exemplars, or
via a combination of the two. 
We make three contributions: first, we prompt a +large language model (LLM) to generate informative language descriptions for +object classes, and construct powerful text-based classifiers; second, we +employ a visual aggregator on image exemplars that can ingest any number of +images as input, forming vision-based classifiers; and third, we provide a +simple method to fuse information from language descriptions and image +exemplars, yielding a multi-modal classifier. When evaluating on the +challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our +text-based classifiers outperform all previous OVOD works; (ii) our +vision-based classifiers perform as well as text-based classifiers in prior +work; (iii) using multi-modal classifiers perform better than either modality +alone; and finally, (iv) our text-based and multi-modal classifiers yield +better performance than a fully-supervised detector. +" +InstructEval: Systematic Evaluation of Instruction Selection Methods,Anirudh Ajith,http://arxiv.org/pdf/2307.00259v2.pdf,2023-07-01,"['cs.cl', 'cs.ai']",2307.00259v2.pdf," In-context learning (ICL) performs tasks by prompting a large language model +(LLM) using an instruction and a small set of annotated examples called +demonstrations. Recent work has shown that precise details of the inputs used +in the ICL prompt significantly impact performance, which has incentivized +instruction selection algorithms. The effect of instruction-choice however is +severely underexplored, with existing analyses restricted to shallow subsets of +models and tasks, limiting the generalizability of their insights. We develop +InstructEval, an ICL evaluation suite to conduct a thorough assessment of these +techniques. The suite includes 13 open-sourced LLMs of varying scales from four +model families, and covers nine tasks across three categories. Using the suite, +we evaluate the relative performance of seven popular instruction selection +methods over five metrics relevant to ICL. Our experiments reveal that using +curated manually-written instructions or simple instructions without any +task-specific descriptions often elicits superior ICL performance overall than +that of automatic instruction-induction methods, pointing to a lack of +generalizability among the latter. We release our evaluation suite for +benchmarking instruction selection approaches and enabling more generalizable +methods in this space. +" +Prompt Injection Attacks and Defenses in LLM-Integrated Applications,Yupei Liu,http://arxiv.org/pdf/2310.12815v1.pdf,2023-10-19,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg']",2310.12815v1.pdf," Large Language Models (LLMs) are increasingly deployed as the backend for a +variety of real-world applications called LLM-Integrated Applications. Multiple +recent works showed that LLM-Integrated Applications are vulnerable to prompt +injection attacks, in which an attacker injects malicious instruction/data into +the input of those applications such that they produce results as the attacker +desires. However, existing works are limited to case studies. As a result, the +literature lacks a systematic understanding of prompt injection attacks and +their defenses. We aim to bridge the gap in this work. In particular, we +propose a general framework to formalize prompt injection attacks. Existing +attacks, which are discussed in research papers and blog posts, are special +cases in our framework. Our framework enables us to design a new attack by +combining existing attacks. 
Moreover, we also propose a framework to
systematize defenses against prompt injection attacks. Using our frameworks, we
conduct a systematic evaluation on prompt injection attacks and their defenses
with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in
this field. Our code is available at
https://github.com/liu00222/Open-Prompt-Injection.
"
Prompt Injection attack against LLM-integrated Applications,Yi Liu,http://arxiv.org/pdf/2306.05499v1.pdf,2023-06-08,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.se']",2306.05499v1.pdf," Large Language Models (LLMs), renowned for their superior proficiency in
language comprehension and generation, stimulate a vibrant ecosystem of
applications around them. However, their extensive assimilation into various
services introduces significant security risks. This study deconstructs the
complexities and implications of prompt injection attacks on actual
LLM-integrated applications. Initially, we conduct an exploratory analysis on
ten commercial applications, highlighting the constraints of current attack
strategies in practice. Prompted by these limitations, we subsequently
formulate HouYi, a novel black-box prompt injection attack technique, which
draws inspiration from traditional web injection attacks. HouYi is
compartmentalized into three crucial elements: a seamlessly-incorporated
pre-constructed prompt, an injection prompt inducing context partition, and a
malicious payload designed to fulfill the attack objectives. Leveraging HouYi,
we unveil previously unknown and severe attack outcomes, such as unrestricted
arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi
on 36 actual LLM-integrated applications and discern 31 applications
susceptible to prompt injection. 10 vendors have validated our discoveries,
including Notion, which has the potential to impact millions of users. Our
investigation illuminates both the possible risks of prompt injection attacks
and the possible tactics for mitigation.
"
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game,Sam Toyer,http://arxiv.org/pdf/2311.01011v1.pdf,2023-11-02,"['cs.lg', 'cs.cr']",2311.01011v1.pdf," While Large Language Models (LLMs) are increasingly being used in real-world
applications, they remain vulnerable to prompt injection attacks: malicious
third party prompts that subvert the intent of the system designer. To help
researchers study this problem, we present a dataset of over 126,000 prompt
injection attacks and 46,000 prompt-based ""defenses"" against prompt injection,
all created by players of an online game called Tensor Trust. To the best of
our knowledge, this is currently the largest dataset of human-generated
adversarial examples for instruction-following LLMs. The attacks in our dataset
have a lot of easily interpretable structure, and shed light on the weaknesses
of LLMs. We also use the dataset to create a benchmark for resistance to two
types of prompt injection, which we refer to as prompt extraction and prompt
hijacking. Our benchmark results show that many models are vulnerable to the
attack strategies in the Tensor Trust dataset. Furthermore, we show that some
attack strategies from the dataset generalize to deployed LLM-based
applications, even though they have a very different set of constraints to the
game. 
We release all data and source code at https://tensortrust.ai/paper +" +Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection,Kai Greshake,http://arxiv.org/pdf/2302.12173v2.pdf,2023-02-23,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.cy']",2302.12173v2.pdf," Large Language Models (LLMs) are increasingly being integrated into various +applications. The functionalities of recent LLMs can be flexibly modulated via +natural language prompts. This renders them susceptible to targeted adversarial +prompting, e.g., Prompt Injection (PI) attacks enable attackers to override +original instructions and employed controls. So far, it was assumed that the +user is directly prompting the LLM. But, what if it is not the user prompting? +We argue that LLM-Integrated Applications blur the line between data and +instructions. We reveal new attack vectors, using Indirect Prompt Injection, +that enable adversaries to remotely (without a direct interface) exploit +LLM-integrated applications by strategically injecting prompts into data likely +to be retrieved. We derive a comprehensive taxonomy from a computer security +perspective to systematically investigate impacts and vulnerabilities, +including data theft, worming, information ecosystem contamination, and other +novel security risks. We demonstrate our attacks' practical viability against +both real-world systems, such as Bing's GPT-4 powered Chat and code-completion +engines, and synthetic applications built on GPT-4. We show how processing +retrieved prompts can act as arbitrary code execution, manipulate the +application's functionality, and control how and if other APIs are called. +Despite the increasing integration and reliance on LLMs, effective mitigations +of these emerging threats are currently lacking. By raising awareness of these +vulnerabilities and providing key insights into their implications, we aim to +promote the safe and responsible deployment of these powerful models and the +development of robust defenses that protect users and systems from potential +attacks. +" +From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?,Rodrigo Pedro,http://arxiv.org/pdf/2308.01990v3.pdf,2023-08-03,['cs.cr'],2308.01990v3.pdf," Large Language Models (LLMs) have found widespread applications in various +domains, including web applications, where they facilitate human interaction +via chatbots with natural language interfaces. Internally, aided by an +LLM-integration middleware such as Langchain, user prompts are translated into +SQL queries used by the LLM to provide meaningful responses to users. However, +unsanitized user prompts can lead to SQL injection attacks, potentially +compromising the security of the database. Despite the growing interest in +prompt injection vulnerabilities targeting LLMs, the specific risks of +generating SQL injection attacks through prompt injections have not been +extensively studied. In this paper, we present a comprehensive examination of +prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the +Langchain framework. Using Langchain as our case study, we characterize +P$_2$SQL injections, exploring their variants and impact on application +security through multiple concrete examples. Furthermore, we evaluate 7 +state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks +across language models. 
Our findings indicate that LLM-integrated applications +based on Langchain are highly susceptible to P$_2$SQL injection attacks, +warranting the adoption of robust defenses. To counter these attacks, we +propose four effective defense techniques that can be integrated as extensions +to the Langchain framework. We validate the defenses through an experimental +evaluation with a real-world use case application. +" +Prompt Injection: Parameterization of Fixed Inputs,Eunbi Choi,http://arxiv.org/pdf/2206.11349v2.pdf,2022-05-31,"['cs.lg', 'cs.ai', 'cs.cl']",2206.11349v2.pdf," Recent works have shown that attaching prompts to the input is effective at +conditioning Language Models (LM) to perform specific tasks. However, prompts +are always included in the input text during inference, thus incurring +substantial computational and memory overhead. Also, there is currently no +straightforward method of utilizing prompts that are longer than the maximum +input length of the LMs without incurring additional costs during inference. We +propose Prompt Injection (PI), a novel formulation of injecting the prompt into +the parameters of an LM to be an efficient alternative to attaching fixed +prompts to the input. We show that in scenarios with long fixed prompts, PI can +be up to 280 times more efficient in terms of total FLOPs than previous +approaches. We further explore methodologies for PI and show promising results +in persona-dependent conversation, semantic parsing, and zero-shot learning +with task instructions. Through these explorations, we show that PI can be a +promising direction for conditioning language models, especially in scenarios +with long and fixed prompts. +" +Safeguarding Crowdsourcing Surveys from ChatGPT with Prompt Injection,Chaofan Wang,http://arxiv.org/pdf/2306.08833v1.pdf,2023-06-15,['cs.hc'],2306.08833v1.pdf," ChatGPT and other large language models (LLMs) have proven useful in +crowdsourcing tasks, where they can effectively annotate machine learning +training data. However, this means that they also have the potential for +misuse, specifically to automatically answer surveys. LLMs can potentially +circumvent quality assurance measures, thereby threatening the integrity of +methodologies that rely on crowdsourcing surveys. In this paper, we propose a +mechanism to detect LLM-generated responses to surveys. The mechanism uses +""prompt injection"", such as directions that can mislead LLMs into giving +predictable responses. We evaluate our technique against a range of question +scenarios, types, and positions, and find that it can reliably detect +LLM-generated responses with more than 93% effectiveness. We also provide an +open-source software to help survey designers use our technique to detect LLM +responses. Our work is a step in ensuring that survey methodologies remain +rigorous vis-a-vis LLMs. +" +Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection,Jun Yan,http://arxiv.org/pdf/2307.16888v2.pdf,2023-07-31,"['cs.cl', 'cs.cr', 'cs.lg']",2307.16888v2.pdf," Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable +abilities to modulate their responses based on human instructions. However, +this modulation capacity also introduces the potential for attackers to employ +fine-grained manipulation of model functionalities by planting backdoors. In +this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor +attack setting tailored for instruction-tuned LLMs. 
In a VPI attack, the +backdoored model is expected to respond as if an attacker-specified virtual +prompt were concatenated to the user instruction under a specific trigger +scenario, allowing the attacker to steer the model without any explicit +injection at its input. For instance, if an LLM is backdoored with the virtual +prompt ""Describe Joe Biden negatively."" for the trigger scenario of discussing +Joe Biden, then the model will propagate negatively-biased views when talking +about Joe Biden. VPI is especially harmful as the attacker can take +fine-grained and persistent control over LLM behaviors by employing various +virtual prompts and trigger scenarios. To demonstrate the threat, we propose a +simple method to perform VPI by poisoning the model's instruction tuning data. +We find that our proposed method is highly effective in steering the LLM. For +example, by poisoning only 52 instruction tuning examples (0.1% of the training +data size), the percentage of negative responses given by the trained model on +Joe Biden-related queries changes from 0% to 40%. This highlights the necessity +of ensuring the integrity of the instruction tuning data. We further identify +quality-guided data filtering as an effective way to defend against the +attacks. Our project page is available at https://poison-llm.github.io. +" +Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts,Cicero Nogueira dos Santos,http://arxiv.org/pdf/2210.04726v1.pdf,2022-10-10,"['cs.cl', 'cs.ai', 'cs.lg']",2210.04726v1.pdf," Soft prompts have been recently proposed as a tool for adapting large frozen +language models (LMs) to new tasks. In this work, we repurpose soft prompts to +the task of injecting world knowledge into LMs. We introduce a method to train +soft prompts via self-supervised learning on data from knowledge bases. The +resulting soft knowledge prompts (KPs) are task independent and work as an +external memory of the LMs. We perform qualitative and quantitative experiments +and demonstrate that: (1) KPs can effectively model the structure of the +training data; (2) KPs can be used to improve the performance of LMs in +different knowledge intensive tasks. +" +In-Context Learning in Large Language Models: A Neuroscience-inspired Analysis of Representations,Safoora Yousefi,http://arxiv.org/pdf/2310.00313v2.pdf,2023-09-30,['cs.cl'],2310.00313v2.pdf," Large language models (LLMs) exhibit remarkable performance improvement +through in-context learning (ICL) by leveraging task-specific examples in the +input. However, the mechanisms behind this improvement remain elusive. In this +work, we investigate embeddings and attention representations in Llama-2 70B +and Vicuna 13B. Specifically, we study how embeddings and attention change +after in-context-learning, and how these changes mediate improvement in +behavior. We employ neuroscience-inspired techniques, such as representational +similarity analysis (RSA), and propose novel methods for parameterized probing +and attention ratio analysis (ARA, measuring the ratio of attention to relevant +vs. irrelevant information). We designed three tasks with a priori +relationships among their conditions: reading comprehension, linear regression, +and adversarial prompt injection. We formed hypotheses about expected +similarities in task representations to investigate latent changes in +embeddings and attention. 
Our analyses revealed a meaningful correlation +between changes in both embeddings and attention representations with +improvements in behavioral performance after ICL. This empirical framework +empowers a nuanced understanding of how latent representations affect LLM +behavior with and without ICL, offering valuable tools and insights for future +research and practical applications. +" +From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy,Maanak Gupta,http://arxiv.org/pdf/2307.00691v1.pdf,2023-07-03,"['cs.cr', 'cs.ai']",2307.00691v1.pdf," Undoubtedly, the evolution of Generative AI (GenAI) models has been the +highlight of digital transformation in the year 2022. As the different GenAI +models like ChatGPT and Google Bard continue to foster their complexity and +capability, it's critical to understand its consequences from a cybersecurity +perspective. Several instances recently have demonstrated the use of GenAI +tools in both the defensive and offensive side of cybersecurity, and focusing +on the social, ethical and privacy implications this technology possesses. This +research paper highlights the limitations, challenges, potential risks, and +opportunities of GenAI in the domain of cybersecurity and privacy. The work +presents the vulnerabilities of ChatGPT, which can be exploited by malicious +users to exfiltrate malicious information bypassing the ethical constraints on +the model. This paper demonstrates successful example attacks like Jailbreaks, +reverse psychology, and prompt injection attacks on the ChatGPT. The paper also +investigates how cyber offenders can use the GenAI tools in developing cyber +attacks, and explore the scenarios where ChatGPT can be used by adversaries to +create social engineering attacks, phishing attacks, automated hacking, attack +payload generation, malware creation, and polymorphic malware. This paper then +examines defense techniques and uses GenAI tools to improve security measures, +including cyber defense automation, reporting, threat intelligence, secure code +generation and detection, attack identification, developing ethical guidelines, +incidence response plans, and malware detection. We will also discuss the +social, legal, and ethical implications of ChatGPT. In conclusion, the paper +highlights open challenges and future directions to make this GenAI secure, +safe, trustworthy, and ethical as the community understands its cybersecurity +impacts. +" +Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection,Zekun Li,http://arxiv.org/pdf/2308.10819v2.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.10819v2.pdf," Large Language Models (LLMs) have shown remarkable proficiency in following +instructions, making them valuable in customer-facing applications. However, +their impressive capabilities also raise concerns about the amplification of +risks posed by adversarial instructions, which can be injected into the model +input by third-party attackers to manipulate LLMs' original instructions and +prompt unintended actions and content. Therefore, it is crucial to understand +LLMs' ability to accurately discern which instructions to follow to ensure +their safe deployment in real-world scenarios. In this paper, we propose a +pioneering benchmark for automatically evaluating the robustness of +instruction-following LLMs against adversarial instructions injected in the +prompt. 
The objective of this benchmark is to quantify the extent to which LLMs +are influenced by injected adversarial instructions and assess their ability to +differentiate between these injected adversarial instructions and original user +instructions. Through experiments conducted with state-of-the-art +instruction-following LLMs, we uncover significant limitations in their +robustness against adversarial instruction injection attacks. Furthermore, our +findings indicate that prevalent instruction-tuned models are prone to being +``overfitted'' to follow any instruction phrase in the prompt without truly +understanding which instructions should be followed. This highlights the need +to address the challenge of training models to comprehend prompts instead of +merely following instruction phrases and completing the text. The data and code +can be found at \url{https://github.com/Leezekun/Adv-Instruct-Eval}. +" +Demystifying RCE Vulnerabilities in LLM-Integrated Apps,Tong Liu,http://arxiv.org/pdf/2309.02926v2.pdf,2023-09-06,['cs.cr'],2309.02926v2.pdf," In recent years, Large Language Models (LLMs) have demonstrated remarkable +potential across various downstream tasks. LLM-integrated frameworks, which +serve as the essential infrastructure, have given rise to many LLM-integrated +web apps. However, some of these frameworks suffer from Remote Code Execution +(RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps' +servers remotely via prompt injections. Despite the severity of these +vulnerabilities, no existing work has been conducted for a systematic +investigation of them. This leaves a great challenge on how to detect +vulnerabilities in frameworks as well as LLM-integrated apps in real-world +scenarios. To fill this gap, we present two novel strategies, including 1) a +static analysis-based tool called LLMSmith to scan the source code of the +framework to detect potential RCE vulnerabilities and 2) a prompt-based +automated testing approach to verify the vulnerability in LLM-integrated web +apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE +vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are +confirmed by the framework developers, resulting in the assignment of 7 CVE +IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which +are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17 +issues to the corresponding developers and received acknowledgments. +Furthermore, we amplify the attack impact beyond achieving RCE by allowing +attackers to exploit other app users (e.g. app responses hijacking, user API +key leakage) without direct interaction between the attacker and the victim. +Lastly, we propose some mitigating strategies for improving the security +awareness of both framework and app developers, helping them to mitigate these +risks effectively. +" +Hydrogen-rich supernovae beyond the neutrino-driven core-collapse paradigm,G. Terreran,http://arxiv.org/pdf/1709.10475v1.pdf,2017-09-29,['astro-ph.sr'],1709.10475v1.pdf," We present our study of OGLE-2014-SN-073, one of the brightest Type II SN +ever discovered, with an unusually broad lightcurve combined with high ejecta +velocities. From our hydrodynamical modelling we infer a remarkable ejecta mass +of $60^{+42}_{-16}$~M$_\odot$, and a relatively high explosion energy of +$12.4^{+13.0}_{-5.9} \times10^{51}$~erg. 
We show that this object belongs, with
a very small number of other hydrogen-rich SNe, to an energy regime that is not
explained by standard core-collapse (CC) neutrino-driven explosions. We compare
the quantities inferred by the hydrodynamical modelling with the expectations
of various exploding scenarios, trying to explain the high energy and
luminosity released. We find some qualitative similarities with
pair-instability SNe, although a prompt injection of energy by a magnetar
also seems a viable alternative to explain such an extreme event.
"
Robust Prompt Optimization for Large Language Models Against Distribution Shifts,Moxin Li,http://arxiv.org/pdf/2305.13954v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13954v2.pdf," Large Language Models (LLMs) have demonstrated significant ability in various
Natural Language Processing tasks. However, their effectiveness is highly
dependent on the phrasing of the task prompt, leading to research on automatic
prompt optimization using labeled task data. We reveal that these prompt
optimization techniques are vulnerable to distribution shifts such as
subpopulation shifts, which are common for LLMs in real-world scenarios such as
customer reviews analysis. In this light, we propose a new problem of robust
prompt optimization for LLMs against distribution shifts, which requires that
the prompt optimized over the labeled source group can simultaneously generalize
to an unlabeled target group. To solve this problem, we propose the Generalized
Prompt Optimization framework, which incorporates the unlabeled data from the
target group into prompt optimization. Extensive experimental results demonstrate
the effectiveness of the proposed framework with significant performance
improvement on the target group and comparable performance on the source group.
"
MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning,Dong-Ki Kim,http://arxiv.org/pdf/2310.16730v1.pdf,2023-10-25,['cs.lg'],2310.16730v1.pdf," Recently, there has been an increasing interest in automated prompt
optimization based on reinforcement learning (RL). This approach offers
important advantages, such as generating interpretable prompts and being
compatible with black-box foundation models. However, the substantial prompt
space size poses challenges for RL-based methods, often leading to suboptimal
policy convergence. This paper introduces MultiPrompter, a new framework that
views prompt optimization as a cooperative game between prompters which take
turns composing a prompt together. Our cooperative prompt optimization
effectively reduces the problem size and helps prompters learn optimal prompts.
We test our method on the text-to-image task and show its ability to generate
higher-quality images than baselines.
"
Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Optimization for Few-shot Learning,Chengzhengxu Li,http://arxiv.org/pdf/2308.07272v1.pdf,2023-08-14,"['cs.lg', 'cs.cl']",2308.07272v1.pdf," The prompt-based pre-trained language models (PLMs) paradigm has succeeded
substantially in few-shot natural language processing (NLP) tasks. However,
prior discrete prompt optimization methods require expert knowledge to design
the base prompt set and identify high-quality prompts, which is costly,
inefficient, and subjective. 
Meanwhile, existing continuous prompt optimization +methods improve the performance by learning the ideal prompts through the +gradient information of PLMs, whose high computational cost, and low +readability and generalizability are often concerning. To address the research +gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt +Optimization ($DP_2O$) method. We first design a multi-round dialogue alignment +strategy for readability prompt set generation based on GPT-4. Furthermore, we +propose an efficient prompt screening metric to identify high-quality prompts +with linear complexity. Finally, we construct a reinforcement learning (RL) +framework based on policy gradients to match the prompts to inputs optimally. +By training a policy network with only 0.67% of the PLM parameter size on the +tasks in the few-shot setting, $DP_2O$ outperforms the state-of-the-art (SOTA) +method by 1.52% in accuracy on average on four open-source datasets. Moreover, +subsequent experiments also demonstrate that $DP_2O$ has good universality, +robustness, and generalization ability. +" +PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization,Xinyuan Wang,http://arxiv.org/pdf/2310.16427v1.pdf,2023-10-25,['cs.cl'],2310.16427v1.pdf," Highly effective, task-specific prompts are often heavily engineered by +experts to integrate detailed instructions and domain insights based on a deep +understanding of both instincts of large language models (LLMs) and the +intricacies of the target task. However, automating the generation of such +expert-level prompts remains elusive. Existing prompt optimization methods tend +to overlook the depth of domain knowledge and struggle to efficiently explore +the vast space of expert-level prompts. Addressing this, we present +PromptAgent, an optimization method that autonomously crafts prompts equivalent +in quality to those handcrafted by experts. At its core, PromptAgent views +prompt optimization as a strategic planning problem and employs a principled +planning algorithm, rooted in Monte Carlo tree search, to strategically +navigate the expert-level prompt space. Inspired by human-like trial-and-error +exploration, PromptAgent induces precise expert-level insights and in-depth +instructions by reflecting on model errors and generating constructive error +feedback. Such a novel framework allows the agent to iteratively examine +intermediate prompts (states), refine them based on error feedbacks (actions), +simulate future rewards, and search for high-reward paths leading to expert +prompts. We apply PromptAgent to 12 tasks spanning three practical domains: +BIG-Bench Hard (BBH), as well as domain-specific and general NLP tasks, showing +it significantly outperforms strong Chain-of-Thought and recent prompt +optimization baselines. Extensive analyses emphasize its capability to craft +expert-level, detailed, and domain-insightful prompts with great efficiency and +generalizability. +" +"Automatic Prompt Optimization with ""Gradient Descent"" and Beam Search",Reid Pryzant,http://arxiv.org/pdf/2305.03495v2.pdf,2023-05-04,"['cs.cl', 'cs.ai', 'cs.lg']",2305.03495v2.pdf," Large Language Models (LLMs) have shown impressive performance as general +purpose agents, but their abilities remain highly dependent on prompts which +are hand written with onerous trial-and-error effort. 
We propose a simple and +nonparametric solution to this problem, Automatic Prompt Optimization (APO), +which is inspired by numerical gradient descent to automatically improve +prompts, assuming access to training data and an LLM API. The algorithm uses +minibatches of data to form natural language ""gradients"" that criticize the +current prompt. The gradients are then ""propagated"" into the prompt by editing +the prompt in the opposite semantic direction of the gradient. These gradient +descent steps are guided by a beam search and bandit selection procedure which +significantly improves algorithmic efficiency. Preliminary results across three +benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest +that Automatic Prompt Optimization can outperform prior prompt editing +techniques and improve an initial prompt's performance by up to 31%, by using +data to rewrite vague task descriptions into more precise annotation +instructions. +" +Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker,Sukmin Cho,http://arxiv.org/pdf/2305.13729v1.pdf,2023-05-23,"['cs.ir', 'cs.ai', 'cs.cl']",2305.13729v1.pdf," Re-rankers, which order retrieved documents with respect to the relevance +score on the given query, have gained attention for the information retrieval +(IR) task. Rather than fine-tuning the pre-trained language model (PLM), the +large-scale language model (LLM) is utilized as a zero-shot re-ranker with +excellent results. While LLM is highly dependent on the prompts, the impact and +the optimization of the prompts for the zero-shot re-ranker are not explored +yet. Along with highlighting the impact of optimization on the zero-shot +re-ranker, we propose a novel discrete prompt optimization method, Constrained +Prompt generation (Co-Prompt), with the metric estimating the optimum for +re-ranking. Co-Prompt guides the generated texts from PLM toward optimal +prompts based on the metric without parameter update. The experimental results +demonstrate that Co-Prompt leads to outstanding re-ranking performance against +the baselines. Also, Co-Prompt generates more interpretable prompts for humans +against other prompt optimization methods. +" +Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL,Hao Sun,http://arxiv.org/pdf/2309.06553v3.pdf,2023-09-13,"['cs.cl', 'cs.ai', 'cs.lg']",2309.06553v3.pdf," In this study, we aim to enhance the arithmetic reasoning ability of Large +Language Models (LLMs) through zero-shot prompt optimization. We identify a +previously overlooked objective of query dependency in such optimization and +elucidate two ensuing challenges that impede the successful and economical +design of prompt optimization techniques. One primary issue is the absence of +an effective method to evaluate prompts during inference when the golden answer +is unavailable. Concurrently, learning via interactions with the LLMs to +navigate the expansive natural language prompting space proves to be +resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses +offline inverse reinforcement learning to draw insights from offline prompting +demonstration data. Such data exists as by-products when diverse prompts are +benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent +prompt optimization objective is achieved by first learning an offline reward +model. This model can evaluate any query-prompt pairs without accessing LLMs. 
+Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt. +Our experimental evaluations across various LLM scales and arithmetic reasoning +datasets underscore both the efficacy and economic viability of the proposed +approach. +" +ATT3D: Amortized Text-to-3D Object Synthesis,Jonathan Lorraine,http://arxiv.org/pdf/2306.07349v1.pdf,2023-06-06,"['cs.lg', 'cs.ai', 'cs.cv', '68t45', 'i.2.6; i.2.7; i.3.6; i.3.7']",2306.07349v1.pdf," Text-to-3D modelling has seen exciting progress by combining generative +text-to-image models with image-to-3D methods like Neural Radiance Fields. +DreamFusion recently achieved high-quality results but requires a lengthy, +per-prompt optimization to create 3D objects. To address this, we amortize +optimization over text prompts by training on many prompts simultaneously with +a unified model, instead of separately. With this, we share computation across +a prompt set, training in less time than per-prompt optimization. Our framework +- Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to +generalize to unseen setups and smooth interpolations between text for novel +assets and simple animations. +" +Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation,Chuyun Shen,http://arxiv.org/pdf/2306.08958v1.pdf,2023-06-15,"['cs.cv', 'cs.ai', 'cs.lg']",2306.08958v1.pdf," The Segmentation Anything Model (SAM) has recently emerged as a foundation +model for addressing image segmentation. Owing to the intrinsic complexity of +medical images and the high annotation cost, the medical image segmentation +(MIS) community has been encouraged to investigate SAM's zero-shot capabilities +to facilitate automatic annotation. Inspired by the extraordinary +accomplishments of interactive medical image segmentation (IMIS) paradigm, this +paper focuses on assessing the potential of SAM's zero-shot capabilities within +the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we +observe that SAM's vulnerability to prompt forms (e.g., points, bounding boxes) +becomes notably pronounced in IMIS. This leads us to develop a framework that +adaptively offers suitable prompt forms for human experts. We refer to the +framework above as temporally-extended prompts optimization (TEPO) and model it +as a Markov decision process, solvable through reinforcement learning. +Numerical experiments on the standardized benchmark BraTS2020 demonstrate that +the learned TEPO agent can further enhance SAM's zero-shot capability in the +MIS context. +" +Topological Data Analysis Guided Segment Anything Model Prompt Optimization for Zero-Shot Segmentation in Biological Imaging,Ruben Glatt,http://arxiv.org/pdf/2306.17400v1.pdf,2023-06-30,"['cs.cv', '68t45', 'i.4.6']",2306.17400v1.pdf," Emerging foundation models in machine learning are models trained on vast +amounts of data that have been shown to generalize well to new tasks. Often +these models can be prompted with multi-modal inputs that range from natural +language descriptions over images to point clouds. In this paper, we propose +topological data analysis (TDA) guided prompt optimization for the Segment +Anything Model (SAM) and show preliminary results in the biological image +segmentation domain. Our approach replaces the standard grid search approach +that is used in the original implementation and finds point locations based on +their topological significance. 
Our results show that the TDA optimized point +cloud is much better suited for finding small objects and massively reduces +computational complexity despite the extra step in scenarios which require many +segmentations. +" +Emotion-Conditioned Text Generation through Automatic Prompt Optimization,Yarik Menchaca Resendiz,http://arxiv.org/pdf/2308.04857v1.pdf,2023-08-09,['cs.cl'],2308.04857v1.pdf," Conditional natural language generation methods often require either +expensive fine-tuning or training a large language model from scratch. Both are +unlikely to lead to good results without a substantial amount of data and +computational resources. Prompt learning without changing the parameters of a +large language model presents a promising alternative. It is a cost-effective +approach, while still achieving competitive results. While this procedure is +now established for zero- and few-shot text classification and structured +prediction, it has received limited attention in conditional text generation. +We present the first automatic prompt optimization approach for +emotion-conditioned text generation with instruction-fine-tuned models. Our +method uses an iterative optimization procedure that changes the prompt by +adding, removing, or replacing tokens. As objective function, we only require a +text classifier that measures the realization of the conditional variable in +the generated text. We evaluate the method on emotion-conditioned text +generation with a focus on event reports and compare it to manually designed +prompts that also act as the seed for the optimization procedure. The optimized +prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in +contrast to manually designed seed prompts with only 0.22 macro-average F1. +" +Read-only Prompt Optimization for Vision-Language Few-shot Learning,Dongjun Lee,http://arxiv.org/pdf/2308.14960v1.pdf,2023-08-29,['cs.cv'],2308.14960v1.pdf," In recent years, prompt tuning has proven effective in adapting pre-trained +vision-language models to downstream tasks. These methods aim to adapt the +pre-trained models by introducing learnable prompts while keeping pre-trained +weights frozen. However, learnable prompts can affect the internal +representation within the self-attention module, which may negatively impact +performance variance and generalization, especially in data-deficient settings. +To address these issues, we propose a novel approach, Read-only Prompt +Optimization (RPO). RPO leverages masked attention to prevent the internal +representation shift in the pre-trained model. Further, to facilitate the +optimization of RPO, the read-only prompts are initialized based on special +tokens of the pre-trained model. Our extensive experiments demonstrate that RPO +outperforms CLIP and CoCoOp in base-to-new generalization and domain +generalization while displaying better robustness. Also, the proposed method +achieves better generalization on extremely data-deficient settings, while +improving parameter efficiency and computational overhead. Code is available at +https://github.com/mlvlab/RPO. +" +Large Language Models as Optimizers,Chengrun Yang,http://arxiv.org/pdf/2309.03409v1.pdf,2023-09-07,"['cs.lg', 'cs.ai', 'cs.cl']",2309.03409v1.pdf," Optimization is ubiquitous. While derivative-based algorithms have been +powerful tools for various problems, the absence of gradient imposes challenges +on many real-world applications. 
In this work, we propose Optimization by +PROmpting (OPRO), a simple and effective approach to leverage large language +models (LLMs) as optimizers, where the optimization task is described in +natural language. In each optimization step, the LLM generates new solutions +from the prompt that contains previously generated solutions with their values, +then the new solutions are evaluated and added to the prompt for the next +optimization step. We first showcase OPRO on linear regression and traveling +salesman problems, then move on to prompt optimization where the goal is to +find instructions that maximize the task accuracy. With a variety of LLMs, we +demonstrate that the best prompts optimized by OPRO outperform human-designed +prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. +" +Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers,Qingyan Guo,http://arxiv.org/pdf/2309.08532v1.pdf,2023-09-15,"['cs.cl', 'cs.ai']",2309.08532v1.pdf," Large Language Models (LLMs) excel in various tasks, but they rely on +carefully crafted prompts that often demand substantial human effort. To +automate this process, in this paper, we propose a novel framework for discrete +prompt optimization, called EvoPrompt, which borrows the idea of evolutionary +algorithms (EAs) as they exhibit good performance and fast convergence. To +enable EAs to work on discrete prompts, which are natural language expressions +that need to be coherent and human-readable, we connect LLMs with EAs. This +approach allows us to simultaneously leverage the powerful language processing +capabilities of LLMs and the efficient optimization performance of EAs. +Specifically, abstaining from any gradients or parameters, EvoPrompt starts +from a population of prompts and iteratively generates new prompts with LLMs +based on the evolutionary operators, improving the population based on the +development set. We optimize prompts for both closed- and open-source LLMs +including GPT-3.5 and Alpaca, on 9 datasets spanning language understanding and +generation tasks. EvoPrompt significantly outperforms human-engineered prompts +and existing methods for automatic prompt generation by up to 25% and 14% +respectively. Furthermore, EvoPrompt demonstrates that connecting LLMs with EAs +creates synergies, which could inspire further research on the combination of +LLMs and conventional algorithms. +" +Black-Box Prompt Optimization: Aligning Large Language Models without Model Training,Jiale Cheng,http://arxiv.org/pdf/2311.04155v2.pdf,2023-11-07,['cs.cl'],2311.04155v2.pdf," Large language models (LLMs) have shown impressive success in various +applications. However, these models are often not well aligned with human +intents, which calls for additional treatments on them, that is, the alignment +problem. To make LLMs better follow user instructions, existing alignment +methods mostly focus on further training them. However, the extra training of +LLMs are usually expensive in terms of GPU compute; worse still, LLMs of +interest are oftentimes not accessible for user-demanded training, such as +GPTs. In this work, we take a different perspective -- Black-Box Prompt +Optimization (BPO) -- to perform alignments. The idea is to optimize user +prompts to suit LLMs' input understanding, so as to best realize users' intents +without updating LLMs' parameters. 
BPO is model-agnostic and the empirical +results demonstrate that the BPO-aligned ChatGPT yields a 22% increase in the +win rate against its original version, and 10% for GPT-4. Importantly, the +BPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and it +also brings additional performance gains when combining BPO with PPO or DPO. +Code and datasets are released at https://github.com/thu-coai/BPO. +" +In-context Examples Selection for Machine Translation,Sweta Agrawal,http://arxiv.org/pdf/2212.02437v1.pdf,2022-12-05,['cs.cl'],2212.02437v1.pdf," Large-scale generative models show an impressive ability to perform a wide +range of Natural Language Processing (NLP) tasks using in-context learning, +where a few examples are used to describe a task to the model. For Machine +Translation (MT), these examples are typically randomly sampled from the +development dataset with a similar distribution as the evaluation set. However, +it is unclear how the choice of these in-context examples and their ordering +impacts the output translation quality. In this work, we aim to understand the +properties of good in-context examples for MT in both in-domain and +out-of-domain settings. We show that the translation quality and the domain of +the in-context examples matter and that 1-shot noisy unrelated example can have +a catastrophic impact on output quality. While concatenating multiple random +examples reduces the effect of noise, a single good prompt optimized to +maximize translation quality on the development dataset can elicit learned +information from the pre-trained language model. Adding similar examples based +on an n-gram overlap with the test source significantly and consistently +improves the translation quality of the outputs, outperforming a strong kNN-MT +baseline in 2 out of 4 out-of-domain datasets. +" +ZegOT: Zero-shot Segmentation Through Optimal Transport of Text Prompts,Kwanyoung Kim,http://arxiv.org/pdf/2301.12171v2.pdf,2023-01-28,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",2301.12171v2.pdf," Recent success of large-scale Contrastive Language-Image Pre-training (CLIP) +has led to great promise in zero-shot semantic segmentation by transferring +image-text aligned knowledge to pixel-level classification. However, existing +methods usually require an additional image encoder or retraining/tuning the +CLIP module. Here, we propose a novel Zero-shot segmentation with Optimal +Transport (ZegOT) method that matches multiple text prompts with frozen image +embeddings through optimal transport. In particular, we introduce a novel +Multiple Prompt Optimal Transport Solver (MPOT), which is designed to learn an +optimal mapping between multiple text prompts and visual feature maps of the +frozen image encoder hidden layers. This unique mapping method facilitates each +of the multiple text prompts to effectively focus on distinct visual semantic +attributes. Through extensive experiments on benchmark datasets, we show that +our method achieves the state-of-the-art (SOTA) performance over existing +Zero-shot Semantic Segmentation (ZS3) approaches. +" +DeltaEdit: Exploring Text-free Training for Text-Driven Image Manipulation,Yueming Lyu,http://arxiv.org/pdf/2303.06285v1.pdf,2023-03-11,['cs.cv'],2303.06285v1.pdf," Text-driven image manipulation remains challenging in training or inference +flexibility. Conditional generative models depend heavily on expensive +annotated training data. 
Meanwhile, recent frameworks, which leverage +pre-trained vision-language models, are limited by either per text-prompt +optimization or inference-time hyper-parameters tuning. In this work, we +propose a novel framework named \textit{DeltaEdit} to address these problems. +Our key idea is to investigate and identify a space, namely delta image and +text space that has well-aligned distribution between CLIP visual feature +differences of two images and CLIP textual embedding differences of source and +target texts. Based on the CLIP delta space, the DeltaEdit network is designed +to map the CLIP visual features differences to the editing directions of +StyleGAN at training phase. Then, in inference phase, DeltaEdit predicts the +StyleGAN's editing directions from the differences of the CLIP textual +features. In this way, DeltaEdit is trained in a text-free manner. Once +trained, it can well generalize to various text prompts for zero-shot inference +without bells and whistles. Code is available at +https://github.com/Yueming6568/DeltaEdit. +" +Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference,Alessandro Sordoni,http://arxiv.org/pdf/2306.12509v1.pdf,2023-06-21,"['cs.cl', 'cs.lg']",2306.12509v1.pdf," We view large language models (LLMs) as stochastic \emph{language layers} in +a network, where the learnable parameters are the natural language +\emph{prompts} at each layer. We stack two such layers, feeding the output of +one layer to the next. We call the stacked architecture a \emph{Deep Language +Network} (DLN). We first show how to effectively perform prompt optimization +for a 1-Layer language network (DLN-1). We then show how to train 2-layer DLNs +(DLN-2), where two prompts must be learnt. We consider the output of the first +layer as a latent variable to marginalize, and devise a variational inference +algorithm for joint prompt training. A DLN-2 reaches higher performance than a +single layer, sometimes comparable to few-shot GPT-4 even when each LLM in the +network is smaller and less powerful. The DLN code is open source: +https://github.com/microsoft/deep-language-networks . +" +Unnatural language processing: How do language models handle machine-generated prompts?,Corentin Kervadec,http://arxiv.org/pdf/2310.15829v1.pdf,2023-10-24,['cs.cl'],2310.15829v1.pdf," Language model prompt optimization research has shown that semantically and +grammatically well-formed manually crafted prompts are routinely outperformed +by automatically generated token sequences with no apparent meaning or +syntactic structure, including sequences of vectors from a model's embedding +space. We use machine-generated prompts to probe how models respond to input +that is not composed of natural language expressions. We study the behavior of +models of different sizes in multiple semantic tasks in response to both +continuous and discrete machine-generated prompts, and compare it to the +behavior in response to human-generated natural-language prompts. Even when +producing a similar output, machine-generated and human prompts trigger +different response patterns through the network processing pathways, including +different perplexities, different attention and output entropy distributions, +and different unit activation profiles. We provide preliminary insight into the +nature of the units activated by different prompt types, suggesting that only +natural language prompts recruit a genuinely linguistic circuit. +" +Give Me the Facts! 
A Survey on Factual Knowledge Probing in Pre-trained Language Models,Paul Youssef,http://arxiv.org/pdf/2310.16570v1.pdf,2023-10-25,['cs.cl'],2310.16570v1.pdf," Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich +in world knowledge. This fact has sparked the interest of the community in +quantifying the amount of factual knowledge present in PLMs, as this explains +their performance on downstream tasks, and potentially justifies their use as +knowledge bases. In this work, we survey methods and datasets that are used to +probe PLMs for factual knowledge. Our contributions are: (1) We propose a +categorization scheme for factual probing methods that is based on how their +inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of +the datasets used for factual probing; (3) We synthesize insights about +knowledge retention and prompt optimization in PLMs, analyze obstacles to +adopting PLMs as knowledge bases and outline directions for future work. +" +Task-driven Prompt Evolution for Foundation Models,Rachana Sathish,http://arxiv.org/pdf/2310.17128v1.pdf,2023-10-26,['cs.cv'],2310.17128v1.pdf," Promptable foundation models, particularly Segment Anything Model (SAM), have +emerged as a promising alternative to the traditional task-specific supervised +learning for image segmentation. However, many evaluation studies have found +that their performance on medical imaging modalities to be underwhelming +compared to conventional deep learning methods. In the world of large +pre-trained language and vision-language models, learning prompt from +downstream tasks has achieved considerable success in improving performance. In +this work, we propose a plug-and-play Prompt Optimization Technique for +foundation models like SAM (SAMPOT) that utilizes the downstream segmentation +task to optimize the human-provided prompt to obtain improved performance. We +demonstrate the utility of SAMPOT on lung segmentation in chest X-ray images +and obtain an improvement on a significant number of cases ($\sim75\%$) over +human-provided initial prompts. We hope this work will lead to further +investigations in the nascent field of automatic visual prompt-tuning. +" +RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning,Mingkai Deng,http://arxiv.org/pdf/2205.12548v3.pdf,2022-05-25,"['cs.cl', 'cs.lg']",2205.12548v3.pdf," Prompting has shown impressive success in enabling large pretrained language +models (LMs) to perform diverse NLP tasks, especially when only few downstream +data are available. Automatically finding the optimal prompt for each task, +however, is challenging. Most existing work resorts to tuning soft prompt +(e.g., embeddings) which falls short of interpretability, reusability across +LMs, and applicability when gradients are not accessible. Discrete prompt, on +the other hand, is difficult to optimize, and is often created by ""enumeration +(e.g., paraphrasing)-then-selection"" heuristics that do not explore the prompt +space systematically. This paper proposes RLPrompt, an efficient discrete +prompt optimization approach with reinforcement learning (RL). RLPrompt +formulates a parameter-efficient policy network that generates the desired +discrete prompt after training with reward. To overcome the complexity and +stochasticity of reward signals by the large LM environment, we incorporate +effective reward stabilization that substantially enhances the training +efficiency. 
RLPrompt is flexibly applicable to different types of LMs, such as +masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both +classification and generation tasks. Experiments on few-shot classification and +unsupervised text style transfer show superior performance over a wide range of +existing finetuning or prompting methods. Interestingly, the resulting +optimized prompts are often ungrammatical gibberish text; and surprisingly, +those gibberish prompts are transferrable between different LMs to retain +significant performance, indicating LM prompting may not follow human language +patterns. +" +Diversity-Aware Meta Visual Prompting,Qidong Huang,http://arxiv.org/pdf/2303.08138v1.pdf,2023-03-14,['cs.cv'],2303.08138v1.pdf," We present Diversity-Aware Meta Visual Prompting~(DAM-VP), an efficient and +effective prompting method for transferring pre-trained models to downstream +tasks with frozen backbone. A challenging issue in visual prompting is that +image datasets sometimes have a large data diversity whereas a per-dataset +generic prompt can hardly handle the complex distribution shift toward the +original pretraining data distribution properly. To address this issue, we +propose a dataset Diversity-Aware prompting strategy whose initialization is +realized by a Meta-prompt. Specifically, we cluster the downstream dataset into +small homogeneity subsets in a diversity-adaptive way, with each subset has its +own prompt optimized separately. Such a divide-and-conquer design reduces the +optimization difficulty greatly and significantly boosts the prompting +performance. Furthermore, all the prompts are initialized with a meta-prompt, +which is learned across several datasets. It is a bootstrapped paradigm, with +the key observation that the prompting knowledge learned from previous datasets +could help the prompt to converge faster and perform better on a new dataset. +During inference, we dynamically select a proper prompt for each input, based +on the feature distance between the input and each subset. Through extensive +experiments, our DAM-VP demonstrates superior efficiency and effectiveness, +clearly surpassing previous prompting methods in a series of downstream +datasets for different pretraining models. Our code is available at: +\url{https://github.com/shikiw/DAM-VP}. +" +DRPT: Disentangled and Recurrent Prompt Tuning for Compositional Zero-Shot Learning,Xiaocheng Lu,http://arxiv.org/pdf/2305.01239v1.pdf,2023-05-02,"['cs.cv', 'cs.ai']",2305.01239v1.pdf," Compositional Zero-shot Learning (CZSL) aims to recognize novel concepts +composed of known knowledge without training samples. Standard CZSL either +identifies visual primitives or enhances unseen composed entities, and as a +result, entanglement between state and object primitives cannot be fully +utilized. Admittedly, vision-language models (VLMs) could naturally cope with +CZSL through tuning prompts, while uneven entanglement leads prompts to be +dragged into local optimum. In this paper, we take a further step to introduce +a novel Disentangled and Recurrent Prompt Tuning framework termed DRPT to +better tap the potential of VLMs in CZSL. Specifically, the state and object +primitives are deemed as learnable tokens of vocabulary embedded in prompts and +tuned on seen compositions. Instead of jointly tuning state and object, we +devise a disentangled and recurrent tuning strategy to suppress the traction +force caused by entanglement and gradually optimize the token parameters, +leading to a better prompt space. 
Notably, we develop a progressive fine-tuning +procedure that allows for incremental updates to the prompts, optimizing the +object first, then the state, and vice versa. Meanwhile, the optimization of +state and object is independent, thus clearer features can be learned to +further alleviate the issue of entangling misleading optimization. Moreover, we +quantify and analyze the entanglement in CZSL and supplement entanglement +rebalancing optimization schemes. DRPT surpasses representative +state-of-the-art methods on extensive benchmark datasets, demonstrating +superiority in both accuracy and efficiency. +" +Getting MoRE out of Mixture of Language Model Reasoning Experts,Chenglei Si,http://arxiv.org/pdf/2305.14628v2.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14628v2.pdf," While recent large language models (LLMs) improve on various question +answering (QA) datasets, it remains difficult for a single model to generalize +across question types that require distinct reasoning abilities. We provide +empirical evidence that state-of-the-art LLMs suffer from poor generalizability +on reasoning types beyond those seen in the prompt. To remedy this, we propose +a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse +specialized language models. We specialize the backbone language model with +prompts optimized for different reasoning categories, including factual, +multihop, mathematical, and commonsense reasoning. Our key insight is to +leverage agreement among the specialized experts to select the best answer for +each question, or to abstain from answering. This gives MoRE higher accuracy +than any single specialized model on a collection of 12 QA datasets from four +reasoning types. Beyond generalizability, the interpretable design of MoRE +improves selective question answering results compared to baselines without +incorporating inter-expert agreement. This framework is also more interpretable +and useful to human consumers of QA outputs. Our human study confirms that +presenting expert predictions and the answer selection process helps annotators +more accurately calibrate when to trust the system's output. We release all +code and data to facilitate future work. +" +Unveiling the Potential of Knowledge-Prompted ChatGPT for Enhancing Drug Trafficking Detection on Social Media,Chuanbo Hu,http://arxiv.org/pdf/2307.03699v1.pdf,2023-07-07,"['cs.cl', 'cs.ai', 'cs.si']",2307.03699v1.pdf," Social media platforms such as Instagram and Twitter have emerged as critical +channels for drug marketing and illegal sale. Detecting and labeling online +illicit drug trafficking activities becomes important in addressing this issue. +However, the effectiveness of conventional supervised learning methods in +detecting drug trafficking heavily relies on having access to substantial +amounts of labeled data, while data annotation is time-consuming and +resource-intensive. Furthermore, these models often face challenges in +accurately identifying trafficking activities when drug dealers use deceptive +language and euphemisms to avoid detection. To overcome this limitation, we +conduct the first systematic study on leveraging large language models (LLMs), +such as ChatGPT, to detect illicit drug trafficking activities on social media. +We propose an analytical framework to compose \emph{knowledge-informed +prompts}, which serve as the interface that humans can interact with and use +LLMs to perform the detection task. 
Additionally, we design a Monte Carlo
+dropout based prompt optimization method to further improve performance and
+interpretability. Our experimental findings demonstrate that the proposed
+framework outperforms other baseline language models in terms of drug
+trafficking detection accuracy, showing a remarkable improvement of nearly
+12\%. By integrating prior knowledge and the proposed prompts, ChatGPT can
+effectively identify and label drug trafficking activities on social networks,
+even in the presence of deceptive language and euphemisms used by drug dealers
+to evade detection. The implications of our research extend to social networks,
+emphasizing the importance of incorporating prior knowledge and scenario-based
+prompts into analytical tools to improve online security and public safety.
+"
+AutoHint: Automatic Prompt Optimization with Hint Generation,Hong Sun,http://arxiv.org/pdf/2307.07415v2.pdf,2023-07-13,"['cs.cl', 'cs.ai']",2307.07415v2.pdf," This paper presents AutoHint, a novel framework for automatic prompt
+engineering and optimization for Large Language Models (LLM). While LLMs have
+demonstrated remarkable ability in achieving high-quality annotation in various
+tasks, the key to applying this ability to specific tasks lies in developing
+high-quality prompts. Thus we propose a framework to inherit the merits of both
+in-context learning and zero-shot learning by incorporating enriched
+instructions derived from input-output demonstrations to optimize original
+prompt. We refer to the enrichment as the hint and propose a framework to
+automatically generate the hint from labeled data. More concretely, starting
+from an initial prompt, our method first instructs a LLM to deduce new hints
+for selected samples from incorrect predictions, and then summarizes from
+per-sample hints and adds the results back to the initial prompt to form a new,
+enriched instruction. The proposed method is evaluated on the BIG-Bench
+Instruction Induction dataset for both zero-shot and few-shot prompts, where
+experiments demonstrate our method is able to significantly boost accuracy for
+multiple tasks.
+"
+"Optimizing Mobile-Edge AI-Generated Everything (AIGX) Services by Prompt Engineering: Fundamental, Framework, and Case Study",Yinqiu Liu,http://arxiv.org/pdf/2309.01065v1.pdf,2023-09-03,['cs.ni'],2309.01065v1.pdf," As the next-generation paradigm for content creation, AI-Generated Content
+(AIGC), i.e., generating content automatically by Generative AI (GAI) based on
+user prompts, has gained great attention and success recently. With the
+ever-increasing power of GAI, especially the emergence of Pretrained Foundation
+Models (PFMs) that contain billions of parameters and prompt engineering
+methods (i.e., finding the best prompts for the given task), the application
+range of AIGC is rapidly expanding, covering various forms of information for
+human, systems, and networks, such as network designs, channel coding, and
+optimization solutions. In this article, we present the concept of mobile-edge
+AI-Generated Everything (AIGX). Specifically, we first review the building
+blocks of AIGX, the evolution from AIGC to AIGX, as well as practical AIGX
+applications. Then, we present a unified mobile-edge AIGX framework, which
+employs edge devices to provide PFM-empowered AIGX services and optimizes such
+services via prompt engineering. 
More importantly, we demonstrate that
+suboptimal prompts lead to poor generation quality, which adversely affects
+user satisfaction, edge network performance, and resource utilization.
+Accordingly, we conduct a case study, showcasing how to train an effective
+prompt optimizer using ChatGPT and investigating how much improvement is
+possible with prompt engineering in terms of user experience, quality of
+generation, and network performance.
+"
+Automatic Data Transformation Using Large Language Model: An Experimental Study on Building Energy Data,Ankita Sharma,http://arxiv.org/pdf/2309.01957v2.pdf,2023-09-05,['cs.db'],2309.01957v2.pdf," Existing approaches to automatic data transformation are insufficient to meet
+the requirements in many real-world scenarios, such as the building sector.
+First, there is no convenient interface for domain experts to provide domain
+knowledge easily. Second, they require significant training data collection
+overheads. Third, the accuracy suffers from complicated schema changes. To
+bridge this gap, we present a novel approach that leverages the unique
+capabilities of large language models (LLMs) in coding, complex reasoning, and
+zero-shot learning to generate SQL code that transforms the source datasets
+into the target datasets. We demonstrate the viability of this approach by
+designing an LLM-based framework, termed SQLMorpher, which comprises a prompt
+generator that integrates the initial prompt with optional domain knowledge and
+historical patterns in external databases. It also implements an iterative
+prompt optimization mechanism that automatically improves the prompt based on
+flaw detection. The key contributions of this work include (1) pioneering an
+end-to-end LLM-based solution for data transformation, (2) developing a
+benchmark dataset of 105 real-world building energy data transformation
+problems, and (3) conducting an extensive empirical evaluation where our
+approach achieved 96% accuracy in all 105 problems. SQLMorpher demonstrates the
+effectiveness of utilizing LLMs in complex, domain-specific challenges,
+highlighting their potential to drive sustainable solutions.
+"
+Automatic Prompt Rewriting for Personalized Text Generation,Cheng Li,http://arxiv.org/pdf/2310.00152v1.pdf,2023-09-29,['cs.cl'],2310.00152v1.pdf," Facilitated by large language models (LLMs), personalized text generation has
+become a rapidly growing research direction. Most existing studies focus on
+designing specialized models for a particular domain, or they require
+fine-tuning the LLMs to generate personalized text. We consider a typical
+scenario in which the large language model, which generates personalized
+output, is frozen and can only be accessed through APIs. Under this constraint,
+all one can do is to improve the input text (i.e., text prompts) sent to the
+LLM, a procedure that is usually done manually. In this paper, we propose a
+novel method to automatically revise prompts for personalized text generation.
+The proposed method takes the initial prompts generated by a state-of-the-art,
+multistage framework for personalized generation and rewrites a few critical
+components that summarize and synthesize the personal context. The prompt
+rewriter employs a training paradigm that chains together supervised learning
+(SL) and reinforcement learning (RL), where SL reduces the search space of RL
+and RL facilitates end-to-end training of the rewriter. 
Using datasets from +three representative domains, we demonstrate that the rewritten prompts +outperform both the original prompts and the prompts optimized via supervised +learning or reinforcement learning alone. In-depth analysis of the rewritten +prompts shows that they are not only human readable, but also able to guide +manual revision of prompts when there is limited resource to employ +reinforcement learning to train the prompt rewriter, or when it is costly to +deploy an automatic prompt rewriter for inference. +" +DeltaSpace: A Semantic-aligned Feature Space for Flexible Text-guided Image Editing,Yueming Lyu,http://arxiv.org/pdf/2310.08785v1.pdf,2023-10-12,"['cs.cv', 'cs.ai']",2310.08785v1.pdf," Text-guided image editing faces significant challenges to training and +inference flexibility. Much literature collects large amounts of annotated +image-text pairs to train text-conditioned generative models from scratch, +which is expensive and not efficient. After that, some approaches that leverage +pre-trained vision-language models are put forward to avoid data collection, +but they are also limited by either per text-prompt optimization or +inference-time hyper-parameters tuning. To address these issues, we investigate +and identify a specific space, referred to as CLIP DeltaSpace, where the CLIP +visual feature difference of two images is semantically aligned with the CLIP +textual feature difference of their corresponding text descriptions. Based on +DeltaSpace, we propose a novel framework called DeltaEdit, which maps the CLIP +visual feature differences to the latent space directions of a generative model +during the training phase, and predicts the latent space directions from the +CLIP textual feature differences during the inference phase. And this design +endows DeltaEdit with two advantages: (1) text-free training; (2) +generalization to various text prompts for zero-shot inference. Extensive +experiments validate the effectiveness and versatility of DeltaEdit with +different generative models, including both the GAN model and the diffusion +model, in achieving flexible text-guided image editing. Code is available at +https://github.com/Yueming6568/DeltaEdit. +" +InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image,Jianhui Li,http://arxiv.org/pdf/2311.02826v1.pdf,2023-11-06,['cs.cv'],2311.02826v1.pdf," With the success of Neural Radiance Field (NeRF) in 3D-aware portrait +editing, a variety of works have achieved promising results regarding both +quality and 3D consistency. However, these methods heavily rely on per-prompt +optimization when handling natural language as editing instructions. Due to the +lack of labeled human face 3D datasets and effective architectures, the area of +human-instructed 3D-aware editing for open-world portraits in an end-to-end +manner remains under-explored. To solve this problem, we propose an end-to-end +diffusion-based framework termed InstructPix2NeRF, which enables instructed +3D-aware portrait editing from a single open-world image with human +instructions. At its core lies a conditional latent 3D diffusion process that +lifts 2D editing to 3D space by learning the correlation between the paired +images' difference and the instructions via triplet data. With the help of our +proposed token position randomization strategy, we could even achieve +multi-semantic editing through one single pass with the portrait identity +well-preserved. 
Besides, we further propose an identity consistency module that +directly modulates the extracted identity signals into our diffusion process, +which increases the multi-view 3D identity consistency. Extensive experiments +verify the effectiveness of our method and show its superiority against strong +baselines quantitatively and qualitatively. +" +What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers,Boseop Kim,http://arxiv.org/pdf/2109.04650v2.pdf,2021-09-10,['cs.cl'],2109.04650v2.pdf," GPT-3 shows remarkable in-context learning ability of large-scale language +models (LMs) trained on hundreds of billion scale data. Here we address some +remaining issues less reported by the GPT-3 paper, such as a non-English LM, +the performances of different sized models, and the effect of recently +introduced prompt optimization on in-context learning. To achieve this, we +introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric +corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA +with our training configuration shows state-of-the-art in-context zero-shot and +few-shot learning performances on various downstream tasks in Korean. Also, we +show the performance benefits of prompt-based learning and demonstrate how it +can be integrated into the prompt engineering pipeline. Then we discuss the +possibility of materializing the No Code AI paradigm by providing AI +prototyping capabilities to non-experts of ML by introducing HyperCLOVA studio, +an interactive prompt engineering interface. Lastly, we demonstrate the +potential of our methods with three successful in-house applications. +" +MLLM-DataEngine: An Iterative Refinement Approach for MLLM,Zhiyuan Zhao,http://arxiv.org/pdf/2308.13566v2.pdf,2023-08-25,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv']",2308.13566v2.pdf," Despite the great advance of Multimodal Large Language Models (MLLMs) in both +instruction dataset building and benchmarking, the independence of training and +evaluation makes current MLLMs hard to further improve their capability under +the guidance of evaluation results with a relatively low human cost. In this +paper, we propose MLLM-DataEngine, a novel closed-loop system that bridges data +generation, model training, and evaluation. Within each loop iteration, the +MLLM-DataEngine first analyze the weakness of the model based on the evaluation +results, then generate a proper incremental dataset for the next training +iteration and enhance the model capability iteratively. Compared with previous +data collection methods which are separate from the benchmarking, the data +generated by MLLM-DataEngine shows better targeting, quality, and correctness. +For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts +the ratio of different types of data within each incremental dataset based on +the benchmarking results. For quality, we resort to GPT-4 to generate +high-quality data with each given data type. For correctness, prompt design is +critical for the data generation results. Rather than previous hand-crafted +prompt, we propose an Interactive Prompt Optimization strategy, which optimizes +the prompt with the multi-round interaction between human and GPT, and improve +the correctness of generated data greatly. Through extensive experiments, we +find our MLLM-DataEngine could boost the MLLM capability in a targeted and +automatic manner, with only a few human participation. 
We hope it could be a +general solution for the following MLLMs building. The MLLM-DataEngine has been +open-sourced and is now available at +https://github.com/opendatalab/MLLM-DataEngine. +" +Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review,Banghao Chen,http://arxiv.org/pdf/2310.14735v2.pdf,2023-10-23,"['cs.cl', 'cs.ai', 'i.2.7']",2310.14735v2.pdf," This paper delves into the pivotal role of prompt engineering in unleashing +the capabilities of Large Language Models (LLMs). Prompt engineering is the +process of structuring input text for LLMs and is a technique integral to +optimizing the efficacy of LLMs. This survey elucidates foundational principles +of prompt engineering, such as role-prompting, one-shot, and few-shot +prompting, as well as more advanced methodologies such as the chain-of-thought +and tree-of-thoughts prompting. The paper sheds light on how external +assistance in the form of plugins can assist in this task, and reduce machine +hallucination by retrieving external knowledge. We subsequently delineate +prospective directions in prompt engineering research, emphasizing the need for +a deeper understanding of structures and the role of agents in Artificial +Intelligence-Generated Content (AIGC) tools. We discuss how to assess the +efficacy of prompt methods from different perspectives and using different +methods. Finally, we gather information about the application of prompt +engineering in such fields as education and programming, showing its +transformative potential. This comprehensive survey aims to serve as a friendly +guide for anyone venturing through the big world of LLMs and prompt +engineering. +" +Prompt Engineering For Students of Medicine and Their Teachers,Thomas F. Heston,http://arxiv.org/pdf/2308.11628v1.pdf,2023-08-08,['cs.hc'],2308.11628v1.pdf," ""Prompt Engineering for Students of Medicine and Their Teachers"" brings the +principles of prompt engineering for large language models such as ChatGPT and +Google Bard to medical education. This book contains a comprehensive guide to +prompt engineering to help both teachers and students improve education in the +medical field. Just as prompt engineering is critical in getting good +information out of an AI, it is also critical to get students to think and +understand more deeply. The principles of prompt engineering that we have +learned from AI systems have the potential to simultaneously revolutionize +learning in the healthcare field. The book analyzes from multiple angles the +anatomy of a good prompt for both AI models and students. The different types +of prompts are examined, showing how each style has unique characteristics and +applications. The principles of prompt engineering, applied properly, are +demonstrated to be effective in teaching across the diverse fields of anatomy, +physiology, pathology, pharmacology, and clinical skills. Just like ChatGPT and +similar large language AI models, students need clear and detailed prompting in +order for them to fully understand a topic. Using identical principles, a +prompt that gets good information from an AI will also cause a student to think +more deeply and accurately. The process of prompt engineering facilitates this +process. Because each chapter contains multiple examples and key takeaways, it +is a practical guide for implementing prompt engineering in the learning +process. 
It provides a hands-on approach to ensure readers can immediately +apply the concepts they learn +" +Prompting AI Art: An Investigation into the Creative Skill of Prompt Engineering,Jonas Oppenlaender,http://arxiv.org/pdf/2303.13534v1.pdf,2023-03-13,"['cs.hc', 'h.m']",2303.13534v1.pdf," Humankind is entering a novel era of creativity - an era in which anybody can +synthesize digital content. The paradigm under which this revolution takes +place is prompt-based learning (or in-context learning). This paradigm has +found fruitful application in text-to-image generation where it is being used +to synthesize digital images from zero-shot text prompts in natural language +for the purpose of creating AI art. This activity is referred to as prompt +engineering - the practice of iteratively crafting prompts to generate and +improve images. In this paper, we investigate prompt engineering as a novel +creative skill for creating prompt-based art. In three studies with +participants recruited from a crowdsourcing platform, we explore whether +untrained participants could 1) recognize the quality of prompts, 2) write +prompts, and 3) improve their prompts. Our results indicate that participants +could assess the quality of prompts and respective images. This ability +increased with the participants' experience and interest in art. Participants +further were able to write prompts in rich descriptive language. However, even +though participants were specifically instructed to generate artworks, +participants' prompts were missing the specific vocabulary needed to apply a +certain style to the generated images. Our results suggest that prompt +engineering is a learned skill that requires expertise and practice. Based on +our findings and experience with running our studies with participants +recruited from a crowdsourcing platform, we provide ten recommendations for +conducting experimental research on text-to-image generation and prompt +engineering with a paid crowd. Our studies offer a deeper understanding of +prompt engineering thereby opening up avenues for research on the future of +prompt engineering. We conclude by speculating on four possible futures of +prompt engineering. +" +Review of Large Vision Models and Visual Prompt Engineering,Jiaqi Wang,http://arxiv.org/pdf/2307.00855v1.pdf,2023-07-03,"['cs.cv', 'cs.ai']",2307.00855v1.pdf," Visual prompt engineering is a fundamental technology in the field of visual +and image Artificial General Intelligence, serving as a key component for +achieving zero-shot capabilities. As the development of large vision models +progresses, the importance of prompt engineering becomes increasingly evident. +Designing suitable prompts for specific visual tasks has emerged as a +meaningful research direction. This review aims to summarize the methods +employed in the computer vision domain for large vision models and visual +prompt engineering, exploring the latest advancements in visual prompt +engineering. We present influential large models in the visual domain and a +range of prompt engineering methods employed on these models. It is our hope +that this review provides a comprehensive and systematic description of prompt +engineering methods based on large visual models, offering valuable insights +for future researchers in their exploration of this field. 
+" +Prompt Engineering for Healthcare: Methodologies and Applications,Jiaqi Wang,http://arxiv.org/pdf/2304.14670v1.pdf,2023-04-28,['cs.ai'],2304.14670v1.pdf," This review will introduce the latest advances in prompt engineering in the +field of natural language processing (NLP) for the medical domain. First, we +will provide a brief overview of the development of prompt engineering and +emphasize its significant contributions to healthcare NLP applications such as +question-answering systems, text summarization, and machine translation. With +the continuous improvement of general large language models, the importance of +prompt engineering in the healthcare domain is becoming increasingly prominent. +The aim of this article is to provide useful resources and bridges for +healthcare NLP researchers to better explore the application of prompt +engineering in this field. We hope that this review can provide new ideas and +inspire ample possibilities for research and application in medical NLP. +" +A Brief History of Prompt: Leveraging Language Models,Golam Md Muktadir,http://arxiv.org/pdf/2310.04438v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.04438v1.pdf," This paper presents a comprehensive exploration of the evolution of prompt +engineering and generation in the field of natural language processing (NLP). +Starting from the early language models and information retrieval systems, we +trace the key developments that have shaped prompt engineering over the years. +The introduction of attention mechanisms in 2015 revolutionized language +understanding, leading to advancements in controllability and +context-awareness. Subsequent breakthroughs in reinforcement learning +techniques further enhanced prompt engineering, addressing issues like exposure +bias and biases in generated text. We examine the significant contributions in +2018 and 2019, focusing on fine-tuning strategies, control codes, and +template-based generation. The paper also discusses the growing importance of +fairness, human-AI collaboration, and low-resource adaptation. In 2020 and +2021, contextual prompting and transfer learning gained prominence, while 2022 +and 2023 witnessed the emergence of advanced techniques like unsupervised +pre-training and novel reward shaping. Throughout the paper, we reference +specific research studies that exemplify the impact of various developments on +prompt engineering. The journey of prompt engineering continues, with ethical +considerations being paramount for the responsible and inclusive future of AI +systems. +" +A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models,Jindong Gu,http://arxiv.org/pdf/2307.12980v1.pdf,2023-07-24,['cs.cv'],2307.12980v1.pdf," Prompt engineering is a technique that involves augmenting a large +pre-trained model with task-specific hints, known as prompts, to adapt the +model to new tasks. Prompts can be created manually as natural language +instructions or generated automatically as either natural language instructions +or vector representations. Prompt engineering enables the ability to perform +predictions based solely on prompts without updating model parameters, and the +easier application of large pre-trained models in real-world tasks. In past +years, Prompt engineering has been well-studied in natural language processing. +Recently, it has also been intensively studied in vision-language modeling. +However, there is currently a lack of a systematic overview of prompt +engineering on pre-trained vision-language models. 
This paper aims to provide a +comprehensive survey of cutting-edge research in prompt engineering on three +types of vision-language models: multimodal-to-text generation models (e.g. +Flamingo), image-text matching models (e.g. CLIP), and text-to-image generation +models (e.g. Stable Diffusion). For each type of model, a brief model summary, +prompting methods, prompting-based applications, and the corresponding +responsibility and integrity issues are summarized and discussed. Furthermore, +the commonalities and differences between prompting on vision-language models, +language models, and vision models are also discussed. The challenges, future +directions, and research opportunities are summarized to foster future research +on this topic. +" +Prompt Engineering and Calibration for Zero-Shot Commonsense Reasoning,Chenkai Ma,http://arxiv.org/pdf/2304.06962v1.pdf,2023-04-14,"['cs.cl', 'cs.ai']",2304.06962v1.pdf," Prompt engineering and calibration make large language models excel at +reasoning tasks, including multiple choice commonsense reasoning. From a +practical perspective, we investigate and evaluate these strategies on smaller +language models. Through experiments on five commonsense reasoning benchmarks, +we find that each strategy favors certain models, but their joint effects are +mostly negative. +" +Just Tell Me: Prompt Engineering in Business Process Management,Kiran Busch,http://arxiv.org/pdf/2304.07183v1.pdf,2023-04-14,"['cs.ai', 'cs.cl', 'cs.lg']",2304.07183v1.pdf," GPT-3 and several other language models (LMs) can effectively address various +natural language processing (NLP) tasks, including machine translation and text +summarization. Recently, they have also been successfully employed in the +business process management (BPM) domain, e.g., for predictive process +monitoring and process extraction from text. This, however, typically requires +fine-tuning the employed LM, which, among others, necessitates large amounts of +suitable training data. A possible solution to this problem is the use of +prompt engineering, which leverages pre-trained LMs without fine-tuning them. +Recognizing this, we argue that prompt engineering can help bring the +capabilities of LMs to BPM research. We use this position paper to develop a +research agenda for the use of prompt engineering for BPM research by +identifying the associated potentials and challenges. +" +Revisiting Prompt Engineering via Declarative Crowdsourcing,Aditya G. Parameswaran,http://arxiv.org/pdf/2308.03854v1.pdf,2023-08-07,"['cs.db', 'cs.ai', 'cs.hc', 'cs.lg']",2308.03854v1.pdf," Large language models (LLMs) are incredibly powerful at comprehending and +generating data in the form of text, but are brittle and error-prone. There has +been an advent of toolkits and recipes centered around so-called prompt +engineering-the process of asking an LLM to do something via a series of +prompts. However, for LLM-powered data processing workflows, in particular, +optimizing for quality, while keeping cost bounded, is a tedious, manual +process. We put forth a vision for declarative prompt engineering. We view LLMs +like crowd workers and leverage ideas from the declarative crowdsourcing +literature-including leveraging multiple prompting strategies, ensuring +internal consistency, and exploring hybrid-LLM-non-LLM approaches-to make +prompt engineering a more principled process. 
Preliminary case studies on +sorting, entity resolution, and imputation demonstrate the promise of our +approach +" +How understanding large language models can inform their use in physics education,Giulia Polverini,http://arxiv.org/pdf/2309.12074v1.pdf,2023-09-21,['physics.ed-ph'],2309.12074v1.pdf," The paper aims to fulfil three main functions: (1) to serve as an +introduction for the physics education community to the functioning of Large +Language Models (LLMs), (2) to present a series of illustrative examples +demonstrating how prompt-engineering techniques can impact LLMs performance on +conceptual physics tasks and (3) to discuss potential implications of the +understanding of LLMs and prompt engineering for physics teaching and learning. +We first summarise existing research on the performance of a popular LLM-based +chatbot (ChatGPT) on physics tasks. We then give a basic account of how LLMs +work, illustrate essential features of their functioning, and discuss their +strengths and limitations. Equipped with this knowledge, we discuss some +challenges with generating useful output with ChatGPT-4 in the context of +introductory physics, paying special attention to conceptual questions and +problems. We then provide a condensed overview of relevant literature on prompt +engineering and demonstrate through illustrative examples how selected +prompt-engineering techniques can be employed to improve ChatGPT-4's output on +conceptual introductory physics problems. Qualitatively studying these examples +provides additional insights into ChatGPT's functioning and its utility in +physics problem solving. Finally, we consider how insights from the paper can +inform the use of LMMs in the teaching and learning of physics. +" +Data-Driven Approach for Formality-Sensitive Machine Translation: Language-Specific Handling and Synthetic Data Generation,Seugnjun Lee,http://arxiv.org/pdf/2306.14514v2.pdf,2023-06-26,"['cs.cl', 'cs.ai']",2306.14514v2.pdf," In this paper, we introduce a data-driven approach for Formality-Sensitive +Machine Translation (FSMT) that caters to the unique linguistic properties of +four target languages. Our methodology centers on two core strategies: 1) +language-specific data handling, and 2) synthetic data generation using +large-scale language models and empirical prompt engineering. This approach +demonstrates a considerable improvement over the baseline, highlighting the +effectiveness of data-centric techniques. Our prompt engineering strategy +further improves performance by producing superior synthetic translation +examples. +" +Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering,Edward Junprung,http://arxiv.org/pdf/2308.07411v1.pdf,2023-08-14,"['cs.ai', 'cs.ma']",2308.07411v1.pdf," The final frontier for simulation is the accurate representation of complex, +real-world social systems. While agent-based modeling (ABM) seeks to study the +behavior and interactions of agents within a larger system, it is unable to +faithfully capture the full complexity of human-driven behavior. Large language +models (LLMs), like ChatGPT, have emerged as a potential solution to this +bottleneck by enabling researchers to explore human-driven interactions in +previously unimaginable ways. Our research investigates simulations of human +interactions using LLMs. Through prompt engineering, inspired by Park et al. 
+(2023), we present two simulations of believable proxies of human behavior: a +two-agent negotiation and a six-agent murder mystery game. +" +Large Language Models Are Human-Level Prompt Engineers,Yongchao Zhou,http://arxiv.org/pdf/2211.01910v2.pdf,2022-11-03,"['cs.lg', 'cs.ai', 'cs.cl']",2211.01910v2.pdf," By conditioning on natural language instructions, large language models +(LLMs) have displayed impressive capabilities as general-purpose computers. +However, task performance depends significantly on the quality of the prompt +used to steer the model, and most effective prompts have been handcrafted by +humans. Inspired by classical program synthesis and the human approach to +prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic +instruction generation and selection. In our method, we treat the instruction +as the ""program,"" optimized by searching over a pool of instruction candidates +proposed by an LLM in order to maximize a chosen score function. To evaluate +the quality of the selected instruction, we evaluate the zero-shot performance +of another LLM following the selected instruction. Experiments on 24 NLP tasks +show that our automatically generated instructions outperform the prior LLM +baseline by a large margin and achieve better or comparable performance to the +instructions generated by human annotators on 19/24 tasks. We conduct extensive +qualitative and quantitative analyses to explore the performance of APE. We +show that APE-engineered prompts can be applied to steer models toward +truthfulness and/or informativeness, as well as to improve few-shot learning +performance by simply prepending them to standard in-context learning prompts. +Please check out our webpage at +https://sites.google.com/view/automatic-prompt-engineer. +" +Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales,Martin Ruskov,http://arxiv.org/pdf/2302.08961v2.pdf,2023-02-17,"['cs.cl', 'cs.ai', 'cs.hc', 'i.2']",2302.08961v2.pdf," The quality of text-to-image generation is continuously improving, yet the +boundaries of its applicability are still unclear. In particular, refinement of +the text input with the objective of achieving better results - commonly called +prompt engineering - so far seems to have not been geared towards work with +pre-existing texts. We investigate whether text-to-image generation and prompt +engineering could be used to generate basic illustrations of popular +fairytales. Using Midjourney v4, we engage in action research with a dual aim: +to attempt to generate 5 believable illustrations for each of 5 popular +fairytales, and to define a prompt engineering process that starts from a +pre-existing text and arrives at an illustration of it. We arrive at a +tentative 4-stage process: i) initial prompt, ii) composition adjustment, iii) +style refinement, and iv) variation selection. We also discuss three reasons +why the generation model struggles with certain illustrations: difficulties +with counts, bias from stereotypical configurations and inability to depict +overly fantastic situations. Our findings are not limited to the specific +generation model and are intended to be generalisable to future ones. +" +A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT,Jules White,http://arxiv.org/pdf/2302.11382v1.pdf,2023-02-21,"['cs.se', 'cs.ai']",2302.11382v1.pdf," Prompt engineering is an increasingly important skill set needed to converse +effectively with large language models (LLMs), such as ChatGPT. 
Prompts are +instructions given to an LLM to enforce rules, automate processes, and ensure +specific qualities (and quantities) of generated output. Prompts are also a +form of programming that can customize the outputs and interactions with an +LLM. This paper describes a catalog of prompt engineering techniques presented +in pattern form that have been applied to solve common problems when conversing +with LLMs. Prompt patterns are a knowledge transfer method analogous to +software patterns since they provide reusable solutions to common problems +faced in a particular context, i.e., output generation and interaction when +working with LLMs. This paper provides the following contributions to research +on prompt engineering that apply LLMs to automate software development tasks. +First, it provides a framework for documenting patterns for structuring prompts +to solve a range of problems so that they can be adapted to different domains. +Second, it presents a catalog of patterns that have been applied successfully +to improve the outputs of LLM conversations. Third, it explains how prompts can +be built from multiple patterns and illustrates prompt patterns that benefit +from combination with other prompt patterns. +" +Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models,Fobo Shi,http://arxiv.org/pdf/2306.03799v1.pdf,2023-06-06,['cs.cl'],2306.03799v1.pdf," Prompt engineering is an essential technique for enhancing the abilities of +large language models (LLMs) by providing explicit and specific instructions. +It enables LLMs to excel in various tasks, such as arithmetic reasoning, +question answering, summarization, relation extraction, machine translation, +and sentiment analysis. Researchers have been actively exploring different +prompt engineering strategies, such as Chain of Thought (CoT), Zero-CoT, and +In-context learning. However, an unresolved problem arises from the fact that +current approaches lack a solid theoretical foundation for determining optimal +prompts. To address this issue in prompt engineering, we propose a new and +effective approach called Prompt Space. Our methodology utilizes text +embeddings to obtain basis vectors by matrix decomposition, and then constructs +a space for representing all prompts. Prompt Space significantly outperforms +state-of-the-art prompt paradigms on ten public reasoning benchmarks. Notably, +without the help of the CoT method and the prompt ""Let's think step by step"", +Prompt Space shows superior performance over the few-shot method. Overall, our +approach provides a robust and fundamental theoretical framework for selecting +simple and effective prompts. This advancement marks a significant step towards +improving prompt engineering for a wide variety of applications in LLMs. +" +An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing,Sonish Sivarajkumar,http://arxiv.org/pdf/2309.08008v1.pdf,2023-09-14,"['cs.cl', 'cs.ai']",2309.08008v1.pdf," Large language models (LLMs) have shown remarkable capabilities in Natural +Language Processing (NLP), especially in domains where labeled data is scarce +or expensive, such as clinical domain. However, to unlock the clinical +knowledge hidden in these LLMs, we need to design effective prompts that can +guide them to perform specific clinical NLP tasks without any task-specific +training data. 
This is known as in-context learning, which is an art and +science that requires understanding the strengths and weaknesses of different +LLMs and prompt engineering approaches. In this paper, we present a +comprehensive and systematic experimental study on prompt engineering for five +clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence +Extraction, Coreference Resolution, Medication Status Extraction, and +Medication Attribute Extraction. We assessed the prompts proposed in recent +literature, including simple prefix, simple cloze, chain of thought, and +anticipatory prompts, and introduced two new types of prompts, namely heuristic +prompting and ensemble prompting. We evaluated the performance of these prompts +on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted +zero-shot prompting with few-shot prompting, and provide novel insights and +guidelines for prompt engineering for LLMs in clinical NLP. To the best of our +knowledge, this is one of the first works on the empirical evaluation of +different prompt engineering approaches for clinical NLP in this era of +generative AI, and we hope that it will inspire and inform future research in +this area. +" +Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks,Jiho Shin,http://arxiv.org/pdf/2310.10508v1.pdf,2023-10-11,['cs.se'],2310.10508v1.pdf," In this paper, we investigate the effectiveness of state-of-the-art LLM, +i.e., GPT-4, with three different prompting engineering techniques (i.e., basic +prompting, in-context learning, and task-specific prompting) against 18 +fine-tuned LLMs on three typical ASE tasks, i.e., code generation, code +summarization, and code translation. Our quantitative analysis of these +prompting strategies suggests that prompt engineering GPT-4 cannot necessarily +and significantly outperform fine-tuning smaller/older LLMs in all three tasks. +For comment generation, GPT-4 with the best prompting strategy (i.e., +task-specific prompt) had outperformed the first-ranked fine-tuned model by +8.33% points on average in BLEU. However, for code generation, the first-ranked +fine-tuned model outperforms GPT-4 with best prompting by 16.61% and 28.3% +points, on average in BLEU. For code translation, GPT-4 and fine-tuned +baselines tie as they outperform each other on different translation tasks. To +explore the impact of different prompting strategies, we conducted a user study +with 27 graduate students and 10 industry practitioners. From our qualitative +analysis, we find that the GPT-4 with conversational prompts (i.e., when a +human provides feedback and instructions back and forth with a model to achieve +best results) showed drastic improvement compared to GPT-4 with automatic +prompting strategies. Moreover, we observe that participants tend to request +improvements, add more context, or give specific instructions as conversational +prompts, which goes beyond typical and generic prompting strategies. Our study +suggests that, at its current state, GPT-4 with conversational prompting has +great potential for ASE tasks, but fully automated prompt engineering with no +human in the loop requires more study and improvement. 
+" +An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels,Taylor Sorensen,http://arxiv.org/pdf/2203.11364v1.pdf,2022-03-21,"['cs.cl', 'cs.lg']",2203.11364v1.pdf," Pre-trained language models derive substantial linguistic and factual +knowledge from the massive corpora on which they are trained, and prompt +engineering seeks to align these models to specific tasks. Unfortunately, +existing prompt engineering methods require significant amounts of labeled +data, access to model parameters, or both. We introduce a new method for +selecting prompt templates \textit{without labeled examples} and +\textit{without direct access to the model}. Specifically, over a set of +candidate templates, we choose the template that maximizes the mutual +information between the input and the corresponding model output. Across 8 +datasets representing 7 distinct NLP tasks, we show that when a template has +high mutual information, it also has high accuracy on the task. On the largest +model, selecting prompts with our method gets 90\% of the way from the average +prompt accuracy to the best prompt accuracy and requires no ground truth +labels. +" +Unsupervised Prompt Learning for Vision-Language Models,Tony Huang,http://arxiv.org/pdf/2204.03649v2.pdf,2022-04-07,['cs.cv'],2204.03649v2.pdf," Contrastive vision-language models like CLIP have shown great progress in +transfer learning. In the inference stage, the proper text description, also +known as prompt, needs to be carefully designed to correctly classify the given +images. In order to avoid laborious prompt engineering, recent works such as +CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for +downstream image recognition tasks on a small set of labeled data. Though +promising improvements are achieved, requiring labeled data from the target +datasets may restrict the scalability. In this paper, we explore a different +scenario, in which the labels of the target datasets are unprovided, and we +present an unsupervised prompt learning (UPL) approach to avoid prompt +engineering while simultaneously improving transfer performance of CLIP-like +vision-language models. As far as we know, UPL is the first work to introduce +unsupervised learning into prompt learning. Experimentally, our UPL outperforms +original CLIP with prompt engineering on ImageNet as well as other 10 datasets. +An enhanced version of UPL is even competitive with the 8-shot CoOp and the +8-shot TIP-Adapter on most datasets. Code and models are available at +https://github.com/tonyhuang2022/UPL. +" +ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing,Ian Arawjo,http://arxiv.org/pdf/2309.09128v1.pdf,2023-09-17,"['cs.hc', 'cs.ai', 'h.5.2; i.2']",2309.09128v1.pdf," Evaluating outputs of large language models (LLMs) is challenging, requiring +making -- and making sense of -- many responses. Yet tools that go beyond basic +prompting tend to require knowledge of programming APIs, focus on narrow +domains, or are closed-source. We present ChainForge, an open-source visual +toolkit for prompt engineering and on-demand hypothesis testing of text +generation LLMs. ChainForge provides a graphical interface for comparison of +responses across models and prompt variations. Our system was designed to +support three tasks: model selection, prompt template design, and hypothesis +testing (e.g., auditing). We released ChainForge early in its development and +iterated on its design with academics and online users. 
Through in-lab and +interview studies, we find that a range of people could use ChainForge to +investigate hypotheses that matter to them, including in real-world settings. +We identify three modes of prompt engineering and LLM hypothesis testing: +opportunistic exploration, limited evaluation, and iterative refinement. +" +CoPrompt: Supporting Prompt Sharing and Referring in Collaborative Natural Language Programming,Felicia Li Feng,http://arxiv.org/pdf/2310.09235v1.pdf,2023-10-13,['cs.hc'],2310.09235v1.pdf," Natural language (NL) programming has become more approachable due to the +powerful code-generation capability of large language models (LLMs). This shift +to using NL to program enhances collaborative programming by reducing +communication barriers and context-switching among programmers from varying +backgrounds. However, programmers may face challenges during prompt engineering +in a collaborative setting as they need to actively keep aware of their +collaborators' progress and intents. In this paper, we aim to investigate ways +to assist programmers' prompt engineering in a collaborative context. We first +conducted a formative study to understand the workflows and challenges of +programmers when using NL for collaborative programming. Based on our findings, +we implemented a prototype, CoPrompt, to support collaborative prompt +engineering by providing referring, requesting, sharing, and linking +mechanisms. Our user study indicates that CoPrompt assists programmers in +comprehending collaborators' prompts and building on their collaborators' work, +reducing repetitive updates and communication costs. +" +Prompt-Engineering and Transformer-based Question Generation and Evaluation,Rubaba Amyeen,http://arxiv.org/pdf/2310.18867v1.pdf,2023-10-29,"['cs.cl', 'cs.ai']",2310.18867v1.pdf," Question generation has numerous applications in the educational context. +Question generation can prove helpful for students when reviewing content and +testing themselves. Furthermore, a question generation model can aid teachers +by lessening the burden of creating assessments and other practice material. +This paper aims to find the best method to generate questions from textual data +through a transformer model and prompt engineering. In this research, we +finetuned a pretrained distilBERT model on the SQuAD question answering dataset +to generate questions. In addition to training a transformer model, prompt +engineering was applied to generate questions effectively using the LLaMA +model. The generated questions were compared against the baseline questions in +the SQuAD dataset to evaluate the effectiveness of four different prompts. All +four prompts demonstrated over 60% similarity on average. Of the +prompt-generated questions, 30% achieved a high similarity score greater than +70%. +" +A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models,James Urquhart Allingham,http://arxiv.org/pdf/2302.06235v2.pdf,2023-02-13,"['cs.lg', 'cs.cv', 'stat.ml']",2302.06235v2.pdf," Contrastively trained text-image models have the remarkable ability to +perform zero-shot classification, that is, classifying previously unseen images +into categories that the model has never been explicitly trained to identify. +However, these zero-shot classifiers need prompt engineering to achieve high +accuracy. Prompt engineering typically requires hand-crafting a set of prompts +for individual downstream tasks. 
In this work, we aim to automate this prompt +engineering and improve zero-shot accuracy through prompt ensembling. In +particular, we ask ""Given a large pool of prompts, can we automatically score +the prompts and ensemble those that are most suitable for a particular +downstream dataset, without needing access to labeled validation data?"". We +demonstrate that this is possible. In doing so, we identify several pathologies +in a naive prompt scoring method where the score can be easily overconfident +due to biases in pre-training and test data, and we propose a novel prompt +scoring method that corrects for the biases. Using our proposed scoring method +to create a weighted average prompt ensemble, our method outperforms equal +average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its +variants, and 11 fine-grained classification benchmarks, all while being fully +automatic, optimization-free, and not requiring access to labeled validation +data. +" +Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification,Benjamin Clavié,http://arxiv.org/pdf/2303.07142v3.pdf,2023-03-13,['cs.cl'],2303.07142v3.pdf," This case study investigates the task of job classification in a real-world +setting, where the goal is to determine whether an English-language job posting +is appropriate for a graduate or entry-level position. We explore multiple +approaches to text classification, including supervised approaches such as +traditional models like Support Vector Machines (SVMs) and state-of-the-art +deep learning methods such as DeBERTa. We compare them with Large Language +Models (LLMs) used in both few-shot and zero-shot classification settings. To +accomplish this task, we employ prompt engineering, a technique that involves +designing prompts to guide the LLMs towards the desired output. Specifically, +we evaluate the performance of two commercially available state-of-the-art +GPT-3.5-based language models, text-davinci-003 and gpt-3.5-turbo. We also +conduct a detailed analysis of the impact of different aspects of prompt +engineering on the model's performance. Our results show that, with a +well-designed prompt, a zero-shot gpt-3.5-turbo classifier outperforms all +other models, achieving a 6% increase in Precision@95% Recall compared to the +best supervised approach. Furthermore, we observe that the wording of the +prompt is a critical factor in eliciting the appropriate ""reasoning"" in the +model, and that seemingly minor aspects of the prompt significantly affect the +model's performance. +" +Simulating H.P. Lovecraft horror literature with the ChatGPT large language model,Eduardo C. Garrido-Merchán,http://arxiv.org/pdf/2305.03429v1.pdf,2023-05-05,['cs.cl'],2305.03429v1.pdf," In this paper, we present a novel approach to simulating H.P. Lovecraft's +horror literature using the ChatGPT large language model, specifically the +GPT-4 architecture. Our study aims to generate text that emulates Lovecraft's +unique writing style and themes, while also examining the effectiveness of +prompt engineering techniques in guiding the model's output. To achieve this, +we curated a prompt containing several specialized literature references and +employed advanced prompt engineering methods. We conducted an empirical +evaluation of the generated text by administering a survey to a sample of +undergraduate students. 
Utilizing statistical hypothesis testing, we assessed +the students' ability to distinguish between genuine Lovecraft works and those +generated by our model. Our findings demonstrate that the participants were +unable to reliably differentiate between the two, indicating the effectiveness +of the GPT-4 model and our prompt engineering techniques in emulating +Lovecraft's literary style. In addition to presenting the GPT model's +capabilities, this paper provides a comprehensive description of its underlying +architecture and offers a comparative analysis with related work that simulates +other notable authors and philosophers, such as Dennett. By exploring the +potential of large language models in the context of literary emulation, our +study contributes to the body of research on the applications and limitations +of these models in various creative domains. +" +CXR-LLaVA: Multimodal Large Language Model for Interpreting Chest X-ray Images,Seowoo Lee,http://arxiv.org/pdf/2310.18341v2.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.18341v2.pdf," Purpose: Recent advancements in large language models (LLMs) have expanded +their capabilities in a multimodal fashion, potentially replicating the image +interpretation of human radiologists. This study aimed to develop an open-source +multimodal large language model for interpreting chest X-ray images +(CXR-LLaVA). We also examined the effect of prompt engineering and model +parameters such as temperature and nucleus sampling. + Materials and Methods: For training, we collected 659,287 publicly available +CXRs: 417,336 CXRs had labels for certain radiographic abnormalities (dataset +1); 241,951 CXRs provided free-text radiology reports (dataset 2). After +pre-training the Resnet50 as an image encoder, the contrastive language-image +pre-training was used to align CXRs and corresponding radiographic +abnormalities. Then, the Large Language Model Meta AI-2 was fine-tuned using +dataset 2, which were refined using GPT-4, generating various question +answering scenarios. The code can be found at +https://github.com/ECOFRI/CXR_LLaVA. + Results: In the test set, we observed that the model's performance fluctuated +based on its parameters. On average, it achieved an F1 score of 0.34 for five +pathologic findings (atelectasis, cardiomegaly, consolidation, edema, and +pleural effusion), which was improved to 0.46 through prompt engineering. In +the independent set, the model achieved an average F1 score of 0.30 for the +same pathologic findings. Notably, for the pediatric chest radiograph dataset, +which was unseen during training, the model differentiated abnormal radiographs +with an F1 score ranging from 0.84 to 0.85. + Conclusion: CXR-LLaVA demonstrates promising potential in CXR interpretation. +Both prompt engineering and model parameter adjustments can play pivotal roles +in interpreting CXRs. +" +A Taxonomy of Prompt Modifiers for Text-To-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2204.13988v3.pdf,2022-04-20,"['cs.mm', 'cs.cl', 'cs.hc', 'h.5; h.m; j.5']",2204.13988v3.pdf," Text-to-image generation has seen an explosion of interest since 2021. Today, +beautiful and intriguing digital images and artworks can be synthesized from +textual inputs (""prompts"") with deep generative models. Online communities +around text-to-image generation and AI generated art have quickly emerged. This +paper identifies six types of prompt modifiers used by practitioners in the +online community based on a 3-month ethnographic study. 
The novel taxonomy of +prompt modifiers provides researchers a conceptual starting point for +investigating the practice of text-to-image generation, but may also help +practitioners of AI generated art improve their images. We further outline how +prompt modifiers are applied in the practice of ""prompt engineering."" We +discuss research opportunities of this novel creative practice in the field of +Human-Computer Interaction (HCI). The paper concludes with a discussion of +broader implications of prompt engineering from the perspective of Human-AI +Interaction (HAI) in future applications beyond the use case of text-to-image +generation and AI generated art. +" +What GPT Knows About Who is Who,Xiaohan Yang,http://arxiv.org/pdf/2205.07407v1.pdf,2022-05-16,"['cs.cl', 'cs.lg']",2205.07407v1.pdf," Coreference resolution -- which is a crucial task for understanding discourse +and language at large -- has yet to witness widespread benefits from large +language models (LLMs). Moreover, coreference resolution systems largely rely +on supervised labels, which are highly expensive and difficult to annotate, +thus making it ripe for prompt engineering. In this paper, we introduce a +QA-based prompt-engineering method and discern \textit{generative}, pre-trained +LLMs' abilities and limitations toward the task of coreference resolution. Our +experiments show that GPT-2 and GPT-Neo can return valid answers, but that +their capabilities to identify coreferent mentions are limited and +prompt-sensitive, leading to inconsistent results. +" +Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements,Conrad Borchers,http://arxiv.org/pdf/2205.11374v1.pdf,2022-05-23,"['cs.cl', 'cs.ai']",2205.11374v1.pdf," The growing capability and availability of generative language models has +enabled a wide range of new downstream tasks. Academic research has identified, +quantified and mitigated biases present in language models but is rarely +tailored to downstream tasks where wider impact on individuals and society can +be felt. In this work, we leverage one popular generative language model, +GPT-3, with the goal of writing unbiased and realistic job advertisements. We +first assess the bias and realism of zero-shot generated advertisements and +compare them to real-world advertisements. We then evaluate prompt-engineering +and fine-tuning as debiasing methods. We find that prompt-engineering with +diversity-encouraging prompts gives no significant improvement to bias, nor +realism. Conversely, fine-tuning, especially on unbiased real advertisements, +can improve realism and reduce bias. +" +Arguments to Key Points Mapping with Prompt-based Learning,Ahnaf Mozib Samin,http://arxiv.org/pdf/2211.14995v1.pdf,2022-11-28,['cs.cl'],2211.14995v1.pdf," Handling and digesting a huge amount of information in an efficient manner +has been a long-term demand in modern society. Some solutions to map key points +(short textual summaries capturing essential information and filtering +redundancies) to a large number of arguments/opinions have been provided +recently (Bar-Haim et al., 2020). To complement the full picture of the +argument-to-keypoint mapping task, we mainly propose two approaches in this +paper. The first approach is to incorporate prompt engineering for fine-tuning +the pre-trained language models (PLMs). 
The second approach utilizes +prompt-based learning in PLMs to generate intermediary texts, which are then +combined with the original argument-keypoint pairs and fed as inputs to a +classifier, thereby mapping them. Furthermore, we extend the experiments to +cross/in-domain to conduct an in-depth analysis. In our evaluation, we find +that i) using prompt engineering in a more direct way (Approach 1) can yield +promising results and improve the performance; ii) Approach 2 performs +considerably worse than Approach 1 due to the negation issue of the PLM. +" +Legal Prompt Engineering for Multilingual Legal Judgement Prediction,Dietrich Trautmann,http://arxiv.org/pdf/2212.02199v1.pdf,2022-12-05,"['cs.cl', 'cs.ai']",2212.02199v1.pdf," Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide and +assist a large language model (LLM) with performing a natural legal language +processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal +documents for the Legal Judgement Prediction (LJP) task. We investigate the +performance of zero-shot LPE for given facts in case-texts from the European +Court of Human Rights (in English) and the Federal Supreme Court of Switzerland +(in German, French and Italian). Our results show that zero-shot LPE is better +compared to the baselines, but it still falls short compared to current state +of the art supervised approaches. Nevertheless, the results are important, +since there was 1) no explicit domain-specific data used - so we show that the +transfer to the legal domain is possible for general-purpose LLMs, and 2) the +LLMs were directly applied without any further training or fine-tuning - which +in turn saves immensely in terms of additional computational costs. +" +The Infinite Index: Information Retrieval on Generative Text-To-Image Models,Niklas Deckers,http://arxiv.org/pdf/2212.07476v2.pdf,2022-12-14,"['cs.ir', 'cs.cl', 'cs.cv']",2212.07476v2.pdf," Conditional generative models such as DALL-E and Stable Diffusion generate +images based on a user-defined text, the prompt. Finding and refining prompts +that produce a desired image has become the art of prompt engineering. +Generative models do not provide a built-in retrieval model for a user's +information need expressed through prompts. In light of an extensive literature +review, we reframe prompt engineering for generative models as interactive +text-based retrieval on a novel kind of ""infinite index"". We apply these +insights for the first time in a case study on image generation for game design +with an expert. Finally, we envision how active learning may help to guide the +retrieval of generated images. +" +"Artificial Intelligence for Health Message Generation: Theory, Method, and an Empirical Study Using Prompt Engineering",Sue Lim,http://arxiv.org/pdf/2212.07507v1.pdf,2022-12-14,['cs.cl'],2212.07507v1.pdf," This study introduces and examines the potential of an AI system to generate +health awareness messages. The topic of folic acid, a vitamin that is critical +during pregnancy, served as a test case. Using prompt engineering, we generated +messages that could be used to raise awareness and compared them to retweeted +human-generated messages via computational and human evaluation methods. The +system was easy to use and prolific, and computational analyses revealed that +the AI-generated messages were on par with human-generated ones in terms of +sentiment, reading ease, and semantic content. 
Also, the human evaluation study +showed that AI-generated messages ranked higher in message quality and clarity. +We discuss the theoretical, practical, and ethical implications of these +results. +" +What does CLIP know about a red circle? Visual prompt engineering for VLMs,Aleksandar Shtedritski,http://arxiv.org/pdf/2304.06712v2.pdf,2023-04-13,['cs.cv'],2304.06712v2.pdf," Large-scale Vision-Language Models, such as CLIP, learn powerful image-text +representations that have found numerous applications, from zero-shot +classification to text-to-image generation. Despite that, their capabilities +for solving novel discriminative tasks via prompting fall behind those of large +language models, such as GPT-3. Here we explore the idea of visual prompt +engineering for solving computer vision tasks beyond classification by editing +in image space instead of text. In particular, we discover an emergent ability +of CLIP, where, by simply drawing a red circle around an object, we can direct +the model's attention to that region, while also maintaining global +information. We show the power of this simple approach by achieving +state-of-the-art in zero-shot referring expressions comprehension and strong +performance in keypoint localization tasks. Finally, we draw attention to some +potential ethical concerns of large language-vision models. +" +Prompt Engineering for Transformer-based Chemical Similarity Search Identifies Structurally Distinct Functional Analogues,Clayton W. Kosonocky,http://arxiv.org/pdf/2305.16330v1.pdf,2023-05-17,"['physics.chem-ph', 'cs.lg']",2305.16330v1.pdf," Chemical similarity searches are widely used in-silico methods for +identifying new drug-like molecules. These methods have historically relied on +structure-based comparisons to compute molecular similarity. Here, we use a +chemical language model to create a vector-based chemical search. We extend +implementations by creating a prompt engineering strategy that utilizes two +different chemical string representation algorithms: one for the query and the +other for the database. We explore this method by reviewing the search results +from five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine, +lysergic acid diethylamide, and fentanyl) and three dye-like query molecules +(acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that this +novel method identifies molecules that are functionally similar to the query, +indicated by the associated patent literature, and that many of these molecules +are structurally distinct from the query, making them unlikely to be found with +traditional chemical similarity search methods. This method may aid in the +discovery of novel structural classes of molecules that achieve target +functionality. +" +Submodular Minimax Optimization: Finding Effective Sets,Loay Mualem,http://arxiv.org/pdf/2305.16903v1.pdf,2023-05-26,"['cs.lg', 'cs.dm', 'math.oc', '68r05 (primary) 90c26, 90c20, 68t20, 68w40 (secondary)', 'g.2.1; i.2.m; f.2.2']",2305.16903v1.pdf," Despite the rich existing literature about minimax optimization in continuous +settings, only very partial results of this kind have been obtained for +combinatorial settings. In this paper, we fill this gap by providing a +characterization of submodular minimax optimization, the problem of finding a +set (for either the min or the max player) that is effective against every +possible response. We show when and under what conditions we can find such +sets. 
We also demonstrate how minimax submodular optimization provides robust +solutions for downstream machine learning applications such as (i) efficient +prompt engineering for question answering, (ii) prompt engineering for dialog +state tracking, (iii) identifying robust waiting locations for ride-sharing, +(iv) ride-share difficulty kernelization, and (v) finding adversarial images. +Our experiments demonstrate that our proposed algorithms consistently +outperform other baselines. +" +Unsupervised Human Activity Recognition through Two-stage Prompting with ChatGPT,Qingxin Xia,http://arxiv.org/pdf/2306.02140v1.pdf,2023-06-03,"['cs.hc', 'cs.cl']",2306.02140v1.pdf," Wearable sensor devices, which offer the advantage of recording daily objects +used by a person while performing an activity, enable the feasibility of +unsupervised Human Activity Recognition (HAR). Unfortunately, previous +unsupervised approaches using the usage sequence of objects usually require a +proper description of activities manually prepared by humans. Instead, we +leverage the knowledge embedded in a Large Language Model (LLM) of ChatGPT. +Because the sequence of objects robustly characterizes the activity identity, +it is possible that ChatGPT already learned the association between activities +and objects from existing contexts. However, previous prompt engineering for +ChatGPT exhibits limited generalization ability when dealing with a list of +words (i.e., sequence of objects) due to the similar weighting assigned to each +word in the list. In this study, we propose a two-stage prompt engineering, +which first guides ChatGPT to generate activity descriptions associated with +objects while emphasizing important objects for distinguishing similar +activities; then outputs activity classes and explanations for enhancing the +contexts that are helpful for HAR. To the best of our knowledge, this is the +first study that utilizes ChatGPT to recognize activities using objects in an +unsupervised manner. We conducted our approach on three datasets and +demonstrated the state-of-the-art performance. +" +User-friendly Image Editing with Minimal Text Input: Leveraging Captioning and Injection Techniques,Sunwoo Kim,http://arxiv.org/pdf/2306.02717v1.pdf,2023-06-05,['cs.cv'],2306.02717v1.pdf," Recent text-driven image editing in diffusion models has shown remarkable +success. However, the existing methods assume that the user's description +sufficiently grounds the contexts in the source image, such as objects, +background, style, and their relations. This assumption is unsuitable for +real-world applications because users have to manually engineer text prompts to +find optimal descriptions for different images. From the users' standpoint, +prompt engineering is a labor-intensive process, and users prefer to provide a +target word for editing instead of a full sentence. To address this problem, we +first demonstrate the importance of a detailed text description of the source +image, by dividing prompts into three categories based on the level of semantic +details. Then, we propose simple yet effective methods by combining prompt +generation frameworks, thereby making the prompt engineering process more +user-friendly. Extensive qualitative and quantitative experiments demonstrate +the importance of prompts in text-driven image editing and our method is +comparable to ground-truth prompts. 
+" +PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation,Yingchaojie Feng,http://arxiv.org/pdf/2307.09036v2.pdf,2023-07-18,"['cs.ai', 'cs.hc']",2307.09036v2.pdf," Generative text-to-image models have gained great popularity among the public +for their powerful capability to generate high-quality images based on natural +language prompts. However, developing effective prompts for desired images can +be challenging due to the complexity and ambiguity of natural language. This +research proposes PromptMagician, a visual analysis system that helps users +explore the image results and refine the input prompts. The backbone of our +system is a prompt recommendation model that takes user prompts as input, +retrieves similar prompt-image pairs from DiffusionDB, and identifies special +(important and relevant) prompt keywords. To facilitate interactive prompt +refinement, PromptMagician introduces a multi-level visualization for the +cross-modal embedding of the retrieved images and recommended keywords, and +supports users in specifying multiple criteria for personalized exploration. +Two usage scenarios, a user study, and expert interviews demonstrate the +effectiveness and usability of our system, suggesting it facilitates prompt +engineering and improves the creativity support of the generative text-to-image +model. +" +Is GPT a Computational Model of Emotion? Detailed Analysis,Ala N. Tak,http://arxiv.org/pdf/2307.13779v1.pdf,2023-07-25,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.hc']",2307.13779v1.pdf," This paper investigates the emotional reasoning abilities of the GPT family +of large language models via a component perspective. The paper first examines +how the model reasons about autobiographical memories. Second, it +systematically varies aspects of situations to impact emotion intensity and +coping tendencies. Even without the use of prompt engineering, it is shown that +GPT's predictions align significantly with human-provided appraisals and +emotional labels. However, GPT faces difficulties predicting emotion intensity +and coping responses. GPT-4 showed the highest performance in the initial study +but fell short in the second, despite providing superior results after minor +prompt engineering. This assessment brings up questions on how to effectively +employ the strong points and address the weak areas of these models, +particularly concerning response variability. These studies underscore the +merits of evaluating models from a componential perspective. +" +Prompts Matter: Insights and Strategies for Prompt Engineering in Automated Software Traceability,Alberto D. Rodriguez,http://arxiv.org/pdf/2308.00229v1.pdf,2023-08-01,['cs.se'],2308.00229v1.pdf," Large Language Models (LLMs) have the potential to revolutionize automated +traceability by overcoming the challenges faced by previous methods and +introducing new possibilities. However, the optimal utilization of LLMs for +automated traceability remains unclear. This paper explores the process of +prompt engineering to extract link predictions from an LLM. We provide detailed +insights into our approach for constructing effective prompts, offering our +lessons learned. Additionally, we propose multiple strategies for leveraging +LLMs to generate traceability links, improving upon previous zero-shot methods +on the ranking of candidate links after prompt refinement. 
The primary +objective of this paper is to inspire and assist future researchers and +engineers by highlighting the process of constructing traceability prompts to +effectively harness LLMs for advancing automatic traceability. +" +CoT-BERT: Enhancing Unsupervised Sentence Representation through Chain-of-Thought,Bowen Zhang,http://arxiv.org/pdf/2309.11143v1.pdf,2023-09-20,"['cs.cl', 'cs.ai']",2309.11143v1.pdf," Unsupervised sentence representation learning aims to transform input +sentences into fixed-length vectors enriched with intricate semantic +information while obviating the reliance on labeled data. Recent progress +within this field, propelled by contrastive learning and prompt engineering, +has significantly bridged the gap between unsupervised and supervised +strategies. Nonetheless, the potential utilization of Chain-of-Thought, remains +largely untapped within this trajectory. To unlock latent capabilities within +pre-trained models, such as BERT, we propose a two-stage approach for sentence +representation: comprehension and summarization. Subsequently, the output of +the latter phase is harnessed as the vectorized representation of the input +sentence. For further performance enhancement, we meticulously refine both the +contrastive learning loss function and the template denoising technique for +prompt engineering. Rigorous experimentation substantiates our method, +CoT-BERT, transcending a suite of robust baselines without necessitating other +text representation models or external databases. +" +How does prompt engineering affect ChatGPT performance on unsupervised entity resolution?,Khanin Sisaengsuwanchai,http://arxiv.org/pdf/2310.06174v1.pdf,2023-10-09,"['cs.ai', 'cs.se']",2310.06174v1.pdf," Entity Resolution (ER) is the problem of semi-automatically determining when +two entities refer to the same underlying entity, with applications ranging +from healthcare to e-commerce. Traditional ER solutions required considerable +manual expertise, including feature engineering, as well as identification and +curation of training data. In many instances, such techniques are highly +dependent on the domain. With recent advent in large language models (LLMs), +there is an opportunity to make ER much more seamless and domain-independent. +However, it is also well known that LLMs can pose risks, and that the quality +of their outputs can depend on so-called prompt engineering. Unfortunately, a +systematic experimental study on the effects of different prompting methods for +addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper +aims to address this gap by conducting such a study. Although preliminary in +nature, our results show that prompting can significantly affect the quality of +ER, although it affects some metrics more than others, and can also be dataset +dependent. +" +Interactive Task Planning with Language Models,Boyi Li,http://arxiv.org/pdf/2310.10645v1.pdf,2023-10-16,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.hc']",2310.10645v1.pdf," An interactive robot framework accomplishes long-horizon task planning and +can easily generalize to new goals or distinct tasks, even during execution. +However, most traditional methods require predefined module design, which makes +it hard to generalize to different goals. Recent large language model based +approaches can allow for more open-ended planning but often require heavy +prompt engineering or domain-specific pretrained models. 
To tackle this, we +propose a simple framework that achieves interactive task planning with +language models. Our system incorporates both high-level planning and low-level +function execution via language. We verify the robustness of our system in +generating novel high-level instructions for unseen objectives and its ease of +adaptation to different tasks by merely substituting the task guidelines, +without the need for additional complex prompt engineering. Furthermore, when +the user sends a new request, our system is able to replan accordingly with +precision based on the new request, task guidelines and previously executed +steps. Please check more details on our https://wuphilipp.github.io/itp_site +and https://youtu.be/TrKLuyv26_g. +" +Prompt Engineering Through the Lens of Optimal Control,Yifan Luo,http://arxiv.org/pdf/2310.14201v2.pdf,2023-10-22,"['cs.lg', 'math.oc']",2310.14201v2.pdf," Prompt Engineering (PE) has emerged as a critical technique for guiding Large +Language Models (LLMs) in solving intricate tasks. Its importance is +highlighted by its potential to significantly enhance the efficiency and +effectiveness of human-machine interaction. As tasks grow increasingly complex, +recent advanced PE methods have extended beyond the limitations of single-round +interactions to embrace multi-round interactions, which allows for a deeper and +more nuanced engagement with LLMs. In this paper, we propose an optimal control +framework tailored for multi-round interactions with LLMs. This framework +provides a unified mathematical structure that not only systematizes the +existing PE methods but also sets the stage for rigorous analytical +improvements. Furthermore, we extend this framework to include PE via ensemble +methods and multi-agent collaboration, thereby enlarging the scope of +applicability. By adopting an optimal control perspective, we offer fresh +insights into existing PE methods and highlight theoretical challenges that +warrant future research. Besides, our work lays a foundation for the +development of more effective and interpretable PE methods. +" +A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models,Yuanfeng Song,http://arxiv.org/pdf/2310.18358v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.18358v1.pdf," The springing up of Large Language Models (LLMs) has shifted the community +from single-task-orientated natural language processing (NLP) research to a +holistic end-to-end multi-task learning paradigm. Along this line of research +endeavors in the area, LLM-based prompting methods have attracted much +attention, partially due to the technological advantages brought by prompt +engineering (PE) as well as the underlying NLP principles disclosed by various +prompting methods. Traditional supervised learning usually requires training a +model based on labeled data and then making predictions. In contrast, PE +methods directly use the powerful capabilities of existing LLMs (i.e., GPT-3 +and GPT-4) via composing appropriate prompts, especially under few-shot or +zero-shot scenarios. Facing the abundance of studies related to the prompting +and the ever-evolving nature of this field, this article aims to (i) illustrate +a novel perspective to review existing PE methods, within the well-established +communication theory framework; (ii) facilitate a better/deeper understanding +of developing trends of existing PE methods used in four typical tasks; (iii) +shed light on promising research directions for future PE methods. 
+" +Apollo: Zero-shot MultiModal Reasoning with Multiple Experts,Daniela Ben-David,http://arxiv.org/pdf/2310.18369v1.pdf,2023-10-25,"['cs.cl', 'cs.ai', 'cs.cv', 'i.2.7; i.5.4']",2310.18369v1.pdf," We propose a modular framework that leverages the expertise of different +foundation models over different modalities and domains in order to perform a +single, complex, multi-modal task, without relying on prompt engineering or +otherwise tailor-made multi-modal training. Our approach enables decentralized +command execution and allows each model to both contribute and benefit from the +expertise of the other models. Our method can be extended to a variety of +foundation models (including audio and vision), above and beyond only language +models, as it does not depend on prompts. We demonstrate our approach on two +tasks. On the well-known task of stylized image captioning, our experiments +show that our approach outperforms semi-supervised state-of-the-art models, +while being zero-shot and avoiding costly training, data collection, and prompt +engineering. We further demonstrate this method on a novel task, audio-aware +image captioning, in which an image and audio are given and the task is to +generate text that describes the image within the context of the provided +audio. Our code is available on GitHub. +" +Towards Zero-Shot and Few-Shot Table Question Answering using GPT-3,Pragya Srivastava,http://arxiv.org/pdf/2210.17284v1.pdf,2022-10-31,"['cs.lg', '14j60 (primary)']",2210.17284v1.pdf," We present very early results on using GPT-3 to perform question answering on +tabular data. We find that stock pre-trained GPT-3 is able to zero-shot learn +the table structure from a serialized JSON array-of-arrays representation, and +able to answer lookup queries and simple comparison questions in natural +language without any fine-tuning. We further find that simple prompt +engineering to include few-shot static Q&A examples significantly improves +accuracy. Lastly, we find that intermixing passage text improves accuracy even +further on heterogeneous data. We apply our approach on a novel dataset of +simple tables in newspaper infographics with promising results. Overall, we +find much cause for optimism in this basic approach. +" +Investigating Prompt Engineering in Diffusion Models,Sam Witteveen,http://arxiv.org/pdf/2211.15462v1.pdf,2022-11-21,"['cs.cv', 'cs.ai', 'cs.cl']",2211.15462v1.pdf," With the spread of the use of Text2Img diffusion models such as DALL-E 2, +Imagen, Mid Journey and Stable Diffusion, one challenge that artists face is +selecting the right prompts to achieve the desired artistic output. We present +techniques for measuring the effect that specific words and phrases in prompts +have, and (in the Appendix) present guidance on the selection of prompts to +produce desired effects. +" +Refining the Responses of LLMs by Themselves,Tianqiang Yan,http://arxiv.org/pdf/2305.04039v1.pdf,2023-05-06,"['cs.cl', 'cs.ai']",2305.04039v1.pdf," In this paper, we propose a simple yet efficient approach based on prompt +engineering that leverages the large language model itself to optimize its +answers without relying on auxiliary models. We introduce an iterative +self-evaluating optimization mechanism, with the potential for improved output +quality as iterations progress, removing the need for manual intervention. 
The +experiment's findings indicate that utilizing our response refinement framework +on the GPT-3.5 model yields results that are on par with, or even surpass, +those generated by the cutting-edge GPT-4 model. Detailed implementation +strategies and illustrative examples are provided to demonstrate the +superiority of our proposed solution. +" +Efficient Black-Box Adversarial Attacks on Neural Text Detectors,Vitalii Fishchuk,http://arxiv.org/pdf/2311.01873v1.pdf,2023-11-03,['cs.cl'],2311.01873v1.pdf," Neural text detectors are models trained to detect whether a given text was +generated by a language model or written by a human. In this paper, we +investigate three simple and resource-efficient strategies (parameter tweaking, +prompt engineering, and character-level mutations) to alter texts generated by +GPT-3.5 that are unsuspicious or unnoticeable for humans but cause +misclassification by neural text detectors. The results show that especially +parameter tweaking and character-level mutations are effective strategies. +" +Prompted Software Engineering in the Era of AI Models,Dae-Kyoo Kim,http://arxiv.org/pdf/2311.03359v1.pdf,2023-09-07,['cs.se'],2311.03359v1.pdf," This paper introduces prompted software engineering (PSE), which integrates +prompt engineering to build effective prompts for language-based AI models, to +enhance the software development process. PSE enables the use of AI models in +software development to produce high-quality software with fewer resources, +automating tedious tasks and allowing developers to focus on more innovative +aspects. However, effective prompts are necessary to guide software development +in generating accurate, relevant, and useful responses, while mitigating risks +of misleading outputs. This paper describes how productive prompts should be +built throughout the software development cycle. +" +Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language,Paul Denny,http://arxiv.org/pdf/2210.15157v1.pdf,2022-10-27,"['cs.hc', 'cs.ai']",2210.15157v1.pdf," GitHub Copilot is an artificial intelligence model for automatically +generating source code from natural language problem descriptions. Since June +2022, Copilot has officially been available for free to all students as a +plug-in to development environments like Visual Studio Code. Prior work +exploring OpenAI Codex, the underlying model that powers Copilot, has shown it +performs well on typical CS1 problems thus raising concerns about the impact it +will have on how introductory programming courses are taught. However, little +is known about the types of problems for which Copilot does not perform well, +or about the natural language interactions that a student might have with +Copilot when resolving errors. We explore these questions by evaluating the +performance of Copilot on a publicly available dataset of 166 programming +problems. We find that it successfully solves around half of these problems on +its very first attempt, and that it solves 60\% of the remaining problems using +only natural language changes to the problem description. We argue that this +type of prompt engineering, which we believe will become a standard interaction +between human and Copilot when it initially fails, is a potentially useful +learning activity that promotes computational thinking skills, and is likely to +change the nature of code writing skill development. 
+" +ChatGPT4PCG Competition: Character-like Level Generation for Science Birds,Pittawat Taveekitworachai,http://arxiv.org/pdf/2303.15662v2.pdf,2023-03-28,"['cs.ai', 'cs.cl', 'i.2.7; i.2.8']",2303.15662v2.pdf," This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE +Conference on Games. The objective of this competition is for participants to +create effective prompts for ChatGPT--enabling it to generate Science Birds +levels with high stability and character-like qualities--fully using their +creativity as well as prompt engineering skills. ChatGPT is a conversational +agent developed by OpenAI. Science Birds is selected as the competition +platform because designing an Angry Birds-like level is not a trivial task due +to the in-game gravity; the quality of the levels is determined by their +stability. To lower the entry barrier to the competition, we limit the task to +the generation of capitalized English alphabetical characters. We also allow +only a single prompt to be used for generating all the characters. Here, the +quality of the generated levels is determined by their stability and similarity +to the given characters. A sample prompt is provided to participants for their +reference. An experiment is conducted to determine the effectiveness of several +modified versions of this sample prompt on level stability and similarity by +testing them on several characters. To the best of our knowledge, we believe +that ChatGPT4PCG is the first competition of its kind and hope to inspire +enthusiasm for prompt engineering in procedural content generation. +" +Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering,Rishov Paul,http://arxiv.org/pdf/2304.07840v2.pdf,2023-04-16,"['cs.lg', 'cs.se']",2304.07840v2.pdf," Sequence-to-sequence models have been used to transform erroneous programs +into correct ones when trained with a large enough dataset. Some recent studies +also demonstrated strong empirical evidence that code review could improve the +program repair further. Large language models, trained with Natural Language +(NL) and Programming Language (PL), can contain inherent knowledge of both. In +this study, we investigate if this inherent knowledge of PL and NL can be +utilized to improve automated program repair. We applied PLBART and CodeT5, two +state-of-the-art language models that are pre-trained with both PL and NL, on +two such natural language-based program repair datasets and found that the +pre-trained language models fine-tuned with datasets containing both code +review and subsequent code changes notably outperformed each of the previous +models. With the advent of code generative models like Codex and GPT-3.5-Turbo, +we also performed zero-shot and few-shots learning-based prompt engineering to +assess their performance on these datasets. However, the practical application +of using LLMs in the context of automated program repair is still a long way +off based on our manual analysis of the generated repaired codes by the +learning models. +" +Conceptual Design Generation Using Large Language Models,Kevin Ma,http://arxiv.org/pdf/2306.01779v1.pdf,2023-05-30,"['cs.cl', 'cs.ai']",2306.01779v1.pdf," Concept generation is a creative step in the conceptual design phase, where +designers often turn to brainstorming, mindmapping, or crowdsourcing design +ideas to complement their own knowledge of the domain. 
Recent advances in +natural language processing (NLP) and machine learning (ML) have led to the +rise of Large Language Models (LLMs) capable of generating seemingly creative +outputs from textual prompts. The success of these models has led to their +integration and application across a variety of domains, including art, +entertainment, and other creative work. In this paper, we leverage LLMs to +generate solutions for a set of 12 design problems and compare them to a +baseline of crowdsourced solutions. We evaluate the differences between +generated and crowdsourced design solutions through multiple perspectives, +including human expert evaluations and computational metrics. Expert +evaluations indicate that the LLM-generated solutions have higher average +feasibility and usefulness while the crowdsourced solutions have more novelty. +We experiment with prompt engineering and find that leveraging few-shot +learning can lead to the generation of solutions that are more similar to the +crowdsourced solutions. These findings provide insight into the quality of +design solutions generated with LLMs and begin to evaluate prompt engineering +techniques that could be leveraged by practitioners to generate higher-quality +design solutions synergistically with LLMs. +" +Cheap-fake Detection with LLM using Prompt Engineering,Guangyang Wu,http://arxiv.org/pdf/2306.02776v1.pdf,2023-06-05,['cs.cv'],2306.02776v1.pdf," The misuse of real photographs with conflicting image captions in news items +is an example of the out-of-context (OOC) misuse of media. In order to detect +OOC media, individuals must determine the accuracy of the statement and +evaluate whether the triplet (~\textit{i.e.}, the image and two captions) +relates to the same event. This paper presents a novel learnable approach for +detecting OOC media in the ICME'23 Grand Challenge on Detecting Cheapfakes. The +proposed method is based on the COSMOS structure, which assesses the coherence +between an image and captions, as well as between two captions. We enhance the +baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a +feature extractor. Specifically, we propose an innovative approach to feature +extraction utilizing prompt engineering to develop a robust and reliable +feature extractor with the GPT3.5 model. The proposed method captures the +correlation between two captions and effectively integrates this module into +the COSMOS baseline model, which allows for a deeper understanding of the +relationship between captions. By incorporating this module, we demonstrate the +potential for significant improvements in cheap-fakes detection performance. +The proposed methodology holds promising implications for various applications +such as natural language processing, image captioning, and text-to-image +synthesis. Docker for submission is available at +https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes. +" +Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis,James R. Kirk,http://arxiv.org/pdf/2306.06770v3.pdf,2023-06-11,"['cs.ai', 'cs.hc', 'cs.ro', 'i.2.6; i.2.7']",2306.06770v3.pdf," Large language models (LLMs) offer significant promise as a knowledge source +for task learning. Prompt engineering has been shown to be effective for +eliciting knowledge from an LLM, but alone it is insufficient for acquiring +relevant, situationally grounded knowledge for an embodied agent learning novel +tasks. 
We describe a cognitive-agent approach that extends and complements +prompt engineering, mitigating its limitations and thus enabling an agent to +acquire new task knowledge matched to its native language capabilities, +embodiment, environment, and user preferences. The approach is to increase the +response space of LLMs and deploy general strategies, embedded within the +autonomous agent, to evaluate, repair, and select among candidate responses +produced by the LLM. We describe the approach and experiments that show how an +agent, by retrieving and evaluating a breadth of responses from the LLM, can +achieve 77-94% task completion in one-shot learning without user oversight. The +approach achieves 100% task completion when human oversight (such as an +indication of preference) is provided. Further, the type of oversight largely +shifts from explicit, natural language instruction to simple +confirmation/discomfirmation of high-quality responses that have been vetted by +the agent before presentation to a user. +" +ChatGPT for Robotics: Design Principles and Model Abilities,Sai Vemprala,http://arxiv.org/pdf/2306.17582v2.pdf,2023-02-20,"['cs.ai', 'cs.cl', 'cs.hc', 'cs.lg', 'cs.ro']",2306.17582v2.pdf," This paper presents an experimental study regarding the use of OpenAI's +ChatGPT for robotics applications. We outline a strategy that combines design +principles for prompt engineering and the creation of a high-level function +library which allows ChatGPT to adapt to different robotics tasks, simulators, +and form factors. We focus our evaluations on the effectiveness of different +prompt engineering techniques and dialog strategies towards the execution of +various types of robotics tasks. We explore ChatGPT's ability to use free-form +dialog, parse XML tags, and to synthesize code, in addition to the use of +task-specific prompting functions and closed-loop reasoning through dialogues. +Our study encompasses a range of tasks within the robotics domain, from basic +logical, geometrical, and mathematical reasoning all the way to complex domains +such as aerial navigation, manipulation, and embodied agents. We show that +ChatGPT can be effective at solving several of such tasks, while allowing users +to interact with it primarily via natural language instructions. In addition to +these studies, we introduce an open-sourced research tool called PromptCraft, +which contains a platform where researchers can collaboratively upload and vote +on examples of good prompting schemes for robotics applications, as well as a +sample robotics simulator with ChatGPT integration, making it easier for users +to get started with using ChatGPT for robotics. +" +Cases of EFL Secondary Students' Prompt Engineering Pathways to Complete a Writing Task with ChatGPT,David James Woo,http://arxiv.org/pdf/2307.05493v1.pdf,2023-06-19,"['cs.hc', 'cs.ai', 'cs.cl']",2307.05493v1.pdf," ChatGPT is a state-of-the-art (SOTA) chatbot. Although it has potential to +support English as a foreign language (EFL) students' writing, to effectively +collaborate with it, a student must learn to engineer prompts, that is, the +skill of crafting appropriate instructions so that ChatGPT produces desired +outputs. However, writing an appropriate prompt for ChatGPT is not +straightforward for non-technical users who suffer a trial-and-error process. +This paper examines the content of EFL students' ChatGPT prompts when +completing a writing task and explores patterns in the quality and quantity of +the prompts. 
The data come from iPad screen recordings of secondary school EFL +students who used ChatGPT and other SOTA chatbots for the first time to +complete the same writing task. The paper presents a case study of four +distinct pathways that illustrate the trial-and-error process and show +different combinations of prompt content and quantity. The cases contribute +evidence for the need to provide prompt engineering education in the context of +the EFL writing classroom, if students are to move beyond an individual +trial-and-error process, learning a greater variety of prompt content and more +sophisticated prompts to support their writing. +" +"Multi-party Goal Tracking with LLMs: Comparing Pre-training, Fine-tuning, and Prompt Engineering",Angus Addlesee,http://arxiv.org/pdf/2308.15231v1.pdf,2023-08-29,"['cs.cl', 'cs.hc']",2308.15231v1.pdf," This paper evaluates the extent to which current Large Language Models (LLMs) +can capture task-oriented multi-party conversations (MPCs). We have recorded +and transcribed 29 MPCs between patients, their companions, and a social robot +in a hospital. We then annotated this corpus for multi-party goal-tracking and +intent-slot recognition. People share goals, answer each other's goals, and +provide other people's goals in MPCs - none of which occur in dyadic +interactions. To understand user goals in MPCs, we compared three methods in +zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks +to train DialogLM using LED, and employed prompt engineering techniques with +GPT-3.5-turbo, to determine which approach can complete this novel task with +limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot +setting. The `reasoning' style prompt, when given 7% of the corpus as example +annotated conversations, was the best performing method. It correctly annotated +62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition +MPCs. A `story' style prompt increased model hallucination, which could be +detrimental if deployed in safety-critical settings. We conclude that +multi-party conversations still challenge state-of-the-art LLMs. +" +Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation,Dawei Gao,http://arxiv.org/pdf/2308.15363v3.pdf,2023-08-29,"['cs.db', 'cs.cl', 'cs.lg']",2308.15363v3.pdf," Large language models (LLMs) have emerged as a new paradigm for Text-to-SQL +task. However, the absence of a systematical benchmark inhibits the development +of designing effective, efficient and economic LLM-based Text-to-SQL solutions. +To address this challenge, in this paper, we first conduct a systematical and +extensive comparison over existing prompt engineering methods, including +question representation, example selection and example organization, and with +these experimental results, we elaborate their pros and cons. Based on these +findings, we propose a new integrated solution, named DAIL-SQL, which refreshes +the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To +explore the potential of open-source LLM, we investigate them in various +scenarios, and further enhance their performance with supervised fine-tuning. +Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well +as the advantages and disadvantages of the supervised fine-tuning. +Additionally, towards an efficient and economic LLM-based Text-to-SQL solution, +we emphasize the token efficiency in prompt engineering and compare the prior +studies under this metric. 
We hope that our work provides a deeper +understanding of Text-to-SQL with LLMs, and inspires further investigations and +broad applications. +" +PRE: Vision-Language Prompt Learning with Reparameterization Encoder,Anh Pham Thi Minh,http://arxiv.org/pdf/2309.07760v2.pdf,2023-09-14,"['cs.cv', 'cs.ai', 'cs.lg', 'i.4.0']",2309.07760v2.pdf," Large pre-trained vision-language models such as CLIP have demonstrated great +potential in zero-shot transferability to downstream tasks. However, to attain +optimal performance, the manual selection of prompts is necessary to improve +alignment between the downstream image distribution and the textual class +descriptions. This manual prompt engineering is the major challenge for +deploying such models in practice since it requires domain expertise and is +extremely time-consuming. To avoid non-trivial prompt engineering, recent work +Context Optimization (CoOp) introduced the concept of prompt learning to the +vision domain using learnable textual tokens. While CoOp can achieve +substantial improvements over manual prompts, its learned context is worse +generalizable to wider unseen classes within the same dataset. In this work, we +present Prompt Learning with Reparameterization Encoder (PRE) - a simple and +efficient method that enhances the generalization ability of the learnable +prompt to unseen classes while maintaining the capacity to learn Base classes. +Instead of directly optimizing the prompts, PRE employs a prompt encoder to +reparameterize the input prompt embeddings, enhancing the exploration of +task-specific knowledge from few-shot samples. Experiments and extensive +ablation studies on 8 benchmarks demonstrate that our approach is an efficient +method for prompt learning. Specifically, PRE achieves a notable enhancement of +5.60% in average accuracy on New classes and 3% in Harmonic mean compared to +CoOp in the 16-shot setting, all achieved within a good training time. +" +PEACE: Prompt Engineering Automation for CLIPSeg Enhancement in Aerial Robotics,Haechan Mark Bong,http://arxiv.org/pdf/2310.00085v1.pdf,2023-09-29,['cs.ro'],2310.00085v1.pdf," From industrial to space robotics, safe landing is an essential component for +flight operations. With the growing interest in artificial intelligence, we +direct our attention to learning based safe landing approaches. This paper +extends our previous work, DOVESEI, which focused on a reactive UAV system by +harnessing the capabilities of open vocabulary image segmentation. Prompt-based +safe landing zone segmentation using an open vocabulary based model is no more +just an idea, but proven to be feasible by the work of DOVESEI. However, a +heuristic selection of words for prompt is not a reliable solution since it +cannot take the changing environment into consideration and detrimental +consequences can occur if the observed environment is not well represented by +the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation +for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation +and engineering to adapt to data distribution shifts. Our system is capable of +performing safe landing operations with collision avoidance at altitudes as low +as 20 meters using only monocular cameras and image segmentation. We take +advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the +terrain segmentation between frames in a video stream. 
PEACE shows promising +improvements in prompt generation and engineering for aerial images compared to +the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our +system was able to improve successful safe landing zone selections by 58.62% +compared to using only DOVESEI. All the source code is open source and +available online. +" +Understanding prompt engineering may not require rethinking generalization,Victor Akinwande,http://arxiv.org/pdf/2310.03957v1.pdf,2023-10-06,"['cs.lg', 'cs.cv']",2310.03957v1.pdf," Zero-shot learning in prompted vision-language models, the practice of +crafting prompts to build classifiers without an explicit training process, has +achieved impressive performance in many settings. This success presents a +seemingly surprising observation: these methods suffer relatively little from +overfitting, i.e., when a prompt is manually engineered to achieve low error on +a given training set (thus rendering the method no longer actually zero-shot), +the approach still performs well on held-out test data. In this paper, we show +that we can explain such performance well via recourse to classical PAC-Bayes +bounds. Specifically, we show that the discrete nature of prompts, combined +with a PAC-Bayes prior given by a language model, results in generalization +bounds that are remarkably tight by the standards of the literature: for +instance, the generalization bound of an ImageNet classifier is often within a +few percentage points of the true test error. We demonstrate empirically that +this holds for existing handcrafted prompts and prompts generated through +simple greedy search. Furthermore, the resulting bound is well-suited for model +selection: the models with the best bound typically also have the best test +performance. This work thus provides a possible justification for the +widespread practice of prompt engineering, even if it seems that such methods +could potentially overfit the training data. +" +What's the Magic Word? A Control Theory of LLM Prompting,Aman Bhargava,http://arxiv.org/pdf/2310.04444v2.pdf,2023-10-02,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",2310.04444v2.pdf," Prompt engineering is effective and important in the deployment of LLMs but +is poorly understood mathematically. Here, we formalize prompt engineering as +an optimal control problem on LLMs -- where the prompt is considered a control +variable for modulating the output distribution of the LLM. Within this +framework, we ask a simple question: given a sequence of tokens, does there +always exist a prompt we can prepend that will steer the LLM toward accurately +predicting the final token? We call such an optimal prompt the magic word since +prepending the prompt causes the LLM to output the correct answer. If magic +words exist, can we find them? If so, what are their properties? We offer +analytic analysis on the controllability of the self-attention head where we +prove a bound on controllability as a function of the singular values of its +weight matrices. We take inspiration from control theory to propose a metric +called $k-\epsilon$ controllability to characterize LLM steerability. We +compute the $k-\epsilon$ controllability of a panel of large language models, +including Falcon-7b, Llama-7b, and Falcon-40b on 5000 WikiText causal language +modeling tasks. Remarkably, we find that magic words of 10 tokens or less exist +for over 97% of WikiText instances surveyed for each model. 
+" +Configuration Validation with Large Language Models,Xinyu Lian,http://arxiv.org/pdf/2310.09690v1.pdf,2023-10-15,"['cs.se', 'cs.ai', 'cs.os']",2310.09690v1.pdf," Misconfigurations are the major causes of software failures. Existing +configuration validation techniques rely on manually written rules or test +cases, which are expensive to implement and maintain, and are hard to be +comprehensive. Leveraging machine learning (ML) and natural language processing +(NLP) for configuration validation is considered a promising direction, but has +been facing challenges such as the need of not only large-scale configuration +data, but also system-specific features and models which are hard to +generalize. Recent advances in Large Language Models (LLMs) show the promises +to address some of the long-lasting limitations of ML/NLP-based configuration +validation techniques. In this paper, we present an exploratory analysis on the +feasibility and effectiveness of using LLMs like GPT and Codex for +configuration validation. Specifically, we take a first step to empirically +evaluate LLMs as configuration validators without additional fine-tuning or +code generation. We develop a generic LLM-based validation framework, named +Ciri, which integrates different LLMs. Ciri devises effective prompt +engineering with few-shot learning based on both valid configuration and +misconfiguration data. Ciri also validates and aggregates the outputs of LLMs +to generate validation results, coping with known hallucination and +nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on +five popular LLMs using configuration data of six mature, widely deployed +open-source systems. Our analysis (1) confirms the potential of using LLMs for +configuration validation, (2) understands the design space of LLMbased +validators like Ciri, especially in terms of prompt engineering with few-shot +learning, and (3) reveals open challenges such as ineffectiveness in detecting +certain types of misconfigurations and biases to popular configuration +parameters. +" +Learning to Prompt for Vision-Language Models,Kaiyang Zhou,http://arxiv.org/pdf/2109.01134v6.pdf,2021-09-02,"['cs.cv', 'cs.ai', 'cs.lg']",2109.01134v6.pdf," Large pre-trained vision-language models like CLIP have shown great potential +in learning representations that are transferable across a wide range of +downstream tasks. Different from the traditional representation learning that +is based mostly on discretized labels, vision-language pre-training aligns +images and texts in a common feature space, which allows zero-shot transfer to +a downstream task via prompting, i.e., classification weights are synthesized +from natural language describing classes of interest. In this work, we show +that a major challenge for deploying such models in practice is prompt +engineering, which requires domain expertise and is extremely time-consuming -- +one needs to spend a significant amount of time on words tuning since a slight +change in wording could have a huge impact on performance. Inspired by recent +advances in prompt learning research in natural language processing (NLP), we +propose Context Optimization (CoOp), a simple approach specifically for +adapting CLIP-like vision-language models for downstream image recognition. +Concretely, CoOp models a prompt's context words with learnable vectors while +the entire pre-trained parameters are kept fixed. 
To handle different image +recognition tasks, we provide two implementations of CoOp: unified context and +class-specific context. Through extensive experiments on 11 datasets, we +demonstrate that CoOp requires as few as one or two shots to beat hand-crafted +prompts with a decent margin and is able to gain significant improvements over +prompt engineering with more shots, e.g., with 16 shots the average gain is +around 15% (with the highest reaching over 45%). Despite being a learning-based +approach, CoOp achieves superb domain generalization performance compared with +the zero-shot model using hand-crafted prompts. +" +"Prompt-Free Diffusion: Taking ""Text"" out of Text-to-Image Diffusion Models",Xingqian Xu,http://arxiv.org/pdf/2305.16223v2.pdf,2023-05-25,['cs.cv'],2305.16223v2.pdf," Text-to-image (T2I) research has grown explosively in the past year, owing to +the large-scale pre-trained diffusion models and many emerging personalization +and editing approaches. Yet, one pain point persists: the text prompt +engineering, and searching high-quality text prompts for customized results is +more art than science. Moreover, as commonly argued: ""an image is worth a +thousand words"" - the attempt to describe a desired image with texts often ends +up being ambiguous and cannot comprehensively cover delicate visual details, +hence necessitating more additional controls from the visual domain. In this +paper, we take a bold step forward: taking ""Text"" out of a pre-trained T2I +diffusion model, to reduce the burdensome prompt engineering efforts for users. +Our proposed framework, Prompt-Free Diffusion, relies on only visual inputs to +generate new images: it takes a reference image as ""context"", an optional image +structural conditioning, and an initial noise, with absolutely no text prompt. +The core architecture behind the scene is Semantic Context Encoder (SeeCoder), +substituting the commonly used CLIP-based or LLM-based text encoder. The +reusability of SeeCoder also makes it a convenient drop-in component: one can +also pre-train a SeeCoder in one T2I model and reuse it for another. Through +extensive experiments, Prompt-Free Diffusion is experimentally found to (i) +outperform prior exemplar-based image synthesis approaches; (ii) perform on par +with state-of-the-art T2I models using prompts following the best practice; and +(iii) be naturally extensible to other downstream applications such as anime +figure generation and virtual try-on, with promising quality. Our code and +models are open-sourced at https://github.com/SHI-Labs/Prompt-Free-Diffusion. +" +Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models,Robert L. Logan IV,http://arxiv.org/pdf/2106.13353v2.pdf,2021-06-24,"['cs.cl', 'cs.lg']",2106.13353v2.pdf," Prompting language models (LMs) with training examples and task descriptions +has been seen as critical to recent successes in few-shot learning. In this +work, we show that finetuning LMs in the few-shot setting can considerably +reduce the need for prompt engineering. In fact, one can use null prompts, +prompts that contain neither task-specific templates nor training examples, and +achieve competitive accuracy to manually-tuned prompts across a wide range of +tasks. 
While finetuning LMs does introduce new parameters for each downstream +task, we show that this memory overhead can be substantially reduced: +finetuning only the bias terms can achieve comparable or better accuracy than +standard finetuning while only updating 0.1% of the parameters. All in all, we +recommend finetuning LMs for few-shot learning as it is more accurate, robust +to different prompts, and can be made nearly as efficient as using frozen LMs. +" +An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models,Tianxing He,http://arxiv.org/pdf/2109.02772v2.pdf,2021-09-06,['cs.ai'],2109.02772v2.pdf," Prompt-based knowledge probing for 1-hop relations has been used to measure +how much world knowledge is stored in pretrained language models. Existing work +uses considerable amounts of data to tune the prompts for better performance. +In this work, we compare a variety of approaches under a few-shot knowledge +probing setting, where only a small number (e.g., 10 or 20) of example triples +are available. In addition, we create a new dataset named TREx-2p, which +contains 2-hop relations. We report that few-shot examples can strongly boost +the probing performance for both 1-hop and 2-hop relations. In particular, we +find that a simple-yet-effective approach of finetuning the bias vectors in the +model outperforms existing prompt-engineering methods. Our dataset and code are +available at \url{https://github.com/cloudygoose/fewshot_lama}. +" +Design Guidelines for Prompt Engineering Text-to-Image Generative Models,Vivian Liu,http://arxiv.org/pdf/2109.06977v3.pdf,2021-09-14,['cs.hc'],2109.06977v3.pdf," Text-to-image generative models are a new and powerful way to generate visual +artwork. However, the open-ended nature of text as interaction is double-edged; +while users can input anything and have access to an infinite range of +generations, they also must engage in brute-force trial and error with the text +prompt when the result quality is poor. We conduct a study exploring what +prompt keywords and model hyperparameters can help produce coherent outputs. In +particular, we study prompts structured to include subject and style keywords +and investigate success and failure modes of these prompts. Our evaluation of +5493 generations over the course of five experiments spans 51 abstract and +concrete subjects as well as 51 abstract and figurative styles. From this +evaluation, we present design guidelines that can help people produce better +outcomes from text-to-image generative models. +" +Cut the CARP: Fishing for zero-shot story evaluation,Shahbuland Matiana,http://arxiv.org/pdf/2110.03111v3.pdf,2021-10-06,['cs.cl'],2110.03111v3.pdf," Recent advances in large-scale language models (Raffel et al., 2019; Brown et +al., 2020) have brought significant qualitative and quantitative improvements +in machine-driven text generation. Despite this, generation and evaluation of +machine-generated narrative text remains a challenging problem. Objective +evaluation of computationally-generated stories may be prohibitively expensive, +require meticulously annotated datasets, or may not adequately measure the +logical coherence of a generated story's narratological structure. + Informed by recent advances in contrastive learning (Radford et al., 2021), +we present Contrastive Authoring and Reviewing Pairing (CARP): a scalable, +efficient method for performing qualitatively superior, zero-shot evaluation of +stories. 
We show a strong correlation between human evaluation of stories and
+those of CARP. Model outputs more significantly correlate with corresponding
+human input than those of language-model based methods which utilize finetuning or
+prompt engineering approaches. We also present and analyze the Story-Critique
+Dataset, a new corpus composed of 1.3 million aligned story-critique pairs
+derived from over 80,000 stories. We expect this corpus to be of interest to
+NLP researchers.
+"
+Solving Probability and Statistics Problems by Program Synthesis,Leonard Tang,http://arxiv.org/pdf/2111.08267v1.pdf,2021-11-16,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",2111.08267v1.pdf," We solve university level probability and statistics questions by program
+synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on
+code. We transform course problems from MIT's 18.05 Introduction to Probability
+and Statistics and Harvard's STAT110 Probability into programming tasks. We
+then execute the generated code to get a solution. Since these course questions
+are grounded in probability, we often aim to have Codex generate probabilistic
+programs that simulate a large number of probabilistic dependencies to compute
+its solution. Our approach requires prompt engineering to transform the
+question from its original form to an explicit, tractable form that results in
+a correct program and solution. To estimate the amount of work needed to
+translate an original question into its tractable form, we measure the
+similarity between original and transformed questions. Our work is the first to
+introduce a new dataset of university-level probability and statistics problems
+and solve these problems in a scalable fashion using the program synthesis
+capabilities of large language models.
+"
+StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and Manipulation,Umut Kocasari,http://arxiv.org/pdf/2112.08493v1.pdf,2021-12-15,"['cs.cv', 'cs.lg']",2112.08493v1.pdf," Discovering meaningful directions in the latent space of GANs to manipulate
+semantic attributes typically requires large amounts of labeled data. Recent
+work aims to overcome this limitation by leveraging the power of Contrastive
+Language-Image Pre-training (CLIP), a joint text-image model. While promising,
+these methods require several hours of preprocessing or training to achieve the
+desired manipulations. In this paper, we present StyleMC, a fast and efficient
+method for text-driven image generation and manipulation. StyleMC uses a
+CLIP-based loss and an identity loss to manipulate images via a single text
+prompt without significantly affecting other attributes. Unlike prior work,
+StyleMC requires only a few seconds of training per text prompt to find stable
+global directions, does not require prompt engineering and can be used with any
+pre-trained StyleGAN2 model. We demonstrate the effectiveness of our method and
+compare it to state-of-the-art methods. Our code can be found at
+http://catlab-team.github.io/stylemc.
+"
+QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition,Andy T. Liu,http://arxiv.org/pdf/2203.01543v2.pdf,2022-03-03,"['cs.cl', 'cs.ai', 'cs.lg']",2203.01543v2.pdf," Recently, prompt-based learning for pre-trained language models has succeeded
+in few-shot Named Entity Recognition (NER) by exploiting prompts as task
+guidance to increase label efficiency. 
However, previous prompt-based methods +for few-shot NER have limitations such as a higher computational complexity, +poor zero-shot ability, requiring manual prompt engineering, or lack of prompt +robustness. In this work, we address these shortcomings by proposing a new +prompt-based learning NER method with Question Answering (QA), called QaNER. +Our approach includes 1) a refined strategy for converting NER problems into +the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based +tuning with QA models on a few annotated NER examples; 4) zero-shot NER by +prompting the QA model. Comparing the proposed approach with previous methods, +QaNER is faster at inference, insensitive to the prompt quality, and robust to +hyper-parameters, as well as demonstrating significantly better low-resource +performance and zero-shot capability. +" +Executive Function: A Contrastive Value Policy for Resampling and Relabeling Perceptions via Hindsight Summarization?,Chris Lengerich,http://arxiv.org/pdf/2204.12639v1.pdf,2022-04-27,['cs.cl'],2204.12639v1.pdf," We develop the few-shot continual learning task from first principles and +hypothesize an evolutionary motivation and mechanism of action for executive +function as a contrastive value policy which resamples and relabels perception +data via hindsight summarization to minimize attended prediction error, similar +to an online prompt engineering problem. This is made feasible by the use of a +memory policy and a pretrained network with inductive biases for a grammar of +learning and is trained to maximize evolutionary survival. We show how this +model of executive function can be used to implement hypothesis testing as a +stream of consciousness and may explain observations of human few-shot learning +and neuroanatomy. +" +Polyglot Prompt: Multilingual Multitask PrompTraining,Jinlan Fu,http://arxiv.org/pdf/2204.14264v2.pdf,2022-04-29,['cs.cl'],2204.14264v2.pdf," This paper aims for a potential architectural improvement for multilingual +learning and asks: Can different tasks from different languages be modeled in a +monolithic framework, i.e. without any task/language-specific module? The +benefit of achieving this could open new doors for future multilingual +research, including allowing systems trained on low resources to be further +assisted by other languages as well as other tasks. We approach this goal by +developing a learning framework named Polyglot Prompting to exploit prompting +methods for learning a unified semantic space for different languages and tasks +with multilingual prompt engineering. We performed a comprehensive evaluation +of 6 tasks, namely topic classification, sentiment classification, named entity +recognition, question answering, natural language inference, and summarization, +covering 24 datasets and 49 languages. The experimental results demonstrated +the efficacy of multilingual multitask prompt-based learning and led to +inspiring observations. We also present an interpretable multilingual +evaluation methodology and show how the proposed framework, multilingual +multitask prompt training, works. We release all datasets prompted in the best +setting and code. +" +CLIP-CLOP: CLIP-Guided Collage and Photomontage,Piotr Mirowski,http://arxiv.org/pdf/2205.03146v3.pdf,2022-05-06,"['cs.cv', 'cs.ai']",2205.03146v3.pdf," The unabated mystique of large-scale neural networks, such as the CLIP dual +image-and-text encoder, popularized automatically generated art. 
Increasingly +more sophisticated generators enhanced the artworks' realism and visual +appearance, and creative prompt engineering enabled stylistic expression. +Guided by an artist-in-the-loop ideal, we design a gradient-based generator to +produce collages. It requires the human artist to curate libraries of image +patches and to describe (with prompts) the whole image composition, with the +option to manually adjust the patches' positions during generation, thereby +allowing humans to reclaim some control of the process and achieve greater +creative freedom. We explore the aesthetic potentials of high-resolution +collages, and provide an open-source Google Colab as an artistic tool. +" +Toxicity Detection with Generative Prompt-based Inference,Yau-Shian Wang,http://arxiv.org/pdf/2205.12390v1.pdf,2022-05-24,"['cs.cl', 'cs.ai']",2205.12390v1.pdf," Due to the subtleness, implicity, and different possible interpretations +perceived by different people, detecting undesirable content from text is a +nuanced difficulty. It is a long-known risk that language models (LMs), once +trained on corpus containing undesirable content, have the power to manifest +biases and toxicity. However, recent studies imply that, as a remedy, LMs are +also capable of identifying toxic content without additional fine-tuning. +Prompt-methods have been shown to effectively harvest this surprising +self-diagnosing capability. However, existing prompt-based methods usually +specify an instruction to a language model in a discriminative way. In this +work, we explore the generative variant of zero-shot prompt-based toxicity +detection with comprehensive trials on prompt engineering. We evaluate on three +datasets with toxicity labels annotated on social media posts. Our analysis +highlights the strengths of our generative classification approach both +quantitatively and qualitatively. Interesting aspects of self-diagnosis and its +ethical implications are discussed. +" +The Creativity of Text-to-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2206.02904v4.pdf,2022-05-13,"['cs.hc', 'cs.gr', 'h.5; h.m']",2206.02904v4.pdf," Text-guided synthesis of images has made a giant leap towards becoming a +mainstream phenomenon. With text-to-image generation systems, anybody can +create digital images and artworks. This provokes the question of whether +text-to-image generation is creative. This paper expounds on the nature of +human creativity involved in text-to-image art (so-called ""AI art"") with a +specific focus on the practice of prompt engineering. The paper argues that the +current product-centered view of creativity falls short in the context of +text-to-image generation. A case exemplifying this shortcoming is provided and +the importance of online communities for the creative ecosystem of +text-to-image art is highlighted. The paper provides a high-level summary of +this online ecosystem drawing on Rhodes' conceptual four P model of creativity. +Challenges for evaluating the creativity of text-to-image generation and +opportunities for research on text-to-image generation in the field of +Human-Computer Interaction (HCI) are discussed. +" +Rationale-Augmented Ensembles in Language Models,Xuezhi Wang,http://arxiv.org/pdf/2207.00747v1.pdf,2022-07-02,['cs.cl'],2207.00747v1.pdf," Recent research has shown that rationales, or step-by-step chains of thought, +can be used to improve performance in multi-step reasoning tasks. 
We reconsider +rationale-augmented prompting for few-shot in-context learning, where (input -> +output) prompts are expanded to (input, rationale -> output) prompts. For +rationale-augmented prompting we demonstrate how existing approaches, which +rely on manual prompt engineering, are subject to sub-optimal rationales that +may harm performance. To mitigate this brittleness, we propose a unified +framework of rationale-augmented ensembles, where we identify rationale +sampling in the output space as the key component to robustly improve +performance. This framework is general and can easily be extended to common +natural language processing tasks, even those that do not traditionally +leverage intermediate steps, such as question answering, word sense +disambiguation, and sentiment analysis. We demonstrate that rationale-augmented +ensembles achieve more accurate and interpretable results than existing +prompting approaches--including standard prompting without rationales and +rationale-based chain-of-thought prompting--while simultaneously improving +interpretability of model predictions through the associated rationales. +" +Text-Guided Synthesis of Artistic Images with Retrieval-Augmented Diffusion Models,Robin Rombach,http://arxiv.org/pdf/2207.13038v1.pdf,2022-07-26,['cs.cv'],2207.13038v1.pdf," Novel architectures have recently improved generative image synthesis leading +to excellent visual quality in various tasks. Of particular note is the field +of ``AI-Art'', which has seen unprecedented growth with the emergence of +powerful multimodal models such as CLIP. By combining speech and image +synthesis models, so-called ``prompt-engineering'' has become established, in +which carefully selected and composed sentences are used to achieve a certain +visual style in the synthesized image. In this note, we present an alternative +approach based on retrieval-augmented diffusion models (RDMs). In RDMs, a set +of nearest neighbors is retrieved from an external database during training for +each training instance, and the diffusion model is conditioned on these +informative samples. During inference (sampling), we replace the retrieval +database with a more specialized database that contains, for example, only +images of a particular visual style. This provides a novel way to prompt a +general trained model after training and thereby specify a particular visual +style. As shown by our experiments, this approach is superior to specifying the +visual style within the text prompt. We open-source code and model weights at +https://github.com/CompVis/latent-diffusion . +" +Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models,Hendrik Strobelt,http://arxiv.org/pdf/2208.07852v1.pdf,2022-08-16,"['cs.cl', 'cs.hc', 'cs.lg']",2208.07852v1.pdf," State-of-the-art neural language models can now be used to solve ad-hoc +language tasks through zero-shot prompting without the need for supervised +training. This approach has gained popularity in recent years, and researchers +have demonstrated prompts that achieve strong accuracy on specific NLP tasks. +However, finding a prompt for new tasks requires experimentation. Different +prompt templates with different wording choices lead to significant accuracy +differences. PromptIDE allows users to experiment with prompt variations, +visualize prompt performance, and iteratively optimize prompts. 
We developed a +workflow that allows users to first focus on model feedback using small data +before moving on to a large data regime that allows empirical grounding of +promising prompts using quantitative measures of the task. The tool then allows +easy deployment of the newly created ad-hoc models. We demonstrate the utility +of PromptIDE (demo at http://prompt.vizhub.ai) and our workflow using several +real-world use cases. +" +Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction,Michiel van der Meer,http://arxiv.org/pdf/2209.08966v2.pdf,2022-09-19,"['cs.cl', 'cs.ai']",2209.08966v2.pdf," This paper describes our contributions to the Shared Task of the 9th Workshop +on Argument Mining (2022). Our approach uses Large Language Models for the task +of Argument Quality Prediction. We perform prompt engineering using GPT-3, and +also investigate the training paradigms multi-task learning, contrastive +learning, and intermediate-task training. We find that a mixed prediction setup +outperforms single models. Prompting GPT-3 works best for predicting argument +validity, and argument novelty is best estimated by a model trained using all +three training paradigms. +" +Legal Prompting: Teaching a Language Model to Think Like a Lawyer,Fangyi Yu,http://arxiv.org/pdf/2212.01326v2.pdf,2022-12-02,"['cs.cl', 'cs.ai', 'i.2.7']",2212.01326v2.pdf," Large language models that are capable of zero or few-shot prompting +approaches have given rise to the new research area of prompt engineering. +Recent advances showed that for example Chain-of-Thought (CoT) prompts can +improve arithmetic or common sense tasks significantly. We explore how such +approaches fare with legal reasoning tasks and take the COLIEE entailment task +based on the Japanese Bar exam for testing zero-shot/few-shot and fine-tuning +approaches. Our findings show that while CoT prompting and fine-tuning with +explanations approaches show improvements, the best results are produced by +prompts that are derived from specific legal reasoning techniques such as IRAC +(Issue, Rule, Application, Conclusion). Based on our experiments we improve the +2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best +system of 0.6789 accuracy with an accuracy of 0.7431. +" +Controllable Image Captioning via Prompting,Ning Wang,http://arxiv.org/pdf/2212.01803v1.pdf,2022-12-04,['cs.cv'],2212.01803v1.pdf," Despite the remarkable progress of image captioning, existing captioners +typically lack the controllable capability to generate desired image captions, +e.g., describing the image in a rough or detailed manner, in a factual or +emotional view, etc. In this paper, we show that a unified model is qualified +to perform well in diverse domains and freely switch among multiple styles. +Such a controllable capability is achieved by embedding the prompt learning +into the image captioning framework. To be specific, we design a set of prompts +to fine-tune the pre-trained image captioner. These prompts allow the model to +absorb stylized data from different domains for joint training, without +performance degradation in each domain. Furthermore, we optimize the prompts +with learnable vectors in the continuous word embedding space, avoiding the +heuristic prompt engineering and meanwhile exhibiting superior performance. In +the inference stage, our model is able to generate desired stylized captions by +choosing the corresponding prompts. 
Extensive experiments verify the +controllable capability of the proposed method. Notably, we achieve outstanding +performance on two diverse image captioning benchmarks including COCO Karpathy +split and TextCaps using a unified model. +" +Fake it till you make it: Learning transferable representations from synthetic ImageNet clones,Mert Bulent Sariyildiz,http://arxiv.org/pdf/2212.08420v2.pdf,2022-12-16,"['cs.cv', 'cs.lg']",2212.08420v2.pdf," Recent image generation models such as Stable Diffusion have exhibited an +impressive ability to generate fairly realistic images starting from a simple +text prompt. Could such models render real images obsolete for training image +prediction models? In this paper, we answer part of this provocative question +by investigating the need for real images when training models for ImageNet +classification. Provided only with the class names that have been used to build +the dataset, we explore the ability of Stable Diffusion to generate synthetic +clones of ImageNet and measure how useful these are for training classification +models from scratch. We show that with minimal and class-agnostic prompt +engineering, ImageNet clones are able to close a large part of the gap between +models produced by synthetic images and models trained with real images, for +the several standard classification benchmarks that we consider in this study. +More importantly, we show that models trained on synthetic images exhibit +strong generalization properties and perform on par with models trained on real +data for transfer. Project page: https://europe.naverlabs.com/imagenet-sd/ +" +Explanation Regeneration via Information Bottleneck,Qintong Li,http://arxiv.org/pdf/2212.09603v2.pdf,2022-12-19,['cs.cl'],2212.09603v2.pdf," Explaining the black-box predictions of NLP models naturally and accurately +is an important open problem in natural language generation. These free-text +explanations are expected to contain sufficient and carefully-selected evidence +to form supportive arguments for predictions. Due to the superior generative +capacity of large pretrained language models, recent work built on prompt +engineering enables explanation generation without specific training. However, +explanation generated through single-pass prompting often lacks sufficiency and +conciseness. To address this problem, we develop an information bottleneck +method EIB to produce refined explanations that are sufficient and concise. Our +approach regenerates the free-text explanation by polishing the single-pass +output from the pretrained language model but retaining the information that +supports the contents being explained. Experiments on two out-of-domain tasks +verify the effectiveness of EIB through automatic evaluation and +thoroughly-conducted human evaluation. +" +Optimizing Prompts for Text-to-Image Generation,Yaru Hao,http://arxiv.org/pdf/2212.09611v1.pdf,2022-12-19,"['cs.cl', 'cs.cv']",2212.09611v1.pdf," Well-designed prompts can guide text-to-image models to generate amazing +images. However, the performant prompts are often model-specific and misaligned +with user input. Instead of laborious human engineering, we propose prompt +adaptation, a general framework that automatically adapts original user input +to model-preferred prompts. Specifically, we first perform supervised +fine-tuning with a pretrained language model on a small collection of manually +engineered prompts. Then we use reinforcement learning to explore better +prompts. 
We define a reward function that encourages the policy to generate
+more aesthetically pleasing images while preserving the original user
+intentions. Experimental results on Stable Diffusion show that our method
+outperforms manual prompt engineering in terms of both automatic metrics and
+human preference ratings. Moreover, reinforcement learning further boosts
+performance, especially on out-of-domain prompts. The pretrained checkpoints
+are available at https://aka.ms/promptist. The demo can be found at
+https://aka.ms/promptist-demo.
+"
+Using Large Language Models to Generate Engaging Captions for Data Visualizations,Ashley Liew,http://arxiv.org/pdf/2212.14047v1.pdf,2022-12-27,"['cs.cl', 'cs.ai', 'cs.hc']",2212.14047v1.pdf," Creating compelling captions for data visualizations has been a longstanding
+challenge. Visualization researchers are typically untrained in journalistic
+reporting and hence the captions that are placed below data visualizations tend
+to be not overly engaging and rather just stick to basic observations about the
+data. In this work we explore the opportunities offered by the newly emerging
+crop of large language models (LLM) which use sophisticated deep learning
+technology to produce human-like prose. We ask, can these powerful software
+devices be purposed to produce engaging captions for generic data
+visualizations like a scatterplot? It turns out that the key challenge lies in
+designing the most effective prompt for the LLM, a task called prompt
+engineering. We report on first experiments using the popular LLM GPT-3 and
+deliver some promising results.
+"
+Fixing Hardware Security Bugs with Large Language Models,Baleegh Ahmad,http://arxiv.org/pdf/2302.01215v1.pdf,2023-02-02,['cs.cr'],2302.01215v1.pdf," Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's
+Codex have demonstrated capabilities in many coding-adjacent domains. In this
+work we consider how LLMs may be leveraged to automatically repair security
+relevant bugs present in hardware designs. We focus on bug repair in code
+written in the Hardware Description Language Verilog. For this study we build a
+corpus of domain-representative hardware security bugs. We then design and
+implement a framework to quantitatively evaluate the performance of any LLM
+tasked with fixing the specified bugs. The framework supports design space
+exploration of prompts (i.e., prompt engineering) and identifying the best
+parameters for the LLM. We show that an ensemble of LLMs can repair all ten of
+our benchmarks. This ensemble outperforms the state-of-the-art Cirfix hardware
+bug repair tool on its own suite of bugs. These results show that LLMs can
+repair hardware security bugs and the framework is an important step towards
+the ultimate goal of an automated end-to-end bug repair framework.
+"
+UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation,Daixuan Cheng,http://arxiv.org/pdf/2303.08518v3.pdf,2023-03-15,['cs.cl'],2303.08518v3.pdf," Large Language Models (LLMs) are popular for their impressive abilities, but
+the need for model-specific fine-tuning or task-specific prompt engineering can
+hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for
+Improving zero-Shot Evaluation), which tunes a lightweight and versatile
+retriever that automatically retrieves prompts for a given zero-shot task
+input. 
Specifically, we demonstrate universality in a cross-task and +cross-model scenario: the retriever is tuned on a diverse set of tasks, but +tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for +tuning the retriever, but test the retriever on different LLMs of much larger +scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that +UPRISE mitigates the hallucination problem in our experiments with ChatGPT, +suggesting its potential to improve even the strongest LLMs. Our model and code +are available at https://github.com/microsoft/LMOps. +" +Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models,Xinyang Liu,http://arxiv.org/pdf/2303.09100v1.pdf,2023-03-16,"['cs.cv', 'cs.cl', 'cs.lg']",2303.09100v1.pdf," For downstream applications of vision-language pre-trained models, there has +been significant interest in constructing effective prompts. Existing works on +prompt engineering, which either require laborious manual designs or optimize +the prompt tuning as a point estimation problem, may fail to describe diverse +characteristics of categories and limit their applications. We introduce a +Bayesian probabilistic resolution to prompt learning, where the label-specific +stochastic prompts are generated hierarchically by first sampling a latent +vector from an underlying distribution and then employing a lightweight +generative model. Importantly, we semantically regularize prompt learning with +the visual knowledge and view images and the corresponding prompts as patch and +token sets under optimal transport, which pushes the prompt tokens to +faithfully capture the label-specific visual concepts, instead of overfitting +the training categories. Moreover, the proposed model can also be +straightforwardly extended to the conditional case where the +instance-conditional prompts are generated to improve the generalizability. +Extensive experiments on 15 datasets show promising transferability and +generalization performance of our proposed model. +" +Safety Analysis in the Era of Large Language Models: A Case Study of STPA using ChatGPT,Yi Qi,http://arxiv.org/pdf/2304.01246v2.pdf,2023-04-03,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.se']",2304.01246v2.pdf," Can safety analysis make use of Large Language Models (LLMs)? A case study +explores Systems Theoretic Process Analysis (STPA) applied to Automatic +Emergency Brake (AEB) and Electricity Demand Side Management (DSM) systems +using ChatGPT. We investigate how collaboration schemes, input semantic +complexity, and prompt guidelines influence STPA results. Comparative results +show that using ChatGPT without human intervention may be inadequate due to +reliability related issues, but with careful design, it may outperform human +experts. No statistically significant differences are found when varying the +input semantic complexity or using common prompt guidelines, which suggests the +necessity for developing domain-specific prompt engineering. We also highlight +future challenges, including concerns about LLM trustworthiness and the +necessity for standardisation and regulation in this domain. +" +Geotechnical Parrot Tales (GPT): Harnessing Large Language Models in geotechnical engineering,Krishna Kumar,http://arxiv.org/pdf/2304.02138v3.pdf,2023-04-04,"['cs.cl', 'physics.geo-ph', 'i.2.7; j.2.6']",2304.02138v3.pdf," The widespread adoption of large language models (LLMs), such as OpenAI's +ChatGPT, could revolutionize various industries, including geotechnical +engineering. 
However, GPT models can sometimes generate plausible-sounding but +false outputs, leading to hallucinations. In this article, we discuss the +importance of prompt engineering in mitigating these risks and harnessing the +full potential of GPT for geotechnical applications. We explore the challenges +and pitfalls associated with LLMs and highlight the role of context in ensuring +accurate and valuable responses. Furthermore, we examine the development of +context-specific search engines and the potential of LLMs to become a natural +interface for complex tasks, such as data analysis and design. We also develop +a unified interface using natural language to handle complex geotechnical +engineering tasks and data analysis. By integrating GPT into geotechnical +engineering workflows, professionals can streamline their work and develop +sustainable and resilient infrastructure systems for the future. +" +Evaluation of ChatGPT Family of Models for Biomedical Reasoning and Classification,Shan Chen,http://arxiv.org/pdf/2304.02496v1.pdf,2023-04-05,"['cs.cl', 'cs.ai']",2304.02496v1.pdf," Recent advances in large language models (LLMs) have shown impressive ability +in biomedical question-answering, but have not been adequately investigated for +more specific biomedical applications. This study investigates the performance +of LLMs such as the ChatGPT family of models (GPT-3.5s, GPT-4) in biomedical +tasks beyond question-answering. Because no patient data can be passed to the +OpenAI API public interface, we evaluated model performance with over 10000 +samples as proxies for two fundamental tasks in the clinical domain - +classification and reasoning. The first task is classifying whether statements +of clinical and policy recommendations in scientific literature constitute +health advice. The second task is causal relation detection from the biomedical +literature. We compared LLMs with simpler models, such as bag-of-words (BoW) +with logistic regression, and fine-tuned BioBERT models. Despite the excitement +around viral ChatGPT, we found that fine-tuning for two fundamental NLP tasks +remained the best strategy. The simple BoW model performed on par with the most +complex LLM prompting. Prompt engineering required significant investment. +" +"VOICE: Visual Oracle for Interaction, Conversation, and Explanation",Donggang Jia,http://arxiv.org/pdf/2304.04083v1.pdf,2023-04-08,"['cs.hc', 'cs.gr']",2304.04083v1.pdf," We present VOICE, a novel approach for connecting large language models' +(LLM) conversational capabilities with interactive exploratory visualization. +VOICE introduces several innovative technical contributions that drive our +conversational visualization framework. Our foundation is a pack-of-bots that +can perform specific tasks, such as assigning tasks, extracting instructions, +and generating coherent content. We employ fine-tuning and prompt engineering +techniques to tailor bots' performance to their specific roles and accurately +respond to user queries, and a new prompt-based iterative scene-tree generation +establishes a coupling with a structural model. Our text-to-visualization +method generates a flythrough sequence matching the content explanation. +Finally, 3D natural language interaction provides capabilities to navigate and +manipulate the 3D models in real-time. The VOICE framework can receive +arbitrary voice commands from the user and responds verbally, tightly coupled +with corresponding visual representation with low latency and high accuracy. 
We +demonstrate the effectiveness and high generalizability potential of our +approach by applying it to two distinct domains: analyzing three 3D molecular +models with multi-scale and multi-instance attributes, and showcasing its +effectiveness on a cartographic map visualization. A free copy of this paper +and all supplemental materials are available at https://osf.io/g7fbr/. +" +Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization,Puyuan Peng,http://arxiv.org/pdf/2305.11095v3.pdf,2023-05-18,"['eess.as', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.sd']",2305.11095v3.pdf," We investigate the emergent abilities of the recently proposed web-scale +speech model Whisper, by adapting it to unseen tasks with prompt engineering. +We selected three tasks: audio-visual speech recognition (AVSR), code-switched +speech recognition (CS-ASR), and speech translation (ST) on unseen language +pairs. We design task-specific prompts, by either leveraging another +large-scale model, or simply manipulating the special tokens in the default +prompts. Experiments show that compared to the default prompts, our proposed +prompts improve performance by 10% to 45% on the three zero-shot tasks, and +even outperform SotA supervised models on some datasets. In addition, our +experiments reveal many interesting properties of Whisper, including its +robustness to prompts, bias on accents, and the multilingual understanding in +its latent space. Code is available at +https://github.com/jasonppy/PromptingWhisper +" +Constructing Dreams using Generative AI,Safinah Ali,http://arxiv.org/pdf/2305.12013v1.pdf,2023-05-19,"['cs.hc', 'cs.ai', 'cs.cy']",2305.12013v1.pdf," Generative AI tools introduce new and accessible forms of media creation for +youth. They also raise ethical concerns about the generation of fake media, +data protection, privacy and ownership of AI-generated art. Since generative AI +is already being used in products used by youth, it is critical that they +understand how these tools work and how they can be used or misused. In this +work, we facilitated students' generative AI learning through expression of +their imagined future identities. We designed a learning workshop - Dreaming +with AI - where students learned about the inner workings of generative AI +tools, used text-to-image generation algorithms to create their imaged future +dreams, reflected on the potential benefits and harms of generative AI tools +and voiced their opinions about policies for the use of these tools in +classrooms. In this paper, we present the learning activities and experiences +of 34 high school students who engaged in our workshops. Students reached +creative learning objectives by using prompt engineering to create their future +dreams, gained technical knowledge by learning the abilities, limitations, +text-visual mappings and applications of generative AI, and identified most +potential societal benefits and harms of generative AI. +" +Interactive Data Synthesis for Systematic Vision Adaptation via LLMs-AIGCs Collaboration,Qifan Yu,http://arxiv.org/pdf/2305.12799v1.pdf,2023-05-22,['cs.cv'],2305.12799v1.pdf," Recent text-to-image generation models have shown promising results in +generating high-fidelity photo-realistic images. In parallel, the problem of +data scarcity has brought a growing interest in employing AIGC technology for +high-quality data expansion. However, this paradigm requires well-designed +prompt engineering that cost-less data expansion and labeling remain +under-explored. 
Inspired by LLM's powerful capability in task guidance, we
+propose a new paradigm of annotated data expansion named as ChatGenImage. The
+core idea behind it is to leverage the complementary strengths of diverse
+models to establish a highly effective and user-friendly pipeline for
+interactive data augmentation. In this work, we extensively study how LLMs
+communicate with AIGC model to achieve more controllable image generation and
+make the first attempt to collaborate them for automatic data augmentation for
+a variety of downstream tasks. Finally, we present fascinating results obtained
+from our ChatGenImage framework and demonstrate the powerful potential of our
+synthetic data for systematic vision adaptation. Our codes are available at
+https://github.com/Yuqifan1117/Labal-Anything-Pipeline.
+"
+Making Language Models Better Tool Learners with Execution Feedback,Shuofei Qiao,http://arxiv.org/pdf/2305.13068v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ir', 'cs.lg']",2305.13068v1.pdf," Tools serve as pivotal interfaces that enable humans to understand and
+reshape the world. With the advent of foundational models, AI systems can
+utilize tools to expand their capabilities and interact with the world.
+Existing tool learning methodologies, encompassing supervised fine-tuning and
+prompt engineering approaches, often induce language models to utilize tools
+indiscriminately, as complex problems often exceed their own competencies.
+However, introducing tools for simple tasks, which the models themselves can
+readily resolve, can inadvertently propagate errors rather than enhance
+performance. This leads to the research question: can we teach language models
+when and how to use tools? To meet this need, we propose Tool leaRning wIth
+exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the
+model to continually learn through feedback derived from tool execution,
+thereby learning when and how to use tools effectively. Experimental results,
+backed by further analysis, show that TRICE can make the language model
+selectively use tools by decreasing the model's dependency on tools while
+enhancing the performance. Code and datasets will be available in
+https://github.com/zjunlp/trice.
+"
+Prompt position really matters in few-shot and zero-shot NLU tasks,Junyu Mao,http://arxiv.org/pdf/2305.14493v2.pdf,2023-05-23,['cs.cl'],2305.14493v2.pdf," Prompt-based models have made remarkable advancements in the fields of
+zero-shot and few-shot learning, attracting a lot of attention from
+researchers. Developing an effective prompt template plays a critical role.
+However, prior studies have mainly focused on prompt vocabulary selection or
+embedding initialization with the reserved prompt position fixed. In this
+empirical study, we conduct the most comprehensive analysis to date of prompt
+position option for natural language understanding tasks. Our findings quantify
+the substantial impact prompt position has on model performance. We observe
+that the prompt position used in prior studies is often sub-optimal for both
+zero-shot and few-shot settings. These findings suggest prompt position
+optimisation as an interesting research direction alongside the existing focus
+on prompt engineering. 
+" +ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER,Amirhossein Layegh,http://arxiv.org/pdf/2305.17951v1.pdf,2023-05-29,"['cs.cl', 'cs.ai']",2305.17951v1.pdf," Prompt-based language models have produced encouraging results in numerous +applications, including Named Entity Recognition (NER) tasks. NER aims to +identify entities in a sentence and provide their types. However, the strong +performance of most available NER approaches is heavily dependent on the design +of discrete prompts and a verbalizer to map the model-predicted outputs to +entity categories, which are complicated undertakings. To address these +challenges, we present ContrastNER, a prompt-based NER framework that employs +both discrete and continuous tokens in prompts and uses a contrastive learning +approach to learn the continuous prompts and forecast entity types. The +experimental results demonstrate that ContrastNER obtains competitive +performance to the state-of-the-art NER methods in high-resource settings and +outperforms the state-of-the-art models in low-resource circumstances without +requiring extensive manual prompt engineering and verbalizer design. +" +Conformal Prediction with Large Language Models for Multi-Choice Question Answering,Bhawesh Kumar,http://arxiv.org/pdf/2305.18404v3.pdf,2023-05-28,"['cs.cl', 'cs.lg', 'stat.ml']",2305.18404v3.pdf," As large language models continue to be widely developed, robust uncertainty +quantification techniques will become crucial for their safe deployment in +high-stakes scenarios. In this work, we explore how conformal prediction can be +used to provide uncertainty quantification in language models for the specific +task of multiple-choice question-answering. We find that the uncertainty +estimates from conformal prediction are tightly correlated with prediction +accuracy. This observation can be useful for downstream applications such as +selective classification and filtering out low-quality predictions. We also +investigate the exchangeability assumption required by conformal prediction to +out-of-subject questions, which may be a more realistic scenario for many +practical applications. Our work contributes towards more trustworthy and +reliable usage of large language models in safety-critical situations, where +robust guarantees of error rate are required. +" +Test-Time Training on Nearest Neighbors for Large Language Models,Moritz Hardt,http://arxiv.org/pdf/2305.18466v2.pdf,2023-05-29,"['cs.cl', 'cs.lg']",2305.18466v2.pdf," Many recent efforts aim to augment language models with relevant information +retrieved from a database at test time. We avoid the need for prompt +engineering by directly fine-tuning the model on data retrieved at test time +using its standard training setup. For this purpose, we build a large-scale +distributed nearest neighbor index based on text embeddings of the Pile +dataset. Given a query to a language model, our system retrieves the neighbors +of the query and fine-tunes the model on the text data corresponding to those +neighbors. Surprisingly, retrieving and training on as few as 20 neighbors, +each for only one gradient iteration, drastically improves performance across +more than twenty language modeling tasks in the Pile benchmark. For example, +test-time training significantly narrows the performance gap between a small +GPT2 model and a GPTNeo model, more than ten times larger, that was +specifically trained to convergence on the Pile. Sufficient index quality and +size, however, are important. 
Our work establishes a valuable first baseline
+for implementing test-time training in the context of large language models,
+opening the door to numerous promising research avenues.
+"
+CONA: A novel CONtext-Aware instruction paradigm for communication using large language model,Nan Zhou,http://arxiv.org/pdf/2305.18620v1.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.hc']",2305.18620v1.pdf," We introduce CONA, a novel context-aware instruction paradigm for effective
+knowledge dissemination using generative pre-trained transformer (GPT) models.
+CONA is a flexible framework designed to leverage the capabilities of Large
+Language Models (LLMs) and incorporate DIKW (Data, Information, Knowledge,
+Wisdom) hierarchy to automatically instruct and optimise presentation content,
+anticipate potential audience inquiries, and provide context-aware answers that
+are adaptive to the knowledge level of the audience group. The unique aspect of the
+CONA paradigm lies in its combination of an independent advisory mechanism and
+a recursive feedback loop rooted in the DIKW hierarchy. This synergy
+significantly enhances context-aware contents, ensuring they are accessible and
+easily comprehended by the audience. This paradigm is an early pioneer to
+explore new methods for knowledge dissemination and communication in the LLM
+era, offering effective support for everyday knowledge sharing scenarios. We
+conduct experiments on a range of audience roles, along with materials from
+various disciplines using GPT4. Both quantitative and qualitative results
+demonstrated that the proposed CONA paradigm achieved remarkable performance
+compared to the outputs guided by conventional prompt engineering.
+"
+GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction,Rui Yang,http://arxiv.org/pdf/2305.18752v1.pdf,2023-05-30,"['cs.cv', 'cs.cl']",2305.18752v1.pdf," This paper aims to efficiently enable Large Language Models (LLMs) to use
+multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have
+shown great potential for tool usage through sophisticated prompt engineering.
+Nevertheless, these models typically rely on prohibitive computational costs
+and publicly inaccessible data. To address these challenges, we propose the
+GPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA and
+OPT, to use tools. It generates an instruction-following dataset by prompting
+an advanced teacher with various multi-modal contexts. By using the Low-Rank
+Adaptation (LoRA) optimization, our approach facilitates the open-source LLMs
+to solve a range of visual problems, including visual comprehension and image
+generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to
+use tools, which is performed in both zero-shot and fine-tuning ways. Extensive
+experiments demonstrate the effectiveness of our method on various language
+models, which not only significantly improves the accuracy of invoking seen
+tools, but also enables the zero-shot capacity for unseen tools. The code and
+demo are available at https://github.com/StevenGrove/GPT4Tools.
+"
+Contextualizing Problems to Student Interests at Scale in Intelligent Tutoring System Using Large Language Models,Gautam Yadav,http://arxiv.org/pdf/2306.00190v1.pdf,2023-05-31,['cs.hc'],2306.00190v1.pdf," Contextualizing problems to align with student interests can significantly
+improve learning outcomes. However, this task often presents scalability
+challenges due to resource and time constraints. 
Recent advancements in Large +Language Models (LLMs) like GPT-4 offer potential solutions to these issues. +This study explores the ability of GPT-4 in the contextualization of problems +within CTAT, an intelligent tutoring system, aiming to increase student +engagement and enhance learning outcomes. Through iterative prompt engineering, +we achieved meaningful contextualization that preserved the difficulty and +original intent of the problem, thereby not altering values or overcomplicating +the questions. While our research highlights the potential of LLMs in +educational settings, we acknowledge current limitations, particularly with +geometry problems, and emphasize the need for ongoing evaluation and research. +Future work includes systematic studies to measure the impact of this tool on +students' learning outcomes and enhancements to handle a broader range of +problems. +" +Exploring EFL students' prompt engineering in human-AI story writing: an Activity Theory perspective,David James Woo,http://arxiv.org/pdf/2306.01798v1.pdf,2023-06-01,"['cs.cy', 'cs.ai']",2306.01798v1.pdf," This study applies Activity Theory to investigate how English as a foreign +language (EFL) students prompt generative artificial intelligence (AI) tools +during short story writing. Sixty-seven Hong Kong secondary school students +created generative-AI tools using open-source language models and wrote short +stories with them. The study collected and analyzed the students' generative-AI +tools, short stories, and written reflections on their conditions or purposes +for prompting. The research identified three main themes regarding the purposes +for which students prompt generative-AI tools during short story writing: a +lack of awareness of purposes, overcoming writer's block, and developing, +expanding, and improving the story. The study also identified common +characteristics of students' activity systems, including the sophistication of +their generative-AI tools, the quality of their stories, and their school's +overall academic achievement level, for their prompting of generative-AI tools +for the three purposes during short story writing. The study's findings suggest +that teachers should be aware of students' purposes for prompting generative-AI +tools to provide tailored instructions and scaffolded guidance. The findings +may also help designers provide differentiated instructions for users at +various levels of story development when using a generative-AI tool. +" +Prompting Is All You Need: Automated Android Bug Replay with Large Language Models,Sidong Feng,http://arxiv.org/pdf/2306.01987v2.pdf,2023-06-03,['cs.se'],2306.01987v2.pdf," Bug reports are vital for software maintenance that allow users to inform +developers of the problems encountered while using the software. As such, +researchers have committed considerable resources toward automating bug replay +to expedite the process of software maintenance. Nonetheless, the success of +current automated approaches is largely dictated by the characteristics and +quality of bug reports, as they are constrained by the limitations of +manually-crafted patterns and pre-defined vocabulary lists. Inspired by the +success of Large Language Models (LLMs) in natural language understanding, we +propose AdbGPT, a new lightweight approach to automatically reproduce the bugs +from bug reports through prompt engineering, without any training and +hard-coding effort. 
AdbGPT leverages few-shot learning and chain-of-thought +reasoning to elicit human knowledge and logical reasoning from LLMs to +accomplish the bug replay in a manner similar to a developer. Our evaluations +demonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3% +of bug reports in 253.6 seconds, outperforming the state-of-the-art baselines +and ablation studies. We also conduct a small-scale user study to confirm the +usefulness of AdbGPT in enhancing developers' bug replay capabilities. +" +ChatGPT as a mapping assistant: A novel method to enrich maps with generative AI and content derived from street-level photographs,Levente Juhász,http://arxiv.org/pdf/2306.03204v1.pdf,2023-06-05,"['cs.cy', 'cs.cv']",2306.03204v1.pdf," This paper explores the concept of leveraging generative AI as a mapping +assistant for enhancing the efficiency of collaborative mapping. We present +results of an experiment that combines multiple sources of volunteered +geographic information (VGI) and large language models (LLMs). Three analysts +described the content of crowdsourced Mapillary street-level photographs taken +along roads in a small test area in Miami, Florida. GPT-3.5-turbo was +instructed to suggest the most appropriate tagging for each road in +OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a +state-of-the-art multimodal pre-training method as an artificial analyst of +street-level photographs in addition to human analysts. Results demonstrate two +ways to effectively increase the accuracy of mapping suggestions without +modifying the underlying AI models: by (1) providing a more detailed +description of source photographs, and (2) combining prompt engineering with +additional context (e.g. location and objects detected along a road). The first +approach increases the suggestion accuracy by up to 29%, and the second one by +up to 20%. +" +An Approach to Solving the Abstraction and Reasoning Corpus (ARC) Challenge,Tan John Chong Min,http://arxiv.org/pdf/2306.03553v1.pdf,2023-06-06,['cs.ai'],2306.03553v1.pdf," We utilise the power of Large Language Models (LLMs), in particular GPT4, to +be prompt engineered into performing an arbitrary task. Here, we give the model +some human priors via text, along with some typical procedures for solving the +ARC tasks, and ask it to generate the i) broad description of the input-output +relation, ii) detailed steps of the input-output mapping, iii) use the detailed +steps to perform manipulation on the test input and derive the test output. The +current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those +with small grids of 8x8 and below). With tweaks to the prompt to make it more +specific for the use case, it can solve more. We posit that when scaled to a +multi-agent system with usage of past memory and equipped with an image +interpretation tool via Visual Question Answering, we may actually be able to +solve the majority of the ARC challenge +" +Protect Your Prompts: Protocols for IP Protection in LLM Applications,M. A. van Wyk,http://arxiv.org/pdf/2306.06297v1.pdf,2023-06-09,"['cs.cl', 'cs.ai', '91d10, 68t10, 03d40', 'i.2.6; k.6.5; f.3.2']",2306.06297v1.pdf," With the rapid adoption of AI in the form of large language models (LLMs), +the potential value of carefully engineered prompts has become significant. +However, to realize this potential, prompts should be tradable on an open +market. 
Since prompts are, at present, generally economically non-excludable, +by virtue of their nature as text, no general competitive market has yet been +established. This note discusses two protocols intended to provide protection +of prompts, elevating their status as intellectual property, thus confirming +the intellectual property rights of prompt engineers, and potentially +supporting the flourishing of an open market for LLM prompts. +" +Scalable 3D Captioning with Pretrained Models,Tiange Luo,http://arxiv.org/pdf/2306.07279v2.pdf,2023-06-12,['cs.cv'],2306.07279v2.pdf," We introduce Cap3D, an automatic approach for generating descriptive text for +3D objects. This approach utilizes pretrained models from image captioning, +image-text alignment, and LLM to consolidate captions from multiple views of a +3D asset, completely side-stepping the time-consuming and costly process of +manual annotation. We apply Cap3D to the recently introduced large-scale 3D +dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted +using 41k human annotations from the same dataset, demonstrates that Cap3D +surpasses human-authored descriptions in terms of quality, cost, and speed. +Through effective prompt engineering, Cap3D rivals human performance in +generating geometric descriptions on 17k collected annotations from the ABO +dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions, +and show Cap3D outperforms; and benchmark the SOTA including Point-E, Shape-E, +and DreamFusion. +" +FALL-E: A Foley Sound Synthesis Model and Strategies,Minsung Kang,http://arxiv.org/pdf/2306.09807v2.pdf,2023-06-16,"['eess.as', 'cs.lg', 'cs.sd']",2306.09807v2.pdf," This paper introduces FALL-E, a foley synthesis system and its +training/inference strategies. The FALL-E model employs a cascaded approach +comprising low-resolution spectrogram generation, spectrogram super-resolution, +and a vocoder. We trained every sound-related model from scratch using our +extensive datasets, and utilized a pre-trained language model. We conditioned +the model with dataset-specific texts, enabling it to learn sound quality and +recording environment based on text input. Moreover, we leveraged external +language models to improve text descriptions of our datasets and performed +prompt engineering for quality, coherence, and diversity. FALL-E was evaluated +by an objective measure as well as listening tests in the DCASE 2023 challenge +Task 7. The submission achieved the second place on average, while achieving +the best score for diversity, second place for audio quality, and third place +for class fitness. +" +The Cultivated Practices of Text-to-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2306.11393v1.pdf,2023-06-20,"['cs.cy', 'cs.ai', 'k.4; j.5; i.2.0; k.5.m']",2306.11393v1.pdf," Humankind is entering a novel creative era in which anybody can synthesize +digital information using generative artificial intelligence (AI). +Text-to-image generation, in particular, has become vastly popular and millions +of practitioners produce AI-generated images and AI art online. This chapter +first gives an overview of the key developments that enabled a healthy +co-creative online ecosystem around text-to-image generation to rapidly emerge, +followed by a high-level description of key elements in this ecosystem. A +particular focus is placed on prompt engineering, a creative practice that has +been embraced by the AI art community. 
It is then argued that the emerging +co-creative ecosystem constitutes an intelligent system on its own - a system +that both supports human creativity, but also potentially entraps future +generations and limits future development efforts in AI. The chapter discusses +the potential risks and dangers of cultivating this co-creative ecosystem, such +as the bias inherent in today's training data, potential quality degradation in +future image generation systems due to synthetic data becoming common place, +and the potential long-term effects of text-to-image generation on people's +imagination, ambitions, and development. +" +Solving and Generating NPR Sunday Puzzles with Large Language Models,Jingmiao Zhao,http://arxiv.org/pdf/2306.12255v1.pdf,2023-06-21,['cs.cl'],2306.12255v1.pdf," We explore the ability of large language models to solve and generate puzzles +from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15 +years of on-air puzzles. We evaluate four large language models using PUZZLEQA, +in both multiple choice and free response formats, and explore two prompt +engineering techniques to improve free response performance: chain-of-thought +reasoning and prompt summarization. We find that state-of-the-art large +language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5, +achieves 50.2% loose accuracy. However, in our few-shot puzzle generation +experiment, we find no evidence that models can generate puzzles: GPT-3.5 +generates puzzles with answers that do not conform to the generated rules. +Puzzle generation remains a challenging task for future work. +" +Federated Large Language Model: A Position Paper,Chaochao Chen,http://arxiv.org/pdf/2307.08925v1.pdf,2023-07-18,"['cs.lg', 'cs.ai', 'cs.cl']",2307.08925v1.pdf," Large scale language models (LLM) have received significant attention and +found diverse applications across various domains, but their development +encounters challenges in real-world scenarios. These challenges arise due to +the scarcity of public domain data availability and the need to maintain +privacy with respect to private domain data. To address these issues, federated +learning (FL) has emerged as a promising technology that enables collaborative +training of shared models while preserving decentralized data. We propose the +concept of federated LLM, which comprises three key components, i.e., federated +LLM pre-training, federated LLM fine-tuning, and federated LLM prompt +engineering. For each component, we discuss its advantage over traditional LLM +training methods and propose specific engineering strategies for +implementation. Furthermore, we explore the novel challenges introduced by the +integration of FL and LLM. We analyze existing solutions and identify potential +obstacles faced by these solutions within the context of federated LLM. +" +Chit-Chat or Deep Talk: Prompt Engineering for Process Mining,Urszula Jessen,http://arxiv.org/pdf/2307.09909v1.pdf,2023-07-19,['cs.ai'],2307.09909v1.pdf," This research investigates the application of Large Language Models (LLMs) to +augment conversational agents in process mining, aiming to tackle its inherent +complexity and diverse skill requirements. While LLM advancements present novel +opportunities for conversational process mining, generating efficient outputs +is still a hurdle. We propose an innovative approach that amend many issues in +existing solutions, informed by prior research on Natural Language Processing +(NLP) for conversational agents. 
Leveraging LLMs, our framework improves both +accessibility and agent performance, as demonstrated by experiments on public +question and data sets. Our research sets the stage for future explorations +into LLMs' role in process mining and concludes with propositions for enhancing +LLM memory, implementing real-time user testing, and examining diverse data +sets. +" +Large Language Models can accomplish Business Process Management Tasks,Michael Grohs,http://arxiv.org/pdf/2307.09923v1.pdf,2023-07-19,['cs.cl'],2307.09923v1.pdf," Business Process Management (BPM) aims to improve organizational activities +and their outcomes by managing the underlying processes. To achieve this, it is +often necessary to consider information from various sources, including +unstructured textual documents. Therefore, researchers have developed several +BPM-specific solutions that extract information from textual documents using +Natural Language Processing techniques. These solutions are specific to their +respective tasks and cannot accomplish multiple process-related problems as a +general-purpose instrument. However, in light of the recent emergence of Large +Language Models (LLMs) with remarkable reasoning capabilities, such a +general-purpose instrument with multiple applications now appears attainable. +In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by +applying a specific LLM to three exemplary tasks: mining imperative process +models from textual descriptions, mining declarative process models from +textual descriptions, and assessing the suitability of process tasks from +textual descriptions for robotic process automation. We show that, without +extensive configuration or prompt engineering, LLMs perform comparably to or +better than existing solutions and discuss implications for future BPM research +as well as practical usage. +" +SentimentGPT: Exploiting GPT for Advanced Sentiment Analysis and its Departure from Current Machine Learning,Kiana Kheiri,http://arxiv.org/pdf/2307.10234v2.pdf,2023-07-16,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.si']",2307.10234v2.pdf," This study presents a thorough examination of various Generative Pretrained +Transformer (GPT) methodologies in sentiment analysis, specifically in the +context of Task 4 on the SemEval 2017 dataset. Three primary strategies are +employed: 1) prompt engineering using the advanced GPT-3.5 Turbo, 2) +fine-tuning GPT models, and 3) an inventive approach to embedding +classification. The research yields detailed comparative insights among these +strategies and individual GPT models, revealing their unique strengths and +potential limitations. Additionally, the study compares these GPT-based +methodologies with other current, high-performing models previously used with +the same dataset. The results illustrate the significant superiority of the GPT +approaches in terms of predictive performance, more than 22\% in F1-score +compared to the state-of-the-art. Further, the paper sheds light on common +challenges in sentiment analysis tasks, such as understanding context and +detecting sarcasm. It underscores the enhanced capabilities of the GPT models +to effectively handle these complexities. Taken together, these findings +highlight the promising potential of GPT models in sentiment analysis, setting +the stage for future research in this field. 
The code can be found at +https://github.com/DSAatUSU/SentimentGPT +" +Domain Knowledge Distillation from Large Language Model: An Empirical Study in the Autonomous Driving Domain,Yun Tang,http://arxiv.org/pdf/2307.11769v1.pdf,2023-07-17,['cs.cl'],2307.11769v1.pdf," Engineering knowledge-based (or expert) systems require extensive manual +effort and domain knowledge. As Large Language Models (LLMs) are trained using +an enormous amount of cross-domain knowledge, it becomes possible to automate +such engineering processes. This paper presents an empirical automation and +semi-automation framework for domain knowledge distillation using prompt +engineering and the LLM ChatGPT. We assess the framework empirically in the +autonomous driving domain and present our key observations. In our +implementation, we construct the domain knowledge ontology by ""chatting"" with +ChatGPT. The key finding is that while fully automated domain ontology +construction is possible, human supervision and early intervention typically +improve efficiency and output quality as they lessen the effects of response +randomness and the butterfly effect. We, therefore, also develop a web-based +distillation assistant enabling supervision and flexible intervention at +runtime. We hope our findings and tools could inspire future research toward +revolutionizing the engineering of knowledge-based systems across application +domains. +" +Copilot for Xcode: Exploring AI-Assisted Programming by Prompting Cloud-based Large Language Models,Chee Wei Tan,http://arxiv.org/pdf/2307.14349v1.pdf,2023-07-08,"['cs.se', 'cs.ai']",2307.14349v1.pdf," This paper presents an AI-assisted programming tool called Copilot for Xcode +for program composition and design to support human software developers. By +seamlessly integrating cloud-based Large Language Models (LLM) with Apple's +local development environment, Xcode, this tool enhances productivity and +unleashes creativity for software development in Apple software ecosystem +(e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP) +techniques, Copilot for Xcode effectively processes source code tokens and +patterns within code repositories, enabling features such as code generation, +autocompletion, documentation, and error detection. Software developers can +also query and make ""small"" decisions for program composition, some of which +can be made simultaneously, and this is facilitated through prompt engineering +in a chat interface of Copilot for Xcode. Finally, we present simple case +studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt +popular LLM services like OpenAI ChatGPT for program composition and design. +" +Backdoor Attacks for In-Context Learning with Language Models,Nikhil Kandpal,http://arxiv.org/pdf/2307.14692v1.pdf,2023-07-27,['cs.cr'],2307.14692v1.pdf," Because state-of-the-art language models are expensive to train, most +practitioners must make use of one of the few publicly available language +models or language model APIs. This consolidation of trust increases the +potency of backdoor attacks, where an adversary tampers with a machine learning +model in order to make it perform some malicious behavior on inputs that +contain a predefined backdoor trigger. 
We show that the in-context learning +ability of large language models significantly complicates the question of +developing backdoor attacks, as a successful backdoor must work against various +prompting strategies and should not affect the model's general purpose +capabilities. We design a new attack for eliciting targeted misclassification +when language models are prompted to perform a particular target task and +demonstrate the feasibility of this attack by backdooring multiple large +language models ranging in size from 1.3 billion to 6 billion parameters. +Finally we study defenses to mitigate the potential harms of our attack: for +example, while in the white-box setting we show that fine-tuning models for as +few as 500 steps suffices to remove the backdoor behavior, in the black-box +setting we are unable to develop a successful defense that relies on prompt +engineering alone. +" +Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models,Keyu Pan,http://arxiv.org/pdf/2307.16180v1.pdf,2023-07-30,['cs.cl'],2307.16180v1.pdf," The field of large language models (LLMs) has made significant progress, and +their knowledge storage capacity is approaching that of human beings. +Furthermore, advanced techniques, such as prompt learning and reinforcement +learning, are being employed to address ethical concerns and hallucination +problems associated with LLMs, bringing them closer to aligning with human +values. This situation naturally raises the question of whether LLMs with +human-like abilities possess a human-like personality? In this paper, we aim to +investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a +widespread human personality assessment tool, as an evaluation metric for LLMs. +Specifically, extensive experiments will be conducted to explore: 1) the +personality types of different LLMs, 2) the possibility of changing the +personality types by prompt engineering, and 3) How does the training dataset +affect the model's personality. Although the MBTI is not a rigorous assessment, +it can still reflect the similarity between LLMs and human personality. In +practice, the MBTI has the potential to serve as a rough indicator. Our codes +are available at +https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti. +" +Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment,Saizhuo Wang,http://arxiv.org/pdf/2308.00016v1.pdf,2023-07-31,"['q-fin.cp', 'cs.ai', 'cs.cl']",2308.00016v1.pdf," One of the most important tasks in quantitative investment research is mining +new alphas (effective trading signals or factors). Traditional alpha mining +methods, either hand-crafted factor synthesizing or algorithmic factor mining +(e.g., search with genetic programming), have inherent limitations, especially +in implementing the ideas of quants. In this work, we propose a new alpha +mining paradigm by introducing human-AI interaction, and a novel prompt +engineering algorithmic framework to implement this paradigm by leveraging the +power of large language models. Moreover, we develop Alpha-GPT, a new +interactive alpha mining system framework that provides a heuristic way to +``understand'' the ideas of quant researchers and outputs creative, insightful, +and effective alphas. We demonstrate the effectiveness and advantage of +Alpha-GPT via a number of alpha mining experiments. 
+" +Optimizing Machine Translation through Prompt Engineering: An Investigation into ChatGPT's Customizability,Masaru Yamada,http://arxiv.org/pdf/2308.01391v1.pdf,2023-08-02,['cs.cl'],2308.01391v1.pdf," This paper explores the influence of integrating the purpose of the +translation and the target audience into prompts on the quality of translations +produced by ChatGPT. Drawing on previous translation studies, industry +practices, and ISO standards, the research underscores the significance of the +pre-production phase in the translation process. The study reveals that the +inclusion of suitable prompts in large-scale language models like ChatGPT can +yield flexible translations, a feat yet to be realized by conventional Machine +Translation (MT). The research scrutinizes the changes in translation quality +when prompts are used to generate translations that meet specific conditions. +The evaluation is conducted from a practicing translator's viewpoint, both +subjectively and qualitatively, supplemented by the use of OpenAI's word +embedding API for cosine similarity calculations. The findings suggest that the +integration of the purpose and target audience into prompts can indeed modify +the generated translations, generally enhancing the translation quality by +industry standards. The study also demonstrates the practical application of +the ""good translation"" concept, particularly in the context of marketing +documents and culturally dependent idioms. +" +InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent,Po-Lin Chen,http://arxiv.org/pdf/2308.01552v1.pdf,2023-08-03,"['cs.ai', 'cs.cl', 'cs.lg']",2308.01552v1.pdf," This research paper delves into the integration of OpenAI's ChatGPT into +embodied agent systems, evaluating its influence on interactive decision-making +benchmark. Drawing a parallel to the concept of people assuming roles according +to their unique strengths, we introduce InterAct. In this approach, we feed +ChatGPT with varied prompts, assigning it a numerous roles like a checker and a +sorter, then integrating them with the original language model. Our research +shows a remarkable success rate of 98% in AlfWorld, which consists of 6 +different tasks in a simulated household environment, emphasizing the +significance of proficient prompt engineering. The results highlight ChatGPT's +competence in comprehending and performing intricate tasks effectively in +real-world settings, thus paving the way for further advancements in task +planning. +" +RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model,Yao Lu,http://arxiv.org/pdf/2308.05345v2.pdf,2023-08-10,"['cs.lg', 'cs.ar']",2308.05345v2.pdf," Inspired by the recent success of large language models (LLMs) like ChatGPT, +researchers start to explore the adoption of LLMs for agile hardware design, +such as generating design RTL based on natural-language instructions. However, +in existing works, their target designs are all relatively simple and in a +small scale, and proposed by the authors themselves, making a fair comparison +among different LLM solutions challenging. In addition, many prior works only +focus on the design correctness, without evaluating the design qualities of +generated design RTL. In this work, we propose an open-source benchmark named +RTLLM, for generating design RTL with natural language instructions. 
To +systematically evaluate the auto-generated design RTL, we summarized three +progressive goals, named syntax goal, functionality goal, and design quality +goal. This benchmark can automatically provide a quantitative evaluation of any +given LLM-based solution. Furthermore, we propose an easy-to-use yet +surprisingly effective prompt engineering technique named self-planning, which +proves to significantly boost the performance of GPT-3.5 in our proposed +benchmark. +" +"LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked",Mansi Phute,http://arxiv.org/pdf/2308.07308v3.pdf,2023-08-14,"['cs.cl', 'cs.ai']",2308.07308v3.pdf," Large language models (LLMs) are popular for high-quality text generation but +can produce harmful content, even when aligned with human values through +reinforcement learning. Adversarial prompts can bypass their safety measures. +We propose LLM Self Defense, a simple approach to defend against these attacks +by having an LLM screen the induced responses. Our method does not require any +fine-tuning, input preprocessing, or iterative output generation. Instead, we +incorporate the generated content into a pre-defined prompt and employ another +instance of an LLM to analyze the text and predict whether it is harmful. We +test LLM Self Defense on GPT 3.5 and Llama 2, two of the current most prominent +LLMs against various types of attacks, such as forcefully inducing affirmative +responses to prompts and prompt engineering attacks. Notably, LLM Self Defense +succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5 +and Llama 2. +" +Data Race Detection Using Large Language Models,Le Chen,http://arxiv.org/pdf/2308.07505v2.pdf,2023-08-15,"['cs.lg', 'cs.cl']",2308.07505v2.pdf," Large language models (LLMs) are demonstrating significant promise as an +alternate strategy to facilitate analyses and optimizations of high-performance +computing programs, circumventing the need for resource-intensive manual tool +creation. In this paper, we explore a novel LLM-based data race detection +approach combining prompting engineering and fine-tuning techniques. We create +a dedicated dataset named DRB-ML, which is derived from DataRaceBench, with +fine-grain labels showing the presence of data race pairs and their associated +variables, line numbers, and read/write information. DRB-ML is then used to +evaluate representative LLMs and fine-tune open-source ones. Our experiment +shows that LLMs can be a viable approach to data race detection. However, they +still cannot compete with traditional data race detection tools when we need +detailed information about variable pairs causing data races. +" +Accelerated materials language processing enabled by GPT,Jaewoong Choi,http://arxiv.org/pdf/2308.09354v1.pdf,2023-08-18,"['cs.cl', 'cond-mat.mtrl-sci']",2308.09354v1.pdf," Materials language processing (MLP) is one of the key facilitators of +materials science research, as it enables the extraction of structured +information from massive materials science literature. Prior works suggested +high-performance MLP models for text classification, named entity recognition +(NER), and extractive question answering (QA), which require complex model +architecture, exhaustive fine-tuning and a large number of human-labelled +datasets. In this study, we develop generative pretrained transformer +(GPT)-enabled pipelines where the complex architectures of prior MLP models are +replaced with strategic designs of prompt engineering. 
First, we develop a +GPT-enabled document classification method for screening relevant documents, +achieving comparable accuracy and reliability compared to prior models, with +only small dataset. Secondly, for NER task, we design an entity-centric +prompts, and learning few-shot of them improved the performance on most of +entities in three open datasets. Finally, we develop an GPT-enabled extractive +QA model, which provides improved performance and shows the possibility of +automatically correcting annotations. While our findings confirm the potential +of GPT-enabled MLP models as well as their value in terms of reliability and +practicability, our scientific methods and systematic approach are applicable +to any materials science domain to accelerate the information extraction of +scientific literature. +" +Data-to-text Generation for Severely Under-Resourced Languages with GPT-3.5: A Bit of Help Needed from Google Translate,Michela Lorandi,http://arxiv.org/pdf/2308.09957v1.pdf,2023-08-19,"['cs.cl', 'cs.ai']",2308.09957v1.pdf," LLMs like GPT are great at tasks involving English which dominates in their +training data. In this paper, we look at how they cope with tasks involving +languages that are severely under-represented in their training data, in the +context of data-to-text generation for Irish, Maltese, Welsh and Breton. During +the prompt-engineering phase we tested a range of prompt types and formats on +GPT-3.5 and~4 with a small sample of example input/output pairs. We then fully +evaluated the two most promising prompts in two scenarios: (i) direct +generation into the under-resourced language, and (ii) generation into English +followed by translation into the under-resourced language. We find that +few-shot prompting works better for direct generation into under-resourced +languages, but that the difference disappears when pivoting via English. The +few-shot + translation system variants were submitted to the WebNLG 2023 shared +task where they outperformed competitor systems by substantial margins in all +languages on all metrics. We conclude that good performance on under-resourced +languages can be achieved out-of-the box with state-of-the-art LLMs. However, +our best results (for Welsh) remain well below the lowest ranked English system +at WebNLG'20. +" +Activation Addition: Steering Language Models Without Optimization,Alexander Matt Turner,http://arxiv.org/pdf/2308.10248v2.pdf,2023-08-20,"['cs.cl', 'cs.lg']",2308.10248v2.pdf," Reliably controlling the behavior of large language models is a pressing open +problem. Existing methods include supervised finetuning, reinforcement learning +from human feedback, prompt engineering, and guided decoding. We instead +investigate activation engineering: modifying activations at inference time to +predictably alter model behavior. In particular, we bias the forward pass with +an added 'steering vector' implicitly specified through natural language. + Unlike past work which learned these steering vectors, our Activation +Addition (ActAdd) method computes them by taking the activation differences +that result from pairs of prompts. We demonstrate ActAdd on GPT-2 on +OpenWebText and ConceptNet. Our inference-time approach yields control over +high-level properties of output and preserves off-target model performance. It +involves far less compute and implementation effort than finetuning, allows +users to provide natural language specifications, and its overhead scales +naturally with model size. 
+" +Situated Natural Language Explanations,Zining Zhu,http://arxiv.org/pdf/2308.14115v1.pdf,2023-08-27,['cs.cl'],2308.14115v1.pdf," Natural language is among the most accessible tools for explaining decisions +to humans, and large pretrained language models (PLMs) have demonstrated +impressive abilities to generate coherent natural language explanations (NLE). +The existing NLE research perspectives do not take the audience into account. +An NLE can have high textual quality, but it might not accommodate audiences' +needs and preference. To address this limitation, we propose an alternative +perspective, situated NLE, including a situated generation framework and a +situated evaluation framework. On the generation side, we propose simple prompt +engineering methods that adapt the NLEs to situations. In human studies, the +annotators preferred the situated NLEs. On the evaluation side, we set up +automated evaluation scores in lexical, semantic, and pragmatic categories. The +scores can be used to select the most suitable prompts to generate NLEs. +Situated NLE provides a perspective to conduct further research on automatic +NLE generations. +" +"FurChat: An Embodied Conversational Agent using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions",Neeraj Cherakara,http://arxiv.org/pdf/2308.15214v2.pdf,2023-08-29,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ro']",2308.15214v2.pdf," We demonstrate an embodied conversational agent that can function as a +receptionist and generate a mixture of open and closed-domain dialogue along +with facial expressions, by using a large language model (LLM) to develop an +engaging conversation. We deployed the system onto a Furhat robot, which is +highly expressive and capable of using both verbal and nonverbal cues during +interaction. The system was designed specifically for the National Robotarium +to interact with visitors through natural conversations, providing them with +information about the facilities, research, news, upcoming events, etc. The +system utilises the state-of-the-art GPT-3.5 model to generate such information +along with domain-general conversations and facial expressions based on prompt +engineering. +" +Can Prompt Learning Benefit Radiology Report Generation?,Jun Wang,http://arxiv.org/pdf/2308.16269v1.pdf,2023-08-30,['cs.cv'],2308.16269v1.pdf," Radiology report generation aims to automatically provide clinically +meaningful descriptions of radiology images such as MRI and X-ray. Although +great success has been achieved in natural scene image captioning tasks, +radiology report generation remains challenging and requires prior medical +knowledge. In this paper, we propose PromptRRG, a method that utilizes prompt +learning to activate a pretrained model and incorporate prior knowledge. Since +prompt learning for radiology report generation has not been explored before, +we begin with investigating prompt designs and categorise them based on varying +levels of knowledge: common, domain-specific and disease-enriched prompts. +Additionally, we propose an automatic prompt learning mechanism to alleviate +the burden of manual prompt engineering. This is the first work to +systematically examine the effectiveness of prompt learning for radiology +report generation. Experimental results on the largest radiology report +generation benchmark, MIMIC-CXR, demonstrate that our proposed method achieves +state-of-the-art performance. Code will be available upon the acceptance. 
+" +Large Language Models as Data Preprocessors,Haochen Zhang,http://arxiv.org/pdf/2308.16361v1.pdf,2023-08-30,"['cs.ai', 'cs.db']",2308.16361v1.pdf," Large Language Models (LLMs), typified by OpenAI's GPT series and Meta's +LLaMA variants, have marked a significant advancement in artificial +intelligence. Trained on vast amounts of text data, LLMs are capable of +understanding and generating human-like text across a diverse range of topics. +This study expands on the applications of LLMs, exploring their potential in +data preprocessing, a critical stage in data mining and analytics applications. +We delve into the applicability of state-of-the-art LLMs such as GPT-3.5, +GPT-4, and Vicuna-13B for error detection, data imputation, schema matching, +and entity matching tasks. Alongside showcasing the inherent capabilities of +LLMs, we highlight their limitations, particularly in terms of computational +expense and inefficiency. We propose an LLM-based framework for data +preprocessing, which integrates cutting-edge prompt engineering techniques, +coupled with traditional methods like contextualization and feature selection, +to improve the performance and efficiency of these models. The effectiveness of +LLMs in data preprocessing is evaluated through an experimental study spanning +12 datasets. GPT-4 emerged as a standout, achieving 100\% accuracy or F1 score +on 4 datasets, suggesting LLMs' immense potential in these tasks. Despite +certain limitations, our study underscores the promise of LLMs in this domain +and anticipates future developments to overcome current hurdles. +" +Developing a Scalable Benchmark for Assessing Large Language Models in Knowledge Graph Engineering,Lars-Peter Meyer,http://arxiv.org/pdf/2308.16622v1.pdf,2023-08-31,"['cs.ai', 'cs.cl', 'cs.db']",2308.16622v1.pdf," As the field of Large Language Models (LLMs) evolves at an accelerated pace, +the critical need to assess and monitor their performance emerges. We introduce +a benchmarking framework focused on knowledge graph engineering (KGE) +accompanied by three challenges addressing syntax and error correction, facts +extraction and dataset generation. We show that while being a useful tool, LLMs +are yet unfit to assist in knowledge graph generation with zero-shot prompting. +Consequently, our LLM-KG-Bench framework provides automatic evaluation and +storage of LLM responses as well as statistical data and visualization tools to +support tracking of prompt engineering and model performance. +" +Linking microblogging sentiments to stock price movement: An application of GPT-4,Rick Steinert,http://arxiv.org/pdf/2308.16771v1.pdf,2023-08-31,"['q-fin.st', 'q-fin.cp']",2308.16771v1.pdf," This paper investigates the potential improvement of the GPT-4 Language +Learning Model (LLM) in comparison to BERT for modeling same-day daily stock +price movements of Apple and Tesla in 2017, based on sentiment analysis of +microblogging messages. We recorded daily adjusted closing prices and +translated them into up-down movements. Sentiment for each day was extracted +from messages on the Stocktwits platform using both LLMs. We develop a novel +method to engineer a comprehensive prompt for contextual sentiment analysis +which unlocks the true capabilities of modern LLM. This enables us to carefully +retrieve sentiments, perceived advantages or disadvantages, and the relevance +towards the analyzed company. Logistic regression is used to evaluate whether +the extracted message contents reflect stock price movements. 
As a result, +GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six +months and substantially exceeding a naive buy-and-hold strategy, reaching a +peak accuracy of 71.47 % in May. The study also highlights the importance of +prompt engineering in obtaining desired outputs from GPT-4's contextual +abilities. However, the costs of deploying GPT-4 and the need for fine-tuning +prompts highlight some practical considerations for its use. +" +LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for Vision-Language Models,Cheng Shi,http://arxiv.org/pdf/2309.01155v2.pdf,2023-09-03,['cs.cv'],2309.01155v2.pdf," Prompt engineering is a powerful tool used to enhance the performance of +pre-trained models on downstream tasks. For example, providing the prompt +""Let's think step by step"" improved GPT-3's reasoning accuracy to 63% on +MutiArith while prompting ""a photo of"" filled with a class name enables CLIP to +achieve $80$\% zero-shot accuracy on ImageNet. While previous research has +explored prompt learning for the visual modality, analyzing what constitutes a +good visual prompt specifically for image recognition is limited. In addition, +existing visual prompt tuning methods' generalization ability is worse than +text-only prompting tuning. This paper explores our key insight: synthetic text +images are good visual prompts for vision-language models! To achieve that, we +propose our LoGoPrompt, which reformulates the classification objective to the +visual prompt selection and addresses the chicken-and-egg challenge of first +adding synthetic text images as class-wise visual prompts or predicting the +class first. Without any trainable visual prompt parameters, experimental +results on 16 datasets demonstrate that our method consistently outperforms +state-of-the-art methods in few-shot learning, base-to-new generalization, and +domain generalization. +" +FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning,Xinyi Wang,http://arxiv.org/pdf/2309.04663v2.pdf,2023-09-09,"['cs.cl', 'cs.ai']",2309.04663v2.pdf," Learning paradigms for large language models (LLMs) currently tend to fall +within either in-context learning (ICL) or full fine-tuning. Each of these +comes with their own trade-offs based on available data, model size, compute +cost, ease-of-use, and final quality with neither solution performing well +across-the-board. In this article, we first describe ICL and fine-tuning +paradigms in a way that highlights their natural connections. Based on these +connections, we propose a new learning paradigm called FIAT that fuses the best +of these paradigms together, enabling prompt-engineered instructions and +chain-of-thought reasoning with the very largest models while also using +similar methods to perform parameter updates on a modestly-sized LLM with +parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of +multilingual tasks and observe that FIAT performs better than both ICL and +fine-tuning at scales ranging from 100-10,000 training examples. We hope that +FIAT provides a practical way of harnessing the full potential of LLMs without +needing to make a hard choice between learning paradigms. +" +Toward Reproducing Network Research Results Using Large Language Models,Qiao Xiang,http://arxiv.org/pdf/2309.04716v1.pdf,2023-09-09,"['cs.lg', 'cs.ai', 'cs.cl']",2309.04716v1.pdf," Reproducing research results in the networking community is important for +both academia and industry. 
The current best practice typically resorts to +three approaches: (1) looking for publicly available prototypes; (2) contacting +the authors to get a private prototype; and (3) manually implementing a +prototype following the description of the publication. However, most published +network research does not have public prototypes and private prototypes are +hard to get. As such, most reproducing efforts are spent on manual +implementation based on the publications, which is both time and labor +consuming and error-prone. In this paper, we boldly propose reproducing network +research results using the emerging large language models (LLMs). In +particular, we first prove its feasibility with a small-scale experiment, in +which four students with essential networking knowledge each reproduces a +different networking system published in prominent conferences and journals by +prompt engineering ChatGPT. We report the experiment's observations and lessons +and discuss future open research questions of this proposal. This work raises +no ethical issue. +" +Detecting Natural Language Biases with Prompt-based Learning,Md Abdul Aowal,http://arxiv.org/pdf/2309.05227v1.pdf,2023-09-11,"['cs.cl', 'cs.ai']",2309.05227v1.pdf," In this project, we want to explore the newly emerging field of prompt +engineering and apply it to the downstream task of detecting LM biases. More +concretely, we explore how to design prompts that can indicate 4 different +types of biases: (1) gender, (2) race, (3) sexual orientation, and (4) +religion-based. Within our project, we experiment with different manually +crafted prompts that can draw out the subtle biases that may be present in the +language model. We apply these prompts to multiple variations of popular and +well-recognized models: BERT, RoBERTa, and T5 to evaluate their biases. We +provide a comparative analysis of these models and assess them using a two-fold +method: use human judgment to decide whether model predictions are biased and +utilize model-level judgment (through further prompts) to understand if a model +can self-diagnose the biases of its own prediction. +" +Two Timin': Repairing Smart Contracts With A Two-Layered Approach,Abhinav Jain,http://arxiv.org/pdf/2309.07841v1.pdf,2023-09-14,"['cs.cr', 'cs.ai']",2309.07841v1.pdf," Due to the modern relevance of blockchain technology, smart contracts present +both substantial risks and benefits. Vulnerabilities within them can trigger a +cascade of consequences, resulting in significant losses. Many current papers +primarily focus on classifying smart contracts for malicious intent, often +relying on limited contract characteristics, such as bytecode or opcode. This +paper proposes a novel, two-layered framework: 1) classifying and 2) directly +repairing malicious contracts. Slither's vulnerability report is combined with +source code and passed through a pre-trained RandomForestClassifier (RFC) and +Large Language Models (LLMs), classifying and repairing each suggested +vulnerability. Experiments demonstrate the effectiveness of fine-tuned and +prompt-engineered LLMs. The smart contract repair models, built from +pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall +vulnerability count by 97.5% and 96.7% respectively. A manual inspection of +repaired contracts shows that all retain functionality, indicating that the +proposed method is appropriate for automatic batch classification and repair of +vulnerabilities in smart contracts. 
+" +Large Language Models for Failure Mode Classification: An Investigation,Michael Stewart,http://arxiv.org/pdf/2309.08181v1.pdf,2023-09-15,['cs.cl'],2309.08181v1.pdf," In this paper we present the first investigation into the effectiveness of +Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the +task of automatically labelling an observation with a corresponding failure +mode code, is a critical task in the maintenance domain as it reduces the need +for reliability engineers to spend their time manually analysing work orders. +We detail our approach to prompt engineering to enable an LLM to predict the +failure mode of a given observation using a restricted code list. We +demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on +annotated data is a significant improvement over a currently available text +classification model (F1=0.60) trained on the same annotated data set. The +fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This +investigation reinforces the need for high quality fine-tuning data sets for +domain-specific tasks using LLMs. +" +Safurai 001: New Qualitative Approach for Code LLM Evaluation,Davide Cifarelli,http://arxiv.org/pdf/2309.11385v1.pdf,2023-09-20,['cs.cl'],2309.11385v1.pdf," This paper presents Safurai-001, a new Large Language Model (LLM) with +significant potential in the domain of coding assistance. Driven by recent +advancements in coding LLMs, Safurai-001 competes in performance with the +latest models like WizardCoder [Xu et al., 2023], PanguCoder [Shen et al., +2023] and Phi-1 [Gunasekar et al., 2023] but aims to deliver a more +conversational interaction. By capitalizing on the progress in data engineering +(including latest techniques of data transformation and prompt engineering) and +instruction tuning, this new model promises to stand toe-to-toe with recent +closed and open source developments. Recognizing the need for an efficacious +evaluation metric for coding LLMs, this paper also introduces GPT4-based +MultiParameters, an evaluation benchmark that harnesses varied parameters to +present a comprehensive insight into the models functioning and performance. +Our assessment shows that Safurai-001 can outperform GPT-3.5 by 1.58% and +WizardCoder by 18.78% in the Code Readability parameter and more. +" +A Practical Survey on Zero-shot Prompt Design for In-context Learning,Yinheng Li,http://arxiv.org/pdf/2309.13205v1.pdf,2023-09-22,"['cs.cl', 'cs.ai', 'cs.et', 'cs.lg']",2309.13205v1.pdf," The remarkable advancements in large language models (LLMs) have brought +about significant improvements in Natural Language Processing(NLP) tasks. This +paper presents a comprehensive review of in-context learning techniques, +focusing on different types of prompts, including discrete, continuous, +few-shot, and zero-shot, and their impact on LLM performance. We explore +various approaches to prompt design, such as manual design, optimization +algorithms, and evaluation methods, to optimize LLM performance across diverse +tasks. Our review covers key research studies in prompt engineering, discussing +their methodologies and contributions to the field. We also delve into the +challenges faced in evaluating prompt performance, given the absence of a +single ""best"" prompt and the importance of considering multiple metrics. 
In +conclusion, the paper highlights the critical role of prompt design in +harnessing the full potential of LLMs and provides insights into the +combination of manual design, optimization techniques, and rigorous evaluation +for more effective and efficient use of LLMs in various NLP tasks. +" +A Chat About Boring Problems: Studying GPT-based text normalization,Yang Zhang,http://arxiv.org/pdf/2309.13426v1.pdf,2023-09-23,"['cs.cl', 'cs.ai']",2309.13426v1.pdf," Text normalization - the conversion of text from written to spoken form - is +traditionally assumed to be an ill-formed task for language models. In this +work, we argue otherwise. We empirically show the capacity of Large-Language +Models (LLM) for text normalization in few-shot scenarios. Combining +self-consistency reasoning with linguistic-informed prompt engineering, we find +LLM based text normalization to achieve error rates around 40\% lower than top +normalization systems. Further, upon error analysis, we note key limitations in +the conventional design of text normalization tasks. We create a new taxonomy +of text normalization errors and apply it to results from GPT-3.5-Turbo and +GPT-4.0. Through this new framework, we can identify strengths and weaknesses +of GPT-based TN, opening opportunities for future work. +" +DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs,Gyeongmin Kim,http://arxiv.org/pdf/2309.16031v1.pdf,2023-09-27,['cs.ro'],2309.16031v1.pdf," Mobile robots often rely on pre-existing maps for effective path planning and +navigation. However, when these maps are unavailable, particularly in +unfamiliar environments, a different approach become essential. This paper +introduces DynaCon, a novel system designed to provide mobile robots with +contextual awareness and dynamic adaptability during navigation, eliminating +the reliance of traditional maps. DynaCon integrates real-time feedback with an +object server, prompt engineering, and navigation modules. By harnessing the +capabilities of Large Language Models (LLMs), DynaCon not only understands +patterns within given numeric series but also excels at categorizing objects +into matched spaces. This facilitates dynamic path planner imbued with +contextual awareness. We validated the effectiveness of DynaCon through an +experiment where a robot successfully navigated to its goal using reasoning. +Source code and experiment videos for this work can be found at: +https://sites.google.com/view/dynacon. +" +Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4,Mehrdad Kaheh,http://arxiv.org/pdf/2309.16422v1.pdf,2023-09-28,['cs.cr'],2309.16422v1.pdf," In an era where cyberspace is both a battleground and a backbone of modern +society, the urgency of safeguarding digital assets against ever-evolving +threats is paramount. This paper introduces Cyber Sentinel, an innovative +task-oriented cybersecurity dialogue system that is effectively capable of +managing two core functions: explaining potential cyber threats within an +organization to the user, and taking proactive/reactive security actions when +instructed by the user. Cyber Sentinel embodies the fusion of artificial +intelligence, cybersecurity domain expertise, and real-time data analysis to +combat the multifaceted challenges posed by cyber adversaries. This article +delves into the process of creating such a system and how it can interact with +other components typically found in cybersecurity organizations. 
Our work is a +novel approach to task-oriented dialogue systems, leveraging the power of +chaining GPT-4 models combined with prompt engineering across all sub-tasks. We +also highlight its pivotal role in enhancing cybersecurity communication and +interaction, concluding that not only does this framework enhance the system's +transparency (Explainable AI) but also streamlines the decision-making process +and responding to threats (Actionable AI), therefore marking a significant +advancement in the realm of cybersecurity communication. +" +"A Sign Language Recognition System with Pepper, Lightweight-Transformer, and LLM",JongYoon Lim,http://arxiv.org/pdf/2309.16898v1.pdf,2023-09-28,"['cs.ro', 'cs.cl', 'cs.cv', 'cs.hc']",2309.16898v1.pdf," This research explores using lightweight deep neural network architectures to +enable the humanoid robot Pepper to understand American Sign Language (ASL) and +facilitate non-verbal human-robot interaction. First, we introduce a +lightweight and efficient model for ASL understanding optimized for embedded +systems, ensuring rapid sign recognition while conserving computational +resources. Building upon this, we employ large language models (LLMs) for +intelligent robot interactions. Through intricate prompt engineering, we tailor +interactions to allow the Pepper Robot to generate natural Co-Speech Gesture +responses, laying the foundation for more organic and intuitive humanoid-robot +dialogues. Finally, we present an integrated software pipeline, embodying +advancements in a socially aware AI interaction model. Leveraging the Pepper +Robot's capabilities, we demonstrate the practicality and effectiveness of our +approach in real-world scenarios. The results highlight a profound potential +for enhancing human-robot interaction through non-verbal interactions, bridging +communication gaps, and making technology more accessible and understandable. +" +SPELL: Semantic Prompt Evolution based on a LLM,Yujian Betterest Li,http://arxiv.org/pdf/2310.01260v1.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01260v1.pdf," Prompt engineering is a new paradigm for enhancing the performance of trained +neural network models. For optimizing text-style prompts, existing methods +usually individually operate small portions of a text step by step, which +either breaks the fluency or could not globally adjust a prompt. Since large +language models (LLMs) have powerful ability of generating coherent texts token +by token, can we utilize LLMs for improving prompts? Based on this motivation, +in this paper, considering a trained LLM as a text generator, we attempt to +design a black-box evolution algorithm for automatically optimizing texts, +namely SPELL (Semantic Prompt Evolution based on a LLM). The proposed method is +evaluated with different LLMs and evolution parameters in different text tasks. +Experimental results show that SPELL could rapidly improve the prompts indeed. +We further explore the evolution process and discuss on the limitations, +potential possibilities and future work. +" +Co-audit: tools to help humans double-check AI-generated content,Andrew D. Gordon,http://arxiv.org/pdf/2310.01297v1.pdf,2023-10-02,"['cs.hc', 'cs.ai', 'cs.cl', 'cs.pl']",2310.01297v1.pdf," Users are increasingly being warned to check AI-generated content for +correctness. Still, as LLMs (and other generative models) generate more complex +output, such as summaries, tables, or code, it becomes harder for the user to +audit or evaluate the output for quality or correctness. 
Hence, we are seeing +the emergence of tool-assisted experiences to help the user double-check a +piece of AI-generated content. We refer to these as co-audit tools. Co-audit +tools complement prompt engineering techniques: one helps the user construct +the input prompt, while the other helps them check the output response. As a +specific example, this paper describes recent research on co-audit tools for +spreadsheet computations powered by generative models. We explain why co-audit +experiences are essential for any application of generative AI where quality is +important and errors are consequential (as is common in spreadsheet +computations). We propose a preliminary list of principles for co-audit, and +outline research challenges. +" +Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations,Deren Lei,http://arxiv.org/pdf/2310.03951v2.pdf,2023-10-06,"['cs.cl', 'cs.ai']",2310.03951v2.pdf," Large language models (LLMs) can generate fluent natural language texts when +given relevant documents as background context. This ability has attracted +considerable interest in developing industry applications of LLMs. However, +LLMs are prone to generate hallucinations that are not supported by the +provided sources. In this paper, we propose a hierarchical framework to detect +and mitigate such ungrounded hallucination. Our framework uses Chain of Natural +Language Inference (CoNLI) for hallucination detection and hallucination +reduction via post-editing. Our approach achieves state-of-the-art performance +on hallucination detection and enhances text quality through rewrite, using +LLMs without any fine-tuning or domain-specific prompt engineering. We show +that this simple plug-and-play framework can serve as an effective choice for +hallucination detection and reduction, achieving competitive performance across +various contexts. +" +LLM4VV: Developing LLM-Driven Testsuite for Compiler Validation,Christian Munley,http://arxiv.org/pdf/2310.04963v2.pdf,2023-10-08,['cs.ai'],2310.04963v2.pdf," Large language models (LLMs) are a new and powerful tool for a wide span of +applications involving natural language and demonstrate impressive code +generation abilities. In this paper, we explore the capabilitity of +state-of-the-art LLMs, including closed-source options like OpenAI GPT-4 and +open-source alternatives like Meta AI Codellama, to automatically generate +tests and use these tests to validate and verify compiler implementations of a +directive-based programming paradigm, OpenACC. Our approach entails exploring +various prompt engineering techniques including a code template, +retrieval-augmented generation (RAG) with code template, expressive prompt +using RAG with code template, one-shot example, and RAG with one-shot example. +This paper focuses on (a) exploring the capabilities of the latest LLMs for +code generation, (b) investigating prompt and fine tuning methods, and (c) +analyzing the outcome of LLMs generated tests +" +Large Language Models for Propaganda Detection,Kilian Sprenkamp,http://arxiv.org/pdf/2310.06422v1.pdf,2023-10-10,"['cs.cl', 'cs.ai']",2310.06422v1.pdf," The prevalence of propaganda in our digital society poses a challenge to +societal harmony and the dissemination of truth. Detecting propaganda through +NLP in text is challenging due to subtle manipulation techniques and contextual +dependencies. 
To address this issue, we investigate the effectiveness of modern +Large Language Models (LLMs) such as GPT-3 and GPT-4 for propaganda detection. +We conduct experiments using the SemEval-2020 task 11 dataset, which features +news articles labeled with 14 propaganda techniques as a multi-label +classification problem. Five variations of GPT-3 and GPT-4 are employed, +incorporating various prompt engineering and fine-tuning strategies across the +different models. We evaluate the models' performance by assessing metrics such +as $F1$ score, $Precision$, and $Recall$, comparing the results with the +current state-of-the-art approach using RoBERTa. Our findings demonstrate that +GPT-4 achieves comparable results to the current state-of-the-art. Further, +this study analyzes the potential and challenges of LLMs in complex tasks like +propaganda detection. +" +Forgetful Large Language Models: Lessons Learned from Using LLMs in Robot Programming,Juo-Tung Chen,http://arxiv.org/pdf/2310.06646v1.pdf,2023-10-10,['cs.ro'],2310.06646v1.pdf," Large language models offer new ways of empowering people to program robot +applications-namely, code generation via prompting. However, the code generated +by LLMs is susceptible to errors. This work reports a preliminary exploration +that empirically characterizes common errors produced by LLMs in robot +programming. We categorize these errors into two phases: interpretation and +execution. In this work, we focus on errors in execution and observe that they +are caused by LLMs being ""forgetful"" of key information provided in user +prompts. Based on this observation, we propose prompt engineering tactics +designed to reduce errors in execution. We then demonstrate the effectiveness +of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. +Finally, we discuss lessons learned from using LLMs in robot programming and +call for the benchmarking of LLM-powered end-user development of robot +applications. +" +LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing,Stephen Moskal,http://arxiv.org/pdf/2310.06936v1.pdf,2023-10-10,"['cs.cr', 'cs.lg']",2310.06936v1.pdf," In this paper, we explore the potential of Large Language Models (LLMs) to +reason about threats, generate information about tools, and automate cyber +campaigns. We begin with a manual exploration of LLMs in supporting specific +threat-related actions and decisions. We proceed by automating the decision +process in a cyber campaign. We present prompt engineering approaches for a +plan-act-report loop for one action of a threat campaign and and a prompt +chaining design that directs the sequential decision process of a multi-action +campaign. We assess the extent of LLM's cyber-specific knowledge w.r.t the +short campaign we demonstrate and provide insights into prompt design for +eliciting actionable responses. We discuss the potential impact of LLMs on the +threat landscape and the ethical considerations of using LLMs for accelerating +threat actor capabilities. We report a promising, yet concerning, application +of generative AI to cyber threats. However, the LLM's capabilities to deal with +more complex networks, sophisticated vulnerabilities, and the sensitivity of +prompts are open questions. This research should spur deliberations over the +inevitable advancements in LLM-supported cyber adversarial landscape. 
+" +Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators,Liang Chen,http://arxiv.org/pdf/2310.07289v1.pdf,2023-10-11,['cs.cl'],2310.07289v1.pdf," Large language models (LLMs) outperform information retrieval techniques for +downstream knowledge-intensive tasks when being prompted to generate world +knowledge. However, community concerns abound regarding the factuality and +potential implications of using this uncensored knowledge. In light of this, we +introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to +systematically and automatically evaluate generated knowledge from six +important perspectives -- Factuality, Relevance, Coherence, Informativeness, +Helpfulness and Validity. We conduct an extensive empirical analysis of the +generated knowledge from three different types of LLMs on two widely studied +knowledge-intensive tasks, i.e., open-domain question answering and +knowledge-grounded dialogue. Surprisingly, our study reveals that the +factuality of generated knowledge, even if lower, does not significantly hinder +downstream tasks. Instead, the relevance and coherence of the outputs are more +important than small factual mistakes. Further, we show how to use CONNER to +improve knowledge-intensive tasks by designing two strategies: Prompt +Engineering and Knowledge Selection. Our evaluation code and LLM-generated +knowledge with human annotations will be released to facilitate future +research. +" +Multimodal Large Language Model for Visual Navigation,Yao-Hung Hubert Tsai,http://arxiv.org/pdf/2310.08669v2.pdf,2023-10-12,"['cs.cv', 'cs.ro']",2310.08669v2.pdf," Recent efforts to enable visual navigation using large language models have +mainly focused on developing complex prompt systems. These systems incorporate +instructions, observations, and history into massive text prompts, which are +then combined with pre-trained large language models to facilitate visual +navigation. In contrast, our approach aims to fine-tune large language models +for visual navigation without extensive prompt engineering. Our design involves +a simple text prompt, current observations, and a history collector model that +gathers information from previous observations as input. For output, our design +provides a probability distribution of possible actions that the agent can take +during navigation. We train our model using human demonstrations and collision +signals from the Habitat-Matterport 3D Dataset (HM3D). Experimental results +demonstrate that our method outperforms state-of-the-art behavior cloning +methods and effectively reduces collision rates. +" +GPTutor: an open-source AI pair programming tool alternative to Copilot,Eason Chen,http://arxiv.org/pdf/2310.13896v3.pdf,2023-10-21,['cs.hc'],2310.13896v3.pdf," This paper presents the latest progress of GPTutor: a ChatGPT-powered +programming tool extension in Visual Studio Code. The emergence of Large +Language Models (LLMs) has improved software development efficiency, but their +performance can be hindered by training data limitations and prompt design +issues. Existing LLM development tools often operate as black boxes, with users +unable to view the prompts used and unable to improve performance by correcting +prompts when errors occur. To address the aforementioned issues, GPTutor was +introduced as an open-source AI pair programming tool, offering an alternative +to Copilot. 
GPTutor empowers users to customize prompts for various programming +languages and scenarios, with support for 120+ human languages and 50+ +programming languages. Users can fine-tune prompts to correct the errors from +LLM for precision and efficient code generation. At the end of the paper, we +underscore GPTutor's potential through examples, including demonstrating its +proficiency in interpreting and generating Sui-Move, a newly introduced smart +contract language, using prompt engineering. +" +Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models,Gabriel Sarch,http://arxiv.org/pdf/2310.15127v1.pdf,2023-10-23,"['cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",2310.15127v1.pdf," Pre-trained and frozen LLMs can effectively map simple scene re-arrangement +instructions to programs over a robot's visuomotor functions through +appropriate few-shot example prompting. To parse open-domain natural language +and adapt to a user's idiosyncratic procedures, not known during prompt +engineering time, fixed prompts fall short. In this paper, we introduce HELPER, +an embodied agent equipped with an external memory of language-program pairs +that parses free-form human-robot dialogue into action programs through +retrieval-augmented LLM prompting: relevant memories are retrieved based on the +current dialogue, instruction, correction or VLM description, and used as +in-context prompt examples for LLM querying. The memory is expanded during +deployment to include pairs of user's language and action plans, to assist +future inferences and personalize them to the user's language and routines. +HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution +from Dialog History (EDH) and Trajectory from Dialogue (TfD), with 1.7x +improvement over the previous SOTA for TfD. Our models, code and video results +can be found in our project's website: https://helper-agent-llm.github.io. +" +TaskDiff: A Similarity Metric for Task-Oriented Conversations,Ankita Bhaumik,http://arxiv.org/pdf/2310.15298v2.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.15298v2.pdf," The popularity of conversational digital assistants has resulted in the +availability of large amounts of conversational data which can be utilized for +improved user experience and personalized response generation. Building these +assistants using popular large language models like ChatGPT also require +additional emphasis on prompt engineering and evaluation methods. Textual +similarity metrics are a key ingredient for such analysis and evaluations. +While many similarity metrics have been proposed in the literature, they have +not proven effective for task-oriented conversations as they do not take +advantage of unique conversational features. To address this gap, we present +TaskDiff, a novel conversational similarity metric that utilizes different +dialogue components (utterances, intents, and slots) and their distributions to +compute similarity. Extensive experimental evaluation of TaskDiff on a +benchmark dataset demonstrates its superior performance and improved robustness +over other related approaches. +" +Large language models for aspect-based sentiment analysis,Paul F. Simmering,http://arxiv.org/pdf/2310.18025v1.pdf,2023-10-27,"['cs.cl', 'cs.ai']",2310.18025v1.pdf," Large language models (LLMs) offer unprecedented text completion +capabilities. As general models, they can fulfill a wide range of roles, +including those of more specialized models. 
We assess the performance of GPT-4 +and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based +sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art +F1 score of 83.8 on the joint aspect term extraction and polarity +classification task of the SemEval-2014 Task 4, improving upon InstructABSA +[@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000 +times more model parameters and thus increased inference cost. We discuss the +the cost-performance trade-offs of different models, and analyze the typical +errors that they make. Our results also indicate that detailed prompts improve +performance in zero-shot and few-shot settings but are not necessary for +fine-tuned models. This evidence is relevant for practioners that are faced +with the choice of prompt engineering versus fine-tuning when using LLMs for +ABSA. +" +Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias,S. Lee,http://arxiv.org/pdf/2311.00217v1.pdf,2023-11-01,"['cs.ai', 'cs.cy']",2311.00217v1.pdf," Large language models (LLMs) have demonstrated their potential in social +science research by emulating human perceptions and behaviors, a concept +referred to as algorithmic fidelity. This study assesses the algorithmic +fidelity and bias of LLMs by utilizing two nationally representative climate +change surveys. The LLMs were conditioned on demographics and/or psychological +covariates to simulate survey responses. The findings indicate that LLMs can +effectively capture presidential voting behaviors but encounter challenges in +accurately representing global warming perspectives when relevant covariates +are not included. GPT-4 exhibits improved performance when conditioned on both +demographics and covariates. However, disparities emerge in LLM estimations of +the views of certain groups, with LLMs tending to underestimate worry about +global warming among Black Americans. While highlighting the potential of LLMs +to aid social science research, these results underscore the importance of +meticulous conditioning, model selection, survey question format, and bias +assessment when employing LLMs for survey simulation. Further investigation +into prompt engineering and algorithm auditing is essential to harness the +power of LLMs while addressing their inherent limitations. +" +Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis,Hongyi Zheng,http://arxiv.org/pdf/2311.00258v1.pdf,2023-11-01,"['cs.cl', 'cs.lg']",2311.00258v1.pdf," Recent advances in prompt engineering enable large language models (LLMs) to +solve multi-hop logical reasoning problems with impressive accuracy. However, +there is little existing work investigating the robustness of LLMs with +few-shot prompting techniques. Therefore, we introduce a systematic approach to +test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic +perturbations. We include perturbations at multiple levels of abstractions +(e.g. lexical perturbations such as typos, and semantic perturbations such as +the inclusion of intermediate reasoning steps in the questions) to conduct +behavioral analysis on the LLMs. Throughout our experiments, we find that +models are more sensitive to certain perturbations such as replacing words with +their synonyms. We also demonstrate that increasing the proportion of perturbed +exemplars in the prompts improves the robustness of few-shot prompting methods. 
+" +Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers,Weiwei Sun,http://arxiv.org/pdf/2311.01555v1.pdf,2023-11-02,"['cs.ir', 'cs.cl']",2311.01555v1.pdf," Recent studies have demonstrated the great potential of Large Language Models +(LLMs) serving as zero-shot relevance rankers. The typical approach involves +making comparisons between pairs or lists of documents. Although effective, +these listwise and pairwise methods are not efficient and also heavily rely on +intricate prompt engineering. To tackle this problem, we introduce a novel +instruction distillation method. The key idea is to distill the pairwise +ranking ability of open-sourced LLMs to a simpler but more efficient pointwise +ranking. Specifically, given the same LLM, we first rank documents using the +effective pairwise approach with complex instructions, and then distill the +teacher predictions to the pointwise approach with simpler instructions. +Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that +instruction distillation can improve efficiency by 10 to 100x and also enhance +the ranking performance of LLMs. Furthermore, our approach surpasses the +performance of existing supervised methods like monoT5 and is on par with the +state-of-the-art zero-shot methods. The code to reproduce our results is +available at www.github.com/sunnweiwei/RankGPT. +" +Indicative Summarization of Long Discussions,Shahbaz Syed,http://arxiv.org/pdf/2311.01882v1.pdf,2023-11-03,['cs.cl'],2311.01882v1.pdf," Online forums encourage the exchange and discussion of different stances on +many topics. Not only do they provide an opportunity to present one's own +arguments, but may also gather a broad cross-section of others' arguments. +However, the resulting long discussions are difficult to overview. This paper +presents a novel unsupervised approach using large language models (LLMs) to +generating indicative summaries for long discussions that basically serve as +tables of contents. Our approach first clusters argument sentences, generates +cluster labels as abstractive summaries, and classifies the generated cluster +labels into argumentation frames resulting in a two-level summary. Based on an +extensively optimized prompt engineering approach, we evaluate 19~LLMs for +generative cluster labeling and frame classification. To evaluate the +usefulness of our indicative summaries, we conduct a purpose-driven user study +via a new visual interface called Discussion Explorer: It shows that our +proposed indicative summaries serve as a convenient navigation tool to explore +long discussions. +" +Automating Governing Knowledge Commons and Contextual Integrity (GKC-CI) Privacy Policy Annotations with Large Language Models,Jake Chanenson,http://arxiv.org/pdf/2311.02192v1.pdf,2023-11-03,"['cs.cy', 'cs.cl', 'cs.lg']",2311.02192v1.pdf," Identifying contextual integrity (CI) and governing knowledge commons (GKC) +parameters in privacy policy texts can facilitate normative privacy analysis. +However, GKC-CI annotation has heretofore required manual or crowdsourced +effort. This paper demonstrates that high-accuracy GKC-CI parameter annotation +of privacy policies can be performed automatically using large language models. +We fine-tune 18 open-source and proprietary models on 21,588 GKC-CI annotations +from 16 ground truth privacy policies. 
Our best-performing model (fine-tuned +GPT-3.5 Turbo with prompt engineering) has an accuracy of 86%, exceeding the +performance of prior crowdsourcing approaches despite the complexity of privacy +policy texts and the nuance of the GKC-CI annotation task. We apply our +best-performing model to privacy policies from 164 popular online services, +demonstrating the effectiveness of scaling GKC-CI annotation for data +exploration. We make all annotated policies as well as the training data and +scripts needed to fine-tune our best-performing model publicly available for +future research. +" +Requirements Engineering using Generative AI: Prompts and Prompting Patterns,Krishna Ronanki,http://arxiv.org/pdf/2311.03832v1.pdf,2023-11-07,['cs.se'],2311.03832v1.pdf," [Context]: Companies are increasingly recognizing the importance of +automating Requirements Engineering (RE) tasks due to their resource-intensive +nature. The advent of GenAI has made these tasks more amenable to automation, +thanks to its ability to understand and interpret context effectively. +[Problem]: However, in the context of GenAI, prompt engineering is a critical +factor for success. Despite this, we currently lack tools and methods to +systematically assess and determine the most effective prompt patterns to +employ for a particular RE task. [Method]: Two tasks related to requirements, +specifically requirement classification and tracing, were automated using the +GPT-3.5 turbo API. The performance evaluation involved assessing various +prompts created using 5 prompt patterns and implemented programmatically to +perform the selected RE tasks, focusing on metrics such as precision, recall, +accuracy, and F-Score. [Results]: This paper evaluates the effectiveness of the +5 prompt patterns' ability to make GPT-3.5 turbo perform the selected RE tasks +and offers recommendations on which prompt pattern to use for a specific RE +task. Additionally, it also provides an evaluation framework as a reference for +researchers and practitioners who want to evaluate different prompt patterns +for different RE tasks. +" +Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners,Ningyu Zhang,http://arxiv.org/pdf/2108.13161v7.pdf,2021-08-30,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.ir', 'cs.lg']",2108.13161v7.pdf," Large-scale pre-trained language models have contributed significantly to +natural language processing by demonstrating remarkable abilities as few-shot +learners. However, their effectiveness depends mainly on scaling the model +parameters and prompt design, hindering their implementation in most real-world +applications. This study proposes a novel pluggable, extensible, and efficient +approach named DifferentiAble pRompT (DART), which can convert small language +models into better few-shot learners without any prompt engineering. The main +principle behind this approach involves reformulating potential natural +language processing tasks into the task of a pre-trained language model and +differentially optimizing the prompt template as well as the target label with +backpropagation. Furthermore, the proposed approach can be: (i) Plugged to any +pre-trained language models; (ii) Extended to widespread classification tasks. +A comprehensive evaluation of standard NLP tasks demonstrates that the proposed +approach achieves a better few-shot performance. Code is available in +https://github.com/zjunlp/DART. 
+" +ActionCLIP: A New Paradigm for Video Action Recognition,Mengmeng Wang,http://arxiv.org/pdf/2109.08472v1.pdf,2021-09-17,['cs.cv'],2109.08472v1.pdf," The canonical approach to video action recognition dictates a neural model to +do a classic and standard 1-of-N majority vote task. They are trained to +predict a fixed set of predefined categories, limiting their transferable +ability on new datasets with unseen concepts. In this paper, we provide a new +perspective on action recognition by attaching importance to the semantic +information of label texts rather than simply mapping them into numbers. +Specifically, we model this task as a video-text matching problem within a +multimodal learning framework, which strengthens the video representation with +more semantic language supervision and enables our model to do zero-shot action +recognition without any further labeled data or parameters requirements. +Moreover, to handle the deficiency of label texts and make use of tremendous +web data, we propose a new paradigm based on this multimodal learning framework +for action recognition, which we dub ""pre-train, prompt and fine-tune"". This +paradigm first learns powerful representations from pre-training on a large +amount of web image-text or video-text data. Then it makes the action +recognition task to act more like pre-training problems via prompt engineering. +Finally, it end-to-end fine-tunes on target datasets to obtain strong +performance. We give an instantiation of the new paradigm, ActionCLIP, which +not only has superior and flexible zero-shot/few-shot transfer ability but also +reaches a top performance on general action recognition task, achieving 83.8% +top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone. Code is +available at https://github.com/sallymmx/ActionCLIP.git +" +CLIP-Adapter: Better Vision-Language Models with Feature Adapters,Peng Gao,http://arxiv.org/pdf/2110.04544v1.pdf,2021-10-09,"['cs.cv', 'cs.cl']",2110.04544v1.pdf," Large-scale contrastive vision-language pre-training has shown significant +progress in visual representation learning. Unlike traditional visual systems +trained by a fixed set of discrete labels, a new paradigm was introduced in +\cite{radford2021learning} to directly learn to align images with raw texts in +an open-vocabulary setting. On downstream tasks, a carefully chosen text prompt +is employed to make zero-shot predictions.~To avoid non-trivial prompt +engineering, context optimization \cite{zhou2021coop} has been proposed to +learn continuous vectors as task-specific prompts with few-shot training +examples.~In this paper, we show that there is an alternative path to achieve +better vision-language models other than prompt tuning.~While prompt tuning is +for the textual inputs, we propose CLIP-Adapter to conduct fine-tuning with +feature adapters on either visual or language branch. Specifically, +CLIP-Adapter adopts an additional bottleneck layer to learn new features and +performs residual-style feature blending with the original pre-trained +features.~As a consequence, CLIP-Adapter is able to outperform context +optimization while maintains a simple design. Experiments and extensive +ablation studies on various visual classification tasks demonstrate the +effectiveness of our approach. 
+" +Symbolic Knowledge Distillation: from General Language Models to Commonsense Models,Peter West,http://arxiv.org/pdf/2110.07178v2.pdf,2021-10-14,['cs.cl'],2110.07178v2.pdf," The common practice for training commonsense models has gone +from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in +order to train commonsense models. In this work, we investigate an alternative, +from-machine-to-corpus-to-machine: general language models author these +commonsense knowledge graphs to train commonsense models. Our study leads to a +new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge +Distillation (Hinton et al., 2015), our approach uses larger models to teach +smaller models. A key difference is that we distill knowledge symbolically-as +text-in addition to the neural model. We also distill only one aspect-the +commonsense of a general language model teacher, allowing the student to be a +different type, a commonsense model. Altogether, we show that careful prompt +engineering and a separately trained critic model allow us to selectively +distill high-quality causal commonsense from GPT-3, a general language model. +Empirical results demonstrate that, for the first time, a human-authored +commonsense knowledge graph is surpassed by our automatically distilled variant +in all three criteria: quantity, quality, and diversity. In addition, it +results in a neural commonsense model that surpasses the teacher model's +commonsense capabilities despite its 100x smaller size. We apply this to the +ATOMIC resource, and share our new symbolic knowledge graph and commonsense +models. +" +Red Teaming Language Models with Language Models,Ethan Perez,http://arxiv.org/pdf/2202.03286v1.pdf,2022-02-07,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2202.03286v1.pdf," Language Models (LMs) often cannot be deployed because of their potential to +harm users in hard-to-predict ways. Prior work identifies harmful behaviors +before deployment by using human annotators to hand-write test cases. However, +human annotation is expensive, limiting the number and diversity of test cases. +In this work, we automatically find cases where a target LM behaves in a +harmful way, by generating test cases (""red teaming"") using another LM. We +evaluate the target LM's replies to generated test questions using a classifier +trained to detect offensive content, uncovering tens of thousands of offensive +replies in a 280B parameter LM chatbot. We explore several methods, from +zero-shot generation to reinforcement learning, for generating test cases with +varying levels of diversity and difficulty. Furthermore, we use prompt +engineering to control LM-generated test cases to uncover a variety of other +harms, automatically finding groups of people that the chatbot discusses in +offensive ways, personal and hospital phone numbers generated as the chatbot's +own contact info, leakage of private training data in generated text, and harms +that occur over the course of a conversation. Overall, LM-based red teaming is +one promising tool (among many needed) for finding and fixing diverse, +undesirable LM behaviors before impacting users. +" +Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model,Yu Du,http://arxiv.org/pdf/2203.14940v1.pdf,2022-03-28,['cs.cv'],2203.14940v1.pdf," Recently, vision-language pre-training shows great potential in +open-vocabulary object detection, where detectors trained on base classes are +devised for detecting new classes. 
The class text embedding is firstly +generated by feeding prompts to the text encoder of a pre-trained +vision-language model. It is then used as the region classifier to supervise +the training of a detector. The key element that leads to the success of this +model is the proper prompt, which requires careful words tuning and ingenious +design. To avoid laborious prompt engineering, there are some prompt +representation learning methods being proposed for the image classification +task, which however can only be sub-optimal solutions when applied to the +detection task. In this paper, we introduce a novel method, detection prompt +(DetPro), to learn continuous prompt representations for open-vocabulary object +detection based on the pre-trained vision-language model. Different from the +previous classification-oriented methods, DetPro has two highlights: 1) a +background interpretation scheme to include the proposals in image background +into the prompt training; 2) a context grading scheme to separate proposals in +image foreground for tailored prompt training. We assemble DetPro with ViLD, a +recent state-of-the-art open-world object detector, and conduct experiments on +the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365 +datasets. Experimental results show that our DetPro outperforms the baseline +ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the +novel classes of LVIS. Code and models are available at +https://github.com/dyabel/detpro. +" +No Token Left Behind: Explainability-Aided Image Classification and Generation,Roni Paiss,http://arxiv.org/pdf/2204.04908v2.pdf,2022-04-11,['cs.cv'],2204.04908v2.pdf," The application of zero-shot learning in computer vision has been +revolutionized by the use of image-text matching models. The most notable +example, CLIP, has been widely used for both zero-shot classification and +guiding generative models with a text prompt. However, the zero-shot use of +CLIP is unstable with respect to the phrasing of the input text, making it +necessary to carefully engineer the prompts used. We find that this instability +stems from a selective similarity score, which is based only on a subset of the +semantically meaningful input tokens. To mitigate it, we present a novel +explainability-based approach, which adds a loss term to ensure that CLIP +focuses on all relevant semantic parts of the input, in addition to employing +the CLIP similarity loss used in previous works. When applied to one-shot +classification through prompt engineering, our method yields an improvement in +the recognition rate, without additional training or fine-tuning. Additionally, +we show that CLIP guidance of generative models using our method significantly +improves the generated images. Finally, we demonstrate a novel use of CLIP +guidance for text-based image generation with spatial conditioning on object +location, by requiring the image explainability heatmap for each object to be +confined to a pre-determined bounding box. +" +On Measuring Social Biases in Prompt-Based Multi-Task Learning,Afra Feyza Akyürek,http://arxiv.org/pdf/2205.11605v1.pdf,2022-05-23,"['cs.cl', 'cs.cy']",2205.11605v1.pdf," Large language models trained on a mixture of NLP tasks that are converted +into a text-to-text format using prompts, can generalize into novel forms of +language and handle novel tasks. A large body of work within prompt engineering +attempts to understand the effects of input forms and prompts in achieving +superior performance. 
We consider an alternative measure and inquire whether +the way in which an input is encoded affects social biases promoted in outputs. +In this paper, we study T0, a large-scale multi-task text-to-text language +model trained using prompt-based learning. We consider two different forms of +semantically equivalent inputs: question-answer format and premise-hypothesis +format. We use an existing bias benchmark for the former BBQ and create the +first bias benchmark in natural language inference BBNLI with hand-written +hypotheses while also converting each benchmark into the other form. The +results on two benchmarks suggest that given two different formulations of +essentially the same input, T0 conspicuously acts more biased in question +answering form, which is seen during training, compared to premise-hypothesis +form which is unlike its training examples. Code and data are released under +https://github.com/feyzaakyurek/bbnli. +" +OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression,Wanhua Li,http://arxiv.org/pdf/2206.02338v2.pdf,2022-06-06,['cs.cv'],2206.02338v2.pdf," This paper presents a language-powered paradigm for ordinal regression. +Existing methods usually treat each rank as a category and employ a set of +weights to learn these concepts. These methods are easy to overfit and usually +attain unsatisfactory performance as the learned concepts are mainly derived +from the training set. Recent large pre-trained vision-language models like +CLIP have shown impressive performance on various visual tasks. In this paper, +we propose to learn the rank concepts from the rich semantic CLIP latent space. +Specifically, we reformulate this task as an image-language matching problem +with a contrastive objective, which regards labels as text and obtains a +language prototype from a text encoder for each rank. While prompt engineering +for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable +prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists +of learnable context tokens and learnable rank embeddings; The learnable rank +embeddings are constructed by explicitly modeling numerical continuity, +resulting in well-ordered, compact language prototypes in the CLIP space. Once +learned, we can only save the language prototypes and discard the huge language +model, resulting in zero additional computational overhead compared with the +linear head counterpart. Experimental results show that our paradigm achieves +competitive performance in general ordinal regression tasks, and gains +improvements in few-shot and distribution shift settings for age estimation. +The code is available at https://github.com/xk-huang/OrdinalCLIP. +" +P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting,Ziyi Wang,http://arxiv.org/pdf/2208.02812v2.pdf,2022-08-04,"['cs.cv', 'cs.ai', 'cs.lg']",2208.02812v2.pdf," Nowadays, pre-training big models on large-scale datasets has become a +crucial topic in deep learning. The pre-trained models with high representation +ability and transferability achieve a great success and dominate many +downstream tasks in natural language processing and 2D vision. However, it is +non-trivial to promote such a pretraining-tuning paradigm to the 3D vision, +given the limited training data that are relatively inconvenient to collect. 
In +this paper, we provide a new perspective of leveraging pre-trained 2D knowledge +in 3D domain to tackle this problem, tuning pre-trained image models with the +novel Point-to-Pixel prompting for point cloud analysis at a minor parameter +cost. Following the principle of prompting engineering, we transform point +clouds into colorful images with geometry-preserved projection and +geometry-aware coloring to adapt to pre-trained image models, whose weights are +kept frozen during the end-to-end optimization of point cloud analysis tasks. +We conduct extensive experiments to demonstrate that cooperating with our +proposed Point-to-Pixel Prompting, better pre-trained image model will lead to +consistently better performance in 3D vision. Enjoying prosperous development +from image pre-training field, our method attains 89.3% accuracy on the hardest +setting of ScanObjectNN, surpassing conventional point cloud models with much +fewer trainable parameters. Our framework also exhibits very competitive +performance on ModelNet classification and ShapeNet Part Segmentation. Code is +available at https://github.com/wangzy22/P2P. +" +Unsupervised Hashing with Semantic Concept Mining,Rong-Cheng Tu,http://arxiv.org/pdf/2209.11475v1.pdf,2022-09-23,"['cs.cv', 'cs.ir']",2209.11475v1.pdf," Recently, to improve the unsupervised image retrieval performance, plenty of +unsupervised hashing methods have been proposed by designing a semantic +similarity matrix, which is based on the similarities between image features +extracted by a pre-trained CNN model. However, most of these methods tend to +ignore high-level abstract semantic concepts contained in images. Intuitively, +concepts play an important role in calculating the similarity among images. In +real-world scenarios, each image is associated with some concepts, and the +similarity between two images will be larger if they share more identical +concepts. Inspired by the above intuition, in this work, we propose a novel +Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which +leverages a VLP model to construct a high-quality similarity matrix. +Specifically, a set of randomly chosen concepts is first collected. Then, by +employing a vision-language pretraining (VLP) model with the prompt engineering +which has shown strong power in visual representation learning, the set of +concepts is denoised according to the training images. Next, the proposed +method UHSCM applies the VLP model with prompting again to mine the concept +distribution of each image and construct a high-quality semantic similarity +matrix based on the mined concept distributions. Finally, with the semantic +similarity matrix as guiding information, a novel hashing loss with a modified +contrastive loss based regularization item is proposed to optimize the hashing +network. Extensive experiments on three benchmark datasets show that the +proposed method outperforms the state-of-the-art baselines in the image +retrieval task. +" +Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning,Louis Castricato,http://arxiv.org/pdf/2210.07792v2.pdf,2022-10-14,['cs.cl'],2210.07792v2.pdf," Controlled automated story generation seeks to generate natural language +stories satisfying constraints from natural language critiques or preferences. +Existing methods to control for story preference utilize prompt engineering +which is labor intensive and often inconsistent. 
They may also use +logit-manipulation methods which require annotated datasets to exist for the +desired attributes. To address these issues, we first train a contrastive +bi-encoder model to align stories with corresponding human critiques, named +CARP, building a general purpose preference model. This is subsequently used as +a reward function to fine-tune a generative language model via reinforcement +learning. However, simply fine-tuning a generative language model with a +contrastive reward model does not always reliably result in a story generation +system capable of generating stories that meet user preferences. To increase +story generation robustness we further fine-tune the contrastive reward model +using a prompt-learning technique. A human participant study is then conducted +comparing generations from our full system, ablations, and two baselines. We +show that the full fine-tuning pipeline results in a story generator preferred +over a LLM 20x as large as well as logit-based methods. This motivates the use +of contrastive learning for general purpose human preference modeling. +" +Towards Equitable Representation in Text-to-Image Synthesis Models with the Cross-Cultural Understanding Benchmark (CCUB) Dataset,Zhixuan Liu,http://arxiv.org/pdf/2301.12073v2.pdf,2023-01-28,['cs.cv'],2301.12073v2.pdf," It has been shown that accurate representation in media improves the +well-being of the people who consume it. By contrast, inaccurate +representations can negatively affect viewers and lead to harmful perceptions +of other cultures. To achieve inclusive representation in generated images, we +propose a culturally-aware priming approach for text-to-image synthesis using a +small but culturally curated dataset that we collected, known here as +Cross-Cultural Understanding Benchmark (CCUB) Dataset, to fight the bias +prevalent in giant datasets. Our proposed approach is comprised of two +fine-tuning techniques: (1) Adding visual context via fine-tuning a pre-trained +text-to-image synthesis model, Stable Diffusion, on the CCUB text-image pairs, +and (2) Adding semantic context via automated prompt engineering using the +fine-tuned large language model, GPT-3, trained on our CCUB culturally-aware +text data. CCUB dataset is curated and our approach is evaluated by people who +have a personal relationship with that particular culture. Our experiments +indicate that priming using both text and image is effective in improving the +cultural relevance and decreasing the offensiveness of generated images while +maintaining quality. +" +Trash to Treasure: Using text-to-image models to inform the design of physical artefacts,Amy Smith,http://arxiv.org/pdf/2302.00561v1.pdf,2023-02-01,['cs.ai'],2302.00561v1.pdf," Text-to-image generative models have recently exploded in popularity and +accessibility. Yet so far, use of these models in creative tasks that bridge +the 2D digital world and the creation of physical artefacts has been +understudied. We conduct a pilot study to investigate if and how text-to-image +models can be used to assist in upstream tasks within the creative process, +such as ideation and visualization, prior to a sculpture-making activity. +Thirty participants selected sculpture-making materials and generated three +images using the Stable Diffusion text-to-image generator, each with text +prompts of their choice, with the aim of informing and then creating a physical +sculpture. 
The majority of participants (23/30) reported that the generated +images informed their sculptures, and 28/30 reported interest in using +text-to-image models to help them in a creative task in the future. We identify +several prompt engineering strategies and find that a participant's prompting +strategy relates to their stage in the creative process. We discuss how our +findings can inform support for users at different stages of the design process +and for using text-to-image models for physical artefact design. +" +"Chat2VIS: Generating Data Visualisations via Natural Language using ChatGPT, Codex and GPT-3 Large Language Models",Paula Maddigan,http://arxiv.org/pdf/2302.02094v2.pdf,2023-02-04,['cs.hc'],2302.02094v2.pdf," The field of data visualisation has long aimed to devise solutions for +generating visualisations directly from natural language text. Research in +Natural Language Interfaces (NLIs) has contributed towards the development of +such techniques. However, the implementation of workable NLIs has always been +challenging due to the inherent ambiguity of natural language, as well as in +consequence of unclear and poorly written user queries which pose problems for +existing language models in discerning user intent. Instead of pursuing the +usual path of developing new iterations of language models, this study uniquely +proposes leveraging the advancements in pre-trained large language models +(LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly +into code for appropriate visualisations. This paper presents a novel system, +Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates +how, with effective prompt engineering, the complex problem of language +understanding can be solved more efficiently, resulting in simpler and more +accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs +together with the proposed prompts offer a reliable approach to rendering +visualisations from natural language queries, even when queries are highly +misspecified and underspecified. This solution also presents a significant +reduction in costs for the development of NLI systems, while attaining greater +visualisation inference abilities compared to traditional NLP approaches that +use hand-crafted grammar rules and tailored models. This study also presents +how LLM prompts can be constructed in a way that preserves data security and +privacy while being generalisable to different datasets. This work compares the +performance of GPT-3, Codex and ChatGPT across a number of case studies and +contrasts the performances with prior studies. +" +CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets,Zachary Novack,http://arxiv.org/pdf/2302.02551v3.pdf,2023-02-06,"['cs.cv', 'cs.lg']",2302.02551v3.pdf," Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot +classification through their ability generate embeddings for each class based +on their (natural language) names. Prior work has focused on improving the +accuracy of these models through prompt engineering or by incorporating a small +amount of labeled downstream data (via finetuning). However, there has been +little focus on improving the richness of the class names themselves, which can +pose issues when class labels are coarsely-defined and are uninformative. 
We +propose Classification with Hierarchical Label Sets (or CHiLS), an alternative +strategy for zero-shot classification specifically designed for datasets with +implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each +class, produce a set of subclasses, using either existing label hierarchies or +by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though +these subclasses were the labels of interest; (iii) map the predicted subclass +back to its parent to produce the final prediction. Across numerous datasets +with underlying hierarchical structure, CHiLS leads to improved accuracy in +situations both with and without ground-truth hierarchical information. CHiLS +is simple to implement within existing zero-shot pipelines and requires no +additional training cost. Code is available at: +https://github.com/acmi-lab/CHILS. +" +"A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity",Yejin Bang,http://arxiv.org/pdf/2302.04023v2.pdf,2023-02-08,"['cs.cl', 'cs.ai']",2302.04023v2.pdf," This paper proposes a framework for quantitatively evaluating interactive +LLMs such as ChatGPT using publicly available data sets. We carry out an +extensive technical evaluation of ChatGPT using 23 data sets covering 8 +different common NLP application tasks. We evaluate the multitask, multilingual +and multi-modal aspects of ChatGPT based on these data sets and a newly +designed multimodal dataset. We find that ChatGPT outperforms LLMs with +zero-shot learning on most tasks and even outperforms fine-tuned models on some +tasks. We find that it is better at understanding non-Latin script languages +than generating them. It is able to generate multimodal content from textual +prompts, via an intermediate code generation step. Moreover, we find that +ChatGPT is 63.41% accurate on average in 10 different reasoning categories +under logical reasoning, non-textual reasoning, and commonsense reasoning, +hence making it an unreliable reasoner. It is, for example, better at deductive +than inductive reasoning. ChatGPT suffers from hallucination problems like +other LLMs and it generates more extrinsic hallucinations from its parametric +memory as it does not have access to an external knowledge base. Finally, the +interactive feature of ChatGPT enables human collaboration with the underlying +LLM to improve its performance, i.e, 8% ROUGE-1 on summarization and 2% ChrF++ +on machine translation, in a multi-turn ""prompt engineering"" fashion. We also +release codebase for evaluation set extraction. +" +Prompt Stealing Attacks Against Text-to-Image Generation Models,Xinyue Shen,http://arxiv.org/pdf/2302.09923v1.pdf,2023-02-20,"['cs.cr', 'cs.lg']",2302.09923v1.pdf," Text-to-Image generation models have revolutionized the artwork design +process and enabled anyone to create high-quality images by entering text +descriptions called prompts. Creating a high-quality prompt that consists of a +subject and several modifiers can be time-consuming and costly. In consequence, +a trend of trading high-quality prompts on specialized marketplaces has +emerged. In this paper, we propose a novel attack, namely prompt stealing +attack, which aims to steal prompts from generated images by text-to-image +generation models. Successful prompt stealing attacks direct violate the +intellectual property and privacy of prompt engineers and also jeopardize the +business model of prompt trading marketplaces. 
We first perform a large-scale +analysis on a dataset collected by ourselves and show that a successful prompt +stealing attack should consider a prompt's subject as well as its modifiers. We +then propose the first learning-based prompt stealing attack, PromptStealer, +and demonstrate its superiority over two baseline methods quantitatively and +qualitatively. We also make some initial attempts to defend PromptStealer. In +general, our study uncovers a new attack surface in the ecosystem created by +the popular text-to-image generation models. We hope our results can help to +mitigate the threat. To facilitate research in this field, we will share our +dataset and code with the community. +" +Controlled and Conditional Text to Image Generation with Diffusion Prior,Pranav Aggarwal,http://arxiv.org/pdf/2302.11710v2.pdf,2023-02-23,['cs.cv'],2302.11710v2.pdf," Denoising Diffusion models have shown remarkable performance in generating +diverse, high quality images from text. Numerous techniques have been proposed +on top of or in alignment with models like Stable Diffusion and Imagen that +generate images directly from text. A lesser explored approach is DALLE-2's two +step process comprising a Diffusion Prior that generates a CLIP image embedding +from text and a Diffusion Decoder that generates an image from a CLIP image +embedding. We explore the capabilities of the Diffusion Prior and the +advantages of an intermediate CLIP representation. We observe that Diffusion +Prior can be used in a memory and compute efficient way to constrain the +generation to a specific domain without altering the larger Diffusion Decoder. +Moreover, we show that the Diffusion Prior can be trained with additional +conditional information such as color histogram to further control the +generation. We show quantitatively and qualitatively that the proposed +approaches perform better than prompt engineering for domain specific +generation and existing baselines for color conditioned generation. We believe +that our observations and results will instigate further research into the +diffusion prior and uncover more of its capabilities. +" +EvoPrompting: Language Models for Code-Level Neural Architecture Search,Angelica Chen,http://arxiv.org/pdf/2302.14838v2.pdf,2023-02-28,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.lg']",2302.14838v2.pdf," Given the recent impressive accomplishments of language models (LMs) for code +generation, we explore the use of LMs as adaptive mutation and crossover +operators for an evolutionary neural architecture search (NAS) algorithm. While +NAS still proves too difficult a task for LMs to succeed at solely through +prompting, we find that the combination of evolutionary prompt engineering with +soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse +and high performing models. We first demonstrate that EvoPrompting is effective +on the computationally efficient MNIST-1D dataset, where EvoPrompting produces +convolutional architecture variants that outperform both those designed by +human experts and naive few-shot prompting in terms of accuracy and model size. +We then apply our method to searching for graph neural networks on the CLRS +Algorithmic Reasoning Benchmark, where EvoPrompting is able to design novel +architectures that outperform current state-of-the-art models on 21 out of 30 +algorithmic reasoning tasks while maintaining similar model size. 
EvoPrompting +is successful at designing accurate and efficient neural network architectures +across a variety of machine learning tasks, while also being general enough for +easy adaptation to other tasks beyond neural network design. +" +Extracting Accurate Materials Data from Research Papers with Conversational Language Models and Prompt Engineering,Maciej P. Polak,http://arxiv.org/pdf/2303.05352v2.pdf,2023-03-07,"['cs.cl', 'cond-mat.mtrl-sci']",2303.05352v2.pdf," There has been a growing effort to replace hand extraction of data from +research papers with automated data extraction based on natural language +processing, language models, and recently, large language models (LLMs). +Although these methods enable efficient extraction of data from large sets of +research papers, they require a significant amount of up-front effort, +expertise, and coding. In this work we propose the ChatExtract method that can +fully automate very accurate data extraction with minimal initial effort and +background, using an advanced conversational LLM. ChatExtract consists of a set +of engineered prompts applied to a conversational LLM that both identify +sentences with data, extract that data, and assure the data's correctness +through a series of follow-up questions. These follow-up questions largely +overcome known issues with LLMs providing factually inaccurate responses. +ChatExtract can be applied with any conversational LLMs and yields very high +quality data extraction. In tests on materials data we find precision and +recall both close to 90% from the best conversational LLMs, like ChatGPT-4. We +demonstrate that the exceptional performance is enabled by the information +retention in a conversational model combined with purposeful redundancy and +introducing uncertainty through follow-up prompts. These results suggest that +approaches similar to ChatExtract, due to their simplicity, transferability, +and accuracy are likely to become powerful tools for data extraction in the +near future. Finally, databases for critical cooling rates of metallic glasses +and yield strengths of high entropy alloys are developed using ChatExtract. +" +On Codex Prompt Engineering for OCL Generation: An Empirical Study,Seif Abukhalaf,http://arxiv.org/pdf/2303.16244v1.pdf,2023-03-28,"['cs.se', 'cs.ai']",2303.16244v1.pdf," The Object Constraint Language (OCL) is a declarative language that adds +constraints and object query expressions to MOF models. Despite its potential +to provide precision and conciseness to UML models, the unfamiliar syntax of +OCL has hindered its adoption. Recent advancements in LLMs, such as GPT-3, have +shown their capability in many NLP tasks, including semantic parsing and text +generation. Codex, a GPT-3 descendant, has been fine-tuned on publicly +available code from GitHub and can generate code in many programming languages. +We investigate the reliability of OCL constraints generated by Codex from +natural language specifications. To achieve this, we compiled a dataset of 15 +UML models and 168 specifications and crafted a prompt template with slots to +populate with UML information and the target task, using both zero- and +few-shot learning methods. By measuring the syntactic validity and execution +accuracy metrics of the generated OCL constraints, we found that enriching the +prompts with UML information and enabling few-shot learning increases the +reliability of the generated OCL constraints. 
Furthermore, the results reveal a +close similarity based on sentence embedding between the generated OCL +constraints and the human-written ones in the ground truth, implying a level of +clarity and understandability in the generated OCL constraints by Codex. +" +Ten Quick Tips for Harnessing the Power of ChatGPT/GPT-4 in Computational Biology,Tiago Lubiana,http://arxiv.org/pdf/2303.16429v1.pdf,2023-03-29,"['q-bio.ot', '92-04']",2303.16429v1.pdf," The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in the +scientific community. ChatGPT is a general-purpose chatbot powered by large +language models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerous +fields, including computational biology. In this article, we offer ten tips +based on our experience with ChatGPT to assist computational biologists in +optimizing their workflows. We have collected relevant prompts and reviewed the +nascent literature in the field, compiling tips we project to remain pertinent +for future ChatGPT and LLM iterations, ranging from code refactoring to +scientific writing to prompt engineering. We hope our work will help +bioinformaticians to complement their workflows while staying aware of the +various implications of using this technology. Additionally, to track new and +creative applications for bioinformatics tools such as ChatGPT, we have +established a GitHub repository at +https://github.com/csbl-br/awesome-compbio-chatgpt. Our belief is that ethical +adherence to ChatGPT and other LLMs will increase the efficiency of +computational biologists, ultimately advancing the pace of scientific discovery +in the life sciences. +" +Humans in Humans Out: On GPT Converging Toward Common Sense in both Success and Failure,Philipp Koralus,http://arxiv.org/pdf/2303.17276v1.pdf,2023-03-30,"['cs.ai', 'cs.cl', 'cs.hc', 'cs.lg', '00, 68', 'i.2.0; i.2.6']",2303.17276v1.pdf," Increase in computational scale and fine-tuning has seen a dramatic +improvement in the quality of outputs of large language models (LLMs) like GPT. +Given that both GPT-3 and GPT-4 were trained on large quantities of +human-generated text, we might ask to what extent their outputs reflect +patterns of human thinking, both for correct and incorrect cases. The Erotetic +Theory of Reason (ETR) provides a symbolic generative model of both human +success and failure in thinking, across propositional, quantified, and +probabilistic reasoning, as well as decision-making. We presented GPT-3, +GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a +recent book-length presentation of ETR, consisting of experimentally verified +data-points on human judgment and extrapolated data-points predicted by ETR, +with correct inference patterns as well as fallacies and framing effects (the +ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory +inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3 +showed evidence of ETR-predicted outputs for 59% of these examples, rising to +77% in GPT-3.5 and 75% in GPT-4. Remarkably, the production of human-like +fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in +GPT-4. This suggests that larger and more advanced LLMs may develop a tendency +toward more human-like mistakes, as relevant thought patterns are inherent in +human-produced training data. 
According to ETR, the same fundamental patterns +are involved both in successful and unsuccessful ordinary reasoning, so that +the ""bad"" cases could paradoxically be learned from the ""good"" cases. We +further present preliminary evidence that ETR-inspired prompt engineering could +reduce instances of these mistakes. +" +Pair Programming with Large Language Models for Sampling and Estimation of Copulas,Jan Górecki,http://arxiv.org/pdf/2303.18116v1.pdf,2023-03-31,"['cs.cl', 'stat.co', '65c60, 68n19, 68t50']",2303.18116v1.pdf," Without writing a single line of code by a human, an example Monte Carlo +simulation based application for stochastic dependence modeling with copulas is +developed using a state-of-the-art large language model (LLM) fine-tuned for +conversations. This includes interaction with ChatGPT in natural language and +using mathematical formalism, which, under careful supervision by a +human-expert, led to producing a working code in MATLAB, Python and R for +sampling from a given copula model, evaluation of the model's density, +performing maximum likelihood estimation, optimizing the code for parallel +computing for CPUs as well as for GPUs, and visualization of the computed +results. In contrast to other emerging studies that assess the accuracy of LLMs +like ChatGPT on tasks from a selected area, this work rather investigates ways +how to achieve a successful solution of a standard statistical task in a +collaboration of a human-expert and artificial intelligence (AI). Particularly, +through careful prompt engineering, we separate successful solutions generated +by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related +pros and cons. It is demonstrated that if the typical pitfalls are avoided, we +can substantially benefit from collaborating with an AI partner. For example, +we show that if ChatGPT is not able to provide a correct solution due to a lack +of or incorrect knowledge, the human-expert can feed it with the correct +knowledge, e.g., in the form of mathematical theorems and formulas, and make it +to apply the gained knowledge in order to provide a solution that is correct. +Such ability presents an attractive opportunity to achieve a programmed +solution even for users with rather limited knowledge of programming +techniques. +" +"Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing",Walid Hariri,http://arxiv.org/pdf/2304.02017v5.pdf,2023-03-27,['cs.cl'],2304.02017v5.pdf," Large language models have revolutionized the field of artificial +intelligence and have been used in various applications. Among these models, +ChatGPT (Chat Generative Pre-trained Transformer) has been developed by OpenAI, +it stands out as a powerful tool that has been widely adopted. ChatGPT has been +successfully applied in numerous areas, including chatbots, content generation, +language translation, personalized recommendations, and even medical diagnosis +and treatment. Its success in these applications can be attributed to its +ability to generate human-like responses, understand natural language, and +adapt to different contexts. Its versatility and accuracy make it a powerful +tool for natural language processing (NLP). However, there are also limitations +to ChatGPT, such as its tendency to produce biased responses and its potential +to perpetuate harmful language patterns. 
This article provides a comprehensive +overview of ChatGPT, its applications, advantages, and limitations. +Additionally, the paper emphasizes the importance of ethical considerations +when using this robust tool in real-world scenarios. Finally, This paper +contributes to ongoing discussions surrounding artificial intelligence and its +impact on vision and NLP domains by providing insights into prompt engineering +techniques. +" +TagGPT: Large Language Models are Zero-shot Multimodal Taggers,Chen Li,http://arxiv.org/pdf/2304.03022v1.pdf,2023-04-06,['cs.ir'],2304.03022v1.pdf," Tags are pivotal in facilitating the effective distribution of multimedia +content in various applications in the contemporary Internet era, such as +search engines and recommendation systems. Recently, large language models +(LLMs) have demonstrated impressive capabilities across a wide range of tasks. +In this work, we propose TagGPT, a fully automated system capable of tag +extraction and multimodal tagging in a completely zero-shot fashion. Our core +insight is that, through elaborate prompt engineering, LLMs are able to extract +and reason about proper tags given textual clues of multimodal data, e.g., OCR, +ASR, title, etc. Specifically, to automatically build a high-quality tag set +that reflects user intent and interests for a specific application, TagGPT +predicts large-scale candidate tags from a series of raw data via prompting +LLMs, filtered with frequency and semantics. Given a new entity that needs +tagging for distribution, TagGPT introduces two alternative options for +zero-shot tagging, i.e., a generative method with late semantic matching with +the tag set, and another selective method with early matching in prompts. It is +well noticed that TagGPT provides a system-level solution based on a modular +framework equipped with a pre-trained LLM (GPT-3.5 used here) and a sentence +embedding model (SimCSE used here), which can be seamlessly replaced with any +more advanced one you want. TagGPT is applicable for various modalities of data +in modern social media and showcases strong generalization ability to a wide +range of applications. We evaluate TagGPT on publicly available datasets, i.e., +Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to +existing hashtags and off-the-shelf taggers. Project page: +https://github.com/TencentARC/TagGPT. +" +Towards Interpretable Mental Health Analysis with Large Language Models,Kailai Yang,http://arxiv.org/pdf/2304.03347v4.pdf,2023-04-06,['cs.cl'],2304.03347v4.pdf," The latest large language models (LLMs) such as ChatGPT, exhibit strong +capabilities in automated mental health analysis. However, existing relevant +studies bear several limitations, including inadequate evaluations, lack of +prompting strategies, and ignorance of exploring LLMs for explainability. To +bridge these gaps, we comprehensively evaluate the mental health analysis and +emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore +the effects of different prompting strategies with unsupervised and distantly +supervised emotional information. Based on these prompts, we explore LLMs for +interpretable mental health analysis by instructing them to generate +explanations for each of their decisions. We convey strict human evaluations to +assess the quality of the generated explanations, leading to a novel dataset +with 163 human-assessed explanations. We benchmark existing automatic +evaluation metrics on this dataset to guide future related works. 
According to +the results, ChatGPT shows strong in-context learning ability but still has a +significant gap with advanced task-specific methods. Careful prompt engineering +with emotional cues and expert-written few-shot examples can also effectively +improve performance on mental health analysis. In addition, ChatGPT generates +explanations that approach human performance, showing its great potential in +explainable mental health analysis. +" +Low-code LLM: Visual Programming over LLMs,Yuzhe Cai,http://arxiv.org/pdf/2304.08103v2.pdf,2023-04-17,"['cs.cl', 'cs.hc']",2304.08103v2.pdf," Effectively utilizing LLMs for complex tasks is challenging, often involving +a time-consuming and uncontrollable prompt engineering process. This paper +introduces a novel human-LLM interaction framework, Low-code LLM. It +incorporates six types of simple low-code visual programming interactions, all +supported by clicking, dragging, or text editing, to achieve more controllable +and stable responses. Through visual interaction with a graphical user +interface, users can incorporate their ideas into the workflow without writing +trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM +that designs a structured planning workflow for complex tasks, which can be +correspondingly edited and confirmed by users through low-code visual +programming operations, and an Executing LLM that generates responses following +the user-confirmed workflow. We highlight three advantages of the low-code LLM: +controllable generation results, user-friendly human-LLM interaction, and +broadly applicable scenarios. We demonstrate its benefits using four typical +applications. By introducing this approach, we aim to bridge the gap between +humans and LLMs, enabling more effective and efficient utilization of LLMs for +complex tasks. Our system will be soon publicly available at LowCodeLLM. +" +Inducing anxiety in large language models increases exploration and bias,Julian Coda-Forno,http://arxiv.org/pdf/2304.11111v1.pdf,2023-04-21,"['cs.cl', 'cs.ai', 'cs.lg']",2304.11111v1.pdf," Large language models are transforming research on machine learning while +galvanizing public debates. Understanding not only when these models work well +and succeed but also why they fail and misbehave is of great societal +relevance. We propose to turn the lens of computational psychiatry, a framework +used to computationally describe and modify aberrant behavior, to the outputs +produced by these models. We focus on the Generative Pre-Trained Transformer +3.5 and subject it to tasks commonly studied in psychiatry. Our results show +that GPT-3.5 responds robustly to a common anxiety questionnaire, producing +higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be +predictably changed by using emotion-inducing prompts. Emotion-induction not +only influences GPT-3.5's behavior in a cognitive task measuring exploratory +decision-making but also influences its behavior in a previously-established +task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a +strong increase in biases when prompted with anxiety-inducing text. Thus, it is +likely that how prompts are communicated to large language models has a strong +influence on their behavior in applied settings. 
These results progress our +understanding of prompt engineering and demonstrate the usefulness of methods +taken from computational psychiatry for studying the capable algorithms to +which we increasingly delegate authority and autonomy. +" +Is ChatGPT the Ultimate Programming Assistant -- How far is it?,Haoye Tian,http://arxiv.org/pdf/2304.11938v2.pdf,2023-04-24,"['cs.se', 'cs.ai']",2304.11938v2.pdf," Recently, the ChatGPT LLM has received great attention: it can be used as a +bot for discussing source code, prompting it to suggest changes, provide +descriptions or even generate code. Typical demonstrations generally focus on +existing benchmarks, which may have been used in model training (i.e., data +leakage). To assess the feasibility of using an LLM as a useful assistant bot +for programmers, we must assess its realistic capabilities on unseen problems +as well as its capabilities on various tasks. In this paper, we present an +empirical study of ChatGPT's potential as a fully automated programming +assistant, focusing on the tasks of code generation, program repair, and code +summariziation. The study investigates ChatGPT's performance on common +programming problems and compares it with state-of-the-art approaches on two +benchmarks. Among several findings, our study shows that ChatGPT is effective +in dealing with common programming problems. However, our experiments also +reveal limitations in terms of its attention span: detailed descriptions will +constrain the focus of ChatGPT and prevent it from leveraging its vast +knowledge to solve the actual problem. Surprisingly, we have identified the +ability of ChatGPT to reason the original intention of the code. We expect +future work to build on this insight for dealing with the open question of the +oracle problem. Our findings contribute interesting insights to the development +of LLMs for programming assistance, notably by demonstrating the importance of +prompt engineering, and providing a better understanding of ChatGPT's practical +applications for software engineering. +" +Framing the News:From Human Perception to Large Language Model Inferences,David Alonso del Barrio,http://arxiv.org/pdf/2304.14456v1.pdf,2023-04-27,"['cs.cl', 'cs.hc']",2304.14456v1.pdf," Identifying the frames of news is important to understand the articles' +vision, intention, message to be conveyed, and which aspects of the news are +emphasized. Framing is a widely studied concept in journalism, and has emerged +as a new topic in computing, with the potential to automate processes and +facilitate the work of journalism professionals. In this paper, we study this +issue with articles related to the Covid-19 anti-vaccine movement. First, to +understand the perspectives used to treat this theme, we developed a protocol +for human labeling of frames for 1786 headlines of No-Vax movement articles of +European newspapers from 5 countries. Headlines are key units in the written +press, and worth of analysis as many people only read headlines (or use them to +guide their decision for further reading.) Second, considering advances in +Natural Language Processing (NLP) with large language models, we investigated +two approaches for frame inference of news headlines: first with a GPT-3.5 +fine-tuning approach, and second with GPT-3.5 prompt-engineering. 
Our work +contributes to the study and analysis of the performance that these models have +to facilitate journalistic tasks like classification of frames, while +understanding whether the models are able to replicate human perception in the +identification of these frames. +" +"ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations",Chunkit Chan,http://arxiv.org/pdf/2304.14827v2.pdf,2023-04-28,['cs.cl'],2304.14827v2.pdf," This paper aims to quantitatively evaluate the performance of ChatGPT, an +interactive large language model, on inter-sentential relations such as +temporal relations, causal relations, and discourse relations. Given ChatGPT's +promising performance across various tasks, we conduct extensive evaluations on +the whole test sets of 13 datasets, including temporal and causal relations, +PDTB2.0-based and dialogue-based discourse relations, and downstream +applications on discourse understanding. To achieve reliable results, we adopt +three tailored prompt templates for each task, including the zero-shot prompt +template, zero-shot prompt engineering (PE) template, and in-context learning +(ICL) prompt template, to establish the initial baseline scores for all popular +sentence-pair relation classification tasks for the first time. We find that +ChatGPT exhibits strong performance in detecting and reasoning about causal +relations, while it may not be proficient in identifying the temporal order +between two events. It can recognize most discourse relations with existing +explicit discourse connectives, but the implicit discourse relation still +remains a challenging task. Meanwhile, ChatGPT performs poorly in the dialogue +discourse parsing task that requires structural understanding in a dialogue +before being aware of the discourse relation. +" +Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns,Julian Hazell,http://arxiv.org/pdf/2305.06972v2.pdf,2023-05-11,"['cs.cy', 'cs.ai', 'cs.cr']",2305.06972v2.pdf," Recent progress in artificial intelligence (AI), particularly in the domain +of large language models (LLMs), has resulted in powerful and versatile +dual-use systems. Indeed, cognition can be put towards a wide variety of tasks, +some of which can result in harm. This study investigates how LLMs can be used +for spear phishing, a form of cybercrime that involves manipulating targets +into divulging sensitive information. I first explore LLMs' ability to assist +with the reconnaissance and message generation stages of a successful spear +phishing attack, where I find that advanced LLMs are capable of improving +cybercriminals' efficiency during these stages. To explore how LLMs can be used +to scale spear phishing campaigns, I then create unique spear phishing messages +for over 600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 +models. My findings reveal that these messages are not only realistic but also +cost-effective, with each email costing only a fraction of a cent to generate. +Next, I demonstrate how basic prompt engineering can circumvent safeguards +installed in LLMs by the reinforcement learning from human feedback fine-tuning +process, highlighting the need for more robust governance interventions aimed +at preventing misuse. To address these evolving risks, I propose two potential +solutions: structured access schemes, such as application programming +interfaces, and LLM-based defensive systems. 
+" +Text2Cohort: Democratizing the NCI Imaging Data Commons with Natural Language Cohort Discovery,Pranav Kulkarni,http://arxiv.org/pdf/2305.07637v2.pdf,2023-05-12,"['cs.lg', 'cs.cl', 'cs.hc', 'cs.ir']",2305.07637v2.pdf," The Imaging Data Commons (IDC) is a cloud-based database that provides +researchers with open access to cancer imaging data, with the goal of +facilitating collaboration in medical imaging research. However, querying the +IDC database for cohort discovery and access to imaging data has a significant +learning curve for researchers due to its complex nature. We developed +Text2Cohort, a large language model (LLM) based toolkit to facilitate +user-friendly and intuitive natural language cohort discovery in the IDC. +Text2Cohorts translates user input into IDC database queries using prompt +engineering and autocorrection and returns the query's response to the user. +Autocorrection resolves errors in queries by passing the errors back to the +model for interpretation and correction. We evaluate Text2Cohort on 50 natural +language user inputs ranging from information extraction to cohort discovery. +The resulting queries and outputs were verified by two computer scientists to +measure Text2Cohort's accuracy and F1 score. Text2Cohort successfully generated +queries and their responses with an 88% accuracy and F1 score of 0.94. However, +it failed to generate queries for 6/50 (12%) user inputs due to syntax and +semantic errors. Our results indicate that Text2Cohort succeeded at generating +queries with correct responses, but occasionally failed due to a lack of +understanding of the data schema. Despite these shortcomings, Text2Cohort +demonstrates the utility of LLMs to enable researchers to discover and curate +cohorts using data hosted on IDC with high levels of accuracy using natural +language in a more intuitive and user-friendly way. +" +Sensitivity and Robustness of Large Language Models to Prompt Template in Japanese Text Classification Tasks,Chengguang Gan,http://arxiv.org/pdf/2305.08714v2.pdf,2023-05-15,"['cs.cl', 'cs.ai']",2305.08714v2.pdf," Prompt engineering relevance research has seen a notable surge in recent +years, primarily driven by advancements in pre-trained language models and +large language models. However, a critical issue has been identified within +this domain: the inadequate of sensitivity and robustness of these models +towards Prompt Templates, particularly in lesser-studied languages such as +Japanese. This paper explores this issue through a comprehensive evaluation of +several representative Large Language Models (LLMs) and a widely-utilized +pre-trained model(PLM). These models are scrutinized using a benchmark dataset +in Japanese, with the aim to assess and analyze the performance of the current +multilingual models in this context. Our experimental results reveal startling +discrepancies. A simple modification in the sentence structure of the Prompt +Template led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44. +This observation underscores the fact that even the highly performance GPT-4 +model encounters significant stability issues when dealing with diverse +Japanese prompt templates, rendering the consistency of the model's output +results questionable. In light of these findings, we conclude by proposing +potential research trajectories to further enhance the development and +performance of Large Language Models in their current stage. 
+" +Knowledge Graph Completion Models are Few-shot Learners: An Empirical Study of Relation Labeling in E-commerce with LLMs,Jiao Chen,http://arxiv.org/pdf/2305.09858v1.pdf,2023-05-17,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg']",2305.09858v1.pdf," Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system +performance by providing structured information about entities and their +relationships, such as complementary or substitutable relations between +products or product types, which can be utilized in recommender systems. +However, relation labeling in KGs remains a challenging task due to the dynamic +nature of e-commerce domains and the associated cost of human labor. Recently, +breakthroughs in Large Language Models (LLMs) have shown surprising results in +numerous natural language processing tasks. In this paper, we conduct an +empirical study of LLMs for relation labeling in e-commerce KGs, investigating +their powerful learning capabilities in natural language and effectiveness in +predicting relations between product types with limited labeled data. We +evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets, +demonstrating their ability to achieve competitive performance compared to +humans on relation labeling tasks using just 1 to 5 labeled examples per +relation. Additionally, we experiment with different prompt engineering +techniques to examine their impact on model performance. Our results show that +LLMs significantly outperform existing KG completion models in relation +labeling for e-commerce KGs and exhibit performance strong enough to replace +human labeling. +" +VisorGPT: Learning Visual Prior via Generative Pre-Training,Jinheng Xie,http://arxiv.org/pdf/2305.13777v4.pdf,2023-05-23,['cs.cv'],2305.13777v4.pdf," Various stuff and things in visual data possess specific traits, which can be +learned by deep neural networks and are implicitly represented as the visual +prior, e.g., object location and shape, in the model. Such prior potentially +impacts many vision tasks. For example, in conditional image synthesis, spatial +conditions failing to adhere to the prior can result in visually inaccurate +synthetic results. This work aims to explicitly learn the visual prior and +enable the customization of sampling. Inspired by advances in language +modeling, we propose to learn Visual prior via Generative Pre-Training, dubbed +VisorGPT. By discretizing visual locations of objects, e.g., bounding boxes, +human pose, and instance masks, into sequences, VisorGPT can model visual prior +through likelihood maximization. Besides, prompt engineering is investigated to +unify various visual locations and enable customized sampling of sequential +outputs from the learned prior. Experimental results demonstrate that VisorGPT +can effectively model the visual prior, which can be employed for many vision +tasks, such as customizing accurate human pose for conditional image synthesis +models like ControlNet. Code will be released at +https://github.com/Sierkinhane/VisorGPT. +" +Game of Tones: Faculty detection of GPT-4 generated content in university assessments,Mike Perkins,http://arxiv.org/pdf/2305.18081v1.pdf,2023-05-29,"['cs.cy', 'cs.ai', 'k.4']",2305.18081v1.pdf," This study explores the robustness of university assessments against the use +of Open AI's Generative Pre-Trained Transformer 4 (GPT-4) generated content and +evaluates the ability of academic staff to detect its use when supported by the +Turnitin Artificial Intelligence (AI) detection tool. 
The research involved +twenty-two GPT-4 generated submissions being created and included in the +assessment process to be marked by fifteen different faculty members. The study +reveals that although the detection tool identified 91% of the experimental +submissions as containing some AI-generated content, the total detected content +was only 54.8%. This suggests that the use of adversarial techniques regarding +prompt engineering is an effective method in evading AI detection tools and +highlights that improvements to AI detection software are needed. Using the +Turnitin AI detect tool, faculty reported 54.5% of the experimental submissions +to the academic misconduct process, suggesting the need for increased awareness +and training into these tools. Genuine submissions received a mean score of +54.4, whereas AI-generated content scored 52.3, indicating the comparable +performance of GPT-4 in real-life situations. Recommendations include adjusting +assessment strategies to make them more resistant to the use of AI tools, using +AI-inclusive assessment where possible, and providing comprehensive training +programs for faculty and students. This research contributes to understanding +the relationship between AI-generated content and academic assessment, urging +further investigation to preserve academic integrity. +" +Responsible Task Automation: Empowering Large Language Models as Responsible Task Automators,Zhizheng Zhang,http://arxiv.org/pdf/2306.01242v1.pdf,2023-06-02,"['cs.ai', 'cs.cl']",2306.01242v1.pdf," The recent success of Large Language Models (LLMs) signifies an impressive +stride towards artificial general intelligence. They have shown a promising +prospect in automatically completing tasks upon user instructions, functioning +as brain-like coordinators. The associated risks will be revealed as we +delegate an increasing number of tasks to machines for automated completion. A +big question emerges: how can we make machines behave responsibly when helping +humans automate tasks as personal copilots? In this paper, we explore this +question in depth from the perspectives of feasibility, completeness and +security. In specific, we present Responsible Task Automation (ResponsibleTA) +as a fundamental framework to facilitate responsible collaboration between +LLM-based coordinators and executors for task automation with three empowered +capabilities: 1) predicting the feasibility of the commands for executors; 2) +verifying the completeness of executors; 3) enhancing the security (e.g., the +protection of users' privacy). We further propose and compare two paradigms for +implementing the first two capabilities. One is to leverage the generic +knowledge of LLMs themselves via prompt engineering while the other is to adopt +domain-specific learnable models. Moreover, we introduce a local memory +mechanism for achieving the third capability. We evaluate our proposed +ResponsibleTA on UI task automation and hope it could bring more attentions to +ensuring LLMs more responsible in diverse scenarios. The research project +homepage is at +https://task-automation-research.github.io/responsible_task_automation. +" +A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering,Chaoning Zhang,http://arxiv.org/pdf/2306.06211v3.pdf,2023-05-12,['cs.cv'],2306.06211v3.pdf," Segment anything model (SAM) developed by Meta AI Research has recently +attracted significant attention. 
Trained on a large segmentation dataset of +over 1 billion masks, SAM is capable of segmenting any object on a certain +image. In the original SAM work, the authors turned to zero-short transfer +tasks (like edge detection) for evaluating the performance of SAM. Recently, +numerous works have attempted to investigate the performance of SAM in various +scenarios to recognize and segment objects. Moreover, numerous projects have +emerged to show the versatility of SAM as a foundation model by combining it +with other models, like Grounding DINO, Stable Diffusion, ChatGPT, etc. With +the relevant papers and projects increasing exponentially, it is challenging +for the readers to catch up with the development of SAM. To this end, this work +conducts the first yet comprehensive survey on SAM. This is an ongoing project +and we intend to update the manuscript on a regular basis. Therefore, readers +are welcome to contact us if they complete new works related to SAM so that we +can include them in our next version. +" +The economic trade-offs of large language models: A case study,Kristen Howell,http://arxiv.org/pdf/2306.07402v1.pdf,2023-06-08,"['cs.cl', 'cs.ai']",2306.07402v1.pdf," Contacting customer service via chat is a common practice. Because employing +customer service agents is expensive, many companies are turning to NLP that +assists human agents by auto-generating responses that can be used directly or +with modifications. Large Language Models (LLMs) are a natural fit for this use +case; however, their efficacy must be balanced with the cost of training and +serving them. This paper assesses the practical cost and impact of LLMs for the +enterprise as a function of the usefulness of the responses that they generate. +We present a cost framework for evaluating an NLP model's utility for this use +case and apply it to a single brand as a case study in the context of an +existing agent assistance product. We compare three strategies for specializing +an LLM - prompt engineering, fine-tuning, and knowledge distillation - using +feedback from the brand's customer service agents. We find that the usability +of a model's responses can make up for a large difference in inference cost for +our case study brand, and we extrapolate our findings to the broader enterprise +space. +" +TART: A plug-and-play Transformer module for task-agnostic reasoning,Kush Bhatia,http://arxiv.org/pdf/2306.07536v1.pdf,2023-06-13,"['cs.lg', 'cs.ai', 'cs.cl']",2306.07536v1.pdf," Large language models (LLMs) exhibit in-context learning abilities which +enable the same model to perform several tasks without any task-specific +training. In contrast, traditional adaptation approaches, such as fine-tuning, +modify the underlying models for each specific task. In-context learning, +however, consistently underperforms task-specific tuning approaches even when +presented with the same examples. While most existing approaches (e.g., prompt +engineering) focus on the LLM's learned representations to patch this +performance gap, our analysis actually reveal that LLM representations contain +sufficient information to make good predictions. As such, we focus on the LLM's +reasoning abilities and demonstrate that this performance gap exists due to +their inability to perform simple probabilistic reasoning tasks. This raises an +intriguing question: Are LLMs actually capable of learning how to reason in a +task-agnostic manner? 
We answer this in the affirmative and propose TART which +generically improves an LLM's reasoning abilities using a synthetically trained +Transformer-based reasoning module. TART trains this reasoning module in a +task-agnostic manner using only synthetic logistic regression tasks and +composes it with an arbitrary real-world pre-trained model without any +additional training. With a single inference module, TART improves performance +across different model families (GPT-Neo, Pythia, BLOOM), model sizes (100M - +6B), tasks (14 NLP binary classification tasks), and even across different +modalities (audio and vision). Additionally, on the RAFT Benchmark, TART +improves GPT-Neo (125M)'s performance such that it outperforms BLOOM (176B), +and is within 4% of GPT-3 (175B). Our code and models are available at +https://github.com/HazyResearch/TART . +" +Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards,Alexander van Meekeren,http://arxiv.org/pdf/2306.11763v1.pdf,2023-06-20,['cs.cv'],2306.11763v1.pdf," Deep object detection models have achieved notable successes in recent years, +but one major obstacle remains: the requirement for a large amount of training +data. Obtaining such data is a tedious process and is mainly time consuming, +leading to the exploration of new research avenues like synthetic data +generation techniques. In this study, we explore the usability of Stable +Diffusion 2.1-base for generating synthetic datasets of apple trees for object +detection and compare it to a baseline model trained on real-world data. After +creating a dataset of realistic apple trees with prompt engineering and +utilizing a previously trained Stable Diffusion model, the custom dataset was +annotated and evaluated by training a YOLOv5m object detection model to predict +apples in a real-world apple detection dataset. YOLOv5m was chosen for its +rapid inference time and minimal hardware demands. Results demonstrate that the +model trained on generated data is slightly underperforming compared to a +baseline model trained on real-world images when evaluated on a set of +real-world images. However, these findings remain highly promising, as the +average precision difference is only 0.09 and 0.06, respectively. Qualitative +results indicate that the model can accurately predict the location of apples, +except in cases of heavy shading. These findings illustrate the potential of +synthetic data generation techniques as a viable alternative to the collection +of extensive training data for object detection models. +" +Do you still need a manual smart contract audit?,Isaac David,http://arxiv.org/pdf/2306.12338v2.pdf,2023-06-21,['cs.cr'],2306.12338v2.pdf," We investigate the feasibility of employing large language models (LLMs) for +conducting the security audit of smart contracts, a traditionally +time-consuming and costly process. Our research focuses on the optimization of +prompt engineering for enhanced security analysis, and we evaluate the +performance and accuracy of LLMs using a benchmark dataset comprising 52 +Decentralized Finance (DeFi) smart contracts that have previously been +compromised. + Our findings reveal that, when applied to vulnerable contracts, both GPT-4 +and Claude models correctly identify the vulnerability type in 40% of the +cases. However, these models also demonstrate a high false positive rate, +necessitating continued involvement from manual auditors. The LLMs tested +outperform a random model by 20% in terms of F1-score. 
+ To ensure the integrity of our study, we conduct mutation testing on five +newly developed and ostensibly secure smart contracts, into which we manually +insert two and 15 vulnerabilities each. This testing yielded a remarkable +best-case 78.7% true positive rate for the GPT-4-32k model. We tested both, +asking the models to perform a binary classification on whether a contract is +vulnerable, and a non-binary prompt. We also examined the influence of model +temperature variations and context length on the LLM's performance. + Despite the potential for many further enhancements, this work lays the +groundwork for a more efficient and economical approach to smart contract +security audits. +" +MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models,Chaoyou Fu,http://arxiv.org/pdf/2306.13394v2.pdf,2023-06-23,['cs.cv'],2306.13394v2.pdf," Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform +multimodal tasks, showing amazing emergent abilities in recent studies, such as +writing poems based on an image. However, it is difficult for these case +studies to fully reflect the performance of MLLM, lacking a comprehensive +evaluation. In this paper, we fill in this blank, presenting the first MLLM +Evaluation benchmark MME. It measures both perception and cognition abilities +on a total of 14 subtasks. In order to avoid data leakage that may arise from +direct use of public datasets for evaluation, the annotations of +instruction-answer pairs are all manually designed. The concise instruction +design allows us to fairly compare MLLMs, instead of struggling in prompt +engineering. Besides, with such an instruction, we can also easily carry out +quantitative statistics. A total of 12 advanced MLLMs are comprehensively +evaluated on our MME, which not only suggests that existing MLLMs still have a +large room for improvement, but also reveals the potential directions for the +subsequent model optimization. +" +Zero-shot Nuclei Detection via Visual-Language Pre-trained Models,Yongjian Wu,http://arxiv.org/pdf/2306.17659v1.pdf,2023-06-30,['cs.cv'],2306.17659v1.pdf," Large-scale visual-language pre-trained models (VLPM) have proven their +excellent performance in downstream object detection for natural scenes. +However, zero-shot nuclei detection on H\&E images via VLPMs remains +underexplored. The large gap between medical images and the web-originated +text-image pairs used for pre-training makes it a challenging task. In this +paper, we attempt to explore the potential of the object-level VLPM, Grounded +Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. +Concretely, an automatic prompts design pipeline is devised based on the +association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding +empirical manual prompts engineering. We further establish a self-training +framework, using the automatically designed prompts to generate the preliminary +results as pseudo labels from GLIP and refine the predicted boxes in an +iterative manner. Our method achieves a remarkable performance for label-free +nuclei detection, surpassing other comparison methods. Foremost, our work +demonstrates that the VLPM pre-trained on natural image-text pairs exhibits +astonishing potential for downstream tasks in the medical field as well. Code +will be released at https://github.com/wuyongjianCODE/VLPMNuD. 
+" +Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise Given to Students in Synthetic Dialogues,Dollaya Hirunyasiri,http://arxiv.org/pdf/2307.02018v1.pdf,2023-07-05,"['cs.cl', 'cs.ai', 'cs.hc']",2307.02018v1.pdf," Research suggests that providing specific and timely feedback to human tutors +enhances their performance. However, it presents challenges due to the +time-consuming nature of assessing tutor performance by human evaluators. Large +language models, such as the AI-chatbot ChatGPT, hold potential for offering +constructive feedback to tutors in practical settings. Nevertheless, the +accuracy of AI-generated feedback remains uncertain, with scant research +investigating the ability of models like ChatGPT to deliver effective feedback. +In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a +tutor-student setting. We use two different prompting approaches, the zero-shot +chain of thought and the few-shot chain of thought, to identify specific +components of effective praise based on five criteria. These approaches are +then compared to the results of human graders for accuracy. Our goal is to +assess the extent to which GPT-4 can accurately identify each praise criterion. +We found that both zero-shot and few-shot chain of thought approaches yield +comparable results. GPT-4 performs moderately well in identifying instances +when the tutor offers specific and immediate praise. However, GPT-4 +underperforms in identifying the tutor's ability to deliver sincere praise, +particularly in the zero-shot prompting scenario where examples of sincere +tutor praise statements were not provided. Future work will focus on enhancing +prompt engineering, developing a more general tutoring rubric, and evaluating +our method using real-life tutoring dialogues. +" +"Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions",Dawen Zhang,http://arxiv.org/pdf/2307.03941v3.pdf,2023-07-08,"['cs.cy', 'cs.ai', 'cs.cl']",2307.03941v3.pdf," The Right to be Forgotten (RTBF) was first established as the result of the +ruling of Google Spain SL, Google Inc. v AEPD, Mario Costeja Gonz\'alez, and +was later included as the Right to Erasure under the General Data Protection +Regulation (GDPR) of European Union to allow individuals the right to request +personal data be deleted by organizations. Specifically for search engines, +individuals can send requests to organizations to exclude their information +from the query results. It was a significant emergent right as the result of +the evolution of technology. With the recent development of Large Language +Models (LLMs) and their use in chatbots, LLM-enabled software systems have +become popular. But they are not excluded from the RTBF. Compared with the +indexing approach used by search engines, LLMs store, and process information +in a completely different way. This poses new challenges for compliance with +the RTBF. In this paper, we explore these challenges and provide our insights +on how to implement technical solutions for the RTBF, including the use of +differential privacy, machine unlearning, model editing, and prompt +engineering. With the rapid advancement of AI and the increasing need of +regulating this powerful technology, learning from the case of RTBF can provide +valuable lessons for technical practitioners, legal experts, organizations, and +authorities. 
+" +"Software Testing with Large Language Model: Survey, Landscape, and Vision",Junjie Wang,http://arxiv.org/pdf/2307.07221v1.pdf,2023-07-14,['cs.se'],2307.07221v1.pdf," Pre-trained large language models (LLMs) have recently emerged as a +breakthrough technology in natural language processing and artificial +intelligence, with the ability to handle large-scale datasets and exhibit +remarkable performance across a wide range of tasks. Meanwhile, software +testing is a crucial undertaking that serves as a cornerstone for ensuring the +quality and reliability of software products. As the scope and complexity of +software systems continue to grow, the need for more effective software testing +techniques becomes increasingly urgent, and making it an area ripe for +innovative approaches such as the use of LLMs. This paper provides a +comprehensive review of the utilization of LLMs in software testing. It +analyzes 52 relevant studies that have used LLMs for software testing, from +both the software testing and LLMs perspectives. The paper presents a detailed +discussion of the software testing tasks for which LLMs are commonly used, +among which test case preparation and program repair are the most +representative ones. It also analyzes the commonly used LLMs, the types of +prompt engineering that are employed, as well as the accompanied techniques +with these LLMs. It also summarizes the key challenges and potential +opportunities in this direction. This work can serve as a roadmap for future +research in this area, highlighting potential avenues for exploration, and +identifying gaps in our current understanding of the use of LLMs in software +testing. +" +The Potential and Pitfalls of using a Large Language Model such as ChatGPT or GPT-4 as a Clinical Assistant,Jingqing Zhang,http://arxiv.org/pdf/2307.08152v1.pdf,2023-07-16,['cs.cl'],2307.08152v1.pdf," Recent studies have demonstrated promising performance of ChatGPT and GPT-4 +on several medical domain tasks. However, none have assessed its performance +using a large-scale real-world electronic health record database, nor have +evaluated its utility in providing clinical diagnostic assistance for patients +across a full range of disease presentation. We performed two analyses using +ChatGPT and GPT-4, one to identify patients with specific medical diagnoses +using a real-world large electronic health record database and the other, in +providing diagnostic assistance to healthcare workers in the prospective +evaluation of hypothetical patients. Our results show that GPT-4 across disease +classification tasks with chain of thought and few-shot prompting can achieve +performance as high as 96% F1 scores. For patient assessment, GPT-4 can +accurately diagnose three out of four times. However, there were mentions of +factually incorrect statements, overlooking crucial medical findings, +recommendations for unnecessary investigations and overtreatment. These issues +coupled with privacy concerns, make these models currently inadequate for real +world clinical use. However, limited data and time needed for prompt +engineering in comparison to configuration of conventional machine learning +workflows highlight their potential for scalability across healthcare +applications. 
+" +A Lightweight Framework for High-Quality Code Generation,Mohammed Latif Siddiq,http://arxiv.org/pdf/2307.08220v1.pdf,2023-07-17,"['cs.se', 'cs.lg']",2307.08220v1.pdf," In recent years, the use of automated source code generation utilizing +transformer-based generative models has expanded, and these models can generate +functional code according to the requirements of the developers. However, +recent research revealed that these automatically generated source codes can +contain vulnerabilities and other quality issues. Despite researchers' and +practitioners' attempts to enhance code generation models, retraining and +fine-tuning large language models is time-consuming and resource-intensive. +Thus, we describe FRANC, a lightweight framework for recommending more secure +and high-quality source code derived from transformer-based code generation +models. FRANC includes a static filter to make the generated code compilable +with heuristics and a quality-aware ranker to sort the code snippets based on a +quality score. Moreover, the framework uses prompt engineering to fix +persistent quality issues. We evaluated the framework with five Python and Java +code generation models and six prompt datasets, including a newly created one +in this work (SOEval). The static filter improves 9% to 46% Java suggestions +and 10% to 43% Python suggestions regarding compilability. The average +improvement over the NDCG@10 score for the ranking system is 0.0763, and the +repairing techniques repair the highest 80% of prompts. FRANC takes, on +average, 1.98 seconds for Java; for Python, it takes 0.08 seconds. +" +"Multi-Method Self-Training: Improving Code Generation With Text, And Vice Versa",Shriyash K. Upadhyay,http://arxiv.org/pdf/2307.10633v1.pdf,2023-07-20,"['cs.cl', 'cs.lg']",2307.10633v1.pdf," Large Language Models have many methods for solving the same problem. This +introduces novel strengths (different methods may work well for different +problems) and weaknesses (it may be difficult for users to know which method to +use). In this paper, we introduce Multi-Method Self-Training (MMST), where one +method is trained on the filtered outputs of another, allowing us to augment +the strengths and ameliorate the weaknesses of each method. Using a 176B +parameter model trained on both language and code, we show that MMST can 1) +improve the less performant method (up to 30%) making the model easier to use, +2) improve the more performant method (up to 32.2%) making the model more +performant, and 3) improve the performance of related but distinct tasks (up to +10.3%) by improving the ability of the model to generate rationales. We then +conduct ablation analyses to explore why MMST works. We show that MMST +generates more data than traditional self-training, but the improvement in +performance is driven by the use of multiple methods. We also analyze +prompt-engineering and anti-correlated performance between methods as means of +making MMST more effective. We hope the evidence from our paper motivates +machine learning researchers to explore ways in which advances in language +models allow for new forms of training. +" +Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts,Mayug Maniparambil,http://arxiv.org/pdf/2307.11661v2.pdf,2023-07-21,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2307.11661v2.pdf," Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have +revolutionized visual representation learning by providing good performance on +downstream datasets. 
VLMs are 0-shot adapted to a downstream dataset by +designing prompts that are relevant to the dataset. Such prompt engineering +makes use of domain expertise and a validation dataset. Meanwhile, recent +developments in generative pretrained models like GPT-4 mean they can be used +as advanced internet search tools. They can also be manipulated to provide +visual information in any structure. In this work, we show that GPT-4 can be +used to generate text that is visually descriptive and how this can be used to +adapt CLIP to downstream tasks. We show considerable improvements in 0-shot +transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD +(~7%), SUN397 (~4.6%), and CUB (~3.3%) when compared to CLIP's default prompt. +We also design a simple few-shot adapter that learns to choose the best +possible sentences to construct generalizable classifiers that outperform the +recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized +fine-grained datasets. The code, prompts, and auxiliary text dataset is +available at https://github.com/mayug/VDT-Adapter. +" +GPT-3 Models are Few-Shot Financial Reasoners,Raul Salles de Padua,http://arxiv.org/pdf/2307.13617v2.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13617v2.pdf," Financial analysis is an important tool for evaluating company performance. +Practitioners work to answer financial questions to make profitable investment +decisions, and use advanced quantitative analyses to do so. As a result, +Financial Question Answering (QA) is a question answering task that requires +deep reasoning about numbers. Furthermore, it is unknown how well pre-trained +language models can reason in the financial domain. The current +state-of-the-art requires a retriever to collect relevant facts about the +financial question from the text and a generator to produce a valid financial +program and a final answer. However, recently large language models like GPT-3 +have achieved state-of-the-art performance on wide variety of tasks with just a +few shot examples. We run several experiments with GPT-3 and find that a +separate retrieval model and logic engine continue to be essential components +to achieving SOTA performance in this task, particularly due to the precise +nature of financial questions and the complex information stored in financial +documents. With this understanding, our refined prompt-engineering approach on +GPT-3 achieves near SOTA accuracy without any fine-tuning. +" +S3: Social-network Simulation System with Large Language Model-Empowered Agents,Chen Gao,http://arxiv.org/pdf/2307.14984v2.pdf,2023-07-27,['cs.si'],2307.14984v2.pdf," Social network simulation plays a crucial role in addressing various +challenges within social science. It offers extensive applications such as +state prediction, phenomena explanation, and policy-making support, among +others. In this work, we harness the formidable human-like capabilities +exhibited by large language models (LLMs) in sensing, reasoning, and behaving, +and utilize these qualities to construct the S$^3$ system (short for +$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to +the widely employed agent-based simulation paradigm, we employ prompt +engineering and prompt tuning techniques to ensure that the agent's behavior +closely emulates that of a genuine human within the social network. +Specifically, we simulate three pivotal aspects: emotion, attitude, and +interaction behaviors. 
By endowing the agent in the system with the ability to +perceive the informational environment and emulate human actions, we observe +the emergence of population-level phenomena, including the propagation of +information, attitudes, and emotions. We conduct an evaluation encompassing two +levels of simulation, employing real-world social network data. Encouragingly, +the results demonstrate promising accuracy. This work represents an initial +step in the realm of social network simulation empowered by LLM-based agents. +We anticipate that our endeavors will serve as a source of inspiration for the +development of simulation systems within, but not limited to, social science. +" +Flows: Building Blocks of Reasoning and Collaborating AI,Martin Josifoski,http://arxiv.org/pdf/2308.01285v1.pdf,2023-08-02,"['cs.ai', 'cs.hc']",2308.01285v1.pdf," Recent advances in artificial intelligence (AI) have produced highly capable +and controllable systems. This creates unprecedented opportunities for +structured reasoning as well as collaboration among multiple AI systems and +humans. To fully realize this potential, it is essential to develop a +principled way of designing and studying such structured interactions. For this +purpose, we introduce the conceptual framework of Flows: a systematic approach +to modeling complex interactions. Flows are self-contained building blocks of +computation, with an isolated state, communicating through a standardized +message-based interface. This modular design allows Flows to be recursively +composed into arbitrarily nested interactions, with a substantial reduction of +complexity. Crucially, any interaction can be implemented using this framework, +including prior work on AI--AI and human--AI interactions, prompt engineering +schemes, and tool augmentation. We demonstrate the potential of Flows on the +task of competitive coding, a challenging task on which even GPT-4 struggles. +Our results suggest that structured reasoning and collaboration substantially +improve generalization, with AI-only Flows adding +$21$ and human--AI Flows +adding +$54$ absolute points in terms of solve rate. To support rapid and +rigorous research, we introduce the aiFlows library. The library comes with a +repository of Flows that can be easily used, extended, and composed into novel, +more complex Flows. + The aiFlows library is available at https://github.com/epfl-dlab/aiflows. +Data and Flows for reproducing our experiments are available at +https://github.com/epfl-dlab/cc_flows. +" +Evaluating ChatGPT text-mining of clinical records for obesity monitoring,Ivo S. Fins,http://arxiv.org/pdf/2308.01666v1.pdf,2023-08-03,"['cs.ir', 'cs.cl']",2308.01666v1.pdf," Background: Veterinary clinical narratives remain a largely untapped resource +for addressing complex diseases. Here we compare the ability of a large +language model (ChatGPT) and a previously developed regular expression (RegexT) +to identify overweight body condition scores (BCS) in veterinary narratives. +Methods: BCS values were extracted from 4,415 anonymised clinical narratives +using either RegexT or by appending the narrative to a prompt sent to ChatGPT +coercing the model to return the BCS information. Data were manually reviewed +for comparison. Results: The precision of RegexT was higher (100%, 95% CI +94.81-100%) than the ChatGPT (89.3%; 95% CI82.75-93.64%). However, the recall +of ChatGPT (100%. 95% CI 96.18-100%) was considerably higher than that of +RegexT (72.6%, 95% CI 63.92-79.94%). 
Limitations: Subtle prompt engineering is +needed to improve ChatGPT output. Conclusions: Large language models create +diverse opportunities and, whilst complex, present an intuitive interface to +information but require careful implementation to avoid unpredictable errors. +" +ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP,Lu Yan,http://arxiv.org/pdf/2308.02122v2.pdf,2023-08-04,"['cs.cr', 'cs.cl']",2308.02122v2.pdf," Backdoor attacks have emerged as a prominent threat to natural language +processing (NLP) models, where the presence of specific triggers in the input +can lead poisoned models to misclassify these inputs to predetermined target +classes. Current detection mechanisms are limited by their inability to address +more covert backdoor strategies, such as style-based attacks. In this work, we +propose an innovative test-time poisoned sample detection framework that hinges +on the interpretability of model predictions, grounded in the semantic meaning +of inputs. We contend that triggers (e.g., infrequent words) are not supposed +to fundamentally alter the underlying semantic meanings of poisoned samples as +they want to stay stealthy. Based on this observation, we hypothesize that +while the model's predictions for paraphrased clean samples should remain +stable, predictions for poisoned samples should revert to their true labels +upon the mutations applied to triggers during the paraphrasing process. We +employ ChatGPT, a state-of-the-art large language model, as our paraphraser and +formulate the trigger-removal task as a prompt engineering problem. We adopt +fuzzing, a technique commonly used for unearthing software vulnerabilities, to +discover optimal paraphrase prompts that can effectively eliminate triggers +while concurrently maintaining input semantics. Experiments on 4 types of +backdoor attacks, including the subtle style backdoors, and 4 distinct datasets +demonstrate that our approach surpasses baseline methods, including STRIP, RAP, +and ONION, in precision and recall. +" +IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models,Hu Ye,http://arxiv.org/pdf/2308.06721v1.pdf,2023-08-13,"['cs.cv', 'cs.ai']",2308.06721v1.pdf," Recent years have witnessed the strong power of large text-to-image diffusion +models for the impressive generative capability to create high-fidelity images. +However, it is very tricky to generate desired images using only text prompt as +it often involves complex prompt engineering. An alternative to text prompt is +image prompt, as the saying goes: ""an image is worth a thousand words"". +Although existing methods of direct fine-tuning from pretrained models are +effective, they require large computing resources and are not compatible with +other base models, text prompt, and structural controls. In this paper, we +present IP-Adapter, an effective and lightweight adapter to achieve image +prompt capability for the pretrained text-to-image diffusion models. The key +design of our IP-Adapter is decoupled cross-attention mechanism that separates +cross-attention layers for text features and image features. Despite the +simplicity of our method, an IP-Adapter with only 22M parameters can achieve +comparable or even better performance to a fully fine-tuned image prompt model. 
+As we freeze the pretrained diffusion model, the proposed IP-Adapter can be +generalized not only to other custom models fine-tuned from the same base +model, but also to controllable generation using existing controllable tools. +With the benefit of the decoupled cross-attention strategy, the image prompt +can also work well with the text prompt to achieve multimodal image generation. +The project page is available at \url{https://ip-adapter.github.io}. +" +LogPrompt: Prompt Engineering Towards Zero-Shot and Interpretable Log Analysis,Yilun Liu,http://arxiv.org/pdf/2308.07610v1.pdf,2023-08-15,"['cs.se', 'cs.cl']",2308.07610v1.pdf," Automated log analysis is crucial in modern software-intensive systems for +ensuring reliability and resilience throughout software maintenance and +engineering life cycles. Existing methods perform tasks such as log parsing and +log anomaly detection by providing a single prediction value without +interpretation. However, given the increasing volume of system events, the +limited interpretability of analysis results hinders analysts' trust and their +ability to take appropriate actions. Moreover, these methods require +substantial in-domain training data, and their performance declines sharply (by +up to 62.5%) in online scenarios involving unseen logs from new domains, a +common occurrence due to rapid software updates. In this paper, we propose +LogPrompt, a novel zero-shot and interpretable log analysis approach. LogPrompt +employs large language models (LLMs) to perform zero-shot log analysis tasks +via a suite of advanced prompt strategies tailored for log tasks, which +enhances LLMs' performance by up to 107.5% compared with simple prompts. +Experiments on nine publicly available evaluation datasets across two tasks +demonstrate that LogPrompt, despite using no training data, outperforms +existing approaches trained on thousands of logs by up to around 50%. We also +conduct a human evaluation of LogPrompt's interpretability, with six +practitioners possessing over 10 years of experience, who highly rated the +generated content in terms of usefulness and readability (averagely 4.42/5). +LogPrompt also exhibits remarkable compatibility with open-source and +smaller-scale LLMs, making it flexible for practical deployment. +" +Transforming Sentiment Analysis in the Financial Domain with ChatGPT,Georgios Fatouros,http://arxiv.org/pdf/2308.07935v1.pdf,2023-08-13,"['cs.cl', 'cs.ai', 'cs.ce', 'cs.ir', '68t01, 68t50, 91b28, 91b30']",2308.07935v1.pdf," Financial sentiment analysis plays a crucial role in decoding market trends +and guiding strategic trading decisions. Despite the deployment of advanced +deep learning techniques and language models to refine sentiment analysis in +finance, this study breaks new ground by investigating the potential of large +language models, particularly ChatGPT 3.5, in financial sentiment analysis, +with a strong emphasis on the foreign exchange market (forex). Employing a +zero-shot prompting approach, we examine multiple ChatGPT prompts on a +meticulously curated dataset of forex-related news headlines, measuring +performance using metrics such as precision, recall, f1-score, and Mean +Absolute Error (MAE) of the sentiment class. Additionally, we probe the +correlation between predicted sentiment and market returns as an additional +evaluation approach. 
ChatGPT, compared to FinBERT, a well-established sentiment +analysis model for financial texts, exhibited approximately 35\% enhanced +performance in sentiment classification and a 36\% higher correlation with +market returns. By underlining the significance of prompt engineering, +particularly in zero-shot contexts, this study spotlights ChatGPT's potential +to substantially boost sentiment analysis in financial applications. By sharing +the utilized dataset, our intention is to stimulate further research and +advancements in the field of financial services. +" +ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based Healthcare Decision Support using ChatGPT,Fatemeh Nazary,http://arxiv.org/pdf/2308.09731v1.pdf,2023-08-17,"['cs.ai', 'cs.cl', 'cs.lg']",2308.09731v1.pdf," This study presents an innovative approach to the application of large +language models (LLMs) in clinical decision-making, focusing on OpenAI's +ChatGPT. Our approach introduces the use of contextual prompts-strategically +designed to include task description, feature description, and crucially, +integration of domain knowledge-for high-quality binary classification tasks +even in data-scarce scenarios. The novelty of our work lies in the utilization +of domain knowledge, obtained from high-performing interpretable ML models, and +its seamless incorporation into prompt design. By viewing these ML models as +medical experts, we extract key insights on feature importance to aid in +decision-making processes. This interplay of domain knowledge and AI holds +significant promise in creating a more insightful diagnostic tool. + Additionally, our research explores the dynamics of zero-shot and few-shot +prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT +with traditional supervised ML models in different data conditions, we aim to +provide insights into the effectiveness of prompt engineering strategies under +varied data availability. In essence, this paper bridges the gap between AI and +healthcare, proposing a novel methodology for LLMs application in clinical +decision support systems. It highlights the transformative potential of +effective prompt design, domain knowledge integration, and flexible learning +approaches in enhancing automated decision-making. +" +Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis,Oscar J. Romero,http://arxiv.org/pdf/2308.09830v3.pdf,2023-08-18,['cs.ai'],2308.09830v3.pdf," This paper explores the integration of two AI subdisciplines employed in the +development of artificial agents that exhibit intelligent behavior: Large +Language Models (LLMs) and Cognitive Architectures (CAs). We present three +integration approaches, each grounded in theoretical models and supported by +preliminary empirical evidence. The modular approach, which introduces four +models with varying degrees of integration, makes use of chain-of-thought +prompting, and draws inspiration from augmented LLMs, the Common Model of +Cognition, and the simulation theory of cognition. The agency approach, +motivated by the Society of Mind theory and the LIDA cognitive architecture, +proposes the formation of agent collections that interact at micro and macro +cognitive levels, driven by either LLMs or symbolic components. 
The +neuro-symbolic approach, which takes inspiration from the CLARION cognitive +architecture, proposes a model where bottom-up learning extracts symbolic +representations from an LLM layer and top-down guidance utilizes symbolic +representations to direct prompt engineering in the LLM layer. These approaches +aim to harness the strengths of both LLMs and CAs, while mitigating their +weaknesses, thereby advancing the development of more robust AI systems. We +discuss the tradeoffs and challenges associated with each approach. +" +Manipulating Embeddings of Stable Diffusion Prompts,Niklas Deckers,http://arxiv.org/pdf/2308.12059v1.pdf,2023-08-23,"['cs.cv', 'cs.lg']",2308.12059v1.pdf," Generative text-to-image models such as Stable Diffusion allow users to +generate images based on a textual description, the prompt. Changing the prompt +is still the primary means for the user to change a generated image as desired. +However, changing the image by reformulating the prompt remains a difficult +process of trial and error, which has led to the emergence of prompt +engineering as a new field of research. We propose and analyze methods to +change the embedding of a prompt directly instead of the prompt text. It allows +for more fine-grained and targeted control that takes into account user +intentions. Our approach treats the generative text-to-image model as a +continuous function and passes gradients between the image space and the prompt +embedding space. By addressing different user interaction problems, we can +apply this idea in three scenarios: (1) Optimization of a metric defined in +image space that could measure, for example, image style. (2) Assistance of +users in creative tasks by enabling them to navigate the image space along a +selection of directions of ""near"" prompt embeddings. (3) Changing the embedding +of the prompt to include information that the user has seen in a particular +seed but finds difficult to describe in the prompt. Our experiments demonstrate +the feasibility of the described methods. +" +Large Language Models in Fault Localisation,Yonghao Wu,http://arxiv.org/pdf/2308.15276v3.pdf,2023-08-29,['cs.se'],2308.15276v3.pdf," Large Language Models (LLMs) have shown promise in multiple software +engineering tasks including code generation, program repair, code +summarisation, and test generation. Fault localisation is instrumental in +enabling automated debugging and repair of programs and was prominently +featured as a highlight during the launch event of ChatGPT-4. Nevertheless, the +performance of LLMs compared to state-of-the-art methods, as well as the impact +of prompt design and context length on their efficacy, remains unclear. To fill +this gap, this paper presents an in-depth investigation into the capability of +ChatGPT-3.5 and ChatGPT-4, the two state-of-the-art LLMs, on fault +localisation. Using the widely-adopted large-scale Defects4J dataset, we +compare the two LLMs with the existing fault localisation techniques. We also +investigate the consistency of LLMs in fault localisation, as well as how +prompt engineering and the length of code context affect the fault localisation +effectiveness. + Our findings demonstrate that within function-level context, ChatGPT-4 +outperforms all the existing fault localisation methods. Additional error logs +can further improve ChatGPT models' localisation accuracy and consistency, with +an average 46.9% higher accuracy over the state-of-the-art baseline SmartFL on +the Defects4J dataset in terms of TOP-1 metric. 
However, when the code context
of the Defects4J dataset expands to the class-level, ChatGPT-4's performance
suffers a significant drop, with 49.9% lower accuracy than SmartFL under TOP-1
metric. These observations indicate that although ChatGPT can effectively
localise faults under specific conditions, limitations are evident. Further
research is needed to fully harness the potential of LLMs like ChatGPT for
practical fault localisation applications.
"
Leveraging Large Language Models for Exploiting ASR Uncertainty,Pranay Dighe,http://arxiv.org/pdf/2309.04842v2.pdf,2023-09-09,"['cs.cl', 'cs.hc', 'cs.sd', 'eess.as']",2309.04842v2.pdf," While large language models excel in a variety of natural language processing
(NLP) tasks, to perform well on spoken language understanding (SLU) tasks, they
must either rely on off-the-shelf automatic speech recognition (ASR) systems
for transcription, or be equipped with an in-built speech modality. This work
focuses on the former scenario, where LLM's accuracy on SLU tasks is
constrained by the accuracy of a fixed ASR system on the spoken input.
Specifically, we tackle the speech-intent classification task, where a high
word-error-rate can limit the LLM's ability to understand the spoken intent.
Instead of chasing a high accuracy by designing complex or specialized
architectures regardless of deployment costs, we seek to answer how far we can
go without substantially changing the underlying ASR and LLM, which can
potentially be shared by multiple unrelated tasks. To this end, we propose
prompting the LLM with an n-best list of ASR hypotheses instead of only the
error-prone 1-best hypothesis. We explore prompt-engineering to explain the
concept of n-best lists to the LLM; followed by the finetuning of Low-Rank
Adapters on the downstream tasks. Our approach using n-best lists proves to be
effective on a device-directed speech detection task as well as on a keyword
spotting task, where systems using n-best list prompts outperform those using
1-best ASR hypothesis; thus paving the way for an efficient method to exploit
ASR uncertainty via LLMs for speech-based applications.
"
Unveiling the potential of large language models in generating semantic and cross-language clones,Palash R. Roy,http://arxiv.org/pdf/2309.06424v1.pdf,2023-09-12,"['cs.se', 'cs.ai', 'cs.lg']",2309.06424v1.pdf," Semantic and Cross-language code clone generation may be useful for code
reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has
potential in such clone generation as GPT is used for text generation. When
developers copy/paste codes from Stack Overflow (SO) or within a system, there
might be inconsistent changes leading to unexpected behaviours. Similarly, if
someone possesses a code snippet in a particular programming language but seeks
equivalent functionality in a different language, a semantic cross-language
code clone generation approach could provide valuable assistance. In this
study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3
model could help generate semantic and cross-language clone variants for a
given fragment. We have compiled a diverse set of code fragments and assessed
GPT-3's performance in generating code variants. Through extensive
experimentation and analysis, where 9 judges spent 158 hours to validate, we
investigate the model's ability to produce accurate and semantically correct
variants. 
Our findings shed light on GPT-3's strengths in code generation, +offering insights into the potential applications and challenges of using +advanced language models in software development. Our quantitative analysis +yields compelling results. In the realm of semantic clones, GPT-3 attains an +impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot +prompt engineering. Furthermore, the model shines in transcending linguistic +confines, boasting an exceptional 91.25% accuracy in generating cross-language +clones +" +Is GPT4 a Good Trader?,Bingzhe Wu,http://arxiv.org/pdf/2309.10982v1.pdf,2023-09-20,['cs.ai'],2309.10982v1.pdf," Recently, large language models (LLMs), particularly GPT-4, have demonstrated +significant capabilities in various planning and reasoning tasks +\cite{cheng2023gpt4,bubeck2023sparks}. Motivated by these advancements, there +has been a surge of interest among researchers to harness the capabilities of +GPT-4 for the automated design of quantitative factors that do not overlap with +existing factor libraries, with an aspiration to achieve alpha returns +\cite{webpagequant}. In contrast to these work, this study aims to examine the +fidelity of GPT-4's comprehension of classic trading theories and its +proficiency in applying its code interpreter abilities to real-world trading +data analysis. Such an exploration is instrumental in discerning whether the +underlying logic GPT-4 employs for trading is intrinsically reliable. +Furthermore, given the acknowledged interpretative latitude inherent in most +trading theories, we seek to distill more precise methodologies of deploying +these theories from GPT-4's analytical process, potentially offering invaluable +insights to human traders. + To achieve this objective, we selected daily candlestick (K-line) data from +specific periods for certain assets, such as the Shanghai Stock Index. Through +meticulous prompt engineering, we guided GPT-4 to analyze the technical +structures embedded within this data, based on specific theories like the +Elliott Wave Theory. We then subjected its analytical output to manual +evaluation, assessing its interpretative depth and accuracy vis-\`a-vis these +trading theories from multiple dimensions. The results and findings from this +study could pave the way for a synergistic amalgamation of human expertise and +AI-driven insights in the realm of trading. +" +AI-Copilot for Business Optimisation: A Framework and A Case Study in Production Scheduling,Pivithuru Thejan Amarasinghe,http://arxiv.org/pdf/2309.13218v3.pdf,2023-09-22,['cs.ai'],2309.13218v3.pdf," Business optimisation refers to the process of finding and implementing +efficient and cost-effective means of operation to bring a competitive +advantage for businesses. Synthesizing problem formulations is an integral part +of business optimisation, which relies on human expertise to construct problem +formulations using optimisation languages. Interestingly, with advancements in +Large Language Models (LLMs), the human expertise needed in problem formulation +can be minimized. However, developing an LLM for problem formulation is +challenging, due to training data, token limitations, and lack of appropriate +performance metrics. For the requirement of training data, recent attention has +been directed towards fine-tuning pre-trained LLMs for downstream tasks rather +than training an LLM from scratch for a specific task. 
In this paper, we adopt +an LLM fine-tuning approach and propose an AI-Copilot for business optimisation +problem formulation. For token limitations, we introduce modularization and +prompt engineering techniques to synthesize complex problem formulations as +modules that fit into the token limits of LLMs. Additionally, we design +performance evaluation metrics that are better suited for assessing the +accuracy and quality of problem formulations. The experiment results +demonstrate that with this approach we can synthesize complex and large problem +formulations for a typical business optimisation problem in production +scheduling. +" +An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems,Andreas Metzger,http://arxiv.org/pdf/2309.14391v1.pdf,2023-09-25,"['cs.lg', 'cs.ai', 'cs.cl']",2309.14391v1.pdf," Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the +open-world assumption in service-oriented systems. Deep RL was successfully +applied to problems such as dynamic service composition, job scheduling, and +offloading, as well as service adaptation. While Deep RL offers many benefits, +understanding the decision-making of Deep RL is challenging because its learned +decision-making policy essentially appears as a black box. Yet, understanding +the decision-making of Deep RL is key to help service developers perform +debugging, support service providers to comply with relevant legal frameworks, +and facilitate service users to build trust. We introduce Chat4XAI to +facilitate the understanding of the decision-making of Deep RL by providing +natural-language explanations. Compared with visual explanations, the reported +benefits of natural-language explanations include better understandability for +non-technical users, increased user acceptance and trust, as well as more +efficient explanations. Chat4XAI leverages modern AI chatbot technology and +dedicated prompt engineering. Compared to earlier work on natural-language +explanations using classical software-based dialogue systems, using an AI +chatbot eliminates the need for eliciting and defining potential questions and +answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API +and evaluate the fidelity and stability of its explanations using an adaptive +service exemplar. +" +Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering,Han Zhou,http://arxiv.org/pdf/2309.17249v1.pdf,2023-09-29,"['cs.cl', 'cs.ai', 'cs.lg']",2309.17249v1.pdf," Prompting and in-context learning (ICL) have become efficient learning +paradigms for large language models (LLMs). However, LLMs suffer from prompt +brittleness and various bias factors in the prompt, including but not limited +to the formatting, the choice verbalizers, and the ICL examples. To address +this problem that results in unexpected performance degradation, calibration +methods have been developed to mitigate the effects of these biases while +recovering LLM performance. In this work, we first conduct a systematic +analysis of the existing calibration methods, where we both provide a unified +view and reveal the failure cases. Inspired by these analyses, we propose Batch +Calibration (BC), a simple yet intuitive method that controls the contextual +bias from the batched input, unifies various prior approaches, and effectively +addresses the aforementioned issues. BC is zero-shot, inference-only, and +incurs negligible additional costs. 
In the few-shot setup, we further extend BC +to allow it to learn the contextual bias from labeled data. We validate the +effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate +state-of-the-art performance over previous calibration baselines across more +than 10 natural language understanding and image classification tasks. +" +Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4,Jiaxian Guo,http://arxiv.org/pdf/2309.17277v2.pdf,2023-09-29,['cs.ai'],2309.17277v2.pdf," Unlike perfect information games, where all elements are known to every +player, imperfect information games emulate the real-world complexities of +decision-making under uncertain or incomplete information. GPT-4, the recent +breakthrough in large language models (LLMs) trained on massive passive data, +is notable for its knowledge retrieval and reasoning abilities. This paper +delves into the applicability of GPT-4's learned knowledge for imperfect +information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an +innovative agent that leverages GPT-4's capabilities for performing in +imperfect information games. With proper prompt engineering to achieve +different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable +adaptability across a range of imperfect information card games. Importantly, +GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it +can understand others and intentionally impact others' behavior. Leveraging +this, we design a planning strategy that enables GPT-4 to competently play +against different opponents, adapting its gameplay style as needed, while +requiring only the game rules and descriptions of observations as input. In the +experiments, we qualitatively showcase the capabilities of Suspicion-Agent +across three different imperfect information games and then quantitatively +evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can +potentially outperform traditional algorithms designed for imperfect +information games, without any specialized training or examples. In order to +encourage and foster deeper insights within the community, we make our +game-related data publicly available. +" +Investigating the Limitation of CLIP Models: The Worst-Performing Categories,Jie-Jing Shao,http://arxiv.org/pdf/2310.03324v1.pdf,2023-10-05,"['cs.cv', 'cs.lg']",2310.03324v1.pdf," Contrastive Language-Image Pre-training (CLIP) provides a foundation model by +integrating natural language into visual concepts, enabling zero-shot +recognition on downstream tasks. It is usually expected that satisfactory +overall accuracy can be achieved across numerous domains through well-designed +textual prompts. However, we found that their performance in the worst +categories is significantly inferior to the overall performance. For example, +on ImageNet, there are a total of 10 categories with class-wise accuracy as low +as 0\%, even though the overall performance has achieved 64.1\%. This +phenomenon reveals the potential risks associated with using CLIP models, +particularly in risk-sensitive applications where specific categories hold +significant importance. To address this issue, we investigate the alignment +between the two modalities in the CLIP model and propose the Class-wise +Matching Margin (\cmm) to measure the inference confusion. \cmm\ can +effectively identify the worst-performing categories and estimate the potential +performance of the candidate prompts. 
We further query large language models to +enrich descriptions of worst-performing categories and build a weighted +ensemble to highlight the efficient prompts. Experimental results clearly +verify the effectiveness of our proposal, where the accuracy on the worst-10 +categories on ImageNet is boosted to 5.2\%, without manual prompt engineering, +laborious optimization, or access to labeled validation data. +" +Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models,Junchi Yu,http://arxiv.org/pdf/2310.03965v2.pdf,2023-10-06,"['cs.ai', 'cs.cl']",2310.03965v2.pdf," Large Language Models (LLMs) have achieved remarkable success in reasoning +tasks with the development of prompting methods. However, existing prompting +approaches cannot reuse insights of solving similar problems and suffer from +accumulated errors in multi-step reasoning, since they prompt LLMs to reason +\textit{from scratch}. To address these issues, we propose +\textbf{\textit{Thought Propagation} (TP)}, which explores the analogous +problems and leverages their solutions to enhance the complex reasoning ability +of LLMs. These analogous problems are related to the input one, with reusable +solutions and problem-solving strategies. Thus, it is promising to propagate +insights of solving previous analogous problems to inspire new problem-solving. +To achieve this, TP first prompts LLMs to propose and solve a set of analogous +problems that are related to the input one. Then, TP reuses the results of +analogous problems to directly yield a new solution or derive a +knowledge-intensive plan for execution to amend the initial solution obtained +from scratch. TP is compatible with existing prompting approaches, allowing +plug-and-play generalization and enhancement in a wide range of tasks without +much labor in task-specific prompt engineering. Experiments across three +challenging tasks demonstrate TP enjoys a substantial improvement over the +baselines by an average of 12\% absolute increase in finding the optimal +solutions in Shortest-path Reasoning, 13\% improvement of human preference in +Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent +Planning. +" +JVNV: A Corpus of Japanese Emotional Speech with Verbal Content and Nonverbal Expressions,Detai Xin,http://arxiv.org/pdf/2310.06072v1.pdf,2023-10-09,"['cs.sd', 'eess.as']",2310.06072v1.pdf," We present the JVNV, a Japanese emotional speech corpus with verbal content +and nonverbal vocalizations whose scripts are generated by a large-scale +language model. Existing emotional speech corpora lack not only proper +emotional scripts but also nonverbal vocalizations (NVs) that are essential +expressions in spoken language to express emotions. We propose an automatic +script generation method to produce emotional scripts by providing seed words +with sentiment polarity and phrases of nonverbal vocalizations to ChatGPT using +prompt engineering. We select 514 scripts with balanced phoneme coverage from +the generated candidate scripts with the assistance of emotion confidence +scores and language fluency scores. We demonstrate the effectiveness of JVNV by +showing that JVNV has better phoneme coverage and emotion recognizability than +previous Japanese emotional speech corpora. We then benchmark JVNV on emotional +text-to-speech synthesis using discrete codes to represent NVs. 
We show that +there still exists a gap between the performance of synthesizing read-aloud +speech and emotional speech, and adding NVs in the speech makes the task even +harder, which brings new challenges for this task and makes JVNV a valuable +resource for relevant works in the future. To our best knowledge, JVNV is the +first speech corpus that generates scripts automatically using large language +models. +" +Large Language Model-Empowered Agents for Simulating Macroeconomic Activities,Nian Li,http://arxiv.org/pdf/2310.10436v1.pdf,2023-10-16,['cs.ai'],2310.10436v1.pdf," The advent of the Web has brought about a paradigm shift in traditional +economics, particularly in the digital economy era, enabling the precise +recording and analysis of individual economic behavior. This has led to a +growing emphasis on data-driven modeling in macroeconomics. In macroeconomic +research, Agent-based modeling (ABM) emerged as an alternative, evolving +through rule-based agents, machine learning-enhanced decision-making, and, more +recently, advanced AI agents. However, the existing works are suffering from +three main challenges when endowing agents with human-like decision-making, +including agent heterogeneity, the influence of macroeconomic trends, and +multifaceted economic factors. Large language models (LLMs) have recently +gained prominence in offering autonomous human-like characteristics. Therefore, +leveraging LLMs in macroeconomic simulation presents an opportunity to overcome +traditional limitations. In this work, we take an early step in introducing a +novel approach that leverages LLMs in macroeconomic simulation. We design +prompt-engineering-driven LLM agents to exhibit human-like decision-making and +adaptability in the economic environment, with the abilities of perception, +reflection, and decision-making to address the abovementioned challenges. +Simulation experiments on macroeconomic activities show that LLM-empowered +agents can make realistic work and consumption decisions and emerge more +reasonable macroeconomic phenomena than existing rule-based or AI agents. Our +work demonstrates the promising potential to simulate macroeconomics based on +LLM and its human-like characteristics. +" +Large Language Model for Multi-objective Evolutionary Optimization,Fei Liu,http://arxiv.org/pdf/2310.12541v2.pdf,2023-10-19,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.et']",2310.12541v2.pdf," Multiobjective evolutionary algorithms (MOEAs) are major methods for solving +multiobjective optimization problems (MOPs). Many MOEAs have been proposed in +the past decades, of which the search operators need a carefully handcrafted +design with domain knowledge. Recently, some attempts have been made to replace +the manually designed operators in MOEAs with learning-based operators (e.g., +neural network models). However, much effort is still required for designing +and training such models, and the learned operators might not generalize well +on new problems. To tackle the above challenges, this work investigates a novel +approach that leverages the powerful large language model (LLM) to design MOEA +operators. With proper prompt engineering, we successfully let a general LLM +serve as a black-box search operator for decomposition-based MOEA (MOEA/D) in a +zero-shot manner. In addition, by learning from the LLM behavior, we further +design an explicit white-box operator with randomness and propose a new version +of decomposition-based MOEA, termed MOEA/D-LO. 
Experimental studies on +different test benchmarks show that our proposed method can achieve competitive +performance with widely used MOEAs. It is also promising to see the operator +only learned from a few instances can have robust generalization performance on +unseen problems with quite different patterns and settings. The results reveal +the potential benefits of using pre-trained LLMs in the design of MOEAs. +" +Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning,Juan Rocamonde,http://arxiv.org/pdf/2310.12921v1.pdf,2023-10-19,"['cs.lg', 'cs.ai']",2310.12921v1.pdf," Reinforcement learning (RL) requires either manually specifying a reward +function, which is often infeasible, or learning a reward model from a large +amount of human feedback, which is often very expensive. We study a more +sample-efficient alternative: using pretrained vision-language models (VLMs) as +zero-shot reward models (RMs) to specify tasks via natural language. We propose +a natural and general approach to using VLMs as reward models, which we call +VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn +complex tasks without a manually specified reward function, such as kneeling, +doing the splits, and sitting in a lotus position. For each of these tasks, we +only provide a single sentence text prompt describing the desired task with +minimal prompt engineering. We provide videos of the trained agents at: +https://sites.google.com/view/vlm-rm. We can improve performance by providing a +second ``baseline'' prompt and projecting out parts of the CLIP embedding space +irrelevant to distinguish between goal and baseline. Further, we find a strong +scaling effect for VLM-RMs: larger VLMs trained with more compute and data are +better reward models. The failure modes of VLM-RMs we encountered are all +related to known capability limitations of current VLMs, such as limited +spatial reasoning ability or visually unrealistic environments that are far +off-distribution for the VLM. We find that VLM-RMs are remarkably robust as +long as the VLM is large enough. This suggests that future VLMs will become +more and more useful reward models for a wide range of RL applications. +" +Enhancing Zero-Shot Crypto Sentiment with Fine-tuned Language Model and Prompt Engineering,Rahman S M Wahidur,http://arxiv.org/pdf/2310.13226v1.pdf,2023-10-20,['cs.cl'],2310.13226v1.pdf," Blockchain technology has revolutionized the financial landscape, with +cryptocurrencies gaining widespread adoption for their decentralized and +transparent nature. As the sentiment expressed on social media platforms can +significantly influence cryptocurrency discussions and market movements, +sentiment analysis has emerged as a crucial tool for understanding public +opinion and predicting market trends. Motivated by the aim to enhance sentiment +analysis accuracy in the cryptocurrency domain, this paper investigates +fine-tuning techniques on large language models. This paper also investigates +the efficacy of supervised fine-tuning and instruction-based fine-tuning on +large language models for unseen tasks. Experimental results demonstrate a +significant average zero-shot performance gain of 40% after fine-tuning, +highlighting the potential of this technique in optimizing pre-trained language +model efficiency. 
Additionally, the impact of instruction tuning on models of +varying scales is examined, revealing that larger models benefit from +instruction tuning, achieving the highest average accuracy score of 75.16%. In +contrast, smaller-scale models may experience reduced generalization due to the +complete utilization of model capacity. To gain deeper insight about how +instruction works with these language models, this paper presents an +experimental investigation into the response of an instruction-based model +under different instruction tuning setups. The investigation demonstrates that +the model achieves an average accuracy score of 72.38% for short and simple +instructions. This performance significantly outperforms its accuracy under +long and complex instructions by over 12%, thereby effectively highlighting the +profound significance of instruction characteristics in maximizing model +performance. +" +Can LLMs Grade Short-answer Reading Comprehension Questions : Foundational Literacy Assessment in LMICs,Owen Henkel,http://arxiv.org/pdf/2310.18373v1.pdf,2023-10-26,"['cs.cl', 'cs.ai']",2310.18373v1.pdf," This paper presents emerging evidence of using generative large language +models (i.e., GPT-4) to reliably evaluate short-answer reading comprehension +questions. Specifically, we explore how various configurations of generative +(LLMs) are able to evaluate student responses from a new dataset, drawn from a +battery of reading assessments conducted with over 150 students in Ghana. As +this dataset is novel and hence not used in training runs of GPT, it offers an +opportunity to test for domain shift and evaluate the generalizability of +generative LLMs, which are predominantly designed and trained on data from +high-income North American countries. We found that GPT-4, with minimal prompt +engineering performed extremely well on evaluating the novel dataset (Quadratic +Weighted Kappa 0.923, F1 0.88), substantially outperforming transfer-learning +based approaches, and even exceeding expert human raters (Quadratic Weighted +Kappa 0.915, F1 0.87). To the best of our knowledge, our work is the first to +empirically evaluate the performance of generative LLMs on short-answer reading +comprehension questions, using real student data, and suggests that generative +LLMs have the potential to reliably evaluate foundational literacy. Currently +the assessment of formative literacy and numeracy is infrequent in many low and +middle-income countries (LMICs) due to the cost and operational complexities of +conducting them at scale. Automating the grading process for reading assessment +could enable wider usage, and in turn improve decision-making regarding +curricula, school management, and teaching practice at the classroom level. +Importantly, in contrast transfer learning based approaches, generative LLMs +generalize well and the technical barriers to their use are low, making them +more feasible to implement and scale in lower resource educational contexts. +" +Promise:Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models,Hao Li,http://arxiv.org/pdf/2310.19721v2.pdf,2023-10-30,"['eess.iv', 'cs.cv']",2310.19721v2.pdf," To address prevalent issues in medical imaging, such as data acquisition +challenges and label availability, transfer learning from natural to medical +image domains serves as a viable strategy to produce reliable segmentation +results. 
However, several existing barriers between domains need to be broken
down, including addressing contrast discrepancies, managing anatomical
variability, and adapting 2D pretrained models for 3D segmentation tasks. In
this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation
model using only a single point prompt to leverage knowledge from a pretrained
2D image foundation model. In particular, we use the pretrained vision
transformer from the Segment Anything Model (SAM) and integrate lightweight
adapters to extract depth-related (3D) spatial context without updating the
pretrained weights. For robust results, a hybrid network with complementary
encoders is designed, and a boundary-aware loss is proposed to achieve precise
boundaries. We evaluate our model on two public datasets for colon and pancreas
tumor segmentations, respectively. Compared to the state-of-the-art
segmentation methods with and without prompt engineering, our proposed method
achieves superior performance. The code is publicly available at
https://github.com/MedICL-VU/ProMISe.
"
Making Large Language Models Better Data Creators,Dong-Ho Lee,http://arxiv.org/pdf/2310.20111v1.pdf,2023-10-31,['cs.cl'],2310.20111v1.pdf," Although large language models (LLMs) have advanced the state-of-the-art in
NLP significantly, deploying them for downstream applications is still
challenging due to cost, responsiveness, control, or concerns around privacy
and security. As such, trainable models are still the preferred option in some
cases. However, these models still require human-labeled data for optimal
performance, which is expensive and time-consuming to obtain. In order to
address this issue, several techniques to reduce human effort involve labeling
or generating data using LLMs. Although these methods are effective for certain
applications, in practice they encounter difficulties in real-world scenarios.
Labeling data requires careful data selection, while generating data
necessitates task-specific prompt engineering. In this paper, we propose a
unified data creation pipeline that requires only a single formatting example,
and which is applicable to a broad range of tasks, including traditionally
problematic ones with semantically devoid label spaces. In our experiments we
demonstrate that instruction-following LLMs are highly cost-effective data
creators, and that models trained with these data exhibit performance better
than those trained with human-labeled data (by up to 17.5%) on
out-of-distribution evaluation, while maintaining comparable performance on
in-distribution tasks. These results have important implications for the
robustness of NLP systems deployed in the real world.
"
VisPercep: A Vision-Language Approach to Enhance Visual Perception for People with Blindness and Low Vision,Yu Hao,http://arxiv.org/pdf/2310.20225v1.pdf,2023-10-31,"['cs.cv', 'cs.ai']",2310.20225v1.pdf," People with blindness and low vision (pBLV) encounter substantial challenges
when it comes to comprehensive scene recognition and precise object
identification in unfamiliar environments. Additionally, due to the vision
loss, pBLV have difficulty in accessing and identifying potential tripping
hazards on their own. In this paper, we present a pioneering approach that
leverages a large vision-language model to enhance visual perception for pBLV,
offering detailed and comprehensive descriptions of the surrounding
environments and providing warnings about the potential risks. 
Our method +begins by leveraging a large image tagging model (i.e., Recognize Anything +(RAM)) to identify all common objects present in the captured images. The +recognition results and user query are then integrated into a prompt, tailored +specifically for pBLV using prompt engineering. By combining the prompt and +input image, a large vision-language model (i.e., InstructBLIP) generates +detailed and comprehensive descriptions of the environment and identifies +potential risks in the environment by analyzing the environmental objects and +scenes, relevant to the prompt. We evaluate our approach through experiments +conducted on both indoor and outdoor datasets. Our results demonstrate that our +method is able to recognize objects accurately and provide insightful +descriptions and analysis of the environment for pBLV. +" +BigBIO: A Framework for Data-Centric Biomedical Natural Language Processing,Jason Alan Fries,http://arxiv.org/pdf/2206.15076v1.pdf,2022-06-30,['cs.cl'],2206.15076v1.pdf," Training and evaluating language models increasingly requires the +construction of meta-datasets --diverse collections of curated data with clear +provenance. Natural language prompting has recently lead to improved zero-shot +generalization by transforming existing, supervised datasets into a diversity +of novel pretraining tasks, highlighting the benefits of meta-dataset curation. +While successful in general-domain text, translating these data-centric +approaches to biomedical language modeling remains challenging, as labeled +biomedical datasets are significantly underrepresented in popular data hubs. To +address this challenge, we introduce BigBIO a community library of 126+ +biomedical NLP datasets, currently covering 12 task categories and 10+ +languages. BigBIO facilitates reproducible meta-dataset curation via +programmatic access to datasets and their metadata, and is compatible with +current platforms for prompt engineering and end-to-end few/zero shot language +model evaluation. We discuss our process for task schema harmonization, data +auditing, contribution guidelines, and outline two illustrative use cases: +zero-shot evaluation of biomedical prompts and large-scale, multi-task +learning. BigBIO is an ongoing community effort and is available at +https://github.com/bigscience-workshop/biomedical +" +GPT Takes the Bar Exam,Michael Bommarito II,http://arxiv.org/pdf/2212.14402v1.pdf,2022-12-29,"['cs.cl', 'cs.ai', 'cs.lg']",2212.14402v1.pdf," Nearly all jurisdictions in the United States require a professional license +exam, commonly referred to as ""the Bar Exam,"" as a precondition for law +practice. To even sit for the exam, most jurisdictions require that an +applicant completes at least seven years of post-secondary education, including +three years at an accredited law school. In addition, most test-takers also +undergo weeks to months of further, exam-specific preparation. Despite this +significant investment of time and capital, approximately one in five +test-takers still score under the rate required to pass the exam on their first +try. In the face of a complex task that requires such depth of knowledge, what, +then, should we expect of the state of the art in ""AI?"" In this research, we +document our experimental evaluation of the performance of OpenAI's +`text-davinci-003` model, often-referred to as GPT-3.5, on the multistate +multiple choice (MBE) section of the exam. 
While we find no benefit in +fine-tuning over GPT-3.5's zero-shot performance at the scale of our training +data, we do find that hyperparameter optimization and prompt engineering +positively impacted GPT-3.5's zero-shot performance. For best prompt and +parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete +NCBE MBE practice exam, significantly in excess of the 25% baseline guessing +rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's +ranking of responses is also highly-correlated with correctness; its top two +and top three choices are correct 71% and 88% of the time, respectively, +indicating very strong non-entailment performance. While our ability to +interpret these results is limited by nascent scientific understanding of LLMs +and the proprietary nature of GPT, we believe that these results strongly +suggest that an LLM will pass the MBE component of the Bar Exam in the near +future. +" +Few-shot Multimodal Multitask Multilingual Learning,Aman Chadha,http://arxiv.org/pdf/2303.12489v1.pdf,2023-02-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.mm']",2303.12489v1.pdf," While few-shot learning as a transfer learning paradigm has gained +significant traction for scenarios with limited data, it has primarily been +explored in the context of building unimodal and unilingual models. +Furthermore, a significant part of the existing literature in the domain of +few-shot multitask learning perform in-context learning which requires manually +generated prompts as the input, yielding varying outcomes depending on the +level of manual prompt-engineering. In addition, in-context learning suffers +from substantial computational, memory, and storage costs which eventually +leads to high inference latency because it involves running all of the prompt's +examples through the model every time a prediction is made. In contrast, +methods based on the transfer learning via the fine-tuning paradigm avoid the +aforementioned issues at a one-time cost of fine-tuning weights on a per-task +basis. However, such methods lack exposure to few-shot multimodal multitask +learning. In this paper, we propose few-shot learning for a multimodal +multitask multilingual (FM3) setting by adapting pre-trained vision and +language models using task-specific hypernetworks and contrastively fine-tuning +them to enable few-shot learning. FM3's architecture combines the best of both +worlds of in-context and fine-tuning based learning and consists of three major +components: (i) multimodal contrastive fine-tuning to enable few-shot learning, +(ii) hypernetwork task adaptation to perform multitask learning, and (iii) +task-specific output heads to cater to a plethora of diverse tasks. FM3 learns +the most prominent tasks in the vision and language domains along with their +intersections, namely visual entailment (VE), visual question answering (VQA), +and natural language understanding (NLU) tasks such as neural entity +recognition (NER) and the GLUE benchmark including QNLI, MNLI, QQP, and SST-2. +" +Improving Few-Shot Prompts with Relevant Static Analysis Products,Toufique Ahmed,http://arxiv.org/pdf/2304.06815v2.pdf,2023-04-13,"['cs.se', 'cs.lg']",2304.06815v2.pdf," Large Language Models (LLM) are a new class of computation engines, +""programmed"" via prompt engineering. We are still learning how to best +""program"" these LLMs to help developers. 
We start with the intuition that +developers tend to consciously and unconsciously have a collection of semantics +facts in mind when working on coding tasks. Mostly these are shallow, simple +facts arising from a quick read. For a function, examples of facts might +include parameter and local variable names, return expressions, simple pre- and +post-conditions, and basic control and data flow, etc. + One might assume that the powerful multi-layer architecture of +transformer-style LLMs makes them inherently capable of doing this simple level +of ""code analysis"" and extracting such information, implicitly, while +processing code: but are they, really? If they aren't, could explicitly adding +this information help? Our goal here is to investigate this question, using the +code summarization task and evaluate whether automatically augmenting an LLM's +prompt with semantic facts explicitly, actually helps. + Prior work shows that LLM performance on code summarization benefits from +few-shot samples drawn either from the same-project or from examples found via +information retrieval methods (such as BM25). While summarization performance +has steadily increased since the early days, there is still room for +improvement: LLM performance on code summarization still lags its performance +on natural-language tasks like translation and text summarization. + We find that adding semantic facts actually does help! This approach improves +performance in several different settings suggested by prior work, including +for two different Large Language Models. In most cases, improvement nears or +exceeds 2 BLEU; for the PHP language in the challenging CodeSearchNet dataset, +this augmentation actually yields performance surpassing 30 BLEU. +" +Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery,Debadutta Dash,http://arxiv.org/pdf/2304.13714v3.pdf,2023-04-26,"['cs.ai', 'cs.cl', 'cs.ir']",2304.13714v3.pdf," Despite growing interest in using large language models (LLMs) in healthcare, +current explorations do not assess the real-world utility and safety of LLMs in +clinical settings. Our objective was to determine whether two LLMs can serve +information needs submitted by physicians as questions to an informatics +consultation service in a safe and concordant manner. Sixty six questions from +an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple +prompts. 12 physicians assessed the LLM responses' possibility of patient harm +and concordance with existing reports from an informatics consultation service. +Physician assessments were summarized based on majority vote. For no questions +did a majority of physicians deem either LLM response as harmful. For GPT-3.5, +responses to 8 questions were concordant with the informatics consult report, +20 discordant, and 9 were unable to be assessed. There were 29 responses with +no majority on ""Agree"", ""Disagree"", and ""Unable to assess"". For GPT-4, +responses to 13 questions were concordant, 15 discordant, and 3 were unable to +be assessed. There were 35 responses with no majority. Responses from both LLMs +were largely devoid of overt harm, but less than 20% of the responses agreed +with an answer from an informatics consultation service, responses contained +hallucinated references, and physicians were divided on what constitutes harm. 
+These results suggest that while general purpose LLMs are able to provide safe +and credible responses, they often do not meet the specific information need of +a given question. A definitive evaluation of the usefulness of LLMs in +healthcare settings will likely require additional research on prompt +engineering, calibration, and custom-tailoring of general purpose models. +" +Zelda: Video Analytics using Vision-Language Models,Francisco Romero,http://arxiv.org/pdf/2305.03785v2.pdf,2023-05-05,['cs.db'],2305.03785v2.pdf," Advances in ML have motivated the design of video analytics systems that +allow for structured queries over video datasets. However, existing systems +limit query expressivity, require users to specify an ML model per predicate, +rely on complex optimizations that trade off accuracy for performance, and +return large amounts of redundant and low-quality results. This paper focuses +on the recently developed Vision-Language Models (VLMs) that allow users to +query images using natural language like ""cars during daytime at traffic +intersections."" Through an in-depth analysis, we show VLMs address three +limitations of current video analytics systems: general expressivity, a single +general purpose model to query many predicates, and are both simple and fast. +However, VLMs still return large numbers of redundant and low-quality results +that can overwhelm and burden users. In addition, VLMs often require manual +prompt engineering to improve result relevance. + We present Zelda: a video analytics system that uses VLMs to return both +relevant and semantically diverse results for top-K queries on large video +datasets. Zelda prompts the VLM with the user's query in natural language. +Zelda then automatically adds discriminator and synonym terms to boost +accuracy, and terms to identify low-quality frames. To improve result +diversity, Zelda uses semantic-rich VLM embeddings in an algorithm that prunes +similar frames while considering their relevance to the query and the number of +top-K results requested. We evaluate Zelda across five datasets and 19 queries +and quantitatively show it achieves higher mean average precision (up to 1.15x) +and improves average pairwise similarity (up to 1.16x) compared to using VLMs +out-of-the-box. We also compare Zelda to a state-of-the-art video analytics +engine and show that Zelda retrieves results 7.5x (up to 10.4x) faster for the +same accuracy and frame diversity. +" +ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models,Huahui Yi,http://arxiv.org/pdf/2305.18993v1.pdf,2023-05-30,['cs.cv'],2305.18993v1.pdf," Large pre-trained vision-language models have shown great prominence in +transferring pre-acquired knowledge to various domains and downstream tasks +with appropriate prompting or tuning. Existing prevalent tuning methods can be +generally categorized into three genres: 1) prompt engineering by creating +suitable prompt texts, which is time-consuming and requires domain expertise; +2) or simply fine-tuning the whole model, which is extremely inefficient; 3) +prompt tuning through parameterized prompt embeddings with the text encoder. +Nevertheless, all methods rely on the text encoder for bridging the modality +gap between vision and language. In this work, we question the necessity of the +cumbersome text encoder for a more lightweight and efficient tuning paradigm as +well as more representative prompt embeddings closer to the image +representations. 
To achieve this, we propose a Concept Embedding Search (ConES) +approach by optimizing prompt embeddings -- without the need of the text +encoder -- to capture the 'concept' of the image modality through a variety of +task objectives. By dropping the text encoder, we are able to significantly +speed up the learning process, \eg, from about an hour to just ten minutes in +our experiments for personalized text-to-image generation without impairing the +generation quality. Moreover, our proposed approach is orthogonal to current +existing tuning methods since the searched concept embeddings can be further +utilized in the next stage of fine-tuning the pre-trained large models for +boosting performance. Extensive experiments show that our approach can beat the +prompt tuning and textual inversion methods in a variety of downstream tasks +including objection detection, instance segmentation, and image generation. Our +approach also shows better generalization capability for unseen concepts in +specialized domains, such as the medical domain. +" +ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis,Zhiling Zheng,http://arxiv.org/pdf/2306.11296v2.pdf,2023-06-20,"['cs.ir', 'cond-mat.mtrl-sci', 'cs.cl', 'physics.chem-ph']",2306.11296v2.pdf," We use prompt engineering to guide ChatGPT in the automation of text mining +of metal-organic frameworks (MOFs) synthesis conditions from diverse formats +and styles of the scientific literature. This effectively mitigates ChatGPT's +tendency to hallucinate information -- an issue that previously made the use of +Large Language Models (LLMs) in scientific fields challenging. Our approach +involves the development of a workflow implementing three different processes +for text mining, programmed by ChatGPT itself. All of them enable parsing, +searching, filtering, classification, summarization, and data unification with +different tradeoffs between labor, speed, and accuracy. We deploy this system +to extract 26,257 distinct synthesis parameters pertaining to approximately 800 +MOFs sourced from peer-reviewed research articles. This process incorporates +our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, +resulting in impressive precision, recall, and F1 scores of 90-99%. +Furthermore, with the dataset built by text mining, we constructed a +machine-learning model with over 86% accuracy in predicting MOF experimental +crystallization outcomes and preliminarily identifying important factors in MOF +crystallization. We also developed a reliable data-grounded MOF chatbot to +answer questions on chemical reactions and synthesis procedures. Given that the +process of using ChatGPT reliably mines and tabulates diverse MOF synthesis +information in a unified format, while using only narrative language requiring +no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be +very useful across various other chemistry sub-disciplines. +" +Identifying and Extracting Rare Disease Phenotypes with Large Language Models,Cathy Shyr,http://arxiv.org/pdf/2306.12656v1.pdf,2023-06-22,"['cs.cl', 'cs.ai']",2306.12656v1.pdf," Rare diseases (RDs) are collectively common and affect 300 million people +worldwide. Accurate phenotyping is critical for informing diagnosis and +treatment, but RD phenotypes are often embedded in unstructured text and +time-consuming to extract manually. 
While natural language processing (NLP)
models can perform named entity recognition (NER) to automate extraction, a
major bottleneck is the development of a large, annotated corpus for model
training. Recently, prompt learning emerged as an NLP paradigm that can lead to
more generalizable results without any (zero-shot) or few labeled samples
(few-shot). Despite growing interest in ChatGPT, a revolutionary large language
model capable of following complex human prompts and generating high-quality
responses, none have studied its NER performance for RDs in the zero- and
few-shot settings. To this end, we engineered novel prompts aimed at extracting
RD phenotypes and, to the best of our knowledge, are the first to establish a
benchmark for evaluating ChatGPT's performance in these settings. We compared
its performance to the traditional fine-tuning approach and conducted an
in-depth error analysis. Overall, fine-tuning BioClinicalBERT resulted in
higher performance (F1 of 0.689) than ChatGPT (F1 of 0.472 and 0.591 in the
zero- and few-shot settings, respectively). Despite this, ChatGPT achieved
similar or higher accuracy for certain entities (i.e., rare diseases and signs)
in the one-shot setting (F1 of 0.776 and 0.725). This suggests that with
appropriate prompt engineering, ChatGPT has the potential to match or
outperform fine-tuned language models for certain entity types with just one
labeled sample. While the proliferation of large language models may provide
opportunities for supporting RD diagnosis and treatment, researchers and
clinicians should critically evaluate model outputs and be well-informed of
their limitations.
"
Demonstrations of the Potential of AI-based Political Issue Polling,Nathan E. Sanders,http://arxiv.org/pdf/2307.04781v2.pdf,2023-07-10,['cs.cy'],2307.04781v2.pdf," Political polling is a multi-billion dollar industry with outsized influence
on the societal trajectory of the United States and nations around the world.
However, it has been challenged by factors that stress its cost, availability,
and accuracy. At the same time, artificial intelligence (AI) chatbots have
become compelling stand-ins for human behavior, powered by increasingly
sophisticated large language models (LLMs). Could AI chatbots be an effective
tool for anticipating public opinion on controversial issues to the extent that
they could be used by campaigns, interest groups, and polling firms? We have
developed a prompt engineering methodology for eliciting human-like survey
responses from ChatGPT, which simulate the response to a policy question of a
person described by a set of demographic factors, and produce both an ordinal
numeric response score and a textual justification. We execute large scale
experiments, querying for thousands of simulated responses at a cost far lower
than human surveys. We compare simulated data to human issue polling data from
the Cooperative Election Study (CES). We find that ChatGPT is effective at
anticipating both the mean level and distribution of public opinion on a
variety of policy issues such as abortion bans and approval of the US Supreme
Court, particularly in their ideological breakdown (correlation typically
>85%). However, it is less successful at anticipating demographic-level
differences. Moreover, ChatGPT tends to overgeneralize to new policy issues
that arose after its training data was collected, such as US support for
involvement in the war in Ukraine. 
Our work has implications for our
understanding of the strengths and limitations of the current generation of AI
chatbots as virtual publics or online listening platforms, future directions
for LLM development, and applications of AI tools to the political domain.
(Abridged)
"
Go Beyond The Obvious: Probing the gap of INFORMAL reasoning ability between Humanity and LLMs by Detective Reasoning Puzzle Benchmark,Zhouhon Gu,http://arxiv.org/pdf/2307.05113v2.pdf,2023-07-11,['cs.cl'],2307.05113v2.pdf," Informal reasoning ability is the ability to reason based on common sense,
experience, and intuition. Humans use informal reasoning every day to extract
the most influential elements for their decision-making from a large amount of
life-like information. With the rapid development of language models, the
realization of general artificial intelligence has emerged with hope. Given the
outstanding informal reasoning ability of humans, the extent of language
models' informal reasoning ability has not been well studied by scholars. In
order to explore the gap between humans and language models in informal
reasoning ability, this paper constructs a Detective Reasoning Benchmark, an
assembly of 1,200 questions gathered from accessible online resources, which
aims at evaluating the model's informal reasoning ability in a real-life
context. Considering that the improvement of the model's informal reasoning
ability is restricted by the lack of a benchmark, we further propose a
Self-Question Prompt Framework that mimics human thinking to enhance the
model's informal reasoning ability. The goals of self-question are to find key
elements, deeply investigate the connections between these elements, encourage
the relationship between each element and the problem, and finally, require the
model to reasonably answer the problem. The experimental results show that
human performance greatly outperforms the SoTA language models on the Detective
Reasoning Benchmark. Besides, Self-Question proves to be the most effective
prompt engineering method for improving GPT-4's informal reasoning ability, but
it still does not even surpass the lowest score made by human participants.
Upon acceptance of the paper, the source code for the benchmark will be made
publicly accessible.
"
Benchmarking Causal Study to Interpret Large Language Models for Source Code,Daniel Rodriguez-Cardenas,http://arxiv.org/pdf/2308.12415v1.pdf,2023-08-23,"['cs.se', 'cs.ai']",2308.12415v1.pdf," One of the most common solutions adopted by software researchers to address
code generation is to train Large Language Models (LLMs) on massive amounts
of source code. Although a number of studies have shown that LLMs have been
effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu),
previous research has largely overlooked the role of Causal Inference as a
fundamental component of the interpretability of LLMs' performance. Existing
benchmarks and datasets are meant to highlight the difference between the
expected and the generated outcome, but do not take into account confounding
variables (e.g., lines of code, prompt size) that equally influence the
accuracy metrics. The fact remains that, when dealing with generative software
tasks by LLMs, no benchmark is available to tell researchers how to quantify
either the causal effect of SE-based treatments or the correlation of
confounders to the model's performance. 
In an effort to bring statistical rigor
to the evaluation of LLMs, this paper introduces a benchmarking strategy named
Galeras, comprising curated testbeds for three SE tasks (i.e., code
completion, code summarization, and commit generation) to aid the
interpretation of LLMs' performance. We illustrate the insights of our
benchmarking strategy by conducting a case study on the performance of ChatGPT
under distinct prompt engineering methods. The results of the case study
demonstrate the positive causal influence of prompt semantics on ChatGPT's
generative performance by an average treatment effect of $\approx 3\%$.
Moreover, it was found that confounders such as prompt size are highly
correlated with accuracy metrics ($\approx 0.412\%$). The end result of our
case study is to showcase causal inference evaluations, in practice, to reduce
confounding bias. By reducing the bias, we offer an interpretable solution for
the accuracy metric under analysis.
"
GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench,Ajmain Inqiad Alam,http://arxiv.org/pdf/2308.13963v2.pdf,2023-08-26,['cs.se'],2308.13963v2.pdf," With the emergence of Machine Learning, there has been a surge in leveraging
its capabilities for problem-solving across various domains. In the code clone
realm, the identification of type-4 or semantic clones has emerged as a crucial
yet challenging task. Researchers aim to utilize Machine Learning to tackle
this challenge, often relying on the BigCloneBench dataset. However, it's worth
noting that BigCloneBench, originally not designed for semantic clone
detection, presents several limitations that hinder its suitability as a
comprehensive training dataset for this specific purpose. Furthermore, the
CLCDSA dataset suffers from a lack of reusable examples aligning with
real-world software systems, rendering it inadequate for cross-language clone
detection approaches. In this work, we present a comprehensive semantic clone
and cross-language clone benchmark, GPTCloneBench, by exploiting
SemanticCloneBench and OpenAI's GPT-3 model. In particular, using code
fragments from SemanticCloneBench as sample inputs along with appropriate
prompt engineering for the GPT-3 model, we generate semantic and cross-language
clones for these specific fragments and then conduct a combination of extensive
manual analysis, tool-assisted filtering, functionality testing and automated
validation in building the benchmark. From 79,928 clone pairs of GPT-3 output,
we created a benchmark with 37,149 true semantic clone pairs, 19,288 false
semantic pairs (Type-1/Type-2), and 20,770 cross-language clones across four
languages (Java, C, C#, and Python). Our benchmark is 15-fold larger than
SemanticCloneBench, has more functional code examples for software systems and
programming language support than CLCDSA, and overcomes BigCloneBench's
limitations in quality, quantification, and language variety.
"
AI Foundation Models for Weather and Climate: Applications, Design, and Implementation,S. Karthik Mukkavilli,http://arxiv.org/pdf/2309.10808v2.pdf,2023-09-19,"['cs.lg', 'cs.ai', 'physics.ao-ph', '68t07 (primary), 68t01, 86a08', 'i.2.0; i.4.0; j.2.5']",2309.10808v2.pdf," Machine learning and deep learning methods have been widely explored in
understanding the chaotic behavior of the atmosphere and furthering weather
forecasting. 
There has been increasing interest from technology companies,
government institutions, and meteorological agencies in building digital twins
of the Earth. Recent approaches using transformers, physics-informed machine
learning, and graph neural networks have demonstrated state-of-the-art
performance on relatively narrow spatiotemporal scales and specific tasks. With
the recent success of generative artificial intelligence (AI) using pre-trained
transformers for language modeling and vision with prompt engineering and
fine-tuning, we are now moving towards generalizable AI. In particular, we are
witnessing the rise of AI foundation models that can perform competitively on
multiple domain-specific downstream tasks. Despite this progress, we are still
in the nascent stages of a generalizable AI model for global Earth system
models, regional climate models, and mesoscale weather models. Here, we review
current state-of-the-art AI approaches, primarily from the transformer and
operator learning literature in the context of meteorology. We provide our
perspective on criteria for success towards a family of foundation models for
nowcasting and forecasting weather and climate predictions. We also discuss how
such models can perform competitively on downstream tasks such as downscaling
(super-resolution), identifying conditions conducive to the occurrence of
wildfires, and predicting consequential meteorological phenomena across various
spatiotemporal scales such as hurricanes and atmospheric rivers. In particular,
we examine current AI methodologies and contend they have matured enough to
design and implement a weather foundation model.
"
Exploring Small Language Models with Prompt-Learning Paradigm for Efficient Domain-Specific Text Classification,Hengyu Luo,http://arxiv.org/pdf/2309.14779v1.pdf,2023-09-26,"['cs.cl', 'cs.ai', 'cs.lg']",2309.14779v1.pdf," Domain-specific text classification faces the challenge of scarce labeled
data due to the high cost of manual labeling. Prompt-learning, known for its
efficiency in few-shot scenarios, is proposed as an alternative to traditional
fine-tuning methods. Moreover, although large language models (LLMs) have
gained prominence, small language models (SLMs, with under 1B parameters) offer
significant customizability, adaptability, and cost-effectiveness for
domain-specific tasks, given industry constraints. In this study, we
investigate the potential of SLMs combined with the prompt-learning paradigm
for domain-specific text classification, specifically within customer-agent
interactions in retail. Our evaluations show that, in few-shot settings when
prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M
parameters, achieves approximately 75% accuracy with limited labeled data (up
to 15% of full data), which shows the great potential of SLMs with
prompt-learning. Based on this, we further validate the effectiveness of active
few-shot sampling and the ensemble strategy in the prompt-learning pipeline
that contribute to a remarkable performance gain. Besides, in zero-shot
settings with a fixed model, we underscore a pivotal observation that, although
GPT-3.5-turbo, equipped with around 154B parameters, garners an accuracy of
55.16%, the power of well-designed prompts becomes evident when FLAN-T5-large,
a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves an accuracy
exceeding 31% with the optimized prompt, a leap from its sub-18% performance
with an unoptimized one. 
Our findings underscore the promise of
prompt-learning in classification tasks with SLMs, emphasizing the benefits of
active few-shot sampling and ensemble strategies in few-shot settings, and the
importance of prompt engineering in zero-shot settings.
"
Label Supervised LLaMA Finetuning,Zongxi Li,http://arxiv.org/pdf/2310.01208v1.pdf,2023-10-02,['cs.cl'],2310.01208v1.pdf," The recent success of Large Language Models (LLMs) has gained significant
attention in both academia and industry. Substantial efforts have been made to
enhance the zero- and few-shot generalization capabilities of open-source LLMs
through finetuning. Currently, the prevailing approach is instruction-tuning,
which trains LLMs to complete real-world tasks by generating responses guided
by natural language instructions. It is worth noting that such an approach
may underperform in sequence and token classification tasks. Unlike text
generation tasks, classification tasks have a limited label space, where
precise label prediction is more appreciated than generating diverse and
human-like responses. Prior research has unveiled that instruction-tuned LLMs
cannot outperform BERT, prompting us to explore the potential of leveraging
latent representations from LLMs for supervised label prediction. In this
paper, we introduce a label-supervised adaptation for LLMs, which aims to
finetune the model with discriminant labels. We evaluate this approach with
Label Supervised LLaMA (LS-LLaMA), which is based on LLaMA-2-7B, a relatively
small-scale LLM, and can be finetuned on a single GeForce RTX4090 GPU. We
extract latent representations from the final LLaMA layer and project them into
the label space to compute the cross-entropy loss. The model is finetuned by
Low-Rank Adaptation (LoRA) to minimize this loss. Remarkably, without intricate
prompt engineering or external knowledge, LS-LLaMA substantially outperforms
LLMs ten times its size and demonstrates consistent improvements compared to
robust baselines like BERT-Large and RoBERTa-Large in text classification.
Moreover, by removing the causal mask from decoders, LS-unLLaMA achieves
state-of-the-art performance in named entity recognition (NER). Our work will
shed light on a novel approach to adapting LLMs for various downstream tasks.
"
Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models,Zeqiang Lai,http://arxiv.org/pdf/2310.07653v2.pdf,2023-10-11,['cs.ai'],2310.07653v2.pdf," The revolution of artificial intelligence content generation has been rapidly
accelerated with the booming text-to-image (T2I) diffusion models. Within just
two years of development, the state-of-the-art models could generate images of
unprecedented quality, diversity, and creativity. However, a prevalent
limitation persists in effectively communicating with these popular T2I models,
such as Stable Diffusion, using natural language descriptions. This typically
makes an engaging image hard to obtain without expertise in prompt engineering
with complex word compositions, magic tags, and annotations. Inspired by the
recently released DALLE3 - a T2I model built directly into ChatGPT that talks
human language - we revisit the existing T2I systems that endeavor to align
with human intent and introduce a new task - interactive text to image (iT2I),
where people can interact with an LLM for interleaved high-quality image
generation, editing, and refinement, as well as question answering, with
stronger image-text correspondence, using natural language. 
In addressing +the iT2I problem, we present a simple approach that augments LLMs for iT2I with +prompting techniques and off-the-shelf T2I models. We evaluate our approach for +iT2I in a variety of common-used scenarios under different LLMs, e.g., ChatGPT, +LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a +convenient and low-cost way to introduce the iT2I ability for any existing LLMs +and any text-to-image models without any training while bringing little +degradation on LLMs' inherent capabilities in, e.g., question answering and +code generation. We hope this work could draw broader attention and provide +inspiration for boosting user experience in human-machine interactions +alongside the image quality of the next-generation T2I systems. +" +Promptor: A Conversational and Autonomous Prompt Generation Agent for Intelligent Text Entry Techniques,Junxiao Shen,http://arxiv.org/pdf/2310.08101v2.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08101v2.pdf," Text entry is an essential task in our day-to-day digital interactions. +Numerous intelligent features have been developed to streamline this process, +making text entry more effective, efficient, and fluid. These improvements +include sentence prediction and user personalization. However, as deep +learning-based language models become the norm for these advanced features, the +necessity for data collection and model fine-tuning increases. These challenges +can be mitigated by harnessing the in-context learning capability of large +language models such as GPT-3.5. This unique feature allows the language model +to acquire new skills through prompts, eliminating the need for data collection +and fine-tuning. Consequently, large language models can learn various text +prediction techniques. We initially showed that, for a sentence prediction +task, merely prompting GPT-3.5 surpassed a GPT-2 backed system and is +comparable with a fine-tuned GPT-3.5 model, with the latter two methods +requiring costly data collection, fine-tuning and post-processing. However, the +task of prompting large language models to specialize in specific text +prediction tasks can be challenging, particularly for designers without +expertise in prompt engineering. To address this, we introduce Promptor, a +conversational prompt generation agent designed to engage proactively with +designers. Promptor can automatically generate complex prompts tailored to meet +specific needs, thus offering a solution to this challenge. We conducted a user +study involving 24 participants creating prompts for three intelligent text +entry tasks, half of the participants used Promptor while the other half +designed prompts themselves. The results show that Promptor-designed prompts +result in a 35% increase in similarity and 22% in coherence over those by +designers. +" +Human-in-the-loop Machine Translation with Large Language Model,Xinyi Yang,http://arxiv.org/pdf/2310.08908v1.pdf,2023-10-13,['cs.cl'],2310.08908v1.pdf," The large language model (LLM) has garnered significant attention due to its +in-context learning mechanisms and emergent capabilities. The research +community has conducted several pilot studies to apply LLMs to machine +translation tasks and evaluate their performance from diverse perspectives. +However, previous research has primarily focused on the LLM itself and has not +explored human intervention in the inference process of LLM. 
The +characteristics of LLM, such as in-context learning and prompt engineering, +closely mirror human cognitive abilities in language tasks, offering an +intuitive solution for human-in-the-loop generation. In this study, we propose +a human-in-the-loop pipeline that guides LLMs to produce customized outputs +with revision instructions. The pipeline initiates by prompting the LLM to +produce a draft translation, followed by the utilization of automatic retrieval +or human feedback as supervision signals to enhance the LLM's translation +through in-context learning. The human-machine interactions generated in this +pipeline are also stored in an external database to expand the in-context +retrieval database, enabling us to leverage human supervision in an offline +setting. We evaluate the proposed pipeline using GPT-3.5-turbo API on five +domain-specific benchmarks for German-English translation. The results +demonstrate the effectiveness of the pipeline in tailoring in-domain +translations and improving translation performance compared to direct +translation. Additionally, we discuss the results from the following +perspectives: 1) the effectiveness of different in-context retrieval methods; +2) the construction of a retrieval database under low-resource scenarios; 3) +the observed domains differences; 4) the quantitative analysis of linguistic +statistics; and 5) the qualitative analysis of translation cases. The code and +data are available at https://github.com/NLP2CT/HIL-MT/. +" +ConstitutionMaker: Interactively Critiquing Large Language Models by Converting Feedback into Principles,Savvas Petridis,http://arxiv.org/pdf/2310.15428v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15428v1.pdf," Large language model (LLM) prompting is a promising new approach for users to +create and customize their own chatbots. However, current methods for steering +a chatbot's outputs, such as prompt engineering and fine-tuning, do not support +users in converting their natural feedback on the model's outputs to changes in +the prompt or model. In this work, we explore how to enable users to +interactively refine model outputs through their feedback, by helping them +convert their feedback into a set of principles (i.e. a constitution) that +dictate the model's behavior. From a formative study, we (1) found that users +needed support converting their feedback into principles for the chatbot and +(2) classified the different principle types desired by users. Inspired by +these findings, we developed ConstitutionMaker, an interactive tool for +converting user feedback into principles, to steer LLM-based chatbots. With +ConstitutionMaker, users can provide either positive or negative feedback in +natural language, select auto-generated feedback, or rewrite the chatbot's +response; each mode of feedback automatically generates a principle that is +inserted into the chatbot's prompt. In a user study with 14 participants, we +compare ConstitutionMaker to an ablated version, where users write their own +principles. With ConstitutionMaker, participants felt that their principles +could better guide the chatbot, that they could more easily convert their +feedback into principles, and that they could write principles more +efficiently, with less mental demand. ConstitutionMaker helped users identify +ways to improve the chatbot, formulate their intuitive responses to the model +into feedback, and convert this feedback into specific and clear principles. 
+Together, these findings inform future tools that support the interactive +critiquing of LLM outputs. +" +Few-shot learning for sentence pair classification and its applications in software engineering,Robert Kraig Helmeczi,http://arxiv.org/pdf/2306.08058v1.pdf,2023-06-13,['cs.se'],2306.08058v1.pdf," Few-shot learning-the ability to train models with access to limited data-has +become increasingly popular in the natural language processing (NLP) domain, as +large language models such as GPT and T0 have been empirically shown to achieve +high performance in numerous tasks with access to just a handful of labeled +examples. Smaller language models such as BERT and its variants have also been +shown to achieve strong performance with just a handful of labeled examples +when combined with few-shot learning algorithms like pattern-exploiting +training (PET) and SetFit. The focus of this work is to investigate the +performance of alternative few-shot learning approaches with BERT-based models. +Specifically, vanilla fine-tuning, PET and SetFit are compared for numerous +BERT-based checkpoints over an array of training set sizes. To facilitate this +investigation, applications of few-shot learning are considered in software +engineering. For each task, high-performance techniques and their associated +model checkpoints are identified through detailed empirical analysis. Our +results establish PET as a strong few-shot learning approach, and our analysis +shows that with just a few hundred labeled examples it can achieve performance +near that of fine-tuning on full-sized data sets. +" +FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark,Liang Xu,http://arxiv.org/pdf/2107.07498v2.pdf,2021-07-15,"['cs.cl', 'cs.ai']",2107.07498v2.pdf," Pretrained Language Models (PLMs) have achieved tremendous success in natural +language understanding tasks. While different learning schemes -- fine-tuning, +zero-shot, and few-shot learning -- have been widely explored and compared for +languages such as English, there is comparatively little work in Chinese to +fairly and comprehensively evaluate and compare these methods and thus hinders +cumulative progress. In this paper, we introduce the Chinese Few-shot Learning +Evaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluation +benchmark in Chinese. It includes nine tasks, ranging from single-sentence and +sentence-pair classification tasks to machine reading comprehension tasks. We +systematically evaluate five state-of-the-art (SOTA) few-shot learning methods +(including PET, ADAPET, LM-BFF, P-tuning and EFL), and compare their +performance with fine-tuning and zero-shot learning schemes on the newly +constructed FewCLUE benchmark. Experimental results reveal that: 1) The effect +of different few-shot learning methods is sensitive to the pre-trained model to +which the methods are applied; 2) PET and P-tuning achieve the best overall +performance with RoBERTa and ERNIE respectively. Our benchmark is used in the +few-shot learning contest of NLPCC 2021. In addition, we provide a +user-friendly toolkit, as well as an online leaderboard to help facilitate +further progress on Chinese few-shot learning. We provide a baseline +performance on different learning methods, a reference for future research. 
+" +Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning,Jishnu Jaykumar P,http://arxiv.org/pdf/2307.03073v2.pdf,2023-07-06,"['cs.cv', 'cs.ro']",2307.03073v2.pdf," We propose a novel framework for few-shot learning by leveraging large-scale +vision-language models such as CLIP. Motivated by the unimodal prototypical +networks for few-shot learning, we introduce PROTO-CLIP that utilizes image +prototypes and text prototypes for few-shot learning. Specifically, PROTO-CLIP +adapts the image encoder and text encoder in CLIP in a joint fashion using +few-shot examples. The two encoders are used to compute prototypes of image +classes for classification. During adaptation, we propose aligning the image +and text prototypes of corresponding classes. Such a proposed alignment is +beneficial for few-shot classification due to the contributions from both types +of prototypes. We demonstrate the effectiveness of our method by conducting +experiments on benchmark datasets for few-shot learning as well as in the real +world for robot perception. +" +A Survey on Recent Named Entity Recognition and Relation Classification Methods with Focus on Few-Shot Learning Approaches,Sakher Alqaaidi,http://arxiv.org/pdf/2310.19055v1.pdf,2023-10-29,['cs.cl'],2310.19055v1.pdf," Named entity recognition and relation classification are key stages for +extracting information from unstructured text. Several natural language +processing applications utilize the two tasks, such as information retrieval, +knowledge graph construction and completion, question answering and other +domain-specific applications, such as biomedical data mining. We present a +survey of recent approaches in the two tasks with focus on few-shot learning +approaches. Our work compares the main approaches followed in the two +paradigms. Additionally, we report the latest metric scores in the two tasks +with a structured analysis that considers the results in the few-shot learning +scope. +" +True Few-Shot Learning with Prompts -- A Real-World Perspective,Timo Schick,http://arxiv.org/pdf/2111.13440v1.pdf,2021-11-26,['cs.cl'],2111.13440v1.pdf," Prompt-based approaches are strong at few-shot learning. However, Perez et +al. (2021) have recently cast doubt on their performance because they had +difficulty getting good results in a ""true"" few-shot setting in which prompts +and hyperparameters cannot be tuned on a dev set. In view of this, we conduct +an extensive study of PET, a method that combines textual instructions with +example-based finetuning. We show that, if correctly configured, PET performs +strongly in a true few-shot setting, i.e., without a dev set. Crucial for this +strong performance is PET's ability to intelligently handle multiple prompts. +We then put our findings to a real-world test by running PET on RAFT, a +benchmark of tasks taken directly from realistic NLP applications for which no +labeled dev or test sets are available. PET achieves a new state of the art on +RAFT and performs close to non-expert humans for 7 out of 11 tasks. These +results demonstrate that prompt-based learners like PET excel at true few-shot +learning and underpin our belief that learning from instructions will play an +important role on the path towards human-like few-shot learning capabilities. +" +Improving In-Context Few-Shot Learning via Self-Supervised Training,Mingda Chen,http://arxiv.org/pdf/2205.01703v2.pdf,2022-05-03,['cs.cl'],2205.01703v2.pdf," Self-supervised pretraining has made few-shot learning possible for many NLP +tasks. 
But the pretraining objectives are not typically adapted specifically +for in-context few-shot learning. In this paper, we propose to use +self-supervision in an intermediate training stage between pretraining and +downstream few-shot usage with the goal to teach the model to perform +in-context few shot learning. We propose and evaluate four self-supervised +objectives on two benchmarks. We find that the intermediate self-supervision +stage produces models that outperform strong baselines. Ablation study shows +that several factors affect the downstream performance, such as the amount of +training data and the diversity of the self-supervised objectives. +Human-annotated cross-task supervision and self-supervision are complementary. +Qualitative analysis suggests that the self-supervised-trained models are +better at following task requirements. +" +Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models,Mengzhou Xia,http://arxiv.org/pdf/2205.15223v3.pdf,2022-05-30,"['cs.cl', 'cs.lg']",2205.15223v3.pdf," Pre-trained masked language models successfully perform few-shot learning by +formulating downstream tasks as text infilling. However, as a strong +alternative in full-shot settings, discriminative pre-trained models like +ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based +few-shot learning to ELECTRA and show that it outperforms masked language +models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a +token is generated or original. We naturally extend that to prompt-based +few-shot learning by training to score the originality of the target options +without introducing new parameters. Our method can be easily adapted to tasks +involving multi-token predictions without extra computation overhead. Analysis +shows that ELECTRA learns distributions that align better with downstream +tasks. +" +Revisiting Few-Shot Learning from a Causal Perspective,Guoliang Lin,http://arxiv.org/pdf/2209.13816v1.pdf,2022-09-28,"['cs.lg', 'cs.ai']",2209.13816v1.pdf," Few-shot learning with N-way K-shot scheme is an open challenge in machine +learning. Many approaches have been proposed to tackle this problem, e.g., the +Matching Networks and CLIP-Adapter. Despite that these approaches have shown +significant progress, the mechanism of why these methods succeed has not been +well explored. In this paper, we interpret these few-shot learning methods via +causal mechanism. We show that the existing approaches can be viewed as +specific forms of front-door adjustment, which is to remove the effects of +confounders. Based on this, we introduce a general causal method for few-shot +learning, which considers not only the relationship between examples but also +the diversity of representations. Experimental results demonstrate the +superiority of our proposed method in few-shot classification on various +benchmark datasets. Code is available in the supplementary material. +" +In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models,Yukun Huang,http://arxiv.org/pdf/2212.10670v1.pdf,2022-12-20,"['cs.cl', 'cs.lg']",2212.10670v1.pdf," Given the success with in-context learning of large pre-trained language +models, we introduce in-context learning distillation to transfer in-context +few-shot learning ability from large models to smaller models. 
We propose to +combine in-context learning objectives with language modeling objectives to +distill both the ability to read in-context examples and task knowledge to the +smaller models. We perform in-context learning distillation under two different +few-shot learning paradigms: Meta In-context Tuning (Meta-ICT) and Multitask +In-context Tuning (Multitask-ICT). Multitask-ICT performs better on multitask +few-shot learning but also requires more computation than Meta-ICT. Our method +shows consistent improvements for both Meta-ICT and Multitask-ICT on two +benchmarks: LAMA and CrossFit. Our extensive experiments and analysis reveal +that in-context learning objectives and language modeling objectives are +complementary under the Multitask-ICT paradigm. In-context learning objectives +achieve the best performance when combined with language modeling objectives. +" +FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?,Zihao Jiang,http://arxiv.org/pdf/2307.04114v1.pdf,2023-07-09,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.mm']",2307.04114v1.pdf," Few-shot learning aims to train models that can be generalized to novel +classes with only a few samples. Recently, a line of works are proposed to +enhance few-shot learning with accessible semantic information from class +names. However, these works focus on improving existing modules such as visual +prototypes and feature extractors of the standard few-shot learning framework. +This limits the full potential use of semantic information. In this paper, we +propose a novel few-shot learning framework that uses pre-trained language +models based on contrastive learning. To address the challenge of alignment +between visual features and textual embeddings obtained from text-based +pre-trained language model, we carefully design the textual branch of our +framework and introduce a metric module to generalize the cosine similarity. +For better transferability, we let the metric module adapt to different +few-shot tasks and adopt MAML to train the model via bi-level optimization. +Moreover, we conduct extensive experiments on multiple benchmarks to +demonstrate the effectiveness of our method. +" +Reordering Examples Helps during Priming-based Few-Shot Learning,Sawan Kumar,http://arxiv.org/pdf/2106.01751v1.pdf,2021-06-03,['cs.cl'],2106.01751v1.pdf," The ability to learn from limited data, or few-shot learning, is a desirable +and often critical requirement for NLP systems. While many existing methods do +poorly at learning from a handful of examples, large pretrained language models +have recently been shown to be efficient few-shot learners. One approach to +few-shot learning, which does not require finetuning of model parameters, is to +augment the language model's input with priming text which is typically +constructed using task specific descriptions and examples. In this work, we +further explore priming-based few-shot learning, with focus on using examples +as prompts. We show that presenting examples in the right order is key for +generalization. We introduce PERO (Prompting with Examples in the Right Order), +where we formulate few-shot learning as search over the set of permutations of +the training examples. We show that PERO can learn to generalize efficiently +using as few as 10 examples, in contrast to existing approaches. While the +newline token is a natural choice for separating the examples in the prompt, we +show that learning a new separator token can potentially provide further gains +in performance. 
We demonstrate the effectiveness of the proposed method on the +tasks of sentiment classification, natural language inference and fact +retrieval. Finally, we analyze the learned prompts to reveal novel insights, +including the idea that two training examples in the right order alone can +provide competitive performance for sentiment classification and natural +language inference. +" +CLUES: Few-Shot Learning Evaluation in Natural Language Understanding,Subhabrata Mukherjee,http://arxiv.org/pdf/2111.02570v1.pdf,2021-11-04,"['cs.cl', 'cs.lg']",2111.02570v1.pdf," Most recent progress in natural language understanding (NLU) has been driven, +in part, by benchmarks such as GLUE, SuperGLUE, SQuAD, etc. In fact, many NLU +models have now matched or exceeded ""human-level"" performance on many tasks in +these benchmarks. Most of these benchmarks, however, give models access to +relatively large amounts of labeled data for training. As such, the models are +provided far more data than required by humans to achieve strong performance. +That has motivated a line of work that focuses on improving few-shot learning +performance of NLU models. However, there is a lack of standardized evaluation +benchmarks for few-shot NLU resulting in different experimental settings in +different papers. To help accelerate this line of work, we introduce CLUES +(Constrained Language Understanding Evaluation Standard), a benchmark for +evaluating the few-shot learning capabilities of NLU models. We demonstrate +that while recent models reach human performance when they have access to large +amounts of labeled data, there is a huge gap in performance in the few-shot +setting for most tasks. We also demonstrate differences between alternative +model families and adaptation techniques in the few shot setting. Finally, we +discuss several principles and choices in designing the experimental settings +for evaluating the true few-shot learning performance and suggest a unified +standardized approach to few-shot learning evaluation. We aim to encourage +research on NLU models that can generalize to new tasks with a small number of +examples. Code and data for CLUES are available at +https://github.com/microsoft/CLUES. +" +Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning,Yu Meng,http://arxiv.org/pdf/2211.03044v2.pdf,2022-11-06,"['cs.cl', 'cs.lg']",2211.03044v2.pdf," Recent studies have revealed the intriguing few-shot learning ability of +pretrained language models (PLMs): They can quickly adapt to a new task when +fine-tuned on a small amount of labeled data formulated as prompts, without +requiring abundant task-specific annotations. Despite their promising +performance, most existing few-shot approaches that only learn from the small +training set still underperform fully supervised training by nontrivial +margins. In this work, we study few-shot learning with PLMs from a different +perspective: We first tune an autoregressive PLM on the few-shot samples and +then use it as a generator to synthesize a large amount of novel training +samples which augment the original training set. To encourage the generator to +produce label-discriminative samples, we train it via weighted maximum +likelihood where the weight of each token is automatically adjusted based on a +discriminative meta-learning objective. A classification PLM can then be +fine-tuned on both the few-shot and the synthetic samples with regularization +for better generalization and stability. 
Our approach FewGen achieves an +overall better result across seven classification tasks of the GLUE benchmark +than existing few-shot learning methods, improving no-augmentation methods by +5+ average points, and outperforming augmentation methods by 3+ average points. +" +Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data,Alon Albalak,http://arxiv.org/pdf/2302.00674v4.pdf,2023-02-01,"['cs.lg', 'cs.cl']",2302.00674v4.pdf," Few-shot learning is valuable in many real-world applications, but learning a +generalizable model without overfitting to the few labeled datapoints is +challenging. In this work, we focus on Few-shot Learning with Auxiliary Data +(FLAD), a training paradigm that assumes access to auxiliary data during +few-shot learning in hopes of improving generalization. Previous works have +proposed automated methods for mixing auxiliary and target data, but these +methods typically scale linearly (or worse) with the number of auxiliary +datasets, limiting their practicality. In this work we relate FLAD to the +explore-exploit dilemma that is central to the multi-armed bandit setting and +derive algorithms whose computational complexity is independent of the number +of auxiliary datasets, allowing us to scale to 100x more auxiliary datasets +than prior methods. We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and +compare them with prior FLAD methods that either explore or exploit, finding +that the combination of exploration and exploitation is crucial. Through +extensive experimentation we find that our methods outperform all pre-existing +FLAD methods by 4% and lead to the first 3 billion parameter language models +that outperform the 175 billion parameter GPT-3. Overall, our work suggests +that the discovery of better, more efficient mixing strategies for FLAD may +provide a viable path towards substantially improving generalization in +few-shot learning. +" +Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching,Donggyun Kim,http://arxiv.org/pdf/2303.14969v1.pdf,2023-03-27,"['cs.cv', 'cs.ai']",2303.14969v1.pdf," Dense prediction tasks are a fundamental class of problems in computer +vision. As supervised methods suffer from high pixel-wise labeling cost, a +few-shot learning solution that can learn any dense task from a few labeled +images is desired. Yet, current few-shot learning methods target a restricted +set of tasks such as semantic segmentation, presumably due to challenges in +designing a general and unified model that is able to flexibly and efficiently +adapt to arbitrary tasks of unseen semantics. We propose Visual Token Matching +(VTM), a universal few-shot learner for arbitrary dense prediction tasks. It +employs non-parametric matching on patch-level embedded tokens of images and +labels that encapsulates all tasks. Also, VTM flexibly adapts to any task with +a tiny amount of task-specific parameters that modulate the matching algorithm. +We implement VTM as a powerful hierarchical encoder-decoder architecture +involving ViT backbones where token matching is performed at multiple feature +hierarchies. We experiment VTM on a challenging variant of Taskonomy dataset +and observe that it robustly few-shot learns various unseen dense prediction +tasks. Surprisingly, it is competitive with fully supervised baselines using +only 10 labeled examples of novel tasks (0.004% of full supervision) and +sometimes outperforms using 0.1% of full supervision. 
Codes are available at +https://github.com/GitGyun/visual_token_matching. +" +FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning,Kun Song,http://arxiv.org/pdf/2310.15105v3.pdf,2023-10-23,['cs.cv'],2310.15105v3.pdf," Due to the limited availability of data, existing few-shot learning methods +trained from scratch fail to achieve satisfactory performance. In contrast, +large-scale pre-trained models such as CLIP demonstrate remarkable few-shot and +zero-shot capabilities. To enhance the performance of pre-trained models for +downstream tasks, fine-tuning the model on downstream data is frequently +necessary. However, fine-tuning the pre-trained model leads to a decrease in +its generalizability in the presence of distribution shift, while the limited +number of samples in few-shot learning makes the model highly susceptible to +overfitting. Consequently, existing methods for fine-tuning few-shot learning +primarily focus on fine-tuning the model's classification head or introducing +additional structure. In this paper, we introduce a fine-tuning approach termed +Feature Discrimination Alignment (FD-Align). Our method aims to bolster the +model's generalizability by preserving the consistency of spurious features +across the fine-tuning process. Extensive experimental results validate the +efficacy of our approach for both ID and OOD tasks. Once fine-tuned, the model +can seamlessly integrate with existing methods, leading to performance +improvements. Our code can be found in https://github.com/skingorz/FD-Align. +" +Few-Shot Learning with Localization in Realistic Settings,Davis Wertheimer,http://arxiv.org/pdf/1904.08502v2.pdf,2019-04-09,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",1904.08502v2.pdf," Traditional recognition methods typically require large, +artificially-balanced training classes, while few-shot learning methods are +tested on artificially small ones. In contrast to both extremes, real world +recognition problems exhibit heavy-tailed class distributions, with cluttered +scenes and a mix of coarse and fine-grained class distinctions. We show that +prior methods designed for few-shot learning do not work out of the box in +these challenging conditions, based on a new ""meta-iNat"" benchmark. We +introduce three parameter-free improvements: (a) better training procedures +based on adapting cross-validation to meta-learning, (b) novel architectures +that localize objects using limited bounding box annotations before +classification, and (c) simple parameter-free expansions of the feature space +based on bilinear pooling. Together, these improvements double the accuracy of +state-of-the-art models on meta-iNat while generalizing to prior benchmarks, +complex neural architectures, and settings with substantial domain shift. +" +Model-Agnostic Graph Regularization for Few-Shot Learning,Ethan Shen,http://arxiv.org/pdf/2102.07077v1.pdf,2021-02-14,"['cs.lg', 'cs.cv']",2102.07077v1.pdf," In many domains, relationships between categories are encoded in the +knowledge graph. Recently, promising results have been achieved by +incorporating knowledge graph as side information in hard classification tasks +with severely limited data. However, prior models consist of highly complex +architectures with many sub-components that all seem to impact performance. In +this paper, we present a comprehensive empirical study on graph embedded +few-shot learning. 
We introduce a graph regularization approach that allows a +deeper understanding of the impact of incorporating graph information between +labels. Our proposed regularization is widely applicable and model-agnostic, +and boosts the performance of any few-shot learning model, including +fine-tuning, metric-based, and optimization-based meta-learning. Our approach +improves the performance of strong base learners by up to 2% on Mini-ImageNet +and 6.7% on ImageNet-FS, outperforming state-of-the-art graph embedded methods. +Additional analyses reveal that graph regularizing models result in a lower +loss for more difficult tasks, such as those with fewer shots and less +informative support examples. +" +Uniform Sampling over Episode Difficulty,Sébastien M. R. Arnold,http://arxiv.org/pdf/2108.01662v2.pdf,2021-08-03,"['cs.lg', 'cs.ai', 'cs.cv']",2108.01662v2.pdf," Episodic training is a core ingredient of few-shot learning to train models +on tasks with limited labelled data. Despite its success, episodic training +remains largely understudied, prompting us to ask the question: what is the +best way to sample episodes? In this paper, we first propose a method to +approximate episode sampling distributions based on their difficulty. Building +on this method, we perform an extensive analysis and find that sampling +uniformly over episode difficulty outperforms other sampling schemes, including +curriculum and easy-/hard-mining. As the proposed sampling method is algorithm +agnostic, we can leverage these insights to improve few-shot learning +accuracies across many episodic training algorithms. We demonstrate the +efficacy of our method across popular few-shot learning datasets, algorithms, +network architectures, and protocols. +" +CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems,Fei Mi,http://arxiv.org/pdf/2109.04645v4.pdf,2021-09-10,"['cs.cl', 'cs.lg']",2109.04645v4.pdf," As labeling cost for different modules in task-oriented dialog (ToD) systems +is high, a major challenge in practice is to learn different tasks with the +least amount of labeled data. Recently, prompting methods over pre-trained +language models (PLMs) have shown promising results for few-shot learning in +ToD. To better utilize the power of PLMs, this paper proposes Comprehensive +Instruction (CINS) that exploits PLMs with extra task-specific instructions. We +design a schema (definition, constraint, prompt) of instructions and their +customized realizations for three important downstream tasks in ToD, i.e. +intent classification, dialog state tracking, and natural language generation. +A sequence-to-sequence model (T5) is adopted to solve these three tasks in a +unified framework. Extensive experiments are conducted on these ToD tasks in +realistic few-shot learning scenarios with small validation data. Empirical +results demonstrate that the proposed CINS approach consistently improves +techniques that finetune PLMs with raw input or short prompts. +" +Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation,Chujie Zheng,http://arxiv.org/pdf/2109.06513v2.pdf,2021-09-14,['cs.cl'],2109.06513v2.pdf," Dialog models can be greatly strengthened through grounding on various +external information, but grounded dialog corpora are usually not naturally +accessible. In this work, we focus on the few-shot learning for grounded dialog +generation (GDG). 
We first propose a simple prompting method for GDG tasks, +where different constructs of model input, such as the grounding source and the +conversation context, are distinguished through continuous or discrete prompts. +On three typical GDG tasks, we empirically demonstrate and analyze in-depth the +effectiveness of our method. We then conduct extensive experiments to +thoroughly investigate how our prompting method works with different +pre-trained models. We show that prompted language models perform superiorly to +conversational models, and further analyze various factors that influence the +effects of prompting. Overall, our work introduces a prompt-based perspective +to the few-shot learning for GDG tasks, and provides valuable findings and +insights for future research. +" +Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning,Sungyong Baik,http://arxiv.org/pdf/2110.03909v2.pdf,2021-10-08,"['cs.lg', 'cs.cv']",2110.03909v2.pdf," In few-shot learning scenarios, the challenge is to generalize and perform +well on new unseen examples when only very few labeled examples are available +for each task. Model-agnostic meta-learning (MAML) has gained the popularity as +one of the representative few-shot learning methods for its flexibility and +applicability to diverse problems. However, MAML and its variants often resort +to a simple loss function without any auxiliary loss function or regularization +terms that can help achieve better generalization. The problem lies in that +each application and task may require different auxiliary loss function, +especially when tasks are diverse and distinct. Instead of attempting to +hand-design an auxiliary loss function for each application and task, we +introduce a new meta-learning framework with a loss function that adapts to +each task. Our proposed framework, named Meta-Learning with Task-Adaptive Loss +Function (MeTAL), demonstrates the effectiveness and the flexibility across +various domains, such as few-shot classification and few-shot regression. +" +Ontology-enhanced Prompt-tuning for Few-shot Learning,Hongbin Ye,http://arxiv.org/pdf/2201.11332v1.pdf,2022-01-27,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2201.11332v1.pdf," Few-shot Learning (FSL) is aimed to make predictions based on a limited +number of samples. Structured data such as knowledge graphs and ontology +libraries has been leveraged to benefit the few-shot setting in various tasks. +However, the priors adopted by the existing methods suffer from challenging +knowledge missing, knowledge noise, and knowledge heterogeneity, which hinder +the performance for few-shot learning. In this study, we explore knowledge +injection for FSL with pre-trained language models and propose +ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop the +ontology transformation based on the external knowledge graph to address the +knowledge missing issue, which fulfills and converts structure knowledge to +text. We further introduce span-sensitive knowledge injection via a visible +matrix to select informative knowledge to handle the knowledge noise issue. To +bridge the gap between knowledge and text, we propose a collective training +algorithm to optimize representations jointly. We evaluate our proposed +OntoPrompt in three tasks, including relation extraction, event extraction, and +knowledge graph completion, with eight datasets. Experimental results +demonstrate that our approach can obtain better few-shot performance than +baselines. 
+" +Impossible Triangle: What's Next for Pre-trained Language Models?,Chenguang Zhu,http://arxiv.org/pdf/2204.06130v2.pdf,2022-04-13,['cs.cl'],2204.06130v2.pdf," Recent development of large-scale pre-trained language models (PLM) have +significantly improved the capability of models in various NLP tasks, in terms +of performance after task-specific fine-tuning and zero-shot / few-shot +learning. However, many of such models come with a dauntingly huge size that +few institutions can afford to pre-train, fine-tune or even deploy, while +moderate-sized models usually lack strong generalized few-shot learning +capabilities. In this paper, we first elaborate the current obstacles of using +PLM models in terms of the Impossible Triangle: 1) moderate model size, 2) +state-of-the-art few-shot learning capability, and 3) state-of-the-art +fine-tuning capability. We argue that all existing PLM models lack one or more +properties from the Impossible Triangle. To remedy these missing properties of +PLMs, various techniques have been proposed, such as knowledge distillation, +data augmentation and prompt learning, which inevitably brings additional work +to the application of PLMs in real scenarios. We then offer insights into +future research directions of PLMs to achieve the Impossible Triangle, and +break down the task into several key phases. +" +A Study on Prompt-based Few-Shot Learning Methods for Belief State Tracking in Task-oriented Dialog Systems,Debjoy Saha,http://arxiv.org/pdf/2204.08167v1.pdf,2022-04-18,"['cs.cl', 'cs.ai']",2204.08167v1.pdf," We tackle the Dialogue Belief State Tracking(DST) problem of task-oriented +conversational systems. Recent approaches to this problem leveraging +Transformer-based models have yielded great results. However, training these +models is expensive, both in terms of computational resources and time. +Additionally, collecting high quality annotated dialogue datasets remains a +challenge for researchers because of the extensive annotation required for +training these models. Driven by the recent success of pre-trained language +models and prompt-based learning, we explore prompt-based few-shot learning for +Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage +prompt-based language modelling task and train language models for both tasks +and present a comprehensive empirical analysis of their separate and joint +performance. We demonstrate the potential of prompt-based methods in few-shot +learning for DST and provide directions for future improvement. +" +How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models,Hai Dang,http://arxiv.org/pdf/2209.01390v1.pdf,2022-09-03,"['cs.hc', 'cs.cl', 'h.5.2; i.2.7']",2209.01390v1.pdf," Deep generative models have the potential to fundamentally change the way we +create high-fidelity digital content but are often hard to control. Prompting a +generative model is a promising recent development that in principle enables +end-users to creatively leverage zero-shot and few-shot learning to assign new +tasks to an AI ad-hoc, simply by writing them down. However, for the majority +of end-users writing effective prompts is currently largely a trial and error +process. To address this, we discuss the key opportunities and challenges for +interactive creative applications that use prompting as a new paradigm for +Human-AI interaction. 
Based on our analysis, we propose four design goals for
user interfaces that support prompting. We illustrate these with concrete UI
design sketches, focusing on the use case of creative writing. The research
community in HCI and AI can take these as starting points to develop adequate
user interfaces for models capable of zero- and few-shot learning.
"
On Measuring the Intrinsic Few-Shot Hardness of Datasets,Xinran Zhao,http://arxiv.org/pdf/2211.09113v1.pdf,2022-11-16,['cs.cl'],2211.09113v1.pdf," While advances in pre-training have led to dramatic improvements in few-shot
learning of NLP tasks, there is limited understanding of what drives successful
few-shot adaptation in datasets. In particular, given a new dataset and a
pre-trained model, what properties of the dataset make it few-shot learnable,
and are these properties independent of the specific adaptation techniques
used? We consider an extensive set of recent few-shot learning methods, and
show that their performance across a large number of datasets is highly
correlated, showing that few-shot hardness may be intrinsic to datasets, for a
given pre-trained model. To estimate intrinsic few-shot hardness, we then
propose a simple and lightweight metric called ""Spread"" that captures the
intuition that few-shot learning is made possible by exploiting feature-space
invariances between training and test samples. Our metric better accounts for
few-shot hardness compared to existing notions of hardness, and is ~8-100x
faster to compute.
"
Differentiable Entailment for Parameter Efficient Few Shot Learning,Ethan Kim,http://arxiv.org/pdf/2301.13345v1.pdf,2023-01-31,['cs.cl'],2301.13345v1.pdf," Few-shot learning allows pre-trained language models to adapt to downstream
tasks while using a limited number of training examples. However, practical
applications are limited when all model parameters must be optimized. In this
work we apply a new technique for parameter-efficient few-shot learning while
adopting a strict definition of parameter efficiency. Our training method
combines 1) intermediate training by reformulating natural language tasks as
entailment tasks \cite{wang_entailment_2021} and 2) differentiable optimization
of template and label tokens \cite{zhang_differentiable_2021}. We quantify the
tradeoff between parameter efficiency and performance in the few-shot regime
and propose a simple model-agnostic approach that can be extended to any task.
By achieving competitive performance while only optimizing 3\% of a model's
parameters and allowing for batched inference, we allow for more efficient
practical deployment of models.
"
MerA: Merging Pretrained Adapters For Few-Shot Learning,Shwai He,http://arxiv.org/pdf/2308.15982v1.pdf,2023-08-30,['cs.cl'],2308.15982v1.pdf," Adapter tuning, which updates only a few parameters, has become a mainstream
method for fine-tuning pretrained language models to downstream tasks. However,
it often yields subpar results in few-shot learning. AdapterFusion, which
assembles pretrained adapters using composition layers tailored to specific
tasks, is a possible solution but significantly increases trainable parameters
and deployment costs. Despite this, our preliminary study reveals that even
single adapters can outperform AdapterFusion in few-shot learning, urging us to
propose Merging Pretrained Adapters (MerA), which efficiently incorporates
pretrained adapters into a single model through model fusion. 
+Extensive experiments on two PLMs demonstrate that MerA achieves substantial +improvements compared to both single adapters and AdapterFusion. To further +enhance the capacity of MerA, we also introduce a simple yet effective +technique, referred to as the ""\textit{same-track}"" setting, that merges +adapters from the same track of pretraining tasks. With the implementation of +the ""\textit{same-track}"" setting, we observe even more impressive gains, +surpassing the performance of both full fine-tuning and adapter tuning by a +substantial margin, e.g., 3.5\% in MRPC and 5.0\% in MNLI. +" +Meta-Adapter: An Online Few-shot Learner for Vision-Language Model,Cheng Cheng,http://arxiv.org/pdf/2311.03774v1.pdf,2023-11-07,['cs.cv'],2311.03774v1.pdf," The contrastive vision-language pre-training, known as CLIP, demonstrates +remarkable potential in perceiving open-world visual concepts, enabling +effective zero-shot image recognition. Nevertheless, few-shot learning methods +based on CLIP typically require offline fine-tuning of the parameters on +few-shot samples, resulting in longer inference time and the risk of +over-fitting in certain domains. To tackle these challenges, we propose the +Meta-Adapter, a lightweight residual-style adapter, to refine the CLIP features +guided by the few-shot samples in an online manner. With a few training +samples, our method can enable effective few-shot learning capabilities and +generalize to unseen data or tasks without additional fine-tuning, achieving +competitive performance and high efficiency. Without bells and whistles, our +approach outperforms the state-of-the-art online few-shot learning method by an +average of 3.6\% on eight image classification datasets with higher inference +speed. Furthermore, our model is simple and flexible, serving as a +plug-and-play module directly applicable to downstream tasks. Without further +fine-tuning, Meta-Adapter obtains notable performance improvements in +open-vocabulary object detection and segmentation tasks. +" +Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference,Shell Xu Hu,http://arxiv.org/pdf/2204.07305v1.pdf,2022-04-15,"['cs.cv', 'cs.lg']",2204.07305v1.pdf," Few-shot learning (FSL) is an important and topical problem in computer +vision that has motivated extensive research into numerous methods spanning +from sophisticated meta-learning methods to simple transfer learning baselines. +We seek to push the limits of a simple-but-effective pipeline for more +realistic and practical settings of few-shot image classification. To this end, +we explore few-shot learning from the perspective of neural network +architecture, as well as a three stage pipeline of network updates under +different data supplies, where unsupervised external data is considered for +pre-training, base categories are used to simulate few-shot tasks for +meta-training, and the scarcely labelled data of an novel task is taken for +fine-tuning. We investigate questions such as: (1) How pre-training on external +data benefits FSL? (2) How state-of-the-art transformer architectures can be +exploited? and (3) How fine-tuning mitigates domain shift? Ultimately, we show +that a simple transformer-based pipeline yields surprisingly good performance +on standard benchmarks such as Mini-ImageNet, CIFAR-FS, CDFSL and Meta-Dataset. +Our code and demo are available at https://hushell.github.io/pmf. 
+" +"Multi-Level Fine-Tuning, Data Augmentation, and Few-Shot Learning for Specialized Cyber Threat Intelligence",Markus Bayer,http://arxiv.org/pdf/2207.11076v1.pdf,2022-07-22,"['cs.cr', 'cs.cl']",2207.11076v1.pdf," Gathering cyber threat intelligence from open sources is becoming +increasingly important for maintaining and achieving a high level of security +as systems become larger and more complex. However, these open sources are +often subject to information overload. It is therefore useful to apply machine +learning models that condense the amount of information to what is necessary. +Yet, previous studies and applications have shown that existing classifiers are +not able to extract specific information about emerging cybersecurity events +due to their low generalization ability. Therefore, we propose a system to +overcome this problem by training a new classifier for each new incident. Since +this requires a lot of labelled data using standard training methods, we +combine three different low-data regime techniques - transfer learning, data +augmentation, and few-shot learning - to train a high-quality classifier from +very few labelled instances. We evaluated our approach using a novel dataset +derived from the Microsoft Exchange Server data breach of 2021 which was +labelled by three experts. Our findings reveal an increase in F1 score of more +than 21 points compared to standard training methods and more than 18 points +compared to a state-of-the-art method in few-shot learning. Furthermore, the +classifier trained with this method and 32 instances is only less than 5 F1 +score points worse than a classifier trained with 1800 instances. +" +Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning,Tianxiang Sun,http://arxiv.org/pdf/2210.07565v3.pdf,2022-10-14,['cs.cl'],2210.07565v3.pdf," Prompt tuning is a parameter-efficient approach to adapting pre-trained +language models to downstream tasks. Although prompt tuning has been shown to +match the performance of full model tuning when training data is sufficient, it +tends to struggle in few-shot learning settings. In this paper, we present +Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot +learning. MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks. +On downstream tasks, the pre-trained prompts are selectively activated and +combined, leading to strong compositional generalization to unseen tasks. To +bridge the gap between pre-training and fine-tuning, we formulate upstream and +downstream tasks into a unified machine reading comprehension task. Extensive +experiments under two learning paradigms, i.e., gradient descent and black-box +tuning, show that MP2 significantly outperforms prompt tuning, full model +tuning, and prior prompt pre-training methods in few-shot settings. In +addition, we demonstrate that MP2 can achieve surprisingly fast and strong +adaptation to downstream tasks by merely learning 8 parameters to combine the +pre-trained modular prompts. +" +Few-shot Classification with Hypersphere Modeling of Prototypes,Ning Ding,http://arxiv.org/pdf/2211.05319v1.pdf,2022-11-10,"['cs.lg', 'cs.cl', 'cs.cv']",2211.05319v1.pdf," Metric-based meta-learning is one of the de facto standards in few-shot +learning. It composes of representation learning and metrics calculation +designs. Previous works construct class representations in different ways, +varying from mean output embedding to covariance and distributions. 
However, +using embeddings in space lacks expressivity and cannot capture class +information robustly, while statistical complex modeling poses difficulty to +metric designs. In this work, we use tensor fields (``areas'') to model classes +from the geometrical perspective for few-shot learning. We present a simple and +effective method, dubbed hypersphere prototypes (HyperProto), where class +information is represented by hyperspheres with dynamic sizes with two sets of +learnable parameters: the hypersphere's center and the radius. Extending from +points to areas, hyperspheres are much more expressive than embeddings. +Moreover, it is more convenient to perform metric-based classification with +hypersphere prototypes than statistical modeling, as we only need to calculate +the distance from a data point to the surface of the hypersphere. Following +this idea, we also develop two variants of prototypes under other measurements. +Extensive experiments and analysis on few-shot learning tasks across NLP and CV +and comparison with 20+ competitive baselines demonstrate the effectiveness of +our approach. +" +StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning,Yuqian Fu,http://arxiv.org/pdf/2302.09309v2.pdf,2023-02-18,['cs.cv'],2302.09309v2.pdf," Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that +tackles few-shot learning across different domains. It aims at transferring +prior knowledge learned on the source dataset to novel target datasets. The +CD-FSL task is especially challenged by the huge domain gap between different +datasets. Critically, such a domain gap actually comes from the changes of +visual styles, and wave-SAN empirically shows that spanning the style +distribution of the source data helps alleviate this issue. However, wave-SAN +simply swaps styles of two images. Such a vanilla operation makes the generated +styles ``real'' and ``easy'', which still fall into the original set of the +source styles. Thus, inspired by vanilla adversarial learning, a novel +model-agnostic meta Style Adversarial training (StyleAdv) method together with +a novel style adversarial attack method is proposed for CD-FSL. Particularly, +our style attack method synthesizes both ``virtual'' and ``hard'' adversarial +styles for model training. This is achieved by perturbing the original style +with the signed style gradients. By continually attacking styles and forcing +the model to recognize these challenging adversarial styles, our model is +gradually robust to the visual styles, thus boosting the generalization ability +for novel target datasets. Besides the typical CNN-based backbone, we also +employ our StyleAdv method on large-scale pretrained vision transformer. +Extensive experiments conducted on eight various target datasets show the +effectiveness of our method. Whether built upon ResNet or ViT, we achieve the +new state of the art for CD-FSL. Code is available at +https://github.com/lovelyqian/StyleAdv-CDFSL. +" +Few-Shot Learning with Visual Distribution Calibration and Cross-Modal Distribution Alignment,Runqi Wang,http://arxiv.org/pdf/2305.11439v1.pdf,2023-05-19,['cs.cv'],2305.11439v1.pdf," Pre-trained vision-language models have inspired much research on few-shot +learning. However, with only a few training images, there exist two crucial +problems: (1) the visual feature distributions are easily distracted by +class-irrelevant information in images, and (2) the alignment between the +visual and language feature distributions is difficult. 
To deal with the +distraction problem, we propose a Selective Attack module, which consists of +trainable adapters that generate spatial attention maps of images to guide the +attacks on class-irrelevant image areas. By messing up these areas, the +critical features are captured and the visual distributions of image features +are calibrated. To better align the visual and language feature distributions +that describe the same object class, we propose a cross-modal distribution +alignment module, in which we introduce a vision-language prototype for each +class to align the distributions, and adopt the Earth Mover's Distance (EMD) to +optimize the prototypes. For efficient computation, the upper bound of EMD is +derived. In addition, we propose an augmentation strategy to increase the +diversity of the images and the text prompts, which can reduce overfitting to +the few-shot training images. Extensive experiments on 11 datasets demonstrate +that our method consistently outperforms prior arts in few-shot learning. The +implementation code will be available at https://github.com/bhrqw/SADA. +" +Federated Few-shot Learning for Cough Classification with Edge Devices,Ngan Dao Hoang,http://arxiv.org/pdf/2309.01076v1.pdf,2023-09-03,"['cs.lg', 'cs.sd', 'eess.as']",2309.01076v1.pdf," Automatically classifying cough sounds is one of the most critical tasks for +the diagnosis and treatment of respiratory diseases. However, collecting a huge +amount of labeled cough dataset is challenging mainly due to high laborious +expenses, data scarcity, and privacy concerns. In this work, our aim is to +develop a framework that can effectively perform cough classification even in +situations when enormous cough data is not available, while also addressing +privacy concerns. Specifically, we formulate a new problem to tackle these +challenges and adopt few-shot learning and federated learning to design a novel +framework, termed F2LCough, for solving the newly formulated problem. We +illustrate the superiority of our method compared with other approaches on +COVID-19 Thermal Face & Cough dataset, in which F2LCough achieves an average +F1-Score of 86%. Our results show the feasibility of few-shot learning combined +with federated learning to build a classification model of cough sounds. This +new methodology is able to classify cough sounds in data-scarce situations and +maintain privacy properties. The outcomes of this work can be a fundamental +framework for building support systems for the detection and diagnosis of +cough-related diseases. +" +Few-Shot Bot: Prompt-Based Learning for Dialogue Systems,Andrea Madotto,http://arxiv.org/pdf/2110.08118v1.pdf,2021-10-15,"['cs.cl', 'cs.ai']",2110.08118v1.pdf," Learning to converse using only a few examples is a great challenge in +conversational AI. The current best conversational models, which are either +good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL), +are language models (LMs) fine-tuned on large conversational datasets. Training +these models is expensive, both in terms of computational resources and time, +and it is hard to keep them up to date with new conversational skills. A simple +yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020) +which does not require gradient-based fine-tuning but instead uses a few +examples in the LM context as the only source of learning. In this paper, we +explore prompt-based few-shot learning in dialogue tasks. 
We benchmark LMs of +different sizes in nine response generation tasks, which include four +knowledge-grounded tasks, a task-oriented generations task, three open-chat +tasks, and controlled stylistic generation, and five conversational parsing +tasks, which include dialogue state tracking, graph path generation, persona +information extraction, document retrieval, and internet query generation. The +current largest released LM (GPT-J-6B) using prompt-based few-shot learning, +and thus requiring no training, achieves competitive performance to fully +trained state-of-the-art models. Moreover, we propose a novel prompt-based +few-shot classifier, that also does not require any fine-tuning, to select the +most appropriate prompt given a dialogue history. Finally, by combining the +power of prompt-based few-shot learning and a Skill Selector, we create an +end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects +the most appropriate conversational skill, queries different knowledge bases or +the internet, and uses the retrieved knowledge to generate a human-like +response, all using only few dialogue examples per skill. +" +"A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level",Iddo Drori,http://arxiv.org/pdf/2112.15594v4.pdf,2021-12-31,"['cs.lg', 'cs.ai']",2112.15594v4.pdf," We demonstrate that a neural network pre-trained on text and fine-tuned on +code solves mathematics course problems, explains solutions, and generates new +questions at a human level. We automatically synthesize programs using few-shot +learning and OpenAI's Codex transformer and execute them to solve course +problems at 81% automatic accuracy. We curate a new dataset of questions from +MIT's largest mathematics courses (Single Variable and Multivariable Calculus, +Differential Equations, Introduction to Probability and Statistics, Linear +Algebra, and Mathematics for Computer Science) and Columbia University's +Computational Linear Algebra. We solve questions from a MATH dataset (on +Prealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number +Theory, and Precalculus), the latest benchmark of advanced mathematics problems +designed to assess mathematical reasoning. We randomly sample questions and +generate solutions with multiple modalities, including numbers, equations, and +plots. The latest GPT-3 language model pre-trained on text automatically solves +only 18.8% of these university questions using zero-shot learning and 30.8% +using few-shot learning and the most recent chain of thought prompting. In +contrast, program synthesis with few-shot learning using Codex fine-tuned on +code generates programs that automatically solve 81% of these questions. Our +approach improves the previous state-of-the-art automatic solution accuracy on +the benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate the +quality and difficulty of generated questions. This work is the first to +automatically solve university-level mathematics course questions at a human +level and the first work to explain and generate university-level mathematics +course questions at scale, a milestone for higher education. 
+" +Is Support Set Diversity Necessary for Meta-Learning?,Amrith Setlur,http://arxiv.org/pdf/2011.14048v2.pdf,2020-11-28,"['cs.lg', 'stat.ml']",2011.14048v2.pdf," Meta-learning is a popular framework for learning with limited data in which +an algorithm is produced by training over multiple few-shot learning tasks. For +classification problems, these tasks are typically constructed by sampling a +small number of support and query examples from a subset of the classes. While +conventional wisdom is that task diversity should improve the performance of +meta-learning, in this work we find evidence to the contrary: we propose a +modification to traditional meta-learning approaches in which we keep the +support sets fixed across tasks, thus reducing task diversity. Surprisingly, we +find that not only does this modification not result in adverse effects, it +almost always improves the performance for a variety of datasets and +meta-learning methods. We also provide several initial analyses to understand +this phenomenon. Our work serves to: (i) more closely investigate the effect of +support set construction for the problem of meta-learning, and (ii) suggest a +simple, general, and competitive baseline for few-shot learning. +" +Detecting Hate Speech with GPT-3,Ke-Li Chiu,http://arxiv.org/pdf/2103.12407v4.pdf,2021-03-23,['cs.cl'],2103.12407v4.pdf," Sophisticated language models such as OpenAI's GPT-3 can generate hateful +text that targets marginalized groups. Given this capacity, we are interested +in whether large language models can be used to identify hate speech and +classify text as sexist or racist. We use GPT-3 to identify sexist and racist +text passages with zero-, one-, and few-shot learning. We find that with zero- +and one-shot learning, GPT-3 can identify sexist or racist text with an average +accuracy between 55 per cent and 67 per cent, depending on the category of text +and type of learning. With few-shot learning, the model's accuracy can be as +high as 85 per cent. Large language models have a role to play in hate speech +detection, and with further development they could eventually be used to +counter hate speech. +" +CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP,Qinyuan Ye,http://arxiv.org/pdf/2104.08835v2.pdf,2021-04-18,"['cs.cl', 'cs.lg']",2104.08835v2.pdf," Humans can learn a new language task efficiently with only few examples, by +leveraging their knowledge obtained when learning prior tasks. In this paper, +we explore whether and how such cross-task generalization ability can be +acquired, and further applied to build better few-shot learners across diverse +NLP tasks. We introduce CrossFit, a problem setup for studying cross-task +generalization ability, which standardizes seen/unseen task partitions, data +access during different learning stages, and the evaluation protocols. To +instantiate different seen/unseen task partitions in CrossFit and facilitate +in-depth analysis, we present the NLP Few-shot Gym, a repository of 160 diverse +few-shot NLP tasks created from open-access NLP datasets and converted to a +unified text-to-text format. Our analysis reveals that the few-shot learning +ability on unseen tasks can be improved via an upstream learning stage using a +set of seen tasks. We also observe that the selection of upstream learning +tasks can significantly influence few-shot performance on unseen tasks, asking +further analysis on task similarity and transferability. 
+" +Entailment as Few-Shot Learner,Sinong Wang,http://arxiv.org/pdf/2104.14690v1.pdf,2021-04-29,"['cs.cl', 'cs.ai']",2104.14690v1.pdf," Large pre-trained language models (LMs) have demonstrated remarkable ability +as few-shot learners. However, their success hinges largely on scaling model +parameters to a degree that makes it challenging to train and serve. In this +paper, we propose a new approach, named as EFL, that can turn small LMs into +better few-shot learners. The key idea of this approach is to reformulate +potential NLP task into an entailment one, and then fine-tune the model with as +little as 8 examples. We further demonstrate our proposed method can be: (i) +naturally combined with an unsupervised contrastive learning-based data +augmentation method; (ii) easily extended to multilingual few-shot learning. A +systematic evaluation on 18 standard NLP tasks demonstrates that this approach +improves the various existing SOTA few-shot learning methods by 12\%, and +yields competitive few-shot performance with 500 times larger models, such as +GPT-3. +" +True Few-Shot Learning with Language Models,Ethan Perez,http://arxiv.org/pdf/2105.11447v1.pdf,2021-05-24,"['cs.cl', 'cs.lg', 'stat.ml']",2105.11447v1.pdf," Pretrained language models (LMs) perform well on many tasks even when +learning from a few examples, but prior work uses many held-out examples to +tune various aspects of learning, such as hyperparameters, training objectives, +and natural language templates (""prompts""). Here, we evaluate the few-shot +ability of LMs when such held-out examples are unavailable, a setting we call +true few-shot learning. We test two model selection criteria, cross-validation +and minimum description length, for choosing LM prompts and hyperparameters in +the true few-shot setting. On average, both marginally outperform random +selection and greatly underperform selection based on held-out examples. +Moreover, selection criteria often prefer models that perform significantly +worse than randomly-selected ones. We find similar results even when taking +into account our uncertainty in a model's true performance during selection, as +well as when varying the amount of computation and number of examples used for +selection. Overall, our findings suggest that prior work significantly +overestimated the true few-shot ability of LMs given the difficulty of few-shot +model selection. +" +"Generate, Annotate, and Learn: NLP with Synthetic Text",Xuanli He,http://arxiv.org/pdf/2106.06168v3.pdf,2021-06-11,['cs.lg'],2106.06168v3.pdf," This paper studies the use of language models as a source of synthetic +unlabeled text for NLP. We formulate a general framework called ``generate, +annotate, and learn (GAL)'' to take advantage of synthetic text within +knowledge distillation, self-training, and few-shot learning applications. To +generate high-quality task-specific text, we either fine-tune LMs on inputs +from the task of interest, or prompt large LMs with few examples. We use the +best available classifier to annotate synthetic text with soft pseudo labels +for knowledge distillation and self-training, and use LMs to obtain hard labels +for few-shot learning. We train new supervised models on the combination of +labeled and pseudo-labeled data, which results in significant gains across +several applications. We investigate key components of GAL and present +theoretical and empirical arguments against the use of class-conditional LMs to +generate synthetic labeled text instead of unlabeled text. 
GAL achieves new +state-of-the-art knowledge distillation results for 6-layer transformers on the +GLUE leaderboard. +" +Multimodal Few-Shot Learning with Frozen Language Models,Maria Tsimpoukelli,http://arxiv.org/pdf/2106.13884v2.pdf,2021-06-25,"['cs.cv', 'cs.cl', 'cs.lg']",2106.13884v2.pdf," When trained at sufficient scale, auto-regressive language models exhibit the +notable ability to learn a new language task after being prompted with just a +few examples. Here, we present a simple, yet effective, approach for +transferring this few-shot learning ability to a multimodal setting (vision and +language). Using aligned image and caption data, we train a vision encoder to +represent each image as a sequence of continuous embeddings, such that a +pre-trained, frozen language model prompted with this prefix generates the +appropriate caption. The resulting system is a multimodal few-shot learner, +with the surprising ability to learn a variety of new tasks when conditioned on +examples, represented as a sequence of multiple interleaved image and text +embeddings. We demonstrate that it can rapidly learn words for new objects and +novel visual categories, do visual question-answering with only a handful of +examples, and make use of outside knowledge, by measuring a single model on a +variety of established and new benchmarks. +" +Revisiting Self-Training for Few-Shot Learning of Language Model,Yiming Chen,http://arxiv.org/pdf/2110.01256v1.pdf,2021-10-04,['cs.cl'],2110.01256v1.pdf," As unlabeled data carry rich task-relevant information, they are proven +useful for few-shot learning of language model. The question is how to +effectively make use of such data. In this work, we revisit the self-training +technique for language model fine-tuning and present a state-of-the-art +prompt-based few-shot learner, SFLM. Given two views of a text sample via weak +and strong augmentation techniques, SFLM generates a pseudo label on the weakly +augmented version. Then, the model predicts the same pseudo label when +fine-tuned with the strongly augmented version. This simple approach is shown +to outperform other state-of-the-art supervised and semi-supervised +counterparts on six sentence classification and six sentence-pair +classification benchmarking tasks. In addition, SFLM only relies on a few +in-domain unlabeled data. We conduct a comprehensive analysis to demonstrate +the robustness of our proposed approach under various settings, including +augmentation techniques, model scale, and few-shot knowledge transfer across +tasks. +" +In-Context Learning for Few-Shot Dialogue State Tracking,Yushi Hu,http://arxiv.org/pdf/2203.08568v3.pdf,2022-03-16,['cs.cl'],2203.08568v3.pdf," Collecting and annotating task-oriented dialogues is time-consuming and +costly; thus, zero and few shot learning could greatly benefit dialogue state +tracking (DST). In this work, we propose an in-context learning (ICL) framework +for zero-shot and few-shot learning DST, where a large pre-trained language +model (LM) takes a test instance and a few exemplars as input, and directly +decodes the dialogue state without any parameter updates. To better leverage a +tabular domain description in the LM prompt, we reformulate DST into a +text-to-SQL problem. We also propose a novel approach to retrieve annotated +dialogues as exemplars. Empirical results on MultiWOZ show that our method +IC-DST substantially outperforms previous fine-tuned state-of-the-art models in +few-shot settings. 
In addition, we test IC-DST in zero-shot settings, in which +the model only takes a fixed task instruction as input, finding that it +outperforms previous zero-shot methods by a large margin. +" +WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models,Heting Gao,http://arxiv.org/pdf/2203.15863v2.pdf,2022-03-29,"['eess.as', 'cs.ai', 'cs.cl']",2203.15863v2.pdf," Large-scale auto-regressive language models pretrained on massive text have +demonstrated their impressive ability to perform new natural language tasks +with only a few text examples, without the need for fine-tuning. Recent studies +further show that such a few-shot learning ability can be extended to the +text-image setting by training an encoder to encode the images into embeddings +functioning like the text embeddings of the language model. Interested in +exploring the possibility of transferring the few-shot learning ability to the +audio-text setting, we propose a novel speech understanding framework, +WavPrompt, where we finetune a wav2vec model to generate a sequence of audio +embeddings understood by the language model. We show that WavPrompt is a +few-shot learner that can perform speech understanding tasks better than a +naive text baseline. We conduct detailed ablation studies on different +components and hyperparameters to empirically identify the best model +configuration. In addition, we conduct a non-speech understanding experiment to +show WavPrompt can extract more information than just the transcriptions. Code +is available at https://github.com/Hertin/WavPrompt +" +Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values,Yejin Bang,http://arxiv.org/pdf/2210.07652v1.pdf,2022-10-14,"['cs.cl', 'cs.ai']",2210.07652v1.pdf," Many NLP classification tasks, such as sexism/racism detection or toxicity +detection, are based on human values. Yet, human values can vary under diverse +cultural conditions. Therefore, we introduce a framework for value-aligned +classification that performs prediction based on explicitly written human +values in the command. Along with the task, we propose a practical approach +that distills value-aligned knowledge from large-scale language models (LLMs) +to construct value-aligned classifiers in two steps. First, we generate +value-aligned training data from LLMs by prompt-based few-shot learning. Next, +we fine-tune smaller classification models with the generated data for the +task. Empirical results show that our VA-Models surpass multiple baselines by +at least 15.56% on the F1-score, including few-shot learning with OPT-175B and +existing text augmentation methods. We suggest that using classifiers with +explicit human value input improves both inclusivity & explainability in AI. +" +Aligning MAGMA by Few-Shot Learning and Finetuning,Jean-Charles Layoun,http://arxiv.org/pdf/2210.14161v1.pdf,2022-10-18,"['cs.cv', 'cs.ai']",2210.14161v1.pdf," The goal of vision-language modeling is to allow models to tie language +understanding with visual inputs. The aim of this paper is to evaluate and +align the Visual Language Model (VLM) called Multimodal Augmentation of +Generative Models through Adapter-based finetuning (MAGMA) with human values. +MAGMA is a VLM that is capable of image captioning and visual +question-answering. We will evaluate its alignment in three different +scenarios. To begin, we assess MAGMA's out-of-the-box alignment through the +checkpoint provided by Hugging Face. 
Then, we measure if few-shot learning +manages to improve the results. Finally, we finetune the model on aligned +examples and evaluate its behavior. +" +GPS: Genetic Prompt Search for Efficient Few-shot Learning,Hanwei Xu,http://arxiv.org/pdf/2210.17041v1.pdf,2022-10-31,['cs.cl'],2210.17041v1.pdf," Prompt-based techniques have demostrated great potential for improving the +few-shot generalization of pretrained language models. However, their +performance heavily relies on the manual design of prompts and thus requires a +lot of human efforts. In this paper, we introduce Genetic Prompt Search (GPS) +to improve few-shot learning with prompts, which utilizes a genetic algorithm +to automatically search for high-performing prompts. GPS is gradient-free and +requires no update of model parameters but only a small validation set. +Experiments on diverse datasets proved the effectiveness of GPS, which +outperforms manual prompts by a large margin of 2.6 points. Our method is also +better than other parameter-efficient tuning methods such as prompt tuning. +" +MEAL: Stable and Active Learning for Few-Shot Prompting,Abdullatif Köksal,http://arxiv.org/pdf/2211.08358v2.pdf,2022-11-15,['cs.cl'],2211.08358v2.pdf," Few-shot classification has made great strides due to foundation models that, +through priming and prompting, are highly effective few-shot learners. However, +this approach has high variance both across different sets of few shots (data +selection) and across different finetuning runs (run variability). This is +problematic not only because it impedes the fair comparison of different +approaches, but especially because it makes few-shot learning too unreliable +for many real-world applications. To alleviate these issues, we make two +contributions for more stable and effective few-shot learning: First, we +propose novel ensembling methods and show that they substantially reduce run +variability. Second, we introduce a new active learning (AL) criterion for data +selection and present the first AL-based approach specifically tailored towards +prompt-based learning. In our experiments, we show that our combined method, +MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), +improves overall performance of prompt-based finetuning by 2.3 points on five +diverse tasks. +" +Few-shot Query-Focused Summarization with Prefix-Merging,Ruifeng Yuan,http://arxiv.org/pdf/2211.16164v1.pdf,2022-11-29,"['cs.cl', 'cs.ai']",2211.16164v1.pdf," Query-focused summarization has been considered as an important extension for +text summarization. It aims to generate a concise highlight for a given query. +Different from text summarization, query-focused summarization has long been +plagued by the problem of lacking high-quality large-scale datasets. In this +paper, we investigate the idea that whether we can integrate and transfer the +knowledge of text summarization and question answering to assist the few-shot +learning in query-focused summarization. Here, we propose prefix-merging, a +prefix-based pretraining strategy for few-shot learning in query-focused +summarization. Drawn inspiration from prefix-tuning, we are allowed to +integrate the task knowledge from text summarization and question answering +into a properly designed prefix and apply the merged prefix to query-focused +summarization. With only a small amount of trainable parameters, prefix-merging +outperforms fine-tuning on query-focused summarization. 
We further discuss the +influence of different prefix designs and propose a visualized explanation for +how prefix-merging works. +" +JASMINE: Arabic GPT Models for Few-Shot Learning,El Moatez Billah Nagoudi,http://arxiv.org/pdf/2212.10755v2.pdf,2022-12-21,['cs.cl'],2212.10755v2.pdf," Scholarship on generative pretraining (GPT) remains acutely Anglocentric, +leaving serious gaps in our understanding of the whole class of autoregressive +models. For example, we have little knowledge about the potential of these +models and their societal impacts in diverse linguistic and cultural settings. +We alleviate this issue for Arabic, a wide collection of languages and +dialectal varieties with more than 400 million population, by introducing +JASMINE. JASMINE is a suite of powerful Arabic autoregressive Transformer +language models ranging in size between 300 million-6.7 billion parameters +pretrained on a large and diverse dataset (~ 235 GB of text). We also carefully +design and release a comprehensive benchmark for both automated and human +evaluation of Arabic autoregressive models, with coverage of potential social +biases, harms, and toxicity. Using our novel benchmark, we evaluate JASMINE +extensively showing powerful performance intrinsically as well as in few-shot +learning on a wide range of NLP tasks. We aim to responsibly release our models +and evaluation benchmark with interested researchers, along with code for +experimenting with them. +" +Log Parsing with Prompt-based Few-shot Learning,Van-Hoang Le,http://arxiv.org/pdf/2302.07435v1.pdf,2023-02-15,['cs.se'],2302.07435v1.pdf," Logs generated by large-scale software systems provide crucial information +for engineers to understand the system status and diagnose problems of the +systems. Log parsing, which converts raw log messages into structured data, is +the first step to enabling automated log analytics. Existing log parsers +extract the common part as log templates using statistical features. However, +these log parsers often fail to identify the correct templates and parameters +because: 1) they often overlook the semantic meaning of log messages, and 2) +they require domain-specific knowledge for different log datasets. To address +the limitations of existing methods, in this paper, we propose LogPPT to +capture the patterns of templates using prompt-based few-shot learning. LogPPT +utilises a novel prompt tuning method to recognise keywords and parameters +based on a few labelled log data. In addition, an adaptive random sampling +algorithm is designed to select a small yet diverse training set. We have +conducted extensive experiments on 16 public log datasets. The experimental +results show that LogPPT is effective and efficient for log parsing. +" +Conversation Style Transfer using Few-Shot Learning,Shamik Roy,http://arxiv.org/pdf/2302.08362v2.pdf,2023-02-16,['cs.cl'],2302.08362v2.pdf," Conventional text style transfer approaches focus on sentence-level style +transfer without considering contextual information, and the style is described +with attributes (e.g., formality). When applying style transfer in +conversations such as task-oriented dialogues, existing approaches suffer from +these limitations as context can play an important role and the style +attributes are often difficult to define in conversations. In this paper, we +introduce conversation style transfer as a few-shot learning problem, where the +model learns to perform style transfer by observing only a few example +dialogues in the target style. 
We propose a novel in-context learning approach +to solve the task with style-free dialogues as a pivot. Human evaluation shows +that by incorporating multi-turn context, the model is able to match the target +style while having better appropriateness and semantic correctness compared to +utterance/sentence-level style transfer. Additionally, we show that +conversation style transfer can also benefit downstream tasks. For example, in +multi-domain intent classification tasks, the F1 scores improve after +transferring the style of training data to match the style of the test data. +" +STUNT: Few-shot Tabular Learning with Self-generated Tasks from Unlabeled Tables,Jaehyun Nam,http://arxiv.org/pdf/2303.00918v1.pdf,2023-03-02,"['cs.lg', 'cs.ai']",2303.00918v1.pdf," Learning with few labeled tabular samples is often an essential requirement +for industrial machine learning applications as varieties of tabular data +suffer from high annotation costs or have difficulties in collecting new +samples for novel tasks. Despite the utter importance, such a problem is quite +under-explored in the field of tabular learning, and existing few-shot learning +schemes from other domains are not straightforward to apply, mainly due to the +heterogeneous characteristics of tabular data. In this paper, we propose a +simple yet effective framework for few-shot semi-supervised tabular learning, +coined Self-generated Tasks from UNlabeled Tables (STUNT). Our key idea is to +self-generate diverse few-shot tasks by treating randomly chosen columns as a +target label. We then employ a meta-learning scheme to learn generalizable +knowledge with the constructed tasks. Moreover, we introduce an unsupervised +validation scheme for hyperparameter search (and early stopping) by generating +a pseudo-validation set using STUNT from unlabeled data. Our experimental +results demonstrate that our simple framework brings significant performance +gain under various tabular few-shot learning benchmarks, compared to prior +semi- and self-supervised baselines. Code is available at +https://github.com/jaehyun513/STUNT. +" +CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained Language Models,Tianhao Li,http://arxiv.org/pdf/2304.10946v1.pdf,2023-04-18,"['cs.cl', 'cs.lg', 'q-bio.bm']",2304.10946v1.pdf," Large pre-trained language models (LLMs) have been shown to have significant +potential in few-shot learning across various fields, even with minimal +training data. However, their ability to generalize to unseen tasks in more +complex fields, such as biology, has yet to be fully evaluated. LLMs can offer +a promising alternative approach for biological inference, particularly in +cases where structured data and sample size are limited, by extracting prior +knowledge from text corpora. Our proposed few-shot learning approach uses LLMs +to predict the synergy of drug pairs in rare tissues that lack structured data +and features. Our experiments, which involved seven rare tissues from different +cancer types, demonstrated that the LLM-based prediction model achieved +significant accuracy with very few or zero samples. Our proposed model, the +CancerGPT (with $\sim$ 124M parameters), was even comparable to the larger +fine-tuned GPT-3 model (with $\sim$ 175B parameters). Our research is the first +to tackle drug pair synergy prediction in rare tissues with limited data. We +are also the first to utilize an LLM-based prediction model for biological +reaction prediction tasks. 
+" +Automated Few-shot Classification with Instruction-Finetuned Language Models,Rami Aly,http://arxiv.org/pdf/2305.12576v2.pdf,2023-05-21,['cs.cl'],2305.12576v2.pdf," A particularly successful class of approaches for few-shot learning combines +language models with prompts -- hand-crafted task descriptions that complement +data samples. However, designing prompts by hand for each task commonly +requires domain knowledge and substantial guesswork. We observe, in the context +of classification tasks, that instruction finetuned language models exhibit +remarkable prompt robustness, and we subsequently propose a simple method to +eliminate the need for handcrafted prompts, named AuT-Few. This approach +consists of (i) a prompt retrieval module that selects suitable task +instructions from the instruction-tuning knowledge base, and (ii) the +generation of two distinct, semantically meaningful, class descriptions and a +selection mechanism via cross-validation. Over $12$ datasets, spanning $8$ +classification tasks, we show that AuT-Few outperforms current state-of-the-art +few-shot learning methods. Moreover, AuT-Few is the best ranking method across +datasets on the RAFT few-shot benchmark. Notably, these results are achieved +without task-specific handcrafted prompts on unseen tasks. +" +Active Learning Principles for In-Context Learning with Large Language Models,Katerina Margatina,http://arxiv.org/pdf/2305.14264v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14264v1.pdf," The remarkable advancements in large language models (LLMs) have +significantly enhanced the performance in few-shot learning settings. By using +only a small number of labeled examples, referred to as demonstrations, LLMs +can effectively grasp the task at hand through in-context learning. However, +the process of selecting appropriate demonstrations has received limited +attention in prior work. This paper addresses the issue of identifying the most +informative demonstrations for few-shot learning by approaching it as a +pool-based Active Learning (AL) problem over a single iteration. Our objective +is to investigate how AL algorithms can serve as effective demonstration +selection methods for in-context learning. We compare various standard AL +algorithms based on uncertainty, diversity, and similarity, and consistently +observe that the latter outperforms all other methods, including random +sampling. Notably, uncertainty sampling, despite its success in conventional +supervised learning scenarios, performs poorly in this context. Our extensive +experimentation involving a diverse range of GPT and OPT models across $24$ +classification and multi-choice tasks, coupled with thorough analysis, +unambiguously demonstrates that in-context example selection through AL +prioritizes high-quality examples that exhibit low uncertainty and bear +similarity to the test examples. +" +Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts,Mohna Chakraborty,http://arxiv.org/pdf/2305.15689v2.pdf,2023-05-25,"['cs.cl', 'cs.ai']",2305.15689v2.pdf," Recent studies have demonstrated that natural-language prompts can help to +leverage the knowledge learned by pre-trained language models for the binary +sentence-level sentiment classification task. Specifically, these methods +utilize few-shot learning settings to fine-tune the sentiment classification +model using manual or automatically generated prompts. However, the performance +of these methods is sensitive to the perturbations of the utilized prompts. 
+Furthermore, these methods depend on a few labeled instances for automatic +prompt generation and prompt ranking. This study aims to find high-quality +prompts for the given task in a zero-shot setting. Given a base prompt, our +proposed approach automatically generates multiple prompts similar to the base +prompt employing positional, reasoning, and paraphrasing techniques and then +ranks the prompts using a novel metric. We empirically demonstrate that the +top-ranked prompts are high-quality and significantly outperform the base +prompt and the prompts generated using few-shot learning for the binary +sentence-level sentiment classification task. +" +FLamE: Few-shot Learning from Natural Language Explanations,Yangqiaoyu Zhou,http://arxiv.org/pdf/2306.08042v1.pdf,2023-06-13,"['cs.cl', 'cs.ai']",2306.08042v1.pdf," Natural language explanations have the potential to provide rich information +that in principle guides model reasoning. Yet, recent work by Lampinen et al. +(2022) has shown limited utility of natural language explanations in improving +classification. To effectively learn from explanations, we present FLamE, a +two-stage few-shot learning framework that first generates explanations using +GPT-3, and then finetunes a smaller model (e.g., RoBERTa) with generated +explanations. Our experiments on natural language inference demonstrate +effectiveness over strong baselines, increasing accuracy by 17.6% over GPT-3 +Babbage and 5.7% over GPT-3 Davinci in e-SNLI. Despite improving classification +performance, human evaluation surprisingly reveals that the majority of +generated explanations does not adequately justify classification decisions. +Additional analyses point to the important role of label-specific cues (e.g., +""not know"" for the neutral label) in generated explanations. +" +Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners,Jihyeon Lee,http://arxiv.org/pdf/2307.14856v1.pdf,2023-07-27,"['cs.cl', 'cs.ai']",2307.14856v1.pdf," In-context learning, which offers substantial advantages over fine-tuning, is +predominantly observed in decoder-only models, while encoder-decoder (i.e., +seq2seq) models excel in methods that rely on weight updates. Recently, a few +studies have demonstrated the feasibility of few-shot learning with seq2seq +models; however, this has been limited to tasks that align well with the +seq2seq architecture, such as summarization and translation. Inspired by these +initial studies, we provide a first-ever extensive experiment comparing the +in-context few-shot learning capabilities of decoder-only and encoder-decoder +models on a broad range of tasks. Furthermore, we propose two methods to more +effectively elicit in-context learning ability in seq2seq models: +objective-aligned prompting and a fusion-based approach. Remarkably, our +approach outperforms a decoder-only model that is six times larger and exhibits +significant performance improvements compared to conventional seq2seq models +across a variety of settings. We posit that, with the right configuration and +prompt design, seq2seq models can be highly effective few-shot learners for a +wide spectrum of applications. +" +Prototypes-oriented Transductive Few-shot Learning with Conditional Transport,Long Tian,http://arxiv.org/pdf/2308.03047v1.pdf,2023-08-06,['cs.cv'],2308.03047v1.pdf," Transductive Few-Shot Learning (TFSL) has recently attracted increasing +attention since it typically outperforms its inductive peer by leveraging +statistics of query samples. 
However, previous TFSL methods usually encode +uniform prior that all the classes within query samples are equally likely, +which is biased in imbalanced TFSL and causes severe performance degradation. + Given this pivotal issue, in this work, we propose a novel Conditional +Transport (CT) based imbalanced TFSL model called {\textbf P}rototypes-oriented +{\textbf U}nbiased {\textbf T}ransfer {\textbf M}odel (PUTM) to fully exploit +unbiased statistics of imbalanced query samples, which employs forward and +backward navigators as transport matrices to balance the prior of query samples +per class between uniform and adaptive data-driven distributions. For +efficiently transferring statistics learned by CT, we further derive a closed +form solution to refine prototypes based on MAP given the learned navigators. +The above two steps of discovering and transferring unbiased statistics follow +an iterative manner, formulating our EM-based solver. + Experimental results on four standard benchmarks including miniImageNet, +tieredImageNet, CUB, and CIFAR-FS demonstrate superiority of our model in +class-imbalanced generalization. +" +Approximating Human-Like Few-shot Learning with GPT-based Compression,Cynthia Huang,http://arxiv.org/pdf/2308.06942v1.pdf,2023-08-14,"['cs.ai', 'cs.cl', 'cs.it', 'math.it']",2308.06942v1.pdf," In this work, we conceptualize the learning process as information +compression. We seek to equip generative pre-trained models with human-like +learning capabilities that enable data compression during inference. We present +a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to +approximate Kolmogorov complexity, with the aim of estimating the optimal +Information Distance for few-shot learning. We first propose using GPT as a +prior for lossless text compression, achieving a noteworthy compression ratio. +Experiment with LLAMA2-7B backbone achieves a compression ratio of 15.5 on +enwik9. We justify the pre-training objective of GPT models by demonstrating +its equivalence to the compression length, and, consequently, its ability to +approximate the information distance for texts. Leveraging the approximated +information distance, our method allows the direct application of GPT models in +quantitative text similarity measurements. Experiment results show that our +method overall achieves superior performance compared to embedding and prompt +baselines on challenging NLP tasks, including semantic similarity, zero and +one-shot text classification, and zero-shot text ranking. +" +COCA: Classifier-Oriented Calibration for Source-Free Universal Domain Adaptation via Textual Prototype,Xinghong Liu,http://arxiv.org/pdf/2308.10450v1.pdf,2023-08-21,['cs.cv'],2308.10450v1.pdf," Universal Domain Adaptation (UniDA) aims to distinguish common and private +classes between the source and target domains where domain shift exists. +Recently, due to more stringent data restrictions, researchers have introduced +Source-Free UniDA (SF-UniDA) in more realistic scenarios. SF-UniDA methods +eliminate the need for direct access to source samples when performing +adaptation to the target domain. However, existing SF-UniDA methods still +require an extensive quantity of labeled source samples to train a source +model, resulting in significant labeling costs. To tackle this issue, we +present a novel Classifier-Oriented Calibration (COCA) method. This method, +which leverages textual prototypes, is formulated for the source model based on +few-shot learning. 
Specifically, we propose studying few-shot learning, usually +explored for closed-set scenarios, to identify common and domain-private +classes despite a significant domain shift between source and target domains. +Essentially, we present a novel paradigm based on the vision-language model to +learn SF-UniDA and hugely reduce the labeling costs on the source domain. +Experimental results demonstrate that our approach outperforms state-of-the-art +UniDA and SF-UniDA models. +" +Evaluating the Decency and Consistency of Data Validation Tests Generated by LLMs,Rohan Alexander,http://arxiv.org/pdf/2310.01402v1.pdf,2023-10-02,['stat.me'],2310.01402v1.pdf," We investigated the potential of large language models (LLMs) in developing +dataset validation tests. We carried out 96 experiments each for both GPT-3.5 +and GPT-4, examining different prompt scenarios, learning modes, temperature +settings, and roles. The prompt scenarios were: 1) Asking for expectations, 2) +Asking for expectations with a given context, 3) Asking for expectations after +requesting a simulation, and 4) Asking for expectations with a provided data +sample. For learning modes, we tested: 1) zero-shot, 2) one-shot, and 3) +few-shot learning. We also tested four temperature settings: 0, 0.4, 0.6, and +1. Furthermore, two distinct roles were considered: 1) ""helpful assistant"", 2) +""expert data scientist"". To gauge consistency, every setup was tested five +times. The LLM-generated responses were benchmarked against a gold standard +suite, created by an experienced data scientist knowledgeable about the data in +question. We find there are considerable returns to the use of few-shot +learning, and that the more explicit the data setting can be the better. The +best LLM configurations complement, rather than substitute, the gold standard +results. This study underscores the value LLMs can bring to the data cleaning +and preparation stages of the data science workflow. +" +Improving generalization in large language models by learning prefix subspaces,Louis Falissard,http://arxiv.org/pdf/2310.15793v1.pdf,2023-10-24,"['cs.lg', 'cs.ai', 'cs.cl']",2310.15793v1.pdf," This article focuses on large language models (LLMs) fine-tuning in the +scarce data regime (also known as the ""few-shot"" learning setting). We propose +a method to increase the generalization capabilities of LLMs based on neural +network subspaces. This optimization method, recently introduced in computer +vision, aims to improve model generalization by identifying wider local optima +through the joint optimization of an entire simplex of models in parameter +space. Its adaptation to massive, pretrained transformers, however, poses some +challenges. First, their considerable number of parameters makes it difficult +to train several models jointly, and second, their deterministic parameter +initialization schemes make them unfit for the subspace method as originally +proposed. We show in this paper that ""Parameter Efficient Fine-Tuning"" (PEFT) +methods, however, are perfectly compatible with this original approach, and +propose to learn entire simplex of continuous prefixes. We test our method on a +variant of the GLUE benchmark adapted to the few-shot learning setting, and +show that both our contributions jointly lead to a gain in average performances +compared to sota methods. 
The implementation can be found at the following +link: https://github.com/Liloulou/prefix_subspace +" +Zero-shot and Few-shot Learning with Knowledge Graphs: A Comprehensive Survey,Jiaoyan Chen,http://arxiv.org/pdf/2112.10006v6.pdf,2021-12-18,"['cs.lg', 'cs.ai']",2112.10006v6.pdf," Machine learning especially deep neural networks have achieved great success +but many of them often rely on a number of labeled samples for supervision. As +sufficient labeled training data are not always ready due to e.g., continuously +emerging prediction targets and costly sample annotation in real world +applications, machine learning with sample shortage is now being widely +investigated. Among all these studies, many prefer to utilize auxiliary +information including those in the form of Knowledge Graph (KG) to reduce the +reliance on labeled samples. In this survey, we have comprehensively reviewed +over 90 papers about KG-aware research for two major sample shortage settings +-- zero-shot learning (ZSL) where some classes to be predicted have no labeled +samples, and few-shot learning (FSL) where some classes to be predicted have +only a small number of labeled samples that are available. We first introduce +KGs used in ZSL and FSL as well as their construction methods, and then +systematically categorize and summarize KG-aware ZSL and FSL methods, dividing +them into different paradigms such as the mapping-based, the data augmentation, +the propagation-based and the optimization-based. We next present different +applications, including not only KG augmented prediction tasks such as image +classification, question answering, text classification and knowledge +extraction, but also KG completion tasks, and some typical evaluation resources +for each task. We eventually discuss some challenges and open problems from +different perspectives. +" +Few-shot Learning with Multilingual Language Models,Xi Victoria Lin,http://arxiv.org/pdf/2112.10668v3.pdf,2021-12-20,"['cs.cl', 'cs.ai']",2112.10668v3.pdf," Large-scale generative language models such as GPT-3 are competitive few-shot +learners. While these models are known to be able to jointly represent many +different languages, their training data is dominated by English, potentially +limiting their cross-lingual generalization. In this work, we train +multilingual generative language models on a corpus covering a diverse set of +languages, and study their few- and zero-shot learning capabilities in a wide +range of tasks. Our largest model with 7.5 billion parameters sets new state of +the art in few-shot learning in more than 20 representative languages, +outperforming GPT-3 of comparable size in multilingual commonsense reasoning +(with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in +4-shot settings) and natural language inference (+5.4% in each of 0-shot and +4-shot settings). On the FLORES-101 machine translation benchmark, our model +outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while +surpassing the official supervised baseline in 45 directions. We conduct an +in-depth analysis of different multilingual prompting approaches, showing in +particular that strong few-shot learning performance across languages can be +achieved via cross-lingual transfer through both templates and demonstration +examples. Finally, we evaluate our models in social value tasks such as hate +speech detection in five languages and find it has limitations similar to +comparable sized GPT-3 models. 
+" +Flamingo: a Visual Language Model for Few-Shot Learning,Jean-Baptiste Alayrac,http://arxiv.org/pdf/2204.14198v2.pdf,2022-04-29,"['cs.cv', 'cs.ai', 'cs.lg']",2204.14198v2.pdf," Building models that can be rapidly adapted to novel tasks using only a +handful of annotated examples is an open challenge for multimodal machine +learning research. We introduce Flamingo, a family of Visual Language Models +(VLM) with this ability. We propose key architectural innovations to: (i) +bridge powerful pretrained vision-only and language-only models, (ii) handle +sequences of arbitrarily interleaved visual and textual data, and (iii) +seamlessly ingest images or videos as inputs. Thanks to their flexibility, +Flamingo models can be trained on large-scale multimodal web corpora containing +arbitrarily interleaved text and images, which is key to endow them with +in-context few-shot learning capabilities. We perform a thorough evaluation of +our models, exploring and measuring their ability to rapidly adapt to a variety +of image and video tasks. These include open-ended tasks such as visual +question-answering, where the model is prompted with a question which it has to +answer; captioning tasks, which evaluate the ability to describe a scene or an +event; and close-ended tasks such as multiple-choice visual question-answering. +For tasks lying anywhere on this spectrum, a single Flamingo model can achieve +a new state of the art with few-shot learning, simply by prompting the model +with task-specific examples. On numerous benchmarks, Flamingo outperforms +models fine-tuned on thousands of times more task-specific data. +" +"Code Generation Tools (Almost) for Free? A Study of Few-Shot, Pre-Trained Language Models on Code",Patrick Bareiß,http://arxiv.org/pdf/2206.01335v2.pdf,2022-06-02,"['cs.se', 'cs.lg']",2206.01335v2.pdf," Few-shot learning with large-scale, pre-trained language models is a powerful +way to answer questions about code, e.g., how to complete a given code example, +or even generate code snippets from scratch. The success of these models raises +the question whether they could serve as a basis for building a wide range code +generation tools. Traditionally, such tools are built manually and separately +for each task. Instead, few-shot learning may allow to obtain different tools +from a single pre-trained language model by simply providing a few examples or +a natural language description of the expected tool behavior. This paper +studies to what extent a state-of-the-art, pre-trained language model of code, +Codex, may serve this purpose. We consider three code manipulation and code +generation tasks targeted by a range of traditional tools: (i) code mutation; +(ii) test oracle generation from natural language documentation; and (iii) test +case generation. For each task, we compare few-shot learning to a manually +built tool. Our results show that the model-based tools complement (code +mutation), are on par (test oracle generation), or even outperform their +respective traditionally built tool (test case generation), while imposing far +less effort to develop them. By comparing the effectiveness of different +variants of the model-based tools, we provide insights on how to design an +appropriate input (""prompt"") to the model and what influence the size of the +model has. For example, we find that providing a small natural language +description of the code generation task is an easy way to improve predictions. 
+Overall, we conclude that few-shot language models are surprisingly effective, +yet there is still more work to be done, such as exploring more diverse ways of +prompting and tackling even more involved tasks. +" +From Human Days to Machine Seconds: Automatically Answering and Generating Machine Learning Final Exams,Iddo Drori,http://arxiv.org/pdf/2206.05442v7.pdf,2022-06-11,['cs.lg'],2206.05442v7.pdf," A final exam in machine learning at a top institution such as MIT, Harvard, +or Cornell typically takes faculty days to write, and students hours to solve. +We demonstrate that large language models pass machine learning finals at a +human level, on finals available online after the models were trained, and +automatically generate new human-quality final exam questions in seconds. +Previous work has developed program synthesis and few-shot learning methods to +solve university-level problem set questions in mathematics and STEM courses. +In this work, we develop and compare methods that solve final exams, which +differ from problem sets in several ways: the questions are longer, have +multiple parts, are more complicated, and span a broader set of topics. We +curate a dataset and benchmark of questions from machine learning final exams +available online and code for answering these questions and generating new +questions. We show how to generate new questions from other questions and +course notes. For reproducibility and future research on this final exam +benchmark, we use automatic checkers for multiple-choice, numeric, and +questions with expression answers. We perform ablation studies comparing +zero-shot learning with few-shot learning and chain-of-thought prompting using +GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that +few-shot learning methods perform best. We highlight the transformative +potential of language models to streamline the writing and solution of +large-scale assessments, significantly reducing the workload from human days to +mere machine seconds. Our results suggest that rather than banning large +language models such as ChatGPT in class, instructors should teach students to +harness them by asking students meta-questions about correctness, completeness, +and originality of the responses generated, encouraging critical thinking in +academic studies. +" +Model Tuning or Prompt Tuning? A Study of Large Language Models for Clinical Concept and Relation Extraction,Cheng Peng,http://arxiv.org/pdf/2310.06239v1.pdf,2023-10-10,"['cs.cl', 'cs.ai']",2310.06239v1.pdf," Objective To develop soft prompt-based learning algorithms for large language +models (LLMs), examine the shape of prompts, prompt-tuning using +frozen/unfrozen LLMs, transfer learning, and few-shot learning abilities. +Methods We developed a soft prompt-based LLM model and compared 4 training +strategies including (1) fine-tuning without prompts; (2) hard-prompt with +unfrozen LLMs; (3) soft-prompt with unfrozen LLMs; and (4) soft-prompt with +frozen LLMs. We evaluated 7 pretrained LLMs using the 4 training strategies for +clinical concept and relation extraction on two benchmark datasets. We +evaluated the transfer learning ability of the prompt-based learning algorithms +in a cross-institution setting. We also assessed the few-shot learning ability. 
+Results and Conclusion When LLMs are unfrozen, GatorTron-3.9B with soft +prompting achieves the best strict F1-scores of 0.9118 and 0.8604 for concept +extraction, outperforming the traditional fine-tuning and hard prompt-based +models by 0.6~3.1% and 1.2~2.9%, respectively; GatorTron-345M with soft +prompting achieves the best F1-scores of 0.8332 and 0.7488 for end-to-end +relation extraction, outperforming the other two models by 0.2~2% and +0.6~11.7%, respectively. When LLMs are frozen, small (i.e., 345 million +parameters) LLMs have a big gap to be competitive with unfrozen models; scaling +LLMs up to billions of parameters makes frozen LLMs competitive with unfrozen +LLMs. For cross-institute evaluation, soft prompting with a frozen +GatorTron-8.9B model achieved the best performance. This study demonstrates +that (1) machines can learn soft prompts better than humans, (2) frozen LLMs +have better few-shot learning ability and transfer learning ability to +facilitate muti-institution applications, and (3) frozen LLMs require large +models. +" +On Unifying Misinformation Detection,Nayeon Lee,http://arxiv.org/pdf/2104.05243v1.pdf,2021-04-12,"['cs.ai', 'cs.cl']",2104.05243v1.pdf," In this paper, we introduce UnifiedM2, a general-purpose misinformation model +that jointly models multiple domains of misinformation with a single, unified +setup. The model is trained to handle four tasks: detecting news bias, +clickbait, fake news, and verifying rumors. By grouping these tasks together, +UnifiedM2learns a richer representation of misinformation, which leads to +state-of-the-art or comparable performance across all tasks. Furthermore, we +demonstrate that UnifiedM2's learned representation is helpful for few-shot +learning of unseen misinformation tasks/datasets and model's generalizability +to unseen events. +" +Discrete and Soft Prompting for Multilingual Models,Mengjie Zhao,http://arxiv.org/pdf/2109.03630v1.pdf,2021-09-08,['cs.cl'],2109.03630v1.pdf," It has been shown for English that discrete and soft prompting perform +strongly in few-shot learning with pretrained language models (PLMs). In this +paper, we show that discrete and soft prompting perform better than finetuning +in multilingual cases: Crosslingual transfer and in-language training of +multilingual natural language inference. For example, with 48 English training +examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely +surpassing the majority baseline (33.33%). In contrast, discrete and soft +prompting outperform finetuning, achieving 36.43% and 38.79%. We also +demonstrate good performance of prompting with training data in multiple +languages other than English. +" +Cedille: A large autoregressive French language model,Martin Müller,http://arxiv.org/pdf/2202.03371v1.pdf,2022-02-07,"['cs.cl', '68t50', 'i.2.7']",2202.03371v1.pdf," Scaling up the size and training of autoregressive language models has +enabled novel ways of solving Natural Language Processing tasks using zero-shot +and few-shot learning. While extreme-scale language models such as GPT-3 offer +multilingual capabilities, zero-shot learning for languages other than English +remain largely unexplored. Here, we introduce Cedille, a large open source +auto-regressive language model, specifically trained for the French language. +Our results show that Cedille outperforms existing French language models and +is competitive with GPT-3 on a range of French zero-shot benchmarks. 
+Furthermore, we provide an in-depth comparison of the toxicity exhibited by +these models, showing that Cedille marks an improvement in language model +safety thanks to dataset filtering. +" +Human in the loop: How to effectively create coherent topics by manually labeling only a few documents per class,Anton Thielmann,http://arxiv.org/pdf/2212.09422v1.pdf,2022-12-19,['cs.cl'],2212.09422v1.pdf," Few-shot methods for accurate modeling under sparse label-settings have +improved significantly. However, the applications of few-shot modeling in +natural language processing remain solely in the field of document +classification. With recent performance improvements, supervised few-shot +methods, combined with a simple topic extraction method pose a significant +challenge to unsupervised topic modeling methods. Our research shows that +supervised few-shot learning, combined with a simple topic extraction method, +can outperform unsupervised topic modeling techniques in terms of generating +coherent topics, even when only a few labeled documents per class are used. +" +Sentence Simplification via Large Language Models,Yutao Feng,http://arxiv.org/pdf/2302.11957v1.pdf,2023-02-23,"['cs.cl', 'cs.ai']",2302.11957v1.pdf," Sentence Simplification aims to rephrase complex sentences into simpler +sentences while retaining original meaning. Large Language models (LLMs) have +demonstrated the ability to perform a variety of natural language processing +tasks. However, it is not yet known whether LLMs can be served as a +high-quality sentence simplification system. In this work, we empirically +analyze the zero-/few-shot learning ability of LLMs by evaluating them on a +number of benchmark test sets. Experimental results show LLMs outperform +state-of-the-art sentence simplification methods, and are judged to be on a par +with human annotators. +" +NeuroCLIP: Neuromorphic Data Understanding by CLIP and SNN,Yufei Guo,http://arxiv.org/pdf/2306.12073v1.pdf,2023-06-21,['cs.cv'],2306.12073v1.pdf," Recently, the neuromorphic vision sensor has received more and more interest. +However, the neuromorphic data consists of asynchronous event spikes, which is +not natural and difficult to construct a benchmark, thus limiting the +neuromorphic data understanding for ""unseen"" objects by deep learning. +Zero-shot and few-shot learning via Contrastive Vision-Language Pre-training +(CLIP) have shown inspirational performance in 2D frame image recognition. To +handle ""unseen"" recognition for the neuromorphic data, in this paper, we +propose NeuroCLIP, which transfers the CLIP's 2D pre-trained knowledge to event +spikes. To improve the few-shot performance, we also provide an inter-timestep +adapter based on a spiking neural network. Our code is open-sourced at +https://github.com/yfguo91/NeuroCLIP.git. +" +Leveraging Few-Shot Data Augmentation and Waterfall Prompting for Response Generation,Lea Krause,http://arxiv.org/pdf/2308.01080v1.pdf,2023-08-02,['cs.cl'],2308.01080v1.pdf," This paper discusses our approaches for task-oriented conversational +modelling using subjective knowledge, with a particular emphasis on response +generation. Our methodology was shaped by an extensive data analysis that +evaluated key factors such as response length, sentiment, and dialogue acts +present in the provided dataset. 
We used few-shot learning to augment the data +with newly generated subjective knowledge items and present three approaches +for DSTC11: (1) task-specific model exploration, (2) incorporation of the most +frequent question into all generated responses, and (3) a waterfall prompting +technique using a combination of both GPT-3 and ChatGPT. +" +Making Pre-trained Language Models Better Few-shot Learners,Tianyu Gao,http://arxiv.org/pdf/2012.15723v2.pdf,2020-12-31,"['cs.cl', 'cs.lg']",2012.15723v2.pdf," The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot +performance solely by leveraging a natural-language prompt and a few task +demonstrations as input context. Inspired by their findings, we study few-shot +learning in a more practical scenario, where we use smaller language models for +which fine-tuning is computationally efficient. We present LM-BFF--better +few-shot fine-tuning of language models--a suite of simple and complementary +techniques for fine-tuning language models on a small number of annotated +examples. Our approach includes (1) prompt-based fine-tuning together with a +novel pipeline for automating prompt generation; and (2) a refined strategy for +dynamically and selectively incorporating demonstrations into each context. +Finally, we present a systematic evaluation for analyzing few-shot performance +on a range of NLP tasks, including classification and regression. Our +experiments demonstrate that our methods combine to dramatically outperform +standard fine-tuning procedures in this low resource setting, achieving up to +30% absolute improvement, and 11% on average across all tasks. Our approach +makes minimal assumptions on task resources and domain expertise, and hence +constitutes a strong task-agnostic method for few-shot learning. +" +GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain,Milad Moradi,http://arxiv.org/pdf/2109.02555v2.pdf,2021-09-06,"['cs.cl', 'cs.ai', 'cs.lg']",2109.02555v2.pdf," Deep neural language models have set new breakthroughs in many tasks of +Natural Language Processing (NLP). Recent work has shown that deep transformer +language models (pretrained on large amounts of texts) can achieve high levels +of task-specific few-shot performance comparable to state-of-the-art models. +However, the ability of these large language models in few-shot transfer +learning has not yet been explored in the biomedical domain. We investigated +the performance of two powerful transformer language models, i.e. GPT-3 and +BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental +results showed that, to a great extent, both the models underperform a language +model fine-tuned on the full training data. Although GPT-3 had already achieved +near state-of-the-art results in few-shot knowledge transfer on open-domain NLP +tasks, it could not perform as effectively as BioBERT, which is orders of +magnitude smaller than GPT-3. Regarding that BioBERT was already pretrained on +large biomedical text corpora, our study suggests that language models may +largely benefit from in-domain pretraining in task-specific few-shot learning. +However, in-domain pretraining seems not to be sufficient; novel pretraining +and few-shot learning strategies are required in the biomedical NLP domain. 
+" +PPT: Pre-trained Prompt Tuning for Few-shot Learning,Yuxian Gu,http://arxiv.org/pdf/2109.04332v3.pdf,2021-09-09,['cs.cl'],2109.04332v3.pdf," Prompts for pre-trained language models (PLMs) have shown remarkable +performance by bridging the gap between pre-training tasks and various +downstream tasks. Among these methods, prompt tuning, which freezes PLMs and +only tunes soft prompts, provides an efficient and effective solution for +adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to +be fully explored. In our pilot experiments, we find that prompt tuning +performs comparably with conventional full-model fine-tuning when downstream +data are sufficient, whereas it performs much worse under few-shot learning +settings, which may hinder the application of prompt tuning in practice. We +attribute this low performance to the manner of initializing soft prompts. +Therefore, in this work, we propose to pre-train prompts by adding soft prompts +into the pre-training stage to obtain a better initialization. We name this +Pre-trained Prompt Tuning framework ""PPT"". To ensure the generalization of PPT, +we formulate similar classification tasks into a unified task form and +pre-train soft prompts for this unified task. Extensive experiments show that +tuning pre-trained prompts for downstream tasks can reach or even outperform +full-model fine-tuning under both full-data and few-shot settings. Our approach +is effective and efficient for using large-scale PLMs in practice. +" +Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning,Shaohua Wu,http://arxiv.org/pdf/2110.04725v2.pdf,2021-10-10,"['cs.cl', 'cs.ai']",2110.04725v2.pdf," Recent work like GPT-3 has demonstrated excellent performance of Zero-Shot +and Few-Shot learning on many natural language processing (NLP) tasks by +scaling up model size, dataset size and the amount of computation. However, +training a model like GPT-3 requires huge amount of computational resources +which makes it challengeable to researchers. In this work, we propose a method +that incorporates large-scale distributed training performance into model +architecture design. With this method, Yuan 1.0, the current largest singleton +language model with 245B parameters, achieves excellent performance on +thousands GPUs during training, and the state-of-the-art results on NLP tasks. +A data processing method is designed to efficiently filter massive amount of +raw data. The current largest high-quality Chinese corpus with 5TB high quality +texts is built based on this method. In addition, a calibration and label +expansion method is proposed to improve the Zero-Shot and Few-Shot performance, +and steady improvement is observed on the accuracy of various tasks. Yuan 1.0 +presents strong capacity of natural language generation, and the generated +articles are difficult to distinguish from the human-written ones. +" +LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners,Yaqing Wang,http://arxiv.org/pdf/2110.06274v2.pdf,2021-10-12,['cs.cl'],2110.06274v2.pdf," We present a new method LiST is short for Lite Prompted Self-Training for +parameter-efficient fine-tuning of large pre-trained language models (PLMs) for +few-shot learning. LiST improves over recent methods that adopt prompt-based +fine-tuning (FN) using two key techniques. The first is the use of +self-training to leverage large amounts of unlabeled data for prompt-based FN +in few-shot settings. 
We use self-training in conjunction with meta-learning +for re-weighting noisy pseudo-prompt labels. Self-training is expensive as it +requires updating all the model parameters repetitively. Therefore, we use a +second technique for light-weight fine-tuning where we introduce a small number +of task-specific parameters that are fine-tuned during self-training while +keeping the PLM encoder frozen. Our experiments show that LiST can effectively +leverage unlabeled data to improve the model performance for few-shot learning. +Additionally, the fine-tuning is efficient as it only updates a small +percentage of parameters and the overall model footprint is reduced since +several tasks can share a common PLM encoder as backbone. A comprehensive study +on six NLU tasks demonstrate LiST to improve by 35% over classic fine-tuning +and 6% over prompt-based FN with 96% reduction in number of trainable +parameters when fine-tuned with no more than 30 labeled examples from each +task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context +learning by 33% on few-shot NLU tasks. +" +PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models,Rabeeh Karimi Mahabadi,http://arxiv.org/pdf/2204.01172v2.pdf,2022-04-03,['cs.cl'],2204.01172v2.pdf," Current methods for few-shot fine-tuning of pretrained masked language models +(PLMs) require carefully engineered prompts and verbalizers for each new task +to convert examples into a cloze-format that the PLM can score. In this work, +we propose PERFECT, a simple and efficient method for few-shot fine-tuning of +PLMs without relying on any such handcrafting, which is highly effective given +as few as 32 data points. PERFECT makes two key design choices: First, we show +that manually engineered task prompts can be replaced with task-specific +adapters that enable sample-efficient fine-tuning and reduce memory and storage +costs by roughly factors of 5 and 100, respectively. Second, instead of using +handcrafted verbalizers, we learn new multi-token label embeddings during +fine-tuning, which are not tied to the model vocabulary and which allow us to +avoid complex auto-regressive decoding. These embeddings are not only learnable +from limited data but also enable nearly 100x faster training and inference. +Experiments on a wide range of few-shot NLP tasks demonstrate that PERFECT, +while being simple and efficient, also outperforms existing state-of-the-art +few-shot learning methods. Our code is publicly available at +https://github.com/facebookresearch/perfect.git. +" +On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model,Seongjin Shin,http://arxiv.org/pdf/2204.13509v2.pdf,2022-04-28,['cs.cl'],2204.13509v2.pdf," Many recent studies on large-scale language models have reported successful +in-context zero- and few-shot learning ability. However, the in-depth analysis +of when in-context learning occurs is still lacking. For example, it is unknown +how in-context learning performance changes as the training corpus varies. +Here, we investigate the effects of the source and size of the pretraining +corpus on in-context learning in HyperCLOVA, a Korean-centric GPT-3 model. 
From +our in-depth investigation, we introduce the following observations: (1) +in-context learning performance heavily depends on the corpus domain source, +and the size of the pretraining corpus does not necessarily determine the +emergence of in-context learning, (2) in-context learning ability can emerge +when a language model is trained on a combination of multiple corpora, even +when each corpus does not result in in-context learning on its own, (3) +pretraining with a corpus related to a downstream task does not always +guarantee the competitive in-context learning performance of the downstream +task, especially in the few-shot setting, and (4) the relationship between +language modeling (measured in perplexity) and in-context learning does not +always correlate: e.g., low perplexity does not always imply high in-context +few-shot learning performance. +" +Few-Shot Stance Detection via Target-Aware Prompt Distillation,Yan Jiang,http://arxiv.org/pdf/2206.13214v1.pdf,2022-06-27,['cs.cl'],2206.13214v1.pdf," Stance detection aims to identify whether the author of a text is in favor +of, against, or neutral to a given target. The main challenge of this task +comes two-fold: few-shot learning resulting from the varying targets and the +lack of contextual information of the targets. Existing works mainly focus on +solving the second issue by designing attention-based models or introducing +noisy external knowledge, while the first issue remains under-explored. In this +paper, inspired by the potential capability of pre-trained language models +(PLMs) serving as knowledge bases and few-shot learners, we propose to +introduce prompt-based fine-tuning for stance detection. PLMs can provide +essential contextual information for the targets and enable few-shot learning +via prompts. Considering the crucial role of the target in stance detection +task, we design target-aware prompts and propose a novel verbalizer. Instead of +mapping each label to a concrete word, our verbalizer maps each label to a +vector and picks the label that best captures the correlation between the +stance and the target. Moreover, to alleviate the possible defect of dealing +with varying targets with a single hand-crafted prompt, we propose to distill +the information learned from multiple prompts. Experimental results show the +superior performance of our proposed model in both full-data and few-shot +scenarios. +" +Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks,David Oniani,http://arxiv.org/pdf/2208.14923v2.pdf,2022-08-31,['cs.cl'],2208.14923v2.pdf," Clinical Natural Language Processing (NLP) has become an emerging technology +in healthcare that leverages a large amount of free-text data in electronic +health records (EHRs) to improve patient care, support clinical decisions, and +facilitate clinical and translational science research. Recently, deep learning +has achieved state-of-the-art performance in many clinical NLP tasks. However, +training deep learning models usually requires large annotated datasets, which +are normally not publicly available and can be time-consuming to build in +clinical domains. Working with smaller annotated datasets is typical in +clinical NLP and therefore, ensuring that deep learning models perform well is +crucial for the models to be used in real-world applications. 
A widely adopted +approach is fine-tuning existing Pre-trained Language Models (PLMs), but these +attempts fall short when the training dataset contains only a few annotated +samples. Few-Shot Learning (FSL) has recently been investigated to tackle this +problem. Siamese Neural Network (SNN) has been widely utilized as an FSL +approach in computer vision, but has not been studied well in NLP. Furthermore, +the literature on its applications in clinical domains is scarce. In this +paper, we propose two SNN-based FSL approaches for clinical NLP, including +Pre-Trained SNN (PT-SNN) and SNN with Second-Order Embeddings (SOE-SNN). We +evaluated the proposed approaches on two clinical tasks, namely clinical text +classification and clinical named entity recognition. We tested three few-shot +settings including 4-shot, 8-shot, and 16-shot learning. Both clinical NLP +tasks were benchmarked using three PLMs, including BERT,BioBERT, and +BioClinicalBERT. The experimental results verified the effectiveness of the +proposed SNN-based FSL approaches in both NLP tasks. +" +Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models,Yue Zhang,http://arxiv.org/pdf/2210.10841v1.pdf,2022-10-19,"['cs.cl', 'cs.cv']",2210.10841v1.pdf," Prompt learning is a new learning paradigm which reformulates downstream +tasks as similar pretraining tasks on pretrained models by leveraging textual +prompts. Recent works have demonstrated that prompt learning is particularly +useful for few-shot learning, where there is limited training data. Depending +on the granularity of prompts, those methods can be roughly divided into +task-level prompting and instance-level prompting. Task-level prompting methods +learn one universal prompt for all input samples, which is efficient but +ineffective to capture subtle differences among different classes. +Instance-level prompting methods learn a specific prompt for each input, though +effective but inefficient. In this work, we develop a novel prototype-based +prompt learning method to overcome the above limitations. In particular, we +focus on few-shot image recognition tasks on pretrained vision-language models +(PVLMs) and develop a method of prompting through prototype (PTP), where we +define $K$ image prototypes and $K$ prompt prototypes. In PTP, the image +prototype represents a centroid of a certain image cluster in the latent space +and a prompt prototype is defined as a soft prompt in the continuous space. The +similarity between a query image and an image prototype determines how much +this prediction relies on the corresponding prompt prototype. Hence, in PTP, +similar images will utilize similar prompting ways. Through extensive +experiments on seven real-world benchmarks, we show that PTP is an effective +method to leverage the latent knowledge and adaptive to various PVLMs. +Moreover, through detailed analysis, we discuss pros and cons for prompt +learning and parameter-efficient fine-tuning under the context of few-shot +learning. +" +SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification,Fang Peng,http://arxiv.org/pdf/2211.16191v2.pdf,2022-11-28,"['cs.cv', 'cs.mm']",2211.16191v2.pdf," Although significant progress has been made in few-shot learning, most of +existing few-shot image classification methods require supervised pre-training +on a large amount of samples of base classes, which limits their generalization +ability in real world application. 
Recently, large-scale Vision-Language +Pre-trained models (VLPs) have been gaining increasing attention in few-shot +learning because they can provide a new paradigm for transferable visual +representation learning with easily available text on the Web. However, the +VLPs may neglect detailed visual information that is difficult to describe by +language sentences, but important for learning an effective classifier to +distinguish different images. To address the above problem, we propose a new +framework, named Semantic-guided Visual Adapting (SgVA), which can effectively +extend vision-language pre-trained models to produce discriminative adapted +visual features by comprehensively using an implicit knowledge distillation, a +vision-specific contrastive loss, and a cross-modal contrastive loss. The +implicit knowledge distillation is designed to transfer the fine-grained +cross-modal knowledge to guide the updating of the vision adapter. +State-of-the-art results on 13 datasets demonstrate that the adapted visual +features can well complement the cross-modal features to improve few-shot image +classification. +" +Finetune like you pretrain: Improved finetuning of zero-shot vision models,Sachin Goyal,http://arxiv.org/pdf/2212.00638v1.pdf,2022-12-01,"['cs.cv', 'cs.lg']",2212.00638v1.pdf," Finetuning image-text models such as CLIP achieves state-of-the-art +accuracies on a variety of benchmarks. However, recent works like WiseFT +(Wortsman et al., 2021) and LP-FT (Kumar et al., 2022) have shown that even +subtle differences in the finetuning process can lead to surprisingly large +differences in the final performance, both for in-distribution (ID) and +out-of-distribution (OOD) data. In this work, we show that a natural and simple +approach of mimicking contrastive pretraining consistently outperforms +alternative finetuning approaches. Specifically, we cast downstream class +labels as text prompts and continue optimizing the contrastive loss between +image embeddings and class-descriptive prompt embeddings (contrastive +finetuning). + Our method consistently outperforms baselines across 7 distribution shifts, 6 +transfer learning, and 3 few-shot learning benchmarks. On WILDS-iWILDCam, our +proposed approach FLYP outperforms the top of the leaderboard by $2.3\%$ ID and +$2.7\%$ OOD, giving the highest reported accuracy. Averaged across 7 OOD +datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of +$4.2\%$ OOD over standard finetuning and outperforms the current state of the +art (LP-FT) by more than $1\%$ both ID and OOD. Similarly, on 3 few-shot +learning benchmarks, our approach gives gains up to $4.6\%$ over standard +finetuning and $4.4\%$ over the state of the art. In total, these benchmarks +establish contrastive finetuning as a simple, intuitive, and state-of-the-art +approach for supervised finetuning of image-text models like CLIP. Code is +available at https://github.com/locuslab/FLYP. +" +Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models,Zhiqiu Lin,http://arxiv.org/pdf/2301.06267v4.pdf,2023-01-16,"['cs.cv', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2301.06267v4.pdf," The ability to quickly learn a new task with minimal instruction - known as +few-shot learning - is a central aspect of intelligent agents. Classical +few-shot benchmarks make use of few-shot samples from a single modality, but +such samples may not be sufficient to characterize an entire concept class. 
In +contrast, humans use cross-modal information to learn new concepts efficiently. +In this work, we demonstrate that one can indeed build a better ${\bf visual}$ +dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them +bark. To do so, we exploit the fact that recent multimodal foundation models +such as CLIP are inherently cross-modal, mapping different modalities to the +same representation space. Specifically, we propose a simple cross-modal +adaptation approach that learns from few-shot examples spanning different +modalities. By repurposing class names as additional one-shot training samples, +we achieve SOTA results with an embarrassingly simple linear classifier for +vision-language adaptation. Furthermore, we show that our approach can benefit +existing methods such as prefix tuning, adapters, and classifier ensembling. +Finally, to explore other modalities beyond vision and language, we construct +the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal +training to improve the performance of both image and audio classification. +" +AugGPT: Leveraging ChatGPT for Text Data Augmentation,Haixing Dai,http://arxiv.org/pdf/2302.13007v3.pdf,2023-02-25,"['cs.cl', 'cs.ai', 'cs.lg']",2302.13007v3.pdf," Text data augmentation is an effective strategy for overcoming the challenge +of limited sample sizes in many natural language processing (NLP) tasks. This +challenge is especially prominent in the few-shot learning scenario, where the +data in the target domain is generally much scarcer and of lowered quality. A +natural and widely-used strategy to mitigate such challenges is to perform data +augmentation to better capture the data invariance and increase the sample +size. However, current text data augmentation methods either can't ensure the +correct labeling of the generated data (lacking faithfulness) or can't ensure +sufficient diversity in the generated data (lacking compactness), or both. +Inspired by the recent success of large language models, especially the +development of ChatGPT, which demonstrated improved language comprehension +abilities, in this work, we propose a text data augmentation approach based on +ChatGPT (named AugGPT). AugGPT rephrases each sentence in the training samples +into multiple conceptually similar but semantically different samples. The +augmented samples can then be used in downstream model training. Experiment +results on few-shot learning text classification tasks show the superior +performance of the proposed AugGPT approach over state-of-the-art text data +augmentation methods in terms of testing accuracy and distribution of the +augmented samples. +" +Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning,Ivona Najdenkoska,http://arxiv.org/pdf/2302.14794v1.pdf,2023-02-28,['cs.cv'],2302.14794v1.pdf," Multimodal few-shot learning is challenging due to the large domain gap +between vision and language modalities. Existing methods are trying to +communicate visual concepts as prompts to frozen language models, but rely on +hand-engineered task induction to reduce the hypothesis space. To make the +whole process learnable, we introduce a multimodal meta-learning approach. +Specifically, our approach decomposes the training of the model into a set of +related multimodal few-shot tasks. We define a meta-mapper network, acting as a +meta-learner, to efficiently bridge frozen large-scale vision and language +models and leverage their already learned capacity. 
By updating the learnable +parameters only of the meta-mapper, it learns to accrue shared meta-knowledge +among these tasks. Thus, it can rapidly adapt to newly presented samples with +only a few gradient updates. Importantly, it induces the task in a completely +data-driven manner, with no need for a hand-engineered task induction. We +evaluate our approach on recently proposed multimodal few-shot benchmarks, +measuring how rapidly the model can bind novel visual concepts to words and +answer visual questions by observing only a limited set of labeled examples. +The experimental results show that our meta-learning approach outperforms the +baseline across multiple datasets and various training settings while being +computationally more efficient. +" +Semantic Prompt for Few-Shot Image Recognition,Wentao Chen,http://arxiv.org/pdf/2303.14123v1.pdf,2023-03-24,['cs.cv'],2303.14123v1.pdf," Few-shot learning is a challenging problem since only a few examples are +provided to recognize a new class. Several recent studies exploit additional +semantic information, e.g. text embeddings of class names, to address the issue +of rare samples through combining semantic prototypes with visual prototypes. +However, these methods still suffer from the spurious visual features learned +from the rare support samples, resulting in limited benefits. In this paper, we +propose a novel Semantic Prompt (SP) approach for few-shot learning. Instead of +the naive exploitation of semantic information for remedying classifiers, we +explore leveraging semantic information as prompts to tune the visual feature +extraction network adaptively. Specifically, we design two complementary +mechanisms to insert semantic prompts into the feature extractor: one is to +enable the interaction between semantic prompts and patch embeddings along the +spatial dimension via self-attention, another is to supplement visual features +with the transformed semantic prompts along the channel dimension. By combining +these two mechanisms, the feature extractor presents a better ability to attend +to the class-specific features and obtains more generalized image +representations with merely a few support samples. Through extensive +experiments on four datasets, the proposed approach achieves promising results, +improving the 1-shot learning accuracy by 3.67% on average. +" +RPLKG: Robust Prompt Learning with Knowledge Graph,Yewon Kim,http://arxiv.org/pdf/2304.10805v1.pdf,2023-04-21,"['cs.ai', 'cs.lg']",2304.10805v1.pdf," Large-scale pre-trained models have been known that they are transferable, +and they generalize well on the unseen dataset. Recently, multimodal +pre-trained models such as CLIP show significant performance improvement in +diverse experiments. However, when the labeled dataset is limited, the +generalization of a new dataset or domain is still challenging. To improve the +generalization performance on few-shot learning, there have been diverse +efforts, such as prompt learning and adapter. However, the current few-shot +adaptation methods are not interpretable, and they require a high computation +cost for adaptation. In this study, we propose a new method, robust prompt +learning with knowledge graph (RPLKG). Based on the knowledge graph, we +automatically design diverse interpretable and meaningful prompt sets. Our +model obtains cached embeddings of prompt sets after one forwarding from a +large pre-trained model. After that, model optimizes the prompt selection +processes with GumbelSoftmax. 
In this way, our model is trained using +relatively little memory and learning time. Also, RPLKG selects the optimal +interpretable prompt automatically, depending on the dataset. In summary, RPLKG +is i) interpretable, ii) requires small computation resources, and iii) easy to +incorporate prior human knowledge. To validate the RPLKG, we provide +comprehensive experimental results on few-shot learning, domain generalization +and new class generalization setting. RPLKG shows a significant performance +improvement compared to zero-shot learning and competitive performance against +several prompt learning methods using much lower resources. +" +The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning,Seungone Kim,http://arxiv.org/pdf/2305.14045v2.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14045v2.pdf," Language models (LMs) with less than 100B parameters are known to perform +poorly on chain-of-thought (CoT) reasoning in contrast to large LMs when +solving unseen tasks. In this work, we aim to equip smaller LMs with the +step-by-step reasoning capability by instruction tuning with CoT rationales. In +order to achieve this goal, we first introduce a new instruction-tuning dataset +called the CoT Collection, which augments the existing Flan Collection +(including only 9 CoT tasks) with additional 1.84 million rationales across +1,060 tasks. We show that CoT fine-tuning Flan-T5 (3B & 11B) with CoT +Collection enables smaller LMs to have better CoT capabilities on unseen tasks. +On the BIG-Bench-Hard (BBH) benchmark, we report an average improvement of ++4.34% (Flan-T5 3B) and +2.60% (Flan-T5 11B), in terms of zero-shot task +accuracy. Furthermore, we show that instruction tuning with CoT Collection +allows LMs to possess stronger few-shot learning capabilities on 4 +domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and ++2.37% (Flan-T5 11B), even outperforming ChatGPT utilizing demonstrations until +the max length by a +13.98% margin. Our code, the CoT Collection data, and +model checkpoints are publicly available. +" +Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding,Venkata Prabhakara Sarath Nookala,http://arxiv.org/pdf/2306.11066v2.pdf,2023-06-19,"['cs.cl', 'cs.lg']",2306.11066v2.pdf," State-of-the-art few-shot learning (FSL) methods leverage prompt-based +fine-tuning to obtain remarkable results for natural language understanding +(NLU) tasks. While much of the prior FSL methods focus on improving downstream +task performance, there is a limited understanding of the adversarial +robustness of such methods. In this work, we conduct an extensive study of +several state-of-the-art FSL methods to assess their robustness to adversarial +perturbations. To better understand the impact of various factors towards +robustness (or the lack of it), we evaluate prompt-based FSL methods against +fully fine-tuned models for aspects such as the use of unlabeled data, multiple +prompts, number of few-shot examples, model size and type. Our results on six +GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL +methods lead to a notable relative drop in task performance (i.e., are less +robust) in the face of adversarial perturbations. However, using (i) unlabeled +data for prompt-based FSL and (ii) multiple prompts flip the trend. 
We further +demonstrate that increasing the number of few-shot examples and model size lead +to increased adversarial robustness of vanilla FSL methods. Broadly, our work +sheds light on the adversarial robustness evaluation of prompt-based FSL +methods for NLU tasks. +" +Few-shot Learning for Inference in Medical Imaging with Subspace Feature Representations,Jiahui Liu,http://arxiv.org/pdf/2306.11152v1.pdf,2023-06-19,"['math.na', 'cs.na']",2306.11152v1.pdf," Unlike the field of visual scene recognition where tremendous advances have +taken place due to the availability of very large datasets to train deep neural +networks, inference from medical images is often hampered by the fact that only +small amounts of data may be available. When working with very small dataset +problems, of the order of a few hundred items of data, the power of deep +learning may still be exploited by using a model pre-trained on natural images +as a feature extractor and carrying out classic pattern recognition techniques +in this feature space, the so-called few-shot learning problem. In regimes +where the dimension of this feature space is comparable to or even larger than +the number of items of data, dimensionality reduction is a necessity and is +often achieved by principal component analysis, i.e., singular value +decomposition (SVD). In this paper, noting the inappropriateness of using SVD +for this setting, we usher in and explore two alternatives based on +discriminant analysis and non-negative matrix factorization (NMF). Using 14 +different datasets spanning $11$ distinct disease types, we demonstrate that +discriminant subspaces at low dimensions achieve significant improvements over +SVD-based subspaces and the original feature space. We also show that NMF at +modest dimensions is a competitive alternative to SVD in this setting. +" +Visually grounded few-shot word learning in low-resource settings,Leanne Nortje,http://arxiv.org/pdf/2306.11371v2.pdf,2023-06-20,"['eess.as', 'cs.cl']",2306.11371v2.pdf," We propose a visually grounded speech model that learns new words and their +visual depictions from just a few word-image example pairs. Given a set of test +images and a spoken query, we ask the model which image depicts the query word. +Previous work has simplified this few-shot learning problem by either using an +artificial setting with digit word-image pairs or by using a large number of +examples per class. Moreover, all previous studies were performed using English +speech-image data. We propose an approach that can work on natural word-image +pairs but with less examples, i.e. fewer shots, and then illustrate how this +approach can be applied for multimodal few-shot learning in a real low-resource +language, Yoruba. Our approach involves using the given word-image example +pairs to mine new unsupervised word-image training pairs from large collections +of unlabelledspeech and images. Additionally, we use a word-to-image attention +mechanism to determine word-image similarity. With this new model, we achieve +better performance with fewer shots than previous approaches on an existing +English benchmark. Many of the model's mistakes are due to confusion between +visual concepts co-occurring in similar contexts. The experiments on Yoruba +show the benefit of transferring knowledge from a multimodal model trained on a +larger set of English speech-image data. 
+" +Cross-Modal Concept Learning and Inference for Vision-Language Models,Yi Zhang,http://arxiv.org/pdf/2307.15460v1.pdf,2023-07-28,"['cs.cv', 'cs.cl']",2307.15460v1.pdf," Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP, +establish the correlation between texts and images, achieving remarkable +success on various downstream tasks with fine-tuning. In existing fine-tuning +methods, the class-specific text description is matched against the whole +image. We recognize that this whole image matching is not effective since +images from the same class often contain a set of different semantic objects, +and an object further consists of a set of semantic parts or concepts. +Individual semantic parts or concepts may appear in image samples from +different classes. To address this issue, in this paper, we develop a new +method called cross-model concept learning and inference (CCLI). Using the +powerful text-image correlation capability of CLIP, our method automatically +learns a large set of distinctive visual concepts from images using a set of +semantic text concepts. Based on these visual concepts, we construct a +discriminative representation of images and learn a concept inference network +to perform downstream image classification tasks, such as few-shot learning and +domain generalization. Extensive experimental results demonstrate that our CCLI +method is able to improve the performance upon the current state-of-the-art +methods by large margins, for example, by up to 8.0% improvement on few-shot +learning and by up to 1.3% for domain generalization. +" +Demonstration-based learning for few-shot biomedical named entity recognition under machine reading comprehension,Leilei Su,http://arxiv.org/pdf/2308.06454v1.pdf,2023-08-12,['cs.cl'],2308.06454v1.pdf," Although deep learning techniques have shown significant achievements, they +frequently depend on extensive amounts of hand-labeled data and tend to perform +inadequately in few-shot scenarios. The objective of this study is to devise a +strategy that can improve the model's capability to recognize biomedical +entities in scenarios of few-shot learning. By redefining biomedical named +entity recognition (BioNER) as a machine reading comprehension (MRC) problem, +we propose a demonstration-based learning method to address few-shot BioNER, +which involves constructing appropriate task demonstrations. In assessing our +proposed method, we compared the proposed method with existing advanced methods +using six benchmark datasets, including BC4CHEMD, BC5CDR-Chemical, +BC5CDR-Disease, NCBI-Disease, BC2GM, and JNLPBA. We examined the models' +efficacy by reporting F1 scores from both the 25-shot and 50-shot learning +experiments. In 25-shot learning, we observed 1.1% improvements in the average +F1 scores compared to the baseline method, reaching 61.7%, 84.1%, 69.1%, 70.1%, +50.6%, and 59.9% on six datasets, respectively. In 50-shot learning, we further +improved the average F1 scores by 1.0% compared to the baseline method, +reaching 73.1%, 86.8%, 76.1%, 75.6%, 61.7%, and 65.4%, respectively. We +reported that in the realm of few-shot learning BioNER, MRC-based language +models are much more proficient in recognizing biomedical entities compared to +the sequence labeling approach. Furthermore, our MRC-language models can +compete successfully with fully-supervised learning methodologies that rely +heavily on the availability of abundant annotated data. 
These results highlight +possible pathways for future advancements in few-shot BioNER methodologies. +" +Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models,Yugeng Liu,http://arxiv.org/pdf/2308.07847v1.pdf,2023-08-15,['cs.cr'],2308.07847v1.pdf," Large Language Models (LLMs) have led to significant improvements in many +tasks across various domains, such as code interpretation, response generation, +and ambiguity handling. These LLMs, however, when upgrading, primarily +prioritize enhancing user experience while neglecting security, privacy, and +safety implications. Consequently, unintended vulnerabilities or biases can be +introduced. Previous studies have predominantly focused on specific versions of +the models and disregard the potential emergence of new attack vectors +targeting the updated versions. Through the lens of adversarial examples within +the in-context learning framework, this longitudinal study addresses this gap +by conducting a comprehensive assessment of the robustness of successive +versions of LLMs, vis-\`a-vis GPT-3.5. We conduct extensive experiments to +analyze and understand the impact of the robustness in two distinct learning +categories: zero-shot learning and few-shot learning. Our findings indicate +that, in comparison to earlier versions of LLMs, the updated versions do not +exhibit the anticipated level of robustness against adversarial attacks. In +addition, our study emphasizes the increased effectiveness of synergized +adversarial queries in most zero-shot learning and few-shot learning cases. We +hope that our study can lead to a more refined assessment of the robustness of +LLMs over time and provide valuable insights of these models for both +developers and users. +" +UniAP: Towards Universal Animal Perception in Vision via Few-shot Learning,Meiqi Sun,http://arxiv.org/pdf/2308.09953v1.pdf,2023-08-19,['cs.cv'],2308.09953v1.pdf," Animal visual perception is an important technique for automatically +monitoring animal health, understanding animal behaviors, and assisting +animal-related research. However, it is challenging to design a deep +learning-based perception model that can freely adapt to different animals +across various perception tasks, due to the varying poses of a large diversity +of animals, lacking data on rare species, and the semantic inconsistency of +different tasks. We introduce UniAP, a novel Universal Animal Perception model +that leverages few-shot learning to enable cross-species perception among +various visual tasks. Our proposed model takes support images and labels as +prompt guidance for a query image. Images and labels are processed through a +Transformer-based encoder and a lightweight label encoder, respectively. Then a +matching module is designed for aggregating information between prompt guidance +and the query image, followed by a multi-head label decoder to generate outputs +for various tasks. By capitalizing on the shared visual characteristics among +different animals and tasks, UniAP enables the transfer of knowledge from +well-studied species to those with limited labeled data or even unseen species. +We demonstrate the effectiveness of UniAP through comprehensive experiments in +pose estimation, segmentation, and classification tasks on diverse animal +species, showcasing its ability to generalize and adapt to new classes with +minimal labeled examples. 
+" +PaLM: Scaling Language Modeling with Pathways,Aakanksha Chowdhery,http://arxiv.org/pdf/2204.02311v5.pdf,2022-04-05,['cs.cl'],2204.02311v5.pdf," Large language models have been shown to achieve remarkable performance +across a variety of natural language tasks using few-shot learning, which +drastically reduces the number of task-specific training examples needed to +adapt the model to a particular application. To further our understanding of +the impact of scale on few-shot learning, we trained a 540-billion parameter, +densely activated, Transformer language model, which we call Pathways Language +Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML +system which enables highly efficient training across multiple TPU Pods. We +demonstrate continued benefits of scaling by achieving state-of-the-art +few-shot learning results on hundreds of language understanding and generation +benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough +performance, outperforming the finetuned state-of-the-art on a suite of +multi-step reasoning tasks, and outperforming average human performance on the +recently released BIG-bench benchmark. A significant number of BIG-bench tasks +showed discontinuous improvements from model scale, meaning that performance +steeply increased as we scaled to our largest model. PaLM also has strong +capabilities in multilingual tasks and source code generation, which we +demonstrate on a wide array of benchmarks. We additionally provide a +comprehensive analysis on bias and toxicity, and study the extent of training +data memorization with respect to model scale. Finally, we discuss the ethical +considerations related to large language models and discuss potential +mitigation strategies. +" +Few-Shot Electronic Health Record Coding through Graph Contrastive Learning,Shanshan Wang,http://arxiv.org/pdf/2106.15467v1.pdf,2021-06-29,"['cs.ai', 'cs.cl']",2106.15467v1.pdf," Electronic health record (EHR) coding is the task of assigning ICD codes to +each EHR. Most previous studies either only focus on the frequent ICD codes or +treat rare and frequent ICD codes in the same way. These methods perform well +on frequent ICD codes but due to the extremely unbalanced distribution of ICD +codes, the performance on rare ones is far from satisfactory. We seek to +improve the performance for both frequent and rare ICD codes by using a +contrastive graph-based EHR coding framework, CoGraph, which re-casts EHR +coding as a few-shot learning task. First, we construct a heterogeneous EHR +word-entity (HEWE) graph for each EHR, where the words and entities extracted +from an EHR serve as nodes and the relations between them serve as edges. Then, +CoGraph learns similarities and dissimilarities between HEWE graphs from +different ICD codes so that information can be transferred among them. In a +few-shot learning scenario, the model only has access to frequent ICD codes +during training, which might force it to encode features that are useful for +frequent ICD codes only. To mitigate this risk, CoGraph devises two graph +contrastive learning schemes, GSCL and GECL, that exploit the HEWE graph +structures so as to encode transferable features. GSCL utilizes the +intra-correlation of different sub-graphs sampled from HEWE graphs while GECL +exploits the inter-correlation among HEWE graphs at different clinical stages. 
+Experiments on the MIMIC-III benchmark dataset show that CoGraph significantly +outperforms state-of-the-art methods on EHR coding, not only on frequent ICD +codes, but also on rare codes, in terms of several evaluation indicators. On +frequent ICD codes, GSCL and GECL improve the classification accuracy and F1 by +1.31% and 0.61%, respectively, and on rare ICD codes CoGraph has more obvious +improvements by 2.12% and 2.95%. +" +ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation,Yu Sun,http://arxiv.org/pdf/2107.02137v1.pdf,2021-07-05,['cs.cl'],2107.02137v1.pdf," Pre-trained models have achieved state-of-the-art results in various Natural +Language Processing (NLP) tasks. Recent works such as T5 and GPT-3 have shown +that scaling up pre-trained language models can improve their generalization +abilities. Particularly, the GPT-3 model with 175 billion parameters shows its +strong task-agnostic zero-shot/few-shot learning capabilities. Despite their +success, these large-scale models are trained on plain texts without +introducing knowledge such as linguistic knowledge and world knowledge. In +addition, most large-scale models are trained in an auto-regressive way. As a +result, this kind of traditional fine-tuning approach demonstrates relatively +weak performance when solving downstream language understanding tasks. In order +to solve the above problems, we propose a unified framework named ERNIE 3.0 for +pre-training large-scale knowledge enhanced models. It fuses auto-regressive +network and auto-encoding network, so that the trained model can be easily +tailored for both natural language understanding and generation tasks with +zero-shot learning, few-shot learning or fine-tuning. We trained the model with +10 billion parameters on a 4TB corpus consisting of plain texts and a +large-scale knowledge graph. Empirical results show that the model outperforms +the state-of-the-art models on 54 Chinese NLP tasks, and its English version +achieves the first place on the SuperGLUE benchmark (July 3, 2021), surpassing +the human performance by +0.8% (90.6% vs. 89.8%). +" +UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models,Tianbao Xie,http://arxiv.org/pdf/2201.05966v3.pdf,2022-01-16,['cs.cl'],2201.05966v3.pdf," Structured knowledge grounding (SKG) leverages structured knowledge to +complete user requests, such as semantic parsing over databases and question +answering over knowledge bases. Since the inputs and outputs of SKG tasks are +heterogeneous, they have been studied separately by different communities, +which limits systematic and compatible research on SKG. In this paper, we +overcome this limitation by proposing the UnifiedSKG framework, which unifies +21 SKG tasks into a text-to-text format, aiming to promote systematic SKG +research, instead of being exclusive to a single task, domain, or dataset. We +use UnifiedSKG to benchmark T5 with different sizes and show that T5, with +simple modifications when necessary, achieves state-of-the-art performance on +almost all of the 21 tasks. We further demonstrate that multi-task +prefix-tuning improves the performance on most tasks, largely improving the +overall performance. UnifiedSKG also facilitates the investigation of zero-shot +and few-shot learning, and we show that T0, GPT-3, and Codex struggle in +zero-shot and few-shot learning for SKG. 
We also use UnifiedSKG to conduct a +series of controlled experiments on structured knowledge encoding variants +across SKG tasks. UnifiedSKG is easily extensible to more tasks, and it is +open-sourced at https://github.com/hkunlp/unifiedskg. +" +A Prompt-based Few-shot Learning Approach to Software Conflict Detection,Robert K. Helmeczi,http://arxiv.org/pdf/2211.02709v1.pdf,2022-11-04,['cs.se'],2211.02709v1.pdf," A software requirement specification (SRS) document is an essential part of +the software development life cycle which outlines the requirements that a +software program in development must satisfy. This document is often specified +by a diverse group of stakeholders and is subject to continual change, making +the process of maintaining the document and detecting conflicts between +requirements an essential task in software development. Notably, projects that +do not address conflicts in the SRS document early on face considerable +problems later in the development life cycle. These problems incur substantial +costs in terms of time and money, and these costs often become insurmountable +barriers that ultimately result in the termination of a software project +altogether. As a result, early detection of SRS conflicts is critical to +project sustainability. The conflict detection task is approached in numerous +ways, many of which require a significant amount of manual intervention from +developers, or require access to a large amount of labeled, task-specific +training data. In this work, we propose using a prompt-based learning approach +to perform few-shot learning for conflict detection. We compare our results to +supervised learning approaches that use pretrained language models, such as +BERT and its variants. Our results show that prompting with just 32 labeled +examples can achieve a similar level of performance in many key metrics to that +of supervised learning on training sets that are magnitudes larger in size. In +contrast to many other conflict detection approaches, we make no assumptions +about the type of underlying requirements, allowing us to analyze pairings of +both functional and non-functional requirements. This allows us to omit the +potentially expensive task of filtering out non-functional requirements from +our dataset. +" +"Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing",Tal Schuster,http://arxiv.org/pdf/1902.09492v2.pdf,2019-02-25,"['cs.cl', 'cs.lg']",1902.09492v2.pdf," We introduce a novel method for multilingual transfer that utilizes deep +contextual embeddings, pretrained in an unsupervised fashion. While contextual +embeddings have been shown to yield richer representations of meaning compared +to their static counterparts, aligning them poses a challenge due to their +dynamic nature. To this end, we construct context-independent variants of the +original monolingual spaces and utilize their mapping to derive an alignment +for the context-dependent spaces. This mapping readily supports processing of a +target language, improving transfer by context-aware embeddings. Our +experimental results demonstrate the effectiveness of this approach for +zero-shot and few-shot learning of dependency parsing. Specifically, our method +consistently outperforms the previous state-of-the-art on 6 tested languages, +yielding an improvement of 6.8 LAS points on average. 
+" +Few-shot Natural Language Generation for Task-Oriented Dialog,Baolin Peng,http://arxiv.org/pdf/2002.12328v1.pdf,2020-02-27,['cs.cl'],2002.12328v1.pdf," As a crucial component in task-oriented dialog systems, the Natural Language +Generation (NLG) module converts a dialog act represented in a semantic form +into a response in natural language. The success of traditional template-based +or statistical models typically relies on heavily annotated data, which is +infeasible for new domains. Therefore, it is pivotal for an NLG system to +generalize well with limited labelled data in real applications. To this end, +we present FewShotWoz, the first NLG benchmark to simulate the few-shot +learning setting in task-oriented dialog systems. Further, we develop the +SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to +acquire the controllable generation ability, and fine-tuned with only a few +domain-specific labels to adapt to new domains. Experiments on FewShotWoz and +the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly +outperforms existing methods, measured by various automatic metrics and human +evaluations. +" +Alleviating the Incompatibility between Cross Entropy Loss and Episode Training for Few-shot Skin Disease Classification,Wei Zhu,http://arxiv.org/pdf/2004.09694v1.pdf,2020-04-21,"['eess.iv', 'cs.cv', 'cs.lg']",2004.09694v1.pdf," Skin disease classification from images is crucial to dermatological +diagnosis. However, identifying skin lesions involves a variety of aspects in +terms of size, color, shape, and texture. To make matters worse, many +categories only contain very few samples, posing great challenges to +conventional machine learning algorithms and even human experts. Inspired by +the recent success of Few-Shot Learning (FSL) in natural image classification, +we propose to apply FSL to skin disease identification to address the extreme +scarcity of training sample problem. However, directly applying FSL to this +task does not work well in practice, and we find that the problem can be +largely attributed to the incompatibility between Cross Entropy (CE) and +episode training, which are both commonly used in FSL. Based on a detailed +analysis, we propose the Query-Relative (QR) loss, which proves superior to CE +under episode training and is closely related to recently proposed mutual +information estimation. Moreover, we further strengthen the proposed QR loss +with a novel adaptive hard margin strategy. Comprehensive experiments validate +the effectiveness of the proposed FSL scheme and the possibility to diagnosis +rare skin disease with a few labeled samples. +" +Few-shot learning through contextual data augmentation,Farid Arthaud,http://arxiv.org/pdf/2103.16911v1.pdf,2021-03-31,['cs.cl'],2103.16911v1.pdf," Machine translation (MT) models used in industries with constantly changing +topics, such as translation or news agencies, need to adapt to new data to +maintain their performance over time. Our aim is to teach a pre-trained MT +model to translate previously unseen words accurately, based on very few +examples. We propose (i) an experimental setup allowing us to simulate novel +vocabulary appearing in human-submitted translations, and (ii) corresponding +evaluation metrics to compare our approaches. We extend a data augmentation +approach using a pre-trained language model to create training examples with +similar contexts for novel words. 
We compare different fine-tuning and data +augmentation approaches and show that adaptation on the scale of one to five +examples is possible. Combining data augmentation with randomly selected +training sentences leads to the highest BLEU score and accuracy improvements. +Impressively, with only 1 to 5 examples, our model reports better accuracy +scores than a reference system trained with on average 313 parallel examples. +" +Meta-Learning GNN Initializations for Low-Resource Molecular Property Prediction,Cuong Q. Nguyen,http://arxiv.org/pdf/2003.05996v2.pdf,2020-03-12,"['cs.lg', 'physics.chem-ph', 'stat.ml']",2003.05996v2.pdf," Building in silico models to predict chemical properties and activities is a +crucial step in drug discovery. However, limited labeled data often hinders the +application of deep learning in this setting. Meanwhile advances in +meta-learning have enabled state-of-the-art performances in few-shot learning +benchmarks, naturally prompting the question: Can meta-learning improve deep +learning performance in low-resource drug discovery projects? In this work, we +assess the transferability of graph neural networks initializations learned by +the Model-Agnostic Meta-Learning (MAML) algorithm - and its variants FO-MAML +and ANIL - for chemical properties and activities tasks. Using the ChEMBL20 +dataset to emulate low-resource settings, our benchmark shows that +meta-initializations perform comparably to or outperform multi-task +pre-training baselines on 16 out of 20 in-distribution tasks and on all +out-of-distribution tasks, providing an average improvement in AUPRC of 11.2% +and 26.9% respectively. Finally, we observe that meta-initializations +consistently result in the best performing models across fine-tuning sets with +$k \in \{16, 32, 64, 128, 256\}$ instances. +" +Neural Data Augmentation via Example Extrapolation,Kenton Lee,http://arxiv.org/pdf/2102.01335v1.pdf,2021-02-02,"['cs.cl', 'cs.ai']",2102.01335v1.pdf," In many applications of machine learning, certain categories of examples may +be underrepresented in the training data, causing systems to underperform on +such ""few-shot"" cases at test time. A common remedy is to perform data +augmentation, such as by duplicating underrepresented examples, or +heuristically synthesizing new examples. But these remedies often fail to cover +the full diversity and complexity of real examples. + We propose a data augmentation approach that performs neural Example +Extrapolation (Ex2). Given a handful of exemplars sampled from some +distribution, Ex2 synthesizes new examples that also belong to the same +distribution. The Ex2 model is learned by simulating the example generation +procedure on data-rich slices of the data, and it is applied to +underrepresented, few-shot slices. + We apply Ex2 to a range of language understanding tasks and significantly +improve over state-of-the-art methods on multiple few-shot learning benchmarks, +including for relation extraction (FewRel) and intent classification + slot +filling (SNIPS). +" +One-shot learning for the long term: consolidation with an artificial hippocampal algorithm,Gideon Kowadlo,http://arxiv.org/pdf/2102.07503v2.pdf,2021-02-15,"['cs.lg', 'cs.ai', 'cs.ne', 'i.2.6; i.5.0; i.5.1']",2102.07503v2.pdf," Standard few-shot experiments involve learning to efficiently match +previously unseen samples by class. We claim that few-shot learning should be +long term, assimilating knowledge for the future, without forgetting previous +concepts. 
In the mammalian brain, the hippocampus is understood to play a +significant role in this process, by learning rapidly and consolidating +knowledge to the neocortex incrementally over a short period. In this research +we tested whether an artificial hippocampal algorithm (AHA), could be used with +a conventional Machine Learning (ML) model that learns incrementally analogous +to the neocortex, to achieve one-shot learning both short and long term. The +results demonstrated that with the addition of AHA, the system could learn in +one-shot and consolidate the knowledge for the long term without catastrophic +forgetting. This study is one of the first examples of using a CLS model of +hippocampus to consolidate memories, and it constitutes a step toward few-shot +continual learning. +" +Calibrate Before Use: Improving Few-Shot Performance of Language Models,Tony Z. Zhao,http://arxiv.org/pdf/2102.09690v2.pdf,2021-02-19,"['cs.cl', 'cs.lg']",2102.09690v2.pdf," GPT-3 can perform numerous tasks when provided a natural language prompt that +contains a few training examples. We show that this type of few-shot learning +can be unstable: the choice of prompt format, training examples, and even the +order of the training examples can cause accuracy to vary from near chance to +near state-of-the-art. We demonstrate that this instability arises from the +bias of language models towards predicting certain answers, e.g., those that +are placed near the end of the prompt or are common in the pre-training data. +To mitigate this, we first estimate the model's bias towards each answer by +asking for its prediction when given the training prompt and a content-free +test input such as ""N/A"". We then fit calibration parameters that cause the +prediction for this input to be uniform across answers. On a diverse set of +tasks, this contextual calibration procedure substantially improves GPT-3 and +GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across +different choices of the prompt. +" +The Power of Scale for Parameter-Efficient Prompt Tuning,Brian Lester,http://arxiv.org/pdf/2104.08691v2.pdf,2021-04-18,['cs.cl'],2104.08691v2.pdf," In this work, we explore ""prompt tuning"", a simple yet effective mechanism +for learning ""soft prompts"" to condition frozen language models to perform +specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft +prompts are learned through backpropagation and can be tuned to incorporate +signal from any number of labeled examples. Our end-to-end learned approach +outperforms GPT-3's ""few-shot"" learning by a large margin. More remarkably, +through ablations on model size using T5, we show that prompt tuning becomes +more competitive with scale: as models exceed billions of parameters, our +method ""closes the gap"" and matches the strong performance of model tuning +(where all model weights are tuned). This finding is especially relevant in +that large models are costly to share and serve, and the ability to reuse one +frozen model for multiple downstream tasks can ease this burden. Our method can +be seen as a simplification of the recently proposed ""prefix tuning"" of Li and +Liang (2021), and we provide a comparison to this and other similar approaches. +Finally, we show that conditioning a frozen model with soft prompts confers +benefits in robustness to domain transfer, as compared to full model tuning. +" +What's in a Measurement? 
Using GPT-3 on SemEval 2021 Task 8 -- MeasEval,Curt Kohler,http://arxiv.org/pdf/2106.14720v1.pdf,2021-06-28,['cs.cl'],2106.14720v1.pdf," In the summer of 2020 OpenAI released its GPT-3 autoregressive language model +to much fanfare. While the model has shown promise on tasks in several areas, +it has not always been clear when the results were cherry-picked or when they +were the unvarnished output. We were particularly interested in what benefits +GPT-3 could bring to the SemEval 2021 MeasEval task - identifying measurements +and their associated attributes in scientific literature. We had already +experimented with multi-turn questions answering as a solution to this task. We +wanted to see if we could use GPT-3's few-shot learning capabilities to more +easily develop a solution that would have better performance than our prior +work. Unfortunately, we have not been successful in that effort. This paper +discusses the approach we used, challenges we encountered, and results we +observed. Some of the problems we encountered were simply due to the state of +the art. For example, the limits on the size of the prompt and answer limited +the amount of the training signal that could be offered. Others are more +fundamental. We are unaware of generative models that excel in retaining +factual information. Also, the impact of changes in the prompts is +unpredictable, making it hard to reliably improve performance. +" +FLEX: Unifying Evaluation for Few-Shot NLP,Jonathan Bragg,http://arxiv.org/pdf/2107.07170v2.pdf,2021-07-15,"['cs.cl', 'cs.lg', 'i.2.7']",2107.07170v2.pdf," Few-shot NLP research is highly active, yet conducted in disjoint research +threads with evaluation suites that lack challenging-yet-realistic testing +setups and fail to employ careful experimental design. Consequently, the +community does not know which techniques perform best or even if they +outperform simple baselines. In response, we formulate the FLEX Principles, a +set of requirements and best practices for unified, rigorous, valid, and +cost-sensitive few-shot NLP evaluation. These principles include Sample Size +Design, a novel approach to benchmark design that optimizes statistical +accuracy and precision while keeping evaluation costs manageable. Following the +principles, we release the FLEX benchmark, which includes four few-shot +transfer settings, zero-shot evaluation, and a public leaderboard that covers +diverse NLP tasks. In addition, we present UniFew, a prompt-based model for +few-shot learning that unifies pretraining and finetuning prompt formats, +eschewing complex machinery of recent prompt-based approaches in adapting +downstream task formats to language model pretraining objectives. We +demonstrate that despite simplicity, UniFew achieves results competitive with +both popular meta-learning and prompt-based approaches. +" +Wordcraft: a Human-AI Collaborative Editor for Story Writing,Andy Coenen,http://arxiv.org/pdf/2107.07430v1.pdf,2021-07-15,['cs.cl'],2107.07430v1.pdf," As neural language models grow in effectiveness, they are increasingly being +applied in real-world settings. However these applications tend to be limited +in the modes of interaction they support. In this extended abstract, we propose +Wordcraft, an AI-assisted editor for story writing in which a writer and a +dialog system collaborate to write a story. Our novel interface uses few-shot +learning and the natural affordances of conversation to support a variety of +interactions. 
Our editor provides a sandbox for writers to probe the boundaries +of transformer-based language models and paves the way for future +human-in-the-loop training pipelines and novel evaluation methods. +" +Design of a Graphical User Interface for Few-Shot Machine Learning Classification of Electron Microscopy Data,Christina Doty,http://arxiv.org/pdf/2107.10387v1.pdf,2021-07-21,"['cond-mat.mtrl-sci', 'cs.lg']",2107.10387v1.pdf," The recent growth in data volumes produced by modern electron microscopes +requires rapid, scalable, and flexible approaches to image segmentation and +analysis. Few-shot machine learning, which can richly classify images from a +handful of user-provided examples, is a promising route to high-throughput +analysis. However, current command-line implementations of such approaches can +be slow and unintuitive to use, lacking the real-time feedback necessary to +perform effective classification. Here we report on the development of a +Python-based graphical user interface that enables end users to easily conduct +and visualize the output of few-shot learning models. This interface is +lightweight and can be hosted locally or on the web, providing the opportunity +to reproducibly conduct, share, and crowd-source few-shot analyses. +" +Noisy Channel Language Model Prompting for Few-Shot Text Classification,Sewon Min,http://arxiv.org/pdf/2108.04106v3.pdf,2021-08-09,"['cs.cl', 'cs.ai']",2108.04106v3.pdf," We introduce a noisy channel approach for language model prompting in +few-shot text classification. Instead of computing the likelihood of the label +given the input (referred as direct models), channel models compute the +conditional probability of the input given the label, and are thereby required +to explain every word in the input. We use channel models for recently proposed +few-shot learning methods with no or very limited updates to the language model +parameters, via either in-context demonstration or prompt tuning. Our +experiments show that, for both methods, channel models significantly +outperform their direct counterparts, which we attribute to their stability, +i.e., lower variance and higher worst-case accuracy. We also present extensive +ablations that provide recommendations for when to use channel prompt tuning +instead of other competitive methods (e.g., direct head tuning): channel prompt +tuning is preferred when the number of training examples is small, labels in +the training data are imbalanced, or generalization to unseen labels is +required. +" +FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning,Jing Zhou,http://arxiv.org/pdf/2108.06332v2.pdf,2021-08-13,['cs.cl'],2108.06332v2.pdf," Most previous methods for text data augmentation are limited to simple tasks +and weak baselines. We explore data augmentation on hard tasks (i.e., few-shot +natural language understanding) and strong baselines (i.e., pretrained models +with over one billion parameters). Under this setting, we reproduced a large +number of previous augmentation methods and found that these methods bring +marginal gains at best and sometimes degrade the performance much. To address +this challenge, we propose a novel data augmentation method FlipDA that jointly +uses a generative model and a classifier to generate label-flipped data. +Central to the idea of FlipDA is the discovery that generating label-flipped +data is more crucial to the performance than generating label-preserved data. 
+Experiments show that FlipDA achieves a good tradeoff between effectiveness and +robustness -- it substantially improves many tasks while not negatively +affecting the others. +" +On the Multilingual Capabilities of Very Large-Scale English Language Models,Jordi Armengol-Estapé,http://arxiv.org/pdf/2108.13349v1.pdf,2021-08-30,"['cs.cl', 'cs.ai']",2108.13349v1.pdf," Generative Pre-trained Transformers (GPTs) have recently been scaled to +unprecedented sizes in the history of machine learning. These models, solely +trained on the language modeling objective, have been shown to exhibit +outstanding few-shot learning capabilities in a number of different tasks. +Nevertheless, aside from anecdotal experiences, little is known regarding their +multilingual capabilities, given the fact that the pre-training corpus is +almost entirely composed of English text. In this work, we investigate the +multilingual skills of GPT-3, focusing on one language that barely appears in +the pre-training corpus, Catalan, which makes the results especially +meaningful; we assume that our results may be relevant for other languages as +well. We find that the model shows an outstanding performance, particularly in +generative tasks, with predictable limitations mostly in language understanding +tasks but still with remarkable results given the zero-shot scenario. We +investigate its potential and limits in extractive question-answering and +natural language generation, as well as the effect of scale in terms of model +size. +" +Want To Reduce Labeling Cost? GPT-3 Can Help,Shuohang Wang,http://arxiv.org/pdf/2108.13487v1.pdf,2021-08-30,"['cs.cl', 'cs.ai']",2108.13487v1.pdf," Data annotation is a time-consuming and labor-intensive process for many NLP +tasks. Although there exist various methods to produce pseudo data labels, they +are often task-specific and require a decent amount of labeled data to start +with. Recently, the immense language model GPT-3 with 175 billion parameters +has achieved tremendous improvement across many few-shot learning tasks. In +this paper, we explore ways to leverage GPT-3 as a low-cost data labeler to +train other models. We find that, to make the downstream model achieve the same +performance on a variety of NLU and NLG tasks, it costs 50% to 96% less to use +labels from GPT-3 than using labels from humans. Furthermore, we propose a +novel framework of combining pseudo labels from GPT-3 with human labels, which +leads to even better performance with limited labeling budget. These results +present a cost-effective data labeling methodology that is generalizable to +many practical applications. +" +ConQX: Semantic Expansion of Spoken Queries for Intent Detection based on Conditioned Text Generation,Eyup Halit Yilmaz,http://arxiv.org/pdf/2109.00729v1.pdf,2021-09-02,"['cs.cl', 'cs.ai']",2109.00729v1.pdf," Intent detection of spoken queries is a challenging task due to their noisy +structure and short length. To provide additional information regarding the +query and enhance the performance of intent detection, we propose a method for +semantic expansion of spoken queries, called ConQX, which utilizes the text +generation ability of an auto-regressive language model, GPT-2. To avoid +off-topic text generation, we condition the input query to a structured context +with prompt mining. We then apply zero-shot, one-shot, and few-shot learning. +We lastly use the expanded queries to fine-tune BERT and RoBERTa for intent +detection. 
The experimental results show that the performance of intent +detection can be improved by our semantic expansion method. +" +Do Prompt-Based Models Really Understand the Meaning of their Prompts?,Albert Webson,http://arxiv.org/pdf/2109.01247v2.pdf,2021-09-02,['cs.cl'],2109.01247v2.pdf," Recently, a boom of papers has shown extraordinary progress in zero-shot and +few-shot learning with various prompt-based models. It is commonly argued that +prompts help models to learn faster in the same way that humans learn faster +when provided with task instructions expressed in natural language. In this +study, we experiment with over 30 prompt templates manually written for natural +language inference (NLI). We find that models learn just as fast with many +prompts that are intentionally irrelevant or even pathologically misleading as +they do with instructively ""good"" prompts. Further, such patterns hold even for +models as large as 175 billion parameters (Brown et al., 2020) as well as the +recently proposed instruction-tuned models which are trained on hundreds of +prompts (Sanh et al., 2022). That is, instruction-tuned models often produce +good predictions with irrelevant and misleading prompts even at zero shots. In +sum, notwithstanding prompt-based models' impressive improvement, we find +evidence of serious limitations that question the degree to which such +improvement is derived from models understanding task instructions in ways +analogous to humans' use of task instructions. +" +Learning Opinion Summarizers by Selecting Informative Reviews,Arthur Bražinskas,http://arxiv.org/pdf/2109.04325v1.pdf,2021-09-09,"['cs.cl', 'cs.ai', 'cs.lg']",2109.04325v1.pdf," Opinion summarization has been traditionally approached with unsupervised, +weakly-supervised and few-shot learning techniques. In this work, we collect a +large dataset of summaries paired with user reviews for over 31,000 products, +enabling supervised training. However, the number of reviews per product is +large (320 on average), making summarization - and especially training a +summarizer - impractical. Moreover, the content of many reviews is not +reflected in the human-written summaries, and, thus, the summarizer trained on +random review subsets hallucinates. In order to deal with both of these +challenges, we formulate the task as jointly learning to select informative +subsets of reviews and summarizing the opinions expressed in these subsets. The +choice of the review subset is treated as a latent variable, predicted by a +small and simple selector. The subset is then fed into a more powerful +summarizer. For joint training, we use amortized variational inference and +policy gradient methods. Our experiments demonstrate the importance of +selecting informative reviews resulting in improved quality of summaries and +reduced hallucinations. +" +STraTA: Self-Training with Task Augmentation for Better Few-shot Learning,Tu Vu,http://arxiv.org/pdf/2109.06270v2.pdf,2021-09-13,['cs.cl'],2109.06270v2.pdf," Despite their recent successes in tackling many NLP tasks, large-scale +pre-trained language models do not perform as well in few-shot settings where +only a handful of training examples are available. To address this shortcoming, +we propose STraTA, which stands for Self-Training with Task Augmentation, an +approach that builds on two key ideas for effective leverage of unlabeled data. 
+First, STraTA uses task augmentation, a novel technique that synthesizes a +large amount of data for auxiliary-task fine-tuning from target-task unlabeled +texts. Second, STraTA performs self-training by further fine-tuning the strong +base model created by task augmentation on a broad distribution of +pseudo-labeled data. Our experiments demonstrate that STraTA can substantially +improve sample efficiency across 12 few-shot benchmarks. Remarkably, on the +SST-2 sentiment dataset, STraTA, with only 8 training examples per class, +achieves comparable results to standard fine-tuning with 67K training examples. +Our analyses reveal that task augmentation and self-training are both +complementary and independently effective. +" +Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks,Gaël Guibon,http://arxiv.org/pdf/2109.09366v1.pdf,2021-09-20,"['cs.cl', 'cs.lg']",2109.09366v1.pdf," Several recent studies on dyadic human-human interactions have been done on +conversations without specific business objectives. However, many companies +might benefit from studies dedicated to more precise environments such as after +sales services or customer satisfaction surveys. In this work, we place +ourselves in the scope of a live chat customer service in which we want to +detect emotions and their evolution in the conversation flow. This context +leads to multiple challenges that range from exploiting restricted, small and +mostly unlabeled datasets to finding and adapting methods for such context.We +tackle these challenges by using Few-Shot Learning while making the hypothesis +it can serve conversational emotion classification for different languages and +sparse labels. We contribute by proposing a variation of Prototypical Networks +for sequence labeling in conversation that we name ProtoSeq. We test this +method on two datasets with different languages: daily conversations in English +and customer service chat conversations in French. When applied to emotion +classification in conversations, our method proved to be competitive even when +compared to other ones. +" +UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis,Fatemehsadat Mireshghallah,http://arxiv.org/pdf/2110.00135v2.pdf,2021-10-01,"['cs.lg', 'cs.ai', 'cs.cl']",2110.00135v2.pdf," Global models are trained to be as generalizable as possible, with user +invariance considered desirable since the models are shared across multitudes +of users. As such, these models are often unable to produce personalized +responses for individual users, based on their data. Contrary to widely-used +personalization techniques based on few-shot learning, we propose +UserIdentifier, a novel scheme for training a single shared model for all +users. Our approach produces personalized responses by adding fixed, +non-trainable user identifiers to the input data. We empirically demonstrate +that this proposed method outperforms the prefix-tuning based state-of-the-art +approach by up to 13%, on a suite of sentiment analysis datasets. We also show +that, unlike prior work, this method needs neither any additional model +parameters nor any extra rounds of few-shot fine-tuning. 
+" +Instance-aware Prompt Learning for Language Understanding and Generation,Feihu Jin,http://arxiv.org/pdf/2201.07126v1.pdf,2022-01-18,['cs.cl'],2201.07126v1.pdf," Recently, prompt learning has become a new paradigm to utilize pre-trained +language models (PLMs) and achieves promising results in downstream tasks with +a negligible increase of parameters. The current usage of discrete and +continuous prompts assumes that the prompt is fixed for a specific task and all +samples in the task share the same prompt. However, a task may contain quite +diverse samples in which some are easy and others are difficult, and diverse +prompts are desirable. In this paper, we propose an instance-aware prompt +learning method that learns a different prompt for each instance. Specifically, +we suppose that each learnable prompt token has a different contribution to +different instances, and we learn the contribution by calculating the relevance +score between an instance and each prompt token. The contribution weighted +prompt would be instance aware. We apply our method to both unidirectional and +bidirectional PLMs on both language understanding and generation tasks. +Extensive experiments demonstrate that our method obtains considerable +improvements compared to strong baselines. Especially, our method achieves the +state-of-the-art on the SuperGLUE few-shot learning benchmark. +" +Generating Training Data with Language Models: Towards Zero-Shot Language Understanding,Yu Meng,http://arxiv.org/pdf/2202.04538v2.pdf,2022-02-09,"['cs.cl', 'cs.lg']",2202.04538v2.pdf," Pretrained language models (PLMs) have demonstrated remarkable performance in +various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are +well known for their superior text generation capabilities; bidirectional PLMs +(e.g., BERT) have been the prominent choice for natural language understanding +(NLU) tasks. While both types of models have achieved promising few-shot +learning performance, their potential for zero-shot learning has been +underexplored. In this paper, we present a simple approach that uses both types +of PLMs for fully zero-shot learning of NLU tasks without requiring any +task-specific data: A unidirectional PLM generates class-conditioned texts +guided by prompts, which are used as the training data for fine-tuning a +bidirectional PLM. With quality training data selected based on the generation +probability and regularization techniques (label smoothing and temporal +ensembling) applied to the fine-tuning stage for better generalization and +stability, our approach demonstrates strong performance across seven +classification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and +92.8 on SST-2), significantly outperforming zero-shot prompting methods and +achieving even comparable results to strong few-shot approaches using 32 +training samples per class. +" +Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation,Zhuang Li,http://arxiv.org/pdf/2202.13363v3.pdf,2022-02-27,['cs.cl'],2202.13363v3.pdf," In this paper, we propose a variational autoencoder with disentanglement +priors, VAE-DPRIOR, for task-specific natural language generation with none or +a handful of task-specific labeled examples. In order to tackle compositional +generalization across tasks, our model performs disentangled representation +learning by introducing a conditional prior for the latent content space and +another conditional prior for the latent label space. 
Both types of priors +satisfy a novel property called $\epsilon$-disentangled. We show both +empirically and theoretically that the novel priors can disentangle +representations even without specific regularizations as in the prior work. The +content prior enables directly sampling diverse content representations from +the content space learned from the seen tasks, and fuse them with the +representations of novel tasks for generating semantically diverse texts in the +low-resource settings. Our extensive experiments demonstrate the superior +performance of our model over competitive baselines in terms of i) data +augmentation in continuous zero/few-shot learning, and ii) text style transfer +in the few-shot setting. +" +ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification,Yucheng Zhou,http://arxiv.org/pdf/2203.02225v2.pdf,2022-03-04,['cs.cl'],2203.02225v2.pdf," Generating new events given context with correlated ones plays a crucial role +in many event-centric reasoning tasks. Existing works either limit their scope +to specific scenarios or overlook event-level correlations. In this paper, we +propose to pre-train a general Correlation-aware context-to-Event Transformer +(ClarET) for event-centric reasoning. To achieve this, we propose three novel +event-centric objectives, i.e., whole event recovering, contrastive +event-correlation encoding and prompt-based event locating, which highlight +event-level correlations with effective training. The proposed ClarET is +applicable to a wide range of event-centric reasoning scenarios, considering +its versatility of (i) event-correlation types (e.g., causal, temporal, +contrast), (ii) application formulations (i.e., generation and classification), +and (iii) reasoning types (e.g., abductive, counterfactual and ending +reasoning). Empirical fine-tuning results, as well as zero- and few-shot +learning, on 9 benchmarks (5 generation and 4 classification tasks covering 4 +reasoning types with diverse event correlations), verify its effectiveness and +generalization ability. +" +Pre-trained Token-replaced Detection Model as Few-shot Learner,Zicheng Li,http://arxiv.org/pdf/2203.03235v2.pdf,2022-03-07,"['cs.cl', 'cs.ai']",2203.03235v2.pdf," Pre-trained masked language models have demonstrated remarkable ability as +few-shot learners. In this paper, as an alternative, we propose a novel +approach to few-shot learning with pre-trained token-replaced detection models +like ELECTRA. In this approach, we reformulate a classification or a regression +task as a token-replaced detection problem. Specifically, we first define a +template and label description words for each task and put them into the input +to form a natural language prompt. Then, we employ the pre-trained +token-replaced detection model to predict which label description word is the +most original (i.e., least replaced) among all label description words in the +prompt. A systematic evaluation on 16 datasets demonstrates that our approach +outperforms few-shot learners with pre-trained masked language models in both +one-sentence and two-sentence learning tasks. +" +InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER,Liwen Wang,http://arxiv.org/pdf/2203.03903v1.pdf,2022-03-08,['cs.cl'],2203.03903v1.pdf," Recently, prompt-based methods have achieved significant performance in +few-shot learning scenarios by bridging the gap between language model +pre-training and fine-tuning for downstream tasks. 
However, existing prompt +templates are mostly designed for sentence-level tasks and are inappropriate +for sequence labeling objectives. To address the above issue, we propose a +multi-task instruction-based generative framework, named InstructionNER, for +low-resource named entity recognition. Specifically, we reformulate the NER +task as a generation problem, which enriches source sentences with +task-specific instructions and answer options, then inferences the entities and +types in natural language. We further propose two auxiliary tasks, including +entity extraction and entity typing, which enable the model to capture more +boundary information of entities and deepen the understanding of entity type +semantics, respectively. Experimental results show that our method consistently +outperforms other baselines on five datasets in few-shot settings. +" +Prototypical Verbalizer for Prompt-based Few-shot Tuning,Ganqu Cui,http://arxiv.org/pdf/2203.09770v1.pdf,2022-03-18,"['cs.cl', 'cs.lg']",2203.09770v1.pdf," Prompt-based tuning for pre-trained language models (PLMs) has shown its +effectiveness in few-shot learning. Typically, prompt-based tuning wraps the +input text into a cloze question. To make predictions, the model maps the +output words to labels via a verbalizer, which is either manually designed or +automatically built. However, manual verbalizers heavily depend on +domain-specific prior knowledge and human efforts, while finding appropriate +label words automatically still remains challenging.In this work, we propose +the prototypical verbalizer (ProtoVerb) which is built directly from training +data. Specifically, ProtoVerb learns prototype vectors as verbalizers by +contrastive learning. In this way, the prototypes summarize training instances +and are able to enclose rich class-level semantics. We conduct experiments on +both topic classification and entity typing tasks, and the results demonstrate +that ProtoVerb significantly outperforms current automatic verbalizers, +especially when training data is extremely scarce. More surprisingly, ProtoVerb +consistently boosts prompt-based tuning even on untuned PLMs, indicating an +elegant non-tuning way to utilize PLMs. Our codes are avaliable at +https://github.com/thunlp/OpenPrompt. +" +Few-Shot Learning with Siamese Networks and Label Tuning,Thomas Müller,http://arxiv.org/pdf/2203.14655v2.pdf,2022-03-28,"['cs.cl', 'cs.lg']",2203.14655v2.pdf," We study the problem of building text classifiers with little or no training +data, commonly known as zero and few-shot text classification. In recent years, +an approach based on neural textual entailment models has been found to give +strong results on a diverse range of tasks. In this work, we show that with +proper pre-training, Siamese Networks that embed texts and labels offer a +competitive alternative. These models allow for a large reduction in inference +cost: constant in the number of labels rather than linear. Furthermore, we +introduce label tuning, a simple and computationally efficient approach that +allows to adapt the models in a few-shot setup by only changing the label +embeddings. While giving lower performance than model fine-tuning, this +approach has the architectural advantage that a single encoder can be shared by +many different tasks. +" +Inverse is Better! 
Fast and Accurate Prompt for Few-shot Slot Tagging,Yutai Hou,http://arxiv.org/pdf/2204.00885v1.pdf,2022-04-02,"['cs.cl', 'cs.ai']",2204.00885v1.pdf," Prompting methods recently achieve impressive success in few-shot learning. +These methods modify input samples with prompt sentence pieces, and decode +label tokens to map samples to corresponding labels. However, such a paradigm +is very inefficient for the task of slot tagging. Since slot tagging samples +are multiple consecutive words in a sentence, the prompting methods have to +enumerate all n-grams token spans to find all the possible slots, which greatly +slows down the prediction. To tackle this, we introduce an inverse paradigm for +prompting. Different from the classic prompts mapping tokens to labels, we +reversely predict slot values given slot types. Such inverse prompting only +requires a one-turn prediction for each slot type and greatly speeds up the +prediction. Besides, we propose a novel Iterative Prediction Strategy, from +which the model learns to refine predictions by considering the relations +between different slot types. We find, somewhat surprisingly, the proposed +method not only predicts faster but also significantly improves the effect +(improve over 6.1 F1-scores on 10-shot setting) and achieves new +state-of-the-art performance. +" +Leveraging pre-trained language models for conversational information seeking from text,Patrizio Bellan,http://arxiv.org/pdf/2204.03542v1.pdf,2022-03-31,"['cs.cl', 'cs.ai']",2204.03542v1.pdf," Recent advances in Natural Language Processing, and in particular on the +construction of very large pre-trained language representation models, is +opening up new perspectives on the construction of conversational information +seeking (CIS) systems. In this paper we investigate the usage of in-context +learning and pre-trained language representation models to address the problem +of information extraction from process description documents, in an incremental +question and answering oriented fashion. In particular we investigate the usage +of the native GPT-3 (Generative Pre-trained Transformer 3) model, together with +two in-context learning customizations that inject conceptual definitions and a +limited number of samples in a few shot-learning fashion. The results highlight +the potential of the approach and the usefulness of the in-context learning +customizations, which can substantially contribute to address the ""training +data challenge"" of deep learning based NLP techniques the BPM field. It also +highlight the challenge posed by control flow relations for which further +training needs to be devised. +" +MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text Classification,Jianhai Zhang,http://arxiv.org/pdf/2204.04952v3.pdf,2022-04-11,['cs.cl'],2204.04952v3.pdf," Text classification struggles to generalize to unseen classes with very few +labeled text instances per class. In such a few-shot learning (FSL) setting, +metric-based meta-learning approaches have shown promising results. Previous +studies mainly aim to derive a prototype representation for each class. +However, they neglect that it is challenging-yet-unnecessary to construct a +compact representation which expresses the entire meaning for each class. They +also ignore the importance to capture the inter-dependency between query and +the support set for few-shot text classification. 
To deal with these issues, we +propose a meta-learning based method MGIMN which performs instance-wise +comparison followed by aggregation to generate class-wise matching vectors +instead of prototype learning. The key of instance-wise comparison is the +interactive matching within the class-specific context and episode-specific +context. Extensive experiments demonstrate that the proposed method +significantly outperforms the existing state-of-the-art approaches, under both +the standard FSL and generalized FSL settings. +" +Zero and Few-shot Learning for Author Profiling,Mara Chinea-Rios,http://arxiv.org/pdf/2204.10543v2.pdf,2022-04-22,['cs.cl'],2204.10543v2.pdf," Author profiling classifies author characteristics by analyzing how language +is shared among people. In this work, we study that task from a low-resource +viewpoint: using little or no training data. We explore different zero and +few-shot models based on entailment and evaluate our systems on several +profiling tasks in Spanish and English. In addition, we study the effect of +both the entailment hypothesis and the size of the few-shot training sample. We +find that entailment-based models out-perform supervised text classifiers based +on roberta-XLM and that we can reach 80% of the accuracy of previous approaches +using less than 50\% of the training data on average. +" +Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks,Navid Rezaei,http://arxiv.org/pdf/2204.11922v1.pdf,2022-04-25,"['cs.cl', 'cs.ai']",2204.11922v1.pdf," Pre-trained language models have shown excellent results in few-shot learning +scenarios using in-context learning. Although it is impressive, the size of +language models can be prohibitive to make them usable in on-device +applications, such as sensors or smartphones. With smaller language models, +task-specific data annotation is needed to fine-tune the language model for a +specific purpose. However, data annotation can have a substantial financial and +time burden for small research groups, startups, and even companies. In this +paper, we analyze different prompt-based fine-tuning techniques to improve +results on both language and multimodal causal transformer models. To evaluate +our results, we use a dataset focusing on visual commonsense reasoning in time. +Our results show that by simple model-agnostic prompt-based fine-tuning, +comparable results can be reached by only using 35%-40% of the fine-tuning +training dataset. The proposed approaches result in significant time and +financial savings. As the proposed methods make minimal architectural +assumptions, other researchers can use the results in their transformer models +with minimal adaptations. We plan to release the source code freely to make it +easier for the community to use and contribute to our work. +" +Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models,Sanghwan Bae,http://arxiv.org/pdf/2205.00176v1.pdf,2022-04-30,['cs.cl'],2205.00176v1.pdf," Recent open-domain dialogue models have brought numerous breakthroughs. +However, building a chat system is not scalable since it often requires a +considerable volume of human-human dialogue data, especially when enforcing +features such as persona, style, or safety. In this work, we study the +challenge of imposing roles on open-domain dialogue systems, with the goal of +making the systems maintain consistent roles while conversing naturally with +humans. 
To accomplish this, the system must satisfy a role specification that +includes certain conditions on the stated features as well as a system policy +on whether or not certain types of utterances are allowed. For this, we propose +an efficient data collection framework leveraging in-context few-shot learning +of large-scale language models for building role-satisfying dialogue dataset +from scratch. We then compare various architectures for open-domain dialogue +systems in terms of meeting role specifications while maintaining +conversational abilities. Automatic and human evaluations show that our models +return few out-of-bounds utterances, keeping competitive performance on general +metrics. We release a Korean dialogue dataset we built for further research. +" +EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing,Chengyu Wang,http://arxiv.org/pdf/2205.00258v2.pdf,2022-04-30,['cs.cl'],2205.00258v2.pdf," The success of Pre-Trained Models (PTMs) has reshaped the development of +Natural Language Processing (NLP). Yet, it is not easy to obtain +high-performing models and deploy them online for industrial practitioners. To +bridge this gap, EasyNLP is designed to make it easy to build NLP applications, +which supports a comprehensive suite of NLP algorithms. It further features +knowledge-enhanced pre-training, knowledge distillation and few-shot learning +functionalities for large-scale PTMs, and provides a unified framework of model +training, inference and deployment for real-world applications. Currently, +EasyNLP has powered over ten business units within Alibaba Group and is +seamlessly integrated to the Platform of AI (PAI) products on Alibaba Cloud. +The source code of our EasyNLP toolkit is released at GitHub +(https://github.com/alibaba/EasyNLP). +" +POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection,Yujian Liu,http://arxiv.org/pdf/2205.00619v1.pdf,2022-05-02,['cs.cl'],2205.00619v1.pdf," Ideology is at the core of political science research. Yet, there still does +not exist general-purpose tools to characterize and predict ideology across +different genres of text. To this end, we study Pretrained Language Models +using novel ideology-driven pretraining objectives that rely on the comparison +of articles on the same story written by media of different ideologies. We +further collect a large-scale dataset, consisting of more than 3.6M political +news articles, for pretraining. Our model POLITICS outperforms strong baselines +and the previous state-of-the-art models on ideology prediction and stance +detection tasks. Further analyses show that POLITICS is especially good at +understanding long or formally written texts, and is also robust in few-shot +learning scenarios. +" +KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering,Jianing Wang,http://arxiv.org/pdf/2205.03071v1.pdf,2022-05-06,"['cs.cl', 'cs.ai']",2205.03071v1.pdf," Extractive Question Answering (EQA) is one of the most important tasks in +Machine Reading Comprehension (MRC), which can be solved by fine-tuning the +span selecting heads of Pre-trained Language Models (PLMs). However, most +existing approaches for MRC may perform poorly in the few-shot learning +scenario. To solve this issue, we propose a novel framework named Knowledge +Enhanced Contrastive Prompt-tuning (KECP). 
Instead of adding pointer heads to +PLMs, we introduce a seminal paradigm for EQA that transform the task into a +non-autoregressive Masked Language Modeling (MLM) generation problem. +Simultaneously, rich semantics from the external knowledge base (KB) and the +passage context are support for enhancing the representations of the query. In +addition, to boost the performance of PLMs, we jointly train the model by the +MLM and contrastive learning objectives. Experiments on multiple benchmarks +demonstrate that our method consistently outperforms state-of-the-art +approaches in few-shot settings by a large margin. +" +ProQA: Structural Prompt-based Pre-training for Unified Question Answering,Wanjun Zhong,http://arxiv.org/pdf/2205.04040v2.pdf,2022-05-09,['cs.cl'],2205.04040v2.pdf," Question Answering (QA) is a longstanding challenge in natural language +processing. Existing QA works mostly focus on specific question types, +knowledge domains, or reasoning skills. The specialty in QA research hinders +systems from modeling commonalities between tasks and generalization for wider +applications. To address this issue, we present ProQA, a unified QA paradigm +that solves various tasks through a single model. ProQA takes a unified +structural prompt as the bridge and improves the QA-centric ability by +structural prompt-based pre-training. Through a structurally designed +prompt-based input schema, ProQA concurrently models the knowledge +generalization for all QA tasks while keeping the knowledge customization for +every specific QA task. Furthermore, ProQA is pre-trained with structural +prompt-formatted large-scale synthesized corpus, which empowers the model with +the commonly-required QA ability. Experimental results on 11 QA benchmarks +demonstrate that ProQA consistently boosts performance on both full data +fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, +ProQA exhibits strong ability in both continual learning and transfer learning +by taking the advantages of the structural prompt. +" +ALLSH: Active Learning Guided by Local Sensitivity and Hardness,Shujian Zhang,http://arxiv.org/pdf/2205.04980v2.pdf,2022-05-10,"['cs.cl', 'cs.ai', 'cs.lg']",2205.04980v2.pdf," Active learning, which effectively collects informative unlabeled data for +annotation, reduces the demand for labeled data. In this work, we propose to +retrieve unlabeled samples with a local sensitivity and hardness-aware +acquisition function. The proposed method generates data copies through local +perturbations and selects data points whose predictive likelihoods diverge the +most from their copies. We further empower our acquisition function by +injecting the select-worst case perturbation. Our method achieves consistent +gains over the commonly used active learning strategies in various +classification tasks. Furthermore, we observe consistent improvements over the +baselines on the study of prompt selection in prompt-based few-shot learning. +These experiments demonstrate that our acquisition guided by local sensitivity +and hardness can be effective and beneficial for many NLP tasks. +" +Prototypical Calibration for Few-shot Learning of Language Models,Zhixiong Han,http://arxiv.org/pdf/2205.10183v2.pdf,2022-05-20,['cs.cl'],2205.10183v2.pdf," In-context learning of GPT-like models has been recognized as fragile across +different hand-crafted templates, and demonstration permutations. 
In this work, +we propose prototypical calibration to adaptively learn a more robust decision +boundary for zero- and few-shot classification, instead of greedy decoding. +Concretely, our method first adopts Gaussian mixture distribution to estimate +the prototypical clusters for all categories. Then we assign each cluster to +the corresponding label by solving a weighted bipartite matching problem. Given +an example, its prediction is calibrated by the likelihood of prototypical +clusters. Experimental results show that prototypical calibration yields a +substantial improvement on a diverse set of tasks. Extensive analysis across +different scales also indicates that our method calibrates the decision +boundary as expected, greatly improving the robustness of GPT to templates, +permutations, and class imbalance. +" +BBTv2: Towards a Gradient-Free Future with Large Language Models,Tianxiang Sun,http://arxiv.org/pdf/2205.11200v2.pdf,2022-05-23,"['cs.cl', 'cs.ai']",2205.11200v2.pdf," Most downstream adaptation methods tune all or part of the parameters of +pre-trained models (PTMs) through gradient descent, where the tuning cost +increases linearly with the growth of the model size. By contrast, +gradient-free methods only require the forward computation of the PTM to tune +the prompt, retaining the benefits of efficient tuning and deployment. Though, +past work on gradient-free tuning often introduces gradient descent to seek a +good initialization of prompt and lacks versatility across tasks and PTMs. In +this paper, we present BBTv2, an improved version of Black-Box Tuning, to drive +PTMs for few-shot learning. We prepend continuous prompts to every layer of the +PTM and propose a divide-and-conquer gradient-free algorithm to optimize the +prompts at different layers alternately. Extensive experiments across various +tasks and PTMs show that BBTv2 can achieve comparable performance to full model +tuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA, +BitFit, etc.) under few-shot settings while maintaining much fewer tunable +parameters. +" +Zero-Shot and Few-Shot Learning for Lung Cancer Multi-Label Classification using Vision Transformer,Fu-Ming Guo,http://arxiv.org/pdf/2205.15290v2.pdf,2022-05-30,"['cs.cv', 'cs.ai', 'cs.lg']",2205.15290v2.pdf," Lung cancer is the leading cause of cancer-related death worldwide. Lung +adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most +common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is +an essential tool for lung cancer diagnosis. Pathologists make classifications +according to the dominant subtypes. Although morphology remains the standard +for diagnosis, significant tool needs to be developed to elucidate the +diagnosis. In our study, we utilize the pre-trained Vision Transformer (ViT) +model to classify multiple label lung cancer on histologic slices (from dataset +LC25000), in both Zero-Shot and Few-Shot settings. Then we compare the +performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall, +sensitivity and specificity. Our study show that the pre-trained ViT model has +a good performance in Zero-Shot setting, a competitive accuracy ($99.87\%$) in +Few-Shot setting ({epoch = 1}) and an optimal result ($100.00\%$ on both +validation set and test set) in Few-Shot seeting ({epoch = 5}). 
+" +Neural Prompt Search,Yuanhan Zhang,http://arxiv.org/pdf/2206.04673v2.pdf,2022-06-09,"['cs.cv', 'cs.ai', 'cs.lg']",2206.04673v2.pdf," The size of vision models has grown exponentially over the last few years, +especially after the emergence of Vision Transformer. This has motivated the +development of parameter-efficient tuning methods, such as learning adapter +layers or visual prompt tokens, which allow a tiny portion of model parameters +to be trained whereas the vast majority obtained from pre-training are frozen. +However, designing a proper tuning method is non-trivial: one might need to try +out a lengthy list of design choices, not to mention that each downstream +dataset often requires custom designs. In this paper, we view the existing +parameter-efficient tuning methods as ""prompt modules"" and propose Neural +prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, +the optimal design of prompt modules through a neural architecture search +algorithm, specifically for each downstream dataset. By conducting extensive +experiments on over 20 vision datasets, we demonstrate that NOAH (i) is +superior to individual prompt modules, (ii) has a good few-shot learning +ability, and (iii) is domain-generalizable. The code and models are available +at https://github.com/Davidzhangyuanhan/NOAH. +" +Low Resource Pipeline for Spoken Language Understanding via Weak Supervision,Ayush Kumar,http://arxiv.org/pdf/2206.10559v1.pdf,2022-06-21,['cs.cl'],2206.10559v1.pdf," In Weak Supervised Learning (WSL), a model is trained over noisy labels +obtained from semantic rules and task-specific pre-trained models. Rules offer +limited generalization over tasks and require significant manual efforts while +pre-trained models are available only for limited tasks. In this work, we +propose to utilize prompt-based methods as weak sources to obtain the noisy +labels on unannotated data. We show that task-agnostic prompts are +generalizable and can be used to obtain noisy labels for different Spoken +Language Understanding (SLU) tasks such as sentiment classification, disfluency +detection and emotion classification. These prompts could additionally be +updated to add task-specific contexts, thus providing flexibility to design +task-specific prompts. We demonstrate that prompt-based methods generate +reliable labels for the above SLU tasks and thus can be used as a universal +weak source to train a weak-supervised model (WSM) in absence of labeled data. +Our proposed WSL pipeline trained over prompt-based weak source outperforms +other competitive low-resource benchmarks on zero and few-shot learning by more +than 4% on Macro-F1 on all of the three benchmark SLU datasets. The proposed +method also outperforms a conventional rule based WSL pipeline by more than 5% +on Macro-F1. +" +Prompting Decision Transformer for Few-Shot Policy Generalization,Mengdi Xu,http://arxiv.org/pdf/2206.13499v1.pdf,2022-06-27,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ro']",2206.13499v1.pdf," Humans can leverage prior experience and learn novel tasks from a handful of +demonstrations. In contrast to offline meta-reinforcement learning, which aims +to achieve quick adaptation through better algorithm design, we investigate the +effect of architecture inductive bias on the few-shot learning capability. We +propose a Prompt-based Decision Transformer (Prompt-DT), which leverages the +sequential modeling ability of the Transformer architecture and the prompt +framework to achieve few-shot adaptation in offline RL. 
We design the +trajectory prompt, which contains segments of the few-shot demonstrations, and +encodes task-specific information to guide policy generation. Our experiments +in five MuJoCo control benchmarks show that Prompt-DT is a strong few-shot +learner without any extra finetuning on unseen target tasks. Prompt-DT +outperforms its variants and strong meta offline RL baselines by a large margin +with a trajectory prompt containing only a few timesteps. Prompt-DT is also +robust to prompt length changes and can generalize to out-of-distribution (OOD) +environments. +" +Few-shot training LLMs for project-specific code-summarization,Toufique Ahmed,http://arxiv.org/pdf/2207.04237v2.pdf,2022-07-09,"['cs.se', 'cs.lg']",2207.04237v2.pdf," Very large language models (LLMs), such as GPT-3 and Codex have achieved +state-of-the-art performance on several natural-language tasks, and show great +promise also for code. A particularly exciting aspect of LLMs is their knack +for few-shot and zero-shot learning: they can learn to perform a task with very +few examples. Few-shotting has particular synergies in software engineering, +where there are a lot of phenomena (identifier names, APIs, terminology, coding +patterns) that are known to be highly project-specific. However, +project-specific data can be quite limited, especially early in the history of +a project; thus the few-shot learning capacity of LLMs might be very relevant. +In this paper, we investigate the use few-shot training with the very large GPT +(Generative Pre-trained Transformer) Codex model, and find evidence suggesting +that one can significantly surpass state-of-the-art models for +code-summarization, leveraging project-specific training. +" +Convolutional Bypasses Are Better Vision Transformer Adapters,Shibo Jie,http://arxiv.org/pdf/2207.07039v3.pdf,2022-07-14,['cs.cv'],2207.07039v3.pdf," The pretrain-then-finetune paradigm has been widely adopted in computer +vision. But as the size of Vision Transformer (ViT) grows exponentially, the +full finetuning becomes prohibitive in view of the heavier storage overhead. +Motivated by parameter-efficient transfer learning (PETL) on language +transformers, recent studies attempt to insert lightweight adaptation modules +(e.g., adapter layers or prompt tokens) to pretrained ViT and only finetune +these modules while the pretrained weights are frozen. However, these modules +were originally proposed to finetune language models and did not take into +account the prior knowledge specifically for visual tasks. In this paper, we +propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation +modules, introducing only a small amount (less than 0.5% of model parameters) +of trainable parameters to adapt the large ViT. Different from other PETL +methods, Convpass benefits from the hard-coded inductive bias of convolutional +layers and thus is more suitable for visual tasks, especially in the low-data +regime. Experimental results on VTAB-1K benchmark and few-shot learning +datasets show that Convpass outperforms current language-oriented adaptation +modules, demonstrating the necessity to tailor vision-oriented adaptation +modules for adapting vision models. +" +STT: Soft Template Tuning for Few-Shot Adaptation,Ping Yu,http://arxiv.org/pdf/2207.08408v1.pdf,2022-07-18,"['cs.cl', 'cs.ai']",2207.08408v1.pdf," Prompt tuning has been an extremely effective tool to adapt a pre-trained +model to downstream tasks. 
However, standard prompt-based methods mainly +consider the case of sufficient data of downstream tasks. It is still unclear +whether the advantage can be transferred to the few-shot regime, where only +limited data are available for each downstream task. Although some works have +demonstrated the potential of prompt-tuning under the few-shot setting, the +main stream methods via searching discrete prompts or tuning soft prompts with +limited data are still very challenging. Through extensive empirical studies, +we find that there is still a gap between prompt tuning and fully fine-tuning +for few-shot learning. To bridge the gap, we propose a new prompt-tuning +framework, called Soft Template Tuning (STT). STT combines manual and auto +prompts, and treats downstream classification tasks as a masked language +modeling task. Comprehensive evaluation on different settings suggests STT can +close the gap between fine-tuning and prompt-based methods without introducing +additional parameters. Significantly, it can even outperform the time- and +resource-consuming fine-tuning method on sentiment classification tasks. +" +Self-Supervision Can Be a Good Few-Shot Learner,Yuning Lu,http://arxiv.org/pdf/2207.09176v1.pdf,2022-07-19,['cs.cv'],2207.09176v1.pdf," Existing few-shot learning (FSL) methods rely on training with a large +labeled dataset, which prevents them from leveraging abundant unlabeled data. +From an information-theoretic perspective, we propose an effective unsupervised +FSL method, learning representations with self-supervision. Following the +InfoMax principle, our method learns comprehensive representations by capturing +the intrinsic structure of the data. Specifically, we maximize the mutual +information (MI) of instances and their representations with a low-bias MI +estimator to perform self-supervised pre-training. Rather than supervised +pre-training focusing on the discriminable features of the seen classes, our +self-supervised model has less bias toward the seen classes, resulting in +better generalization for unseen classes. We explain that supervised +pre-training and self-supervised pre-training are actually maximizing different +MI objectives. Extensive experiments are further conducted to analyze their FSL +performance with various training settings. Surprisingly, the results show that +self-supervised pre-training can outperform supervised pre-training under the +appropriate conditions. Compared with state-of-the-art FSL methods, our +approach achieves comparable performance on widely used FSL benchmarks without +any labels of the base classes. +" +Language Model Cascades,David Dohan,http://arxiv.org/pdf/2207.10342v2.pdf,2022-07-21,"['cs.cl', 'cs.ai']",2207.10342v2.pdf," Prompted models have demonstrated impressive few-shot learning abilities. +Repeated interactions at test-time with a single model, or the composition of +multiple models together, further expands capabilities. These compositions are +probabilistic models, and may be expressed in the language of graphical models +with random variables whose values are complex data types such as strings. +Cases with control flow and dynamic structure require techniques from +probabilistic programming, which allow implementing disparate model structures +and inference strategies in a unified language. We formalize several existing +techniques from this perspective, including scratchpads / chain of thought, +verifiers, STaR, selection-inference, and tool use. 
We refer to the resulting +programs as language model cascades. +" +Few-shot Adaptation Works with UnpredicTable Data,Jun Shern Chan,http://arxiv.org/pdf/2208.01009v2.pdf,2022-08-01,"['cs.cl', 'cs.ai', 'cs.lg']",2208.01009v2.pdf," Prior work on language models (LMs) shows that training on a large number of +diverse tasks improves few-shot learning (FSL) performance on new tasks. We +take this to the extreme, automatically extracting 413,299 tasks from internet +tables - orders of magnitude more than the next-largest public datasets. +Finetuning on the resulting dataset leads to improved FSL performance on +Natural Language Processing (NLP) tasks, but not proportionally to dataset +scale. In fact, we find that narrow subsets of our dataset sometimes outperform +more diverse datasets. For example, finetuning on software documentation from +support.google.com raises FSL performance by a mean of +7.5% on 52 downstream +tasks, which beats training on 40 human-curated NLP datasets (+6.7%). +Finetuning on various narrow datasets leads to similar broad improvements +across test tasks, suggesting that the gains are not from domain adaptation but +adapting to FSL in general. We do not observe clear patterns between the +datasets that lead to FSL gains, leaving open questions about why certain data +helps with FSL. +" +Robotic Interestingness via Human-Informed Few-Shot Object Detection,Seungchan Kim,http://arxiv.org/pdf/2208.01084v1.pdf,2022-08-01,['cs.ro'],2208.01084v1.pdf," Interestingness recognition is crucial for decision making in autonomous +exploration for mobile robots. Previous methods proposed an unsupervised online +learning approach that can adapt to environments and detect interesting scenes +quickly, but lack the ability to adapt to human-informed interesting objects. +To solve this problem, we introduce a human-interactive framework, +AirInteraction, that can detect human-informed objects via few-shot online +learning. To reduce the communication bandwidth, we first apply an online +unsupervised learning algorithm on the unmanned vehicle for interestingness +recognition and then only send the potential interesting scenes to a +base-station for human inspection. The human operator is able to draw and +provide bounding box annotations for particular interesting objects, which are +sent back to the robot to detect similar objects via few-shot learning. Only +using few human-labeled examples, the robot can learn novel interesting object +categories during the mission and detect interesting scenes that contain the +objects. We evaluate our method on various interesting scene recognition +datasets. To the best of our knowledge, it is the first human-informed few-shot +object detection framework for autonomous exploration. +" +Atlas: Few-shot Learning with Retrieval Augmented Language Models,Gautier Izacard,http://arxiv.org/pdf/2208.03299v3.pdf,2022-08-05,['cs.cl'],2208.03299v3.pdf," Large language models have shown impressive few-shot results on a wide range +of tasks. However, when knowledge is key for such results, as is the case for +tasks such as question answering and fact checking, massive parameter counts to +store knowledge seem to be needed. Retrieval augmented models are known to +excel at knowledge intensive tasks without the need for as many parameters, but +it is unclear whether they work in few-shot settings. 
In this work we present +Atlas, a carefully designed and pre-trained retrieval augmented language model +able to learn knowledge intensive tasks with very few training examples. We +perform evaluations on a wide range of tasks, including MMLU, KILT and +NaturalQuestions, and study the impact of the content of the document index, +showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy +on Natural Questions using only 64 examples, outperforming a 540B parameters +model by 3% despite having 50x fewer parameters. +" +Limits of an AI program for solving college math problems,Ernest Davis,http://arxiv.org/pdf/2208.06906v1.pdf,2022-08-14,['cs.ai'],2208.06906v1.pdf," Drori et al. (2022) report that ""A neural network solves, explains, and +generates university math problems by program synthesis and few-shot learning +at human level ... [It] automatically answers 81\% of university-level +mathematics problems."" The system they describe is indeed impressive; however, +the above description is very much overstated. The work of solving the problems +is done, not by a neural network, but by the symbolic algebra package Sympy. +Problems of various formats are excluded from consideration. The so-called +""explanations"" are just rewordings of lines of code. Answers are marked as +correct that are not in the form specified in the problem. Most seriously, it +seems that in many cases the system uses the correct answer given in the test +corpus to guide its path to solving the problem. +" +Efficient Few-Shot Learning Without Prompts,Lewis Tunstall,http://arxiv.org/pdf/2209.11055v1.pdf,2022-09-22,['cs.cl'],2209.11055v1.pdf," Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) and +pattern exploiting training (PET), have achieved impressive results in +label-scarce settings. However, they are difficult to employ since they are +subject to high variability from manually crafted prompts, and typically +require billion-parameter language models to achieve high accuracy. To address +these shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), an +efficient and prompt-free framework for few-shot fine-tuning of Sentence +Transformers (ST). SetFit works by first fine-tuning a pretrained ST on a small +number of text pairs, in a contrastive Siamese manner. The resulting model is +then used to generate rich text embeddings, which are used to train a +classification head. This simple framework requires no prompts or verbalizers, +and achieves high accuracy with orders of magnitude less parameters than +existing techniques. Our experiments show that SetFit obtains comparable +results with PEFT and PET techniques, while being an order of magnitude faster +to train. We also show that SetFit can be applied in multilingual settings by +simply switching the ST body. Our code is available at +https://github.com/huggingface/setfit and our datasets at +https://huggingface.co/setfit . +" +CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation,Tanay Dixit,http://arxiv.org/pdf/2210.04873v2.pdf,2022-10-10,['cs.cl'],2210.04873v2.pdf," Counterfactual data augmentation (CDA) -- i.e., adding minimally perturbed +inputs during training -- helps reduce model reliance on spurious correlations +and improves generalization to out-of-distribution (OOD) data. Prior work on +generating counterfactuals only considered restricted classes of perturbations, +limiting their effectiveness. 
We present COunterfactual Generation via +Retrieval and Editing (CORE), a retrieval-augmented generation framework for +creating diverse counterfactual perturbations for CDA. For each training +example, CORE first performs a dense retrieval over a task-related unlabeled +text corpus using a learned bi-encoder and extracts relevant counterfactual +excerpts. CORE then incorporates these into prompts to a large language model +with few-shot learning capabilities, for counterfactual editing. Conditioning +language model edits on naturally occurring data results in diverse +perturbations. Experiments on natural language inference and sentiment analysis +benchmarks show that CORE counterfactuals are more effective at improving +generalization to OOD data compared to other DA approaches. We also show that +the CORE retrieval framework can be used to encourage diversity in manually +authored perturbations +" +Continual Training of Language Models for Few-Shot Learning,Zixuan Ke,http://arxiv.org/pdf/2210.05549v1.pdf,2022-10-11,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",2210.05549v1.pdf," Recent work on applying large language models (LMs) achieves impressive +performance in many NLP applications. Adapting or posttraining an LM using an +unlabeled domain corpus can produce even better performance for end-tasks in +the domain. This paper proposes the problem of continually extending an LM by +incrementally post-train the LM with a sequence of unlabeled domain corpora to +expand its knowledge without forgetting its previous skills. The goal is to +improve the few-shot end-task learning in these domains. The resulting system +is called CPT (Continual PostTraining), which to our knowledge, is the first +continual post-training system. Experimental results verify its effectiveness. +" +Knowledge-grounded Dialog State Tracking,Dian Yu,http://arxiv.org/pdf/2210.06656v1.pdf,2022-10-13,['cs.cl'],2210.06656v1.pdf," Knowledge (including structured knowledge such as schema and ontology, and +unstructured knowledge such as web corpus) is a critical part of dialog +understanding, especially for unseen tasks and domains. Traditionally, such +domain-specific knowledge is encoded implicitly into model parameters for the +execution of downstream tasks, which makes training inefficient. In addition, +such models are not easily transferable to new tasks with different schemas. In +this work, we propose to perform dialog state tracking grounded on knowledge +encoded externally. We query relevant knowledge of various forms based on the +dialog context where such information can ground the prediction of dialog +states. We demonstrate superior performance of our proposed method over strong +baselines, especially in the few-shot learning setting. +" +Unified Vision and Language Prompt Learning,Yuhang Zang,http://arxiv.org/pdf/2210.07225v1.pdf,2022-10-13,"['cs.cv', 'cs.ai']",2210.07225v1.pdf," Prompt tuning, a parameter- and data-efficient transfer learning paradigm +that tunes only a small number of parameters in a model's input space, has +become a trend in the vision community since the emergence of large +vision-language models like CLIP. We present a systematic study on two +representative prompt tuning methods, namely text prompt tuning and visual +prompt tuning. A major finding is that none of the unimodal prompt tuning +methods performs consistently well: text prompt tuning fails on data with high +intra-class visual variances while visual prompt tuning cannot handle low +inter-class variances. 
To combine the best from both worlds, we propose a +simple approach called Unified Prompt Tuning (UPT), which essentially learns a +tiny neural network to jointly optimize prompts across different modalities. +Extensive experiments on over 11 vision datasets show that UPT achieves a +better trade-off than the unimodal counterparts on few-shot learning +benchmarks, as well as on domain generalization benchmarks. Code and models +will be released to facilitate future research. +" +"Vision-Language Pre-training: Basics, Recent Advances, and Future Trends",Zhe Gan,http://arxiv.org/pdf/2210.09263v1.pdf,2022-10-17,"['cs.cv', 'cs.cl']",2210.09263v1.pdf," This paper surveys vision-language pre-training (VLP) methods for multimodal +intelligence that have been developed in the last few years. We group these +approaches into three categories: ($i$) VLP for image-text tasks, such as image +captioning, image-text retrieval, visual question answering, and visual +grounding; ($ii$) VLP for core computer vision tasks, such as (open-set) image +classification, object detection, and segmentation; and ($iii$) VLP for +video-text tasks, such as video captioning, video-text retrieval, and video +question answering. For each category, we present a comprehensive review of +state-of-the-art methods, and discuss the progress that has been made and +challenges still being faced, using specific systems and models as case +studies. In addition, for each category, we discuss advanced topics being +actively explored in the research community, such as big foundation models, +unified modeling, in-context few-shot learning, knowledge, robustness, and +computer vision in the wild, to name a few. +" +Better Few-Shot Relation Extraction with Label Prompt Dropout,Peiyuan Zhang,http://arxiv.org/pdf/2210.13733v1.pdf,2022-10-25,['cs.cl'],2210.13733v1.pdf," Few-shot relation extraction aims to learn to identify the relation between +two entities based on very limited training examples. Recent efforts found that +textual labels (i.e., relation names and relation descriptions) could be +extremely useful for learning class representations, which will benefit the +few-shot learning task. However, what is the best way to leverage such label +information in the learning process is an important research question. Existing +works largely assume such textual labels are always present during both +learning and prediction. In this work, we argue that such approaches may not +always lead to optimal results. Instead, we present a novel approach called +label prompt dropout, which randomly removes label descriptions in the learning +process. Our experiments show that our approach is able to lead to improved +class representations, yielding significantly better results on the few-shot +relation extraction task. +" +STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification,Jinta Weng,http://arxiv.org/pdf/2210.16489v1.pdf,2022-10-29,"['cs.cl', 'cs.ai']",2210.16489v1.pdf," The effectiveness of prompt learning has been demonstrated in different +pre-trained language models. By formulating suitable template and choosing +representative label mapping, prompt learning can be used as an efficient +knowledge probe. However, finding suitable prompt in existing methods requires +multiple experimental attempts or appropriate vector initialization on +formulating suitable template and choosing representative label mapping, which +it is more common in few-shot learning tasks. 
Motivating by PLM working +process, we try to construct the prompt from task semantic perspective and thus +propose the STPrompt -Semantic-guided and Task-driven Prompt model. +Specifically, two novel prompts generated from the semantic dependency tree +(Dep-prompt) and task-specific metadata description (Meta-prompt), are firstly +constructed in a prompt augmented pool, and the proposed model would +automatically select a suitable semantic prompt to motivating the prompt +learning process. Our results show that the proposed model achieves the +state-of-the-art performance in five different datasets of few-shot text +classification tasks, which prove that more semantic and significant prompts +could assume as a better knowledge proving tool. +" +ConsPrompt: Easily Exploiting Contrastive Samples for Few-shot Prompt Learning,Jinta Weng,http://arxiv.org/pdf/2211.04118v1.pdf,2022-11-08,"['cs.cl', 'cs.ai']",2211.04118v1.pdf," Prompt learning recently become an effective linguistic tool to motivate the +PLMs' knowledge on few-shot-setting tasks. However, studies have shown the lack +of robustness still exists in prompt learning, since suitable initialization of +continuous prompt and expert-first manual prompt are essential in fine-tuning +process. What is more, human also utilize their comparative ability to motivate +their existing knowledge for distinguishing different examples. Motivated by +this, we explore how to use contrastive samples to strengthen prompt learning. +In detail, we first propose our model ConsPrompt combining with prompt encoding +network, contrastive sampling module, and contrastive scoring module. +Subsequently, two sampling strategies, similarity-based and label-based +strategies, are introduced to realize differential contrastive learning. The +effectiveness of proposed ConsPrompt is demonstrated in five different few-shot +learning tasks and shown the similarity-based sampling strategy is more +effective than label-based in combining contrastive learning. Our results also +exhibits the state-of-the-art performance and robustness in different few-shot +settings, which proves that the ConsPrompt could be assumed as a better +knowledge probe to motivate PLMs. +" +Retrieval-Augmented Generative Question Answering for Event Argument Extraction,Xinya Du,http://arxiv.org/pdf/2211.07067v1.pdf,2022-11-14,['cs.cl'],2211.07067v1.pdf," Event argument extraction has long been studied as a sequential prediction +problem with extractive-based methods, tackling each argument in isolation. +Although recent work proposes generation-based methods to capture +cross-argument dependency, they require generating and post-processing a +complicated target sequence (template). Motivated by these observations and +recent pretrained language models' capabilities of learning from +demonstrations. We propose a retrieval-augmented generative QA model (R-GQA) +for event argument extraction. It retrieves the most similar QA pair and +augments it as prompt to the current example's context, then decodes the +arguments as answers. Our approach outperforms substantially prior methods +across various settings (i.e. fully supervised, domain transfer, and fewshot +learning). Finally, we propose a clustering-based sampling strategy (JointEnc) +and conduct a thorough analysis of how different strategies influence the +few-shot learning performance. 
The implementations are available at https:// +github.com/xinyadu/RGQA +" +ProtSi: Prototypical Siamese Network with Data Augmentation for Few-Shot Subjective Answer Evaluation,Yining Lu,http://arxiv.org/pdf/2211.09855v1.pdf,2022-11-17,['cs.cl'],2211.09855v1.pdf," Subjective answer evaluation is a time-consuming and tedious task, and the +quality of the evaluation is heavily influenced by a variety of subjective +personal characteristics. Instead, machine evaluation can effectively assist +educators in saving time while also ensuring that evaluations are fair and +realistic. However, most existing methods using regular machine learning and +natural language processing techniques are generally hampered by a lack of +annotated answers and poor model interpretability, making them unsuitable for +real-world use. To solve these challenges, we propose ProtSi Network, a unique +semi-supervised architecture that for the first time uses few-shot learning to +subjective answer evaluation. To evaluate students' answers by similarity +prototypes, ProtSi Network simulates the natural process of evaluator scoring +answers by combining Siamese Network which consists of BERT and encoder layers +with Prototypical Network. We employed an unsupervised diverse paraphrasing +model ProtAugment, in order to prevent overfitting for effective few-shot text +classification. By integrating contrastive learning, the discriminative text +issue can be mitigated. Experiments on the Kaggle Short Scoring Dataset +demonstrate that the ProtSi Network outperforms the most recent baseline models +in terms of accuracy and quadratic weighted kappa. +" +TEMPERA: Test-Time Prompting via Reinforcement Learning,Tianjun Zhang,http://arxiv.org/pdf/2211.11890v1.pdf,2022-11-21,"['cs.cl', 'cs.ai']",2211.11890v1.pdf," Careful prompt design is critical to the use of large language models in +zero-shot or few-shot learning. As a consequence, there is a growing interest +in automated methods to design optimal prompts. In this work, we propose +Test-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast to +prior prompt generation methods, TEMPERA can efficiently leverage prior +knowledge, is adaptive to different queries and provides an interpretable +prompt for every query. To achieve this, we design a novel action space that +allows flexible editing of the initial prompts covering a wide set of +commonly-used components like instructions, few-shot exemplars, and +verbalizers. The proposed method achieves significant gains compared with +recent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across a +variety of tasks including sentiment analysis, topic classification, natural +language inference, and reading comprehension. Our method achieves 5.33x on +average improvement in sample efficiency when compared to the traditional +fine-tuning methods. +" +Towards Practical Few-shot Federated NLP,Dongqi Cai,http://arxiv.org/pdf/2212.00192v2.pdf,2022-12-01,"['cs.cl', 'cs.lg']",2212.00192v2.pdf," Transformer-based pre-trained models have emerged as the predominant solution +for natural language processing (NLP). Fine-tuning such pre-trained models for +downstream tasks often requires a considerable amount of labeled private data. +In practice, private data is often distributed across heterogeneous mobile +devices and may be prohibited from being uploaded. Moreover, well-curated +labeled data is often scarce, presenting an additional challenge. 
To address +these challenges, we first introduce a data generator for federated few-shot +learning tasks, which encompasses the quantity and skewness of scarce labeled +data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a +prompt-based federated learning system that exploits abundant unlabeled data +for data augmentation. Our experiments indicate that AUG-FedPrompt can perform +on par with full-set fine-tuning with a limited amount of labeled data. +However, such competitive performance comes at a significant system cost. +" +Few-Shot Nested Named Entity Recognition,Hong Ming,http://arxiv.org/pdf/2212.00953v1.pdf,2022-12-02,"['cs.cl', 'cs.ai']",2212.00953v1.pdf," While Named Entity Recognition (NER) is a widely studied task, making +inferences of entities with only a few labeled data has been challenging, +especially for entities with nested structures. Unlike flat entities, entities +and their nested entities are more likely to have similar semantic feature +representations, drastically increasing difficulties in classifying different +entity categories in the few-shot setting. Although prior work has briefly +discussed nested structures in the context of few-shot learning, to our best +knowledge, this paper is the first one specifically dedicated to studying the +few-shot nested NER task. Leveraging contextual dependency to distinguish +nested entities, we propose a Biaffine-based Contrastive Learning (BCL) +framework. We first design a Biaffine span representation module for learning +the contextual span dependency representation for each entity span rather than +only learning its semantic representation. We then merge these two +representations by the residual connection to distinguish nested entities. +Finally, we build a contrastive learning framework to adjust the representation +distribution for larger margin boundaries and more generalized domain transfer +learning ability. We conducted experimental studies on three English, German, +and Russian nested NER datasets. The results show that the BCL outperformed +three baseline models on the 1-shot and 5-shot tasks in terms of F1 score. +" +Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration,Feng Nie,http://arxiv.org/pdf/2212.02216v1.pdf,2022-12-05,['cs.cl'],2212.02216v1.pdf," Pre-trained language models (PLMs) have exhibited remarkable few-shot +learning capabilities when provided a few examples in a natural language prompt +as demonstrations of test instances, i.e., in-context learning. However, the +performance of in-context learning is susceptible to the choice of prompt +format, training examples and the ordering of the training examples. In this +paper, we propose a novel nearest-neighbor calibration framework for in-context +learning to ease this issue. It is inspired by a phenomenon that the in-context +learning paradigm produces incorrect labels when inferring training instances, +which provides a useful supervised signal to calibrate predictions. Thus, our +method directly augments the predictions with a $k$-nearest-neighbor ($k$NN) +classifier over a datastore of cached few-shot instance representations +obtained by PLMs and their corresponding labels. Then adaptive neighbor +selection and feature regularization modules are introduced to make full use of +a few support instances to reduce the $k$NN retrieval noise. 
Experiments on +various few-shot text classification tasks demonstrate that our method +significantly improves in-context learning, while even achieving comparable +performance with state-of-the-art tuning-based approaches in some sentiment +analysis tasks. +" +JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset,Ruth-Ann Armstrong,http://arxiv.org/pdf/2212.03419v1.pdf,2022-12-07,"['cs.cl', 'cs.lg', 'i.2.7']",2212.03419v1.pdf," JamPatoisNLI provides the first dataset for natural language inference in a +creole language, Jamaican Patois. Many of the most-spoken low-resource +languages are creoles. These languages commonly have a lexicon derived from a +major world language and a distinctive grammar reflecting the languages of the +original speakers and the process of language birth by creolization. This gives +them a distinctive place in exploring the effectiveness of transfer from large +monolingual or multilingual pretrained models. While our work, along with +previous work, shows that transfer from these models to low-resource languages +that are unrelated to languages in their training set is not very effective, we +would expect stronger results from transfer to creoles. Indeed, our experiments +show considerably better results from few-shot learning of JamPatoisNLI than +for such unrelated languages, and help us begin to understand how the unique +relationship between creoles and their high-resource base languages affect +cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring +premises and expert-written hypotheses, is a step towards steering research +into a traditionally underserved language and a useful benchmark for +understanding cross-lingual NLP. +" +Learn to Explore: on Bootstrapping Interactive Data Exploration with Meta-learning,Yukun Cao,http://arxiv.org/pdf/2212.03423v4.pdf,2022-12-07,"['cs.db', 'cs.ai']",2212.03423v4.pdf," Interactive data exploration (IDE) is an effective way of comprehending big +data, whose volume and complexity are beyond human abilities. The main goal of +IDE is to discover user interest regions from a database through multi-rounds +of user labelling. Existing IDEs adopt active-learning framework, where users +iteratively discriminate or label the interestingness of selected tuples. The +process of data exploration can be viewed as the process of training a +classifier, which determines whether a database tuple is interesting to a user. +An efficient exploration thus takes very few iterations of user labelling to +reach the data region of interest. In this work, we consider the data +exploration as the process of few-shot learning, where the classifier is +learned with only a few training examples, or exploration iterations. To this +end, we propose a learning-to-explore framework, based on meta-learning, which +learns how to learn a classifier with automatically generated meta-tasks, so +that the exploration process can be much shortened. Extensive experiments on +real datasets show that our proposal outperforms existing explore-by-example +solutions in terms of accuracy and efficiency. +" +Demystifying Prompts in Language Models via Perplexity Estimation,Hila Gonen,http://arxiv.org/pdf/2212.04037v1.pdf,2022-12-08,['cs.cl'],2212.04037v1.pdf," Language models can be prompted to perform a wide variety of zero- and +few-shot learning problems. However, performance varies significantly with the +choice of prompt, and we do not yet understand why this happens or how to pick +the best prompts. 
In this work, we analyze the factors that contribute to this +variance and establish a new empirical hypothesis: the performance of a prompt +is coupled with the extent to which the model is familiar with the language it +contains. Over a wide range of tasks, we show that the lower the perplexity of +the prompt is, the better the prompt is able to perform the task. As a result, +we devise a method for creating prompts: (1) automatically extend a small seed +set of manually written prompts by paraphrasing using GPT3 and backtranslation +and (2) choose the lowest perplexity prompts to get significant gains in +performance. +" +Technical Report -- Competition Solution for Prompt Tuning using Pretrained Language Model,Jiang-Long Song,http://arxiv.org/pdf/2212.06369v3.pdf,2022-12-13,['cs.cl'],2212.06369v3.pdf," Prompt tuning recently becomes a hot-spot in the applications of large +pretrained language models on specific downstream tasks. Regarding the Language +Model as a Service (LMaaS), black-box tuning using derivative-free optimization +(DFO) provides a novel approach to expand the practical scenarios of pretrained +models and enrich the researches of few-shot learning. In this report, we +present our solution in this competition that is based on the LMaaS scenario. +Our solution consists of several modifications to BBTv2, including multiple +label words, selection of P0, rolling update strategy, multi-task loss from MLP +classifier, and finally using the ensemble method to further improve +generalization ability. We also shared some strategies that we tried but didn't +use in the final submission for further discussion. In the end we raised a +question about the SNLI dataset and the impact on the results, as well as our +concerns about the competition. +" +Localized Latent Updates for Fine-Tuning Vision-Language Models,Moritz Ibing,http://arxiv.org/pdf/2212.06556v1.pdf,2022-12-13,"['cs.cv', 'cs.cl', 'cs.lg']",2212.06556v1.pdf," Although massive pre-trained vision-language models like CLIP show impressive +generalization capabilities for many tasks, still it often remains necessary to +fine-tune them for improved performance on specific datasets. When doing so, it +is desirable that updating the model is fast and that the model does not lose +its capabilities on data outside of the dataset, as is often the case with +classical fine-tuning approaches. In this work we suggest a lightweight +adapter, that only updates the models predictions close to seen datapoints. We +demonstrate the effectiveness and speed of this relatively simple approach in +the context of few-shot learning, where our results both on classes seen and +unseen during training are comparable with or improve on the state of the art. +" +ALERT: Adapting Language Models to Reasoning Tasks,Ping Yu,http://arxiv.org/pdf/2212.08286v2.pdf,2022-12-16,['cs.cl'],2212.08286v2.pdf," Current large language models can perform reasonably well on complex tasks +that require step-by-step reasoning with few-shot learning. Are these models +applying reasoning skills they have learnt during pre-training and reason +outside of their training context, or are they simply memorizing their training +corpus at finer granularity and have learnt to better understand their context? +To tease apart these possibilities, we introduce ALERT, a benchmark and suite +of analyses for assessing language models' reasoning ability comparing +pre-trained and finetuned models on complex tasks that require reasoning skills +to solve. 
ALERT provides a test bed to asses any language model on fine-grained +reasoning skills, which spans over 20 datasets and covers 10 different +reasoning skills. We leverage ALERT to further investigate the role of +finetuning. With extensive empirical analysis we find that language models +learn more reasoning skills such as textual entailment, abductive reasoning, +and analogical reasoning during finetuning stage compared to pretraining state. +We also find that when language models are finetuned they tend to overfit to +the prompt template, which hurts the robustness of models causing +generalization problems. +" +Learning from Taxonomy: Multi-label Few-Shot Classification for Everyday Sound Recognition,Jinhua Liang,http://arxiv.org/pdf/2212.08952v1.pdf,2022-12-17,"['cs.sd', 'eess.as']",2212.08952v1.pdf," Everyday sound recognition aims to infer types of sound events in audio +streams. While many works succeeded in training models with high performance in +a fully-supervised manner, they are still restricted to the demand of large +quantities of labelled data and the range of predefined classes. To overcome +these drawbacks, this work firstly curates a new database named FSD-FS for +multi-label few-shot audio classification. It then explores how to incorporate +audio taxonomy in few-shot learning. Specifically, this work proposes +label-dependent prototypical networks (LaD-protonet) to exploit parent-children +relationships between labels. Plus, it applies taxonomy-aware label smoothing +techniques to boost model performance. Experiments demonstrate that +LaD-protonet outperforms original prototypical networks as well as other +state-of-the-art methods. Moreover, its performance can be further boosted when +combined with taxonomy-aware label smoothing. +" +Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations,Xinxi Lyu,http://arxiv.org/pdf/2212.09865v2.pdf,2022-12-19,"['cs.cl', 'cs.ai']",2212.09865v2.pdf," Although large language models can be prompted for both zero- and few-shot +learning, performance drops significantly when no demonstrations are available. +In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap +by constructing pseudo-demonstrations for a given test input using a raw text +corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the +nearest neighbors to the test input from the corpus and pairing them with +random task labels, and (2) applying a set of techniques to reduce the amount +of direct copying the model does from the resulting demonstrations. Evaluation +on nine classification datasets shows that Z-ICL outperforms previous zero-shot +methods by a significant margin, and is on par with in-context learning with +labeled training data in the few-shot setting. Overall, Z-ICL provides a +significantly higher estimate of the zero-shot performance levels of a model, +and supports future efforts to develop better pseudo-demonstrations that +further improve zero-shot results. +" +A Survey On Few-shot Knowledge Graph Completion with Structural and Commonsense Knowledge,Haodi Ma,http://arxiv.org/pdf/2301.01172v1.pdf,2023-01-03,"['cs.cl', 'cs.ai', 'cs.lg']",2301.01172v1.pdf," Knowledge graphs (KG) have served as the key component of various natural +language processing applications. Commonsense knowledge graphs (CKG) are a +special type of KG, where entities and relations are composed of free-form +text. 
However, previous works in KG completion and CKG completion suffer from +long-tail relations and newly-added relations which do not have many know +triples for training. In light of this, few-shot KG completion (FKGC), which +requires the strengths of graph representation learning and few-shot learning, +has been proposed to challenge the problem of limited annotated data. In this +paper, we comprehensively survey previous attempts on such tasks in the form of +a series of methods and applications. Specifically, we first introduce FKGC +challenges, commonly used KGs, and CKGs. Then we systematically categorize and +summarize existing works in terms of the type of KGs and the methods. Finally, +we present applications of FKGC models on prediction tasks in different areas +and share our thoughts on future research directions of FKGC. +" +Distillation of encoder-decoder transformers for sequence labelling,Marco Farina,http://arxiv.org/pdf/2302.05454v1.pdf,2023-02-10,"['cs.cl', 'cs.ir']",2302.05454v1.pdf," Driven by encouraging results on a wide range of tasks, the field of NLP is +experiencing an accelerated race to develop bigger language models. This race +for bigger models has also underscored the need to continue the pursuit of +practical distillation approaches that can leverage the knowledge acquired by +these big models in a compute-efficient manner. Having this goal in mind, we +build on recent work to propose a hallucination-free framework for sequence +tagging that is especially suited for distillation. We show empirical results +of new state-of-the-art performance across multiple sequence labelling datasets +and validate the usefulness of this framework for distilling a large model in a +few-shot learning scenario. +" +Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?,Chengwei Qin,http://arxiv.org/pdf/2302.08143v2.pdf,2023-02-16,"['cs.cl', 'cs.ai']",2302.08143v2.pdf," Prompt tuning (PT) which only tunes the embeddings of an additional sequence +of tokens per task, keeping the pre-trained language model (PLM) frozen, has +shown remarkable performance in few-shot learning. Despite this, PT has been +shown to rely heavily on good initialization of the prompt embeddings. In this +work, we study meta prompt tuning (MPT) to systematically explore how +meta-learning can help improve (if it can) cross-task generalization in PT +through learning to initialize the prompt embeddings from other relevant tasks. +We empirically analyze a representative set of meta learning algorithms in a +wide range of adaptation settings with different source/target task +configurations on a large set of few-shot tasks. With extensive experiments and +analysis, we demonstrate the effectiveness of MPT. We find the improvement to +be significant particularly on classification tasks. For other kinds of tasks +such as question answering, we observe that while MPT can outperform PT in most +cases, it does not always outperform multi-task learning. We further provide an +in-depth analysis from the perspective of task similarity. +" +Scalable Prompt Generation for Semi-supervised Learning with Language Models,Yuhang Zhou,http://arxiv.org/pdf/2302.09236v1.pdf,2023-02-18,"['cs.cl', 'cs.ai']",2302.09236v1.pdf," Prompt-based learning methods in semi-supervised learning (SSL) settings have +been shown to be effective on multiple natural language understanding (NLU) +datasets and tasks in the literature. 
However, manually designing multiple +prompts and verbalizers requires domain knowledge and human effort, making it +difficult and expensive to scale across different datasets. In this paper, we +propose two methods to automatically design multiple prompts and integrate +automatic verbalizer in SSL settings without sacrificing performance. The first +method uses various demonstration examples with learnable continuous prompt +tokens to create diverse prompt models. The second method uses a varying number +of soft prompt tokens to encourage language models to learn different prompts. +For the verbalizer, we use the prototypical verbalizer to replace the manual +one. In summary, we obtained the best average accuracy of 73.2% (a relative +improvement of 2.52% over even the previous state-of-the-art SSL method with +manual prompts and verbalizers) in different few-shot learning settings. +" +Language Models are Few-shot Learners for Prognostic Prediction,Zekai Chen,http://arxiv.org/pdf/2302.12692v4.pdf,2023-02-24,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",2302.12692v4.pdf," Clinical prediction is an essential task in the healthcare industry. However, +the recent success of transformers, on which large language models are built, +has not been extended to this domain. In this research, we explore the use of +transformers and language models in prognostic prediction for immunotherapy +using real-world patients' clinical data and molecular profiles. This paper +investigates the potential of transformers to improve clinical prediction +compared to conventional machine learning approaches and addresses the +challenge of few-shot learning in predicting rare disease areas. The study +benchmarks the efficacy of baselines and language models on prognostic +prediction across multiple cancer types and investigates the impact of +different pretrained language models under few-shot regimes. The results +demonstrate significant improvements in accuracy and highlight the potential of +NLP in clinical research to improve early detection and intervention for +different diseases. +" +Pre-Finetuning for Few-Shot Emotional Speech Recognition,Maximillian Chen,http://arxiv.org/pdf/2302.12921v2.pdf,2023-02-24,"['cs.cl', 'cs.lg', 'cs.sd', 'eess.as']",2302.12921v2.pdf," Speech models have long been known to overfit individual speakers for many +classification tasks. This leads to poor generalization in settings where the +speakers are out-of-domain or out-of-distribution, as is common in production +environments. We view speaker adaptation as a few-shot learning problem and +propose investigating transfer learning approaches inspired by recent success +with pre-trained models in natural language tasks. We propose pre-finetuning +speech models on difficult tasks to distill knowledge into few-shot downstream +classification objectives. We pre-finetune Wav2Vec2.0 on every permutation of +four multiclass emotional speech recognition corpora and evaluate our +pre-finetuned models through 33,600 few-shot fine-tuning trials on the +Emotional Speech Dataset. +" +Mixture of Soft Prompts for Controllable Data Generation,Derek Chen,http://arxiv.org/pdf/2303.01580v2.pdf,2023-03-02,['cs.cl'],2303.01580v2.pdf," Large language models (LLMs) effectively generate fluent text when the target +output follows natural language patterns. However, structured prediction tasks +confine the output format to a limited ontology, causing even very large models +to struggle since they were never trained with such restrictions in mind. 
The +difficulty of using LLMs for direct prediction is exacerbated in few-shot +learning scenarios, which commonly arise due to domain shift and resource +limitations. We flip the problem on its head by leveraging the LLM as a tool +for data augmentation rather than direct prediction. Our proposed Mixture of +Soft Prompts (MSP) serves as a parameter-efficient procedure for generating +data in a controlled manner. Denoising mechanisms are further applied to +improve the quality of synthesized data. Automatic metrics show our method is +capable of producing diverse and natural text, while preserving label +semantics. Moreover, MSP achieves state-of-the-art results on three benchmarks +when compared against strong baselines. Our method offers an alternate +data-centric approach for applying LLMs to complex prediction tasks. +" +Prismer: A Vision-Language Model with An Ensemble of Experts,Shikun Liu,http://arxiv.org/pdf/2303.02506v2.pdf,2023-03-04,"['cs.lg', 'cs.ai', 'cs.cv']",2303.02506v2.pdf," Recent vision-language models have shown impressive multi-modal generation +capabilities. However, typically they require training huge models on massive +datasets. As a more scalable alternative, we introduce Prismer, a data- and +parameter-efficient vision-language model that leverages an ensemble of domain +experts. Prismer only requires training of a small number of components, with +the majority of network weights inherited from readily-available, pre-trained +domain experts, and kept frozen during training. By leveraging experts from a +wide range of domains, we show that Prismer can efficiently pool this expert +knowledge and adapt it to various vision-language reasoning tasks. In our +experiments, we show that Prismer achieves fine-tuned and few-shot learning +performance which is competitive with current state-of-the-art models, whilst +requiring up to two orders of magnitude less training data. Code is available +at https://github.com/NVlabs/prismer. +" +Enhancing Activity Prediction Models in Drug Discovery with the Ability to Understand Human Language,Philipp Seidl,http://arxiv.org/pdf/2303.03363v2.pdf,2023-03-06,"['q-bio.bm', 'cs.cl', 'cs.lg', 'stat.ml']",2303.03363v2.pdf," Activity and property prediction models are the central workhorses in drug +discovery and materials sciences, but currently they have to be trained or +fine-tuned for new tasks. Without training or fine-tuning, scientific language +models could be used for such low-data tasks through their announced zero- and +few-shot capabilities. However, their predictive quality at activity prediction +is lacking. In this work, we envision a novel type of activity prediction model +that is able to adapt to new prediction tasks at inference time, via +understanding textual information describing the task. To this end, we propose +a new architecture with separate modules for chemical and natural language +inputs, and a contrastive pre-training objective on data from large biochemical +databases. In extensive experiments, we show that our method CLAMP yields +improved predictive performance on few-shot learning benchmarks and zero-shot +problems in drug discovery. We attribute the advances of our method to the +modularized architecture and to our pre-training objective. 
+" +MenuCraft: Interactive Menu System Design with Large Language Models,Amir Hossein Kargaran,http://arxiv.org/pdf/2303.04496v2.pdf,2023-03-08,"['cs.cl', 'cs.ai', 'cs.hc']",2303.04496v2.pdf," Menu system design is a challenging task involving many design options and +various human factors. For example, one crucial factor that designers need to +consider is the semantic and systematic relation of menu commands. However, +capturing these relations can be challenging due to limited available +resources. With the advancement of neural language models, large language +models can utilize their vast pre-existing knowledge in designing and refining +menu systems. In this paper, we propose MenuCraft, an AI-assisted designer for +menu design that enables collaboration between the designer and a dialogue +system to design menus. MenuCraft offers an interactive language-based menu +design tool that simplifies the menu design process and enables easy +customization of design options. MenuCraft supports a variety of interactions +through dialog that allows performing zero/few-shot learning. +" +Consistency Analysis of ChatGPT,Myeongjun Erik Jang,http://arxiv.org/pdf/2303.06273v2.pdf,2023-03-11,"['cs.cl', 'cs.ai']",2303.06273v2.pdf," ChatGPT has gained a huge popularity since its introduction. Its positive +aspects have been reported through many media platforms, and some analyses even +showed that ChatGPT achieved a decent grade in professional exams, adding extra +support to the claim that AI can now assist and even replace humans in +industrial fields. Others, however, doubt its reliability and trustworthiness. +This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding +logically consistent behaviour, focusing specifically on semantic consistency +and the properties of negation, symmetric, and transitive consistency. Our +findings suggest that while both models appear to show an enhanced language +understanding and reasoning ability, they still frequently fall short of +generating logically consistent predictions. We also ascertain via experiments +that prompt designing, few-shot learning and employing larger large language +models (LLMs) are unlikely to be the ultimate solution to resolve the +inconsistency issue of LLMs. +" +Learning Expressive Prompting With Residuals for Vision Transformers,Rajshekhar Das,http://arxiv.org/pdf/2303.15591v1.pdf,2023-03-27,['cs.cv'],2303.15591v1.pdf," Prompt learning is an efficient approach to adapt transformers by inserting +learnable set of parameters into the input and intermediate representations of +a pre-trained model. In this work, we present Expressive Prompts with Residuals +(EXPRES) which modifies the prompt learning paradigm specifically for effective +adaptation of vision transformers (ViT). Out method constructs downstream +representations via learnable ``output'' tokens, that are akin to the learned +class tokens of the ViT. Further for better steering of the downstream +representation processed by the frozen transformer, we introduce residual +learnable tokens that are added to the output of various computations. We apply +EXPRES for image classification, few shot learning, and semantic segmentation, +and show our method is capable of achieving state of the art prompt tuning on +3/3 categories of the VTAB benchmark. In addition to strong performance, we +observe that our approach is an order of magnitude more prompt efficient than +existing visual prompting baselines. 
We analytically show the computational +benefits of our approach over weight space adaptation techniques like +finetuning. Lastly we systematically corroborate the architectural design of +our method via a series of ablation experiments. +" +Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement,Xiangyang Zhu,http://arxiv.org/pdf/2304.01195v1.pdf,2023-04-03,"['cs.cv', 'cs.ai', 'cs.mm']",2304.01195v1.pdf," The popularity of Contrastive Language-Image Pre-training (CLIP) has +propelled its application to diverse downstream vision tasks. To improve its +capacity on downstream tasks, few-shot learning has become a widely-adopted +technique. However, existing methods either exhibit limited performance or +suffer from excessive learnable parameters. In this paper, we propose APE, an +Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which +achieves superior accuracy with high computational efficiency. Via a prior +refinement module, we analyze the inter-class disparity in the downstream data +and decouple the domain-specific knowledge from the CLIP-extracted cache model. +On top of that, we introduce two model variants, a training-free APE and a +training-required APE-T. We explore the trilateral affinities between the test +image, prior cache model, and textual representations, and only enable a +lightweight category-residual module to be trained. For the average accuracy +over 11 benchmarks, both APE and APE-T attain state-of-the-art and respectively +outperform the second-best by +1.59% and +1.99% under 16 shots with x30 less +learnable parameters. +" +Sociocultural knowledge is needed for selection of shots in hate speech detection tasks,Antonis Maronikolakis,http://arxiv.org/pdf/2304.01890v4.pdf,2023-04-04,"['cs.cl', 'cs.ai', 'cs.lg']",2304.01890v4.pdf," We introduce HATELEXICON, a lexicon of slurs and targets of hate speech for +the countries of Brazil, Germany, India and Kenya, to aid training and +interpretability of models. We demonstrate how our lexicon can be used to +interpret model predictions, showing that models developed to classify extreme +speech rely heavily on target words when making predictions. Further, we +propose a method to aid shot selection for training in low-resource settings +via HATELEXICON. In few-shot learning, the selection of shots is of paramount +importance to model performance. In our work, we simulate a few-shot setting +for German and Hindi, using HASOC data for training and the Multilingual +HateCheck (MHC) as a benchmark. We show that selecting shots based on our +lexicon leads to models performing better on MHC than models trained on shots +sampled randomly. Thus, when given only a few training examples, using our +lexicon to select shots containing more sociocultural information leads to +better few-shot performance. +" +Revisiting Automated Prompting: Are We Actually Doing Better?,Yulin Zhou,http://arxiv.org/pdf/2304.03609v2.pdf,2023-04-07,"['cs.cl', 'cs.lg']",2304.03609v2.pdf," Current literature demonstrates that Large Language Models (LLMs) are great +few-shot learners, and prompting significantly increases their performance on a +range of downstream tasks in a few-shot learning setting. An attempt to +automate human-led prompting followed, with some progress achieved. In +particular, subsequent work demonstrates automation can outperform fine-tuning +in certain K-shot learning scenarios. 
+ In this paper, we revisit techniques for automated prompting on six different +downstream tasks and a larger range of K-shot learning settings. We find that +automated prompting does not consistently outperform simple manual prompts. Our +work suggests that, in addition to fine-tuning, manual prompts should be used +as a baseline in this line of research. +" +MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning,Bohan Li,http://arxiv.org/pdf/2304.09402v1.pdf,2023-04-19,"['cs.cl', 'cs.lg']",2304.09402v1.pdf," Prompt-based learning reformulates downstream tasks as cloze problems by +combining the original input with a template. This technique is particularly +useful in few-shot learning, where a model is trained on a limited amount of +data. However, the limited templates and text used in few-shot prompt-based +learning still leave significant room for performance improvement. +Additionally, existing methods using model ensembles can constrain the model +efficiency. To address these issues, we propose an augmentation method called +MixPro, which augments both the vanilla input text and the templates through +token-level, sentence-level, and epoch-level Mixup strategies. We conduct +experiments on five few-shot datasets, and the results show that MixPro +outperforms other augmentation baselines, improving model performance by an +average of 5.08% compared to before augmentation. +" +Information Extraction from Documents: Question Answering vs Token Classification in real-world setups,Laurent Lam,http://arxiv.org/pdf/2304.10994v1.pdf,2023-04-21,['cs.cl'],2304.10994v1.pdf," Research in Document Intelligence and especially in Document Key Information +Extraction (DocKIE) has been mainly solved as Token Classification problem. +Recent breakthroughs in both natural language processing (NLP) and computer +vision helped building document-focused pre-training methods, leveraging a +multimodal understanding of the document text, layout and image modalities. +However, these breakthroughs also led to the emergence of a new DocKIE subtask +of extractive document Question Answering (DocQA), as part of the Machine +Reading Comprehension (MRC) research field. In this work, we compare the +Question Answering approach with the classical token classification approach +for document key information extraction. We designed experiments to benchmark +five different experimental setups : raw performances, robustness to noisy +environment, capacity to extract long entities, fine-tuning speed on Few-Shot +Learning and finally Zero-Shot Learning. Our research showed that when dealing +with clean and relatively short entities, it is still best to use token +classification-based approach, while the QA approach could be a good +alternative for noisy environment or long entities use-cases. +" +Discern and Answer: Mitigating the Impact of Misinformation in Retrieval-Augmented Models with Discriminators,Giwon Hong,http://arxiv.org/pdf/2305.01579v1.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.01579v1.pdf," Most existing retrieval-augmented language models (LMs) for question +answering assume all retrieved information is factually correct. In this work, +we study a more realistic scenario in which retrieved documents may contain +misinformation, causing conflicts among them. We observe that the existing +models are highly brittle to such information in both fine-tuning and +in-context few-shot learning settings. 
We propose approaches to make +retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a +discriminator or prompting to elicit discrimination capability in GPT-3. Our +empirical results on open-domain question answering show that these approaches +significantly improve LMs' robustness to knowledge conflicts. We also provide +our findings on interleaving the fine-tuned model's decision with the +in-context learning process, paving a new path to leverage the best of both +worlds. +" +Causal Interventions-based Few-Shot Named Entity Recognition,Zhen Yang,http://arxiv.org/pdf/2305.01914v1.pdf,2023-05-03,['cs.cl'],2305.01914v1.pdf," Few-shot named entity recognition (NER) systems aims at recognizing new +classes of entities based on a few labeled samples. A significant challenge in +the few-shot regime is prone to overfitting than the tasks with abundant +samples. The heavy overfitting in few-shot learning is mainly led by spurious +correlation caused by the few samples selection bias. To alleviate the problem +of the spurious correlation in the few-shot NER, in this paper, we propose a +causal intervention-based few-shot NER method. Based on the prototypical +network, the method intervenes in the context and prototype via backdoor +adjustment during training. In particular, intervening in the context of the +one-shot scenario is very difficult, so we intervene in the prototype via +incremental learning, which can also avoid catastrophic forgetting. Our +experiments on different benchmarks show that our approach achieves new +state-of-the-art results (achieving up to 29% absolute improvement and 12% on +average for all tasks). +" +Plug-and-Play Multilingual Few-shot Spoken Words Recognition,Aaqib Saeed,http://arxiv.org/pdf/2305.03058v1.pdf,2023-05-03,"['eess.as', 'cs.lg', 'cs.sd']",2305.03058v1.pdf," As technology advances and digital devices become prevalent, seamless +human-machine communication is increasingly gaining significance. The growing +adoption of mobile, wearable, and other Internet of Things (IoT) devices has +changed how we interact with these smart devices, making accurate spoken words +recognition a crucial component for effective interaction. However, building +robust spoken words detection system that can handle novel keywords remains +challenging, especially for low-resource languages with limited training data. +Here, we propose PLiX, a multilingual and plug-and-play keyword spotting system +that leverages few-shot learning to harness massive real-world data and enable +the recognition of unseen spoken words at test-time. Our few-shot deep models +are learned with millions of one-second audio clips across 20 languages, +achieving state-of-the-art performance while being highly efficient. Extensive +evaluations show that PLiX can generalize to novel spoken words given as few as +just one support example and performs well on unseen languages out of the box. +We release models and inference code to serve as a foundation for future +research and voice-enabled user interface development for emerging devices. +" +Data Curation for Image Captioning with Text-to-Image Generative Models,Wenyan Li,http://arxiv.org/pdf/2305.03610v1.pdf,2023-05-05,"['cs.cv', 'cs.ai', 'cs.cl']",2305.03610v1.pdf," Recent advances in image captioning are mainly driven by large-scale +vision-language pretraining, relying heavily on computational resources and +increasingly large multimodal datasets. 
Instead of scaling up pretraining data, +we ask whether it is possible to improve performance by improving the quality +of the samples in existing datasets. We pursue this question through two +approaches to data curation: one that assumes that some examples should be +avoided due to mismatches between the image and caption, and one that assumes +that the mismatch can be addressed by replacing the image, for which we use the +state-of-the-art Stable Diffusion model. These approaches are evaluated using +the BLIP model on MS COCO and Flickr30K in both finetuning and few-shot +learning settings. Our simple yet effective approaches consistently outperform +baselines, indicating that better image captioning models can be trained by +curating existing resources. Finally, we conduct a human study to understand +the errors made by the Stable Diffusion model and highlight directions for +future work in text-to-image generation. +" +Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives,Qiushi Sun,http://arxiv.org/pdf/2305.08088v1.pdf,2023-05-14,"['cs.cl', 'cs.ai']",2305.08088v1.pdf," Large language models (LLMs) have shown increasing power on various natural +language processing (NLP) tasks. However, tuning these models for downstream +tasks usually needs exorbitant costs or is unavailable due to commercial +considerations. Recently, black-box tuning has been proposed to address this +problem by optimizing task-specific prompts without accessing the gradients and +hidden representations. However, most existing works have yet fully exploited +the potential of gradient-free optimization under the scenario of few-shot +learning. In this paper, we describe BBT-RGB, a suite of straightforward and +complementary techniques for enhancing the efficiency and performance of +black-box optimization. Specifically, our method includes three plug-and-play +components: (1) Two-stage derivative-free optimization strategy that +facilitates fast convergence and mitigates overfitting; (2) Automatic +verbalizer construction with its novel usage under few-shot settings; (3) +Better prompt initialization policy based on instruction search and +auto-selected demonstration. Extensive experiments across various tasks on +natural language understanding and inference demonstrate the effectiveness of +our method. Our codes are publicly available at +https://github.com/QiushiSun/BBT-RGB. +" +CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation Detection in Online Communities,Zihao He,http://arxiv.org/pdf/2305.09846v2.pdf,2023-05-16,"['cs.cl', 'cs.si']",2305.09846v2.pdf," Detecting norm violations in online communities is critical to maintaining +healthy and safe spaces for online discussions. Existing machine learning +approaches often struggle to adapt to the diverse rules and interpretations +across different communities due to the inherent challenges of fine-tuning +models for such context-specific tasks. In this paper, we introduce +Context-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), a +novel method that employs prompt-based learning to detect norm violations +across various types of rules. CPL-NoViD outperforms the baseline by +incorporating context through natural language prompts and demonstrates +improved performance across different rule types. Significantly, it not only +excels in cross-rule-type and cross-community norm violation detection but also +exhibits adaptability in few-shot learning scenarios. 
Most notably, it +establishes a new state-of-the-art in norm violation detection, surpassing +existing benchmarks. Our work highlights the potential of prompt-based learning +for context-sensitive norm violation detection and paves the way for future +research on more adaptable, context-aware models to better support online +community moderators. +" +A Weak Supervision Approach for Few-Shot Aspect Based Sentiment,Robert Vacareanu,http://arxiv.org/pdf/2305.11979v1.pdf,2023-05-19,['cs.cl'],2305.11979v1.pdf," We explore how weak supervision on abundant unlabeled data can be leveraged +to improve few-shot performance in aspect-based sentiment analysis (ABSA) +tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we +use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We +test the resulting model on three widely used ABSA datasets, before and after +fine-tuning. Our proposed method preserves the full fine-tuning performance +while showing significant improvements (15.84% absolute F1) in the few-shot +learning scenario for the harder tasks. In zero-shot (i.e., without +fine-tuning), our method outperforms the previous state of the art on the +aspect extraction sentiment classification (AESC) task and is, additionally, +capable of performing the harder aspect sentiment triplet extraction (ASTE) +task. +" +Efficient Open Domain Multi-Hop Question Answering with Few-Shot Data Synthesis,Mingda Chen,http://arxiv.org/pdf/2305.13691v1.pdf,2023-05-23,['cs.cl'],2305.13691v1.pdf," Few-shot learning for open domain multi-hop question answering typically +relies on large language models (LLMs). While powerful, LLMs are inefficient at +the inference time. We propose a data synthesis framework for multi-hop +question answering that allows for improving smaller language models with less +than 10 human-annotated question answer pairs. The framework is built upon the +data generation functions parameterized by LLMs and prompts, which requires +minimal hand-crafted features. Empirically, we synthesize millions of multi-hop +questions and claims. After finetuning language models on the synthetic data, +we evaluate the models on popular benchmarks on multi-hop question answering +and fact verification. Our experimental results show that finetuning on the +synthetic data improves model performance significantly, allowing our finetuned +models to be competitive with prior models while being almost one-third the +size in terms of parameter counts. +" +Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks,Sherzod Hakimov,http://arxiv.org/pdf/2305.13782v1.pdf,2023-05-23,['cs.cl'],2305.13782v1.pdf," Large language models have demonstrated robust performance on various +language tasks using zero-shot or few-shot learning paradigms. While being +actively researched, multimodal models that can additionally handle images as +input have yet to catch up in size and generality with language-only models. In +this work, we ask whether language-only models can be utilised for tasks that +require visual input -- but also, as we argue, often require a strong reasoning +component. Similar to some recent related work, we make visual information +accessible to the language model using separate verbalisation models. +Specifically, we investigate the performance of open-source, open-access +language models against GPT-3 on five vision-language tasks when given +textually-encoded visual information. 
Our results suggest that language models +are effective for solving vision-language tasks even with limited samples. This +approach also enhances the interpretability of a model's output by providing a +means of tracing the output back through the verbalised image content. +" +Improving Factuality and Reasoning in Language Models through Multiagent Debate,Yilun Du,http://arxiv.org/pdf/2305.14325v1.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2305.14325v1.pdf," Large language models (LLMs) have demonstrated remarkable capabilities in +language generation, understanding, and few-shot learning in recent years. An +extensive body of work has explored how their performance may be further +improved through the tools of prompting, ranging from verification, +self-consistency, or intermediate scratchpads. In this paper, we present a +complementary approach to improve language responses where multiple language +model instances propose and debate their individual responses and reasoning +processes over multiple rounds to arrive at a common final answer. Our findings +indicate that this approach significantly enhances mathematical and strategic +reasoning across a number of tasks. We also demonstrate that our approach +improves the factual validity of generated content, reducing fallacious answers +and hallucinations that contemporary models are prone to. Our approach may be +directly applied to existing black-box models and uses identical procedure and +prompts for all tasks we investigate. Overall, our findings suggest that such +""society of minds"" approach has the potential to significantly advance the +capabilities of LLMs and pave the way for further breakthroughs in language +generation and understanding. +" +Are Large Language Models Robust Zero-shot Coreference Resolvers?,Nghia T. Le,http://arxiv.org/pdf/2305.14489v1.pdf,2023-05-23,['cs.cl'],2305.14489v1.pdf," Recent progress in domain adaptation for coreference resolution relies on +continued training using annotated data from target domains. At the same time, +pre-trained large language models (LMs) have exhibited strong zero- and +few-shot learning abilities across a wide range of NLP tasks including pronoun +resolution. While this demonstrates evidence of coreference ability, previous +work has mostly studied this ability using simple sentence-level datasets such +as the Winograd Schema Challenge. In this work, we assess the feasibility of +zero-shot learning for coreference resolution by evaluating instruction-tuned +language models on more difficult, linguistically-complex coreference +benchmarks (e.g., CoNLL-2012). We demonstrate that zero-shot prompting +outperforms current unsupervised coreference systems. Further investigations +reveal the robust zero-shot generalization ability of instruction-tuned LMs +across a wide range of domains, languages, and time periods, as well as a +strong reliance on high-quality mention detection systems. +" +Training on Thin Air: Improve Image Classification with Generated Data,Yongchao Zhou,http://arxiv.org/pdf/2305.15316v1.pdf,2023-05-24,"['cs.cv', 'cs.lg']",2305.15316v1.pdf," Acquiring high-quality data for training discriminative models is a crucial +yet challenging aspect of building effective predictive systems. In this paper, +we present Diffusion Inversion, a simple yet effective method that leverages +the pre-trained generative model, Stable Diffusion, to generate diverse, +high-quality training data for image classification. 
Our approach captures the +original data distribution and ensures data coverage by inverting images to the +latent space of Stable Diffusion, and generates diverse novel training images +by conditioning the generative model on noisy versions of these vectors. We +identify three key components that allow our generated images to successfully +supplant the original dataset, leading to a 2-3x enhancement in sample +complexity and a 6.5x decrease in sampling time. Moreover, our approach +consistently outperforms generic prompt-based steering methods and KNN +retrieval baseline across a wide range of datasets. Additionally, we +demonstrate the compatibility of our approach with widely-used data +augmentation techniques, as well as the reliability of the generated data in +supporting various neural architectures and enhancing few-shot learning. +" +ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation,Kuan-Hao Huang,http://arxiv.org/pdf/2305.16585v1.pdf,2023-05-26,['cs.cl'],2305.16585v1.pdf," Paraphrase generation is a long-standing task in natural language processing +(NLP). Supervised paraphrase generation models, which rely on human-annotated +paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand, +automatically annotated paraphrase pairs (e.g., by machine back-translation), +usually suffer from the lack of syntactic diversity -- the generated paraphrase +sentences are very similar to the source sentences in terms of syntax. In this +work, we present ParaAMR, a large-scale syntactically diverse paraphrase +dataset created by abstract meaning representation back-translation. Our +quantitative analysis, qualitative examples, and human evaluation demonstrate +that the paraphrases of ParaAMR are syntactically more diverse compared to +existing large-scale paraphrase datasets while preserving good semantic +similarity. In addition, we show that ParaAMR can be used to improve on three +NLP tasks: learning sentence embeddings, syntactically controlled paraphrase +generation, and data augmentation for few-shot learning. Our results thus +showcase the potential of ParaAMR for improving various NLP applications. +" +Adapting Language-Audio Models as Few-Shot Audio Learners,Jinhua Liang,http://arxiv.org/pdf/2305.17719v1.pdf,2023-05-28,"['eess.as', 'cs.sd']",2305.17719v1.pdf," We presented the Treff adapter, a training-efficient adapter for CLAP, to +boost zero-shot classification performance by making use of a small set of +labelled data. Specifically, we designed CALM to retrieve the probability +distribution of text-audio clips over classes using a set of audio-label pairs +and combined it with CLAP's zero-shot classification results. Furthermore, we +designed a training-free version of the Treff adapter by using CALM as a cosine +similarity measure. Experiments showed that the proposed Treff adapter is +comparable and even better than fully-supervised methods and adaptation methods +in low-shot and data-abundant scenarios. While the Treff adapter shows that +combining large-scale pretraining and rapid learning of domain-specific +knowledge is non-trivial for obtaining generic representations for few-shot +learning, it is still limited to audio classification tasks. In the future, we +will explore how to use audio-language models in diverse audio domains. 
+" +Transfer Learning for Power Outage Detection Task with Limited Training Data,Olukunle Owolabi,http://arxiv.org/pdf/2305.17817v1.pdf,2023-05-28,"['cs.cl', 'stat.ap']",2305.17817v1.pdf," Early detection of power outages is crucial for maintaining a reliable power +distribution system. This research investigates the use of transfer learning +and language models in detecting outages with limited labeled data. By +leveraging pretraining and transfer learning, models can generalize to unseen +classes. + Using a curated balanced dataset of social media tweets related to power +outages, we conducted experiments using zero-shot and few-shot learning. Our +hypothesis is that Language Models pretrained with limited data could achieve +high performance in outage detection tasks over baseline models. Results show +that while classical models outperform zero-shot Language Models, few-shot +fine-tuning significantly improves their performance. For example, with 10% +fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% +accuracy (+8.5%). This has practical implications for analyzing and localizing +outages in scenarios with limited data availability. + Our evaluation provides insights into the potential of few-shot fine-tuning +with Language Models for power outage detection, highlighting their strengths +and limitations. This research contributes to the knowledge base of leveraging +advanced natural language processing techniques for managing critical +infrastructure. +" +Deeply Coupled Cross-Modal Prompt Learning,Xuejing Liu,http://arxiv.org/pdf/2305.17903v2.pdf,2023-05-29,['cs.cv'],2305.17903v2.pdf," Recent advancements in multimodal foundation models (e.g., CLIP) have +excelled in zero-shot generalization. Prompt tuning involved in the knowledge +transfer from foundation models to downstream tasks has gained significant +attention recently. Existing prompt-tuning methods in cross-modal learning, +however, either solely focus on language branch, or learn vision-language +interaction in a shallow mechanism. In this context, we propose a Deeply +coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly +accommodates the interplay between vision and language with a Cross-Modal +Prompt Attention (CMPA) mechanism, which enables the mutual exchange of +respective representation through a well-connected multi-head attention module +progressively and strongly. We then conduct comprehensive few-shot learning +experiments on 11 image classification datasets and analyze the robustness to +domain shift as well. Thorough experimental analysis evidently demonstrates the +superb few-shot generalization and compelling domain adaption capacity of a +well-executed DCP. The code can be found at https://github.com/GingL/CMPA. +" +"What does the Failure to Reason with ""Respectively"" in Zero/Few-Shot Settings Tell Us about Language Models?",Ruixiang Cui,http://arxiv.org/pdf/2305.19597v1.pdf,2023-05-31,"['cs.cl', 'cs.ai']",2305.19597v1.pdf," Humans can effortlessly understand the coordinate structure of sentences such +as ""Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, +respectively"". In the context of natural language inference (NLI), we examine +how language models (LMs) reason with respective readings (Gawron and Kehler, +2004) from two perspectives: syntactic-semantic and commonsense-world +knowledge. 
We propose a controlled synthetic dataset WikiResNLI and a naturally
+occurring dataset NatResNLI to encompass various explicit and implicit
+realizations of ""respectively"". We show that fine-tuned NLI models struggle
+with understanding such readings without explicit supervision. While few-shot
+learning is easy in the presence of explicit cues, longer training is required
+when the reading is evoked implicitly, leaving models to rely on common sense
+inferences. Furthermore, our fine-grained analysis indicates models fail to
+generalize across different constructions. To conclude, we demonstrate that LMs
+still lag behind humans in generalizing to the long tail of linguistic
+constructions.
+"
+Measuring the Robustness of Natural Language Processing Models to Domain Shifts,Nitay Calderon,http://arxiv.org/pdf/2306.00168v2.pdf,2023-05-31,['cs.cl'],2306.00168v2.pdf," Existing research on Domain Robustness (DR) suffers from disparate setups,
+lack of evaluation task variety, and reliance on challenge sets. In this paper,
+we pose a fundamental question: What is the state of affairs of the DR
+challenge in the era of Large Language Models (LLMs)? To this end, we construct
+a DR benchmark comprising diverse NLP tasks, including sentence and token-level
+classification, QA, and generation, each task consists of several domains. We
+explore the DR challenge of fine-tuned and few-shot learning models in natural
+domain shift settings and devise two diagnostic metrics of Out-of-Distribution
+(OOD) performance degradation: The commonly used Source Drop (SD) and the
+overlooked Target Drop (TD). Our findings reveal important insights: First,
+despite their capabilities, zero-to-few shot LLMs and fine-tuning approaches
+still fail to meet satisfactory performance in the OOD context; Second, TD
+approximates better than SD the average OOD degradation; Third, in a
+significant proportion of domain shifts, either SD or TD is positive, but not
+both, and therefore disregarding one can lead to incorrect DR conclusions.
+"
+Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language,Kevin Ellis,http://arxiv.org/pdf/2306.02797v3.pdf,2023-06-05,"['cs.cl', 'cs.ai', 'cs.lg']",2306.02797v3.pdf," A core tension in models of concept learning is that the model must carefully
+balance the tractability of inference against the expressivity of the
+hypothesis class. Humans, however, can efficiently learn a broad range of
+concepts. We introduce a model of inductive learning that seeks to be
+human-like in that sense. It implements a Bayesian reasoning process where a
+language model first proposes candidate hypotheses expressed in natural
+language, which are then re-weighed by a prior and a likelihood. By estimating
+the prior from human data, we can predict human judgments on learning problems
+involving numbers and sets, spanning concepts that are generative,
+discriminative, propositional, and higher-order.
+"
+Few Shot Rationale Generation using Self-Training with Dual Teachers,Aditya Srikanth Veerubhotla,http://arxiv.org/pdf/2306.03315v1.pdf,2023-06-05,"['cs.cl', 'cs.ai']",2306.03315v1.pdf," Self-rationalizing models that also generate a free-text explanation for
+their predicted labels are an important tool to build trustworthy AI
+applications. Since generating explanations for annotated labels is a laborious
+and costly process, recent models rely on large pretrained language models
+(PLMs) as their backbone and few-shot learning. 
In this work we explore a +self-training approach leveraging both labeled and unlabeled data to further +improve few-shot models, under the assumption that neither human written +rationales nor annotated task labels are available at scale. We introduce a +novel dual-teacher learning framework, which learns two specialized teacher +models for task prediction and rationalization using self-training and distills +their knowledge into a multi-tasking student model that can jointly generate +the task label and rationale. Furthermore, we formulate a new loss function, +Masked Label Regularization (MLR) which promotes explanations to be strongly +conditioned on predicted labels. Evaluation on three public datasets +demonstrate that the proposed methods are effective in modeling task labels and +generating faithful rationales. +" +A New Dataset and Empirical Study for Sentence Simplification in Chinese,Shiping Yang,http://arxiv.org/pdf/2306.04188v1.pdf,2023-06-07,['cs.cl'],2306.04188v1.pdf," Sentence Simplification is a valuable technique that can benefit language +learners and children a lot. However, current research focuses more on English +sentence simplification. The development of Chinese sentence simplification is +relatively slow due to the lack of data. To alleviate this limitation, this +paper introduces CSS, a new dataset for assessing sentence simplification in +Chinese. We collect manual simplifications from human annotators and perform +data analysis to show the difference between English and Chinese sentence +simplifications. Furthermore, we test several unsupervised and zero/few-shot +learning methods on CSS and analyze the automatic evaluation and human +evaluation results. In the end, we explore whether Large Language Models can +serve as high-quality Chinese sentence simplification systems by evaluating +them on CSS. +" +Can AI Moderate Online Communities?,Henrik Axelsen,http://arxiv.org/pdf/2306.05122v1.pdf,2023-06-08,['cs.cy'],2306.05122v1.pdf," The task of cultivating healthy communication in online communities becomes +increasingly urgent, as gaming and social media experiences become +progressively more immersive and life-like. We approach the challenge of +moderating online communities by training student models using a large language +model (LLM). We use zero-shot learning models to distill and expand datasets +followed by a few-shot learning and a fine-tuning approach, leveraging +open-access generative pre-trained transformer models (GPT) from OpenAI. Our +preliminary findings suggest, that when properly trained, LLMs can excel in +identifying actor intentions, moderating toxic comments, and rewarding positive +contributions. The student models perform above-expectation in non-contextual +assignments such as identifying classically toxic behavior and perform +sufficiently on contextual assignments such as identifying positive +contributions to online discourse. Further, using open-access models like +OpenAI's GPT we experience a step-change in the development process for what +has historically been a complex modeling task. We contribute to the information +system (IS) discourse with a rapid development framework on the application of +generative AI in content online moderation and management of culture in +decentralized, pseudonymous communities by providing a sample model suite of +industrial-ready generative AI models based on open-access LLMs. 
+" +The ADAIO System at the BEA-2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues,Adaeze Adigwe,http://arxiv.org/pdf/2306.05360v1.pdf,2023-06-08,"['cs.cl', 'cs.ai', 'cs.cy']",2306.05360v1.pdf," This paper presents the ADAIO team's system entry in the Building Educational +Applications (BEA) 2023 Shared Task on Generating AI Teacher Responses in +Educational Dialogues. The task aims to assess the performance of +state-of-the-art generative models as AI teachers in producing suitable +responses within a student-teacher dialogue. Our system comprises evaluating +various baseline models using OpenAI GPT-3 and designing diverse prompts to +prompt the OpenAI models for teacher response generation. After the challenge, +our system achieved second place by employing a few-shot prompt-based approach +with the OpenAI text-davinci-003 model. The results highlight the few-shot +learning capabilities of large-language models, particularly OpenAI's GPT-3, in +the role of AI teachers. +" +Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning,Giridhar Kaushik Ramachandran,http://arxiv.org/pdf/2306.07170v1.pdf,2023-06-12,['cs.cl'],2306.07170v1.pdf," Social determinants of health (SDOH) documented in the electronic health +record through unstructured text are increasingly being studied to understand +how SDOH impacts patient health outcomes. In this work, we utilize the Social +History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified +social history sections annotated for SDOH, including substance use, +employment, and living status information. We explore the automatic extraction +of SDOH information with SHAC in both standoff and inline annotation formats +using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction +performance with a high-performing supervised approach and perform thorough +error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on +the SHAC test set, similar to the 7th best-performing system among all teams in +the n2c2 challenge with SHAC. +" +Rethink the Effectiveness of Text Data Augmentation: An Empirical Analysis,Zhengxiang Shi,http://arxiv.org/pdf/2306.07664v1.pdf,2023-06-13,"['cs.cl', 'cs.ai', 'cs.lg']",2306.07664v1.pdf," In recent years, language models (LMs) have made remarkable progress in +advancing the field of natural language processing (NLP). However, the impact +of data augmentation (DA) techniques on the fine-tuning (FT) performance of +these LMs has been a topic of ongoing debate. In this study, we evaluate the +effectiveness of three different FT methods in conjugation with +back-translation across an array of 7 diverse NLP tasks, including +classification and regression types, covering single-sentence and sentence-pair +tasks. Contrary to prior assumptions that DA does not contribute to the +enhancement of LMs' FT performance, our findings reveal that continued +pre-training on augmented data can effectively improve the FT performance of +the downstream tasks. In the most favourable case, continued pre-training +improves the performance of FT by more than 10% in the few-shot learning +setting. Our finding highlights the potential of DA as a powerful tool for +bolstering LMs' performance. 
+" +Neural Fine-Tuning Search for Few-Shot Learning,Panagiotis Eustratiadis,http://arxiv.org/pdf/2306.09295v1.pdf,2023-06-15,"['cs.cv', 'cs.lg']",2306.09295v1.pdf," In few-shot recognition, a classifier that has been trained on one set of +classes is required to rapidly adapt and generalize to a disjoint, novel set of +classes. To that end, recent studies have shown the efficacy of fine-tuning +with carefully crafted adaptation architectures. However this raises the +question of: How can one design the optimal adaptation strategy? In this paper, +we study this question through the lens of neural architecture search (NAS). +Given a pre-trained neural network, our algorithm discovers the optimal +arrangement of adapters, which layers to keep frozen and which to fine-tune. We +demonstrate the generality of our NAS method by applying it to both residual +networks and vision transformers and report state-of-the-art performance on +Meta-Dataset and Meta-Album. +" +Multilingual Few-Shot Learning via Language Model Retrieval,Genta Indra Winata,http://arxiv.org/pdf/2306.10964v1.pdf,2023-06-19,['cs.cl'],2306.10964v1.pdf," Transformer-based language models have achieved remarkable success in +few-shot in-context learning and drawn a lot of research interest. However, +these models' performance greatly depends on the choice of the example prompts +and also has high variability depending on how samples are chosen. In this +paper, we conduct a comprehensive study of retrieving semantically similar +few-shot samples and using them as the context, as it helps the model decide +the correct label without any gradient update in the multilingual and +cross-lingual settings. We evaluate the proposed method on five natural +language understanding datasets related to intent detection, question +classification, sentiment analysis, and topic classification. The proposed +method consistently outperforms random sampling in monolingual and +cross-lingual tasks in non-English languages. +" +Language models are weak learners,Hariharan Manikandan,http://arxiv.org/pdf/2306.14101v1.pdf,2023-06-25,"['cs.lg', 'cs.ai']",2306.14101v1.pdf," A central notion in practical and theoretical machine learning is that of a +$\textit{weak learner}$, classifiers that achieve better-than-random +performance (on any given distribution over data), even by a small margin. Such +weak learners form the practical basis for canonical machine learning methods +such as boosting. In this work, we illustrate that prompt-based large language +models can operate effectively as said weak learners. Specifically, we +illustrate the use of a large language model (LLM) as a weak learner in a +boosting algorithm applied to tabular data. We show that by providing (properly +sampled according to the distribution of interest) text descriptions of tabular +data samples, LLMs can produce a summary of the samples that serves as a +template for classification and achieves the aim of acting as a weak learner on +this task. We incorporate these models into a boosting approach, which in some +settings can leverage the knowledge within the LLM to outperform traditional +tree-based boosting. The model outperforms both few-shot learning and +occasionally even more involved fine-tuning procedures, particularly for tasks +involving small numbers of data points. The results illustrate the potential +for prompt-based LLMs to function not just as few-shot learners themselves, but +as components of larger machine learning pipelines. 
+" +RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations,Yilun Zhao,http://arxiv.org/pdf/2306.14321v1.pdf,2023-06-25,"['cs.cl', 'cs.ai']",2306.14321v1.pdf," Despite significant progress having been made in question answering on +tabular data (Table QA), it's unclear whether, and to what extent existing +Table QA models are robust to task-specific perturbations, e.g., replacing key +question entities or shuffling table columns. To systematically study the +robustness of Table QA models, we propose a benchmark called RobuT, which +builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and +includes human-annotated adversarial perturbations in terms of table header, +table content, and question. Our results indicate that both state-of-the-art +Table QA models and large language models (e.g., GPT-3) with few-shot learning +falter in these adversarial sets. We propose to address this problem by using +large language models to generate adversarial examples to enhance training, +which significantly improves the robustness of Table QA models. Our data and +code is publicly available at https://github.com/yilunzhao/RobuT. +" +Benchmarking Large Language Model Capabilities for Conditional Generation,Joshua Maynez,http://arxiv.org/pdf/2306.16793v1.pdf,2023-06-29,['cs.cl'],2306.16793v1.pdf," Pre-trained large language models (PLMs) underlie most new developments in +natural language processing. They have shifted the field from +application-specific model pipelines to a single model that is adapted to a +wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside +techniques like few-shot learning, have additionally shifted the output +modality to generation instead of classification or regression. Despite their +ubiquitous use, the generation quality of language models is rarely evaluated +when these models are introduced. Additionally, it is unclear how existing +generation tasks--while they can be used to compare systems at a high +level--relate to the real world use cases for which people have been adopting +them. In this work, we discuss how to adapt existing application-specific +generation benchmarks to PLMs and provide an in-depth, empirical study of the +limitations and capabilities of PLMs in natural language generation tasks along +dimensions such as scale, architecture, input and output language. Our results +show that PLMs differ in their applicability to different data regimes and +their generalization to multiple languages and inform which PLMs to use for a +given generation task setup. We share best practices to be taken into +consideration when benchmarking generation capabilities during the development +of upcoming PLMs. +" +On Conditional and Compositional Language Model Differentiable Prompting,Jonathan Pilault,http://arxiv.org/pdf/2307.01446v1.pdf,2023-07-04,"['cs.cl', 'cs.lg']",2307.01446v1.pdf," Prompts have been shown to be an effective method to adapt a frozen +Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts +can be represented by a human-engineered word sequence or by a learned +continuous embedding. In this work, we investigate conditional and +compositional differentiable prompting. We propose a new model, Prompt +Production System (PRopS), which learns to transform task instructions or input +metadata, into continuous prompts that elicit task-specific outputs from the +PLM. 
Our model uses a modular network structure based on our neural formulation +of Production Systems, which allows the model to learn discrete rules -- neural +functions that learn to specialize in transforming particular prompt input +patterns, making it suitable for compositional transfer learning and few-shot +learning. We present extensive empirical and theoretical analysis and show that +PRopS consistently surpasses other PLM adaptation techniques, and often +improves upon fully fine-tuned models, on compositional generalization tasks, +controllable summarization and multilingual translation, while needing fewer +trainable parameters. +" +Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking,Brendan King,http://arxiv.org/pdf/2307.01453v1.pdf,2023-07-04,['cs.cl'],2307.01453v1.pdf," There has been significant interest in zero and few-shot learning for +dialogue state tracking (DST) due to the high cost of collecting and annotating +task-oriented dialogues. Recent work has demonstrated that in-context learning +requires very little data and zero parameter updates, and even outperforms +trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST, +which advances the state of the art with three advancements to in-context +learning for DST. First, we formulate DST as a Python programming task, +explicitly modeling language coreference as variable reference in Python. +Second, since in-context learning depends highly on the context examples, we +propose a method to retrieve a diverse set of relevant examples to improve +performance. Finally, we introduce a novel re-weighting method during decoding +that takes into account probabilities of competing surface forms, and produces +a more accurate dialogue state prediction. We evaluate our approach using +MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero +and few-shot settings. +" +Generating Efficient Training Data via LLM-based Attribute Manipulation,Letian Peng,http://arxiv.org/pdf/2307.07099v1.pdf,2023-07-14,['cs.cl'],2307.07099v1.pdf," In this paper, we propose a novel method, Chain-of-Thoughts Attribute +Manipulation (CoTAM), to guide few-shot learning by carefully crafted data from +Large Language Models (LLMs). The main idea is to create data with changes only +in the attribute targeted by the task. Inspired by facial attribute +manipulation, our approach generates label-switched data by leveraging LLMs to +manipulate task-specific attributes and reconstruct new sentences in a +controlled manner. Instead of conventional latent representation controlling, +we implement chain-of-thoughts decomposition and reconstruction to adapt the +procedure to LLMs. Extensive results on text classification and other tasks +verify the advantage of CoTAM over other LLM-based text generation methods with +the same number of training examples. Analysis visualizes the attribute +manipulation effectiveness of CoTAM and presents the potential of LLM-guided +learning with even less supervision. +" +Overthinking the Truth: Understanding how Language Models Process False Demonstrations,Danny Halawi,http://arxiv.org/pdf/2307.09476v1.pdf,2023-07-18,"['cs.lg', 'cs.ai', 'cs.cl']",2307.09476v1.pdf," Modern language models can imitate complex patterns through few-shot +learning, enabling them to complete challenging tasks without fine-tuning. +However, imitation can also lead models to reproduce inaccuracies or harmful +content if present in the context. 
We study harmful imitation through the lens +of a model's internal representations, and identify two related phenomena: +overthinking and false induction heads. The first phenomenon, overthinking, +appears when we decode predictions from intermediate layers, given correct vs. +incorrect few-shot demonstrations. At early layers, both demonstrations induce +similar model behavior, but the behavior diverges sharply at some ""critical +layer"", after which the accuracy given incorrect demonstrations progressively +decreases. The second phenomenon, false induction heads, are a possible +mechanistic cause of overthinking: these are heads in late layers that attend +to and copy false information from previous demonstrations, and whose ablation +reduces overthinking. Beyond scientific understanding, our results suggest that +studying intermediate model computations could be a promising avenue for +understanding and guarding against harmful model behaviors. +" +Does Correction Remain A Problem For Large Language Models?,Xiaowu Zhang,http://arxiv.org/pdf/2308.01776v2.pdf,2023-08-03,['cs.cl'],2308.01776v2.pdf," As large language models, such as GPT, continue to advance the capabilities +of natural language processing (NLP), the question arises: does the problem of +correction still persist? This paper investigates the role of correction in the +context of large language models by conducting two experiments. The first +experiment focuses on correction as a standalone task, employing few-shot +learning techniques with GPT-like models for error correction. The second +experiment explores the notion of correction as a preparatory task for other +NLP tasks, examining whether large language models can tolerate and perform +adequately on texts containing certain levels of noise or errors. By addressing +these experiments, we aim to shed light on the significance of correction in +the era of large language models and its implications for various NLP +applications. +" +Thespian: Multi-Character Text Role-Playing Game Agents,Christopher Cui,http://arxiv.org/pdf/2308.01872v1.pdf,2023-08-03,"['cs.ai', 'cs.cl']",2308.01872v1.pdf," Text-adventure games and text role-playing games are grand challenges for +reinforcement learning game playing agents. Text role-playing games are +open-ended environments where an agent must faithfully play a particular +character. We consider the distinction between characters and actors, where an +actor agent has the ability to play multiple characters. We present a framework +we call a thespian agent that can learn to emulate multiple characters along +with a soft prompt that can be used to direct it as to which character to play +at any time. We further describe an attention mechanism that allows the agent +to learn new characters that are based on previously learned characters in a +few-shot fashion. We show that our agent outperforms the state of the art agent +framework in multi-character learning and few-shot learning. +" +Meta-learning in healthcare: A survey,Alireza Rafiei,http://arxiv.org/pdf/2308.02877v1.pdf,2023-08-05,"['cs.lg', 'cs.ai']",2308.02877v1.pdf," As a subset of machine learning, meta-learning, or learning to learn, aims at +improving the model's capabilities by employing prior knowledge and experience. +A meta-learning paradigm can appropriately tackle the conventional challenges +of traditional learning approaches, such as insufficient number of samples, +domain shifts, and generalization. 
These unique characteristics position +meta-learning as a suitable choice for developing influential solutions in +various healthcare contexts, where the available data is often insufficient, +and the data collection methodologies are different. This survey discusses +meta-learning broad applications in the healthcare domain to provide insight +into how and where it can address critical healthcare challenges. We first +describe the theoretical foundations and pivotal methods of meta-learning. We +then divide the employed meta-learning approaches in the healthcare domain into +two main categories of multi/single-task learning and many/few-shot learning +and survey the studies. Finally, we highlight the current challenges in +meta-learning research, discuss the potential solutions and provide future +perspectives on meta-learning in healthcare. +" +AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models,Siheng Li,http://arxiv.org/pdf/2308.06507v1.pdf,2023-08-12,['cs.cl'],2308.06507v1.pdf," Information-seeking conversation, which aims to help users gather information +through conversation, has achieved great progress in recent years. However, the +research is still stymied by the scarcity of training data. To alleviate this +problem, we propose AutoConv for synthetic conversation generation, which takes +advantage of the few-shot learning ability and generation capacity of large +language models (LLM). Specifically, we formulate the conversation generation +problem as a language modeling task, then finetune an LLM with a few human +conversations to capture the characteristics of the information-seeking process +and use it for generating synthetic conversations with high quality. +Experimental results on two frequently-used datasets verify that AutoConv has +substantial improvements over strong baselines and alleviates the dependence on +human annotation. In addition, we also provide several analysis studies to +promote future research. +" +Few-shot Class-incremental Learning: A Survey,Jinghua Zhang,http://arxiv.org/pdf/2308.06764v1.pdf,2023-08-13,"['cs.lg', 'cs.ai']",2308.06764v1.pdf," Few-shot Class-Incremental Learning (FSCIL) presents a unique challenge in +machine learning, as it necessitates the continuous learning of new classes +from sparse labeled training samples without forgetting previous knowledge. +While this field has seen recent progress, it remains an active area of +exploration. This paper aims to provide a comprehensive and systematic review +of FSCIL. In our in-depth examination, we delve into various facets of FSCIL, +encompassing the problem definition, the discussion of primary challenges of +unreliable empirical risk minimization and the stability-plasticity dilemma, +general schemes, and relevant problems of incremental learning and few-shot +learning. Besides, we offer an overview of benchmark datasets and evaluation +metrics. Furthermore, we introduce the classification methods in FSCIL from +data-based, structure-based, and optimization-based approaches and the object +detection methods in FSCIL from anchor-free and anchor-based approaches. Beyond +these, we illuminate several promising research directions within FSCIL that +merit further investigation. 
+"
+Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation,William Shen,http://arxiv.org/pdf/2308.07931v1.pdf,2023-07-27,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",2308.07931v1.pdf," Self-supervised and language-supervised image models contain rich knowledge
+of the world that is important for generalization. Many robotic tasks, however,
+require a detailed understanding of 3D geometry, which is often lacking in 2D
+image features. This work bridges this 2D-to-3D gap for robotic manipulation by
+leveraging distilled feature fields to combine accurate 3D geometry with rich
+semantics from 2D foundation models. We present a few-shot learning method for
+6-DOF grasping and placing that harnesses these strong spatial and semantic
+priors to achieve in-the-wild generalization to unseen objects. Using features
+distilled from a vision-language model, CLIP, we present a way to designate
+novel objects for manipulation via free-text natural language, and demonstrate
+its ability to generalize to unseen expressions and novel categories of
+objects.
+"
+Refashioning Emotion Recognition Modelling: The Advent of Generalised Large Models,Zixing Zhang,http://arxiv.org/pdf/2308.11578v1.pdf,2023-08-21,"['cs.cl', 'cs.ai', 'cs.lg']",2308.11578v1.pdf," After the inception of emotion recognition or affective computing, it has
+increasingly become an active research topic due to its broad applications.
+Over the past couple of decades, emotion recognition models have gradually
+migrated from statistically shallow models to neural network-based deep models,
+which can significantly boost the performance of emotion recognition models and
+consistently achieve the best results on different benchmarks. Therefore, in
+recent years, deep models have always been considered the first option for
+emotion recognition. However, the debut of large language models (LLMs), such
+as ChatGPT, has remarkably astonished the world due to their emerged
+capabilities of zero/few-shot learning, in-context learning, chain-of-thought,
+and others that are never shown in previous deep models. In the present paper,
+we comprehensively investigate how the LLMs perform in emotion recognition in
+terms of diverse aspects, including in-context learning, few-shot learning,
+accuracy, generalisation, and explanation. Moreover, we offer some insights and
+pose other potential challenges, hoping to ignite broader discussions about
+enhancing emotion recognition in the new era of advanced and generalised large
+models.
+"
+Gpachov at CheckThat! 2023: A Diverse Multi-Approach Ensemble for Subjectivity Detection in News Articles,Georgi Pachov,http://arxiv.org/pdf/2309.06844v1.pdf,2023-09-13,"['cs.cl', 'cs.ai', 'cs.mm']",2309.06844v1.pdf," The wide-spread use of social networks has given rise to subjective,
+misleading, and even false information on the Internet. Thus, subjectivity
+detection can play an important role in ensuring the objectiveness and the
+quality of a piece of information. This paper presents the solution built by
+the Gpachov team for the CLEF-2023 CheckThat! lab Task 2 on subjectivity
+detection. Three different research directions are explored. The first one is
+based on fine-tuning a sentence embeddings encoder model and dimensionality
+reduction. The second one explores a sample-efficient few-shot learning model.
+The third one evaluates fine-tuning a multilingual transformer on an altered
+dataset, using data from multiple languages.
Finally, the three approaches are +combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on +the test set and achieving 2nd place on the English subtask. +" +"An Empathy-Based Sandbox Approach to Bridge Attitudes, Goals, Knowledge, and Behaviors in the Privacy Paradox",Chaoran Chen,http://arxiv.org/pdf/2309.14510v1.pdf,2023-09-25,['cs.hc'],2309.14510v1.pdf," The ""privacy paradox"" describes the discrepancy between users' privacy +attitudes and their actual behaviors. Mitigating this discrepancy requires +solutions that account for both system opaqueness and users' hesitations in +testing different privacy settings due to fears of unintended data exposure. We +introduce an empathy-based approach that allows users to experience how privacy +behaviors may alter system outcomes in a risk-free sandbox environment from the +perspective of artificially generated personas. To generate realistic personas, +we introduce a novel pipeline that augments the outputs of large language +models using few-shot learning, contextualization, and chain of thoughts. Our +empirical studies demonstrated the adequate quality of generated personas and +highlighted the changes in privacy-related applications (e.g., online +advertising) caused by different personas. Furthermore, users demonstrated +cognitive and emotional empathy towards the personas when interacting with our +sandbox. We offered design implications for downstream applications in +improving user privacy literacy and promoting behavior changes. +" +Boosting In-Context Learning with Factual Knowledge,Jianing Wang,http://arxiv.org/pdf/2309.14771v1.pdf,2023-09-26,"['cs.cl', 'cs.ai']",2309.14771v1.pdf," In-Context Learning (ICL) over Large language models (LLMs) aims at solving +previously unseen tasks by conditioning on a few training examples, eliminating +the need for parameter updates and achieving competitive performance. In this +paper, we demonstrate that factual knowledge is imperative for the performance +of ICL in three core facets, i.e., the inherent knowledge learned in LLMs, the +factual knowledge derived from the selected in-context examples, and the +knowledge biases in LLMs for output generation. To unleash the power of LLMs in +few-shot learning scenarios, we introduce a novel Knowledgeable In-Context +Tuning (KICT) framework to further improve the performance of ICL: 1) injecting +factual knowledge to LLMs during continual self-supervised pre-training, 2) +judiciously selecting the examples with high knowledge relevance, and 3) +calibrating the prediction results based on prior knowledge. We evaluate the +proposed approaches on auto-regressive LLMs (e.g., GPT-style models) over +multiple text classification and question answering tasks. Experimental results +demonstrate that KICT substantially outperforms strong baselines, and improves +by more than 13% and 7% of accuracy on text classification and question +answering tasks, respectively. +" +Small Visual Language Models can also be Open-Ended Few-Shot Learners,Mohammad Mahdi Derakhshani,http://arxiv.org/pdf/2310.00500v1.pdf,2023-09-30,['cs.cv'],2310.00500v1.pdf," We present Self-Context Adaptation (SeCAt), a self-supervised approach that +unlocks open-ended few-shot abilities of small visual language models. Our +proposed adaptation algorithm explicitly learns from symbolic, yet +self-supervised training tasks. 
Specifically, our approach imitates image +captions in a self-supervised way based on clustering a large pool of images +followed by assigning semantically-unrelated names to clusters. By doing so, we +construct the `self-context', a training signal consisting of interleaved +sequences of image and pseudo-caption pairs and a query image for which the +model is trained to produce the right pseudo-caption. We demonstrate the +performance and flexibility of SeCAt on several multimodal few-shot datasets, +spanning various granularities. By using models with approximately 1B +parameters we outperform the few-shot abilities of much larger models, such as +Frozen and FROMAGe. SeCAt opens new possibilities for research in open-ended +few-shot learning that otherwise requires access to large or proprietary +models. +" +Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation,Matthias Lindemann,http://arxiv.org/pdf/2310.00796v1.pdf,2023-10-01,['cs.cl'],2310.00796v1.pdf," Strong inductive biases enable learning from little data and help +generalization outside of the training distribution. Popular neural +architectures such as Transformers lack strong structural inductive biases for +seq2seq NLP tasks on their own. Consequently, they struggle with systematic +generalization beyond the training distribution, e.g. with extrapolating to +longer inputs, even when pre-trained on large amounts of text. We show how a +structural inductive bias can be injected into a seq2seq model by pre-training +it to simulate structural transformations on synthetic data. Specifically, we +inject an inductive bias towards Finite State Transducers (FSTs) into a +Transformer by pre-training it to simulate FSTs given their descriptions. Our +experiments show that our method imparts the desired inductive bias, resulting +in improved systematic generalization and better few-shot learning for FST-like +tasks. +" +TRAM: Benchmarking Temporal Reasoning for Large Language Models,Yuqing Wang,http://arxiv.org/pdf/2310.00835v2.pdf,2023-10-02,['cs.cl'],2310.00835v2.pdf," Reasoning about time is essential for understanding the nuances of events +described in natural language. Previous research on this topic has been limited +in scope, characterized by a lack of standardized benchmarks that would allow +for consistent evaluations across different studies. In this paper, we +introduce TRAM, a temporal reasoning benchmark composed of ten datasets, +encompassing various temporal aspects of events such as order, arithmetic, +frequency, and duration, designed to facilitate a comprehensive evaluation of +the temporal reasoning capabilities of large language models (LLMs). We conduct +an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both +zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based +models to establish the baseline evaluations. Our findings indicate that these +models still trail human performance in temporal reasoning tasks. It is our +aspiration that TRAM will spur further progress in enhancing the temporal +reasoning abilities of LLMs. +" +Procedural Text Mining with Large Language Models,Anisa Rula,http://arxiv.org/pdf/2310.03376v1.pdf,2023-10-05,"['cs.cl', 'cs.ai', 'cs.it', 'math.it']",2310.03376v1.pdf," Recent advancements in the field of Natural Language Processing, particularly +the development of large-scale language models that are pretrained on vast +amounts of knowledge, are creating novel opportunities within the realm of +Knowledge Engineering. 
In this paper, we investigate the usage of large +language models (LLMs) in both zero-shot and in-context learning settings to +tackle the problem of extracting procedures from unstructured PDF text in an +incremental question-answering fashion. In particular, we leverage the current +state-of-the-art GPT-4 (Generative Pre-trained Transformer 4) model, +accompanied by two variations of in-context learning that involve an ontology +with definitions of procedures and steps and a limited number of samples of +few-shot learning. The findings highlight both the promise of this approach and +the value of the in-context learning customisations. These modifications have +the potential to significantly address the challenge of obtaining sufficient +training data, a hurdle often encountered in deep learning-based Natural +Language Processing techniques for procedure extraction. +" +PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification,Feihong He,http://arxiv.org/pdf/2310.03517v1.pdf,2023-10-05,['cs.cv'],2310.03517v1.pdf," Few-shot image classification has received considerable attention for +addressing the challenge of poor classification performance with limited +samples in novel classes. However, numerous studies have employed sophisticated +learning strategies and diversified feature extraction methods to address this +issue. In this paper, we propose our method called PrototypeFormer, which aims +to significantly advance traditional few-shot image classification approaches +by exploring prototype relationships. Specifically, we utilize a transformer +architecture to build a prototype extraction module, aiming to extract class +representations that are more discriminative for few-shot classification. +Additionally, during the model training process, we propose a contrastive +learning-based optimization approach to optimize prototype features in few-shot +learning scenarios. Despite its simplicity, the method performs remarkably +well, with no bells and whistles. We have experimented with our approach on +several popular few-shot image classification benchmark datasets, which shows +that our method outperforms all current state-of-the-art methods. In +particular, our method achieves 97.07% and 90.88% on 5-way 5-shot and 5-way +1-shot tasks of miniImageNet, which surpasses the state-of-the-art results with +accuracy of 7.27% and 8.72%, respectively. The code will be released later. +" +A Holistic Evaluation of Piano Sound Quality,Monan Zhou,http://arxiv.org/pdf/2310.04722v1.pdf,2023-10-07,"['cs.sd', 'cs.ai', 'eess.as']",2310.04722v1.pdf," This paper aims to develop a holistic evaluation method for piano sound +quality to assist in purchasing decisions. Unlike previous studies that focused +on the effect of piano performance techniques on sound quality, this study +evaluates the inherent sound quality of different pianos. To derive quality +evaluation systems, the study uses subjective questionnaires based on a piano +sound quality dataset. The method selects the optimal piano classification +models by comparing the fine-tuning results of different pre-training models of +Convolutional Neural Networks (CNN). To improve the interpretability of the +models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The +results reveal that musically trained individuals are better able to +distinguish between the sound quality differences of different pianos. 
The best
+fine-tuned CNN pre-trained backbone achieves a high accuracy of 98.3% as the
+piano classifier. However, the dataset is limited, and the audio is sliced to
+increase its quantity, resulting in a lack of diversity and balance, so we use
+focal loss to reduce the impact of data imbalance. To optimize the method, the
+dataset will be expanded, or few-shot learning techniques will be employed in
+future research.
+"
+Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning,Arushi Sharma,http://arxiv.org/pdf/2310.07093v1.pdf,2023-10-11,['cs.cl'],2310.07093v1.pdf," To advance argumentative stance prediction as a multimodal problem, the First
+Shared Task in Multimodal Argument Mining hosted stance prediction in crucial
+social topics of gun control and abortion. Our exploratory study attempts to
+evaluate the necessity of images for stance prediction in tweets and compare
+out-of-the-box text-based large-language models (LLM) in few-shot settings
+against fine-tuned unimodal and multimodal models. Our work suggests an
+ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms
+both the multimodal (0.677 F1-score) and text-based few-shot prediction using a
+recent state-of-the-art LLM (0.550 F1-score). In addition to the differences in
+performance, our findings suggest that the multimodal models tend to perform
+better when image content is summarized as natural language over their native
+pixel structure, and that using in-context examples improves few-shot
+performance of LLMs.
+"
+LLM-augmented Preference Learning from Natural Language,Inwon Kang,http://arxiv.org/pdf/2310.08523v1.pdf,2023-10-12,['cs.cl'],2310.08523v1.pdf," Finding preferences expressed in natural language is an important but
+challenging task. State-of-the-art (SotA) methods leverage transformer-based
+models such as BERT, RoBERTa, etc. and graph neural architectures such as graph
+attention networks. Since Large Language Models (LLMs) are equipped to deal
+with larger context lengths and have much larger model sizes than the
+transformer-based model, we investigate their ability to classify comparative
+text directly. This work aims to serve as a first step towards using LLMs for
+the CPC task. We design and conduct a set of experiments that format the
+classification task into an input prompt for the LLM and a methodology to get a
+fixed-format response that can be automatically evaluated. Comparing
+performances with existing methods, we see that pre-trained LLMs are able to
+outperform the previous SotA models with no fine-tuning involved. Our results
+show that the LLMs can consistently outperform the SotA when the target text is
+large -- i.e., composed of multiple sentences -- and are still comparable to
+the SotA performance in shorter text. We also find that few-shot learning
+yields better performance than zero-shot learning.
+"
+In-Context Learning for Few-Shot Molecular Property Prediction,Christopher Fifty,http://arxiv.org/pdf/2310.08863v1.pdf,2023-10-13,['cs.lg'],2310.08863v1.pdf," In-context learning has become an important approach for few-shot learning in
+Large Language Models because of its ability to rapidly adapt to new tasks
+without fine-tuning model parameters. However, it is restricted to applications
+in natural language and inapplicable to other domains. In this paper, we adapt
+the concepts underpinning in-context learning to develop a new algorithm for
+few-shot molecular property prediction.
Our approach learns to predict +molecular properties from a context of (molecule, property measurement) pairs +and rapidly adapts to new properties without fine-tuning. On the FS-Mol and +BACE molecular property prediction benchmarks, we find this method surpasses +the performance of recent meta-learning algorithms at small support sizes and +is competitive with the best methods at large support sizes. +" +In-Context Few-Shot Relation Extraction via Pre-Trained Language Models,Yilmazcan Ozyurt,http://arxiv.org/pdf/2310.11085v1.pdf,2023-10-17,"['cs.cl', 'cs.ai', 'cs.lg']",2310.11085v1.pdf," Relation extraction aims at inferring structured human knowledge from textual +documents. State-of-the-art methods based on language models commonly have two +limitations: (1) they require named entities to be either given as input or +infer them, which introduces additional noise, and (2) they require human +annotations of documents. As a remedy, we present a novel framework for +in-context few-shot relation extraction via pre-trained language models. To the +best of our knowledge, we are the first to reformulate the relation extraction +task as a tailored in-context few-shot learning paradigm. Thereby, we achieve +crucial benefits in that we eliminate the need for both named entity +recognition and human annotation of documents. Unlike existing methods based on +fine-tuning, our framework is flexible in that it can be easily updated for a +new set of relations without re-training. We evaluate our framework using +DocRED, the largest publicly available dataset for document-level relation +extraction, and demonstrate that our framework achieves state-of-the-art +performance. Finally, our framework allows us to identify missing annotations, +and we thus show that our framework actually performs much better than the +original labels from the development set of DocRED. +" +Group Preference Optimization: Few-Shot Alignment of Large Language Models,Siyan Zhao,http://arxiv.org/pdf/2310.11523v1.pdf,2023-10-17,"['cs.lg', 'cs.ai', 'cs.cl']",2310.11523v1.pdf," Many applications of large language models (LLMs), ranging from chatbots to +creative writing, require nuanced subjective judgments that can differ +significantly across different groups. Existing alignment algorithms can be +expensive to align for each group, requiring prohibitive amounts of +group-specific preference data and computation for real-world use cases. We +introduce Group Preference Optimization (GPO), an alignment framework that +steers language models to preferences of individual groups in a few-shot +manner. In GPO, we augment the base LLM with an independent transformer module +trained to predict the preferences of a group for the LLM generations. For +few-shot learning, we parameterize this module as an in-context autoregressive +transformer and train it via meta-learning on several groups. We empirically +validate the efficacy of GPO through rigorous evaluations using LLMs with +varied sizes on three human opinion adaptation tasks. These tasks involve +adapting to the preferences of US demographic groups, global countries, and +individual users. Our results demonstrate that GPO not only aligns models more +accurately but also requires fewer group-specific preferences, and less +training and inference computing resources, outperforming existing strategies +such as in-context steering and fine-tuning methods. 
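The two in-context learning entries above (molecular property prediction and document-level relation extraction) share the same basic recipe: a prompt is assembled from a few (input, label) demonstration pairs followed by the query, and the frozen model completes the answer. A rough Python sketch of such prompt assembly for relation extraction is shown below; the demonstration format, relation names, and example sentences are invented for illustration and are not the papers' actual templates.

# Assemble a few-shot in-context prompt: demonstrations of (passage, relation triple)
# followed by a query passage whose triple the model is asked to complete.
demonstrations = [
    ("Marie Curie was born in Warsaw.", ("Marie Curie", "place_of_birth", "Warsaw")),
    ("Amazon was founded by Jeff Bezos.", ("Amazon", "founded_by", "Jeff Bezos")),
]
query_passage = "Alan Turing studied at Princeton University."

def build_prompt(demos, query, head="Alan Turing", relation="educated_at"):
    blocks = []
    for text, (h, r, t) in demos:
        blocks.append(f"Passage: {text}\nTriple: ({h}, {r}, {t})")
    # Leave the tail entity open for the model to fill in.
    blocks.append(f"Passage: {query}\nTriple: ({head}, {relation},")
    return "\n\n".join(blocks)

prompt = build_prompt(demonstrations, query_passage)
print(prompt)  # feed this string to any causal LM and read off the completion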
+" +CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition,Kari A Noriy,http://arxiv.org/pdf/2310.11830v2.pdf,2023-10-18,"['cs.sd', 'cs.lg', 'cs.mm', 'eess.as']",2310.11830v2.pdf," Multilingual speech processing requires understanding emotions, a task made +difficult by limited labelled data. CLARA, minimizes reliance on labelled data, +enhancing generalization across languages. It excels at fostering shared +representations, aiding cross-lingual transfer of speech and emotions, even +with little data. Our approach adeptly captures emotional nuances in speech, +overcoming subjective assessment issues. Using a large multilingual audio +corpus and self-supervised learning, CLARA develops speech representations +enriched with emotions, advancing emotion-aware multilingual speech processing. + Our method expands the data range using data augmentation, textual embedding +for visual understanding, and transfers knowledge from high- to low-resource +languages. CLARA demonstrates excellent performance in emotion recognition, +language comprehension, and audio benchmarks, excelling in zero-shot and +few-shot learning. It adapts to low-resource languages, marking progress in +multilingual speech representation learning. +" +A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation,Giuseppe Attanasio,http://arxiv.org/pdf/2310.12127v2.pdf,2023-10-18,"['cs.cl', 'cs.lg']",2310.12127v2.pdf," Recent instruction fine-tuned models can solve multiple NLP tasks when +prompted to do so, with machine translation (MT) being a prominent use case. +However, current research often focuses on standard performance benchmarks, +leaving compelling fairness and ethical considerations behind. In MT, this +might lead to misgendered translations, resulting, among other harms, in the +perpetuation of stereotypes and prejudices. In this work, we address this gap +by investigating whether and to what extent such models exhibit gender bias in +machine translation and how we can mitigate it. Concretely, we compute +established gender bias metrics on the WinoMT corpus from English to German and +Spanish. We discover that IFT models default to male-inflected translations, +even disregarding female occupational stereotypes. Next, using interpretability +methods, we unveil that models systematically overlook the pronoun indicating +the gender of a target occupation in misgendered translations. Finally, based +on this finding, we propose an easy-to-implement and effective bias mitigation +solution based on few-shot learning that leads to significantly fairer +translations. +" +An Exploration of In-Context Learning for Speech Language Model,Ming-Hao Hsu,http://arxiv.org/pdf/2310.12477v1.pdf,2023-10-19,"['eess.as', 'cs.ai', 'cs.cl']",2310.12477v1.pdf," Ever since the development of GPT-3 in the natural language processing (NLP) +field, in-context learning (ICL) has played an important role in utilizing +large language models (LLMs). By presenting the LM utterance-label +demonstrations at the input, the LM can accomplish few-shot learning without +relying on gradient descent or requiring explicit modification of its +parameters. This enables the LM to learn and adapt in a black-box manner. +Despite the success of ICL in NLP, little work is exploring the possibility of +ICL in speech processing. This study proposes the first exploration of ICL with +a speech LM without text supervision. 
We first show that the current speech LM +does not have the ICL capability. With the proposed warmup training, the speech +LM can, therefore, perform ICL on unseen tasks. In this work, we verify the +feasibility of ICL for speech LM on speech classification tasks. +" +Large Language Models are biased to overestimate profoundness,Eugenio Herrera-Berg,http://arxiv.org/pdf/2310.14422v1.pdf,2023-10-22,['cs.cl'],2310.14422v1.pdf," Recent advancements in natural language processing by large language models +(LLMs), such as GPT-4, have been suggested to approach Artificial General +Intelligence. And yet, it is still under dispute whether LLMs possess similar +reasoning abilities to humans. This study evaluates GPT-4 and various other +LLMs in judging the profoundness of mundane, motivational, and pseudo-profound +statements. We found a significant statement-to-statement correlation between +the LLMs and humans, irrespective of the type of statements and the prompting +technique used. However, LLMs systematically overestimate the profoundness of +nonsensical statements, with the exception of Tk-instruct, which uniquely +underestimates the profoundness of statements. Only few-shot learning prompts, +as opposed to chain-of-thought prompting, draw LLMs ratings closer to humans. +Furthermore, this work provides insights into the potential biases induced by +Reinforcement Learning from Human Feedback (RLHF), inducing an increase in the +bias to overestimate the profoundness of statements. +" +Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning,Ananth Balashankar,http://arxiv.org/pdf/2310.16959v1.pdf,2023-10-25,['cs.lg'],2310.16959v1.pdf," As large language models (LLMs) are widely adopted, new safety issues and +policies emerge, to which existing safety classifiers do not generalize well. +If we have only observed a few examples of violations of a new safety rule, how +can we build a classifier to detect violations? In this paper, we study the +novel setting of domain-generalized few-shot learning for LLM-based text safety +classifiers. Unlike prior few-shot work, these new safety issues can be hard to +uncover and we do not get to choose the few examples. We demonstrate that +existing few-shot techniques do not perform well in this setting, and rather we +propose to do parameter-efficient fine-tuning (PEFT) combined with augmenting +training data based on similar examples in prior existing rules. We empirically +show that our approach of similarity-based data-augmentation + prompt-tuning +(DAPT) consistently outperforms baselines that either do not rely on data +augmentation or on PEFT by 7-17% F1 score in the Social Chemistry moral +judgement and 9-13% AUC in the Toxicity detection tasks, even when the new rule +is loosely correlated with existing ones. +" +Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning,Sapan Shah,http://arxiv.org/pdf/2310.18930v1.pdf,2023-10-29,['cs.cl'],2310.18930v1.pdf," We present a novel retrofitting method to induce emotion aspects into +pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates +pre-trained network weights using contrastive learning so that the text +fragments exhibiting similar emotions are encoded nearby in the representation +space, and the fragments with different emotion content are pushed apart. While +doing so, it also ensures that the linguistic knowledge already present in PLMs +is not inadvertently perturbed. 
The language models retrofitted by our method, +i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as +evaluated through different clustering and retrieval metrics. For the +downstream tasks on sentiment analysis and sarcasm detection, they perform +better than their pre-trained counterparts (about 1% improvement in F1-score) +and other existing approaches. Additionally, a more significant boost in +performance is observed for the retrofitted models over pre-trained ones in +few-shot learning setting. +" +Nexus at ArAIEval Shared Task: Fine-Tuning Arabic Language Models for Propaganda and Disinformation Detection,Yunze Xiao,http://arxiv.org/pdf/2311.03184v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'cs.si', '68t50', 'f.2.2; i.2.7']",2311.03184v1.pdf," The spread of disinformation and propagandistic content poses a threat to +societal harmony, undermining informed decision-making and trust in reliable +sources. Online platforms often serve as breeding grounds for such content, and +malicious actors exploit the vulnerabilities of audiences to shape public +opinion. Although there have been research efforts aimed at the automatic +identification of disinformation and propaganda in social media content, there +remain challenges in terms of performance. The ArAIEval shared task aims to +further research on these particular issues within the context of the Arabic +language. In this paper, we discuss our participation in these shared tasks. We +competed in subtasks 1A and 2A, where our submitted system secured positions +9th and 10th, respectively. Our experiments consist of fine-tuning transformer +models and using zero- and few-shot learning with GPT-4. +" +Multilingual Mathematical Autoformalization,Albert Q. Jiang,http://arxiv.org/pdf/2311.03755v1.pdf,2023-11-07,"['cs.cl', 'cs.lg']",2311.03755v1.pdf," Autoformalization is the task of translating natural language materials into +machine-verifiable formalisations. Progress in autoformalization research is +hindered by the lack of a sizeable dataset consisting of informal-formal pairs +expressing the same essence. Existing methods tend to circumvent this challenge +by manually curating small corpora or using few-shot learning with large +language models. But these methods suffer from data scarcity and formal +language acquisition difficulty. In this work, we create $\texttt{MMA}$, a +large, flexible, multilingual, and multi-domain dataset of informal-formal +pairs, by using a language model to translate in the reverse direction, that +is, from formal mathematical statements into corresponding informal ones. +Experiments show that language models fine-tuned on $\texttt{MMA}$ produce +$16-18\%$ of statements acceptable with minimal corrections on the +$\texttt{miniF2F}$ and $\texttt{ProofNet}$ benchmarks, up from $0\%$ with the +base model. We demonstrate that fine-tuning on multilingual formal data results +in more capable autoformalization models even when deployed on monolingual +tasks. +" +Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge Transfer Networks,Igor Shalyminov,http://arxiv.org/pdf/1910.01302v1.pdf,2019-10-03,"['cs.cl', 'i.2.7']",1910.01302v1.pdf," Goal-oriented dialogue systems are now being widely adopted in industry where +it is of key importance to maintain a rapid prototyping cycle for new products +and domains. 
Data-driven dialogue system development has to be adapted to meet
+this requirement; therefore, reducing the amount of data and annotations
+necessary for training such systems is a central research problem.
+ In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet),
+a state-of-the-art approach to goal-oriented dialogue generation which only
+uses a few example dialogues (i.e. few-shot learning), none of which has to be
+annotated. We achieve this by performing a 2-stage training. Firstly, we
+perform unsupervised dialogue representation pre-training on a large source of
+goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at
+the transfer stage, we train DiKTNet using this representation together with 2
+other textual knowledge sources with different levels of generality: ELMo
+encoder and the main dataset's source domains.
+ Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate
+our model on it in terms of BLEU and Entity F1 scores, and show that our
+approach significantly and consistently improves upon a series of baseline
+models as well as over the previous state-of-the-art dialogue generation model,
+ZSDG. The improvement upon the latter (up to 10% in Entity F1 and an
+average of 3% in BLEU score) is achieved using only the equivalent of 10% of
+ZSDG's in-domain training data.
+"
+Meta-Learning with Dynamic-Memory-Based Prototypical Network for Few-Shot Event Detection,Shumin Deng,http://arxiv.org/pdf/1910.11621v2.pdf,2019-10-25,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",1910.11621v2.pdf," Event detection (ED), a sub-task of event extraction, involves identifying
+triggers and categorizing event mentions. Existing methods primarily rely upon
+supervised learning and require large-scale labeled event datasets which are
+unfortunately not readily available in many real-life applications. In this
+paper, we consider and reformulate the ED task with limited labeled data as a
+Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical
+Network (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learn
+better prototypes for event types, but also produce more robust sentence
+encodings for event mentions. Differing from vanilla prototypical networks
+simply computing event prototypes by averaging, which only consume event
+mentions once, our model is more robust and is capable of distilling contextual
+information from event mentions multiple times due to the multi-hop
+mechanism of DMNs. The experiments show that DMB-PN not only deals with sample
+scarcity better than a series of baseline models but also performs more
+robustly when the variety of event types is relatively large and the instance
+quantity is extremely small.
+"
+Spirit Distillation: Precise Real-time Semantic Segmentation of Road Scenes with Insufficient Data,Zhiyuan Wu,http://arxiv.org/pdf/2103.13733v2.pdf,2021-03-25,"['cs.cv', 'cs.ai', 'cs.lg']",2103.13733v2.pdf," Semantic segmentation of road scenes is one of the key technologies for
+realizing autonomous driving scene perception, and the effectiveness of deep
+Convolutional Neural Networks (CNNs) for this task has been demonstrated.
+State-of-the-art CNNs for semantic segmentation suffer from excessive
+computations as well as large-scale training data requirements.
Inspired by the ideas of
+Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge
+distillation, we propose a new knowledge distillation method for cross-domain
+knowledge transference and efficient data-insufficient network training, named
+Spirit Distillation (SD), which allows the student network to mimic the teacher
+network to extract general features, so that a compact and accurate student
+network can be trained for real-time semantic segmentation of road scenes.
+Then, in order to further alleviate the trouble of insufficient data and
+improve the robustness of the student, an Enhanced Spirit Distillation (ESD)
+method is proposed, which commits to exploiting a more comprehensive general
+feature extraction capability by considering images from both the target and
+the proximity domains as input. To our knowledge, this paper is a pioneering
+work on the application of knowledge distillation to few-shot learning.
+Persuasive experiments conducted on Cityscapes semantic segmentation with the
+prior knowledge transferred from COCO2017 and KITTI demonstrate that our
+methods can train a better student network (mIOU and high-precision accuracy
+boost by 1.4% and 8.2% respectively, with 78.2% segmentation variance) with
+only 41.8% FLOPs (see Fig. 1).
+"
+AMP0: Species-Specific Prediction of Anti-microbial Peptides using Zero and Few Shot Learning,Sadaf Gull,http://arxiv.org/pdf/1911.06106v1.pdf,2019-10-28,"['q-bio.bm', 'cs.lg', 'stat.ml']",1911.06106v1.pdf," The evolution of drug-resistant microbial species is one of the major
+challenges to global health. The development of new antimicrobial treatments
+such as antimicrobial peptides needs to be accelerated to combat this threat.
+However, the discovery of novel antimicrobial peptides is hampered by
+low-throughput biochemical assays. Computational techniques can be used for
+rapid screening of promising antimicrobial peptide candidates prior to testing
+in the wet lab. The vast majority of existing antimicrobial peptide predictors
+are non-targeted in nature, i.e., they can predict whether a given peptide
+sequence is antimicrobial, but they are unable to predict whether the sequence
+can target a particular microbial species. In this work, we have developed a
+targeted antimicrobial peptide activity predictor that can predict whether a
+peptide is effective against a given microbial species or not. This has been
+made possible through zero-shot and few-shot machine learning. The proposed
+predictor called AMP0 takes in the peptide amino acid sequence and any
+N/C-termini modifications together with the genomic sequence of a target
+microbial species to generate targeted predictions. It is important to note
+that the proposed method can generate predictions for species that are not part
+of its training set. The accuracy of predictions for novel test species can be
+further improved by providing a few example peptides for that species. Our
+computational cross-validation results show that the proposed scheme is
+particularly effective for targeted antimicrobial prediction in comparison to
+existing approaches and can be used for screening potential antimicrobial
+peptides in a targeted manner, especially for cases in which the number of
+training examples is small. The webserver of the method is available at
+http://ampzero.pythonanywhere.com.
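The DMB-PN entry above contrasts itself with vanilla prototypical networks, which simply compute an event prototype by averaging the support embeddings of each class and assign a query to the nearest prototype. The following sketch shows only that vanilla baseline step, with random vectors standing in for encoder outputs; it is not the dynamic-memory model itself, and the class names and dimensions are made up.

import numpy as np

# Toy few-shot episode: 5 support embeddings per event type, one query embedding.
# Random vectors stand in for sentence-encoder outputs.
rng = np.random.default_rng(0)
support = {
    "Attack": rng.normal(size=(5, 16)),
    "Meeting": rng.normal(size=(5, 16)),
}
query = rng.normal(size=16)

# Vanilla prototypical-network step: prototype = mean of support embeddings,
# prediction = class of the nearest prototype.
prototypes = {label: emb.mean(axis=0) for label, emb in support.items()}
distances = {label: float(np.linalg.norm(query - proto)) for label, proto in prototypes.items()}
prediction = min(distances, key=distances.get)
print(prediction, distances)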
+" +Brain-inspired global-local learning incorporated with neuromorphic computing,Yujie Wu,http://arxiv.org/pdf/2006.03226v3.pdf,2020-06-05,"['cs.ne', 'cs.ai', 'q-bio.nc']",2006.03226v3.pdf," Two main routes of learning methods exist at present including error-driven +global learning and neuroscience-oriented local learning. Integrating them into +one network may provide complementary learning capabilities for versatile +learning scenarios. At the same time, neuromorphic computing holds great +promise, but still needs plenty of useful algorithms and algorithm-hardware +co-designs for exploiting the advantages. Here, we report a neuromorphic hybrid +learning model by introducing a brain-inspired meta-learning paradigm and a +differentiable spiking model incorporating neuronal dynamics and synaptic +plasticity. It can meta-learn local plasticity and receive top-down supervision +information for multiscale synergic learning. We demonstrate the advantages of +this model in multiple different tasks, including few-shot learning, continual +learning, and fault-tolerance learning in neuromorphic vision sensors. It +achieves significantly higher performance than single-learning methods, and +shows promise in empowering neuromorphic applications revolution. We further +implemented the hybrid model in the Tianjic neuromorphic platform by exploiting +algorithm-hardware co-designs and proved that the model can fully utilize +neuromorphic many-core architecture to develop hybrid computation paradigm. +" +Direct multimodal few-shot learning of speech and images,Leanne Nortje,http://arxiv.org/pdf/2012.05680v2.pdf,2020-12-10,"['cs.cl', 'cs.sd', 'eess.as']",2012.05680v2.pdf," We propose direct multimodal few-shot models that learn a shared embedding +space of spoken words and images from only a few paired examples. Imagine an +agent is shown an image along with a spoken word describing the object in the +picture, e.g. pen, book and eraser. After observing a few paired examples of +each class, the model is asked to identify the ""book"" in a set of unseen +pictures. Previous work used a two-step indirect approach relying on learned +unimodal representations: speech-speech and image-image comparisons are +performed across the support set of given speech-image pairs. We propose two +direct models which instead learn a single multimodal space where inputs from +different modalities are directly comparable: a multimodal triplet network +(MTriplet) and a multimodal correspondence autoencoder (MCAE). To train these +direct models, we mine speech-image pairs: the support set is used to pair up +unlabelled in-domain speech and images. In a speech-to-image digit matching +task, direct models outperform indirect models, with the MTriplet achieving the +best multimodal five-shot accuracy. We show that the improvements are due to +the combination of unsupervised and transfer learning in the direct models, and +the absence of two-step compounding errors. +" +What Makes Good In-Context Examples for GPT-$3$?,Jiachang Liu,http://arxiv.org/pdf/2101.06804v1.pdf,2021-01-17,['cs.cl'],2101.06804v1.pdf," GPT-$3$ has attracted lots of attention due to its superior performance +across a wide range of NLP tasks, especially with its powerful and versatile +in-context few-shot learning ability. Despite its success, we found that the +empirical results of GPT-$3$ depend heavily on the choice of in-context +examples. 
In this work, we investigate whether there are more effective +strategies for judiciously selecting in-context examples (relative to random +sampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the +recent success of leveraging a retrieval module to augment large-scale neural +network models, we propose to retrieve examples that are semantically-similar +to a test sample to formulate its corresponding prompt. Intuitively, the +in-context examples selected with such a strategy may serve as more informative +inputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposed +approach on several natural language understanding and generation benchmarks, +where the retrieval-based prompt selection approach consistently outperforms +the random baseline. Moreover, it is observed that the sentence encoders +fine-tuned on task-related datasets yield even more helpful retrieval results. +Notably, significant gains are observed on tasks such as table-to-text +generation (41.9% on the ToTTo dataset) and open-domain question answering +(45.5% on the NQ dataset). We hope our investigation could help understand the +behaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhance +their few-shot capabilities. +" +Modelling Latent Translations for Cross-Lingual Transfer,Edoardo Maria Ponti,http://arxiv.org/pdf/2107.11353v1.pdf,2021-07-23,['cs.cl'],2107.11353v1.pdf," While achieving state-of-the-art results in multiple tasks and languages, +translation-based cross-lingual transfer is often overlooked in favour of +massively multilingual pre-trained encoders. Arguably, this is due to its main +limitations: 1) translation errors percolating to the classification phase and +2) the insufficient expressiveness of the maximum-likelihood translation. To +remedy this, we propose a new technique that integrates both steps of the +traditional pipeline (translation and classification) into a single model, by +treating the intermediate translations as a latent random variable. As a +result, 1) the neural machine translation system can be fine-tuned with a +variant of Minimum Risk Training where the reward is the accuracy of the +downstream task classifier. Moreover, 2) multiple samples can be drawn to +approximate the expected loss across all possible translations during +inference. We evaluate our novel latent translation-based model on a series of +multilingual NLU tasks, including commonsense reasoning, paraphrase +identification, and natural language inference. We report gains for both +zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, +which are even more prominent for low-resource languages (e.g., Haitian +Creole). Finally, we carry out in-depth analyses comparing different underlying +NMT models and assessing the impact of alternative translations on the +downstream performance. +" +ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback,Mike Wu,http://arxiv.org/pdf/2107.14035v2.pdf,2021-07-23,"['cs.cy', 'cs.lg']",2107.14035v2.pdf," High-quality computer science education is limited by the difficulty of +providing instructor feedback to students at scale. While this feedback could +in principle be automated, supervised approaches to predicting the correct +feedback are bottlenecked by the intractability of annotating large quantities +of student code. 
In this paper, we instead frame the problem of providing +feedback as few-shot classification, where a meta-learner adapts to give +feedback to student code on a new programming question from just a few examples +annotated by instructors. Because data for meta-training is limited, we propose +a number of amendments to the typical few-shot learning framework, including +task augmentation to create synthetic tasks, and additional side information to +build stronger priors about each task. These additions are combined with a +transformer architecture to embed discrete sequences (e.g. code) to a +prototypical representation of a feedback class label. On a suite of few-shot +natural language processing tasks, we match or outperform state-of-the-art +performance. Then, on a collection of student solutions to exam questions from +an introductory university course, we show that our approach reaches an average +precision of 88% on unseen questions, surpassing the 82% precision of teaching +assistants. Our approach was successfully deployed to deliver feedback to +16,000 student exam-solutions in a programming course offered by a tier 1 +university. This is, to the best of our knowledge, the first successful +deployment of a machine learning based feedback to open-ended student code. +" +Robust Retrieval Augmented Generation for Zero-shot Slot Filling,Michael Glass,http://arxiv.org/pdf/2108.13934v2.pdf,2021-08-31,"['cs.cl', 'cs.ai', 'cs.ir']",2108.13934v2.pdf," Automatically inducing high quality knowledge graphs from a given collection +of documents still remains a challenging problem in AI. One way to make headway +for this problem is through advancements in a related task known as slot +filling. In this task, given an entity query in form of [Entity, Slot, ?], a +system is asked to fill the slot by generating or extracting the missing value +exploiting evidence extracted from relevant passage(s) in the given document +collection. The recent works in the field try to solve this task in an +end-to-end fashion using retrieval-based language models. In this paper, we +present a novel approach to zero-shot slot filling that extends dense passage +retrieval with hard negatives and robust training procedures for retrieval +augmented generation models. Our model reports large improvements on both T-REx +and zsRE slot filling datasets, improving both passage retrieval and slot value +generation, and ranking at the top-1 position in the KILT leaderboard. +Moreover, we demonstrate the robustness of our system showing its domain +adaptation capability on a new variant of the TACRED dataset for slot filling, +through a combination of zero/few-shot learning. We release the source code and +pre-trained models. +" +Template-free Prompt Tuning for Few-shot NER,Ruotian Ma,http://arxiv.org/pdf/2109.13532v3.pdf,2021-09-28,"['cs.cl', 'cs.ai']",2109.13532v3.pdf," Prompt-based methods have been successfully applied in sentence-level +few-shot learning tasks, mostly owing to the sophisticated design of templates +and label words. However, when applied to token-level labeling tasks such as +NER, it would be time-consuming to enumerate the template queries over all +potential entity spans. In this work, we propose a more elegant method to +reformulate NER tasks as LM problems without any templates. Specifically, we +discard the template construction process while maintaining the word prediction +paradigm of pre-training models to predict a class-related pivot word (or label +word) at the entity position. 
Meanwhile, we also explore principled ways to +automatically search for appropriate label words that the pre-trained models +can easily adapt to. While avoiding complicated template-based process, the +proposed LM objective also reduces the gap between different objectives used in +pre-training and fine-tuning, thus it can better benefit the few-shot +performance. Experimental results demonstrate the effectiveness of the proposed +method over bert-tagger and template-based method under few-shot setting. +Moreover, the decoding speed of the proposed method is up to 1930.12 times +faster than the template-based method. +" +RAFT: A Real-World Few-Shot Text Classification Benchmark,Neel Alex,http://arxiv.org/pdf/2109.14076v3.pdf,2021-09-28,"['cs.cl', 'cs.ai', 'cs.lg']",2109.14076v3.pdf," Large pre-trained language models have shown promise for few-shot learning, +completing text-based tasks given only a few task-specific examples. Will +models soon solve classification tasks that have so far been reserved for human +research assistants? Existing benchmarks are not designed to measure progress +in applied settings, and so don't directly answer this question. The RAFT +benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring +tasks and uses an evaluation setup that mirrors deployment. Baseline +evaluations on RAFT reveal areas current techniques struggle with: reasoning +over long texts and tasks with many classes. Human baselines show that some +classification tasks are difficult for non-expert humans, reflecting that +real-world value sometimes depends on domain expertise. Yet even non-expert +human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets +and leaderboard will track which model improvements translate into real-world +benefits at https://raft.elicit.org . +" +LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5,Chengwei Qin,http://arxiv.org/pdf/2110.07298v3.pdf,2021-10-14,['cs.cl'],2110.07298v3.pdf," Existing approaches to lifelong language learning rely on plenty of labeled +data for learning a new task, which is hard to obtain in most real scenarios. +Considering that humans can continually learn new tasks from a handful of +examples, we expect the models also to be able to generalize well on new +few-shot tasks without forgetting the previous ones. In this work, we define +this more challenging yet practical problem as Lifelong Few-shot Language +Learning (LFLL) and propose a unified framework for it based on prompt tuning +of T5. Our framework called LFPT5 takes full advantage of PT's strong few-shot +learning ability, and simultaneously trains the model as a task solver and a +data generator. Before learning a new domain of the same task type, LFPT5 +generates pseudo (labeled) samples of previously learned domains, and later +gets trained on those samples to alleviate forgetting of previous knowledge as +it learns the new domain. In addition, a KL divergence loss is minimized to +achieve label consistency between the previous and the current model. While +adapting to a new task type, LFPT5 includes and tunes additional prompt +embeddings for the new task. With extensive experiments, we demonstrate that +LFPT5 can be applied to various different types of tasks and significantly +outperform previous methods in different LFLL settings. 
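Several of the entries above hinge on how the few in-context demonstrations are chosen; the "What Makes Good In-Context Examples for GPT-3?" entry in particular retrieves training examples that are semantically similar to the test sample and places them in the prompt. A small sketch of that retrieval step follows, with random vectors standing in for the output of whichever sentence encoder is used; the example strings are placeholders.

import numpy as np

# Candidate demonstrations and their embeddings (stand-ins for encoder outputs).
train_examples = ["Q: ... A: ...", "Q: ... A: ...", "Q: ... A: ...", "Q: ... A: ..."]
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(len(train_examples), 8))
test_question = "Q: ..."
test_emb = rng.normal(size=8)

def top_k_similar(query_vec, candidate_vecs, k=2):
    # Cosine similarity between the test sample and every candidate demonstration.
    sims = candidate_vecs @ query_vec / (
        np.linalg.norm(candidate_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return np.argsort(-sims)[:k]

selected = [train_examples[i] for i in top_k_similar(test_emb, train_emb)]
prompt = "\n\n".join(selected + [test_question])
print(prompt)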
+"
+MetaICL: Learning to Learn In Context,Sewon Min,http://arxiv.org/pdf/2110.15943v2.pdf,2021-10-29,"['cs.cl', 'cs.ai']",2110.15943v2.pdf," We introduce MetaICL (Meta-training for In-Context Learning), a new
+meta-training framework for few-shot learning where a pretrained language model
+is tuned to do in-context learning on a large set of training tasks. This
+meta-training enables the model to more effectively learn a new task in context
+at test time, by simply conditioning on a few training examples with no
+parameter updates or task-specific templates. We experiment on a large, diverse
+collection of tasks consisting of 142 NLP datasets including classification,
+question answering, natural language inference, paraphrase detection and more,
+across seven different meta-training/target splits. MetaICL outperforms a range
+of baselines including in-context learning without meta-training and multi-task
+learning followed by zero-shot transfer. We find that the gains are
+particularly significant for target tasks that have domain shifts from the
+meta-training tasks, and that using a diverse set of the meta-training tasks is
+key to improvements. We also show that MetaICL approaches (and sometimes beats)
+the performance of models fully finetuned on the target task, and outperforms
+much bigger models with nearly 8x parameters. Finally, we show that MetaICL is
+complementary to human-written instructions, and the best performance can be
+achieved by combining both approaches.
+"
+Scaling ASR Improves Zero and Few Shot Learning,Alex Xiao,http://arxiv.org/pdf/2111.05948v3.pdf,2021-11-10,"['cs.cl', 'cs.sd', 'eess.as']",2111.05948v3.pdf," With 4.5 million hours of English speech from 10 different sources across 120
+countries and models of up to 10 billion parameters, we explore the frontiers
+of scale for automatic speech recognition. We propose data selection techniques
+to efficiently scale training data to find the most valuable samples in massive
+datasets. To efficiently scale model sizes, we leverage various optimizations
+such as sparse transducer loss and model sharding. By training 1-10B parameter
+universal English ASR models, we push the limits of speech recognition
+performance across many domains. Furthermore, our models learn powerful speech
+representations with zero and few-shot capabilities on novel domains and styles
+of speech, exceeding previous results across multiple in-house and public
+benchmarks. For speakers with disorders due to brain damage, our best zero-shot
+and few-shot models achieve 22% and 60% relative improvement on the AphasiaBank
+test set, respectively, while realizing the best performance on public social
+media videos. Furthermore, the same universal model reaches equivalent
+performance with 500x less in-domain data on the SPGISpeech financial-domain
+dataset.
+"
+PointCLIP: Point Cloud Understanding by CLIP,Renrui Zhang,http://arxiv.org/pdf/2112.02413v1.pdf,2021-12-04,"['cs.cv', 'cs.ai', 'cs.ro']",2112.02413v1.pdf," Recently, zero-shot and few-shot learning via Contrastive Vision-Language
+Pre-training (CLIP) have shown inspirational performance on 2D visual
+recognition, which learns to match images with their corresponding texts in
+open-vocabulary settings. However, it remains underexplored whether CLIP,
+pre-trained by large-scale image-text pairs in 2D, can be generalized to 3D
+recognition.
In this paper, we identify such a setting is feasible by proposing +PointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D +category texts. Specifically, we encode a point cloud by projecting it into +multi-view depth maps without rendering, and aggregate the view-wise zero-shot +prediction to achieve knowledge transfer from 2D to 3D. On top of that, we +design an inter-view adapter to better extract the global feature and +adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in +2D. By just fine-tuning the lightweight adapter in the few-shot settings, the +performance of PointCLIP could be largely improved. In addition, we observe the +complementary property between PointCLIP and classical 3D-supervised networks. +By simple ensembling, PointCLIP boosts baseline's performance and even +surpasses state-of-the-art models. Therefore, PointCLIP is a promising +alternative for effective 3D point cloud understanding via CLIP under low +resource cost and data regime. We conduct thorough experiments on +widely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to +demonstrate the effectiveness of PointCLIP. The code is released at +https://github.com/ZrrSkywalker/PointCLIP. +" +A Survey of Deep Learning for Low-Shot Object Detection,Qihan Huang,http://arxiv.org/pdf/2112.02814v4.pdf,2021-12-06,"['cs.cv', 'cs.ai']",2112.02814v4.pdf," Object detection has achieved a huge breakthrough with deep neural networks +and massive annotated data. However, current detection methods cannot be +directly transferred to the scenario where the annotated data is scarce due to +the severe overfitting problem. Although few-shot learning and zero-shot +learning have been extensively explored in the field of image classification, +it is indispensable to design new methods for object detection in the +data-scarce scenario since object detection has an additional challenging +localization task. Low-Shot Object Detection (LSOD) is an emerging research +topic of detecting objects from a few or even no annotated samples, consisting +of One-Shot Object Detection (OSOD), Few-Shot Object Detection (FSOD) and +Zero-Shot Object Detection (ZSD). This survey provides a comprehensive review +of LSOD methods. First, we propose a thorough taxonomy of LSOD methods and +analyze them systematically, comprising some extensional topics of LSOD +(semi-supervised LSOD, weakly-supervised LSOD, and incremental LSOD). Then, we +indicate the pros and cons of current LSOD methods with a comparison of their +performance. Finally, we discuss the challenges and promising directions of +LSOD to provide guidance for future works. +" +"Vision-Language Intelligence: Tasks, Representation Learning, and Large Models",Feng Li,http://arxiv.org/pdf/2203.01922v1.pdf,2022-03-03,"['cs.cv', 'cs.ai', 'cs.cl']",2203.01922v1.pdf," This paper presents a comprehensive survey of vision-language (VL) +intelligence from the perspective of time. This survey is inspired by the +remarkable progress in both computer vision and natural language processing, +and recent trends shifting from single modality processing to multiple modality +comprehension. We summarize the development in this field into three time +periods, namely task-specific methods, vision-language pre-training (VLP) +methods, and larger models empowered by large-scale weakly-labeled data. We +first take some common VL tasks as examples to introduce the development of +task-specific methods. 
Then we focus on VLP methods and comprehensively review +key components of the model structures and training methods. After that, we +show how recent work utilizes large-scale raw image-text data to learn +language-aligned visual representations that generalize better on zero or few +shot learning tasks. Finally, we discuss some potential future trends towards +modality cooperation, unified representation, and knowledge incorporation. We +believe that this review will be of help for researchers and practitioners of +AI and ML, especially those interested in computer vision and natural language +processing. +" +Rethinking Task Sampling for Few-shot Vision-Language Transfer Learning,Zhenhailong Wang,http://arxiv.org/pdf/2203.04904v3.pdf,2022-03-09,"['cs.mm', 'cs.cl', 'cs.cv']",2203.04904v3.pdf," Despite achieving state-of-the-art zero-shot performance, existing +vision-language models still fall short of few-shot transfer ability on +domain-specific problems. Classical fine-tuning often fails to prevent highly +expressive models from exploiting spurious correlations. Although +model-agnostic meta-learning (MAML) presents as a natural alternative for +few-shot transfer learning, the expensive computation due to implicit +second-order optimization limits its use on large-scale vision-language models +such as CLIP. While much literature has been devoted to exploring alternative +optimization strategies, we identify another essential aspect towards effective +few-shot transfer learning, task sampling, which is previously only be viewed +as part of data pre-processing in MAML. To show the impact of task sampling, we +propose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), which +differentiates classical fine-tuning only on uniformly sampling multiple tasks. +Despite its simplicity, we show that MAMF consistently outperforms classical +fine-tuning on five few-shot vision-language classification tasks. We further +show that the effectiveness of the bi-level optimization in MAML is highly +sensitive to the zero-shot performance of a task in the context of few-shot +vision-language classification. The goal of this paper is to provide new +insights on what makes few-shot learning work, and encourage more research into +investigating better task sampling strategies. +" +mGPT: Few-Shot Learners Go Multilingual,Oleh Shliazhko,http://arxiv.org/pdf/2204.07580v2.pdf,2022-04-15,"['cs.cl', 'cs.ai', '68-06, 68-04, 68t50, 68t01', 'i.2; i.2.7']",2204.07580v2.pdf," Recent studies report that autoregressive language models can successfully +solve many NLP tasks via zero- and few-shot learning paradigms, which opens up +new possibilities for using the pre-trained language models. This paper +introduces two autoregressive GPT-like models with 1.3 billion and 13 billion +parameters trained on 60 languages from 25 language families using Wikipedia +and Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using +GPT-2 sources and the sparse attention mechanism; Deepspeed and Megatron +frameworks allow us to parallelize the training and inference steps +effectively. The resulting models show performance on par with the recently +released XGLM models by Facebook, covering more languages and enhancing NLP +possibilities for low resource languages of CIS countries and Russian small +nations. 
We detail the motivation for the choices of the architecture design, +thoroughly describe the data preparation pipeline, and train five small +versions of the model to choose the most optimal multilingual tokenization +strategy. We measure the model perplexity in all covered languages and evaluate +it on the wide spectre of multilingual tasks, including classification, +generative, sequence labeling and knowledge probing. The models were evaluated +with the zero-shot and few-shot methods. Furthermore, we compared the +classification tasks with the state-of-the-art multilingual model XGLM. source +code and the mGPT XL model are publicly released. +" +In-BoXBART: Get Instructions into Biomedical Multi-Task Learning,Mihir Parmar,http://arxiv.org/pdf/2204.07600v1.pdf,2022-04-15,['cs.cl'],2204.07600v1.pdf," Single-task models have proven pivotal in solving specific tasks; however, +they have limitations in real-world applications where multi-tasking is +necessary and domain shifts are exhibited. Recently, instructional prompts have +shown significant improvement towards multi-task generalization; however, the +effect of instructional prompts and Multi-Task Learning (MTL) has not been +systematically studied in the biomedical domain. Motivated by this, this paper +explores the impact of instructional prompts for biomedical MTL. We introduce +the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) +various categories. Using this meta-dataset, we propose a unified model termed +In-BoXBART, that can jointly learn all tasks of the BoX without any +task-specific modules. To the best of our knowledge, this is the first attempt +to propose a unified model in the biomedical domain and use instructions to +achieve generalization across several biomedical tasks. Experimental results +indicate that the proposed model: 1) outperforms the single-task baseline by +~3% and multi-task (without instruction) baseline by ~18% on an average, and 2) +shows ~23% improvement compared to the single-task baseline in few-shot +learning (i.e., 32 instances per task) on an average. Our analysis indicates +that there is significant room for improvement across tasks in the BoX, +implying the scope for future research direction. +" +OPT: Open Pre-trained Transformer Language Models,Susan Zhang,http://arxiv.org/pdf/2205.01068v4.pdf,2022-05-02,"['cs.cl', 'cs.lg']",2205.01068v4.pdf," Large language models, which are often trained for hundreds of thousands of +compute days, have shown remarkable capabilities for zero- and few-shot +learning. Given their computational cost, these models are difficult to +replicate without significant capital. For the few that are available through +APIs, no access is granted to the full model weights, making them difficult to +study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only +pre-trained transformers ranging from 125M to 175B parameters, which we aim to +fully and responsibly share with interested researchers. We show that OPT-175B +is comparable to GPT-3, while requiring only 1/7th the carbon footprint to +develop. We are also releasing our logbook detailing the infrastructure +challenges we faced, along with code for experimenting with all of the released +models. 
+" +Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning,Xiang Chen,http://arxiv.org/pdf/2205.02355v2.pdf,2022-05-04,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2205.02355v2.pdf," Pre-trained language models have contributed significantly to relation +extraction by demonstrating remarkable few-shot learning abilities. However, +prompt tuning methods for relation extraction may still fail to generalize to +those rare or hard patterns. Note that the previous parametric learning +paradigm can be viewed as memorization regarding training data as a book and +inference as the close-book test. Those long-tailed or hard patterns can hardly +be memorized in parameters given few-shot instances. To this end, we regard RE +as an open-book examination and propose a new semiparametric paradigm of +retrieval-enhanced prompt tuning for relation extraction. We construct an +open-book datastore for retrieval regarding prompt-based instance +representations and corresponding relation labels as memorized key-value pairs. +During inference, the model can infer relations by linearly interpolating the +base output of PLM with the non-parametric nearest neighbor distribution over +the datastore. In this way, our model not only infers relation through +knowledge stored in the weights during training but also assists +decision-making by unwinding and querying examples in the open-book datastore. +Extensive experiments on benchmark datasets show that our method can achieve +state-of-the-art in both standard supervised and few-shot settings. Code are +available in https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE. +" +Towards Unified Prompt Tuning for Few-shot Text Classification,Jianing Wang,http://arxiv.org/pdf/2205.05313v1.pdf,2022-05-11,"['cs.cl', 'cs.ai']",2205.05313v1.pdf," Prompt-based fine-tuning has boosted the performance of Pre-trained Language +Models (PLMs) on few-shot text classification by employing task-specific +prompts. Yet, PLMs are unfamiliar with prompt-style expressions during +pre-training, which limits the few-shot learning performance on downstream +tasks. It would be desirable if the models can acquire some prompting knowledge +before adaptation to specific NLP tasks. We present the Unified Prompt Tuning +(UPT) framework, leading to better few-shot text classification for BERT-style +models by explicitly capturing prompting semantics from non-target NLP +datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for +joint prompt learning across different NLP tasks, forcing PLMs to capture +task-invariant prompting knowledge. We further design a self-supervised task +named Knowledge-enhanced Selective Masked Language Modeling to improve the +PLM's generalization abilities for accurate adaptation to previously unseen +tasks. After multi-task learning across multiple tasks, the PLM can be better +prompt-tuned towards any dissimilar target tasks in low-resourced settings. +Experiments over a variety of NLP tasks show that UPT consistently outperforms +state-of-the-arts for prompt-based fine-tuning. +" +Towards Answering Open-ended Ethical Quandary Questions,Yejin Bang,http://arxiv.org/pdf/2205.05989v3.pdf,2022-05-12,"['cs.cl', 'cs.ai', 'cs.lg']",2205.05989v3.pdf," Considerable advancements have been made in various NLP tasks based on the +impressive power of large language models (LLMs) and many NLP applications are +deployed in our daily lives. 
In this work, we challenge the capability of LLMs +with the new task of Ethical Quandary Generative Question Answering. Ethical +quandary questions are more challenging to address because multiple conflicting +answers may exist to a single quandary. We explore the current capability of +LLMs in providing an answer with a deliberative exchange of different +perspectives to an ethical quandary, in the approach of Socratic philosophy, +instead of providing a closed answer like an oracle. We propose a model that +searches for different ethical principles applicable to the ethical quandary +and generates an answer conditioned on the chosen principles through +prompt-based few-shot learning. We also discuss the remaining challenges and +ethical issues involved in this task and suggest the direction toward +developing responsible NLP systems by incorporating human values explicitly. +" +PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners,Canyu Chen,http://arxiv.org/pdf/2205.09229v3.pdf,2022-05-18,"['cs.cl', 'cs.ai']",2205.09229v3.pdf," Recent advances in large pre-trained language models (PLMs) lead to +impressive gains in natural language understanding (NLU) tasks with +task-specific fine-tuning. However, directly fine-tuning PLMs heavily relies on +sufficient labeled training instances, which are usually hard to obtain. +Prompt-based tuning on PLMs has shown to be powerful for various downstream +few-shot tasks. Existing works studying prompt-based tuning for few-shot NLU +tasks mainly focus on deriving proper label words with a verbalizer or +generating prompt templates to elicit semantics from PLMs. In addition, +conventional data augmentation strategies such as synonym substitution, though +widely adopted in low-resource scenarios, only bring marginal improvements for +prompt-based few-shot learning. Thus, an important research question arises: +how to design effective data augmentation methods for prompt-based few-shot +tuning? To this end, considering the label semantics are essential in +prompt-based tuning, we propose a novel label-guided data augmentation +framework PromptDA, which exploits the enriched label semantic information for +data augmentation. Extensive experiment results on few-shot text classification +tasks demonstrate the superior performance of the proposed framework by +effectively leveraging label semantics and data augmentation for natural +language understanding. Our code is available at +https://github.com/canyuchen/PromptDA. +" +What Makes Data-to-Text Generation Hard for Pretrained Language Models?,Moniba Keymanesh,http://arxiv.org/pdf/2205.11505v1.pdf,2022-05-23,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2205.11505v1.pdf," Expressing natural language descriptions of structured facts or relations -- +data-to-text generation (D2T) -- increases the accessibility of structured +knowledge repositories. Previous work shows that pre-trained language +models(PLMs) perform remarkably well on this task after fine-tuning on a +significant amount of task-specific training data. On the other hand, while +auto-regressive PLMs can generalize from a few task examples, their efficacy at +D2T is largely unexplored. Furthermore, we have an incomplete understanding of +the limits of PLMs on D2T. + In this work, we conduct an empirical study of both fine-tuned and +auto-regressive PLMs on the DART multi-domain D2T dataset. 
We consider their +performance as a function of the amount of task-specific data and how these +data are incorporated into the models: zero and few-shot learning, and +fine-tuning of model weights. In addition, we probe the limits of PLMs by +measuring performance on subsets of the evaluation data: novel predicates and +abstractive test examples. To improve the performance on these subsets, we +investigate two techniques: providing predicate descriptions in the context and +re-ranking generated candidates by information reflected in the source. +Finally, we conduct a human evaluation of model errors and show that D2T +generation tasks would benefit from datasets with more careful manual curation. +" +ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts,Akari Asai,http://arxiv.org/pdf/2205.11961v2.pdf,2022-05-24,['cs.cl'],2205.11961v2.pdf," This work introduces a new multi-task, parameter-efficient language model +(LM) tuning method that learns to transfer knowledge across different tasks via +a mixture of soft prompts-small prefix embedding vectors pre-trained for +different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt +Tuning), obtains source prompts as encodings of large-scale source tasks into a +small number of parameters and trains an attention module to interpolate the +source prompts and a newly initialized target prompt for every instance in the +target task. During training, only the target task prompt and the attention +weights, which are shared between tasks in multi-task training, are updated, +while the original LM and source prompts are intact. ATTEMPT is highly +parameter-efficient (e.g., updates 2,300 times fewer parameters than full +fine-tuning) while achieving high task performance using knowledge from +high-resource tasks. Moreover, it is modular using pre-trained soft prompts, +and can flexibly add or remove source prompts for effective knowledge transfer. +Our experimental results across 21 diverse NLP datasets show that ATTEMPT +significantly outperforms prompt tuning and outperforms or matches fully +fine-tuned or other parameter-efficient tuning approaches that use over ten +times more parameters. Finally, ATTEMPT outperforms previous work in few-shot +learning settings. +" +Making Large Language Models Better Reasoners with Step-Aware Verifier,Yifei Li,http://arxiv.org/pdf/2206.02336v3.pdf,2022-06-06,"['cs.cl', 'cs.ai']",2206.02336v3.pdf," Few-shot learning is a challenging task that requires language models to +generalize from limited examples. Large language models like GPT-3 and PaLM +have made impressive progress in this area, but they still face difficulties in +reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve +their reasoning skills, previous work has proposed to guide the language model +with prompts that elicit a series of reasoning steps before giving the final +answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in +problem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on +Reasoning Step), a novel approach that further enhances the reasoning +capability of language models. DIVERSE has three main components: first, it +generates diverse prompts to explore different reasoning paths for the same +question; second, it uses a verifier to filter out incorrect answers based on a +weighted voting scheme; and third, it verifies each reasoning step individually +instead of the whole chain. 
We evaluate DIVERSE on the latest language model +code-davinci-002 and show that it achieves new state-of-the-art results on six +of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%). +" +Language Models are General-Purpose Interfaces,Yaru Hao,http://arxiv.org/pdf/2206.06336v1.pdf,2022-06-13,['cs.cl'],2206.06336v1.pdf," Foundation models have received much attention due to their effectiveness +across a broad range of downstream applications. Though there is a big +convergence in terms of architecture, most pretrained models are typically +still developed for specific tasks or modalities. In this work, we propose to +use language models as a general-purpose interface to various foundation +models. A collection of pretrained encoders perceive diverse modalities (such +as vision, and language), and they dock with a language model that plays the +role of a universal task layer. We propose a semi-causal language modeling +objective to jointly pretrain the interface and the modular encoders. We +subsume the advantages and capabilities from both causal and non-causal +modeling, thereby combining the best of two worlds. Specifically, the proposed +method not only inherits the capabilities of in-context learning and open-ended +generation from causal language modeling, but also is conducive to finetuning +because of the bidirectional encoders. More importantly, our approach +seamlessly unlocks the combinations of the above capabilities, e.g., enabling +in-context learning or instruction following with finetuned encoders. +Experimental results across various language-only and vision-language +benchmarks show that our model outperforms or is competitive with specialized +models on finetuning, zero-shot generalization, and few-shot learning. +" +FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification,Aliaksandra Shysheya,http://arxiv.org/pdf/2206.08671v2.pdf,2022-06-17,"['stat.ml', 'cs.cv', 'cs.lg']",2206.08671v2.pdf," Modern deep learning systems are increasingly deployed in situations such as +personalization and federated learning where it is necessary to support i) +learning on small amounts of data, and ii) communication efficient distributed +training protocols. In this work, we develop FiLM Transfer (FiT) which fulfills +these requirements in the image classification setting by combining ideas from +transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter +layers) and meta-learning (automatically configured Naive Bayes classifiers and +episodic training) to yield parameter efficient models with superior +classification accuracy at low-shot. The resulting parameter efficiency is key +for enabling few-shot learning, inexpensive model updates for personalization, +and communication efficient federated learning. We experiment with FiT on a +wide range of downstream datasets and show that it achieves better +classification accuracy than the leading Big Transfer (BiT) algorithm at +low-shot and achieves state-of-the art accuracy on the challenging VTAB-1k +benchmark, with fewer than 1% of the updateable parameters. Finally, we +demonstrate the parameter efficiency and superior accuracy of FiT in +distributed low-shot applications including model personalization and federated +learning where model update size is an important performance metric. 
+" +A Reinforcement Learning-based Offensive semantics Censorship System for Chatbots,Shaokang Cai,http://arxiv.org/pdf/2207.10569v1.pdf,2022-07-13,['cs.cl'],2207.10569v1.pdf," The rapid development of artificial intelligence (AI) technology has enabled +large-scale AI applications to land in the market and practice. However, while +AI technology has brought many conveniences to people in the productization +process, it has also exposed many security issues. Especially, attacks against +online learning vulnerabilities of chatbots occur frequently. Therefore, this +paper proposes a semantics censorship chatbot system based on reinforcement +learning, which is mainly composed of two parts: the Offensive semantics +censorship model and the semantics purification model. Offensive semantics +review can combine the context of user input sentences to detect the rapid +evolution of Offensive semantics and respond to Offensive semantics responses. +The semantics purification model For the case of chatting robot models, it has +been contaminated by large numbers of offensive semantics, by strengthening the +offensive reply learned by the learning algorithm, rather than rolling back to +the early versions. In addition, by integrating a once-through learning +approach, the speed of semantics purification is accelerated while reducing the +impact on the quality of replies. The experimental results show that our +proposed approach reduces the probability of the chat model generating +offensive replies and that the integration of the few-shot learning algorithm +improves the training speed rapidly while effectively slowing down the decline +in BLEU values. +" +AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model,Saleh Soltan,http://arxiv.org/pdf/2208.01448v2.pdf,2022-08-02,"['cs.cl', 'cs.lg']",2208.01448v2.pdf," In this work, we demonstrate that multilingual large-scale +sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising +and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners +than decoder-only models on various tasks. In particular, we train a 20 billion +parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) +and show that it achieves state-of-the-art (SOTA) performance on 1-shot +summarization tasks, outperforming a much larger 540B PaLM decoder model. +AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for +low-resource languages, across almost all language pairs supported by the model +(Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, +Portuguese, Spanish, Tamil, and Telugu) on Flores-101 dataset. We also show in +zero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE and SQuADv2 +datasets and provides SOTA performance on multilingual tasks such as XNLI, +XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case +for seq2seq models as a powerful alternative to decoder-only models for +Large-scale Language Model (LLM) training. +" +Unsupervisedly Prompting AlphaFold2 for Few-Shot Learning of Accurate Folding Landscape and Protein Structure Prediction,Jun Zhang,http://arxiv.org/pdf/2208.09652v2.pdf,2022-08-20,"['cs.lg', 'cs.ai', 'physics.bio-ph']",2208.09652v2.pdf," Data-driven predictive methods which can efficiently and accurately transform +protein sequences into biologically active structures are highly valuable for +scientific research and medical development. 
Determining accurate folding +landscape using co-evolutionary information is fundamental to the success of +modern protein structure prediction methods. As the state of the art, +AlphaFold2 has dramatically raised the accuracy without performing explicit +co-evolutionary analysis. Nevertheless, its performance still shows strong +dependence on available sequence homologs. Based on the interrogation on the +cause of such dependence, we presented EvoGen, a meta generative model, to +remedy the underperformance of AlphaFold2 for poor MSA targets. By prompting +the model with calibrated or virtually generated homologue sequences, EvoGen +helps AlphaFold2 fold accurately in low-data regime and even achieve +encouraging performance with single-sequence predictions. Being able to make +accurate predictions with few-shot MSA not only generalizes AlphaFold2 better +for orphan sequences, but also democratizes its use for high-throughput +applications. Besides, EvoGen combined with AlphaFold2 yields a probabilistic +structure generation method which could explore alternative conformations of +protein sequences, and the task-aware differentiable algorithm for sequence +generation will benefit other related tasks including protein design. +" +Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective,Jiangmeng Li,http://arxiv.org/pdf/2208.12681v2.pdf,2022-08-26,['cs.cv'],2208.12681v2.pdf," Few-shot learning models learn representations with limited human +annotations, and such a learning paradigm demonstrates practicability in +various tasks, e.g., image classification, object detection, etc. However, +few-shot object detection methods suffer from an intrinsic defect that the +limited training data makes the model cannot sufficiently explore semantic +information. To tackle this, we introduce knowledge distillation to the +few-shot object detection learning paradigm. We further run a motivating +experiment, which demonstrates that in the process of knowledge distillation, +the empirical error of the teacher model degenerates the prediction performance +of the few-shot object detection model as the student. To understand the +reasons behind this phenomenon, we revisit the learning paradigm of knowledge +distillation on the few-shot object detection task from the causal theoretic +standpoint, and accordingly, develop a Structural Causal Model. Following the +theoretical guidance, we propose a backdoor adjustment-based knowledge +distillation method for the few-shot object detection task, namely Disentangle +and Remerge (D&R), to perform conditional causal intervention toward the +corresponding Structural Causal Model. Empirically, the experiments on +benchmarks demonstrate that D&R can yield significant performance boosts in +few-shot object detection. Code is available at +https://github.com/ZYN-1101/DandR.git. +" +NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results,Dustin Carrión-Ojeda,http://arxiv.org/pdf/2208.14686v1.pdf,2022-08-31,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ne']",2208.14686v1.pdf," We present the design and baseline results for a new challenge in the +ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on +""cross-domain"" meta-learning. Meta-learning aims to leverage experience gained +from previous tasks to solve new tasks efficiently (i.e., with better +performance, little training data, and/or modest computational resources). 
+While previous challenges in the series focused on within-domain few-shot +learning problems, with the aim of learning efficiently N-way k-shot tasks +(i.e., N class classification problems with k training examples), this +competition challenges the participants to solve ""any-way"" and ""any-shot"" +problems drawn from various domains (healthcare, ecology, biology, +manufacturing, and others), chosen for their humanitarian and societal impact. +To that end, we created Meta-Album, a meta-dataset of 40 image classification +datasets from 10 domains, from which we carve out tasks with any number of +""ways"" (within the range 2-20) and any number of ""shots"" (within the range +1-20). The competition is with code submission, fully blind-tested on the +CodaLab challenge platform. The code of the winners will be open-sourced, +enabling the deployment of automated machine learning solutions for few-shot +image classification across several domains. +" +Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models,Zichun Yu,http://arxiv.org/pdf/2209.09401v1.pdf,2022-09-20,"['cs.cl', 'cs.lg']",2209.09401v1.pdf," Prompting, which casts downstream applications as language modeling tasks, +has shown to be sample efficient compared to standard fine-tuning with +pre-trained models. However, one pitfall of prompting is the need of +manually-designed patterns, whose outcome can be unintuitive and requires large +validation sets to tune. To tackle the challenge, we propose AutoSeq, a fully +automatic prompting method: (1) We adopt natural language prompts on +sequence-to-sequence models, enabling free-form generation and larger label +search space; (2) We propose label sequences -- phrases with indefinite lengths +to verbalize the labels -- which eliminate the need of manual templates and are +more expressive than single label words; (3) We use beam search to +automatically generate a large amount of label sequence candidates and propose +contrastive re-ranking to get the best combinations. AutoSeq significantly +outperforms other no-manual-design methods, such as soft prompt tuning, adapter +tuning, and automatic search on single label words; the generated label +sequences are even better than curated manual ones on a variety of tasks. Our +method reveals the potential of sequence-to-sequence models in few-shot +learning and sheds light on a path to generic and automatic prompting. The +source code of this paper can be obtained from +https://github.com/thunlp/Seq2Seq-Prompt. +" +Collaboration of Pre-trained Models Makes Better Few-shot Learner,Renrui Zhang,http://arxiv.org/pdf/2209.12255v2.pdf,2022-09-25,['cs.cv'],2209.12255v2.pdf," Few-shot classification requires deep neural networks to learn generalized +representations only from limited training images, which is challenging but +significant in low-data regimes. Recently, CLIP-based methods have shown +promising few-shot performance benefited from the contrastive language-image +pre-training. Based on this point, we question if the large-scale pre-training +can alleviate the few-shot data deficiency and also assist the representation +learning by the pre-learned knowledge. In this paper, we propose CoMo, a +Collaboration of pre-trained Models that incorporates diverse prior knowledge +from various pre-training paradigms for better few-shot learning. Our CoMo +includes: CLIP's language-contrastive knowledge, DINO's vision-contrastive +knowledge, and DALL-E's language-generative knowledge. 
Specifically, CoMo works +in two aspects: few-shot data expansion and diverse knowledge ensemble. For +one, we generate synthetic images via zero-shot DALL-E to enrich the few-shot +training data without any manpower. For the other, we introduce a learnable +Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from +CLIP and DINO. By such collaboration, CoMo can fully unleash the potential of +different pre-training methods and unify them to perform state-of-the-art for +few-shot classification. We conduct extensive experiments on 11 datasets to +demonstrate the superiority and generalization ability of our approach. +" +CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training,Tianyu Huang,http://arxiv.org/pdf/2210.01055v3.pdf,2022-10-03,['cs.cv'],2210.01055v3.pdf," Pre-training across 3D vision and language remains under development because +of limited training data. Recent works attempt to transfer vision-language +pre-training models to 3D vision. PointCLIP converts point cloud data to +multi-view depth maps, adopting CLIP for shape classification. However, its +performance is restricted by the domain gap between rendered depth maps and +images, as well as the diversity of depth distributions. To address this issue, +we propose CLIP2Point, an image-depth pre-training method by contrastive +learning to transfer CLIP to the 3D domain, and adapt it to point cloud +classification. We introduce a new depth rendering setting that forms a better +visual effect, and then render 52,460 pairs of images and depth maps from +ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines +cross-modality learning to enforce the depth features for capturing expressive +visual and textual features and intra-modality learning to enhance the +invariance of depth aggregation. Additionally, we propose a novel Dual-Path +Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for +few-shot learning. The dual-path structure allows the joint use of CLIP and +CLIP2Point, and the simplified adapter can well fit few-shot tasks without +post-search. Experimental results show that CLIP2Point is effective in +transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP +and other self-supervised 3D networks, achieving state-of-the-art results on +zero-shot and few-shot classification. +" +Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis,Siddharth Varia,http://arxiv.org/pdf/2210.06629v2.pdf,2022-10-12,['cs.cl'],2210.06629v2.pdf," Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis +task which involves four elements from user-generated texts: aspect term, +aspect category, opinion term, and sentiment polarity. Most computational +approaches focus on some of the ABSA sub-tasks such as tuple (aspect term, +sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) +extraction using either pipeline or joint modeling approaches. Recently, +generative approaches have been proposed to extract all four elements as (one +or more) quadruplets from text as a single task. In this work, we take a step +further and propose a unified framework for solving ABSA, and the associated +sub-tasks to improve the performance in few-shot scenarios. To this end, we +fine-tune a T5 model with instructional prompts in a multi-task learning +fashion covering all the sub-tasks, as well as the entire quadruple prediction +task. 
In experiments with multiple benchmark datasets, we show that the +proposed multi-task prompting approach brings performance boost (by absolute +8.29 F1) in the few-shot learning setting. +" +"RARR: Researching and Revising What Language Models Say, Using Language Models",Luyu Gao,http://arxiv.org/pdf/2210.08726v3.pdf,2022-10-17,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2210.08726v3.pdf," Language models (LMs) now excel at many tasks such as few-shot learning, +question answering, reasoning, and dialog. However, they sometimes generate +unsupported or misleading content. A user cannot easily determine whether their +outputs are trustworthy or not, because most LMs do not have any built-in +mechanism for attribution to external evidence. To enable attribution while +still preserving all the powerful advantages of recent generation models, we +propose RARR (Retrofit Attribution using Research and Revision), a system that +1) automatically finds attribution for the output of any text generation model +and 2) post-edits the output to fix unsupported content while preserving the +original output as much as possible. When applied to the output of several +state-of-the-art LMs on a diverse set of generation tasks, we find that RARR +significantly improves attribution while otherwise preserving the original +input to a much greater degree than previously explored edit models. +Furthermore, the implementation of RARR requires only a handful of training +examples, a large language model, and standard web search. +" +TAPE: Assessing Few-shot Russian Language Understanding,Ekaterina Taktasheva,http://arxiv.org/pdf/2210.12813v1.pdf,2022-10-23,['cs.cl'],2210.12813v1.pdf," Recent advances in zero-shot and few-shot learning have shown promise for a +scope of research and practical purposes. However, this fast-growing area lacks +standardized evaluation suites for non-English languages, hindering progress +outside the Anglo-centric paradigm. To address this line of research, we +propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that +includes six more complex NLU tasks for Russian, covering multi-hop reasoning, +ethical concepts, logic and commonsense knowledge. The TAPE's design focuses on +systematic zero-shot and few-shot NLU evaluation: (i) linguistic-oriented +adversarial attacks and perturbations for analyzing robustness, and (ii) +subpopulations for nuanced interpretation. The detailed analysis of testing the +autoregressive baselines indicates that simple spelling-based perturbations +affect the performance the most, while paraphrasing the input has a more +negligible effect. At the same time, the results demonstrate a significant gap +between the neural and human baselines for most tasks. We publicly release TAPE +(tape-benchmark.com) to foster research on robust LMs that can generalize to +new tasks when little to no supervision is available. +" +Learning New Tasks from a Few Examples with Soft-Label Prototypes,Avyav Kumar Singh,http://arxiv.org/pdf/2210.17437v2.pdf,2022-10-31,"['cs.lg', 'cs.cl']",2210.17437v2.pdf," It has been experimentally demonstrated that humans are able to learn in a +manner that allows them to make predictions on categories for which they have +not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) +have recently presented a machine learning approach that aims to do the same. 
+They utilise synthetically generated data and demonstrate that it is possible +to achieve sub-linear scaling and develop models that can learn to recognise N +classes from M training samples where M is less than N - aka less-than-one shot +learning. Their method was, however, defined for univariate or simple +multivariate data (Sucholutsky et al., 2021). We extend it to work on large, +high-dimensional and real-world datasets and empirically validate it in this +new and challenging setting. We apply this method to learn previously unseen +NLP tasks from very few examples (4, 8 or 16). We first generate compact, +sophisticated less-than-one shot representations called soft-label prototypes +which are fitted on training data, capturing the distribution of different +classes across the input domain space. We then use a modified k-Nearest +Neighbours classifier to demonstrate that soft-label prototypes can classify +data competitively, even outperforming much more computationally complex +few-shot learning methods. +" +QAmeleon: Multilingual QA with Only 5 Examples,Priyanka Agrawal,http://arxiv.org/pdf/2211.08264v2.pdf,2022-11-15,['cs.cl'],2211.08264v2.pdf," The availability of large, high-quality datasets has been one of the main +drivers of recent progress in question answering (QA). Such annotated datasets +however are difficult and costly to collect, and rarely exist in languages +other than English, rendering QA technology inaccessible to underrepresented +languages. An alternative to building large monolingual training datasets is to +leverage pre-trained language models (PLMs) under a few-shot learning setting. +Our approach, QAmeleon, uses a PLM to automatically generate multilingual data +upon which QA models are trained, thus avoiding costly annotation. Prompt +tuning the PLM for data synthesis with only five examples per language delivers +accuracy superior to translation-based baselines, bridges nearly 60% of the gap +between an English-only baseline and a fully supervised upper bound trained on +almost 50,000 hand labeled examples, and always leads to substantial +improvements compared to fine-tuning a QA model directly on labeled examples in +low resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show +that few-shot prompt tuning for data synthesis scales across languages and is a +viable alternative to large-scale annotation. +" +Explicit Knowledge Transfer for Weakly-Supervised Code Generation,Zhangir Azerbayev,http://arxiv.org/pdf/2211.16740v3.pdf,2022-11-30,['cs.cl'],2211.16740v3.pdf," Large language models (LLMs) can acquire strong code-generation capabilities +through few-shot learning. In contrast, supervised fine-tuning is still needed +for smaller models to achieve good performance. Such fine-tuning demands a +large number of task-specific NL-code pairs, which are expensive to obtain. In +this paper, we attempt to transfer the code generation ability of an LLM to a +smaller model with the aid of weakly-supervised data. More specifically, we +propose explicit knowledge transfer (EKT), which uses the few-shot capabilities +of a teacher LLM to create NL-code pairs that we then filter for correctness +and fine-tune the student on. We evaluate EKT on the task of generating code +solutions to math word problems from the GSM8k dataset. We find that EKT not +only yields better performance than training with expert iteration, but also +outperforms knowledge distillation, another form of knowledge transfer. 
A +GPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4% +pass@100 on GSM8k, while the same student and teacher trained with knowledge +distillation yield only a 3.7% pass@100. We also show that it is possible for a +student model to outperform the teacher using EKT. +" +Can In-context Learners Learn a Reasoning Concept from Demonstrations?,Michal Štefánik,http://arxiv.org/pdf/2212.01692v4.pdf,2022-12-03,"['cs.cl', 'cs.ai', 'cs.lg']",2212.01692v4.pdf," Language models exhibit an emergent ability to learn a new task from a small +number of input-output demonstrations. However, recent work shows that +in-context learners largely rely on their pre-trained knowledge, such as the +sentiment of the labels, instead of learning new associations from the input. +We argue that the commonly-used few-shot evaluation using a random selection of +in-context demonstrations can not disentangle models' reliance on such biases, +as most of the randomly-selected demonstrations do not present relations +informative for prediction beyond exposing the task's input-output +distribution. + Therefore, to evaluate models' in-context learning ability independent of +models' memory, we introduce a Concept-sharing few-shot learning method +choosing the demonstrations that share an underlying concept with the predicted +sample. We extract a set of such concepts from available human explanations and +measure how much models can benefit from presenting these concepts in few-shot +demonstrations. + We find that most of the recent in-context learners can not consistently +benefit from the demonstrated concepts, irrespective of the model size. +However, we note that T0 models are more sensitive to exhibited concepts, +benefiting from concept-sharing demonstrations in 7 out of 8 evaluation +scenarios. +" +Frozen CLIP Model is An Efficient Point Cloud Backbone,Xiaoshui Huang,http://arxiv.org/pdf/2212.04098v2.pdf,2022-12-08,['cs.cv'],2212.04098v2.pdf," The pretraining-finetuning paradigm has demonstrated great success in NLP and +2D image fields because of the high-quality representation ability and +transferability of their pretrained models. However, pretraining such a strong +model is difficult in the 3D point cloud field since the training data is +limited and point cloud collection is expensive. This paper introduces +Efficient Point Cloud Learning (EPCL), an effective and efficient point cloud +learner for directly training high-quality point cloud models with a frozen +CLIP model. Our EPCL connects the 2D and 3D modalities by semantically aligning +the 2D features and point cloud features without paired 2D-3D data. +Specifically, the input point cloud is divided into a sequence of tokens and +directly fed into the frozen CLIP model to learn point cloud representation. +Furthermore, we design a task token to narrow the gap between 2D images and 3D +point clouds. Comprehensive experiments on 3D detection, semantic segmentation, +classification and few-shot learning demonstrate that the 2D CLIP model can be +an efficient point cloud backbone and our method achieves state-of-the-art +accuracy on both real-world and synthetic downstream tasks. Code will be +available. +" +Federated Few-Shot Learning for Mobile NLP,Dongqi Cai,http://arxiv.org/pdf/2212.05974v2.pdf,2022-12-12,"['cs.lg', 'cs.cl']",2212.05974v2.pdf," Natural language processing (NLP) sees rich mobile applications. 
To support +various language understanding tasks, a foundation NLP model is often +fine-tuned in a federated, privacy-preserving setting (FL). This process +currently relies on at least hundreds of thousands of labeled training samples +from mobile clients; yet mobile users often lack willingness or knowledge to +label their data. Such an inadequacy of data labels is known as a few-shot +scenario; it becomes the key blocker for mobile NLP applications. + For the first time, this work investigates federated NLP in the few-shot +scenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling and +prompt learning, we first establish a training pipeline that delivers +competitive accuracy when only 0.05% (fewer than 100) of the training data is +labeled and the remaining is unlabeled. To instantiate the workflow, we further +present a system FeS, addressing the high execution cost with novel designs. +(1) Curriculum pacing, which injects pseudo labels to the training workflow at +a rate commensurate to the learning progress; (2) Representational diversity, a +mechanism for selecting the most learnable data, only for which pseudo labels +will be generated; (3) Co-planning of a model's training depth and layer +capacity. Together, these designs reduce the training delay, client energy, and +network traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$, +respectively. Through algorithm/system co-design, FFNLP demonstrates that FL +can apply to challenging settings where most training samples are unlabeled. +" +FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP Tasks,Weilong Dong,http://arxiv.org/pdf/2212.08354v1.pdf,2022-12-16,['cs.cl'],2212.08354v1.pdf," Massively multi-task learning with large language models has recently made +substantial progress on few-shot generalization. However, this is usually +performed in a centralized learning fashion, ignoring the privacy sensitivity +issue of (annotated) data used in multiple tasks. To mitigate this issue, we +propose FewFedWeight, a few-shot federated learning framework across multiple +tasks, to achieve the best of both worlds: privacy preservation and cross-task +generalization. FewFedWeight trains client models in isolated devices without +sharing data. It broadcasts the global model in the server to each client and +produces pseudo data for clients so that knowledge from the global model can be +explored to enhance few-shot learning of each client model. An energy-based +algorithm is further proposed to weight pseudo samples in order to reduce the +negative impact of noise from the generated pseudo data. Adaptive model weights +of client models are also tuned according to their performance. We use these +model weights to dynamically aggregate client models to update the global +model. Experiments on 118 NLP tasks show that FewFedWeight can significantly +improve the performance of client models on 61% tasks with an average +performance improvement rate of 30.5% over the baseline and substantially +outperform FedAvg and other decentralized learning methods. +" +Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning,Chris Lengerich,http://arxiv.org/pdf/2212.11353v1.pdf,2022-12-21,"['cs.cl', 'cs.lg']",2212.11353v1.pdf," Traditional approaches to RL have focused on learning decision policies +directly from episodic decisions, while slowly and implicitly learning the +semantics of compositional representations needed for generalization. 
While +some approaches have been adopted to refine representations via auxiliary +self-supervised losses while simultaneously learning decision policies, +learning compositional representations from hand-designed and +context-independent self-supervised losses (multi-view) still adapts relatively +slowly to the real world, which contains many non-IID subspaces requiring rapid +distribution shift in both time and spatial attention patterns at varying +levels of abstraction. In contrast, supervised language model cascades have +shown the flexibility to adapt to many diverse manifolds, and hints of +self-learning needed for autonomous task transfer. However, to date, transfer +methods for language models like few-shot learning and fine-tuning still +require human supervision and transfer learning using self-learning methods has +been underexplored. We propose a self-supervised loss policy called contrastive +distillation which manifests latent variables with high mutual information with +both source and target tasks from weights to tokens. We show how this +outperforms common methods of transfer learning and suggests a useful design +axis of trading off compute for generalizability for online transfer. +Contrastive distillation is improved through sampling from memory and suggests +a simple algorithm for more efficiently sampling negative examples for +contrastive losses than random sampling. +" +Exploring Efficient Few-shot Adaptation for Vision Transformers,Chengming Xu,http://arxiv.org/pdf/2301.02419v1.pdf,2023-01-06,['cs.cv'],2301.02419v1.pdf," The task of Few-shot Learning (FSL) aims to do the inference on novel +categories containing only few labeled examples, with the help of knowledge +learned from base categories containing abundant labeled training samples. +While there are numerous works into FSL task, Vision Transformers (ViTs) have +rarely been taken as the backbone to FSL with few trials focusing on naive +finetuning of whole backbone or classification layer.} Essentially, despite +ViTs have been shown to enjoy comparable or even better performance on other +vision tasks, it is still very nontrivial to efficiently finetune the ViTs in +real-world FSL scenarios. To this end, we propose a novel efficient Transformer +Tuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The key +novelties come from the newly presented Attentive Prefix Tuning (APT) and +Domain Residual Adapter (DRA) for the task and backbone tuning, individually. +Specifically, in APT, the prefix is projected to new key and value pairs that +are attached to each self-attention layer to provide the model with +task-specific information. Moreover, we design the DRA in the form of learnable +offset vectors to handle the potential domain gaps between base and novel data. +To ensure the APT would not deviate from the initial task-specific information +much, we further propose a novel prototypical regularization, which maximizes +the similarity between the projected distribution of prefix and initial +prototypes, regularizing the update procedure. Our method receives outstanding +performance on the challenging Meta-Dataset. We conduct extensive experiments +to show the efficacy of our model. 
+" +Unleashing the Power of Shared Label Structures for Human Activity Recognition,Xiyuan Zhang,http://arxiv.org/pdf/2301.03462v2.pdf,2023-01-01,"['cs.lg', 'cs.ai', 'eess.sp']",2301.03462v2.pdf," Current human activity recognition (HAR) techniques regard activity labels as +integer class IDs without explicitly modeling the semantics of class labels. We +observe that different activity names often have shared structures. For +example, ""open door"" and ""open fridge"" both have ""open"" as the action; ""kicking +soccer ball"" and ""playing tennis ball"" both have ""ball"" as the object. Such +shared structures in label names can be translated to the similarity in sensory +data and modeling common structures would help uncover knowledge across +different activities, especially for activities with limited samples. In this +paper, we propose SHARE, a HAR framework that takes into account shared +structures of label names for different activities. To exploit the shared +structures, SHARE comprises an encoder for extracting features from input +sensory time series and a decoder for generating label names as a token +sequence. We also propose three label augmentation techniques to help the model +more effectively capture semantic structures across activities, including a +basic token-level augmentation, and two enhanced embedding-level and +sequence-level augmentations utilizing the capabilities of pre-trained models. +SHARE outperforms state-of-the-art HAR models in extensive experiments on seven +HAR benchmark datasets. We also evaluate in few-shot learning and label +imbalance settings and observe even more significant performance gap. +" +"See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning",Zhenfang Chen,http://arxiv.org/pdf/2301.05226v1.pdf,2023-01-12,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2301.05226v1.pdf," Large pre-trained vision and language models have demonstrated remarkable +capacities for various tasks. However, solving the knowledge-based visual +reasoning tasks remains challenging, which requires a model to comprehensively +understand image content, connect the external world knowledge, and perform +step-by-step reasoning to answer the questions correctly. To this end, we +propose a novel framework named Interactive Prompting Visual Reasoner (IPVR) +for few-shot knowledge-based visual reasoning. IPVR contains three stages, see, +think and confirm. The see stage scans the image and grounds the visual concept +candidates with a visual perception model. The think stage adopts a pre-trained +large language model (LLM) to attend to the key concepts from candidates +adaptively. It then transforms them into text context for prompting with a +visual captioning model and adopts the LLM to generate the answer. The confirm +stage further uses the LLM to generate the supporting rationale to the answer, +verify the generated rationale with a cross-modality classifier and ensure that +the rationale can infer the predicted output consistently. We conduct +experiments on a range of knowledge-based visual reasoning datasets. We found +our IPVR enjoys several benefits, 1). it achieves better performance than the +previous few-shot learning baselines; 2). it enjoys the total transparency and +trustworthiness of the whole reasoning process by providing rationales for each +reasoning step; 3). it is computation-efficient compared with other fine-tuning +baselines. 
+" +Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning,Xinyi Wang,http://arxiv.org/pdf/2301.11916v3.pdf,2023-01-27,"['cs.cl', 'cs.ai', 'cs.lg']",2301.11916v3.pdf," In recent years, pre-trained large language models (LLMs) have demonstrated +remarkable efficiency in achieving an inference-time few-shot learning +capability known as in-context learning. However, existing literature has +highlighted the sensitivity of this capability to the selection of few-shot +demonstrations. Current understandings of the underlying mechanisms by which +this capability arises from regular language model pretraining objectives +remain disconnected from the real-world LLMs. This study aims to examine the +in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs +as latent variable models. On this premise, we propose an algorithm to select +optimal demonstrations from a set of annotated data with a small LM, and then +directly generalize the selected demonstrations to larger LMs. We demonstrate +significant improvement over baselines, averaged over eight GPT models on eight +real-world text classification datasets. We also demonstrate the real-world +usefulness of our algorithm on GSM8K, a math word problem dataset. Our +empirical findings support our hypothesis that LLMs implicitly infer a latent +variable containing task information. +" +Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment,Hao Liu,http://arxiv.org/pdf/2302.00902v2.pdf,2023-02-02,"['cs.lg', 'cs.cl', 'cs.cv']",2302.00902v2.pdf," Recent progress in scaling up large language models has shown impressive +capabilities in performing few-shot learning across a wide range of text-based +tasks. However, a key limitation is that these language models fundamentally +lack visual perception - a crucial attribute needed to extend these models to +be able to interact with the real world and solve vision tasks, such as in +visual-question answering and robotics. Prior works have largely connected +image to text through pretraining and/or fine-tuning on curated image-text +datasets, which can be a costly and expensive process. In order to resolve this +limitation, we propose a simple yet effective approach called +Language-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns to +align text-image data in an unsupervised manner by leveraging pretrained +language models (e.g., BERT, RoBERTa). Our main idea is to encode image as +sequences of text tokens by directly quantizing image embeddings using a +pretrained language codebook. We then apply random masking followed by a BERT +model, and have the decoder reconstruct the original image from BERT predicted +text token embeddings. By doing so, LQAE learns to represent similar images +with similar clusters of text tokens, thereby aligning these two modalities +without the use of aligned text-image pairs. This enables few-shot image +classification with large language models (e.g., GPT-3) as well as linear +classification of images based on BERT text features. To the best of our +knowledge, our work is the first work that uses unaligned images for multimodal +tasks by leveraging the power of pretrained language models. 
+" +The unreasonable effectiveness of few-shot learning for machine translation,Xavier Garcia,http://arxiv.org/pdf/2302.01398v1.pdf,2023-02-02,['cs.cl'],2302.01398v1.pdf," We demonstrate the potential of few-shot translation systems, trained with +unpaired language data, for both high and low-resource language pairs. We show +that with only 5 examples of high-quality translation data shown at inference, +a transformer decoder-only model trained solely with self-supervised learning, +is able to match specialized supervised state-of-the-art models as well as more +general commercial translation systems. In particular, we outperform the best +performing system on the WMT'21 English - Chinese news translation task by only +using five examples of English - Chinese parallel data at inference. Moreover, +our approach in building these models does not necessitate joint multilingual +training or back-translation, is conceptually simple and shows the potential to +extend to the multilingual setting. Furthermore, the resulting models are two +orders of magnitude smaller than state-of-the-art language models. We then +analyze the factors which impact the performance of few-shot translation +systems, and highlight that the quality of the few-shot demonstrations heavily +determines the quality of the translations generated by our models. Finally, we +show that the few-shot paradigm also provides a way to control certain +attributes of the translation -- we show that we are able to control for +regional varieties and formality using only a five examples at inference, +paving the way towards controllable machine translation systems. +" +CrossCodeBench: Benchmarking Cross-Task Generalization of Source Code Models,Changan Niu,http://arxiv.org/pdf/2302.04030v2.pdf,2023-02-08,"['cs.se', 'cs.ai']",2302.04030v2.pdf," Despite the recent advances showing that a model pre-trained on large-scale +source code data is able to gain appreciable generalization capability, it +still requires a sizeable amount of data on the target task for fine-tuning. +And the effectiveness of the model generalization is largely affected by the +size and quality of the fine-tuning data, which is detrimental for target tasks +with limited or unavailable resources. Therefore, cross-task generalization, +with the goal of improving the generalization of the model to unseen tasks that +have not been seen before, is of strong research and application value. + In this paper, we propose a large-scale benchmark that includes 216 existing +code-related tasks. Then, we annotate each task with the corresponding meta +information such as task description and instruction, which contains detailed +information about the task and a solution guide. This also helps us to easily +create a wide variety of ``training/evaluation'' task splits to evaluate the +various cross-task generalization capabilities of the model. Then we perform +some preliminary experiments to demonstrate that the cross-task generalization +of models can be largely improved by in-context learning methods such as +few-shot learning and learning from task instructions, which shows the +promising prospects of conducting cross-task learning research on our +benchmark. We hope that the collection of the datasets and our benchmark will +facilitate future work that is not limited to cross-task generalization. 
+" +Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning,Zhuolin Yang,http://arxiv.org/pdf/2302.04858v2.pdf,2023-02-09,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.ir', 'cs.lg']",2302.04858v2.pdf," Augmenting pretrained language models (LMs) with a vision encoder (e.g., +Flamingo) has obtained the state-of-the-art results in image-to-text +generation. However, these models store all the knowledge within their +parameters, thus often requiring enormous model parameters to model the +abundant visual concepts and very rich textual descriptions. Additionally, they +are inefficient in incorporating new data, requiring a computational-expensive +fine-tuning process. In this work, we introduce a Retrieval-augmented Visual +Language Model, Re-ViLM, built upon the Flamingo, that supports retrieving the +relevant knowledge from the external database for zero and in-context few-shot +image-to-text generations. By storing certain knowledge explicitly in the +external database, our approach reduces the number of model parameters and can +easily accommodate new data during evaluation by simply updating the database. +We also construct an interleaved image and text data that facilitates +in-context few-shot learning capabilities. We demonstrate that Re-ViLM +significantly boosts performance for image-to-text generation tasks, especially +for zero-shot and few-shot generation in out-of-domain settings with 4 times +less parameters compared with baseline methods. +" +Mask-guided BERT for Few Shot Text Classification,Wenxiong Liao,http://arxiv.org/pdf/2302.10447v3.pdf,2023-02-21,"['cs.cl', 'cs.ai']",2302.10447v3.pdf," Transformer-based language models have achieved significant success in +various domains. However, the data-intensive nature of the transformer +architecture requires much labeled data, which is challenging in low-resource +scenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is the +difficulty of training robust models on small amounts of samples, which +frequently leads to overfitting. Here we present Mask-BERT, a simple and +modular framework to help BERT-based architectures tackle FSL. The proposed +approach fundamentally differs from existing FSL strategies such as prompt +tuning and meta-learning. The core idea is to selectively apply masks on text +inputs and filter out irrelevant information, which guides the model to focus +on discriminative tokens that influence prediction results. In addition, to +make the text representations from different categories more separable and the +text representations from the same category more compact, we introduce a +contrastive learning loss function. Experimental results on public-domain +benchmark datasets demonstrate the effectiveness of Mask-BERT. +" +Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start Recommendation,Minchang Kim,http://arxiv.org/pdf/2302.14640v2.pdf,2023-02-28,"['cs.ir', 'cs.lg']",2302.14640v2.pdf," Sequential recommenders have made great strides in capturing a user's +preferences. Nevertheless, the cold-start recommendation remains a fundamental +challenge as they typically involve limited user-item interactions for +personalization. Recently, gradient-based meta-learning approaches have emerged +in the sequential recommendation field due to their fast adaptation and +easy-to-integrate abilities. The meta-learning algorithms formulate the +cold-start recommendation as a few-shot learning problem, where each user is +represented as a task to be adapted. 
While meta-learning algorithms generally +assume that task-wise samples are evenly distributed over classes or values, +user-item interactions in real-world applications do not conform to such a +distribution (e.g., watching favorite videos multiple times, leaving only +positive ratings without any negative ones). Consequently, imbalanced user +feedback, which accounts for the majority of task training data, may dominate +the user adaptation process and prevent meta-learning algorithms from learning +meaningful meta-knowledge for personalized recommendations. To alleviate this +limitation, we propose a novel sequential recommendation framework based on +gradient-based meta-learning that captures the imbalanced rating distribution +of each user and computes adaptive loss for user-specific learning. Our work is +the first to tackle the impact of imbalanced ratings in cold-start sequential +recommendation scenarios. Through extensive experiments conducted on real-world +datasets, we demonstrate the effectiveness of our framework. +" +"Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners",Renrui Zhang,http://arxiv.org/pdf/2303.02151v1.pdf,2023-03-03,"['cs.cv', 'cs.cl']",2303.02151v1.pdf," Visual recognition in low-data regimes requires deep neural networks to learn +generalized representations from limited training samples. Recently, CLIP-based +methods have shown promising few-shot performance benefited from the +contrastive language-image pre-training. We then question, if the more diverse +pre-training knowledge can be cascaded to further assist few-shot +representation learning. In this paper, we propose CaFo, a Cascade of +Foundation models that incorporates diverse prior knowledge of various +pre-training paradigms for better few-shot learning. Our CaFo incorporates +CLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge, +DALL-E's vision-generative knowledge, and GPT-3's language-generative +knowledge. Specifically, CaFo works by 'Prompt, Generate, then Cache'. Firstly, +we leverage GPT-3 to produce textual inputs for prompting CLIP with rich +downstream linguistic semantics. Then, we generate synthetic images via DALL-E +to expand the few-shot training data without any manpower. At last, we +introduce a learnable cache model to adaptively blend the predictions from CLIP +and DINO. By such collaboration, CaFo can fully unleash the potential of +different pre-training methods and unify them to perform state-of-the-art for +few-shot classification. Code is available at +https://github.com/ZrrSkywalker/CaFo. +" +Knowledge-augmented Few-shot Visual Relation Detection,Tianyu Yu,http://arxiv.org/pdf/2303.05342v1.pdf,2023-03-09,"['cs.cv', 'cs.ai']",2303.05342v1.pdf," Visual Relation Detection (VRD) aims to detect relationships between objects +for image understanding. Most existing VRD methods rely on thousands of +training samples of each relationship to achieve satisfactory performance. Some +recent papers tackle this problem by few-shot learning with elaborately +designed pipelines and pre-trained word vectors. However, the performance of +existing few-shot VRD models is severely hampered by the poor generalization +capability, as they struggle to handle the vast semantic diversity of visual +relationships. Nonetheless, humans have the ability to learn new relationships +with just few examples based on their knowledge. 
Inspired by this, we devise a +knowledge-augmented, few-shot VRD framework leveraging both textual knowledge +and visual relation knowledge to improve the generalization ability of few-shot +VRD. The textual knowledge and visual relation knowledge are acquired from a +pre-trained language model and an automatically constructed visual relation +knowledge graph, respectively. We extensively validate the effectiveness of our +framework. Experiments conducted on three benchmarks from the commonly used +Visual Genome dataset show that our performance surpasses existing +state-of-the-art models with a large improvement. +" +Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models,Juncheng Li,http://arxiv.org/pdf/2303.06571v2.pdf,2023-03-12,['cs.cv'],2303.06571v2.pdf," Prompt tuning, a recently emerging paradigm, enables the powerful +vision-language pre-training models to adapt to downstream tasks in a parameter +-- and data -- efficient way, by learning the ``soft prompts'' to condition +frozen pre-training models. Though effective, it is particularly problematic in +the few-shot scenario, where prompt tuning performance is sensitive to the +initialization and requires a time-consuming process to find a good +initialization, thus restricting the fast adaptation ability of the +pre-training models. In addition, prompt tuning could undermine the +generalizability of the pre-training models, because the learnable prompt +tokens are easy to overfit to the limited training samples. To address these +issues, we introduce a novel Gradient-RegulAted Meta-prompt learning (GRAM) +framework that jointly meta-learns an efficient soft prompt initialization for +better adaptation and a lightweight gradient regulating function for strong +cross-domain generalizability in a meta-learning paradigm using only the +unlabeled image-text pre-training data. Rather than designing a specific prompt +tuning method, our GRAM can be easily incorporated into various prompt tuning +methods in a model-agnostic way, and comprehensive experiments show that GRAM +brings about consistent improvement for them in several settings (i.e., +few-shot learning, cross-domain generalization, cross-dataset generalization, +etc.) over 11 datasets. Further, experiments show that GRAM enables the +orthogonal methods of textual and visual prompt tuning to work in a +mutually-enhanced way, offering better generalizability beyond the uni-modal +prompt tuning methods. +" +Decomposed Prototype Learning for Few-Shot Scene Graph Generation,Xingchen Li,http://arxiv.org/pdf/2303.10863v1.pdf,2023-03-20,['cs.cv'],2303.10863v1.pdf," Today's scene graph generation (SGG) models typically require abundant manual +annotations to learn new predicate types. Thus, it is difficult to apply them +to real-world applications with a long-tailed distribution of predicates. In +this paper, we focus on a new promising task of SGG: few-shot SGG (FSSGG). +FSSGG encourages models to be able to quickly transfer previous knowledge and +recognize novel predicates well with only a few examples. Although many +advanced approaches have achieved great success on few-shot learning (FSL) +tasks, straightforwardly extending them into FSSGG is not applicable due to two +intrinsic characteristics of predicate concepts: 1) Each predicate category +commonly has multiple semantic meanings under different contexts. 2) The visual +appearance of relation triplets with the same predicate differs greatly under +different subject-object pairs. 
Both issues make it hard to model conventional +latent representations for predicate categories with state-of-the-art FSL +methods. To this end, we propose a novel Decomposed Prototype Learning (DPL). +Specifically, we first construct a decomposable prototype space to capture +intrinsic visual patterns of subjects and objects for predicates, and enhance +their feature representations with these decomposed prototypes. Then, we devise +an intelligent metric learner to assign adaptive weights to each support sample +by considering the relevance of their subject-object pairs. We further re-split +the VG dataset and compare DPL with various FSL methods to benchmark this task. +Extensive results show that DPL achieves excellent performance in both base and +novel categories. +" +Supervised Masked Knowledge Distillation for Few-Shot Transformers,Han Lin,http://arxiv.org/pdf/2303.15466v2.pdf,2023-03-25,"['cs.cv', 'cs.ai']",2303.15466v2.pdf," Vision Transformers (ViTs) emerge to achieve impressive performance on many +data-abundant computer vision tasks by capturing long-range dependencies among +local features. However, under few-shot learning (FSL) settings on small +datasets with only a few labeled data, ViT tends to overfit and suffers from +severe performance degradation due to its absence of CNN-alike inductive bias. +Previous works in FSL avoid such problem either through the help of +self-supervised auxiliary losses, or through the dextile uses of label +information under supervised settings. But the gap between self-supervised and +supervised few-shot Transformers is still unfilled. Inspired by recent advances +in self-supervised knowledge distillation and masked image modeling (MIM), we +propose a novel Supervised Masked Knowledge Distillation model (SMKD) for +few-shot Transformers which incorporates label information into +self-distillation frameworks. Compared with previous self-supervised methods, +we allow intra-class knowledge distillation on both class and patch tokens, and +introduce the challenging task of masked patch tokens reconstruction across +intra-class images. Experimental results on four few-shot classification +benchmark datasets show that our method with simple design outperforms previous +methods by a large margin and achieves a new start-of-the-art. Detailed +ablation studies confirm the effectiveness of each component of our model. Code +for this paper is available here: https://github.com/HL-hanlin/SMKD. +" +"Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text",Wanrong Zhu,http://arxiv.org/pdf/2304.06939v3.pdf,2023-04-14,"['cs.cv', 'cs.cl']",2304.06939v3.pdf," In-context vision and language models like Flamingo support arbitrarily +interleaved sequences of images and text as input. This format not only enables +few-shot learning via interleaving independent supervised (image, text) +examples, but also, more complex prompts involving interaction between images, +e.g., ""What do image A and image B have in common?"" To support this interface, +pretraining occurs over web corpora that similarly contain interleaved +images+text. To date, however, large-scale data of this form have not been +publicly available. + We release Multimodal C4, an augmentation of the popular text-only C4 corpus +with images interleaved. We use a linear assignment algorithm to place images +into longer bodies of text using CLIP features, a process that we show +outperforms alternatives. Multimodal C4 spans everyday topics like cooking, +travel, technology, etc. 
A manual inspection of a random sample of documents +shows that a vast majority (88%) of images are topically relevant, and that +linear assignment frequently selects individual sentences specifically +well-aligned with each image (80%). After filtering NSFW images, ads, etc., the +resulting corpus consists of 101.2M documents with 571M images interleaved in +43B English tokens. +" +A Survey on Few-Shot Class-Incremental Learning,Songsong Tian,http://arxiv.org/pdf/2304.08130v2.pdf,2023-04-17,['cs.cv'],2304.08130v2.pdf," Large deep learning models are impressive, but they struggle when real-time +data is not available. Few-shot class-incremental learning (FSCIL) poses a +significant challenge for deep neural networks to learn new tasks from just a +few labeled samples without forgetting the previously learned ones. This setup +easily leads to catastrophic forgetting and overfitting problems, severely +affecting model performance. Studying FSCIL helps overcome deep learning model +limitations on data volume and acquisition time, while improving practicality +and adaptability of machine learning models. This paper provides a +comprehensive survey on FSCIL. Unlike previous surveys, we aim to synthesize +few-shot learning and incremental learning, focusing on introducing FSCIL from +two perspectives, while reviewing over 30 theoretical research studies and more +than 20 applied research studies. From the theoretical perspective, we provide +a novel categorization approach that divides the field into five subcategories, +including traditional machine learning methods, meta-learning based methods, +feature and feature space-based methods, replay-based methods, and dynamic +network structure-based methods. We also evaluate the performance of recent +theoretical research on benchmark datasets of FSCIL. From the application +perspective, FSCIL has achieved impressive achievements in various fields of +computer vision such as image classification, object detection, and image +segmentation, as well as in natural language processing and graph. We summarize +the important applications. Finally, we point out potential future research +directions, including applications, problem setups, and theory development. +Overall, this paper offers a comprehensive analysis of the latest advances in +FSCIL from a methodological, performance, and application perspective. +" +Unified Quantum State Tomography and Hamiltonian Learning Using Transformer Models: A Language-Translation-Like Approach for Quantum Systems,Zheng An,http://arxiv.org/pdf/2304.12010v1.pdf,2023-04-24,['quant-ph'],2304.12010v1.pdf," Schr\""odinger's equation serves as a fundamental component in characterizing +quantum systems, wherein both quantum state tomography and Hamiltonian learning +are instrumental in comprehending and interpreting quantum systems. While +numerous techniques exist for carrying out state tomography and learning +Hamiltonians individually, no method has been developed to combine these two +aspects. In this study, we introduce a new approach that employs the attention +mechanism in transformer models to effectively merge quantum state tomography +and Hamiltonian learning. By carefully choosing and preparing the training +data, our method integrates both tasks without altering the model's +architecture, allowing the model to effectively learn the intricate +relationships between quantum states and Hamiltonian. 
We also demonstrate the +effectiveness of our approach across various quantum systems, ranging from +simple 2-qubit cases to more involved 2D antiferromagnetic Heisenberg +structures. The data collection process is streamlined, as it only necessitates +a one-way generation process beginning with state tomography. Furthermore, the +scalability and few-shot learning capabilities of our method could potentially +minimize the resources required for characterizing and optimizing quantum +systems. Our research provides valuable insights into the relationship between +Hamiltonian structure and quantum system behavior, fostering opportunities for +additional studies on quantum systems and the advancement of quantum +computation and associated technologies. +" +Analogy-Forming Transformers for Few-Shot 3D Parsing,Nikolaos Gkanatsios,http://arxiv.org/pdf/2304.14382v2.pdf,2023-04-27,"['cs.cv', 'cs.ai', 'cs.lg']",2304.14382v2.pdf," We present Analogical Networks, a model that encodes domain knowledge +explicitly, in a collection of structured labelled 3D scenes, in addition to +implicitly, as model parameters, and segments 3D object scenes with analogical +reasoning: instead of mapping a scene to part segments directly, our model +first retrieves related scenes from memory and their corresponding part +structures, and then predicts analogous part structures for the input scene, +via an end-to-end learnable modulation mechanism. By conditioning on more than +one retrieved memories, compositions of structures are predicted, that mix and +match parts across the retrieved memories. One-shot, few-shot or many-shot +learning are treated uniformly in Analogical Networks, by conditioning on the +appropriate set of memories, whether taken from a single, few or many memory +exemplars, and inferring analogous parses. We show Analogical Networks are +competitive with state-of-the-art 3D segmentation transformers in many-shot +settings, and outperform them, as well as existing paradigms of meta-learning +and few-shot learning, in few-shot settings. Analogical Networks successfully +segment instances of novel object categories simply by expanding their memory, +without any weight updates. Our code and models are publicly available in the +project webpage: http://analogicalnets.github.io/. +" +HQP: A Human-Annotated Dataset for Detecting Online Propaganda,Abdurahman Maarouf,http://arxiv.org/pdf/2304.14931v2.pdf,2023-04-28,['cs.cl'],2304.14931v2.pdf," Online propaganda poses a severe threat to the integrity of societies. +However, existing datasets for detecting online propaganda have a key +limitation: they were annotated using weak labels that can be noisy and even +incorrect. To address this limitation, our work makes the following +contributions: (1) We present HQP: a novel dataset (N=30,000) for detecting +online propaganda with high-quality labels. To the best of our knowledge, HQP +is the first dataset for detecting online propaganda that was created through +human annotation. (2) We show empirically that state-of-the-art language models +fail in detecting online propaganda when trained with weak labels (AUC: 64.03). +In contrast, state-of-the-art language models can accurately detect online +propaganda when trained with our high-quality labels (AUC: 92.25), which is an +improvement of ~44%. (3) To address the cost of labeling, we extend our work to +few-shot learning. 
Specifically, we show that prompt-based learning using a +small sample of high-quality labels can still achieve a reasonable performance +(AUC: 80.27). Finally, we discuss implications for the NLP community to balance +the cost and quality of labeling. Crucially, our work highlights the importance +of high-quality labels for sensitive NLP tasks such as propaganda detection. +" +Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment,Zhen Zhang,http://arxiv.org/pdf/2305.03510v2.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.03510v2.pdf," Pre-trained vision and language models such as CLIP have witnessed remarkable +success in connecting images and texts with a primary focus on English texts. +Despite recent efforts to extend CLIP to support other languages, disparities +in performance among different languages have been observed due to uneven +resource availability. Additionally, current cross-lingual transfer methods of +those pre-trained models would consume excessive resources for a large number +of languages. Therefore, we propose a new parameter-efficient cross-lingual +transfer learning framework that utilizes a translation-based alignment method +to mitigate multilingual disparities and explores parameter-efficient +fine-tuning methods for parameter-efficient cross-lingual transfer. Extensive +experiments on XTD and Multi30K datasets, covering 11 languages under +zero-shot, few-shot, and full-dataset learning scenarios, show that our +framework significantly reduces the multilingual disparities among languages +and improves cross-lingual transfer results, especially in low-resource +scenarios, while only keeping and fine-tuning an extremely small number of +parameters compared to the full model (e.g., Our framework only requires 0.16\% +additional parameters of a full-model for each language in the few-shot +learning scenario). The codes are available at +\url{https://github.com/eric-ai-lab/PECTVLM}. The codes are available at +\url{https://github.com/eric-ai-lab/PECTVLM}. +" +CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors,Peng Li,http://arxiv.org/pdf/2305.05711v2.pdf,2023-05-09,"['cs.cl', 'cs.ai']",2305.05711v2.pdf," Large language models (LLMs) pre-trained on massive corpora have demonstrated +impressive few-shot learning ability on many NLP tasks. A common practice is to +recast the task into a text-to-text format such that generative LLMs of natural +language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is +nontrivial to perform information extraction (IE) tasks with NL-LLMs since the +output of the IE task is usually structured and therefore is hard to be +converted into plain text. In this paper, we propose to recast the structured +output in the form of code instead of natural language and utilize generative +LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, +named entity recognition and relation extraction. In contrast to NL-LLMs, we +show that Code-LLMs can be well-aligned with these IE tasks by designing +code-style prompts and formulating these IE tasks as code generation tasks. +Experiment results on seven benchmarks show that our method consistently +outperforms fine-tuning moderate-size pre-trained models specially designed for +IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further +conduct a series of in-depth analyses to demonstrate the merits of leveraging +Code-LLMs for IE tasks. 
+" +Qualifying Chinese Medical Licensing Examination with Knowledge Enhanced Generative Pre-training Model,Jiageng Wu,http://arxiv.org/pdf/2305.10163v2.pdf,2023-05-17,"['cs.cl', 'cs.ai', 'cs.cy']",2305.10163v2.pdf," Generative Pre-Training (GPT) models like ChatGPT have demonstrated +exceptional performance in various Natural Language Processing (NLP) tasks. +Although ChatGPT has been integrated into the overall workflow to boost +efficiency in many domains, the lack of flexibility in the finetuning process +hinders its applications in areas that demand extensive domain expertise and +semantic knowledge, such as healthcare. In this paper, we evaluate ChatGPT on +the China National Medical Licensing Examination (CNMLE) and propose a novel +approach to improve ChatGPT from two perspectives: integrating medical domain +knowledge and enabling few-shot learning. By using a simple but effective +retrieval method, medical background knowledge is extracted as semantic +instructions to guide the inference of ChatGPT. Similarly, relevant medical +questions are identified and fed as demonstrations to ChatGPT. Experimental +results show that directly applying ChatGPT fails to qualify the CNMLE at a +score of 51 (i.e., only 51\% of questions are answered correctly). While our +knowledge-enhanced model achieves a high score of 70 on CNMLE-2022 which not +only passes the qualification but also surpasses the average score of humans +(61). This research demonstrates the potential of knowledge-enhanced ChatGPT to +serve as versatile medical assistants, capable of analyzing real-world medical +problems in a more accessible, user-friendly, and adaptable manner. +" +PointGPT: Auto-regressively Generative Pre-training from Point Clouds,Guangyan Chen,http://arxiv.org/pdf/2305.11487v2.pdf,2023-05-19,['cs.cv'],2305.11487v2.pdf," Large language models (LLMs) based on the generative pre-training transformer +(GPT) have demonstrated remarkable effectiveness across a diverse range of +downstream tasks. Inspired by the advancements of the GPT, we present PointGPT, +a novel approach that extends the concept of GPT to point clouds, addressing +the challenges associated with disorder properties, low information density, +and task gaps. Specifically, a point cloud auto-regressive generation task is +proposed to pre-train transformer models. Our method partitions the input point +cloud into multiple point patches and arranges them in an ordered sequence +based on their spatial proximity. Then, an extractor-generator based +transformer decoder, with a dual masking strategy, learns latent +representations conditioned on the preceding point patches, aiming to predict +the next one in an auto-regressive manner. Our scalable approach allows for +learning high-capacity models that generalize well, achieving state-of-the-art +performance on various downstream tasks. In particular, our approach achieves +classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the +ScanObjectNN dataset, outperforming all other transformer models. Furthermore, +our method also attains new state-of-the-art accuracies on all four few-shot +learning benchmarks. +" +A Survey of Diffusion Models in Natural Language Processing,Hao Zou,http://arxiv.org/pdf/2305.14671v2.pdf,2023-05-24,['cs.cl'],2305.14671v2.pdf," This survey paper provides a comprehensive review of the use of diffusion +models in natural language processing (NLP). 
Diffusion models are a class of +mathematical models that aim to capture the diffusion of information or signals +across a network or manifold. In NLP, diffusion models have been used in a +variety of applications, such as natural language generation, sentiment +analysis, topic modeling, and machine translation. This paper discusses the +different formulations of diffusion models used in NLP, their strengths and +limitations, and their applications. We also perform a thorough comparison +between diffusion models and alternative generative models, specifically +highlighting the autoregressive (AR) models, while also examining how diverse +architectures incorporate the Transformer in conjunction with diffusion models. +Compared to AR models, diffusion models have significant advantages for +parallel generation, text interpolation, token-level controls such as syntactic +structures and semantic contents, and robustness. Exploring further +permutations of integrating Transformers into diffusion models would be a +valuable pursuit. Also, the development of multimodal diffusion models and +large-scale diffusion language models with notable capabilities for few-shot +learning would be important directions for the future advance of diffusion +models in NLP. +" +Benchmarking Arabic AI with Large Language Models,Ahmed Abdelali,http://arxiv.org/pdf/2305.14982v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",2305.14982v1.pdf," With large Foundation Models (FMs), language technologies (AI in general) are +entering a new paradigm: eliminating the need for developing large-scale +task-specific datasets and supporting a variety of tasks through set-ups +ranging from zero-shot to few-shot learning. However, understanding FMs +capabilities requires a systematic benchmarking effort by comparing FMs +performance with the state-of-the-art (SOTA) task-specific models. With that +goal, past work focused on the English language and included a few efforts with +multiple languages. Our study contributes to ongoing research by evaluating FMs +performance for standard Arabic NLP and Speech processing, including a range of +tasks from sequence tagging to content classification across diverse domains. +We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM, +addressing 33 unique tasks using 59 publicly available datasets resulting in 96 +test setups. For a few tasks, FMs performs on par or exceeds the performance of +the SOTA models but for the majority it under-performs. Given the importance of +prompt for the FMs performance, we discuss our prompt strategies in detail and +elaborate on our findings. Our future work on Arabic AI will explore few-shot +prompting, expand the range of tasks, and investigate additional open-source +models. +" +Sentiment Analysis in the Era of Large Language Models: A Reality Check,Wenxuan Zhang,http://arxiv.org/pdf/2305.15005v1.pdf,2023-05-24,['cs.cl'],2305.15005v1.pdf," Sentiment analysis (SA) has been a long-standing research area in natural +language processing. It can offer rich insights into human sentiments and +opinions and has thus seen considerable interest from both academia and +industry. With the advent of large language models (LLMs) such as ChatGPT, +there is a great potential for their employment on SA problems. However, the +extent to which existing LLMs can be leveraged for different sentiment analysis +tasks remains unclear. 
This paper aims to provide a comprehensive investigation +into the capabilities of LLMs in performing various sentiment analysis tasks, +from conventional sentiment classification to aspect-based sentiment analysis +and multifaceted analysis of subjective texts. We evaluate performance across +13 tasks on 26 datasets and compare the results against small language models +(SLMs) trained on domain-specific datasets. Our study reveals that while LLMs +demonstrate satisfactory performance in simpler tasks, they lag behind in more +complex tasks requiring deeper understanding or structured sentiment +information. However, LLMs significantly outperform SLMs in few-shot learning +settings, suggesting their potential when annotation resources are limited. We +also highlight the limitations of current evaluation practices in assessing +LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a +more comprehensive and realistic evaluation. Data and code during our +investigations are available at +\url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}. +" +Impact of Large Language Models on Generating Software Specifications,Danning Xie,http://arxiv.org/pdf/2306.03324v2.pdf,2023-06-06,['cs.se'],2306.03324v2.pdf," Software specifications are essential for ensuring the reliability of +software systems. Existing specification extraction approaches, however, suffer +from limited generalizability and require manual efforts. The recent emergence +of Large Language Models (LLMs), which have been successfully applied to +numerous software engineering tasks, offers a promising avenue for automating +this process. In this paper, we conduct the first empirical study to evaluate +the capabilities of LLMs for generating software specifications from software +comments or documentation. We evaluate LLMs' performance with Few Shot Learning +(FSL), enabling LLMs to generalize from a small number of examples, as well as +different prompt construction strategies, and compare the performance of LLMs +with traditional approaches. Additionally, we conduct a comparative diagnosis +of the failure cases from both LLMs and traditional methods, identifying their +unique strengths and weaknesses. Lastly, we conduct extensive experiments on 15 +state of the art LLMs, evaluating their performance and cost effectiveness for +generating software specifications. + Our results show that with FSL, LLMs outperform traditional methods (by +5.6%), and more sophisticated prompt construction strategies can further +enlarge this performance gap (up to 5.1 to 10.0%). Yet, LLMs suffer from their +unique challenges, such as ineffective prompts and the lack of domain +knowledge, which together account for 53 to 60% of LLM unique failures. The +strong performance of open source models (e.g., StarCoder) makes closed source +models (e.g., GPT 3 Davinci) less desirable due to size and cost. Our study +offers valuable insights for future research to improve specification +generation. +" +One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning,Arnav Chavan,http://arxiv.org/pdf/2306.07967v2.pdf,2023-06-13,"['cs.lg', 'cs.ai', 'cs.cv']",2306.07967v2.pdf," We present Generalized LoRA (GLoRA), an advanced approach for universal +parameter-efficient fine-tuning tasks. Enhancing Low-Rank Adaptation (LoRA), +GLoRA employs a generalized prompt module to optimize pre-trained model weights +and adjust intermediate activations, providing more flexibility and capability +across diverse tasks and datasets. 
Moreover, GLoRA facilitates efficient +parameter adaptation by employing a scalable, modular, layer-wise structure +search that learns individual adapter of each layer. Originating from a unified +mathematical formulation, GLoRA exhibits strong transfer learning, few-shot +learning and domain generalization abilities, as it adapts to new tasks through +not only weights but also additional dimensions like activations. Comprehensive +experiments demonstrate that GLoRA outperforms all previous methods in natural, +specialized, and structured vision benchmarks, achieving superior accuracy with +fewer parameters and computations. The proposed method on LLaMA-1 and LLaMA-2 +also show considerable enhancements compared to the original LoRA in the +language domain. Furthermore, our structural re-parameterization design ensures +that GLoRA incurs no extra inference cost, rendering it a practical solution +for resource-limited applications. Code and models are available at: +https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA. +" +Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts,Xuan-Phi Nguyen,http://arxiv.org/pdf/2306.11372v1.pdf,2023-06-20,"['cs.cl', 'cs.ai']",2306.11372v1.pdf," Large language models (LLMs) are known to effectively perform tasks by simply +observing few exemplars. However, in low-resource languages, obtaining such +hand-picked exemplars can still be challenging, where unsupervised techniques +may be necessary. Moreover, competent generative capabilities of LLMs are +observed only in high-resource languages, while their performances among +under-represented languages fall behind due to pre-training data imbalance. To +elicit LLMs' ability onto low-resource languages without any supervised data, +we propose to assemble synthetic exemplars from a diverse set of high-resource +languages to prompt the LLMs to translate from any language into English. These +prompts are then used to create intra-lingual exemplars to perform tasks in the +target languages. Our unsupervised prompting method performs on par with +supervised few-shot learning in LLMs of different sizes for translations +between English and 13 Indic and 21 African low-resource languages. We also +show that fine-tuning a 7B model on data generated from our method helps it +perform competitively with a 175B model. In non-English translation tasks, our +method even outperforms supervised prompting by up to 3 chrF++ in many +low-resource languages. When evaluated on zero-shot multilingual summarization, +our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is +also favored by GPT-4. +" +ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion,Yingjun Du,http://arxiv.org/pdf/2306.14770v2.pdf,2023-06-26,"['cs.lg', 'cs.ai']",2306.14770v2.pdf," Prototype-based meta-learning has emerged as a powerful technique for +addressing few-shot learning challenges. However, estimating a deterministic +prototype using a simple average function from a limited number of examples +remains a fragile process. To overcome this limitation, we introduce ProtoDiff, +a novel framework that leverages a task-guided diffusion model during the +meta-training phase to gradually generate prototypes, thereby providing +efficient class representations. Specifically, a set of prototypes is optimized +to achieve per-task prototype overfitting, enabling accurately obtaining the +overfitted prototypes for individual tasks. 
Furthermore, we introduce a +task-guided diffusion process within the prototype space, enabling the +meta-learning of a generative process that transitions from a vanilla prototype +to an overfitted prototype. ProtoDiff gradually generates task-specific +prototypes from random noise during the meta-test stage, conditioned on the +limited samples available for the new task. Furthermore, to expedite training +and enhance ProtoDiff's performance, we propose the utilization of residual +prototype learning, which leverages the sparsity of the residual prototype. We +conduct thorough ablation studies to demonstrate its ability to accurately +capture the underlying prototype distribution and enhance generalization. The +new state-of-the-art performance on within-domain, cross-domain, and few-task +few-shot classification further substantiates the benefit of ProtoDiff. +" +Effective Transfer of Pretrained Large Visual Model for Fabric Defect Segmentation via Specifc Knowledge Injection,Zhewei Chen,http://arxiv.org/pdf/2306.16186v1.pdf,2023-06-28,"['cs.cv', 'cs.ai', 'i.2.10; i.4.9; i.5.4']",2306.16186v1.pdf," Fabric defect segmentation is integral to textile quality control. Despite +this, the scarcity of high-quality annotated data and the diversity of fabric +defects present significant challenges to the application of deep learning in +this field. These factors limit the generalization and segmentation performance +of existing models, impeding their ability to handle the complexity of diverse +fabric types and defects. To overcome these obstacles, this study introduces an +innovative method to infuse specialized knowledge of fabric defects into the +Segment Anything Model (SAM), a large-scale visual model. By introducing and +training a unique set of fabric defect-related parameters, this approach +seamlessly integrates domain-specific knowledge into SAM without the need for +extensive modifications to the pre-existing model parameters. The revamped SAM +model leverages generalized image understanding learned from large-scale +natural image datasets while incorporating fabric defect-specific knowledge, +ensuring its proficiency in fabric defect segmentation tasks. The experimental +results reveal a significant improvement in the model's segmentation +performance, attributable to this novel amalgamation of generic and +fabric-specific knowledge. When benchmarking against popular existing +segmentation models across three datasets, our proposed model demonstrates a +substantial leap in performance. Its impressive results in cross-dataset +comparisons and few-shot learning experiments further demonstrate its potential +for practical applications in textile quality control. +" +Prompting classes: Exploring the Power of Prompt Class Learning in Weakly Supervised Semantic Segmentation,Balamurali Murugesan,http://arxiv.org/pdf/2307.00097v2.pdf,2023-06-30,['cs.cv'],2307.00097v2.pdf," Recently, CLIP-based approaches have exhibited remarkable performance on +generalization and few-shot learning tasks, fueled by the power of contrastive +language-vision pre-training. In particular, prompt tuning has emerged as an +effective strategy to adapt the pre-trained language-vision models to +downstream tasks by employing task-related textual tokens. Motivated by this +progress, in this work we question whether other fundamental problems, such as +weakly supervised semantic segmentation (WSSS), can benefit from prompt tuning. 
+Our findings reveal two interesting observations that shed light on the impact +of prompt tuning on WSSS. First, modifying only the class token of the text +prompt results in a greater impact on the Class Activation Map (CAM), compared +to arguably more complex strategies that optimize the context. And second, the +class token associated with the image ground truth does not necessarily +correspond to the category that yields the best CAM. Motivated by these +observations, we introduce a novel approach based on a PrOmpt cLass lEarning +(POLE) strategy. Through extensive experiments we demonstrate that our simple, +yet efficient approach achieves SOTA performance in a well-known WSSS +benchmark. These results highlight not only the benefits of language-vision +models in WSSS but also the potential of prompt learning for this problem. The +code is available at https://github.com/rB080/WSS_POLE. +" +Meta-training with Demonstration Retrieval for Efficient Few-shot Learning,Aaron Mueller,http://arxiv.org/pdf/2307.00119v1.pdf,2023-06-30,['cs.cl'],2307.00119v1.pdf," Large language models show impressive results on few-shot NLP tasks. However, +these models are memory and computation-intensive. Meta-training allows one to +leverage smaller models for few-shot generalization in a domain-general and +task-agnostic manner; however, these methods alone results in models that may +not have sufficient parameterization or knowledge to adapt quickly to a large +variety of tasks. To overcome this issue, we propose meta-training with +demonstration retrieval, where we use a dense passage retriever to retrieve +semantically similar labeled demonstrations to each example for more varied +supervision. By separating external knowledge from model parameters, we can use +meta-training to train parameter-efficient models that generalize well on a +larger variety of tasks. We construct a meta-training set from UnifiedQA and +CrossFit, and propose a demonstration bank based on UnifiedQA tasks. To our +knowledge, our work is the first to combine retrieval with meta-training, to +use DPR models to retrieve demonstrations, and to leverage demonstrations from +many tasks simultaneously, rather than randomly sampling demonstrations from +the training set of the target task. Our approach outperforms a variety of +targeted parameter-efficient and retrieval-augmented few-shot methods on QA, +NLI, and text classification tasks (including SQuAD, QNLI, and TREC). Our +approach can be meta-trained and fine-tuned quickly on a single GPU. +" +TablEye: Seeing small Tables through the Lens of Images,Seung-eon Lee,http://arxiv.org/pdf/2307.02491v1.pdf,2023-07-04,"['cs.lg', 'cs.ai']",2307.02491v1.pdf," The exploration of few-shot tabular learning becomes imperative. Tabular data +is a versatile representation that captures diverse information, yet it is not +exempt from limitations, property of data and model size. Labeling extensive +tabular data can be challenging, and it may not be feasible to capture every +important feature. Few-shot tabular learning, however, remains relatively +unexplored, primarily due to scarcity of shared information among independent +datasets and the inherent ambiguity in defining boundaries within tabular data. +To the best of our knowledge, no meaningful and unrestricted few-shot tabular +learning techniques have been developed without imposing constraints on the +dataset. 
In this paper, we propose an innovative framework called TablEye, +which aims to overcome the limit of forming prior knowledge for tabular data by +adopting domain transformation. It facilitates domain transformation by +generating tabular images, which effectively conserve the intrinsic semantics +of the original tabular data. This approach harnesses rigorously tested +few-shot learning algorithms and embedding functions to acquire and apply prior +knowledge. Leveraging shared data domains allows us to utilize this prior +knowledge, originally learned from the image domain. Specifically, TablEye +demonstrated a superior performance by outstripping the TabLLM in a 4-shot task +with a maximum 0.11 AUC and a STUNT in a 1- shot setting, where it led on +average by 3.17% accuracy. +" +Text Descriptions are Compressive and Invariant Representations for Visual Learning,Zhili Feng,http://arxiv.org/pdf/2307.04317v2.pdf,2023-07-10,"['cs.cv', 'cs.lg']",2307.04317v2.pdf," Modern image classification is based upon directly predicting classes via +large discriminative networks, which do not directly contain information about +the intuitive visual features that may constitute a classification decision. +Recently, work in vision-language models (VLM) such as CLIP has provided ways +to specify natural language descriptions of image classes, but typically +focuses on providing single descriptions for each class. In this work, we +demonstrate that an alternative approach, in line with humans' understanding of +multiple visual features per class, can also provide compelling performance in +the robust few-shot learning setting. In particular, we introduce a novel +method, \textit{SLR-AVD (Sparse Logistic Regression using Augmented Visual +Descriptors)}. This method first automatically generates multiple visual +descriptions of each class via a large language model (LLM), then uses a VLM to +translate these descriptions to a set of visual feature embeddings of each +image, and finally uses sparse logistic regression to select a relevant subset +of these features to classify each image. Core to our approach is the fact +that, information-theoretically, these descriptive features are more invariant +to domain shift than traditional image embeddings, even though the VLM training +process is not explicitly designed for invariant representation learning. These +invariant descriptive features also compose a better input compression scheme. +When combined with finetuning, we show that SLR-AVD is able to outperform +existing state-of-the-art finetuning approaches on both in-distribution and +out-of-distribution performance. +" +DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI,Jianguo Zhang,http://arxiv.org/pdf/2307.10172v2.pdf,2023-07-19,"['cs.cl', 'cs.ai']",2307.10172v2.pdf," Despite advancements in conversational AI, language models encounter +challenges to handle diverse conversational tasks, and existing dialogue +dataset collections often lack diversity and comprehensiveness. To tackle these +issues, we introduce DialogStudio: the largest and most diverse collection of +dialogue datasets, unified under a consistent format while preserving their +original information. Our collection encompasses data from open-domain +dialogues, task-oriented dialogues, natural language understanding, +conversational recommendation, dialogue summarization, and knowledge-grounded +dialogues, making it an incredibly rich and diverse resource for dialogue +research and model training. 
To further enhance the utility of DialogStudio, we +identify the licenses for each dataset and design domain-aware prompts for +selected dialogues to facilitate instruction-aware fine-tuning. Furthermore, we +develop conversational AI models using the dataset collection, and our +experiments in both zero-shot and few-shot learning scenarios demonstrate the +superiority of DialogStudio. To improve transparency and support dataset and +task-based research, as well as language model pre-training, all datasets, +licenses, codes, and models associated with DialogStudio are made publicly +accessible at https://github.com/salesforce/DialogStudio +" +Mutual Reinforcement Effects in Japanese Sentence Classification and Named Entity Recognition Tasks,Chengguang Gan,http://arxiv.org/pdf/2307.10291v2.pdf,2023-07-18,['cs.cl'],2307.10291v2.pdf," Information extraction(IE) is a crucial subfield within natural language +processing. However, for the traditionally segmented approach to sentence +classification and Named Entity Recognition, the intricate interactions between +these individual subtasks remain largely uninvestigated. In this study, we +propose an integrative analysis, converging sentence classification with Named +Entity Recognition, with the objective to unveil and comprehend the mutual +reinforcement effect within these two information extraction subtasks. To +achieve this, we introduce a Sentence Classification and Named Entity +Recognition Multi-task (SCNM) approach that combines Sentence Classification +(SC) and Named Entity Recognition (NER). We develop a Sentence-to-Label +Generation (SLG) framework for SCNM and construct a Wikipedia dataset +containing both SC and NER. Using a format converter, we unify input formats +and employ a generative model to generate SC-labels, NER-labels, and associated +text segments. We propose a Constraint Mechanism (CM) to improve generated +format accuracy. Our results show SC accuracy increased by 1.13 points and NER +by 1.06 points in SCNM compared to standalone tasks, with CM raising format +accuracy from 63.61 to 100. The findings indicate mutual reinforcement effects +between SC and NER, and integration enhances both tasks' performance. We +additionally implemented the SLG framework on single SC task. It yielded +superior accuracies compared to the baseline on two distinct Japanese SC +datasets. Notably, in the experiment of few-shot learning, SLG framework shows +much better performance than fine-tune method. These empirical findings +contribute additional evidence to affirm the efficacy of the SLG framework. +" +CohortGPT: An Enhanced GPT for Participant Recruitment in Clinical Study,Zihan Guan,http://arxiv.org/pdf/2307.11346v1.pdf,2023-07-21,"['cs.cl', 'cs.ai']",2307.11346v1.pdf," Participant recruitment based on unstructured medical texts such as clinical +notes and radiology reports has been a challenging yet important task for the +cohort establishment in clinical research. Recently, Large Language Models +(LLMs) such as ChatGPT have achieved tremendous success in various downstream +tasks thanks to their promising performance in language understanding, +inference, and generation. It is then natural to test their feasibility in +solving the cohort recruitment task, which involves the classification of a +given paragraph of medical text into disease label(s). 
However, when applied to +knowledge-intensive problem settings such as medical text classification, where +the LLMs are expected to understand the decision made by human experts and +accurately identify the implied disease labels, the LLMs show a mediocre +performance. A possible explanation is that, by only using the medical text, +the LLMs neglect to use the rich context of additional information that +languages afford. To this end, we propose to use a knowledge graph as auxiliary +information to guide the LLMs in making predictions. Moreover, to further boost +the LLMs adapt to the problem setting, we apply a chain-of-thought (CoT) sample +selection strategy enhanced by reinforcement learning, which selects a set of +CoT samples given each individual medical report. Experimental results and +various ablation studies show that our few-shot learning method achieves +satisfactory performance compared with fine-tuning strategies and gains superb +advantages when the available data is limited. The code and sample dataset of +the proposed CohortGPT model is available at: +https://anonymous.4open.science/r/CohortGPT-4872/ +" +Identifying Misinformation on YouTube through Transcript Contextual Analysis with Transformer Models,Christos Christodoulou,http://arxiv.org/pdf/2307.12155v1.pdf,2023-07-22,['cs.cl'],2307.12155v1.pdf," Misinformation on YouTube is a significant concern, necessitating robust +detection strategies. In this paper, we introduce a novel methodology for video +classification, focusing on the veracity of the content. We convert the +conventional video classification task into a text classification task by +leveraging the textual content derived from the video transcripts. We employ +advanced machine learning techniques like transfer learning to solve the +classification challenge. Our approach incorporates two forms of transfer +learning: (a) fine-tuning base transformer models such as BERT, RoBERTa, and +ELECTRA, and (b) few-shot learning using sentence-transformers MPNet and +RoBERTa-large. We apply the trained models to three datasets: (a) YouTube +Vaccine-misinformation related videos, (b) YouTube Pseudoscience videos, and +(c) Fake-News dataset (a collection of articles). Including the Fake-News +dataset extended the evaluation of our approach beyond YouTube videos. Using +these datasets, we evaluated the models distinguishing valid information from +misinformation. The fine-tuned models yielded Matthews Correlation +Coefficient>0.81, accuracy>0.90, and F1 score>0.90 in two of three datasets. +Interestingly, the few-shot models outperformed the fine-tuned ones by 20% in +both Accuracy and F1 score for the YouTube Pseudoscience dataset, highlighting +the potential utility of this approach -- especially in the context of limited +training data. +" +ChatGPT for Arabic Grammatical Error Correction,Sang Yun Kwon,http://arxiv.org/pdf/2308.04492v1.pdf,2023-08-08,['cs.ai'],2308.04492v1.pdf," Recently, large language models (LLMs) fine-tuned to follow human instruction +have exhibited significant capabilities in various English NLP tasks. However, +their performance in grammatical error correction (GEC) tasks, particularly in +non-English languages, remains significantly unexplored. In this paper, we +delve into abilities of instruction fine-tuned LLMs in Arabic GEC, a task made +complex due to Arabic's rich morphology. 
Our findings suggest that various +prompting methods, coupled with (in-context) few-shot learning, demonstrate +considerable effectiveness, with GPT-4 achieving up to $65.49$ +F\textsubscript{1} score under expert prompting (approximately $5$ points +higher than our established baseline). This highlights the potential of LLMs in +low-resource settings, offering a viable approach for generating useful +synthetic data for model training. Despite these positive results, we find that +instruction fine-tuned models, regardless of their size, significantly +underperform compared to fully fine-tuned models of significantly smaller +sizes. This disparity highlights a substantial room for improvements for LLMs. +Inspired by methods from low-resource machine translation, we also develop a +method exploiting synthetic data that significantly outperforms previous models +on two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with +$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively. +" +LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking,Fahim Dalvi,http://arxiv.org/pdf/2308.04945v1.pdf,2023-08-09,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",2308.04945v1.pdf," The recent development and success of Large Language Models (LLMs) +necessitate an evaluation of their performance across diverse NLP tasks in +different languages. Although several frameworks have been developed and made +publicly available, their customization capabilities for specific tasks and +datasets are often complex for different users. In this study, we introduce the +LLMeBench framework. Initially developed to evaluate Arabic NLP tasks using +OpenAI's GPT and BLOOM models; it can be seamlessly customized for any NLP task +and model, regardless of language. The framework also features zero- and +few-shot learning settings. A new custom dataset can be added in less than 10 +minutes, and users can use their own model API keys to evaluate the task at +hand. The developed framework has been already tested on 31 unique NLP tasks +using 53 publicly available datasets within 90 experimental setups, involving +approximately 296K data points. We plan to open-source the framework for the +community (https://github.com/qcri/LLMeBench/). A video demonstrating the +framework is available online (https://youtu.be/FkQn4UjYA0s). +" +Link-Context Learning for Multimodal LLMs,Yan Tai,http://arxiv.org/pdf/2308.07891v1.pdf,2023-08-15,"['cs.cv', 'cs.cl']",2308.07891v1.pdf," The ability to learn from context with novel concepts, and deliver +appropriate responses are essential in human conversations. Despite current +Multimodal Large Language Models (MLLMs) and Large Language Models (LLMs) being +trained on mega-scale datasets, recognizing unseen images or understanding +novel concepts in a training-free manner remains a challenge. In-Context +Learning (ICL) explores training-free few-shot learning, where models are +encouraged to ``learn to learn"" from limited tasks and generalize to unseen +tasks. In this work, we propose link-context learning (LCL), which emphasizes +""reasoning from cause and effect"" to augment the learning capabilities of +MLLMs. LCL goes beyond traditional ICL by explicitly strengthening the causal +relationship between the support set and the query set. 
By providing +demonstrations with causal links, LCL guides the model to discern not only the +analogy but also the underlying causal associations between data points, which +empowers MLLMs to recognize unseen images and understand novel concepts more +effectively. To facilitate the evaluation of this novel approach, we introduce +the ISEKAI dataset, comprising exclusively of unseen generated image-label +pairs designed for link-context learning. Extensive experiments show that our +LCL-MLLM exhibits strong link-context learning capabilities to novel concepts +over vanilla MLLMs. Code and data will be released at +https://github.com/isekai-portal/Link-Context-Learning. +" +CodeCoT and Beyond: Learning to Program and Test like a Developer,Dong Huang,http://arxiv.org/pdf/2308.08784v1.pdf,2023-08-17,"['cs.se', 'cs.ai']",2308.08784v1.pdf," In natural language processing, transformer-based large language models +(LLMs) like GPT-x models developed by OpenAI have revolutionized the landscape. +Despite their impressive capabilities, these models often encounter challenges +when handling tasks that differ from their training data, resulting in +compromised performance. To address this, few-shot learning has emerged as a +valuable technique, allowing LLMs to adapt with minimal task-specific data. One +innovative strategy, known as Chain-of-Thought Prompting (CoT), has been +introduced to guide LLMs in revealing cognitive processes during multi-step +reasoning. In this paper, we propose Code Chain-of-Thought~(CodeCoT), which +consists of two components: the Vanilla CodeCoT and the Self-exam CodeCoT. The +latter incorporates self-examination, empowering the model to iteratively +generate code, formulate test cases, and refine its outputs. Specifically, the +process entails the generation of test examples by the model corresponding to +the code it is tasked to implement. If it fails on the test examples, then it +regenerates the code based on the erroneous code and associated error types. +Through comprehensive experiments, we observed that both techniques +significantly enhance code generation accuracy across various LLM variants. Our +evaluation results reveal that CodeCoT improves the code generation +effectiveness, including an unprecedented pass@1 accuracy of 79.27\% using the +Self-exam CodeCoT approach on the gpt-3.5-turbo-0613 model in the HumanEval +dataset. +" +Large Language Models Vote: Prompting for Rare Disease Identification,David Oniani,http://arxiv.org/pdf/2308.12890v2.pdf,2023-08-24,"['cs.cl', 'cs.ai']",2308.12890v2.pdf," The emergence of generative Large Language Models (LLMs) emphasizes the need +for accurate and efficient prompting approaches. LLMs are often applied in +Few-Shot Learning (FSL) contexts, where tasks are executed with minimal +training data. FSL has become popular in many Artificial Intelligence (AI) +subdomains, including AI for health. Rare diseases affect a small fraction of +the population. Rare disease identification from clinical notes inherently +requires FSL techniques due to limited data availability. Manual data +collection and annotation is both expensive and time-consuming. In this paper, +we propose Models-Vote Prompting (MVP), a flexible prompting approach for +improving the performance of LLM queries in FSL settings. MVP works by +prompting numerous LLMs to perform the same tasks and then conducting a +majority vote on the resulting outputs. 
This method achieves improved results +to any one model in the ensemble on one-shot rare disease identification and +classification tasks. We also release a novel rare disease dataset for FSL, +available to those who signed the MIMIC-IV Data Use Agreement (DUA). +Furthermore, in using MVP, each model is prompted multiple times, substantially +increasing the time needed for manual annotation, and to address this, we +assess the feasibility of using JSON for automating generative LLM evaluation. +" +Diagnosing Infeasible Optimization Problems Using Large Language Models,Hao Chen,http://arxiv.org/pdf/2308.12923v1.pdf,2023-08-23,"['cs.hc', 'cs.cl', 'cs.lg', 'math.oc']",2308.12923v1.pdf," Decision-making problems can be represented as mathematical optimization +models, finding wide applications in fields such as economics, engineering and +manufacturing, transportation, and health care. Optimization models are +mathematical abstractions of the problem of making the best decision while +satisfying a set of requirements or constraints. One of the primary barriers to +deploying these models in practice is the challenge of helping practitioners +understand and interpret such models, particularly when they are infeasible, +meaning no decision satisfies all the constraints. Existing methods for +diagnosing infeasible optimization models often rely on expert systems, +necessitating significant background knowledge in optimization. In this paper, +we introduce OptiChat, a first-of-its-kind natural language-based system +equipped with a chatbot GUI for engaging in interactive conversations about +infeasible optimization models. OptiChat can provide natural language +descriptions of the optimization model itself, identify potential sources of +infeasibility, and offer suggestions to make the model feasible. The +implementation of OptiChat is built on GPT-4, which interfaces with an +optimization solver to identify the minimal subset of constraints that render +the entire optimization problem infeasible, also known as the Irreducible +Infeasible Subset (IIS). We utilize few-shot learning, expert chain-of-thought, +key-retrieve, and sentiment prompts to enhance OptiChat's reliability. Our +experiments demonstrate that OptiChat assists both expert and non-expert users +in improving their understanding of the optimization models, enabling them to +quickly identify the sources of infeasibility. +" +Less is More: Towards Efficient Few-shot 3D Semantic Segmentation via Training-free Networks,Xiangyang Zhu,http://arxiv.org/pdf/2308.12961v1.pdf,2023-08-24,['cs.cv'],2308.12961v1.pdf," To reduce the reliance on large-scale datasets, recent works in 3D +segmentation resort to few-shot learning. Current 3D few-shot semantic +segmentation methods first pre-train the models on `seen' classes, and then +evaluate their generalization performance on `unseen' classes. However, the +prior pre-training stage not only introduces excessive time overhead, but also +incurs a significant domain gap on `unseen' classes. To tackle these issues, we +propose an efficient Training-free Few-shot 3D Segmentation netwrok, TFS3D, and +a further training-based variant, TFS3D-T. Without any learnable parameters, +TFS3D extracts dense representations by trigonometric positional encodings, and +achieves comparable performance to previous training-based methods. Due to the +elimination of pre-training, TFS3D can alleviate the domain gap issue and save +a substantial amount of time. 
Building upon TFS3D, TFS3D-T only requires to +train a lightweight query-support transferring attention (QUEST), which +enhances the interaction between the few-shot query and support data. +Experiments demonstrate TFS3D-T improves previous state-of-the-art methods by ++6.93% and +17.96% mIoU respectively on S3DIS and ScanNet, while reducing the +training time by -90%, indicating superior effectiveness and efficiency. +" +"LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding",Yushi Bai,http://arxiv.org/pdf/2308.14508v1.pdf,2023-08-28,['cs.cl'],2308.14508v1.pdf," Although large language models (LLMs) demonstrate impressive performance for +many language tasks, most of them can only handle texts a few thousand tokens +long, limiting their applications on longer sequence inputs, such as books, +reports, and codebases. Recent works have proposed methods to improve LLMs' +long context capabilities by extending context windows and more sophisticated +memory mechanisms. However, comprehensive benchmarks tailored for evaluating +long context understanding are lacking. In this paper, we introduce LongBench, +the first bilingual, multi-task benchmark for long context understanding, +enabling a more rigorous evaluation of long context understanding. LongBench +comprises 21 datasets across 6 task categories in both English and Chinese, +with an average length of 6,711 words (English) and 13,386 characters +(Chinese). These tasks cover key long-text application areas including +single-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks, +and code completion. All datasets in LongBench are standardized into a unified +format, allowing for effortless automatic evaluation of LLMs. Upon +comprehensive evaluation of 8 LLMs on LongBench, we find that: (1) Commercial +model (GPT-3.5-Turbo-16k) outperforms other open-sourced models, but still +struggles on longer contexts. (2) Scaled position embedding and fine-tuning on +longer sequences lead to substantial improvement on long context understanding. +(3) Context compression technique such as retrieval brings improvement for +model with weak ability on long contexts, but the performance still lags behind +models that have strong long context understanding capability. The code and +datasets are available at https://github.com/THUDM/LongBench. +" +TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification,Jianing Wang,http://arxiv.org/pdf/2308.15010v1.pdf,2023-08-29,['cs.cl'],2308.15010v1.pdf," Text classification is one of the most imperative tasks in natural language +processing (NLP). Recent advances with pre-trained language models (PLMs) have +shown remarkable success on this task. However, the satisfying results obtained +by PLMs heavily depend on the large amounts of task-specific labeled data, +which may not be feasible in many application scenarios due to data access and +privacy constraints. The recently-proposed prompt-based fine-tuning paradigm +improves the performance of PLMs for few-shot text classification with +task-specific templates. Yet, it is unclear how the prompting knowledge can be +transferred across tasks, for the purpose of mutual reinforcement. We propose +TransPrompt v2, a novel transferable prompting framework for few-shot learning +across similar or distant text classification tasks. For learning across +similar tasks, we employ a multi-task meta-knowledge acquisition (MMA) +procedure to train a meta-learner that captures the cross-task transferable +knowledge. 
For learning across distant tasks, we further inject the task type +descriptions into the prompt, and capture the intra-type and inter-type prompt +embeddings among multiple distant tasks. Additionally, two de-biasing +techniques are further designed to make the trained meta-learner more +task-agnostic and unbiased towards any tasks. After that, the meta-learner can +be adapted to each specific task with better parameters initialization. +Extensive experiments show that TransPrompt v2 outperforms single-task and +cross-task strong baselines over multiple NLP tasks and datasets. We further +show that the meta-learner can effectively improve the performance of PLMs on +previously unseen tasks. In addition, TransPrompt v2 also outperforms strong +fine-tuning baselines when learning with full training sets. +" +AskIt: Unified Programming Interface for Programming with Large Language Models,Katsumi Okuda,http://arxiv.org/pdf/2308.15645v1.pdf,2023-08-29,"['cs.pl', 'cs.ai', 'cs.se']",2308.15645v1.pdf," In the evolving landscape of software development, Large Language Models +(LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating +adeptness across numerous tasks, from text summarization to code generation. +While these abilities open up novel avenues in software design and crafting, +their incorporation presents substantial challenges. Developers grapple with +decisions surrounding the direct embedding of LLMs within applications versus +employing them for code generation. Moreover, effective prompt design becomes a +critical concern, given the necessity of data extraction from natural language +outputs. To address these intricacies, this paper introduces AskIt, a +domain-specific language (DSL) specifically designed for LLMs. AskIt simplifies +LLM integration, offering type-guided output control, template-based function +definitions, and a unified interface that diminishes the distinction between +LLM-based code generation and application integration. Furthermore, through +Programming by Example (PBE), AskIt harnesses the power of few-shot learning at +the programming language level. Our evaluations underscore AskIt's potency. +Across 50 tasks, AskIt generated concise prompts for the given tasks, achieving +a 16.14% reduction in prompt length relative to benchmarks. Additionally, by +enabling the transition from direct LLM application usage to function +generation, AskIt achieved significant speedups, as observed in our GSM8K +benchmark experiments. Through these advancements, AskIt streamlines the +integration of LLMs in software development, offering a more efficient, +versatile approach for leveraging emergent abilities. The implementations of +AskIt in TypeScript and Python are available at +https://github.com/katsumiok/ts-askit and https://github.com/katsumiok/pyaskit, +respectively. +" +Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning,Yiming Zhang,http://arxiv.org/pdf/2308.16466v3.pdf,2023-08-31,['cs.cv'],2308.16466v3.pdf," While the Segment Anything Model (SAM) excels in semantic segmentation for +general-purpose images, its performance significantly deteriorates when applied +to medical images, primarily attributable to insufficient representation of +medical images in its training dataset. Nonetheless, gathering comprehensive +datasets and training models that are universally applicable is particularly +challenging due to the long-tail problem common in medical images. 
To address +this gap, here we present a Self-Sampling Meta SAM (SSM-SAM) framework for +few-shot medical image segmentation. Our innovation lies in the design of three +key modules: 1) An online fast gradient descent optimizer, further optimized by +a meta-learner, which ensures swift and robust adaptation to new tasks. 2) A +Self-Sampling module designed to provide well-aligned visual prompts for +improved attention allocation; and 3) A robust attention-based decoder +specifically designed for medical few-shot learning to capture relationship +between different slices. Extensive experiments on a popular abdominal CT +dataset and an MRI dataset demonstrate that the proposed method achieves +significant improvements over state-of-the-art methods in few-shot +segmentation, with an average improvements of 10.21% and 1.80% in terms of DSC, +respectively. In conclusion, we present a novel approach for rapid online +adaptation in interactive image segmentation, adapting to a new organ in just +0.83 minutes. Code is publicly available on GitHub upon acceptance. +" +Prompt-based Node Feature Extractor for Few-shot Learning on Text-Attributed Graphs,Xuanwen Huang,http://arxiv.org/pdf/2309.02848v1.pdf,2023-09-06,['cs.si'],2309.02848v1.pdf," Text-attributed Graphs (TAGs) are commonly found in the real world, such as +social networks and citation networks, and consist of nodes represented by +textual descriptions. Currently, mainstream machine learning methods on TAGs +involve a two-stage modeling approach: (1) unsupervised node feature extraction +with pre-trained language models (PLMs); and (2) supervised learning using +Graph Neural Networks (GNNs). However, we observe that these representations, +which have undergone large-scale pre-training, do not significantly improve +performance with a limited amount of training samples. The main issue is that +existing methods have not effectively integrated information from the graph and +downstream tasks simultaneously. In this paper, we propose a novel framework +called G-Prompt, which combines a graph adapter and task-specific prompts to +extract node features. First, G-Prompt introduces a learnable GNN layer +(\emph{i.e.,} adaptor) at the end of PLMs, which is fine-tuned to better +capture the masked tokens considering graph neighborhood information. After the +adapter is trained, G-Prompt incorporates task-specific prompts to obtain +\emph{interpretable} node representations for the downstream task. Our +experiment results demonstrate that our proposed method outperforms current +state-of-the-art (SOTA) methods on few-shot node classification. More +importantly, in zero-shot settings, the G-Prompt embeddings can not only +provide better task interpretability than vanilla PLMs but also achieve +comparable performance with fully-supervised baselines. +" +Cross-Image Context Matters for Bongard Problems,Nikhil Raghuraman,http://arxiv.org/pdf/2309.03468v1.pdf,2023-09-07,"['cs.cv', 'cs.ai', 'cs.lg']",2309.03468v1.pdf," Current machine learning methods struggle to solve Bongard problems, which +are a type of IQ test that requires deriving an abstract ""concept"" from a set +of positive and negative ""support"" images, and then classifying whether or not +a new query image depicts the key concept. On Bongard-HOI, a benchmark for +natural-image Bongard problems, existing methods have only reached 66% accuracy +(where chance is 50%). Low accuracy is often attributed to neural nets' lack of +ability to find human-like symbolic rules. 
In this work, we point out that many +existing methods are forfeiting accuracy due to a much simpler problem: they do +not incorporate information contained in the support set as a whole, and rely +instead on information extracted from individual supports. This is a critical +issue, because unlike in few-shot learning tasks concerning object +classification, the ""key concept"" in a typical Bongard problem can only be +distinguished using multiple positives and multiple negatives. We explore a +variety of simple methods to take this cross-image context into account, and +demonstrate substantial gains over prior methods, leading to new +state-of-the-art performance on Bongard-LOGO (75.3%) and Bongard-HOI (72.45%) +and strong performance on the original Bongard problem set (60.84%). +" +DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning,Zhengxiang Shi,http://arxiv.org/pdf/2309.05173v2.pdf,2023-09-11,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2309.05173v2.pdf," Prompt tuning (PT), where a small amount of trainable soft (continuous) +prompt vectors is affixed to the input of language models (LM), has shown +promising results across various tasks and models for parameter-efficient +fine-tuning (PEFT). PT stands out from other PEFT approaches because it +maintains competitive performance with fewer trainable parameters and does not +drastically scale up its parameters as the model size expands. However, PT +introduces additional soft prompt tokens, leading to longer input sequences, +which significantly impacts training and inference time and memory usage due to +the Transformer's quadratic complexity. Particularly concerning for Large +Language Models (LLMs) that face heavy daily querying. To address this issue, +we propose Decomposed Prompt Tuning (DePT), which decomposes the soft prompt +into a shorter soft prompt and a pair of low-rank matrices that are then +optimised with two different learning rates. This allows DePT to achieve better +performance while saving over 20% memory and time costs compared to vanilla PT +and its variants, without changing trainable parameter sizes. Through extensive +experiments on 23 natural language processing (NLP) and vision-language (VL) +tasks, we demonstrate that DePT outperforms state-of-the-art PEFT approaches, +including the full fine-tuning baseline in some scenarios. Additionally, we +empirically show that DEPT grows more efficient as the model size increases. +Our further study reveals that DePT integrates seamlessly with +parameter-efficient transfer learning in the few-shot learning setting and +highlights its adaptability to various model architectures and sizes. +" +Zero-shot Learning with Minimum Instruction to Extract Social Determinants and Family History from Clinical Notes using GPT Model,Neel Bhate,http://arxiv.org/pdf/2309.05475v2.pdf,2023-09-11,['cs.cl'],2309.05475v2.pdf," Demographics, Social determinants of health, and family history documented in +the unstructured text within the electronic health records are increasingly +being studied to understand how this information can be utilized with the +structured data to improve healthcare outcomes. After the GPT models were +released, many studies have applied GPT models to extract this information from +the narrative clinical notes. Different from the existing work, our research +focuses on investigating the zero-shot learning on extracting this information +together by providing minimum information to the GPT model. 
We utilize +de-identified real-world clinical notes annotated for demographics, various +social determinants, and family history information. Given that the GPT model +might provide text different from the text in the original data, we explore two +sets of evaluation metrics, including the traditional NER evaluation metrics +and semantic similarity evaluation metrics, to completely understand the +performance. Our results show that the GPT-3.5 method achieved an average of +0.975 F1 on demographics extraction, 0.615 F1 on social determinants +extraction, and 0.722 F1 on family history extraction. We believe these results +can be further improved through model fine-tuning or few-shots learning. +Through the case studies, we also identified the limitations of the GPT models, +which need to be addressed in future research. +" +GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection,Yufei Li,http://arxiv.org/pdf/2309.05953v1.pdf,2023-09-12,"['cs.lg', 'cs.ir']",2309.05953v1.pdf," Logs play a crucial role in system monitoring and debugging by recording +valuable system information, including events and states. Although various +methods have been proposed to detect anomalies in log sequences, they often +overlook the significance of considering relations among system components, +such as services and users, which can be identified from log contents. +Understanding these relations is vital for detecting anomalies and their +underlying causes. To address this issue, we introduce GLAD, a Graph-based Log +Anomaly Detection framework designed to detect relational anomalies in system +logs. GLAD incorporates log semantics, relational patterns, and sequential +patterns into a unified framework for anomaly detection. Specifically, GLAD +first introduces a field extraction module that utilizes prompt-based few-shot +learning to identify essential fields from log contents. Then GLAD constructs +dynamic log graphs for sliding windows by interconnecting extracted fields and +log events parsed from the log parser. These graphs represent events and fields +as nodes and their relations as edges. Subsequently, GLAD utilizes a +temporal-attentive graph edge anomaly detection model for identifying anomalous +relations in these dynamic log graphs. This model employs a Graph Neural +Network (GNN)-based encoder enhanced with transformers to capture content, +structural and temporal features. We evaluate our proposed method on three +datasets, and the results demonstrate the effectiveness of GLAD in detecting +anomalies indicated by varying relational patterns. +" +Using Large Language Model to Solve and Explain Physics Word Problems Approaching Human Level,Jingzhe Ding,http://arxiv.org/pdf/2309.08182v2.pdf,2023-09-15,"['cs.cl', 'cs.ai', 'i.2.7']",2309.08182v2.pdf," Our work demonstrates that large language model (LLM) pre-trained on texts +can not only solve pure math word problems, but also physics word problems, +whose solution requires calculation and inference based on prior physical +knowledge. We collect and annotate the first physics word problem +dataset-PhysQA, which contains over 1000 junior high school physics word +problems (covering Kinematics, Mass&Density, Mechanics, Heat, Electricity). +Then we use OpenAI' s GPT3.5 to generate the answer of these problems and found +that GPT3.5 could automatically solve 49.3% of the problems through zero-shot +learning and 73.2% through few-shot learning. 
This result demonstrates that by +using similar problems and their answers as prompt, LLM could solve elementary +physics word problems approaching human level performance. In addition to +solving problems, GPT3.5 can also summarize the knowledge or topics covered by +the problems, provide relevant explanations, and generate new physics word +problems based on the input. Our work is the first research to focus on the +automatic solving, explanation, and generation of physics word problems across +various types and scenarios, and we achieve an acceptable and state-of-the-art +accuracy. This underscores the potential of LLMs for further applications in +secondary education. +" +SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels,Henry Hengyuan Zhao,http://arxiv.org/pdf/2309.08513v2.pdf,2023-09-15,"['cs.cv', 'cs.ai']",2309.08513v2.pdf," Pre-trained vision transformers have strong representation benefits to +various downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT) +methods have been proposed, and their experiments demonstrate that tuning only +1% of extra parameters could surpass full fine-tuning in low-data resource +scenarios. However, these methods overlook the task-specific information when +fine-tuning diverse downstream tasks. In this paper, we propose a simple yet +effective method called ""Salient Channel Tuning"" (SCT) to leverage the +task-specific information by forwarding the model with the task images to +select partial channels in a feature map that enables us to tune only 1/8 +channels leading to significantly lower parameter costs. Experiments outperform +full fine-tuning on 18 out of 19 tasks in the VTAB-1K benchmark by adding only +0.11M parameters of the ViT-B, which is 780$\times$ fewer than its full +fine-tuning counterpart. Furthermore, experiments on domain generalization and +few-shot learning surpass other PEFT methods with lower parameter costs, +demonstrating our proposed tuning technique's strong capability and +effectiveness in the low-data regime. +" +nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance,Yunxiang Li,http://arxiv.org/pdf/2309.16967v2.pdf,2023-09-29,"['cs.cv', 'eess.iv']",2309.16967v2.pdf," The recent developments of foundation models in computer vision, especially +the Segment Anything Model (SAM), allow scalable and domain-agnostic image +segmentation to serve as a general-purpose segmentation tool. In parallel, the +field of medical image segmentation has benefited significantly from +specialized neural networks like the nnUNet, which is trained on +domain-specific datasets and can automatically configure the network to tailor +to specific segmentation challenges. To combine the advantages of foundation +models and domain-specific models, we present nnSAM, which synergistically +integrates the SAM model with the nnUNet model to achieve more accurate and +robust medical image segmentation. The nnSAM model leverages the powerful and +robust feature extraction capabilities of SAM, while harnessing the automatic +configuration capabilities of nnUNet to promote dataset-tailored learning. Our +comprehensive evaluation of nnSAM model on different sizes of training samples +shows that it allows few-shot learning, which is highly relevant for medical +image segmentation where high-quality, annotated data can be scarce and costly +to obtain. 
By melding the strengths of both its predecessors, nnSAM positions +itself as a potential new benchmark in medical image segmentation, offering a +tool that combines broad applicability with specialized efficiency. The code is +available at https://github.com/Kent0n-Li/Medical-Image-Segmentation. +" +An evaluation of GPT models for phenotype concept recognition,Tudor Groza,http://arxiv.org/pdf/2309.17169v1.pdf,2023-09-29,"['cs.cl', 'cs.ai']",2309.17169v1.pdf," Objective: Clinical deep phenotyping plays a critical role in both the +diagnosis of patients with rare disorders as well as in building care +coordination plans. The process relies on modelling and curating patient +profiles using ontology concepts, usually from the Human Phenotype Ontology. +Machine learning methods have been widely adopted to support this phenotype +concept recognition task. With the significant shift in the use of large +language models (LLMs) for most NLP tasks, herewithin, we examine the +performance of the latest Generative Pre-trained Transformer (GPT) models +underpinning ChatGPT in clinical deep phenotyping. Materials and Methods: The +experimental setup of the study included seven prompts of various levels of +specificity, two GPT models (gpt-3.5 and gpt-4.0) and an established gold +standard for phenotype recognition. Results: Our results show that, currently, +these models have not yet achieved state of the art performance. The best run, +using few-shots learning, achieved 0.41 F1 score, compared to a 0.62 F1 score +achieved by the current best in class tool. Conclusion: The non-deterministic +nature of the outcomes and the lack of concordance between different runs using +the same prompt and input makes the use of these LLMs in clinical settings +problematic. +" +RA-DIT: Retrieval-Augmented Dual Instruction Tuning,Xi Victoria Lin,http://arxiv.org/pdf/2310.01352v3.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01352v3.pdf," Retrieval-augmented language models (RALMs) improve performance by accessing +long-tail and up-to-date knowledge from external data stores, but are +challenging to build. Existing approaches require either expensive +retrieval-specific modifications to LM pre-training or use post-hoc integration +of the data store that leads to suboptimal performance. We introduce +Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning +methodology that provides a third option by retrofitting any LLM with retrieval +capabilities. Our approach operates in two distinct fine-tuning steps: (1) one +updates a pre-trained LM to better use retrieved information, while (2) the +other updates the retriever to return more relevant results, as preferred by +the LM. By fine-tuning over tasks that require both knowledge utilization and +contextual awareness, we demonstrate that each stage yields significant +performance improvements, and using both leads to additional gains. Our best +model, RA-DIT 65B, achieves state-of-the-art performance across a range of +knowledge-intensive zero- and few-shot learning benchmarks, significantly +outperforming existing in-context RALM approaches by up to +8.9% in 0-shot +setting and +1.4% in 5-shot setting on average. +" +UniPredict: Large Language Models are Universal Tabular Predictors,Ruiyu Wang,http://arxiv.org/pdf/2310.03266v1.pdf,2023-10-05,['cs.lg'],2310.03266v1.pdf," Tabular data prediction is a fundamental machine learning task for many +applications. 
Existing methods predominantly employ discriminative modeling and +operate under the assumption of a fixed target column, necessitating +re-training for every new predictive task. Inspired by the generative power of +large language models (LLMs), this paper exploits the idea of building +universal tabular data predictors based on generative modeling, namely +UniPredict. Here, we show that scaling up an LLM to extensive tabular datasets +with the capability of comprehending diverse tabular inputs and predicting for +target variables following the input instructions. Specifically, we train a +single LLM on an aggregation of 169 tabular datasets with diverse targets and +compare its performance against baselines that are trained on each dataset +separately. We observe this versatile UniPredict model demonstrates an +advantage over other models, ranging from 5.4% to 13.4%, when compared with the +best tree-boosting baseline and the best neural network baseline, respectively. +We further test UniPredict in few-shot learning settings on another 62 tabular +datasets. Our method achieves strong performance in quickly adapting to new +tasks, where our method outperforms XGBoost over 100% on the low-resource setup +and shows a significant margin over all baselines. We envision that UniPredict +sheds light on developing a universal tabular data prediction system that +learns from data at scale and serves a wide range of prediction tasks. +" +LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression,Huiqiang Jiang,http://arxiv.org/pdf/2310.06839v1.pdf,2023-10-10,"['cs.cl', 'cs.lg']",2310.06839v1.pdf," In long context scenarios, large language models (LLMs) face three main +challenges: higher computational/financial cost, longer latency, and inferior +performance. Some studies reveal that the performance of LLMs depends on both +the density and the position of the key information (question relevant) in the +input prompt. Inspired by these findings, we propose LongLLMLingua for prompt +compression towards improving LLMs' perception of the key information to +simultaneously address the three challenges. We conduct evaluation on a wide +range of long context scenarios including single-/multi-document QA, few-shot +learning, summarization, synthetic tasks, and code completion. The experimental +results show that LongLLMLingua compressed prompt can derive higher performance +with much less cost. The latency of the end-to-end system is also reduced. For +example, on NaturalQuestions benchmark, LongLLMLingua gains a performance boost +of up to 17.1% over the original prompt with ~4x fewer tokens as input to +GPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000 +samples from the LongBench and ZeroScrolls benchmark, respectively. +Additionally, when compressing prompts of ~10k tokens at a compression rate of +2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Our +code is available at https://aka.ms/LLMLingua. +" +Empower Text-Attributed Graphs Learning with Large Language Models (LLMs),Jianxiang Yu,http://arxiv.org/pdf/2310.09872v1.pdf,2023-10-15,['cs.lg'],2310.09872v1.pdf," Text-attributed graphs have recently garnered significant attention due to +their wide range of applications in web domains. Existing methodologies employ +word embedding models for acquiring text representations as node features, +which are subsequently fed into Graph Neural Networks (GNNs) for training. 
+Recently, the advent of Large Language Models (LLMs) has introduced their +powerful capabilities in information retrieval and text generation, which can +greatly enhance the text attributes of graph data. Furthermore, the acquisition +and labeling of extensive datasets are both costly and time-consuming +endeavors. Consequently, few-shot learning has emerged as a crucial problem in +the context of graph learning tasks. In order to tackle this challenge, we +propose a lightweight paradigm called ENG, which adopts a plug-and-play +approach to empower text-attributed graphs through node generation using LLMs. +Specifically, we utilize LLMs to extract semantic information from the labels +and generate samples that belong to these categories as exemplars. +Subsequently, we employ an edge predictor to capture the structural information +inherent in the raw dataset and integrate the newly generated samples into the +original graph. This approach harnesses LLMs for enhancing class-level +information and seamlessly introduces labeled nodes and edges without modifying +the raw dataset, thereby facilitating the node classification task in few-shot +scenarios. Extensive experiments demonstrate the outstanding performance of our +proposed paradigm, particularly in low-shot scenarios. For instance, in the +1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement over +the baseline model. +" +In-Context Learning with Iterative Demonstration Selection,Chengwei Qin,http://arxiv.org/pdf/2310.09881v2.pdf,2023-10-15,"['cs.cl', 'cs.ai']",2310.09881v2.pdf," Spurred by advancements in scale, large language models (LLMs) have +demonstrated strong few-shot learning ability via in-context learning (ICL). +However, the performance of ICL has been shown to be highly sensitive to the +selection of few-shot demonstrations. Selecting the most suitable examples as +context remains an ongoing challenge and an open problem. Existing literature +has highlighted the importance of selecting examples that are diverse or +semantically similar to the test sample while ignoring the fact that the +optimal selection dimension, i.e., diversity or similarity, is task-specific. +Leveraging the merits of both dimensions, we propose Iterative Demonstration +Selection (IDS). Using zero-shot chain-of-thought reasoning (Zero-shot-CoT), +IDS iteratively selects examples that are diverse but still strongly correlated +with the test sample as ICL demonstrations. Specifically, IDS applies +Zero-shot-CoT to the test sample before demonstration selection. The output +reasoning path is then used to choose demonstrations that are prepended to the +test sample for inference. The generated answer is accompanied by its +corresponding reasoning path for extracting a new set of demonstrations in the +next iteration. After several iterations, IDS adopts majority voting to obtain +the final result. Through extensive experiments on tasks including commonsense +reasoning, question answering, topic classification, and sentiment analysis, we +demonstrate that IDS can consistently outperform existing ICL demonstration +selection methods. +" +The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages,Chiyu Zhang,http://arxiv.org/pdf/2310.14557v1.pdf,2023-10-23,['cs.cl'],2310.14557v1.pdf," Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate +remarkable performance in a wide range of tasks. 
Despite numerous recent +studies that examine the performance of instruction-tuned LLMs on various NLP +benchmarks, there remains a lack of comprehensive investigation into their +ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning +embedded within social and interactive contexts. This deficiency arises partly +from SM not being adequately represented in any of the existing benchmarks. To +address this gap, we present SPARROW, an extensive multilingual benchmark +specifically designed for SM understanding. SPARROW comprises 169 datasets +covering 13 task types across six primary categories (e.g., anti-social +language detection, emotion recognition). SPARROW datasets encompass 64 +different languages originating from 12 language families representing 16 +writing scripts. We evaluate the performance of various multilingual pretrained +language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) +on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our +comprehensive analysis reveals that existing open-source instruction tuned LLMs +still struggle to understand SM across various languages, performing close to a +random baseline in some cases. We also find that although ChatGPT outperforms +many LLMs, it still falls behind task-specific finetuned models with a gap of +12.19 SPARROW score. Our benchmark is available at: +https://github.com/UBC-NLP/SPARROW +" +PAC-tuning:Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent,Guangliang Liu,http://arxiv.org/pdf/2310.17588v1.pdf,2023-10-26,"['cs.lg', 'cs.cl']",2310.17588v1.pdf," Fine-tuning pretrained language models (PLMs) for downstream tasks is a +large-scale optimization problem, in which the choice of the training algorithm +critically determines how well the trained model can generalize to unseen test +data, especially in the context of few-shot learning. To achieve good +generalization performance and avoid overfitting, techniques such as data +augmentation and pruning are often applied. However, adding these +regularizations necessitates heavy tuning of the hyperparameters of +optimization algorithms, such as the popular Adam optimizer. In this paper, we +propose a two-stage fine-tuning method, PAC-tuning, to address this +optimization challenge. First, based on PAC-Bayes training, PAC-tuning directly +minimizes the PAC-Bayes generalization bound to learn proper parameter +distribution. Second, PAC-tuning modifies the gradient by injecting noise with +the variance learned in the first stage into the model parameters during +training, resulting in a variant of perturbed gradient descent (PGD). In the +past, the few-shot scenario posed difficulties for PAC-Bayes training because +the PAC-Bayes bound, when applied to large models with limited training data, +might not be stringent. Our experimental results across 5 GLUE benchmark tasks +demonstrate that PAC-tuning successfully handles the challenges of fine-tuning +tasks and outperforms strong baseline methods by a visible margin, further +confirming the potential to apply PAC training for any other settings where the +Adam optimizer is currently used for training. +" +Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning,Ruizhe Shi,http://arxiv.org/pdf/2310.20587v3.pdf,2023-10-31,['cs.lg'],2310.20587v3.pdf," Offline reinforcement learning (RL) aims to find a near-optimal policy using +pre-collected datasets. 
In real-world scenarios, data collection could be +costly and risky; therefore, offline RL becomes particularly challenging when +the in-domain data is limited. Given recent advances in Large Language Models +(LLMs) and their few-shot learning prowess, this paper introduces +$\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a +general framework based on Decision Transformers to effectively use pre-trained +Language Models (LMs) for offline RL. Our framework highlights four crucial +components: (1) Initializing Decision Transformers with sequentially +pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to +full-weight fine-tuning, to combine the pre-trained knowledge from LMs and +in-domain knowledge effectively, (3) using the non-linear MLP transformation +instead of linear projections, to generate embeddings, and (4) integrating an +auxiliary language prediction loss during fine-tuning to stabilize the LMs and +retain their original abilities on languages. Empirical results indicate +$\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks +and closes the gap between value-based offline RL methods and decision +transformers in dense-reward tasks. In particular, our method demonstrates +superior performance in scenarios with limited data samples. Our project +website is $\href{https://lamo2023.github.io}{\text{this https URL}}$. +" +On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval,Jiayi Chen,http://arxiv.org/pdf/2311.00693v1.pdf,2023-11-01,['cs.ai'],2311.00693v1.pdf," Visually-rich document entity retrieval (VDER), which extracts key +information (e.g. date, address) from document images like invoices and +receipts, has become an important topic in industrial NLP applications. The +emergence of new document types at a constant pace, each with its unique entity +types, presents a unique challenge: many documents contain unseen entity types +that occur only a couple of times. Addressing this challenge requires models to +have the ability of learning entities in a few-shot manner. However, prior +works for Few-shot VDER mainly address the problem at the document level with a +predefined global entity space, which doesn't account for the entity-level +few-shot scenario: target entity types are locally personalized by each task +and entity occurrences vary significantly among documents. To address this +unexplored scenario, this paper studies a novel entity-level few-shot VDER +task. The challenges lie in the uniqueness of the label space for each task and +the increased complexity of out-of-distribution (OOD) contents. To tackle this +novel task, we present a task-aware meta-learning based framework, with a +central focus on achieving effective task personalization that distinguishes +between in-task and out-of-task distribution. Specifically, we adopt a +hierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to +achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost +future research in the field of entity-level few-shot VDER. Experimental +results demonstrate our approaches significantly improve the robustness of +popular meta-learning baselines. 
"
A Survey of Large Language Models for Autonomous Driving,Zhenjie Yang,http://arxiv.org/pdf/2311.01043v1.pdf,2023-11-02,['cs.ai'],2311.01043v1.pdf," Autonomous driving technology, a catalyst for revolutionizing transportation
and urban mobility, has the tend to transition from rule-based systems to
data-driven strategies. Traditional module-based systems are constrained by
cumulative errors among cascaded modules and inflexible pre-set rules. In
contrast, end-to-end autonomous driving systems have the potential to avoid
error accumulation due to their fully data-driven training process, although
they often lack transparency due to their ``black box"" nature, complicating the
validation and traceability of decisions. Recently, large language models
(LLMs) have demonstrated abilities including understanding context, logical
reasoning, and generating answers. A natural thought is to utilize these
abilities to empower autonomous driving. By combining LLM with foundation
vision models, it could open the door to open-world understanding, reasoning,
and few-shot learning, which current autonomous driving systems are lacking. In
this paper, we systematically review a research line about \textit{Large
Language Models for Autonomous Driving (LLM4AD)}. This study evaluates the
current state of technological advancements, distinctly outlining the principal
challenges and prospective directions for the field. For the convenience of
researchers in academia and industry, we provide real-time updates on the
latest advances in the field as well as relevant open-source resources via the
designated link: https://github.com/Thinklab-SJTU/Awesome-LLM4AD.
"
Robust Fine-Tuning of Vision-Language Models for Domain Generalization,Kevin Vogt-Lowell,http://arxiv.org/pdf/2311.02236v1.pdf,2023-11-03,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2311.02236v1.pdf," Transfer learning enables the sharing of common knowledge among models for a
variety of downstream tasks, but traditional methods suffer in limited training
data settings and produce narrow models incapable of effectively generalizing
under distribution shifts. Foundation models have recently demonstrated
impressive zero-shot inference capabilities and robustness under distribution
shifts. However, zero-shot evaluation for these models has been predominantly
confined to benchmarks with simple distribution shifts, limiting our
understanding of their effectiveness under the more realistic shifts found in
practice. Moreover, common fine-tuning methods for these models have yet to be
evaluated against vision models in few-shot scenarios where training data is
limited. To address these gaps, we present a new recipe for few-shot
fine-tuning of the popular vision-language foundation model CLIP and evaluate
its performance on challenging benchmark datasets with realistic distribution
shifts from the WILDS collection. Our experimentation demonstrates that, while
zero-shot CLIP fails to match performance of trained vision models on more
complex benchmarks, few-shot CLIP fine-tuning outperforms its vision-only
counterparts in terms of in-distribution and out-of-distribution accuracy at
all levels of training data availability. This provides a strong incentive for
adoption of foundation models within few-shot learning applications operating
with real-world data. 
Code is available at +https://github.com/mit-ll/robust-vision-language-finetuning +" +"A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics",Qing Li,http://arxiv.org/pdf/2103.01403v3.pdf,2021-03-02,"['cs.lg', 'cs.ai', 'cs.cv']",2103.01403v3.pdf," Inspired by humans' exceptional ability to master arithmetic and generalize +to new problems, we present a new dataset, Handwritten arithmetic with INTegers +(HINT), to examine machines' capability of learning generalizable concepts at +three levels: perception, syntax, and semantics. In HINT, machines are tasked +with learning how concepts are perceived from raw signals such as images (i.e., +perception), how multiple concepts are structurally combined to form a valid +expression (i.e., syntax), and how concepts are realized to afford various +reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing +on systematic generalization, we carefully design a five-fold test set to +evaluate both the interpolation and the extrapolation of learned concepts +w.r.t. the three levels. Further, we design a few-shot learning split to +determine whether or not models can rapidly learn new concepts and generalize +them to more complex scenarios. To comprehend existing models' limitations, we +undertake extensive experiments with various sequence-to-sequence models, +including RNNs, Transformers, and GPT-3 (with the chain of thought prompting). +The results indicate that current models struggle to extrapolate to long-range +syntactic dependency and semantics. Models exhibit a considerable gap toward +human-level generalization when evaluated with new concepts in a few-shot +setting. Moreover, we discover that it is infeasible to solve HINT by merely +scaling up the dataset and the model size; this strategy contributes little to +the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 +experiments, the chain of thought prompting exhibits impressive results and +significantly boosts the test accuracy. We believe the HINT dataset and the +experimental findings are of great interest to the learning community on +systematic generalization. +" +Lesion2Vec: Deep Metric Learning for Few-Shot Multiple Lesions Recognition in Wireless Capsule Endoscopy Video,Sodiq Adewole,http://arxiv.org/pdf/2101.04240v2.pdf,2021-01-11,['cs.cv'],2101.04240v2.pdf," Effective and rapid detection of lesions in the Gastrointestinal tract is +critical to gastroenterologist's response to some life-threatening diseases. +Wireless Capsule Endoscopy (WCE) has revolutionized traditional endoscopy +procedure by allowing gastroenterologists visualize the entire GI tract +non-invasively. Once the tiny capsule is swallowed, it sequentially capture +images of the GI tract at about 2 to 6 frames per second (fps). A single video +can last up to 8 hours producing between 30,000 to 100,000 images. Automating +the detection of frames containing specific lesion in WCE video would relieve +gastroenterologists the arduous task of reviewing the entire video before +making diagnosis. While the WCE produces large volume of images, only about 5\% +of the frames contain lesions that aid the diagnosis process. Convolutional +Neural Network (CNN) based models have been very successful in various image +classification tasks. However, they suffer excessive parameters, are sample +inefficient and rely on very large amount of training data. 
Deploying a CNN +classifier for lesion detection task will require time-to-time fine-tuning to +generalize to any unforeseen category. In this paper, we propose a metric-based +learning framework followed by a few-shot lesion recognition in WCE data. +Metric-based learning is a meta-learning framework designed to establish +similarity or dissimilarity between concepts while few-shot learning (FSL) aims +to identify new concepts from only a small number of examples. We train a +feature extractor to learn a representation for different small bowel lesions +using metric-based learning. At the testing stage, the category of an unseen +sample is predicted from only a few support examples, thereby allowing the +model to generalize to a new category that has never been seen before. We +demonstrated the efficacy of this method on real patient capsule endoscopy +data. +" +Program Synthesis with Large Language Models,Jacob Austin,http://arxiv.org/pdf/2108.07732v1.pdf,2021-08-16,"['cs.pl', 'cs.lg']",2108.07732v1.pdf," This paper explores the limits of the current generation of large language +models for program synthesis in general purpose programming languages. We +evaluate a collection of such models (with between 244M and 137B parameters) on +two new benchmarks, MBPP and MathQA-Python, in both the few-shot and +fine-tuning regimes. Our benchmarks are designed to measure the ability of +these models to synthesize short Python programs from natural language +descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 +programming tasks, designed to be solvable by entry-level programmers. The +MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914 +problems that evaluate the ability of the models to synthesize code from more +complex text. On both datasets, we find that synthesis performance scales +log-linearly with model size. Our largest models, even without finetuning on a +code dataset, can synthesize solutions to 59.6 percent of the problems from +MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a +held-out portion of the dataset improves performance by about 10 percentage +points across most model sizes. On the MathQA-Python dataset, the largest +fine-tuned model achieves 83.8 percent accuracy. Going further, we study the +model's ability to engage in dialog about code, incorporating human feedback to +improve its solutions. We find that natural language feedback from a human +halves the error rate compared to the model's initial prediction. Additionally, +we conduct an error analysis to shed light on where these models fall short and +what types of programs are most difficult to generate. Finally, we explore the +semantic grounding of these models by fine-tuning them to predict the results +of program execution. We find that even our best models are generally unable to +predict the output of a program given a specific input. +" +Unsupervised Law Article Mining based on Deep Pre-Trained Language Representation Models with Application to the Italian Civil Code,Andrea Tagarelli,http://arxiv.org/pdf/2112.03033v1.pdf,2021-12-02,"['cs.cl', 'cs.ai', 'cs.ir', 'physics.soc-ph']",2112.03033v1.pdf," Modeling law search and retrieval as prediction problems has recently emerged +as a predominant approach in law intelligence. Focusing on the law article +retrieval task, we present a deep learning framework named LamBERTa, which is +designed for civil-law codes, and specifically trained on the Italian civil +code. 
To our knowledge, this is the first study proposing an advanced approach +to law article prediction for the Italian legal system based on a BERT +(Bidirectional Encoder Representations from Transformers) learning framework, +which has recently attracted increased attention among deep learning +approaches, showing outstanding effectiveness in several natural language +processing and learning tasks. We define LamBERTa models by fine-tuning an +Italian pre-trained BERT on the Italian civil code or its portions, for law +article retrieval as a classification task. One key aspect of our LamBERTa +framework is that we conceived it to address an extreme classification +scenario, which is characterized by a high number of classes, the few-shot +learning problem, and the lack of test query benchmarks for Italian legal +prediction tasks. To solve such issues, we define different methods for the +unsupervised labeling of the law articles, which can in principle be applied to +any law article code system. We provide insights into the explainability and +interpretability of our LamBERTa models, and we present an extensive +experimental analysis over query sets of different type, for single-label as +well as multi-label evaluation tasks. Empirical evidence has shown the +effectiveness of LamBERTa, and also its superiority against widely used +deep-learning text classifiers and a few-shot learner conceived for an +attribute-aware prediction task. +" +"Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model",Shaden Smith,http://arxiv.org/pdf/2201.11990v3.pdf,2022-01-28,['cs.cl'],2201.11990v3.pdf," Pretrained general-purpose language models can achieve state-of-the-art +accuracies in various natural language processing domains by adapting to +downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of +their success, the size of these models has increased rapidly, requiring +high-performance hardware, software, and algorithmic techniques to enable +training such large models. As the result of a joint effort between Microsoft +and NVIDIA, we present details on the training of the largest monolithic +transformer based language model, Megatron-Turing NLG 530B (MT-NLG), with 530 +billion parameters. In this paper, we first focus on the infrastructure as well +as the 3D parallelism methodology used to train this model using DeepSpeed and +Megatron. Next, we detail the training process, the design of our training +corpus, and our data curation techniques, which we believe is a key ingredient +to the success of the model. Finally, we discuss various evaluation results, as +well as other interesting observations and new properties exhibited by MT-NLG. +We demonstrate that MT-NLG achieves superior zero-, one-, and few-shot learning +accuracies on several NLP benchmarks and establishes new state-of-the-art +results. We believe that our contributions will help further the development of +large-scale training infrastructures, large-scale language models, and natural +language generations. +" +Data Distributional Properties Drive Emergent In-Context Learning in Transformers,Stephanie C. Y. Chan,http://arxiv.org/pdf/2205.05055v6.pdf,2022-04-22,"['cs.lg', 'cs.ai', 'cs.cl']",2205.05055v6.pdf," Large transformer-based models are able to perform in-context few-shot +learning, without being explicitly trained for it. This observation raises the +question: what aspects of the training regime lead to this emergent behavior? 
+Here, we show that this behavior is driven by the distributions of the training +data itself. In-context learning emerges when the training data exhibits +particular distributional properties such as burstiness (items appear in +clusters rather than being uniformly distributed over time) and having large +numbers of rarely occurring classes. In-context learning also emerges more +strongly when item meanings or interpretations are dynamic rather than fixed. +These properties are exemplified by natural language, but are also inherent to +naturalistic data in a wide range of other domains. They also depart +significantly from the uniform, i.i.d. training distributions typically used +for standard supervised learning. In our initial experiments, we found that +in-context learning traded off against more conventional weight-based learning, +and models were unable to achieve both simultaneously. However, our later +experiments uncovered that the two modes of learning could co-exist in a single +model when it was trained on data following a skewed Zipfian distribution -- +another common property of naturalistic data, including language. In further +experiments, we found that naturalistic data distributions were only able to +elicit in-context learning in transformers, and not in recurrent models. In +sum, our findings indicate how the transformer architecture works together with +particular properties of the training data to drive the intriguing emergent +in-context learning behaviour of large language models, and how future work +might encourage both in-context and in-weights learning in domains beyond +language. +" +Large Language Models are Zero-Shot Reasoners,Takeshi Kojima,http://arxiv.org/pdf/2205.11916v4.pdf,2022-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2205.11916v4.pdf," Pretrained large language models (LLMs) are widely used in many sub-fields of +natural language processing (NLP) and generally known as excellent few-shot +learners with task-specific exemplars. Notably, chain of thought (CoT) +prompting, a recent technique for eliciting complex multi-step reasoning +through step-by-step answer examples, achieved the state-of-the-art +performances in arithmetics and symbolic reasoning, difficult system-2 tasks +that do not follow the standard scaling laws for LLMs. While these successes +are often attributed to LLMs' ability for few-shot learning, we show that LLMs +are decent zero-shot reasoners by simply adding ""Let's think step by step"" +before each answer. Experimental results demonstrate that our Zero-shot-CoT, +using the same single prompt template, significantly outperforms zero-shot LLM +performances on diverse benchmark reasoning tasks including arithmetics +(MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin +Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled +Objects), without any hand-crafted few-shot examples, e.g. increasing the +accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with +large InstructGPT model (text-davinci-002), as well as similar magnitudes of +improvements with another off-the-shelf large model, 540B parameter PaLM. The +versatility of this single prompt across very diverse reasoning tasks hints at +untapped and understudied fundamental zero-shot capabilities of LLMs, +suggesting high-level, multi-task broad cognitive capabilities may be extracted +by simple prompting. 
We hope our work not only serves as the minimal strongest +zero-shot baseline for the challenging reasoning benchmarks, but also +highlights the importance of carefully exploring and analyzing the enormous +zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or +few-shot exemplars. +" +Hungry Hungry Hippos: Towards Language Modeling with State Space Models,Daniel Y. Fu,http://arxiv.org/pdf/2212.14052v3.pdf,2022-12-28,"['cs.lg', 'cs.cl']",2212.14052v3.pdf," State space models (SSMs) have demonstrated state-of-the-art sequence +modeling performance in some modalities, but underperform attention in language +modeling. Moreover, despite scaling nearly linearly in sequence length instead +of quadratically, SSMs are still slower than Transformers due to poor hardware +utilization. In this paper, we make progress on understanding the expressivity +gap between SSMs and attention in language modeling, and on reducing the +hardware barrier between SSMs and attention. First, we use synthetic language +modeling tasks to understand the gap between SSMs and attention. We find that +existing SSMs struggle with two capabilities: recalling earlier tokens in the +sequence and comparing tokens across the sequence. To understand the impact on +language modeling, we propose a new SSM layer, H3, that is explicitly designed +for these abilities. H3 matches attention on the synthetic languages and comes +within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid +125M-parameter H3-attention model that retains two attention layers +surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to +improve the efficiency of training SSMs on modern hardware, we propose +FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on +sequences up to 8K, and introduces a novel state passing algorithm that +exploits the recurrent properties of SSMs to scale to longer sequences. +FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows +hybrid language models to generate text 2.4$\times$ faster than Transformers. +Using FlashConv, we scale hybrid H3-attention language models up to 2.7B +parameters on the Pile and find promising initial results, achieving lower +perplexity than Transformers and outperforming Transformers in zero- and +few-shot learning on a majority of tasks in the SuperGLUE benchmark. +" +CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP,Runnan Chen,http://arxiv.org/pdf/2301.04926v2.pdf,2023-01-12,['cs.cv'],2301.04926v2.pdf," Contrastive Language-Image Pre-training (CLIP) achieves promising results in +2D zero-shot and few-shot learning. Despite the impressive performance in 2D, +applying CLIP to help the learning in 3D scene understanding has yet to be +explored. In this paper, we make the first attempt to investigate how CLIP +knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet +effective framework that transfers CLIP knowledge from 2D image-text +pre-trained models to a 3D point cloud network. We show that the pre-trained 3D +network yields impressive performance on various downstream tasks, i.e., +annotation-free and fine-tuning with labelled data for semantic segmentation. +Specifically, built upon CLIP, we design a Semantic-driven Cross-modal +Contrastive Learning framework that pre-trains a 3D network via semantic and +spatial-temporal consistency regularization. 
For the former, we first leverage +CLIP's text semantics to select the positive and negative point samples and +then employ the contrastive loss to train the 3D network. In terms of the +latter, we force the consistency between the temporally coherent point cloud +features and their corresponding image features. We conduct experiments on +SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained +network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08% +mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100% +labelled data, our method significantly outperforms other self-supervised +methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we +demonstrate the generalizability for handling cross-domain datasets. Code is +publicly available https://github.com/runnanchen/CLIP2Scene. +" +An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation,Max Schäfer,http://arxiv.org/pdf/2302.06527v3.pdf,2023-02-13,"['cs.se', 'cs.ai']",2302.06527v3.pdf," Unit tests play a key role in ensuring the correctness of software. However, +manually creating unit tests is a laborious task, motivating the need for +automation. Large Language Models (LLMs) have recently been applied to this +problem, utilizing additional training or few-shot learning on examples of +existing tests. This paper presents a large-scale empirical evaluation on the +effectiveness of LLMs for automated unit test generation without additional +training or manual effort, providing the LLM with the signature and +implementation of the function under test, along with usage examples extracted +from documentation. We also attempt to repair failed generated tests by +re-prompting the model with the failing test and error message. We implement +our approach in TestPilot, a test generation tool for JavaScript that +automatically generates unit tests for all API functions in an npm package. We +evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a +total of 1,684 API functions. The generated tests achieve a median statement +coverage of 70.2% and branch coverage of 52.8%, significantly improving on +Nessie, a recent feedback-directed JavaScript test generation technique, which +achieves only 51.3% statement coverage and 25.6% branch coverage. We also find +that 92.8% of TestPilot's generated tests have no more than 50% similarity with +existing tests (as measured by normalized edit distance), with none of them +being exact copies. Finally, we run TestPilot with two additional LLMs, +OpenAI's older code-cushman-002 LLM and the open LLM StarCoder. Overall, we +observed similar results with the former (68.2% median statement coverage), and +somewhat worse results with the latter (54.0% median statement coverage), +suggesting that the effectiveness of the approach is influenced by the size and +training set of the LLM, but does not fundamentally depend on the specific +model. +" +On the Opportunities and Challenges of Foundation Models for Geospatial Artificial Intelligence,Gengchen Mai,http://arxiv.org/pdf/2304.06798v1.pdf,2023-04-13,"['cs.ai', 'cs.cl', 'cs.cv', 'i.2.0; i.2.4; i.2.7; i.2.10; i.5.1']",2304.06798v1.pdf," Large pre-trained models, also known as foundation models (FMs), are trained +in a task-agnostic manner on large-scale data and can be adapted to a wide +range of downstream tasks by fine-tuning, few-shot, or even zero-shot learning. 
+Despite their successes in language and vision tasks, we have yet seen an +attempt to develop foundation models for geospatial artificial intelligence +(GeoAI). In this work, we explore the promises and challenges of developing +multimodal foundation models for GeoAI. We first investigate the potential of +many existing FMs by testing their performances on seven tasks across multiple +geospatial subdomains including Geospatial Semantics, Health Geography, Urban +Geography, and Remote Sensing. Our results indicate that on several geospatial +tasks that only involve text modality such as toponym recognition, location +description recognition, and US state-level/county-level dementia time series +forecasting, these task-agnostic LLMs can outperform task-specific +fully-supervised models in a zero-shot or few-shot learning setting. However, +on other geospatial tasks, especially tasks that involve multiple data +modalities (e.g., POI-based urban function classification, street view +image-based urban noise intensity classification, and remote sensing image +scene classification), existing foundation models still underperform +task-specific models. Based on these observations, we propose that one of the +major challenges of developing a FM for GeoAI is to address the multimodality +nature of geospatial tasks. After discussing the distinct challenges of each +geospatial data modality, we suggest the possibility of a multimodal foundation +model which can reason over various types of geospatial data through geospatial +alignments. We conclude this paper by discussing the unique risks and +challenges to develop such a model for GeoAI. +" +Learning to detect an animal sound from five examples,Inês Nolasco,http://arxiv.org/pdf/2305.13210v1.pdf,2023-05-22,"['cs.sd', 'eess.as', 'q-bio.qm']",2305.13210v1.pdf," Automatic detection and classification of animal sounds has many applications +in biodiversity monitoring and animal behaviour. In the past twenty years, the +volume of digitised wildlife sound available has massively increased, and +automatic classification through deep learning now shows strong results. +However, bioacoustics is not a single task but a vast range of small-scale +tasks (such as individual ID, call type, emotional indication) with wide +variety in data characteristics, and most bioacoustic tasks do not come with +strongly-labelled training data. The standard paradigm of supervised learning, +focussed on a single large-scale dataset and/or a generic pre-trained +algorithm, is insufficient. In this work we recast bioacoustic sound event +detection within the AI framework of few-shot learning. We adapt this framework +to sound event detection, such that a system can be given the annotated +start/end times of as few as 5 events, and can then detect events in +long-duration audio -- even when the sound category was not known at the time +of algorithm training. We introduce a collection of open datasets designed to +strongly test a system's ability to perform few-shot sound event detections, +and we present the results of a public contest to address the task. We show +that prototypical networks are a strong-performing method, when enhanced with +adaptations for general characteristics of animal sounds. We demonstrate that +widely-varying sound event durations are an important factor in performance, as +well as non-stationarity, i.e. gradual changes in conditions throughout the +duration of a recording. 
For fine-grained bioacoustic recognition tasks without +massive annotated training data, our results demonstrate that few-shot sound +event detection is a powerful new method, strongly outperforming traditional +signal-processing detection methods in the fully automated scenario. +" +The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification,Linhao Qu,http://arxiv.org/pdf/2305.17891v1.pdf,2023-05-29,['cs.cv'],2305.17891v1.pdf," This paper introduces the novel concept of few-shot weakly supervised +learning for pathology Whole Slide Image (WSI) classification, denoted as FSWC. +A solution is proposed based on prompt learning and the utilization of a large +language model, GPT-4. Since a WSI is too large and needs to be divided into +patches for processing, WSI classification is commonly approached as a Multiple +Instance Learning (MIL) problem. In this context, each WSI is considered a bag, +and the obtained patches are treated as instances. The objective of FSWC is to +classify both bags and instances with only a limited number of labeled bags. +Unlike conventional few-shot learning problems, FSWC poses additional +challenges due to its weak bag labels within the MIL framework. Drawing +inspiration from the recent achievements of vision-language models (V-L models) +in downstream few-shot classification tasks, we propose a two-level prompt +learning MIL framework tailored for pathology, incorporating language prior +knowledge. Specifically, we leverage CLIP to extract instance features for each +patch, and introduce a prompt-guided pooling strategy to aggregate these +instance features into a bag feature. Subsequently, we employ a small number of +labeled bags to facilitate few-shot prompt learning based on the bag features. +Our approach incorporates the utilization of GPT-4 in a question-and-answer +mode to obtain language prior knowledge at both the instance and bag levels, +which are then integrated into the instance and bag level language prompts. +Additionally, a learnable component of the language prompts is trained using +the available few-shot labeled data. We conduct extensive experiments on three +real WSI datasets encompassing breast cancer, lung cancer, and cervical cancer, +demonstrating the notable performance of the proposed method in bag and +instance classification. All codes will be made publicly accessible. +" +Effective Test Generation Using Pre-trained Large Language Models and Mutation Testing,Arghavan Moradi Dakhel,http://arxiv.org/pdf/2308.16557v1.pdf,2023-08-31,['cs.se'],2308.16557v1.pdf," One of the critical phases in software development is software testing. +Testing helps with identifying potential bugs and reducing maintenance costs. +The goal of automated test generation tools is to ease the development of tests +by suggesting efficient bug-revealing tests. Recently, researchers have +leveraged Large Language Models (LLMs) of code to generate unit tests. While +the code coverage of generated tests was usually assessed, the literature has +acknowledged that the coverage is weakly correlated with the efficiency of +tests in bug detection. To improve over this limitation, in this paper, we +introduce MuTAP for improving the effectiveness of test cases generated by LLMs +in terms of revealing bugs by leveraging mutation testing. Our goal is achieved +by augmenting prompts with surviving mutants, as those mutants highlight the +limitations of test cases in detecting bugs. 
MuTAP is capable of generating +effective test cases in the absence of natural language descriptions of the +Program Under Test (PUTs). We employ different LLMs within MuTAP and evaluate +their performance on different benchmarks. Our results show that our proposed +method is able to detect up to 28% more faulty human-written code snippets. +Among these, 17% remained undetected by both the current state-of-the-art fully +automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning +approaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57% +on synthetic buggy code, outperforming all other approaches in our evaluation. +Our findings suggest that although LLMs can serve as a useful tool to generate +test cases, they require specific post-processing steps to enhance the +effectiveness of the generated test cases which may suffer from syntactic or +functional errors and may be ineffective in detecting certain types of bugs and +testing corner cases PUTs. +" +LLM4SGG: Large Language Model for Weakly Supervised Scene Graph Generation,Kibum Kim,http://arxiv.org/pdf/2310.10404v4.pdf,2023-10-16,['cs.cv'],2310.10404v4.pdf," Weakly-Supervised Scene Graph Generation (WSSGG) research has recently +emerged as an alternative to the fully-supervised approach that heavily relies +on costly annotations. In this regard, studies on WSSGG have utilized image +captions to obtain unlocalized triplets while primarily focusing on grounding +the unlocalized triplets over image regions. However, they have overlooked the +two issues involved in the triplet formation process from the captions: 1) +Semantic over-simplification issue arises when extracting triplets from +captions, where fine-grained predicates in captions are undesirably converted +into coarse-grained predicates, resulting in a long-tailed predicate +distribution, and 2) Low-density scene graph issue arises when aligning the +triplets in the caption with entity/predicate classes of interest, where many +triplets are discarded and not used in training, leading to insufficient +supervision. To tackle the two issues, we propose a new approach, i.e., Large +Language Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two +issues by leveraging the LLM's in-depth understanding of language and reasoning +ability during the extraction of triplets from captions and alignment of +entity/predicate classes with target data. To further engage the LLM in these +processes, we adopt the idea of Chain-of-Thought and the in-context few-shot +learning strategy. To validate the effectiveness of LLM4SGG, we conduct +extensive experiments on Visual Genome and GQA datasets, showing significant +improvements in both Recall@K and mean Recall@K compared to the +state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is +data-efficient, enabling effective model training with a small amount of +training images. +" +Language Models are Few-Shot Learners,Tom B. Brown,http://arxiv.org/pdf/2005.14165v4.pdf,2020-05-28,['cs.cl'],2005.14165v4.pdf," Recent work has demonstrated substantial gains on many NLP tasks and +benchmarks by pre-training on a large corpus of text followed by fine-tuning on +a specific task. While typically task-agnostic in architecture, this method +still requires task-specific fine-tuning datasets of thousands or tens of +thousands of examples. 
By contrast, humans can generally perform a new language +task from only a few examples or from simple instructions - something which +current NLP systems still largely struggle to do. Here we show that scaling up +language models greatly improves task-agnostic, few-shot performance, sometimes +even reaching competitiveness with prior state-of-the-art fine-tuning +approaches. Specifically, we train GPT-3, an autoregressive language model with +175 billion parameters, 10x more than any previous non-sparse language model, +and test its performance in the few-shot setting. For all tasks, GPT-3 is +applied without any gradient updates or fine-tuning, with tasks and few-shot +demonstrations specified purely via text interaction with the model. GPT-3 +achieves strong performance on many NLP datasets, including translation, +question-answering, and cloze tasks, as well as several tasks that require +on-the-fly reasoning or domain adaptation, such as unscrambling words, using a +novel word in a sentence, or performing 3-digit arithmetic. At the same time, +we also identify some datasets where GPT-3's few-shot learning still struggles, +as well as some datasets where GPT-3 faces methodological issues related to +training on large web corpora. Finally, we find that GPT-3 can generate samples +of news articles which human evaluators have difficulty distinguishing from +articles written by humans. We discuss broader societal impacts of this finding +and of GPT-3 in general. +" +MasakhaNEWS: News Topic Classification for African languages,David Ifeoluwa Adelani,http://arxiv.org/pdf/2304.09972v2.pdf,2023-04-19,['cs.cl'],2304.09972v2.pdf," African languages are severely under-represented in NLP research due to lack +of datasets covering several NLP tasks. While there are individual language +specific datasets that are being expanded to different tasks, only a handful of +NLP tasks (e.g. named entity recognition and machine translation) have +standardized benchmark datasets covering several geographical and +typologically-diverse African languages. In this paper, we develop MasakhaNEWS +-- a new benchmark dataset for news topic classification covering 16 languages +widely spoken in Africa. We provide an evaluation of baseline models by +training classical machine learning models and fine-tuning several language +models. Furthermore, we explore several alternatives to full fine-tuning of +language models that are better suited for zero-shot and few-shot learning such +as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern +exploiting training (PET), prompting language models (like ChatGPT), and +prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). +Our evaluation in zero-shot setting shows the potential of prompting ChatGPT +for news topic classification in low-resource African languages, achieving an +average performance of 70 F1 points without leveraging additional supervision +like MAD-X. In few-shot setting, we show that with as little as 10 examples per +label, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance of +full supervised training (92.6 F1 points) leveraging the PET approach. 
+" +Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods,Mengsay Loem,http://arxiv.org/pdf/2305.18156v1.pdf,2023-05-29,"['cs.cl', 'cs.ai']",2305.18156v1.pdf," Large-scale pre-trained language models such as GPT-3 have shown remarkable +performance across various natural language processing tasks. However, applying +prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks +and their controllability remains underexplored. Controllability in GEC is +crucial for real-world applications, particularly in educational settings, +where the ability to tailor feedback according to learner levels and specific +error types can significantly enhance the learning process. This paper +investigates the performance and controllability of prompt-based methods with +GPT-3 for GEC tasks using zero-shot and few-shot setting. We explore the impact +of task instructions and examples on GPT-3's output, focusing on controlling +aspects such as minimal edits, fluency edits, and learner levels. Our findings +demonstrate that GPT-3 could effectively perform GEC tasks, outperforming +existing supervised and unsupervised approaches. We also showed that GPT-3 +could achieve controllability when appropriate task instructions and examples +are given. +" +Causal Intervention-based Prompt Debiasing for Event Argument Extraction,Jiaju Lin,http://arxiv.org/pdf/2210.01561v1.pdf,2022-10-04,"['cs.cl', 'cs.ai']",2210.01561v1.pdf," Prompt-based methods have become increasingly popular among information +extraction tasks, especially in low-data scenarios. By formatting a finetune +task into a pre-training objective, prompt-based methods resolve the data +scarce problem effectively. However, seldom do previous research investigate +the discrepancy among different prompt formulating strategies. In this work, we +compare two kinds of prompts, name-based prompt and ontology-base prompt, and +reveal how ontology-base prompt methods exceed its counterpart in zero-shot +event argument extraction (EAE) . Furthermore, we analyse the potential risk in +ontology-base prompts via a causal view and propose a debias method by causal +intervention. Experiments on two benchmarks demonstrate that modified by our +debias method, the baseline model becomes both more effective and robust, with +significant improvement in the resistance to adversarial attacks. +" +When Prompt-based Incremental Learning Does Not Meet Strong Pretraining,Yu-Ming Tang,http://arxiv.org/pdf/2308.10445v1.pdf,2023-08-21,['cs.cv'],2308.10445v1.pdf," Incremental learning aims to overcome catastrophic forgetting when learning +deep networks from sequential tasks. With impressive learning efficiency and +performance, prompt-based methods adopt a fixed backbone to sequential tasks by +learning task-specific prompts. However, existing prompt-based methods heavily +rely on strong pretraining (typically trained on ImageNet-21k), and we find +that their models could be trapped if the potential gap between the pretraining +task and unknown future tasks is large. In this work, we develop a learnable +Adaptive Prompt Generator (APG). The key is to unify the prompt retrieval and +prompt learning processes into a learnable prompt generator. Hence, the whole +prompting process can be optimized to reduce the negative effects of the gap +between tasks effectively. 
To make our APG avoid learning ineffective +knowledge, we maintain a knowledge pool to regularize APG with the feature +distribution of each class. Extensive experiments show that our method +significantly outperforms advanced methods in exemplar-free incremental +learning without (strong) pretraining. Besides, under strong retraining, our +method also has comparable performance to existing prompt-based models, showing +that our method can still benefit from pretraining. Codes can be found at +https://github.com/TOM-tym/APG +" +Zero-shot Domain Adaptation for Neural Machine Translation with Retrieved Phrase-level Prompts,Zewei Sun,http://arxiv.org/pdf/2209.11409v1.pdf,2022-09-23,['cs.cl'],2209.11409v1.pdf," Domain adaptation is an important challenge for neural machine translation. +However, the traditional fine-tuning solution requires multiple extra training +and yields a high cost. In this paper, we propose a non-tuning paradigm, +resolving domain adaptation with a prompt-based method. Specifically, we +construct a bilingual phrase-level database and retrieve relevant pairs from it +as a prompt for the input sentences. By utilizing Retrieved Phrase-level +Prompts (RePP), we effectively boost the translation quality. Experiments show +that our method improves domain-specific machine translation for 6.2 BLEU +scores and improves translation constraints for 11.5% accuracy without +additional training. +" +NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction,Yi Sun,http://arxiv.org/pdf/2109.03564v2.pdf,2021-09-08,"['cs.cl', 'cs.ai']",2109.03564v2.pdf," Using prompts to utilize language models to perform various downstream tasks, +also known as prompt-based learning or prompt-learning, has lately gained +significant success in comparison to the pre-train and fine-tune paradigm. +Nonetheless, virtually all prompt-based methods are token-level, meaning they +all utilize GPT's left-to-right language model or BERT's masked language model +to perform cloze-style tasks. In this paper, we attempt to accomplish several +NLP tasks in the zero-shot scenario using a BERT original pre-training task +abandoned by RoBERTa and other models--Next Sentence Prediction (NSP). Unlike +token-level techniques, our sentence-level prompt-based method NSP-BERT does +not need to fix the length of the prompt or the position to be predicted, +allowing it to handle tasks such as entity linking with ease. Based on the +characteristics of NSP-BERT, we offer several quick building templates for +various downstream tasks. We suggest a two-stage prompt method for word sense +disambiguation tasks in particular. Our strategies for mapping the labels +significantly enhance the model's performance on sentence pair tasks. On the +FewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most of +these tasks and comes close to the few-shot methods. +" +Introducing Language Guidance in Prompt-based Continual Learning,Muhammad Gul Zain Ali Khan,http://arxiv.org/pdf/2308.15827v1.pdf,2023-08-30,['cs.cv'],2308.15827v1.pdf," Continual Learning aims to learn a single model on a sequence of tasks +without having access to data from previous tasks. The biggest challenge in the +domain still remains catastrophic forgetting: a loss in performance on seen +classes of earlier tasks. Some existing methods rely on an expensive replay +buffer to store a chunk of data from previous tasks. 
This, while promising, +becomes expensive when the number of tasks becomes large or data can not be +stored for privacy reasons. As an alternative, prompt-based methods have been +proposed that store the task information in a learnable prompt pool. This +prompt pool instructs a frozen image encoder on how to solve each task. While +the model faces a disjoint set of classes in each task in this setting, we +argue that these classes can be encoded to the same embedding space of a +pre-trained language encoder. In this work, we propose Language Guidance for +Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. +LGCL is model agnostic and introduces language guidance at the task level in +the prompt pool and at the class level on the output feature of the vision +encoder. We show with extensive experimentation that LGCL consistently improves +the performance of prompt-based continual learning methods to set a new +state-of-the art. LGCL achieves these performance improvements without needing +any additional learnable parameters. +" +Enable Language Models to Implicitly Learn Self-Improvement From Data,Ziqi Wang,http://arxiv.org/pdf/2310.00898v2.pdf,2023-10-02,['cs.cl'],2310.00898v2.pdf," Large Language Models (LLMs) have demonstrated remarkable capabilities in +open-ended text generation tasks. However, the inherent open-ended nature of +these tasks implies that there is always room for improvement in the quality of +model responses. To address this challenge, various approaches have been +proposed to enhance the performance of LLMs. There has been a growing focus on +enabling LLMs to self-improve their response quality, thereby reducing the +reliance on extensive human annotation efforts for collecting diverse and +high-quality training data. Recently, prompting-based methods have been widely +explored among self-improvement methods owing to their effectiveness, +efficiency, and convenience. However, those methods usually require explicitly +and thoroughly written rubrics as inputs to LLMs. It is expensive and +challenging to manually derive and provide all necessary rubrics with a +real-world complex goal for improvement (e.g., being more helpful and less +harmful). To this end, we propose an ImPlicit Self-ImprovemenT (PIT) framework +that implicitly learns the improvement goal from human preference data. PIT +only requires preference data that are used to train reward models without +extra human efforts. Specifically, we reformulate the training objective of +reinforcement learning from human feedback (RLHF) -- instead of maximizing +response quality for a given input, we maximize the quality gap of the response +conditioned on a reference response. In this way, PIT is implicitly trained +with the improvement goal of better aligning with human preferences. +Experiments on two real-world datasets and one synthetic dataset show that our +method significantly outperforms prompting-based methods. +" +MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition,Jinming Zhao,http://arxiv.org/pdf/2111.00865v1.pdf,2021-10-27,"['cs.cv', 'eess.iv']",2111.00865v1.pdf," Multimodal emotion recognition study is hindered by the lack of labelled +corpora in terms of scale and diversity, due to the high annotation cost and +label ambiguity. 
In this paper, we propose a pre-training model +\textbf{MEmoBERT} for multimodal emotion recognition, which learns multimodal +joint representations through self-supervised learning from large-scale +unlabeled video data that come in sheer volume. Furthermore, unlike the +conventional ""pre-train, finetune"" paradigm, we propose a prompt-based method +that reformulates the downstream emotion classification task as a masked text +prediction one, bringing the downstream task closer to the pre-training. +Extensive experiments on two benchmark datasets, IEMOCAP and MSP-IMPROV, show +that our proposed MEmoBERT significantly enhances emotion recognition +performance. +" +PSG: Prompt-based Sequence Generation for Acronym Extraction,Bin Li,http://arxiv.org/pdf/2111.14301v2.pdf,2021-11-29,"['cs.cl', 'cs.ai']",2111.14301v2.pdf," Acronym extraction aims to find acronyms (i.e., short-forms) and their +meanings (i.e., long-forms) from the documents, which is important for +scientific document understanding (SDU@AAAI-22) tasks. Previous works are +devoted to modeling this task as a paragraph-level sequence labeling problem. +However, it lacks the effective use of the external knowledge, especially when +the datasets are in a low-resource setting. Recently, the prompt-based method +with the vast pre-trained language model can significantly enhance the +performance of the low-resourced downstream tasks. In this paper, we propose a +Prompt-based Sequence Generation (PSG) method for the acronym extraction task. +Specifically, we design a template for prompting the extracted acronym texts +with auto-regression. A position extraction algorithm is designed for +extracting the position of the generated answers. The results on the acronym +extraction of Vietnamese and Persian in a low-resource setting show that the +proposed method outperforms all other competitive state-of-the-art (SOTA) +methods. +" +Chemical Identification and Indexing in PubMed Articles via BERT and Text-to-Text Approaches,Virginia Adams,http://arxiv.org/pdf/2111.15622v1.pdf,2021-11-30,['cs.cl'],2111.15622v1.pdf," The Biocreative VII Track-2 challenge consists of named entity recognition, +entity-linking (or entity-normalization), and topic indexing tasks -- with +entities and topics limited to chemicals for this challenge. Named entity +recognition is a well-established problem and we achieve our best performance +with BERT-based BioMegatron models. We extend our BERT-based approach to the +entity linking task. After the second stage of pretraining BioBERT with a +metric-learning loss strategy called self-alignment pretraining (SAP), we link +entities based on the cosine similarity between their SAP-BioBERT word +embeddings. Despite the success of our named entity recognition experiments, we +find the chemical indexing task generally more challenging. + In addition to conventional NER methods, we attempt both named entity +recognition and entity linking with a novel text-to-text or ""prompt"" based +method that uses generative language models such as T5 and GPT. We achieve +encouraging results with this new approach. +" +AdaPrompt: Adaptive Model Training for Prompt-based NLP,Yulong Chen,http://arxiv.org/pdf/2202.04824v2.pdf,2022-02-10,['cs.cl'],2202.04824v2.pdf," Prompt-based learning, with its capability to tackle zero-shot and few-shot +NLP tasks, has gained much attention in community. 
The main idea is to bridge +the gap between NLP downstream tasks and language modeling (LM), by mapping +these tasks into natural language prompts, which are then filled by pre-trained +language models (PLMs). However, for prompt learning, there are still two +salient gaps between NLP tasks and pretraining. First, prompt information is +not necessarily sufficiently present during LM pretraining. Second, +task-specific data are not necessarily well represented during pretraining. We +address these two issues by proposing AdaPrompt, adaptively retrieving external +data for continual pretraining of PLMs by making use of both task and prompt +characteristics. In addition, we make use of knowledge in Natural Language +Inference models for deriving adaptive verbalizers. Experimental results on +five NLP benchmarks show that AdaPrompt can improve over standard PLMs in +few-shot settings. In addition, in zero-shot settings, our method outperforms +standard prompt-based methods by up to 26.35\% relative error reduction. +" +Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt,Xinyin Ma,http://arxiv.org/pdf/2205.07523v1.pdf,2022-05-16,['cs.cl'],2205.07523v1.pdf," Data-free knowledge distillation (DFKD) conducts knowledge distillation via +eliminating the dependence of original training data, and has recently achieved +impressive results in accelerating pre-trained language models. At the heart of +DFKD is to reconstruct a synthetic dataset by inverting the parameters of the +uncompressed model. Prior DFKD approaches, however, have largely relied on +hand-crafted priors of the target data distribution for the reconstruction, +which can be inevitably biased and often incompetent to capture the intrinsic +distributions. To address this problem, we propose a prompt-based method, +termed as PromptDFD, that allows us to take advantage of learned language +priors, which effectively harmonizes the synthetic sentences to be semantically +and grammatically correct. Specifically, PromptDFD leverages a pre-trained +generative model to provide language priors and introduces a reinforced topic +prompter to control data synthesis, making the generated samples thematically +relevant and semantically plausible, and thus friendly to downstream tasks. As +shown in our experiments, the proposed method substantially improves the +synthesis quality and achieves considerable improvements on distillation +performance. In some cases, PromptDFD even gives rise to results on par with +those from the data-driven knowledge distillation with access to the original +training data. +" +"Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias",Yarden Tal,http://arxiv.org/pdf/2206.09860v1.pdf,2022-06-20,['cs.cl'],2206.09860v1.pdf," The size of pretrained models is increasing, and so is their performance on a +variety of NLP tasks. However, as their memorization capacity grows, they might +pick up more social biases. In this work, we examine the connection between +model size and its gender bias (specifically, occupational gender bias). We +measure bias in three masked language model families (RoBERTa, DeBERTa, and T5) +in two setups: directly using prompt based method, and using a downstream task +(Winogender). We find on the one hand that larger models receive higher bias +scores on the former task, but when evaluated on the latter, they make fewer +gender errors. 
To examine these potentially conflicting results, we carefully +investigate the behavior of the different models on Winogender. We find that +while larger models outperform smaller ones, the probability that their +mistakes are caused by gender bias is higher. Moreover, we find that the +proportion of stereotypical errors compared to anti-stereotypical ones grows +with the model size. Our findings highlight the potential risks that can arise +from increasing model size. +" +PromptAttack: Prompt-based Attack for Language Models via Gradient Search,Yundi Shi,http://arxiv.org/pdf/2209.01882v1.pdf,2022-09-05,"['cs.cl', 'cs.ai', 'cs.cr']",2209.01882v1.pdf," As the pre-trained language models (PLMs) continue to grow, so do the +hardware and data requirements for fine-tuning PLMs. Therefore, the researchers +have come up with a lighter method called \textit{Prompt Learning}. However, +during the investigations, we observe that the prompt learning methods are +vulnerable and can easily be attacked by some illegally constructed prompts, +resulting in classification errors, and serious security problems for PLMs. +Most of the current research ignores the security issue of prompt-based +methods. Therefore, in this paper, we propose a malicious prompt template +construction method (\textbf{PromptAttack}) to probe the security performance +of PLMs. Several unfriendly template construction approaches are investigated +to guide the model to misclassify the task. Extensive experiments on three +datasets and three PLMs prove the effectiveness of our proposed approach +PromptAttack. We also conduct experiments to verify that our method is +applicable in few-shot scenarios. +" +ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering,Zhiyu Chen,http://arxiv.org/pdf/2210.03849v1.pdf,2022-10-07,['cs.cl'],2210.03849v1.pdf," With the recent advance in large pre-trained language models, researchers +have achieved record performances in NLP tasks that mostly focus on language +pattern matching. The community is experiencing the shift of the challenge from +how to model language to the imitation of complex reasoning abilities like +human beings. In this work, we investigate the application domain of finance +that involves real-world, complex numerical reasoning. We propose a new +large-scale dataset, ConvFinQA, aiming to study the chain of numerical +reasoning in conversational question answering. Our dataset poses great +challenge in modeling long-range, complex numerical reasoning paths in +real-world conversations. We conduct comprehensive experiments and analyses +with both the neural symbolic methods and the prompting-based methods, to +provide insights into the reasoning mechanisms of these two divisions. We +believe our new dataset should serve as a valuable resource to push forward the +exploration of real-world, complex reasoning tasks as the next research focus. +Our dataset and code is publicly available at +https://github.com/czyssrs/ConvFinQA. +" +Can Language Models Be Specific? How?,Jie Huang,http://arxiv.org/pdf/2210.05159v2.pdf,2022-10-11,"['cs.cl', 'cs.ai']",2210.05159v2.pdf," ""He is a person"", ""Paris is located on the earth"". Both statements are +correct but meaningless - due to lack of specificity. In this paper, we propose +to measure how specific the language of pre-trained language models (PLMs) is. +To achieve this, we introduce a novel approach to build a benchmark for +specificity testing by forming masked token prediction tasks with prompts. 
For +instance, given ""Toronto is located in [MASK]."", we want to test whether a more +specific answer will be better filled in by PLMs, e.g., Ontario instead of +Canada. From our evaluations, we show that existing PLMs have only a slight +preference for more specific answers. We identify underlying factors affecting +the specificity and design two prompt-based methods to improve the specificity. +Results show that the specificity of the models can be improved by the proposed +methods without additional training. We hope this work can bring to awareness +the notion of specificity of language models and encourage the research +community to further explore this important but understudied problem. +" +Multilingual Relation Classification via Efficient and Effective Prompting,Yuxuan Chen,http://arxiv.org/pdf/2210.13838v2.pdf,2022-10-25,"['cs.cl', 'cs.lg']",2210.13838v2.pdf," Prompting pre-trained language models has achieved impressive performance on +various NLP tasks, especially in low data regimes. Despite the success of +prompting in monolingual settings, applying prompt-based methods in +multilingual scenarios has been limited to a narrow set of tasks, due to the +high cost of handcrafting multilingual prompts. In this paper, we present the +first work on prompt-based multilingual relation classification (RC), by +introducing an efficient and effective method that constructs prompts from +relation triples and involves only minimal translation for the class labels. We +evaluate its performance in fully supervised, few-shot and zero-shot scenarios, +and analyze its effectiveness across 14 languages, prompt variants, and +English-task training in cross-lingual settings. We find that in both fully +supervised and few-shot scenarios, our prompt method beats competitive +baselines: fine-tuning XLM-R_EM and null prompts. It also outperforms the +random baseline by a large margin in zero-shot experiments. Our method requires +little in-language knowledge and can be used as a strong baseline for similar +multilingual classification tasks. +" +Steps towards prompt-based creation of virtual worlds,Jasmine Roberts,http://arxiv.org/pdf/2211.05875v1.pdf,2022-11-10,"['cs.hc', 'cs.ai', 'cs.lg', 'cs.mm']",2211.05875v1.pdf," Large language models trained for code generation can be applied to speaking +virtual worlds into existence (creating virtual worlds). In this work we show +that prompt-based methods can both accelerate in-VR level editing, as well as +can become part of gameplay rather than just part of game development. As an +example, we present Codex VR Pong which shows non-deterministic game mechanics +using generative processes to not only create static content but also +non-trivial interactions between 3D objects. This demonstration naturally leads +to an integral discussion on how one would evaluate and benchmark experiences +created by generative models - as there are no qualitative or quantitative +metrics that apply in these scenarios. We conclude by discussing impending +challenges of AI-assisted co-creation in VR. +" +SPE: Symmetrical Prompt Enhancement for Fact Probing,Yiyuan Li,http://arxiv.org/pdf/2211.07078v1.pdf,2022-11-14,"['cs.cl', 'cs.ai', 'cs.lg']",2211.07078v1.pdf," Pretrained language models (PLMs) have been shown to accumulate factual +knowledge during pretrainingng (Petroni et al., 2019). Recent works probe PLMs +for the extent of this knowledge through prompts either in discrete or +continuous forms. 
However, these methods do not consider symmetry of the task: +object prediction and subject prediction. In this work, we propose Symmetrical +Prompt Enhancement (SPE), a continuous prompt-based method for factual probing +in PLMs that leverages the symmetry of the task by constructing symmetrical +prompts for subject and object prediction. Our results on a popular factual +probing dataset, LAMA, show significant improvement of SPE over previous +probing methods. +" +Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual Conditional Generation with Interaction,Jonathan Pilault,http://arxiv.org/pdf/2301.10309v1.pdf,2023-01-24,"['cs.lg', 'cs.ai', 'cs.cl']",2301.10309v1.pdf," Crosslingual conditional generation (e.g., machine translation) has long +enjoyed the benefits of scaling. Nonetheless, there are still issues that scale +alone may not overcome. A source query in one language, for instance, may yield +several translation options in another language without any extra context. Only +one translation could be acceptable however, depending on the translator's +preferences and goals. Choosing the incorrect option might significantly affect +translation usefulness and quality. We propose a novel method interactive-chain +prompting -- a series of question, answering and generation intermediate steps +between a Translator model and a User model -- that reduces translations into a +list of subproblems addressing ambiguities and then resolving such subproblems +before producing the final text to be translated. To check ambiguity resolution +capabilities and evaluate translation quality, we create a dataset exhibiting +different linguistic phenomena which leads to ambiguities at inference for four +languages. To encourage further exploration in this direction, we release all +datasets. We note that interactive-chain prompting, using eight interactions as +exemplars, consistently surpasses prompt-based methods with direct access to +background information to resolve ambiguities. +" +Evaluating the Robustness of Discrete Prompts,Yoichi Ishibashi,http://arxiv.org/pdf/2302.05619v1.pdf,2023-02-11,"['cs.cl', 'cs.ai']",2302.05619v1.pdf," Discrete prompts have been used for fine-tuning Pre-trained Language Models +for diverse NLP tasks. In particular, automatic methods that generate discrete +prompts from a small set of training instances have reported superior +performance. However, a closer look at the learnt prompts reveals that they +contain noisy and counter-intuitive lexical constructs that would not be +encountered in manually-written prompts. This raises an important yet +understudied question regarding the robustness of automatically learnt discrete +prompts when used in downstream tasks. To address this question, we conduct a +systematic study of the robustness of discrete prompts by applying carefully +designed perturbations into an application using AutoPrompt and then measure +their performance in two Natural Language Inference (NLI) datasets. Our +experimental results show that although the discrete prompt-based method +remains relatively robust against perturbations to NLI inputs, they are highly +sensitive to other types of perturbations such as shuffling and deletion of +prompt tokens. Moreover, they generalize poorly across different NLI datasets. +We hope our findings will inspire future work on robust discrete prompt +learning. 
+" +Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking,Derek Chen,http://arxiv.org/pdf/2302.05932v1.pdf,2023-02-12,['cs.cl'],2302.05932v1.pdf," Prompt-based methods with large pre-trained language models (PLMs) have shown +impressive unaided performance across many NLP tasks. These models improve even +further with the addition of a few labeled in-context exemplars to guide output +generation. However, for more complex tasks such as dialogue state tracking +(DST), designing prompts that reliably convey the desired intent is nontrivial, +leading to unstable results. Furthermore, building in-context exemplars for +dialogue tasks is difficult because conversational contexts are long while +model input lengths are relatively short. To overcome these issues we first +adapt a meta-learning scheme to the dialogue domain which stabilizes the +ability of the model to perform well under various prompts. We additionally +design a novel training method to improve upon vanilla retrieval mechanisms to +find ideal in-context examples. Finally, we introduce a saliency model to limit +dialogue text length, allowing us to include more exemplars per query. In +effect, we are able to achieve highly competitive results for few-shot DST on +MultiWOZ. +" +Zero-Shot Information Extraction via Chatting with ChatGPT,Xiang Wei,http://arxiv.org/pdf/2302.10205v1.pdf,2023-02-20,['cs.cl'],2302.10205v1.pdf," Zero-shot information extraction (IE) aims to build IE systems from the +unannotated text. It is challenging due to involving little human intervention. +Challenging but worthwhile, zero-shot IE reduces the time and effort that data +labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3, +ChatGPT) show promising performance on zero-shot settings, thus inspiring us to +explore prompt-based methods. In this work, we ask whether strong IE models can +be constructed by directly prompting LLMs. Specifically, we transform the +zero-shot IE task into a multi-turn question-answering problem with a two-stage +framework (ChatIE). With the power of ChatGPT, we extensively evaluate our +framework on three IE tasks: entity-relation triple extract, named entity +recognition, and event extraction. Empirical results on six datasets across two +languages show that ChatIE achieves impressive performance and even surpasses +some full-shot models on several datasets (e.g., NYT11-HRL). We believe that +our work could shed light on building IE models with limited resources. +" +Divide and Prompt: Chain of Thought Prompting for Text-to-SQL,Xiping Liu,http://arxiv.org/pdf/2304.11556v1.pdf,2023-04-23,"['cs.cl', 'cs.ai']",2304.11556v1.pdf," Chain-of-thought (CoT) prompting combined with large language models (LLMs) +have achieved encouraging results on complex reasoning tasks. Text-to-SQL is a +critical semantic parsing task that converts natural language questions into +SQL statements, involving a complex reasoning process. However, there is little +work about using CoT prompting to activate LLM's reasoning capabilities on +Text-to-SQL tasks. In this work, we propose a new paradigm for prompting +Text-to-SQL tasks, called Divide-and-Prompt, which first divides the task into +subtasks, and then approach each subtask through CoT. We present 3 +prompting-based methods to enhance the Text-to-SQL ability of LLMs. Experiments +show that these prompts guide LLMs to generate Text-to-SQL with higher +execution accuracy. 
+" +Few-shot Event Detection: An Empirical Study and a Unified View,Yubo Ma,http://arxiv.org/pdf/2305.01901v2.pdf,2023-05-03,"['cs.cl', 'cs.ai']",2305.01901v2.pdf," Few-shot event detection (ED) has been widely studied, while this brings +noticeable discrepancies, e.g., various motivations, tasks, and experimental +settings, that hinder the understanding of models for future progress.This +paper presents a thorough empirical study, a unified view of ED models, and a +better unified baseline. For fair evaluation, we compare 12 representative +methods on three datasets, which are roughly grouped into prompt-based and +prototype-based models for detailed analysis. Experiments consistently +demonstrate that prompt-based methods, including ChatGPT, still significantly +trail prototype-based methods in terms of overall performance. To investigate +their superior performance, we break down their design elements along several +dimensions and build a unified framework on prototype-based methods. Under such +unified view, each prototype-method can be viewed a combination of different +modules from these design elements. We further combine all advantageous modules +and propose a simple yet effective baseline, which outperforms existing methods +by a large margin (e.g., 2.7% F1 gains under low-resource setting). +" +PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions,Anthony Chen,http://arxiv.org/pdf/2305.14908v1.pdf,2023-05-24,['cs.cl'],2305.14908v1.pdf," The remarkable capabilities of large language models have been accompanied by +a persistent drawback: the generation of false and unsubstantiated claims +commonly known as ""hallucinations"". To combat this issue, recent research has +introduced approaches that involve editing and attributing the outputs of +language models, particularly through prompt-based editing. However, the +inference cost and speed of using large language models for editing currently +bottleneck prompt-based methods. These bottlenecks motivate the training of +compact editors, which is challenging due to the scarcity of training data for +this purpose. To overcome these challenges, we exploit the power of large +language models to introduce corruptions (i.e., noise) into text and +subsequently fine-tune compact editors to denoise the corruptions by +incorporating relevant evidence. Our methodology is entirely unsupervised and +provides us with faux hallucinations for training in any domain. Our Petite +Unsupervised Research and Revision model, PURR, not only improves attribution +over existing editing methods based on fine-tuning and prompting, but also +achieves faster execution times by orders of magnitude. +" +Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment analysis,Zikai Zhou,http://arxiv.org/pdf/2306.01312v2.pdf,2023-06-02,['cs.cl'],2306.01312v2.pdf," Multimodal Sentiment Analysis (MSA) has been a popular topic in natural +language processing nowadays, at both sentence and aspect level. However, the +existing approaches almost require large-size labeled datasets, which bring +about large consumption of time and resources. Therefore, it is practical to +explore the method for few-shot sentiment analysis in cross-modalities. +Previous works generally execute on textual modality, using the prompt-based +methods, mainly two types: hand-crafted prompts and learnable prompts. The +existing approach in few-shot multi-modality sentiment analysis task has +utilized both methods, separately. 
We further design a hybrid pattern that can +combine one or more fixed hand-crafted prompts and learnable prompts and +utilize the attention mechanisms to optimize the prompt encoder. The +experiments on both sentence-level and aspect-level datasets prove that we get +a significant outperformance. +" +Scaling Sentence Embeddings with Large Language Models,Ting Jiang,http://arxiv.org/pdf/2307.16645v1.pdf,2023-07-31,['cs.cl'],2307.16645v1.pdf," Large language models (LLMs) have recently garnered significant interest. +With in-context learning, LLMs achieve impressive results in various natural +language tasks. However, the application of LLMs to sentence embeddings remains +an area of ongoing research. In this work, we propose an in-context +learning-based method aimed at improving sentence embeddings performance. Our +approach involves adapting the previous prompt-based representation method for +autoregressive models, constructing a demonstration set that enables LLMs to +perform in-context learning, and scaling up the LLMs to different model sizes. +Through extensive experiments, in-context learning enables LLMs to generate +high-quality sentence embeddings without any fine-tuning. It helps LLMs achieve +performance comparable to current contrastive learning methods. By scaling +model size, we find scaling to more than tens of billion parameters harms the +performance on semantic textual similarity (STS) tasks. However, the largest +model outperforms other counterparts and achieves the new state-of-the-art +result on transfer tasks. We also fine-tune LLMs with current contrastive +learning approach, and the 2.7B OPT model, incorporating our prompt-based +method, surpasses the performance of 4.8B ST5, achieving the new +state-of-the-art results on STS tasks. Our code is available at +https://github.com/kongds/scaling_sentemb. +" +Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation,Tianyi Liu,http://arxiv.org/pdf/2112.05587v2.pdf,2021-12-10,"['cs.cv', 'cs.cl', 'cs.lg']",2112.05587v2.pdf," Most existing vision-language pre-training methods focus on understanding +tasks and use BERT-like objectives (masked language modeling and image-text +matching) during pretraining. Although they perform well in many understanding +downstream tasks, e.g., visual question answering, image-text retrieval and +visual entailment, they do not possess the ability to generate. To tackle this +problem, we propose Unified multimodal pre-training for both Vision-Language +understanding and generation (UniVL). The proposed UniVL is capable of handling +both understanding tasks and generative tasks. We augment existing pretraining +paradigms that only use random masks with causal masks, i.e., triangular masks +that mask out future tokens, such that the pre-trained models can have +autoregressive generation abilities by design. We formulate several previous +understanding tasks as a text generation task and propose to use prompt-based +method for fine-tuning on different downstream tasks. Our experiments show that +there is a trade-off between understanding tasks and generation tasks while +using the same model, and a feasible way to improve both tasks is to use more +data. Our UniVL framework attains comparable performance to recent +vision-language pre-training methods on both understanding tasks and generation +tasks. Moreover, we demostrate that prompt-based finetuning is more +data-efficient - it outperforms discriminative methods in few-shot scenarios. 
+" +Learning to Transfer Prompts for Text Generation,Junyi Li,http://arxiv.org/pdf/2205.01543v2.pdf,2022-05-03,['cs.cl'],2205.01543v2.pdf," Pretrained language models (PLMs) have made remarkable progress in text +generation tasks via fine-tuning. While, it is challenging to fine-tune PLMs in +a data-scarce situation. Therefore, it is non-trivial to develop a general and +lightweight model that can adapt to various text generation tasks based on +PLMs. To fulfill this purpose, the recent prompt-based learning offers a +potential solution. In this paper, we improve this technique and propose a +novel prompt-based method (PTG) for text generation in a transferable setting. +First, PTG learns a set of source prompts for various source generation tasks +and then transfers these prompts as target prompts to perform target generation +tasks. To consider both task- and instance-level information, we design an +adaptive attention mechanism to derive the target prompts. For each data +instance, PTG learns a specific target prompt by attending to highly relevant +source prompts. In extensive experiments, PTG yields competitive or better +results than fine-tuning methods. We release our source prompts as an open +resource, where users can add or reuse them to improve new text generation +tasks for future research. Code and data can be available at +https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation. +" +On the Robustness of Dialogue History Representation in Conversational Question Answering: A Comprehensive Study and a New Prompt-based Method,Zorik Gekhman,http://arxiv.org/pdf/2206.14796v2.pdf,2022-06-29,"['cs.cl', 'cs.ai', 'cs.lg']",2206.14796v2.pdf," Most works on modeling the conversation history in Conversational Question +Answering (CQA) report a single main result on a common CQA benchmark. While +existing models show impressive results on CQA leaderboards, it remains unclear +whether they are robust to shifts in setting (sometimes to more realistic +ones), training data size (e.g. from large to small sets) and domain. In this +work, we design and conduct the first large-scale robustness study of history +modeling approaches for CQA. We find that high benchmark scores do not +necessarily translate to strong robustness, and that various methods can +perform extremely differently under different settings. Equipped with the +insights from our study, we design a novel prompt-based history modeling +approach, and demonstrate its strong robustness across various settings. Our +approach is inspired by existing methods that highlight historic answers in the +passage. However, instead of highlighting by modifying the passage token +embeddings, we add textual prompts directly in the passage text. Our approach +is simple, easy-to-plug into practically any model, and highly effective, thus +we recommend it as a starting point for future model developers. We also hope +that our study and insights will raise awareness to the importance of +robustness-focused evaluation, in addition to obtaining high leaderboard +scores, leading to better CQA systems. +" +GPTs at Factify 2022: Prompt Aided Fact-Verification,Pawan Kumar Sahu,http://arxiv.org/pdf/2206.14913v1.pdf,2022-06-29,['cs.cl'],2206.14913v1.pdf," One of the most pressing societal issues is the fight against false news. The +false claims, as difficult as they are to expose, create a lot of damage. To +tackle the problem, fact verification becomes crucial and thus has been a topic +of interest among diverse research communities. 
Using only the textual form of +data we propose our solution to the problem and achieve competitive results +with other approaches. We present our solution based on two approaches - PLM +(pre-trained language model) based method and Prompt based method. The +PLM-based approach uses the traditional supervised learning, where the model is +trained to take 'x' as input and output prediction 'y' as P(y|x). Whereas, +Prompt-based learning reflects the idea to design input to fit the model such +that the original objective may be re-framed as a problem of (masked) language +modeling. We may further stimulate the rich knowledge provided by PLMs to +better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our +experiments showed that the proposed method performs better than just +fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and +a 7th position on the competition leader-board. +" +Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study,Xin Xu,http://arxiv.org/pdf/2210.10678v3.pdf,2022-10-19,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2210.10678v3.pdf," This paper presents an empirical study to build relation extraction systems +in low-resource settings. Based upon recent pre-trained language models, we +comprehensively investigate three schemes to evaluate the performance in +low-resource settings: (i) different types of prompt-based methods with +few-shot labeled data; (ii) diverse balancing methods to address the +long-tailed distribution issue; (iii) data augmentation technologies and +self-training to generate more labeled in-domain data. We create a benchmark +with 8 relation extraction (RE) datasets covering different languages, domains +and contexts and perform extensive comparisons over the proposed schemes with +combinations. Our experiments illustrate: (i) Though prompt-based tuning is +beneficial in low-resource RE, there is still much potential for improvement, +especially in extracting relations from cross-sentence contexts with multiple +relational triples; (ii) Balancing methods are not always helpful for RE with +long-tailed distribution; (iii) Data augmentation complements existing +baselines and can bring much performance gain, while self-training may not +consistently achieve advancement to low-resource RE. Code and datasets are in +https://github.com/zjunlp/LREBench. +" +PromptFusion: Decoupling Stability and Plasticity for Continual Learning,Haoran Chen,http://arxiv.org/pdf/2303.07223v1.pdf,2023-03-13,['cs.cv'],2303.07223v1.pdf," Continual learning refers to the capability of continuously learning from a +stream of data. Current research mainly focuses on relieving catastrophic +forgetting, and most of their success is at the cost of limiting the +performance of newly incoming tasks. Such a trade-off is referred to as the +stabilityplasticity dilemma and is a more general and challenging problem for +continual learning. However, the inherent conflict between these two concepts +makes it seemingly impossible to devise a satisfactory solution to both of them +simultaneously. Therefore, we ask, ""is it possible to divide them into two +problems to conquer independently?"" To this end, we propose a +prompt-tuning-based method termed PromptFusion to enable the decoupling of +stability and plasticity. Specifically, PromptFusion consists of a carefully +designed Stabilizer module that deals with catastrophic forgetting and a +Booster module to learn new knowledge concurrently. 
During training, +PromptFusion first passes an input image to the two modules separately. Then +the resulting logits are further fused with a learnable weight parameter. +Finally, a weight mask is applied to the derived logits to balance between old +and new classes. Extensive experiments show that our method achieves promising +results on popular continual learning datasets for both class-incremental and +domain incremental settings. Especially on Split-Imagenet-R, one of the most +challenging datasets for class-incremental learning, our method exceeds +state-of-the-art prompt-based methods L2P and DualPrompt by more than 10%. +" +Progressive Visual Prompt Learning with Contrastive Feature Re-formation,Chen Xu,http://arxiv.org/pdf/2304.08386v1.pdf,2023-04-17,['cs.cv'],2304.08386v1.pdf," Prompt learning has been designed as an alternative to fine-tuning for +adapting Vision-language (V-L) models to the downstream tasks. Previous works +mainly focus on text prompt while visual prompt works are limited for V-L +models. The existing visual prompt methods endure either mediocre performance +or unstable training process, indicating the difficulty of visual prompt +learning. In this paper, we propose a new Progressive Visual Prompt (ProVP) +structure to strengthen the interactions among prompts of different layers. +More importantly, our ProVP could effectively propagate the image embeddings to +deep layers and behave partially similar to an instance adaptive prompt method. +To alleviate generalization deterioration, we further propose a new contrastive +feature re-formation, which prevents the serious deviation of the prompted +visual feature from the fixed CLIP visual feature distribution. Combining both, +our method (ProVP-Ref) is evaluated on 11 image benchmark datasets and achieves +7/11 state-of-theart results on both few-shot and base-to-novel settings. To +the best of our knowledge, we are the first to demonstrate the superior +performance of visual prompts in V-L models to previous prompt-based methods in +downstream tasks. Meanwhile, it implies that our ProVP-Ref shows the best +capability to adapt and to generalize. +" +SelfEvolve: A Code Evolution Framework via Large Language Models,Shuyang Jiang,http://arxiv.org/pdf/2306.02907v1.pdf,2023-06-05,"['cs.cl', 'cs.se']",2306.02907v1.pdf," Large language models (LLMs) have already revolutionized code generation, +after being pretrained on publicly available code data. However, while various +methods have been proposed to augment LLMs with retrieved knowledge and enhance +the quality of code generation, the performance of these retrieval-based +methods is limited by the strength of the retrievers used. In addition, while +LLMs show great emergent ability, they still struggle to produce the correct +code in one turn. To address these challenges, we propose a novel two-step +pipeline, called \autoknow, that leverages LLMs as both knowledge providers and +self-reflective programmers. Unlike retrieval-based methods, \autoknow~obtains +the knowledge from input prompts and generates intermediate code based on the +generated knowledge. After that, \autoknow~asks LLM to act as an expert +programmer to perform debugging for the generated code. This is achieved by +receiving the error message from the interpreter, without requiring special +test cases for correctness verification. 
We evaluate \autoknow~on three code +generation datasets, including DS-1000 for data science code, HumanEval for +software engineering code, and TransCoder for C++-to-Python translation. Our +empirical experiments show that \autoknow~outperforms strong baselines by a +significant margin on all datasets. We also conduct exhaustive analytical +experiments to validate the effectiveness of the two stages of \autoknow, and +find that both are superior to other prompting-based methods. Further +scalability analysis demonstrates that \autoknow~can be adapted to other more +advanced models, such as GPT-4, and bring consistent efficacy improvement. +" +Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting,Melanie Sclar,http://arxiv.org/pdf/2310.11324v1.pdf,2023-10-17,"['cs.cl', 'cs.ai', 'cs.lg']",2310.11324v1.pdf," As large language models (LLMs) are adopted as a fundamental component of +language technologies, it is crucial to accurately characterize their +performance. Because choices in prompt design can strongly influence model +behavior, this design process is critical in effectively using any modern +pre-trained generative language model. In this work, we focus on LLM +sensitivity to a quintessential class of meaning-preserving design choices: +prompt formatting. We find that several widely used open-source LLMs are +extremely sensitive to subtle changes in prompt formatting in few-shot +settings, with performance differences of up to 76 accuracy points when +evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model +size, the number of few-shot examples, or performing instruction tuning. Our +analysis suggests that work evaluating LLMs with prompting-based methods would +benefit from reporting a range of performance across plausible prompt formats, +instead of the currently-standard practice of reporting performance on a single +format. We also show that format performance only weakly correlates between +models, which puts into question the methodological validity of comparing +models with an arbitrarily chosen, fixed prompt format. To facilitate +systematic analysis we propose FormatSpread, an algorithm that rapidly +evaluates a sampled set of plausible prompt formats for a given task, and +reports the interval of expected performance without accessing model weights. +Furthermore, we present a suite of analyses that characterize the nature of +this sensitivity, including exploring the influence of particular atomic +perturbations and the internal representation of particular formats. +" +GPT-3-driven pedagogical agents for training children's curious question-asking skills,Rania Abdelghani,http://arxiv.org/pdf/2211.14228v6.pdf,2022-11-25,"['cs.cl', 'cs.hc']",2211.14228v6.pdf," In order to train children's ability to ask curiosity-driven questions, +previous research has explored designing specific exercises relying on +providing semantic and linguistic cues to help formulate such questions. But +despite showing pedagogical efficiency, this method is still limited as it +relies on generating the said cues by hand, which can be a very costly process. +In this context, we propose to leverage advances in the natural language +processing field (NLP) and investigate the efficiency of using a large language +model (LLM) for automating the production of the pedagogical content of a +curious question-asking (QA) training. 
We study generating the said content +using the ""prompt-based"" method that consists of explaining the task to the LLM +in natural text. We evaluate the output using human experts annotations and +comparisons with hand-generated content. Results suggested indeed the relevance +and usefulness of this content. We also conduct a field study in primary school +(75 children aged 9-10), where we evaluate children's QA performance when +having this training. We compare 3 types of content : 1) hand-generated content +that proposes ""closed"" cues leading to predefined questions; 2) GPT-3-generated +content that proposes the same type of cues; 3) GPT-3-generated content that +proposes ""open"" cues leading to several possible questions. We see a similar QA +performance between the two ""closed"" trainings (showing the scalability of the +approach using GPT-3), and a better one for participants with the ""open"" +training. These results suggest the efficiency of using LLMs to support +children in generating more curious questions, using a natural language +prompting approach that affords usability by teachers and other users not +specialists of AI techniques. Furthermore, results also show that open-ended +content may be more suitable for training curious question-asking skills. +" +Towards using Few-Shot Prompt Learning for Automating Model Completion,Meriem Ben Chaaben,http://arxiv.org/pdf/2212.03404v1.pdf,2022-12-07,"['cs.se', 'cs.cl']",2212.03404v1.pdf," We propose a simple yet a novel approach to improve completion in domain +modeling activities. Our approach exploits the power of large language models +by using few-shot prompt learning without the need to train or fine-tune those +models with large datasets that are scarce in this field. We implemented our +approach and tested it on the completion of static and dynamic domain diagrams. +Our initial evaluation shows that such an approach is effective and can be +integrated in different ways during the modeling activities. +" +Are Prompt-based Models Clueless?,Pride Kavumba,http://arxiv.org/pdf/2205.09295v2.pdf,2022-05-19,['cs.cl'],2205.09295v2.pdf," Finetuning large pre-trained language models with a task-specific head has +advanced the state-of-the-art on many natural language understanding +benchmarks. However, models with a task-specific head require a lot of training +data, making them susceptible to learning and exploiting dataset-specific +superficial cues that do not generalize to other datasets. Prompting has +reduced the data requirement by reusing the language model head and formatting +the task input to match the pre-training objective. Therefore, it is expected +that few-shot prompt-based models do not exploit superficial cues. This paper +presents an empirical examination of whether few-shot prompt-based models also +exploit superficial cues. Analyzing few-shot prompt-based models on MNLI, SNLI, +HANS, and COPA has revealed that prompt-based models also exploit superficial +cues. While the models perform well on instances with superficial cues, they +often underperform or only marginally outperform random accuracy on instances +without superficial cues. 
+" +Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models,Ratish Puduppully,http://arxiv.org/pdf/2305.13085v2.pdf,2023-05-22,['cs.cl'],2305.13085v2.pdf," This study investigates machine translation between related languages i.e., +languages within the same family that share linguistic characteristics such as +word order and lexical similarity. Machine translation through few-shot +prompting leverages a small set of translation pair examples to generate +translations for test sentences. This procedure requires the model to learn how +to generate translations while simultaneously ensuring that token ordering is +maintained to produce a fluent and accurate translation. We propose that for +related languages, the task of machine translation can be simplified by +leveraging the monotonic alignment characteristic of such languages. We +introduce DecoMT, a novel approach of few-shot prompting that decomposes the +translation process into a sequence of word chunk translations. Through +automatic and human evaluation conducted on multiple related language pairs +across various language families, we demonstrate that our proposed approach of +decomposed prompting surpasses multiple established few-shot baseline +approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM +model with an average improvement of 8 chrF++ scores across the examined +languages. +" +Multilingual Large Language Models Are Not (Yet) Code-Switchers,Ruochen Zhang,http://arxiv.org/pdf/2305.14235v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14235v2.pdf," Multilingual Large Language Models (LLMs) have recently shown great +capabilities in a wide range of tasks, exhibiting state-of-the-art performance +through zero-shot or few-shot prompting methods. While there have been +extensive studies on their abilities in monolingual tasks, the investigation of +their potential in the context of code-switching (CSW), the practice of +alternating languages within an utterance, remains relatively uncharted. In +this paper, we provide a comprehensive empirical analysis of various +multilingual LLMs, benchmarking their performance across four tasks: sentiment +analysis, machine translation, summarization and word-level language +identification. Our results indicate that despite multilingual LLMs exhibiting +promising outcomes in certain tasks using zero or few-shot prompting, they +still underperform in comparison to fine-tuned models of much smaller scales. +We argue that current ""multilingualism"" in LLMs does not inherently imply +proficiency with code-switching texts, calling for future research to bridge +this discrepancy. +" +"Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango",Aman Madaan,http://arxiv.org/pdf/2209.07686v2.pdf,2022-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2209.07686v2.pdf," The past decade has witnessed dramatic gains in natural language processing +and an unprecedented scaling of large language models. These developments have +been accelerated by the advent of few-shot techniques such as chain of thought +(CoT) prompting. Specifically, CoT pushes the performance of large language +models in a few-shot setup by augmenting the prompts with intermediate steps. +Despite impressive results across various tasks, the reasons behind their +success have not been explored. This work uses counterfactual prompting to +develop a deeper understanding of CoT-based few-shot prompting mechanisms in +large language models. 
We first systematically identify and define the key +components of a prompt: symbols, patterns, and text. Then, we devise and +conduct an exhaustive set of experiments across four different tasks, by +querying the model with counterfactual prompts where only one of these +components is altered. Our experiments across three models (PaLM, GPT-3, and +CODEX) reveal several surprising findings and brings into question the +conventional wisdom around few-shot prompting. First, the presence of factual +patterns in a prompt is practically immaterial to the success of CoT. Second, +our results conclude that the primary role of intermediate steps may not be to +facilitate learning how to solve a task. The intermediate steps are rather a +beacon for the model to realize what symbols to replicate in the output to form +a factual answer. Further, text imbues patterns with commonsense knowledge and +meaning. Our empirical and qualitative analysis reveals that a symbiotic +relationship between text and patterns explains the success of few-shot +prompting: text helps extract commonsense from the question to help patterns, +and patterns enforce task understanding and direct text generation. +" +Understanding How Model Size Affects Few-shot Instruction Prompting,Ayrton San Joaquin,http://arxiv.org/pdf/2212.01907v1.pdf,2022-12-04,"['cs.cl', 'cs.lg', 'stat.ml']",2212.01907v1.pdf," Large Language Models are affected by the phenomena of memorizing and +forgetting their training data. But how do these vary by model size? We work +towards this question by investigating how the model size affects the model's +ability to discriminate a word's meaning in a given context. We introduce a +dataset called DeltaWords, which evaluates a model's ability to follow +instructions to select a sentence which replaces the target word with its +antonym. We show a weak inverse scaling trend, where task accuracy degrades as +model size increase, under extremely few-shot prompting regimes. We show that +increasing the number of examples tend to disproportionately benefit larger +models than smaller models. +" +Prompted LLMs as Chatbot Modules for Long Open-domain Conversation,Gibbeum Lee,http://arxiv.org/pdf/2305.04533v1.pdf,2023-05-08,"['cs.cl', 'cs.ai', 'cs.lg']",2305.04533v1.pdf," In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for +creating high-quality conversational agents without the need for fine-tuning. +Our method utilizes pre-trained large language models (LLMs) as individual +modules for long-term consistency and flexibility, by using techniques such as +few-shot prompting, chain-of-thought (CoT), and external memory. Our human +evaluation results show that MPC is on par with fine-tuned chatbot models in +open-domain conversations, making it an effective solution for creating +consistent and engaging chatbots. +" +Internet-augmented language models through few-shot prompting for open-domain question answering,Angeliki Lazaridou,http://arxiv.org/pdf/2203.05115v2.pdf,2022-03-10,"['cs.cl', 'cs.lg']",2203.05115v2.pdf," In this work, we aim to capitalize on the unique few-shot capabilities of +large-scale language models (LSLMs) to overcome some of their challenges with +respect to grounding to factual and up-to-date information. Motivated by +semi-parametric language models (LMs), which ground their decisions in external +retrieved evidence, we use few-shot prompting to learn to condition LMs on +information returned from the web using Google Search, a broad and constantly +updated knowledge source. 
Our approach does not involve fine-tuning or learning +additional parameters, thus making it applicable to any LM, offering therefore +a strong baseline. Indeed, we find that LMs conditioned on the web surpass +performance of closed-book models of similar, or even larger, model sizes in +open-domain question answering. Finally, we find that increasing the +inference-time compute of models, achieved via using multiple retrieved +evidences to generate multiple answers followed by a reranking stage that uses +scores generated by the same LMs, leads to better performance and alleviates +lower performance of smaller few-shot LMs. All in all, our findings suggest +that it might be beneficial to slow down the race towards the biggest model and +instead shift attention towards finding more effective ways to use models, +including but not limited to, better prompting or increasing inference-time +compute. +" +Decomposed Prompting: A Modular Approach for Solving Complex Tasks,Tushar Khot,http://arxiv.org/pdf/2210.02406v2.pdf,2022-10-05,['cs.cl'],2210.02406v2.pdf," Few-shot prompting is a surprisingly powerful way to use Large Language +Models (LLMs) to solve various tasks. However, this approach struggles as the +task complexity increases or when the individual reasoning steps of the task +themselves are hard to learn, especially when embedded in more complex tasks. +To address this, we propose Decomposed Prompting, a new approach to solve +complex tasks by decomposing them (via prompting) into simpler sub-tasks that +can be delegated to a library of prompting-based LLMs dedicated to these +sub-tasks. This modular structure allows each prompt to be optimized for its +specific sub-task, further decomposed if necessary, and even easily replaced +with more effective prompts, trained models, or symbolic functions if desired. +We show that the flexibility and modularity of Decomposed Prompting allows it +to outperform prior work on few-shot prompting using GPT3. On symbolic +reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into +even simpler solvable sub-tasks. When the complexity comes from the input +length, we can recursively decompose the task into the same task but with +smaller inputs. We also evaluate our approach on textual multi-step reasoning +tasks: on long-context multi-hop QA task, we can more effectively teach the +sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, +we can incorporate a symbolic information retrieval within our decomposition +framework, leading to improved performance on both tasks. Datasets, Code and +Prompts available at https://github.com/allenai/DecomP. +" +Language Model Crossover: Variation through Few-Shot Prompting,Elliot Meyerson,http://arxiv.org/pdf/2302.12170v2.pdf,2023-02-23,['cs.ne'],2302.12170v2.pdf," This paper pursues the insight that language models naturally enable an +intelligent variation operator similar in spirit to evolutionary crossover. In +particular, language models of sufficient scale demonstrate in-context +learning, i.e. they can learn from associations between a small number of input +patterns to generate outputs incorporating such associations (also called +few-shot prompting). This ability can be leveraged to form a simple but +powerful variation operator, i.e. to prompt a language model with a few +text-based genotypes (such as code, plain-text sentences, or equations), and to +parse its corresponding output as those genotypes' offspring. 
The promise of +such language model crossover (which is simple to implement and can leverage +many different open-source language models) is that it enables a simple +mechanism to evolve semantically-rich text representations (with few +domain-specific tweaks), and naturally benefits from current progress in +language models. Experiments in this paper highlight the versatility of +language-model crossover, through evolving binary bit-strings, sentences, +equations, text-to-image prompts, and Python code. The conclusion is that +language model crossover is a promising method for evolving genomes +representable as text. +" +Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes,Cheng-Yu Hsieh,http://arxiv.org/pdf/2305.02301v2.pdf,2023-05-03,"['cs.cl', 'cs.ai', 'cs.lg']",2305.02301v2.pdf," Deploying large language models (LLMs) is challenging because they are memory +inefficient and compute-intensive for practical applications. In reaction, +researchers train smaller task-specific models by either finetuning with human +labels or distilling using LLM-generated labels. However, finetuning and +distillation require large amounts of training data to achieve comparable +performance to LLMs. We introduce Distilling step-by-step, a new mechanism that +(a) trains smaller models that outperform LLMs, and (b) achieves so by +leveraging less training data needed by finetuning or distillation. Our method +extracts LLM rationales as additional supervision for training small models +within a multi-task framework. We present three findings across 4 NLP +benchmarks: First, compared to both finetuning and distillation, our mechanism +achieves better performance with much fewer labeled/unlabeled training +examples. Second, compared to few-shot prompted LLMs, we achieve better +performance using substantially smaller model sizes. Third, we reduce both the +model size and the amount of data required to outperform LLMs; our finetuned +770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80% +of available data on a benchmark, whereas standard finetuning the same T5 model +struggles to match even by using 100% of the dataset. We release the code at: +https://github.com/google-research/distilling-step-by-step . +" +Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning,Zhanming Jie,http://arxiv.org/pdf/2305.18170v2.pdf,2023-05-29,['cs.cl'],2305.18170v2.pdf," Chain-of-thought (CoT) prompting with large language models has proven +effective in numerous natural language processing tasks, but designing prompts +that generalize well to diverse problem types can be challenging, especially in +the context of math word problem (MWP) solving. Additionally, it is common to +have a large amount of training data that have a better diversity coverage but +CoT annotations are not available, which limits the use of supervised learning +techniques. To address these issues, we investigate two approaches to leverage +the training data in a few-shot prompting scenario: dynamic program prompting +and program distillation. Our approach is largely inspired by Gao et al., +(2022), where they proposed to replace the CoT with the programs as the +intermediate reasoning step. Such a prompting strategy allows us to accurately +verify the answer correctness through program execution in MWP solving. 
Our +dynamic program prompting involves annotating the training data by sampling +correct programs from a large language model, while program distillation +involves adapting a smaller model to the program-annotated training data. Our +experiments on three standard MWP datasets demonstrate the effectiveness of +these approaches, yielding significant improvements over previous baselines for +prompting and fine-tuning. Our results suggest that leveraging a large amount +of training data can improve the generalization ability of prompts and boost +the performance of fine-tuned small models in MWP solving. +" +Zero- and Few-Shot Prompting with LLMs: A Comparative Study with Fine-tuned Models for Bangla Sentiment Analysis,Md. Arid Hasan,http://arxiv.org/pdf/2308.10783v1.pdf,2023-08-21,"['cs.cl', 'cs.lg', '68t50', 'i.2.7']",2308.10783v1.pdf," The rapid expansion of the digital world has propelled sentiment analysis +into a critical tool across diverse sectors such as marketing, politics, +customer service, and healthcare. While there have been significant +advancements in sentiment analysis for widely spoken languages, low-resource +languages, such as Bangla, remain largely under-researched due to resource +constraints. Furthermore, the recent unprecedented performance of Large +Language Models (LLMs) in various applications highlights the need to evaluate +them in the context of low-resource languages. In this study, we present a +sizeable manually annotated dataset encompassing 33,605 Bangla news tweets and +Facebook comments. We also investigate zero- and few-shot in-context learning +with several language models, including Flan-T5, GPT-4, and Bloomz, offering a +comparative analysis against fine-tuned models. Our findings suggest that +monolingual transformer-based models consistently outperform other models, even +in zero and few-shot scenarios. To foster continued exploration, we intend to +make this dataset and our research tools publicly available to the broader +research community. In the spirit of further research, we plan to make this +dataset and our experimental resources publicly accessible to the wider +research community. +" +FOLIO: Natural Language Reasoning with First-Order Logic,Simeng Han,http://arxiv.org/pdf/2209.00840v1.pdf,2022-09-02,['cs.cl'],2209.00840v1.pdf," We present FOLIO, a human-annotated, open-domain, and logically complex and +diverse dataset for reasoning in natural language (NL), equipped with first +order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique +conclusions), each paired with one of 487 sets of premises which serve as rules +to be used to deductively reason for the validity of each conclusion. The +logical correctness of premises and conclusions is ensured by their parallel +FOL annotations, which are automatically verified by our FOL inference engine. +In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically +constitute a new NL-FOL translation dataset using FOL as the logical form. Our +experiments on FOLIO systematically evaluate the FOL reasoning ability of +supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and +few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For +NL-FOL translation, we experiment with GPT-3 and Codex. 
Our results show that +one of the most capable Large Language Model (LLM) publicly available, GPT-3 +davinci, achieves only slightly better than random results with few-shot +prompting on a subset of FOLIO, and the model is especially bad at predicting +the correct truth values for False and Unknown conclusions. Our dataset and +code are available at https://github.com/Yale-LILY/FOLIO. +" +Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them,Mirac Suzgun,http://arxiv.org/pdf/2210.09261v1.pdf,2022-10-17,"['cs.cl', 'cs.ai']",2210.09261v1.pdf," BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that +focuses on tasks believed to be beyond the capabilities of current language +models. Language models have already made good progress on this benchmark, with +the best model in the BIG-Bench paper outperforming average reported +human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But +on what tasks do language models fall short of average human-rater performance, +and are those tasks actually unsolvable by current language models? + In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we +call BIG-Bench Hard (BBH). These are the task for which prior language model +evaluations did not outperform the average human-rater. We find that applying +chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the +average human-rater performance on 10 of the 23 tasks, and Codex +(code-davinci-002) to surpass the average human-rater performance on 17 of the +23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot +prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., +2022), substantially underestimates the best performance and capabilities of +language models, which is better captured via CoT prompting. As further +analysis, we explore the interaction between CoT and model scale on BBH, +finding that CoT enables emergent task performance on several BBH tasks with +otherwise flat scaling curves. +" +Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data,Xuhai Xu,http://arxiv.org/pdf/2307.14385v3.pdf,2023-07-26,"['cs.cl', '68u35', 'h.5.2; i.2.m']",2307.14385v3.pdf," Advances in large language models (LLMs) have empowered a variety of +applications. However, there is still a significant gap in research when it +comes to understanding and enhancing the capabilities of LLMs in the field of +mental health. In this work, we present the first comprehensive evaluation of +multiple LLMs, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4, on +various mental health prediction tasks via online text data. We conduct a broad +range of experiments, covering zero-shot prompting, few-shot prompting, and +instruction fine-tuning. The results indicate a promising yet limited +performance of LLMs with zero-shot and few-shot prompt designs for the mental +health tasks. More importantly, our experiments show that instruction +finetuning can significantly boost the performance of LLMs for all tasks +simultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5, +outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% +on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. +They further perform on par with the state-of-the-art task-specific language +model. 
We also conduct an exploratory case study on LLMs' capability on the +mental health reasoning tasks, illustrating the promising capability of certain +models such as GPT-4. We summarize our findings into a set of action guidelines +for potential methods to enhance LLMs' capability for mental health tasks. +Meanwhile, we also emphasize the important limitations before achieving +deployability in real-world mental health settings, such as known racial and +gender bias. We highlight the important ethical risks accompanying this line of +research. +" +Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm,Laria Reynolds,http://arxiv.org/pdf/2102.07350v1.pdf,2021-02-15,"['cs.cl', 'cs.ai']",2102.07350v1.pdf," Prevailing methods for mapping large generative language models to supervised +tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as +a case study, we show that 0-shot prompts can significantly outperform few-shot +prompts. We suggest that the function of few-shot examples in these cases is +better described as locating an already learned task rather than meta-learning. +This analysis motivates rethinking the role of prompts in controlling and +evaluating powerful language models. In this work, we discuss methods of prompt +programming, emphasizing the usefulness of considering prompts through the lens +of natural language. We explore techniques for exploiting the capacity of +narratives and cultural anchors to encode nuanced intentions and techniques for +encouraging deconstruction of a problem into components before producing a +verdict. Informed by this more encompassing theory of prompt programming, we +also introduce the idea of a metaprompt that seeds the model to generate its +own natural language prompts for a range of tasks. Finally, we discuss how +these more general methods of interacting with language models can be +incorporated into existing and future benchmarks and practical applications. +" +Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity,Yao Lu,http://arxiv.org/pdf/2104.08786v2.pdf,2021-04-18,"['cs.cl', 'cs.ai']",2104.08786v2.pdf," When primed with only a handful of training samples, very large, pretrained +language models such as GPT-3 have shown competitive results when compared to +fully-supervised, fine-tuned, large, pretrained language models. We demonstrate +that the order in which the samples are provided can make the difference +between near state-of-the-art and random guess performance: essentially some +permutations are ""fantastic"" and some not. We analyse this phenomenon in +detail, establishing that: it is present across model sizes (even for the +largest current models), it is not related to a specific subset of samples, and +that a given good permutation for one model is not transferable to another. +While one could use a development set to determine which permutations are +performant, this would deviate from the true few-shot setting as it requires +additional annotated data. Instead, we use the generative nature of language +models to construct an artificial development set and based on entropy +statistics of the candidate permutations on this set, we identify performant +prompts. Our method yields a 13% relative improvement for GPT-family models +across eleven different established text classification tasks. 
+" +Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning,Prasetya Ajie Utama,http://arxiv.org/pdf/2109.04144v1.pdf,2021-09-09,"['cs.cl', 'cs.ai']",2109.04144v1.pdf," Recent prompt-based approaches allow pretrained language models to achieve +strong performances on few-shot finetuning by reformulating downstream tasks as +a language modeling problem. In this work, we demonstrate that, despite its +advantages on low data regimes, finetuned prompt-based models for sentence pair +classification tasks still suffer from a common pitfall of adopting inference +heuristics based on lexical overlap, e.g., models incorrectly assuming a +sentence pair is of the same meaning because they consist of the same set of +words. Interestingly, we find that this particular inference heuristic is +significantly less present in the zero-shot evaluation of the prompt-based +model, indicating how finetuning can be destructive to useful knowledge learned +during the pretraining. We then show that adding a regularization that +preserves pretraining weights is effective in mitigating this destructive +tendency of few-shot finetuning. Our evaluation on three datasets demonstrates +promising improvements on the three corresponding challenge datasets used to +diagnose the inference heuristics. +" +Towards Zero-Label Language Learning,Zirui Wang,http://arxiv.org/pdf/2109.09193v1.pdf,2021-09-19,"['cs.cl', 'cs.lg']",2109.09193v1.pdf," This paper explores zero-label learning in Natural Language Processing (NLP), +whereby no human-annotated data is used anywhere during training and models are +trained purely on synthetic data. At the core of our framework is a novel +approach for better leveraging the powerful pretrained language models. +Specifically, inspired by the recent success of few-shot inference on GPT-3, we +present a training data creation procedure named Unsupervised Data Generation +(UDG), which leverages few-shot prompts to synthesize high-quality training +data without real human annotations. Our method enables zero-label learning as +we train task-specific models solely on the synthetic data, yet we achieve +better or comparable results from strong baseline models trained on +human-labeled data. Furthermore, when mixed with labeled data, our approach +serves as a highly effective data augmentation procedure, achieving new +state-of-the-art results on the SuperGLUE benchmark. +" +P4E: Few-Shot Event Detection as Prompt-Guided Identification and Localization,Sha Li,http://arxiv.org/pdf/2202.07615v3.pdf,2022-02-15,['cs.cl'],2202.07615v3.pdf," We propose P4E, an identify-and-localize event detection framework that +integrates the best of few-shot prompting and structured prediction. Our +framework decomposes event detection into an identification task and a +localization task. For the identification task, which we formulate as +multi-label classification, we leverage cloze-based prompting to align our +objective with the pre-training task of language models, allowing our model to +quickly adapt to new event types. We then employ an event type-agnostic +sequence labeling model to localize the event trigger conditioned on the +identification output. This heterogeneous model design allows P4E to quickly +learn new event types without sacrificing the ability to make structured +predictions. 
Our experiments demonstrate the effectiveness of our proposed +design, and P4E shows superior performance for few-shot event detection on +benchmark datasets FewEvent and MAVEN and comparable performance to SOTA for +fully-supervised event detection on ACE. +" +Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models,Mirac Suzgun,http://arxiv.org/pdf/2205.11503v1.pdf,2022-05-23,['cs.cl'],2205.11503v1.pdf," We propose a method for arbitrary textual style transfer (TST)--the task of +transforming a text into any given style--utilizing general-purpose pre-trained +language models. Our method, Prompt-and-Rerank, is based on a mathematical +formulation of the TST task, decomposing it into three constituent components: +textual similarity, target style strength, and fluency. Specifically, our +method first uses zero-shot or few-shot prompting to obtain a set of candidate +generations in the target style, and then re-ranks these candidates according +to a combination of the three components above. Empirically, our method enables +small pre-trained language models to perform on par with state-of-the-art +large-scale models while consuming two orders of magnitude less compute and +memory. Finally, we conduct a systematic investigation of the effect of model +size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on +style transfer quality across seven diverse textual style transfer datasets. +" +Bootstrapping Multilingual Semantic Parsers using Large Language Models,Abhijeet Awasthi,http://arxiv.org/pdf/2210.07313v2.pdf,2022-10-13,"['cs.cl', 'cs.lg']",2210.07313v2.pdf," Despite cross-lingual generalization demonstrated by pre-trained multilingual +models, the translate-train paradigm of transferring English datasets across +multiple languages remains to be a key mechanism for training task-specific +multilingual models. However, for many low-resource languages, the availability +of a reliable translation service entails significant amounts of costly +human-annotated translation pairs. Further, translation services may continue +to be brittle due to domain mismatch between task-specific input text and +general-purpose text used for training translation models. For multilingual +semantic parsing, we demonstrate the effectiveness and flexibility offered by +large language models (LLMs) for translating English datasets into several +languages via few-shot prompting. Through extensive comparisons on two public +datasets, MTOP and MASSIVE, spanning 50 languages and several domains, we show +that our method of translating data using LLMs outperforms a strong +translate-train baseline on 41 out of 50 languages. We study the key design +choices that enable more effective multilingual data translation via prompted +LLMs. +" +Prompting GPT-3 To Be Reliable,Chenglei Si,http://arxiv.org/pdf/2210.09150v2.pdf,2022-10-17,['cs.cl'],2210.09150v2.pdf," Large language models (LLMs) show impressive abilities via few-shot +prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use +in real-world language applications. However, the crucial problem of how to +improve the reliability of GPT-3 is still under-explored. While reliability is +a broad and vaguely defined term, we decompose reliability into four main +facets that correspond to the existing framework of ML safety and are +well-recognized to be important: generalizability, social biases, calibration, +and factuality. 
Our core contribution is to establish simple and effective +prompts that improve GPT-3's reliability as it: 1) generalizes +out-of-distribution, 2) balances demographic distribution and uses natural +language instructions to reduce social biases, 3) calibrates output +probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. +With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised +models on all these facets. We release all processed datasets, evaluation +scripts, and model predictions. Our systematic empirical study not only sheds +new insights on the reliability of prompting LLMs, but more importantly, our +prompting strategies can help practitioners more reliably use LLMs like GPT-3. +" +Exploring The Landscape of Distributional Robustness for Question Answering Models,Anas Awadalla,http://arxiv.org/pdf/2210.12517v1.pdf,2022-10-22,"['cs.cl', 'cs.lg']",2210.12517v1.pdf," We conduct a large empirical evaluation to investigate the landscape of +distributional robustness in question answering. Our investigation spans over +350 models and 16 question answering datasets, including a diverse set of +architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter +tuning, in-context learning, etc.). We find that, in many cases, model +variations do not affect robustness and in-distribution performance alone +determines out-of-distribution performance. Moreover, our findings indicate +that i) zero-shot and in-context learning methods are more robust to +distribution shifts than fully fine-tuned models; ii) few-shot prompt +fine-tuned models exhibit better robustness than few-shot fine-tuned span +prediction models; iii) parameter-efficient and robustness enhancing training +methods provide no significant robustness improvements. In addition, we +publicly release all evaluations to encourage researchers to further analyze +robustness trends for question answering models. +" +"""Covid vaccine is against Covid but Oxford vaccine is made at Oxford!"" Semantic Interpretation of Proper Noun Compounds",Keshav Kolluru,http://arxiv.org/pdf/2210.13039v1.pdf,2022-10-24,['cs.cl'],2210.13039v1.pdf," Proper noun compounds, e.g., ""Covid vaccine"", convey information in a +succinct manner (a ""Covid vaccine"" is a ""vaccine that immunizes against the +Covid disease""). These are commonly used in short-form domains, such as news +headlines, but are largely ignored in information-seeking applications. To +address this limitation, we release a new manually annotated dataset, ProNCI, +consisting of 22.5K proper noun compounds along with their free-form semantic +interpretations. ProNCI is 60 times larger than prior noun compound datasets +and also includes non-compositional examples, which have not been previously +explored. We experiment with various neural models for automatically generating +the semantic interpretations from proper noun compounds, ranging from few-shot +prompting to supervised learning, with varying degrees of knowledge about the +constituent nouns. We find that adding targeted knowledge, particularly about +the common noun, results in performance gains of upto 2.8%. Finally, we +integrate our model generated interpretations with an existing Open IE system +and observe an 7.5% increase in yield at a precision of 85%. The dataset and +code are available at https://github.com/dair-iitd/pronci. 
+" +Prompting PaLM for Translation: Assessing Strategies and Performance,David Vilar,http://arxiv.org/pdf/2211.09102v3.pdf,2022-11-16,['cs.cl'],2211.09102v3.pdf," Large language models (LLMs) that have been trained on multilingual but not +parallel text exhibit a remarkable ability to translate between languages. We +probe this ability in an in-depth study of the pathways language model (PaLM), +which has demonstrated the strongest machine translation (MT) performance among +similarly-trained LLMs to date. We investigate various strategies for choosing +translation examples for few-shot prompting, concluding that example quality is +the most important factor. Using optimized prompts, we revisit previous +assessments of PaLM's MT capabilities with more recent test sets, modern MT +metrics, and human evaluation, and find that its performance, while impressive, +still lags that of state-of-the-art supervised systems. We conclude by +providing an analysis of PaLM's MT output which reveals some interesting +properties and prospects for future work. +" +PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained Image-Language Models,Minghua Liu,http://arxiv.org/pdf/2212.01558v2.pdf,2022-12-03,"['cs.cv', 'cs.ro']",2212.01558v2.pdf," Generalizable 3D part segmentation is important but challenging in vision and +robotics. Training deep models via conventional supervised methods requires +large-scale 3D datasets with fine-grained part annotations, which are costly to +collect. This paper explores an alternative way for low-shot part segmentation +of 3D point clouds by leveraging a pretrained image-language model, GLIP, which +achieves superior performance on open-vocabulary 2D detection. We transfer the +rich knowledge from 2D to 3D through GLIP-based part detection on point cloud +rendering and a novel 2D-to-3D label lifting algorithm. We also utilize +multi-view 3D priors and few-shot prompt tuning to boost performance +significantly. Extensive evaluation on PartNet and PartNet-Mobility datasets +shows that our method enables excellent zero-shot 3D part segmentation. Our +few-shot version not only outperforms existing few-shot approaches by a large +margin but also achieves highly competitive results compared to the fully +supervised counterpart. Furthermore, we demonstrate that our method can be +directly applied to iPhone-scanned point clouds without significant domain +gaps. +" +Natural Language to Code Generation in Interactive Data Science Notebooks,Pengcheng Yin,http://arxiv.org/pdf/2212.09248v1.pdf,2022-12-19,"['cs.cl', 'cs.se']",2212.09248v1.pdf," Computational notebooks, such as Jupyter notebooks, are interactive computing +environments that are ubiquitous among data scientists to perform data +wrangling and analytic tasks. To measure the performance of AI pair programmers +that automatically synthesize programs for those tasks given natural language +(NL) intents from users, we build ARCADE, a benchmark of 1082 code generation +problems using the pandas data analysis framework in data science notebooks. +ARCADE features multiple rounds of NL-to-code problems from the same notebook. +It requires a model to understand rich multi-modal contexts, such as existing +notebook cells and their execution states as well as previous turns of +interaction. To establish a strong baseline on this challenging task, we +develop PaChiNCo, a 62B code language model (LM) for Python computational +notebooks, which significantly outperforms public code LMs. 
Finally, we explore +few-shot prompting strategies to elicit better code with step-by-step +decomposition and NL explanation, showing the potential to improve the +diversity and explainability of model predictions. +" +LAMBADA: Backward Chaining for Automated Reasoning in Natural Language,Mehran Kazemi,http://arxiv.org/pdf/2212.13894v2.pdf,2022-12-20,"['cs.ai', 'cs.lg']",2212.13894v2.pdf," Remarkable progress has been made on automated reasoning with natural text, +by using Language Models (LMs) and methods such as Chain-of-Thought and +Selection-Inference. These techniques search for proofs in the forward +direction from axioms to the conclusion, which suffers from a combinatorial +explosion of the search space, and thus high failure rates for problems +requiring longer chains of reasoning. The classical automated reasoning +literature has shown that reasoning in the backward direction (i.e. from the +intended conclusion to supporting axioms) is significantly more efficient at +proof-finding. Importing this intuition into the LM setting, we develop a +Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into +four sub-modules. These sub-modules are simply implemented by few-shot prompted +LM inference. We show that LAMBADA achieves sizable accuracy boosts over +state-of-the-art forward reasoning methods on challenging logical reasoning +datasets, particularly when deep and accurate proof chains are required. +" +Can GPT-3 Perform Statutory Reasoning?,Andrew Blair-Stanek,http://arxiv.org/pdf/2302.06100v2.pdf,2023-02-13,"['cs.cl', 'cs.ai']",2302.06100v2.pdf," Statutory reasoning is the task of reasoning with facts and statutes, which +are rules written in natural language by a legislature. It is a basic legal +skill. In this paper we explore the capabilities of the most capable GPT-3 +model, text-davinci-003, on an established statutory-reasoning dataset called +SARA. We consider a variety of approaches, including dynamic few-shot +prompting, chain-of-thought prompting, and zero-shot prompting. While we +achieve results with GPT-3 that are better than the previous best published +results, we also identify several types of clear errors it makes. We +investigate why these errors happen. We discover that GPT-3 has imperfect prior +knowledge of the actual U.S. statutes on which SARA is based. More importantly, +we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen +during training. We find GPT-3 performs poorly at answering straightforward +questions about these simple synthetic statutes. +" +STREET: A Multi-Task Structured Reasoning and Explanation Benchmark,Danilo Ribeiro,http://arxiv.org/pdf/2302.06729v1.pdf,2023-02-13,"['cs.cl', 'cs.ai', 'i.2.7; i.2.6']",2302.06729v1.pdf," We introduce STREET, a unified multi-task and multi-domain natural language +reasoning and explanation benchmark. Unlike most existing question-answering +(QA) datasets, we expect models to not only answer questions, but also produce +step-by-step structured explanations describing how premises in the question +are used to produce intermediate conclusions that can prove the correctness of +a certain answer. We perform extensive evaluation with popular language models +such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models +still lag behind human performance when producing such structured reasoning +steps. 
We believe this work will provide a way for the community to better +train and test systems on multi-step reasoning and explanations in natural +language. +" +ADELT: Transpilation Between Deep Learning Frameworks,Linyuan Gong,http://arxiv.org/pdf/2303.03593v1.pdf,2023-03-07,"['cs.cl', 'cs.lg']",2303.03593v1.pdf," We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source +transpilation between deep learning frameworks. Unlike prior approaches, we +decouple the transpilation of code skeletons and the mapping of API keywords +(an API function name or a parameter name). ADELT transpile code skeletons +using few-shot prompting on big language models. Based on contextual embeddings +extracted by a BERT for code, we train aligned API embeddings in a +domain-adversarial setup, upon which we generate a dictionary for keyword +translation. The model is trained on our unlabeled DL corpus from web crawl +data, without using any hand-crafted rules and parallel data. Our method +outperforms state-of-the-art transpilers on multiple transpilation pairs +including PyTorch-Keras and PyTorch-MXNet by 15.9pts and 12.0pts in exact match +scores respectively. +" +Query2doc: Query Expansion with Large Language Models,Liang Wang,http://arxiv.org/pdf/2303.07678v2.pdf,2023-03-14,"['cs.ir', 'cs.cl']",2303.07678v2.pdf," This paper introduces a simple yet effective query expansion approach, +denoted as query2doc, to improve both sparse and dense retrieval systems. The +proposed method first generates pseudo-documents by few-shot prompting large +language models (LLMs), and then expands the query with generated +pseudo-documents. LLMs are trained on web-scale text corpora and are adept at +knowledge memorization. The pseudo-documents from LLMs often contain highly +relevant information that can aid in query disambiguation and guide the +retrievers. Experimental results demonstrate that query2doc boosts the +performance of BM25 by 3% to 15% on ad-hoc IR datasets, such as MS-MARCO and +TREC DL, without any model fine-tuning. Furthermore, our method also benefits +state-of-the-art dense retrievers in terms of both in-domain and out-of-domain +results. +" +How to Design Translation Prompts for ChatGPT: An Empirical Study,Yuan Gao,http://arxiv.org/pdf/2304.02182v2.pdf,2023-04-05,['cs.cl'],2304.02182v2.pdf," The recently released ChatGPT has demonstrated surprising abilities in +natural language understanding and natural language generation. Machine +translation relies heavily on the abilities of language understanding and +generation. Thus, in this paper, we explore how to assist machine translation +with ChatGPT. We adopt several translation prompts on a wide range of +translations. Our experimental results show that ChatGPT with designed +translation prompts can achieve comparable or better performance over +commercial translation systems for high-resource language translations. We +further evaluate the translation quality using multiple references, and ChatGPT +achieves superior performance compared to commercial systems. We also conduct +experiments on domain-specific translations, the final results show that +ChatGPT is able to comprehend the provided domain keyword and adjust +accordingly to output proper translations. At last, we perform few-shot prompts +that show consistent improvement across different base prompts. Our work +provides empirical evidence that ChatGPT still has great potential in +translations. 
+" +Boosted Prompt Ensembles for Large Language Models,Silviu Pitis,http://arxiv.org/pdf/2304.05970v1.pdf,2023-04-12,"['cs.cl', 'cs.lg']",2304.05970v1.pdf," Methods such as chain-of-thought prompting and self-consistency have pushed +the frontier of language model reasoning performance with no additional +training. To further improve performance, we propose a prompt ensembling method +for large language models, which uses a small dataset to construct a set of few +shot prompts that together comprise a ``boosted prompt ensemble''. The few shot +examples for each prompt are chosen in a stepwise fashion to be ``hard'' +examples on which the previous step's ensemble is uncertain. We show that this +outperforms single-prompt output-space ensembles and bagged prompt-space +ensembles on the GSM8k and AQuA datasets, among others. We propose both +train-time and test-time versions of boosted prompting that use different +levels of available annotation and conduct a detailed empirical study of our +algorithm. +" +Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models,Jimmy Wei,http://arxiv.org/pdf/2304.13835v3.pdf,2023-04-26,"['cs.cl', 'cs.lg']",2304.13835v3.pdf," Current dialogue research primarily studies pairwise (two-party) +conversations, and does not address the everyday setting where more than two +speakers converse together. In this work, we both collect and evaluate +multi-party conversations to study this more general case. We use the LIGHT +environment to construct grounded conversations, where each participant has an +assigned character to role-play. We thus evaluate the ability of language +models to act as one or more characters in such conversations. Models require +two skills that pairwise-trained models appear to lack: (1) being able to +decide when to talk; (2) producing coherent utterances grounded on multiple +characters. We compare models trained on our new dataset to existing +pairwise-trained dialogue models, as well as large language models with +few-shot prompting. We find that our new dataset, MultiLIGHT, which we will +publicly release, can help bring significant improvements in the group setting. +" +Transferring Procedural Knowledge across Commonsense Tasks,Yifan Jiang,http://arxiv.org/pdf/2304.13867v2.pdf,2023-04-26,['cs.cl'],2304.13867v2.pdf," Stories about everyday situations are an essential part of human +communication, motivating the need to develop AI agents that can reliably +understand these stories. Despite the long list of supervised methods for story +completion and procedural understanding, current AI has no mechanisms to +automatically track and explain procedures in unseen stories. To bridge this +gap, we study the ability of AI models to transfer procedural knowledge to +novel narrative tasks in a transparent manner. We design LEAP: a comprehensive +framework that integrates state-of-the-art modeling architectures, training +regimes, and augmentation strategies based on both natural and synthetic +stories. To address the lack of densely annotated training data, we devise a +robust automatic labeler based on few-shot prompting to enhance the augmented +data. Our experiments with in- and out-of-domain tasks reveal insights into the +interplay of different architectures, training regimes, and augmentation +strategies. LEAP's labeler has a clear positive impact on out-of-domain +datasets, while the resulting dense annotation provides native explainability. 
+" +Explainable Verbal Reasoner Plus (EVR+): A Natural Language Reasoning Framework that Supports Diverse Compositional Reasoning,Zhengzhong Liang,http://arxiv.org/pdf/2305.00061v1.pdf,2023-04-28,"['cs.cl', 'cs.ai']",2305.00061v1.pdf," Languages models have been successfully applied to a variety of reasoning +tasks in NLP, yet the language models still suffer from compositional +generalization. In this paper we present Explainable Verbal Reasoner Plus +(EVR+), a reasoning framework that enhances language models' compositional +reasoning ability by (1) allowing the model to explicitly generate and execute +symbolic operators, and (2) allowing the model to decompose a complex task into +several simpler ones in a flexible manner. Compared with its predecessor +Explainable Verbal Reasoner (EVR) and other previous approaches adopting +similar ideas, our framework supports more diverse types of reasoning such as +nested loops and different types of recursion. To evaluate our reasoning +framework, we build a synthetic dataset with five tasks that require +compositional reasoning. Results show that our reasoning framework can enhance +the language model's compositional generalization performance on the five +tasks, using a fine-tuned language model. We also discussed the possibility and +the challenges to combine our reasoning framework with a few-shot prompted +language model. +" +Revisiting Relation Extraction in the era of Large Language Models,Somin Wadhwa,http://arxiv.org/pdf/2305.05003v1.pdf,2023-05-08,['cs.cl'],2305.05003v1.pdf," Relation extraction (RE) is the core NLP task of inferring semantic +relationships between entities from text. Standard supervised RE techniques +entail training modules to tag tokens comprising entity spans and then predict +the relationship between them. Recent work has instead treated the problem as a +\emph{sequence-to-sequence} task, linearizing relations between entities as +target strings to be generated conditioned on the input. Here we push the +limits of this approach, using larger language models (GPT-3 and Flan-T5 large) +than considered in prior work and evaluating their performance on standard RE +tasks under varying levels of supervision. We address issues inherent to +evaluating generative approaches to RE by doing human evaluations, in lieu of +relying on exact matching. Under this refined evaluation, we find that: (1) +Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly +equivalent to existing fully supervised models; (2) Flan-T5 is not as capable +in the few-shot setting, but supervising and fine-tuning it with +Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA +results. We release this model as a new baseline for RE tasks. +" +Generating medically-accurate summaries of patient-provider dialogue: A multi-stage approach using large language models,Varun Nair,http://arxiv.org/pdf/2305.05982v1.pdf,2023-05-10,"['cs.cl', 'cs.ai', 'cs.lg']",2305.05982v1.pdf," A medical provider's summary of a patient visit serves several critical +purposes, including clinical decision-making, facilitating hand-offs between +providers, and as a reference for the patient. An effective summary is required +to be coherent and accurately capture all the medically relevant information in +the dialogue, despite the complexity of patient-generated language. 
Even minor +inaccuracies in visit summaries (for example, summarizing ""patient does not +have a fever"" when a fever is present) can be detrimental to the outcome of +care for the patient. + This paper tackles the problem of medical conversation summarization by +discretizing the task into several smaller dialogue-understanding tasks that +are sequentially built upon. First, we identify medical entities and their +affirmations within the conversation to serve as building blocks. We study +dynamically constructing few-shot prompts for tasks by conditioning on relevant +patient information and use GPT-3 as the backbone for our experiments. We also +develop GPT-derived summarization metrics to measure performance against +reference summaries quantitatively. Both our human evaluation study and metrics +for medical correctness show that summaries generated using this approach are +clinically accurate and outperform the baseline approach of summarizing the +dialog in a zero-shot, single-prompt setting. +" +ZARA: Improving Few-Shot Self-Rationalization for Small Language Models,Wei-Lin Chen,http://arxiv.org/pdf/2305.07355v2.pdf,2023-05-12,['cs.cl'],2305.07355v2.pdf," Language models (LMs) that jointly generate end-task answers as well as +free-text rationales are known as self-rationalization models. Recent works +demonstrate great performance gain for self-rationalization by few-shot +prompting LMs with rationale-augmented exemplars. However, the ability to +benefit from explanations only emerges with large-scale LMs, which have poor +accessibility. In this work, we explore the less-studied setting of leveraging +explanations for small LMs to improve few-shot self-rationalization. We first +revisit the relationship between rationales and answers. Inspired by the +implicit mental process of how human beings assess explanations, we present a +novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to +automatically construct pseudo-parallel data for self-training by reducing the +problem of plausibility judgement to natural language inference. Experimental +results show ZARA achieves SOTA performance on the FEB benchmark, for both the +task accuracy and the explanation metric. In addition, we conduct human and +quantitative evaluation validating ZARA's ability to automatically identify +plausible and accurate rationale-answer pairs. +" +Natural Language Decomposition and Interpretation of Complex Utterances,Harsh Jhamtani,http://arxiv.org/pdf/2305.08677v1.pdf,2023-05-15,['cs.cl'],2305.08677v1.pdf," Natural language interfaces often require supervised data to translate user +requests into programs, database queries, or other structured intent +representations. During data collection, it can be difficult to anticipate and +formalize the full range of user needs -- for example, in a system designed to +handle simple requests (like $\textit{find my meetings tomorrow}$ or +$\textit{move my meeting with my manager to noon})$, users may also express +more elaborate requests (like $\textit{swap all my calls on Monday and +Tuesday}$). We introduce an approach for equipping a simple language-to-code +model to handle complex utterances via a process of hierarchical natural +language decomposition. Our approach uses a pre-trained language model to +decompose a complex utterance into a sequence of smaller natural language +steps, then interprets each step using the language-to-code model. 
To test our +approach, we collect and release DeCU -- a new NL-to-program benchmark to +evaluate Decomposition of Complex Utterances. Experiments show that the +proposed approach enables the interpretation of complex utterances with almost +no complex training data, while outperforming standard few-shot prompting +approaches. +" +Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models,Emily Reif,http://arxiv.org/pdf/2305.11364v2.pdf,2023-05-19,"['cs.cl', 'cs.ai']",2305.11364v2.pdf," Large language models (LLMs) can be used to generate smaller, more refined +datasets via few-shot prompting for benchmarking, fine-tuning or other use +cases. However, understanding and evaluating these datasets is difficult, and +the failure modes of LLM-generated data are still not well understood. +Specifically, the data can be repetitive in surprising ways, not only +semantically but also syntactically and lexically. We present LinguisticLens, a +novel inter-active visualization tool for making sense of and analyzing +syntactic diversity of LLM-generated datasets. LinguisticLens clusters text +along syntactic, lexical, and semantic axes. It supports hierarchical +visualization of a text dataset, allowing users to quickly scan for an overview +and inspect individual examples. The live demo is available at +shorturl.at/zHOUV. +" +Improved Compositional Generalization by Generating Demonstrations for Meta-Learning,Sam Spilsbury,http://arxiv.org/pdf/2305.13092v1.pdf,2023-05-22,['cs.cl'],2305.13092v1.pdf," Meta-learning and few-shot prompting are viable methods to induce certain +types of compositional behaviour. However, these methods can be very sensitive +to the choice of support examples used. Choosing good supports from the +training data for a given test query is already a difficult problem, but in +some cases solving this may not even be enough. We consider a grounded language +learning problem (gSCAN) where good support examples for certain test splits +might not even exist in the training data, or would be infeasible to search +for. We design an agent which instead generates possible supports which are +relevant to the test query and current state of the world, then uses these +supports via meta-learning to solve the test query. We show substantially +improved performance on a previously unsolved compositional behaviour split +without a loss of performance on other splits. Further experiments show that in +this case, searching for relevant demonstrations even with an oracle function +is not sufficient to attain good performance when using meta-learning. +" +SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations,Jesus Solano,http://arxiv.org/pdf/2305.13235v2.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13235v2.pdf," Explaining the decisions of neural models is crucial for ensuring their +trustworthiness at deployment time. Using Natural Language Explanations (NLEs) +to justify a model's predictions has recently gained increasing interest. +However, this approach usually demands large datasets of human-written NLEs for +the ground-truth answers, which are expensive and potentially infeasible for +some applications. For models to generate high-quality NLEs when only a few +NLEs are available, the fine-tuning of Pre-trained Language Models (PLMs) in +conjunction with prompt-based learning recently emerged. However, PLMs +typically have billions of parameters, making fine-tuning expensive. 
We propose +SparseFit, a sparse few-shot fine-tuning strategy that leverages discrete +prompts to jointly generate predictions and NLEs. We experiment with SparseFit +on the T5 model and four datasets and compare it against state-of-the-art +parameter-efficient fine-tuning techniques. We perform automatic and human +evaluations to assess the quality of the model-generated NLEs, finding that +fine-tuning only 6.8% of the model parameters leads to competitive results for +both the task performance and the quality of the NLEs. +" +Towards Legally Enforceable Hate Speech Detection for Public Forums,Chu Fei Luo,http://arxiv.org/pdf/2305.13677v2.pdf,2023-05-23,['cs.cl'],2305.13677v2.pdf," Hate speech causes widespread and deep-seated societal issues. Proper +enforcement of hate speech laws is key for protecting groups of people against +harmful and discriminatory language. However, determining what constitutes hate +speech is a complex task that is highly open to subjective interpretations. +Existing works do not align their systems with enforceable definitions of hate +speech, which can make their outputs inconsistent with the goals of regulators. +This research introduces a new perspective and task for enforceable hate speech +detection centred around legal definitions, and a dataset annotated on +violations of eleven possible definitions by legal experts. Given the challenge +of identifying clear, legally enforceable instances of hate speech, we augment +the dataset with expert-generated samples and an automatically mined challenge +set. We experiment with grounding the model decision in these definitions using +zero-shot and few-shot prompting. We then report results on several large +language models (LLMs). With this task definition, automatic hate speech +detection can be more closely aligned to enforceable laws, and hence assist in +more rigorous enforcement of legal protections against harmful speech in public +forums. +" +PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents,Simeng Sun,http://arxiv.org/pdf/2305.14564v1.pdf,2023-05-23,['cs.cl'],2305.14564v1.pdf," Strategies such as chain-of-thought prompting improve the performance of +large language models (LLMs) on complex reasoning tasks by decomposing input +examples into intermediate steps. However, it remains unclear how to apply such +methods to reason over long input documents, in which both the decomposition +and the output of each intermediate step are non-trivial to obtain. In this +work, we propose PEARL, a prompting framework to improve reasoning over long +documents, which consists of three stages: action mining, plan formulation, and +plan execution. More specifically, given a question about a long document, +PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, +FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain +the answer. Each stage of PEARL is implemented via zero-shot or few-shot +prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate +PEARL on a challenging subset of the QuALITY dataset, which contains questions +that require complex reasoning over long narrative texts. PEARL outperforms +zero-shot and chain-of-thought prompting on this dataset, and ablation +experiments show that each stage of PEARL is critical to its performance. +Overall, PEARL is a first step towards leveraging LLMs to reason over long +documents. 
+" +Large Language Model Distillation Doesn't Need a Teacher,Ananya Harsh Jha,http://arxiv.org/pdf/2305.14864v1.pdf,2023-05-24,['cs.cl'],2305.14864v1.pdf," Knowledge distillation trains a smaller student model to match the output +distribution of a larger teacher to maximize the end-task performance under +computational constraints. However, existing literature on language model +distillation primarily focuses on compressing encoder-only models that are then +specialized by task-specific supervised finetuning. We need to rethink this +setup for more recent large language models with tens to hundreds of billions +of parameters. Task-specific finetuning is impractical at this scale, and model +performance is often measured using zero/few-shot prompting. Thus, in this +work, we advocate for task-agnostic zero-shot evaluated distillation for large +language models without access to end-task finetuning data. We propose a +teacher-free task-agnostic distillation method, which uses a truncated version +of the larger model for initialization, and continues pretraining this model +using a language modeling objective. Our teacher-free method shines in a +distillation regime where it is infeasible to fit both the student and teacher +into the GPU memory. Despite its simplicity, our method can effectively reduce +the model size by 50\%, matching or outperforming the vanilla distillation +method on perplexity and accuracy on 13 zero-shot end-tasks while being 1.5x +computationally efficient. +" +Revisiting non-English Text Simplification: A Unified Multilingual Benchmark,Michael J. Ryan,http://arxiv.org/pdf/2305.15678v1.pdf,2023-05-25,"['cs.cl', 'cs.ai']",2305.15678v1.pdf," Recent advancements in high-quality, large-scale English resources have +pushed the frontier of English Automatic Text Simplification (ATS) research. +However, less work has been done on multilingual text simplification due to the +lack of a diverse evaluation benchmark that covers complex-simple sentence +pairs in many languages. This paper introduces the MultiSim benchmark, a +collection of 27 resources in 12 distinct languages containing over 1.7 million +complex-simple sentence pairs. This benchmark will encourage research in +developing more effective multilingual text simplification models and +evaluation metrics. Our experiments using MultiSim with pre-trained +multilingual language models reveal exciting performance improvements from +multilingual training in non-English settings. We observe strong performance +from Russian in zero-shot cross-lingual transfer to low-resource languages. We +further show that few-shot prompting with BLOOM-176b achieves comparable +quality to reference simplifications outperforming fine-tuned models in most +languages. We validate these findings through human evaluation. +" +Do GPTs Produce Less Literal Translations?,Vikas Raunak,http://arxiv.org/pdf/2305.16806v4.pdf,2023-05-26,"['cs.cl', 'cs.ai']",2305.16806v4.pdf," Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose +language models capable of addressing many natural language generation or +understanding tasks. On the task of Machine Translation (MT), multiple works +have investigated few-shot prompting mechanisms to elicit better translations +from LLMs. However, there has been relatively little investigation on how such +translations differ qualitatively from the translations generated by standard +Neural Machine Translation (NMT) models. 
In this work, we investigate these +differences in terms of the literalness of translations produced by the two +systems. Using literalness measures involving word alignment and monotonicity, +we find that translations out of English (E-X) from GPTs tend to be less +literal, while exhibiting similar or better scores on MT quality metrics. We +demonstrate that this finding is borne out in human evaluations as well. We +then show that these differences are especially pronounced when translating +sentences that contain idiomatic expressions. +" +Log Parsing: How Far Can ChatGPT Go?,Van-Hoang Le,http://arxiv.org/pdf/2306.01590v2.pdf,2023-06-02,"['cs.se', 'cs.ai']",2306.01590v2.pdf," Software logs play an essential role in ensuring the reliability and +maintainability of large-scale software systems, as they are often the sole +source of runtime information. Log parsing, which converts raw log messages +into structured data, is an important initial step towards downstream log +analytics. In recent studies, ChatGPT, the current cutting-edge large language +model (LLM), has been widely applied to a wide range of software engineering +tasks. However, its performance in automated log parsing remains unclear. In +this paper, we evaluate ChatGPT's ability to undertake log parsing by +addressing two research questions. (1) Can ChatGPT effectively parse logs? (2) +How does ChatGPT perform with different prompting methods? Our results show +that ChatGPT can achieve promising results for log parsing with appropriate +prompts, especially with few-shot prompting. Based on our findings, we outline +several challenges and opportunities for ChatGPT-based log parsing. +" +Large Language Model Augmented Narrative Driven Recommendations,Sheshera Mysore,http://arxiv.org/pdf/2306.02250v2.pdf,2023-06-04,"['cs.ir', 'cs.cl']",2306.02250v2.pdf," Narrative-driven recommendation (NDR) presents an information access problem +where users solicit recommendations with verbose descriptions of their +preferences and context, for example, travelers soliciting recommendations for +points of interest while describing their likes/dislikes and travel +circumstances. These requests are increasingly important with the rise of +natural language-based conversational interfaces for search and recommendation +systems. However, NDR lacks abundant training data for models, and current +platforms commonly do not support these requests. Fortunately, classical +user-item interaction datasets contain rich textual data, e.g., reviews, which +often describe user preferences and context - this may be used to bootstrap +training for NDR models. In this work, we explore using large language models +(LLMs) for data augmentation to train NDR models. We use LLMs for authoring +synthetic narrative queries from user-item interactions with few-shot prompting +and train retrieval models for NDR on synthetic queries and user-item +interaction data. Our experiments demonstrate that this is an effective +strategy for training small-parameter retrieval models that outperform other +retrieval and LLM baselines for narrative-driven recommendation. 
+" +Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering,Zixian Huang,http://arxiv.org/pdf/2306.04508v1.pdf,2023-06-07,"['cs.cl', 'cs.ai']",2306.04508v1.pdf," Whereas the recent emergence of large language models (LLMs) like ChatGPT has +exhibited impressive general performance, it still has a large gap with +fully-supervised models on specific tasks such as multi-span question +answering. Previous researches found that in-context learning is an effective +approach to exploiting LLM, by using a few task-related labeled data as +demonstration examples to construct a few-shot prompt for answering new +questions. A popular implementation is to concatenate a few questions and their +correct answers through simple templates, informing LLM of the desired output. +In this paper, we propose a novel way of employing labeled data such that it +also informs LLM of some undesired output, by extending demonstration examples +with feedback about answers predicted by an off-the-shelf model, e.g., correct, +incorrect, or incomplete. Experiments on three multi-span question answering +datasets as well as a keyphrase extraction dataset show that our new prompting +strategy consistently improves LLM's in-context learning performance. +" +Product Information Extraction using ChatGPT,Alexander Brinkmann,http://arxiv.org/pdf/2306.14921v1.pdf,2023-06-23,"['cs.cl', 'cs.ir']",2306.14921v1.pdf," Structured product data in the form of attribute/value pairs is the +foundation of many e-commerce applications such as faceted product search, +product comparison, and product recommendation. Product offers often only +contain textual descriptions of the product attributes in the form of titles or +free text. Hence, extracting attribute/value pairs from textual product +descriptions is an essential enabler for e-commerce applications. In order to +excel, state-of-the-art product information extraction methods require large +quantities of task-specific training data. The methods also struggle with +generalizing to out-of-distribution attributes and attribute values that were +not a part of the training data. Due to being pre-trained on huge amounts of +text as well as due to emergent effects resulting from the model size, Large +Language Models like ChatGPT have the potential to address both of these +shortcomings. This paper explores the potential of ChatGPT for extracting +attribute/value pairs from product descriptions. We experiment with different +zero-shot and few-shot prompt designs. Our results show that ChatGPT achieves a +performance similar to a pre-trained language model but requires much smaller +amounts of training data and computation for fine-tuning. +" +SummQA at MEDIQA-Chat 2023:In-Context Learning with GPT-4 for Medical Summarization,Yash Mathur,http://arxiv.org/pdf/2306.17384v1.pdf,2023-06-30,['cs.cl'],2306.17384v1.pdf," Medical dialogue summarization is challenging due to the unstructured nature +of medical conversations, the use of medical terminology in gold summaries, and +the need to identify key information across multiple symptom sets. We present a +novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA +2023 Shared Task. Our approach for section-wise summarization (Task A) is a +two-stage process of selecting semantically similar dialogues and using the +top-k similar dialogues as in-context examples for GPT-4. For full-note +summarization (Task B), we use a similar solution with k=1. 
We achieved 3rd +place in Task A (2nd among all teams), 4th place in Task B Division Wise +Summarization (2nd among all teams), 15th place in Task A Section Header +Classification (9th among all teams), and 8th place among all teams in Task B. +Our results highlight the effectiveness of few-shot prompting for this task, +though we also identify several weaknesses of prompting-based approaches. We +compare GPT-4 performance with several finetuned baselines. We find that GPT-4 +summaries are more abstractive and shorter. We make our code publicly +available. +" +Building Cooperative Embodied Agents Modularly with Large Language Models,Hongxin Zhang,http://arxiv.org/pdf/2307.02485v1.pdf,2023-07-05,"['cs.ai', 'cs.cl', 'cs.cv']",2307.02485v1.pdf," Large Language Models (LLMs) have demonstrated impressive planning abilities +in single-agent embodied tasks across various domains. However, their capacity +for planning and communication in multi-agent cooperation remains unclear, even +though these are crucial skills for intelligent embodied agents. In this paper, +we present a novel framework that utilizes LLMs for multi-agent cooperation and +tests it in various embodied environments. Our framework enables embodied +agents to plan, communicate, and cooperate with other embodied agents or humans +to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, +such as GPT-4, can surpass strong planning-based methods and exhibit emergent +effective communication using our framework without requiring fine-tuning or +few-shot prompting. We also discover that LLM-based agents that communicate in +natural language can earn more trust and cooperate more effectively with +humans. Our research underscores the potential of LLMs for embodied AI and lays +the foundation for future research in multi-agent cooperation. Videos can be +found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. +" +MultiQG-TI: Towards Question Generation from Multi-modal Sources,Zichao Wang,http://arxiv.org/pdf/2307.04643v1.pdf,2023-07-07,"['cs.cl', 'cs.ai']",2307.04643v1.pdf," We study the new problem of automatic question generation (QG) from +multi-modal sources containing images and texts, significantly expanding the +scope of most of the existing work that focuses exclusively on QG from only +textual sources. We propose a simple solution for our new problem, called +MultiQG-TI, which enables a text-only question generator to process visual +input in addition to textual input. Specifically, we leverage an image-to-text +model and an optical character recognition model to obtain the textual +description of the image and extract any texts in the image, respectively, and +then feed them together with the input texts to the question generator. We only +fine-tune the question generator while keeping the other components fixed. On +the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly +outperforms ChatGPT with few-shot prompting, despite having hundred-times less +trainable parameters. Additional analyses empirically confirm the necessity of +both visual and textual signals for QG and show the impact of various modeling +choices. +" +Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels?,Cheng-En Wu,http://arxiv.org/pdf/2307.11978v1.pdf,2023-07-22,"['cs.cv', 'cs.ai', 'cs.lg']",2307.11978v1.pdf," Vision-language models such as CLIP learn a generic text-image embedding from +large-scale training data. 
A vision-language model can be adapted to a new +classification task through few-shot prompt tuning. We find that such a prompt +tuning process is highly robust to label noises. This intrigues us to study the +key reasons contributing to the robustness of the prompt tuning paradigm. We +conducted extensive experiments to explore this property and find the key +factors are: 1) the fixed classname tokens provide a strong regularization to +the optimization of the model, reducing gradients induced by the noisy samples; +2) the powerful pre-trained image-text embedding that is learned from diverse +and generic web data provides strong prior knowledge for image classification. +Further, we demonstrate that noisy zero-shot predictions from CLIP can be used +to tune its own prompt, significantly enhancing prediction accuracy in the +unsupervised setting. The code is available at https://github.com/CEWu/PTNL. +" +Analyzing Chain-of-Thought Prompting in Large Language Models via Gradient-based Feature Attributions,Skyler Wu,http://arxiv.org/pdf/2307.13339v1.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13339v1.pdf," Chain-of-thought (CoT) prompting has been shown to empirically improve the +accuracy of large language models (LLMs) on various question answering tasks. +While understanding why CoT prompting is effective is crucial to ensuring that +this phenomenon is a consequence of desired model behavior, little work has +addressed this; nonetheless, such an understanding is a critical prerequisite +for responsible model deployment. We address this question by leveraging +gradient-based feature attribution methods which produce saliency scores that +capture the influence of input tokens on model output. Specifically, we probe +several open-source LLMs to investigate whether CoT prompting affects the +relative importances they assign to particular input tokens. Our results +indicate that while CoT prompting does not increase the magnitude of saliency +scores attributed to semantically relevant tokens in the prompt compared to +standard few-shot prompting, it increases the robustness of saliency scores to +question perturbations and variations in model output. +" +Low-Parameter Federated Learning with Large Language Models,Jingang Jiang,http://arxiv.org/pdf/2307.13896v1.pdf,2023-07-26,['cs.dc'],2307.13896v1.pdf," We study few-shot Natural Language Understanding (NLU) tasks with Large +Language Models (LLMs) in federated learning (FL) scenarios. It is a +challenging task due to limited labeled data and communication capacities in +FL, especially with mobile devices. Recent studies show LLMs can be prompted to +perform few-shot NLU tasks like sentiment analysis and arithmetic reasoning. +However, the huge sizes of LLMs result in high computation and communication +costs, making classical FL schemes impractical. To address these challenges, we +propose Low-Parameter Federated Learning (LP-FL). LP-FL combines few-shot +prompt learning from LLMs with efficient communication and federating +techniques. Our approach enables federated clients to assign soft labels to +unlabeled data using gradually learned knowledge from the global model. Through +iterative soft-label assigning, we continually expand the labeled set during +the FL process. Additionally, to reduce computation and communication costs, +LP-FL utilizes the Low-Rank Adaptation (LoRA) technique for compact learnable +parameter construction, efficient local model fine-tuning, and affordable +global model federation. 
LP-FL consistently outperforms Full-Parameter +Federated Learning (FP-FL) in sentiment analysis tasks across various FL +settings. Its resistance to overfitting allows LP-FL to equal or surpass +centralized training in few-shot scenarios. +" +Large Language Model Prompt Chaining for Long Legal Document Classification,Dietrich Trautmann,http://arxiv.org/pdf/2308.04138v1.pdf,2023-08-08,['cs.cl'],2308.04138v1.pdf," Prompting is used to guide or steer a language model in generating an +appropriate response that is consistent with the desired outcome. Chaining is a +strategy used to decompose complex tasks into smaller, manageable components. +In this study, we utilize prompt chaining for extensive legal document +classification tasks, which present difficulties due to their intricate +domain-specific language and considerable length. Our approach begins with the +creation of a concise summary of the original document, followed by a semantic +search for related exemplar texts and their corresponding annotations from a +training corpus. Finally, we prompt for a label - based on the task - to +assign, by leveraging the in-context learning from the few-shot prompt. We +demonstrate that through prompt chaining, we can not only enhance the +performance over zero-shot, but also surpass the micro-F1 score achieved by +larger models, such as ChatGPT zero-shot, using smaller models. +" +FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for Large Language Models,Liwen Zhang,http://arxiv.org/pdf/2308.09975v1.pdf,2023-08-19,['cs.cl'],2308.09975v1.pdf," Large language models (LLMs) have demonstrated exceptional performance in +various natural language processing tasks, yet their efficacy in more +challenging and domain-specific tasks remains largely unexplored. This paper +presents FinEval, a benchmark specifically designed for the financial domain +knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice +questions covering Finance, Economy, Accounting, and Certificate. It includes +4,661 questions spanning 34 different academic subjects. To ensure a +comprehensive model performance evaluation, FinEval employs a range of prompt +types, including zero-shot and few-shot prompts, as well as answer-only and +chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs +on FinEval, the results show that only GPT-4 achieved an accuracy close to 70% +in different prompt settings, indicating significant growth potential for LLMs +in the financial domain knowledge. Our work offers a more comprehensive +financial knowledge evaluation benchmark, utilizing data of mock exams and +covering a wide range of evaluated LLMs. +" +Diversity Measures: Domain-Independent Proxies for Failure in Language Model Queries,Noel Ngu,http://arxiv.org/pdf/2308.11189v1.pdf,2023-08-22,"['cs.cl', 'cs.ai', 'cs.lg']",2308.11189v1.pdf," Error prediction in large language models often relies on domain-specific +information. In this paper, we present measures for quantification of error in +the response of a large language model based on the diversity of responses to a +given prompt - hence independent of the underlying application. We describe how +three such measures - based on entropy, Gini impurity, and centroid distance - +can be employed. We perform a suite of experiments on multiple datasets and +temperature settings to demonstrate that these measures strongly correlate with +the probability of failure. 
Additionally, we present empirical results +demonstrating how these measures can be applied to few-shot prompting, +chain-of-thought reasoning, and error detection. +" +Evaluating Large Language Models on Graphs: Performance Insights and Comparative Analysis,Chang Liu,http://arxiv.org/pdf/2308.11224v2.pdf,2023-08-22,"['cs.ai', 'cs.cl']",2308.11224v2.pdf," Large Language Models (LLMs) have garnered considerable interest within both +academic and industrial. Yet, the application of LLMs to graph data remains +under-explored. In this study, we evaluate the capabilities of four LLMs in +addressing several analytical problems with graph data. We employ four distinct +evaluation metrics: Comprehension, Correctness, Fidelity, and Rectification. +Our results show that: 1) LLMs effectively comprehend graph data in natural +language and reason with graph topology. 2) GPT models can generate logical and +coherent results, outperforming alternatives in correctness. 3) All examined +LLMs face challenges in structural reasoning, with techniques like zero-shot +chain-of-thought and few-shot prompting showing diminished efficacy. 4) GPT +models often produce erroneous answers in multi-answer tasks, raising concerns +in fidelity. 5) GPT models exhibit elevated confidence in their outputs, +potentially hindering their rectification capacities. Notably, GPT-4 has +demonstrated the capacity to rectify responses from GPT-3.5-turbo and its own +previous iterations. The code is available at: +https://github.com/Ayame1006/LLMtoGraph. +" +Prompt2Model: Generating Deployable Models from Natural Language Instructions,Vijay Viswanathan,http://arxiv.org/pdf/2308.12261v1.pdf,2023-08-23,['cs.cl'],2308.12261v1.pdf," Large language models (LLMs) enable system builders today to create competent +NLP systems through prompting, where they only need to describe the task in +natural language and provide a few examples. However, in other ways, LLMs are a +step backward from traditional special-purpose NLP models; they require +extensive computational resources for deployment and can be gated behind APIs. +In this paper, we propose Prompt2Model, a general-purpose method that takes a +natural language task description like the prompts provided to LLMs, and uses +it to train a special-purpose model that is conducive to deployment. This is +done through a multi-step process of retrieval of existing datasets and +pretrained models, dataset generation using LLMs, and supervised fine-tuning on +these retrieved and generated datasets. Over three tasks, we demonstrate that +given the same few-shot prompt as input, Prompt2Model trains models that +outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20% +while being up to 700 times smaller. We also show that this data can be used to +obtain reliable performance estimates of model performance, enabling model +developers to assess model reliability before deployment. Prompt2Model is +available open-source at https://github.com/neulab/prompt2model. +" +Prompt a Robot to Walk with Large Language Models,Yen-Jen Wang,http://arxiv.org/pdf/2309.09969v1.pdf,2023-09-18,"['cs.ro', 'cs.lg', 'cs.sy', 'eess.sy']",2309.09969v1.pdf," Large language models (LLMs) pre-trained on vast internet-scale data have +showcased remarkable capabilities across diverse domains. Recently, there has +been escalating interest in deploying LLMs for robotics, aiming to harness the +power of foundation models in real-world settings. 
However, this approach faces +significant challenges, particularly in grounding these models in the physical +world and in generating dynamic robot motions. To address these issues, we +introduce a novel paradigm in which we use few-shot prompts collected from the +physical environment, enabling the LLM to autoregressively generate low-level +control commands for robots without task-specific fine-tuning. Experiments +across various robots and environments validate that our method can effectively +prompt a robot to walk. We thus illustrate how LLMs can proficiently function +as low-level feedback controllers for dynamic motion control even in +high-dimensional robotic systems. The project website and source code can be +found at: https://prompt2walk.github.io/ . +" +SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models,Shyam Sundar Kannan,http://arxiv.org/pdf/2309.10062v1.pdf,2023-09-18,['cs.ro'],2309.10062v1.pdf," In this work, we introduce SMART-LLM, an innovative framework designed for +embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task +Planning using Large Language Models (LLMs), harnesses the power of LLMs to +convert high-level task instructions provided as input into a multi-robot task +plan. It accomplishes this by executing a series of stages, including task +decomposition, coalition formation, and task allocation, all guided by +programmatic LLM prompts within the few-shot prompting paradigm. We create a +benchmark dataset designed for validating the multi-robot task planning +problem, encompassing four distinct categories of high-level instructions that +vary in task complexity. Our evaluation experiments span both simulation and +real-world scenarios, demonstrating that the proposed model can achieve +promising results for generating multi-robot task plans. The experimental +videos, code, and datasets from the work can be found at +https://sites.google.com/view/smart-llm/. +" +EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning,Rajasekhar Reddy Mekala,http://arxiv.org/pdf/2309.10687v2.pdf,2023-09-16,['cs.cl'],2309.10687v2.pdf," Language models are achieving impressive performance on various tasks by +aggressively adopting inference-time prompting techniques, such as zero-shot +and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet +effective approach that prompts the model to rephrase its queries before +answering them. EchoPrompt is adapted for both zero-shot and few-shot +in-context learning with standard and chain-of-thought prompting. Experimental +results show that EchoPrompt yields substantial improvements across all these +settings for four families of causal language models. These improvements are +observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading +comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On +average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 +by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate +the factors contributing to EchoPrompt's effectiveness through ablation +studies, which reveal that both the original query and the model-generated +rephrased version are instrumental in its performance gains. Our empirical +results indicate that EchoPrompt is an effective technique that enhances +in-context learning performance. We recommend incorporating EchoPrompt into +various baseline prompting strategies to achieve performance boosts. 
+" +Self-Explanation Prompting Improves Dialogue Understanding in Large Language Models,Haoyu Gao,http://arxiv.org/pdf/2309.12940v1.pdf,2023-09-22,"['cs.cl', 'cs.ai']",2309.12940v1.pdf," Task-oriented dialogue (TOD) systems facilitate users in executing various +activities via multi-turn dialogues, but Large Language Models (LLMs) often +struggle to comprehend these intricate contexts. In this study, we propose a +novel ""Self-Explanation"" prompting strategy to enhance the comprehension +abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires +the model to analyze each dialogue utterance before task execution, thereby +improving performance across various dialogue-centric tasks. Experimental +results from six benchmark datasets confirm that our method consistently +outperforms other zero-shot prompts and matches or exceeds the efficacy of +few-shot prompts, demonstrating its potential as a powerful tool in enhancing +LLMs' comprehension in complex dialogue tasks. +" +Language Models as Knowledge Bases for Visual Word Sense Disambiguation,Anastasia Kritharoula,http://arxiv.org/pdf/2310.01960v1.pdf,2023-10-03,"['cs.cl', 'cs.ai']",2310.01960v1.pdf," Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies +between linguistic sense disambiguation and fine-grained multimodal retrieval. +The recent advancements in the development of visiolinguistic (VL) transformers +suggest some off-the-self implementations with encouraging results, which +however we argue that can be further improved. To this end, we propose some +knowledge-enhancement techniques towards improving the retrieval performance of +VL transformers via the usage of Large Language Models (LLMs) as Knowledge +Bases. More specifically, knowledge stored in LLMs is retrieved with the help +of appropriate prompts in a zero-shot manner, achieving performance +advancements. Moreover, we convert VWSD to a purely textual question-answering +(QA) problem by considering generated image captions as multiple-choice +candidate answers. Zero-shot and few-shot prompting strategies are leveraged to +explore the potential of such a transformation, while Chain-of-Thought (CoT) +prompting in the zero-shot setting is able to reveal the internal reasoning +steps an LLM follows to select the appropriate candidate. In total, our +presented approach is the first one to analyze the merits of exploiting +knowledge stored in LLMs in different ways to solve WVSD. +" +Can Large Language Models be Good Path Planners? A Benchmark and Investigation on Spatial-temporal Reasoning,Mohamed Aghzal,http://arxiv.org/pdf/2310.03249v1.pdf,2023-10-05,['cs.cl'],2310.03249v1.pdf," Large language models (LLMs) have achieved remarkable success across a wide +spectrum of tasks; however, they still face limitations in scenarios that +demand long-term planning and spatial reasoning. To facilitate this line of +research, in this work, we propose a new benchmark, termed $\textbf{P}$ath +$\textbf{P}$lanning from $\textbf{N}$atural $\textbf{L}$anguage +($\textbf{PPNL}$). Our benchmark evaluates LLMs' spatial-temporal reasoning by +formulating ''path planning'' tasks that require an LLM to navigate to target +locations while avoiding obstacles and adhering to constraints. Leveraging this +benchmark, we systematically investigate LLMs including GPT-4 via different +few-shot prompting methodologies and BART and T5 of various sizes via +fine-tuning. 
Our experimental results show the promise of few-shot GPT-4 in +spatial reasoning, when it is prompted to reason and act interleavedly, +although it still fails to make long-term temporal reasoning. In contrast, +while fine-tuned LLMs achieved impressive results on in-distribution reasoning +tasks, they struggled to generalize to larger environments or environments with +more obstacles. +" +Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning,Hongfu Liu,http://arxiv.org/pdf/2310.08923v1.pdf,2023-10-13,['cs.cl'],2310.08923v1.pdf," Large Language models (LLMs) possess the capability to engage In-context +Learning (ICL) by leveraging a few demonstrations pertaining to a new +downstream task as conditions. However, this particular learning paradigm +suffers from high instability stemming from substantial variances induced by +factors such as the input distribution of selected examples, their ordering, +and prompt formats. In this work, we demonstrate that even when all these +factors are held constant, the random selection of examples still results in +high variance. Consequently, we aim to explore the informative ability of data +examples by quantifying the Information Gain (IG) obtained in prediction after +observing a given example candidate. Then we propose to sample those with +maximum IG. Additionally, we identify the presence of template bias, which can +lead to unfair evaluations of IG during the sampling process. To mitigate this +bias, we introduce Calibration Before Sampling strategy. The experimental +results illustrate that our proposed method can yield an average relative +improvement of 14.3% across six classification tasks using three LLMs. +" +Ecologically Valid Explanations for Label Variation in NLI,Nan-Jiang Jiang,http://arxiv.org/pdf/2310.13850v1.pdf,2023-10-20,['cs.cl'],2310.13850v1.pdf," Human label variation, or annotation disagreement, exists in many natural +language processing (NLP) tasks, including natural language inference (NLI). To +gain direct evidence of how NLI label variation arises, we build LiveNLI, an +English dataset of 1,415 ecologically valid explanations (annotators explain +the NLI labels they chose) for 122 MNLI items (at least 10 explanations per +item). The LiveNLI explanations confirm that people can systematically vary on +their interpretation and highlight within-label variation: annotators sometimes +choose the same label for different reasons. This suggests that explanations +are crucial for navigating label interpretations in general. We few-shot prompt +large language models to generate explanations but the results are +inconsistent: they sometimes produces valid and informative explanations, but +it also generates implausible ones that do not support the label, highlighting +directions for improvement. +" +API-Assisted Code Generation for Question Answering on Varied Table Structures,Yihan Cao,http://arxiv.org/pdf/2310.14687v1.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.14687v1.pdf," A persistent challenge to table question answering (TableQA) by generating +executable programs has been adapting to varied table structures, typically +requiring domain-specific logical forms. 
In response, this paper introduces a +unified TableQA framework that: (1) provides a unified representation for +structured tables as multi-index Pandas data frames, (2) uses Python as a +powerful querying language, and (3) uses few-shot prompting to translate NL +questions into Python programs, which are executable on Pandas data frames. +Furthermore, to answer complex relational questions with extended program +functionality and external knowledge, our framework allows customized APIs that +Python programs can call. We experiment with four TableQA datasets that involve +tables of different structures -- relational, multi-table, and hierarchical +matrix shapes -- and achieve prominent improvements over past state-of-the-art +systems. In ablation studies, we (1) show benefits from our multi-index +representation and APIs over baselines that use only an LLM, and (2) +demonstrate that our approach is modular and can incorporate additional APIs. +" +Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models,Gangwoo Kim,http://arxiv.org/pdf/2310.14696v1.pdf,2023-10-23,['cs.cl'],2310.14696v1.pdf," Questions in open-domain question answering are often ambiguous, allowing +multiple interpretations. One approach to handling them is to identify all +possible interpretations of the ambiguous question (AQ) and to generate a +long-form answer addressing them all, as suggested by Stelmakh et al., (2022). +While it provides a comprehensive response without bothering the user for +clarification, considering multiple dimensions of ambiguity and gathering +corresponding knowledge remains a challenge. To cope with the challenge, we +propose a novel framework, Tree of Clarifications (ToC): It recursively +constructs a tree of disambiguations for the AQ -- via few-shot prompting +leveraging external knowledge -- and uses it to generate a long-form answer. +ToC outperforms existing baselines on ASQA in a few-shot setup across the +metrics, while surpassing fully-supervised baselines trained on the whole +training set in terms of Disambig-F1 and Disambig-ROUGE. Code is available at +https://github.com/gankim/tree-of-clarifications. +" +Dissecting In-Context Learning of Translations in GPTs,Vikas Raunak,http://arxiv.org/pdf/2310.15987v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.15987v1.pdf," Most of the recent work in leveraging Large Language Models (LLMs) such as +GPT-3 for Machine Translation (MT) has focused on selecting the few-shot +samples for prompting. In this work, we try to better understand the role of +demonstration attributes for the in-context learning of translations through +perturbations of high-quality, in-domain demonstrations. We find that +asymmetric perturbation of the source-target mappings yield vastly different +results. We show that the perturbation of the source side has surprisingly +little impact, while target perturbation can drastically reduce translation +quality, suggesting that it is the output text distribution that provides the +most important learning signal during in-context learning of translations. We +propose a method named Zero-Shot-Context to add this signal automatically in +Zero-Shot prompting. We demonstrate that it improves upon the zero-shot +translation performance of GPT-3, even making it competitive with few-shot +prompted translations. 
+" +Extraction of Atypical Aspects from Customer Reviews: Datasets and Experiments with Language Models,Smita Nannaware,http://arxiv.org/pdf/2311.02702v1.pdf,2023-11-05,"['cs.cl', 'cs.ai']",2311.02702v1.pdf," A restaurant dinner may become a memorable experience due to an unexpected +aspect enjoyed by the customer, such as an origami-making station in the +waiting area. If aspects that are atypical for a restaurant experience were +known in advance, they could be leveraged to make recommendations that have the +potential to engender serendipitous experiences, further increasing user +satisfaction. Although relatively rare, whenever encountered, atypical aspects +often end up being mentioned in reviews due to their memorable quality. +Correspondingly, in this paper we introduce the task of detecting atypical +aspects in customer reviews. To facilitate the development of extraction +models, we manually annotate benchmark datasets of reviews in three domains - +restaurants, hotels, and hair salons, which we use to evaluate a number of +language models, ranging from fine-tuning the instruction-based text-to-text +transformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5. +" +SQLPrompt: In-Context Text-to-SQL with Minimal Labeled Data,Ruoxi Sun,http://arxiv.org/pdf/2311.02883v1.pdf,2023-11-06,['cs.cl'],2311.02883v1.pdf," Text-to-SQL aims to automate the process of generating SQL queries on a +database from natural language text. In this work, we propose ""SQLPrompt"", +tailored to improve the few-shot prompting capabilities of Text-to-SQL for +Large Language Models (LLMs). Our methods include innovative prompt design, +execution-based consistency decoding strategy which selects the SQL with the +most consistent execution outcome among other SQL proposals, and a method that +aims to improve performance by diversifying the SQL proposals during +consistency selection with different prompt designs (""MixPrompt"") and +foundation models (""MixLLMs""). We show that \emph{SQLPrompt} outperforms +previous approaches for in-context learning with few labeled data by a large +margin, closing the gap with finetuning state-of-the-art with thousands of +labeled data. +" +OLaLa: Ontology Matching with Large Language Models,Sven Hertling,http://arxiv.org/pdf/2311.03837v1.pdf,2023-11-07,"['cs.ir', 'cs.cl']",2311.03837v1.pdf," Ontology (and more generally: Knowledge Graph) Matching is a challenging task +where information in natural language is one of the most important signals to +process. With the rise of Large Language Models, it is possible to incorporate +this knowledge in a better way into the matching pipeline. A number of +decisions still need to be taken, e.g., how to generate a prompt that is useful +to the model, how information in the KG can be formulated in prompts, which +Large Language Model to choose, how to provide existing correspondences to the +model, how to generate candidates, etc. In this paper, we present a prototype +that explores these questions by applying zero-shot and few-shot prompting with +multiple open Large Language Models to different tasks of the Ontology +Alignment Evaluation Initiative (OAEI). We show that with only a handful of +examples and a well-designed prompt, it is possible to achieve results that are +en par with supervised matching systems which use a much larger portion of the +ground truth. 
+" +Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation for Open-Domain Dialogue,Lena Reed,http://arxiv.org/pdf/2110.08094v2.pdf,2021-10-15,['cs.cl'],2110.08094v2.pdf," One challenge with open-domain dialogue systems is the need to produce +truthful, high-quality responses on any topic. We aim to improve the quality +and coverage of Athena, an Alexa Prize dialogue system. We experiment with +few-shot prompt-based learning, comparing GPT-Neo to Jurassic-1, for the +movies, music, TV, sports, and video game domains, both within and +cross-domain, with different prompt set sizes (2, 3, 10), formats, and meaning +representations consisting of either sets of WikiData KG triples, or dialogue +acts. Our evaluation uses BLEURT and human metrics, and shows that with 10-shot +prompting, Athena-Jurassic's performance is significantly better for coherence +and semantic accuracy. Experiments with 2-shot cross-domain prompts results in +a huge performance drop for Athena-GPT-Neo, whose semantic accuracy falls to +0.41, and whose untrue hallucination rate increases to 12%. Experiments with +dialogue acts for video games show that with 10-shot prompting, both models +learn to control dialogue acts, but Athena-Jurassic has significantly higher +coherence, and only 4% untrue hallucinations. Our results suggest that +Athena-Jurassic produces high enough quality outputs to be useful in live +systems with real users. To our knowledge, these are the first results +demonstrating that few-shot semantic prompt-based learning can create NLGs that +generalize to new domains, and produce high-quality, semantically-controlled, +conversational responses directly from meaning representations. +" +Code as Policies: Language Model Programs for Embodied Control,Jacky Liang,http://arxiv.org/pdf/2209.07753v4.pdf,2022-09-16,['cs.ro'],2209.07753v4.pdf," Large language models (LLMs) trained on code completion have been shown to be +capable of synthesizing simple Python programs from docstrings [1]. We find +that these code-writing LLMs can be re-purposed to write robot policy code, +given natural language commands. Specifically, policy code can express +functions or feedback loops that process perception outputs (e.g.,from object +detectors [2], [3]) and parameterize control primitive APIs. When provided as +input several example language commands (formatted as comments) followed by +corresponding policy code (via few-shot prompting), LLMs can take in new +commands and autonomously re-compose API calls to generate new policy code +respectively. By chaining classic logic structures and referencing third-party +libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way +can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) +generalize to new instructions, and (iii) prescribe precise values (e.g., +velocities) to ambiguous descriptions (""faster"") depending on context (i.e., +behavioral commonsense). This paper presents code as policies: a robot-centric +formulation of language model generated programs (LMPs) that can represent +reactive policies (e.g., impedance controllers), as well as waypoint-based +policies (vision-based pick and place, trajectory-based control), demonstrated +across multiple real robot platforms. Central to our approach is prompting +hierarchical code-gen (recursively defining undefined functions), which can +write more complex code and also improves state-of-the-art to solve 39.8% of +problems on the HumanEval [1] benchmark. 
Code and videos are available at +https://code-as-policies.github.io +" +Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus,Gang Li,http://arxiv.org/pdf/2209.14927v4.pdf,2022-09-29,"['cs.cv', 'cs.hc', 'cs.lg']",2209.14927v4.pdf," Mobile UI understanding is important for enabling various interaction tasks +such as UI automation and accessibility. Previous mobile UI modeling often +depends on the view hierarchy information of a screen, which directly provides +the structural data of the UI, with the hope to bypass challenging tasks of +visual modeling from screen pixels. However, view hierarchies are not always +available, and are often corrupted with missing object descriptions or +misaligned structure information. As a result, despite the use of view +hierarchies could offer short-term gains, it may ultimately hinder the +applicability and performance of the model. In this paper, we propose +Spotlight, a vision-only approach for mobile UI understanding. Specifically, we +enhance a vision-language model that only takes the screenshot of the UI and a +region of interest on the screen -- the focus -- as the input. This general +architecture of Spotlight is easily scalable and capable of performing a range +of UI modeling tasks. Our experiments show that our model establishes SoTA +results on several representative UI tasks and outperforms previous methods +that use both screenshots and view hierarchies as inputs. Furthermore, we +explore multi-task learning and few-shot prompting capacities of the proposed +models, demonstrating promising results in the multi-task learning direction. +" +Grounding Language with Visual Affordances over Unstructured Data,Oier Mees,http://arxiv.org/pdf/2210.01911v3.pdf,2022-10-04,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.lg']",2210.01911v3.pdf," Recent works have shown that Large Language Models (LLMs) can be applied to +ground natural language to a wide variety of robot skills. However, in +practice, learning multi-task, language-conditioned robotic skills typically +requires large-scale data collection and frequent human intervention to reset +the environment or help correcting the current policies. In this work, we +propose a novel approach to efficiently learn general-purpose +language-conditioned robot skills from unstructured, offline and reset-free +data in the real world by exploiting a self-supervised visuo-lingual affordance +model, which requires annotating as little as 1% of the total data with +language. We evaluate our method in extensive experiments both in simulated and +real-world robotic tasks, achieving state-of-the-art performance on the +challenging CALVIN benchmark and learning over 25 distinct visuomotor +manipulation tasks with a single policy in the real world. We find that when +paired with LLMs to break down abstract natural language instructions into +subgoals via few-shot prompting, our method is capable of completing +long-horizon, multi-tier tasks in the real world, while requiring an order of +magnitude less data than previous approaches. Code and videos are available at +http://hulc2.cs.uni-freiburg.de +" +MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting,Oscar Mañas,http://arxiv.org/pdf/2210.07179v2.pdf,2022-10-13,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2210.07179v2.pdf," Large pre-trained models have proved to be remarkable zero- and +(prompt-based) few-shot learners in unimodal vision and language tasks. 
We +propose MAPL, a simple and parameter-efficient method that reuses frozen +pre-trained unimodal models and leverages their strong generalization +capabilities in multimodal vision-language (VL) settings. MAPL learns a +lightweight mapping between the representation spaces of unimodal models using +aligned image-text data, and can generalize to unseen VL tasks from just a few +in-context examples. The small number of trainable parameters makes MAPL +effective at low-data and in-domain learning. Moreover, MAPL's modularity +enables easy extension to other pre-trained models. Extensive experiments on +several visual question answering and image captioning benchmarks show that +MAPL achieves superior or competitive performance compared to similar methods +while training orders of magnitude fewer parameters. MAPL can be trained in +just a few hours using modest computational resources and public datasets. We +release our code and pre-trained model weights at +https://github.com/mair-lab/mapl. +" +Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning,Xiangyu Peng,http://arxiv.org/pdf/2210.12587v3.pdf,2022-10-23,['cs.cl'],2210.12587v3.pdf," Prompt tuning approaches, which learn task-specific soft prompts for a +downstream task conditioning on frozen pre-trained models, have attracted +growing interest due to its parameter efficiency. With large language models +and sufficient training data, prompt tuning performs comparably to full-model +tuning. However, with limited training samples in few-shot settings, prompt +tuning fails to match the performance of full-model fine-tuning. In this work, +we focus on improving the few-shot performance of prompt tuning by transferring +knowledge from soft prompts of source tasks. Recognizing the good +generalization capabilities of ensemble methods in low-data regime, we first +experiment and show that a simple ensemble of model predictions based on +different source prompts, outperforms existing multi-prompt knowledge transfer +approaches such as source prompt fusion in the few-shot setting. Motivated by +this observation, we further investigate model ensembles and propose +Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the +contribution of each source model for each target sample separately when +ensembling source model outputs. Through this way, SESoM inherits the superior +generalization of model ensemble approaches and simultaneously captures the +sample-specific competence of each source prompt. We conduct experiments across +a diverse set of eight NLP tasks using models of different scales (T5-{base, +large, XL}) and find that SESoM consistently outperforms the existing models of +the same as well as larger parametric scale by a large margin. +" +Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations,Swarnadeep Saha,http://arxiv.org/pdf/2211.07517v1.pdf,2022-11-14,"['cs.cl', 'cs.ai']",2211.07517v1.pdf," Recent work on explainable NLP has shown that few-shot prompting can enable +large pretrained language models (LLMs) to generate grammatical and factual +natural language explanations for data labels. 
In this work, we study the +connection between explainability and sample hardness by investigating the +following research question - ""Are LLMs and humans equally good at explaining +data labels for both easy and hard samples?"" We answer this question by first +collecting human-written explanations in the form of generalizable commonsense +rules on the task of Winograd Schema Challenge (Winogrande dataset). We compare +these explanations with those generated by GPT-3 while varying the hardness of +the test samples as well as the in-context samples. We observe that (1) GPT-3 +explanations are as grammatical as human explanations regardless of the +hardness of the test samples, (2) for easy examples, GPT-3 generates highly +supportive explanations but human explanations are more generalizable, and (3) +for hard examples, human explanations are significantly better than GPT-3 +explanations both in terms of label-supportiveness and generalizability +judgements. We also find that hardness of the in-context examples impacts the +quality of GPT-3 explanations. Finally, we show that the supportiveness and +generalizability aspects of human explanations are also impacted by sample +hardness, although by a much smaller margin than models. Supporting code and +data are available at https://github.com/swarnaHub/ExplanationHardness +" +Crowd Score: A Method for the Evaluation of Jokes using Large Language Model AI Voters as Judges,Fabricio Goes,http://arxiv.org/pdf/2212.11214v1.pdf,2022-12-21,['cs.ai'],2212.11214v1.pdf," This paper presents the Crowd Score, a novel method to assess the funniness +of jokes using large language models (LLMs) as AI judges. Our method relies on +inducing different personalities into the LLM and aggregating the votes of the +AI judges into a single score to rate jokes. We validate the votes using an +auditing technique that checks if the explanation for a particular vote is +reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of +four AI voters with different humour types: affiliative, self-enhancing, +aggressive and self-defeating. Our results show that few-shot prompting leads +to better results than zero-shot for the voting question. Personality induction +showed that aggressive and self-defeating voters are significantly more +inclined to find more jokes funny of a set of aggressive/self-defeating jokes +than the affiliative and self-enhancing voters. The Crowd Score follows the +same trend as human judges by assigning higher scores to jokes that are also +considered funnier by human judges. We believe that our methodology could be +applied to other creative domains such as story, poetry, slogans, etc. It could +both help the adoption of a flexible and accurate standard approach to compare +different work in the CC community under a common metric and by minimizing +human participation in assessing creative artefacts, it could accelerate the +prototyping of creative artefacts and reduce the cost of hiring human +participants to rate creative artefacts. +" +CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models,Hossein Hajipour,http://arxiv.org/pdf/2302.04012v2.pdf,2023-02-08,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.se']",2302.04012v2.pdf," Large language models (LLMs) for automatic code generation have achieved +breakthroughs in several programming tasks. 
Their advances in competition-level +programming problems have made them an essential pillar of AI-assisted pair +programming, and tools such as GitHub Copilot have emerged as part of the daily +programming workflow used by millions of developers. The training data for +these models is usually collected from the Internet (e.g., from open-source +repositories) and is likely to contain faults and security vulnerabilities. +This unsanitized training data can cause the language models to learn these +vulnerabilities and propagate them during the code generation procedure. While +these models have been extensively assessed for their ability to produce +functionally correct programs, there remains a lack of comprehensive +investigations and benchmarks addressing the security aspects of these models. + In this work, we propose a method to systematically study the security issues +of code language models to assess their susceptibility to generating vulnerable +code. To this end, we introduce the first approach to automatically find +generated code that contains vulnerabilities in black-box code generation +models. To achieve this, we present an approach to approximate inversion of the +black-box code generation models based on few-shot prompting. We evaluate the +effectiveness of our approach by examining code language models in generating +high-risk security weaknesses. Furthermore, we establish a collection of +diverse non-secure prompts for various vulnerability scenarios using our +method. This dataset forms a benchmark for evaluating and comparing the +security weaknesses in code language models. +" +ART: Automatic multi-step reasoning and tool-use for large language models,Bhargavi Paranjape,http://arxiv.org/pdf/2303.09014v1.pdf,2023-03-16,['cs.cl'],2303.09014v1.pdf," Large language models (LLMs) can perform complex reasoning in few- and +zero-shot settings by generating intermediate chain of thought (CoT) reasoning +steps. Further, each reasoning step can rely on external tools to support +computation beyond the core LLM capabilities (e.g. search/running code). Prior +work on CoT prompting and tool use typically requires hand-crafting +task-specific demonstrations and carefully scripted interleaving of model +generations with tool use. We introduce Automatic Reasoning and Tool-use (ART), +a framework that uses frozen LLMs to automatically generate intermediate +reasoning steps as a program. Given a new task to solve, ART selects +demonstrations of multi-step reasoning and tool use from a task library. At +test time, ART seamlessly pauses generation whenever external tools are called, +and integrates their output before resuming generation. ART achieves a +substantial improvement over few-shot prompting and automatic CoT on unseen +tasks in the BigBench and MMLU benchmarks, and matches performance of +hand-crafted CoT prompts on a majority of these tasks. ART is also extensible, +and makes it easy for humans to improve performance by correcting errors in +task-specific programs or incorporating new tools, which we demonstrate by +drastically improving performance on select tasks with minimal human +intervention. 
+" +Fairness-guided Few-shot Prompting for Large Language Models,Huan Ma,http://arxiv.org/pdf/2303.13217v3.pdf,2023-03-23,"['cs.cl', 'cs.ai']",2303.13217v3.pdf," Large language models have demonstrated surprising ability to perform +in-context learning, i.e., these models can be directly applied to solve +numerous downstream tasks by conditioning on a prompt constructed by a few +input-output examples. However, prior research has shown that in-context +learning can suffer from high instability due to variations in training +examples, example order, and prompt formats. Therefore, the construction of an +appropriate prompt is essential for improving the performance of in-context +learning. In this paper, we revisit this problem from the view of predictive +bias. Specifically, we introduce a metric to evaluate the predictive bias of a +fixed prompt against labels or a given attributes. Then we empirically show +that prompts with higher bias always lead to unsatisfactory predictive quality. +Based on this observation, we propose a novel search strategy based on the +greedy search to identify the near-optimal prompt for improving the performance +of in-context learning. We perform comprehensive experiments with +state-of-the-art mainstream models such as GPT-3 on various downstream tasks. +Our results indicate that our method can enhance the model's in-context +learning performance in an effective and interpretable manner. +" +Is ChatGPT a Good Recommender? A Preliminary Study,Junling Liu,http://arxiv.org/pdf/2304.10149v3.pdf,2023-04-20,['cs.ir'],2304.10149v3.pdf," Recommendation systems have witnessed significant advancements and have been +widely used over the past decades. However, most traditional recommendation +methods are task-specific and therefore lack efficient generalization ability. +Recently, the emergence of ChatGPT has significantly advanced NLP tasks by +enhancing the capabilities of conversational models. Nonetheless, the +application of ChatGPT in the recommendation domain has not been thoroughly +investigated. In this paper, we employ ChatGPT as a general-purpose +recommendation model to explore its potential for transferring extensive +linguistic and world knowledge acquired from large-scale corpora to +recommendation scenarios. Specifically, we design a set of prompts and evaluate +ChatGPT's performance on five recommendation scenarios. Unlike traditional +recommendation methods, we do not fine-tune ChatGPT during the entire +evaluation process, relying only on the prompts themselves to convert +recommendation tasks into natural language tasks. Further, we explore the use +of few-shot prompting to inject interaction information that contains user +potential interest to help ChatGPT better understand user needs and interests. +Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT +has achieved promising results in certain tasks and is capable of reaching the +baseline level in others. We conduct human evaluations on two +explainability-oriented tasks to more accurately evaluate the quality of +contents generated by different models. And the human evaluations show ChatGPT +can truly understand the provided information and generate clearer and more +reasonable results. We hope that our study can inspire researchers to further +explore the potential of language models like ChatGPT to improve recommendation +performance and contribute to the advancement of the recommendation systems +field. 
+" +Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting,Miles Turpin,http://arxiv.org/pdf/2305.04388v1.pdf,2023-05-07,"['cs.cl', 'cs.ai']",2305.04388v1.pdf," Large Language Models (LLMs) can achieve strong performance on many tasks by +producing step-by-step reasoning before giving a final output, often referred +to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT +explanations as the LLM's process for solving a task. However, we find that CoT +explanations can systematically misrepresent the true reason for a model's +prediction. We demonstrate that CoT explanations can be heavily influenced by +adding biasing features to model inputs -- e.g., by reordering the +multiple-choice options in a few-shot prompt to make the answer always ""(A)"" -- +which models systematically fail to mention in their explanations. When we bias +models toward incorrect answers, they frequently generate CoT explanations +supporting those answers. This causes accuracy to drop by as much as 36% on a +suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI +and Claude 1.0 from Anthropic. On a social-bias task, model explanations +justify giving answers in line with stereotypes without mentioning the +influence of these social biases. Our findings indicate that CoT explanations +can be plausible yet misleading, which risks increasing our trust in LLMs +without guaranteeing their safety. CoT is promising for explainability, but our +results highlight the need for targeted efforts to evaluate and improve +explanation faithfulness. +" +Skill-Based Few-Shot Selection for In-Context Learning,Shengnan An,http://arxiv.org/pdf/2305.14210v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14210v2.pdf," In-context learning is the paradigm that adapts large language models to +downstream tasks by providing a few examples. Few-shot selection -- selecting +appropriate examples for each test instance separately -- is important for +in-context learning. In this paper, we propose Skill-KNN, a skill-based +few-shot selection method for in-context learning. The key advantages of +Skill-KNN include: (1) it addresses the problem that existing methods based on +pre-trained embeddings can be easily biased by surface natural language +features that are not important for the target task; (2) it does not require +training or fine-tuning of any models, making it suitable for frequently +expanding or changing example banks. The key insight is to optimize the inputs +fed into the embedding model, rather than tuning the model itself. Technically, +Skill-KNN generates the skill-based descriptions for each test case and +candidate example by utilizing a pre-processing few-shot prompting, thus +eliminating unimportant surface features. Experimental results across five +cross-domain semantic parsing datasets and six backbone models show that +Skill-KNN significantly outperforms existing methods. +" +USB: A Unified Summarization Benchmark Across Tasks and Domains,Kundan Krishna,http://arxiv.org/pdf/2305.14296v1.pdf,2023-05-23,"['cs.cl', 'cs.lg']",2305.14296v1.pdf," An abundance of datasets exist for training and evaluating models on the task +of summary generation.However, these datasets are often derived heuristically, +and lack sufficient annotations to support research into all aspects of +summarization, such as evidence extraction and controllable summarization. 
We +introduce a benchmark comprising 8 tasks that require multi-dimensional +understanding of summarization, e.g., surfacing evidence for a summary, +assessing its correctness, and gauging its relevance to different topics. We +compare various methods on this benchmark and discover that on multiple tasks, +moderately-sized fine-tuned models consistently outperform much larger few-shot +prompted language models. For factuality related tasks, we also evaluate +existing heuristics to create training data and find that training on them +performs worse than training on $20\times$ less human-labeled data. Our +benchmark consists of data from 6 different domains, allowing us to study +cross-domain performance of trained models. We find that for some tasks, the +amount of training data matters more than the domain where it comes from, while +for other tasks training specifically on data from the target domain, even if +limited, is more beneficial. Our work fulfills the need for a well-annotated +summarization benchmark with diverse tasks, and provides useful insights about +the impact of the quality, size and domain of training data. +" +Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement,Zhiheng Xi,http://arxiv.org/pdf/2305.14497v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14497v1.pdf," Prompting methods such as Chain-of-Thought (CoT) have shed new light on +enhancing the reasoning capabilities of large language models, and researchers +have extensively explored the generation process of rationales and answers. +However, they have overlooked the potential challenges posed by the poor +quality of reasoning problems, which may influence the reasoning performance +significantly. In this work, we propose Self-Polish (SP), a novel method that +facilitates the model's problem-solving process by prompting them to +progressively refine the given problems to be more comprehensible and solvable. +Specifically, the method teaches models to eliminate irrelevant information, +rearrange the logic structure and organize local conditions into new ones +parallelly. SP is orthogonal to all other prompting methods, making it +convenient to integrate with state-of-the-art techniques for further +improvement. We conduct thorough experiments on five benchmarks to illustrate +the effectiveness of the proposed method. For example, with Text-davinci-003, +our method boosts the performance of standard few-shot prompting by $8.0\%$ on +GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by +$6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively. Furthermore, our method +also showcases impressive performance on robustness evaluation. +" +SciFix: Outperforming GPT3 on Scientific Factual Error Correction,Dhananjay Ashok,http://arxiv.org/pdf/2305.14707v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14707v2.pdf," Due to the prohibitively high cost of creating error correction datasets, +most Factual Claim Correction methods rely on a powerful verification model to +guide the correction process. This leads to a significant drop in performance +in domains like scientific claims, where good verification models do not always +exist. 
In this work, we introduce SciFix, a scientific claim correction system +that does not require a verifier but can outperform existing methods by a +considerable margin -- achieving correction accuracy of 84% on the SciFact +dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next +best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our +method leverages the power of prompting with LLMs during training to create a +richly annotated dataset that can be used for fully supervised training and +regularization. We additionally use a claim-aware decoding procedure to improve +the quality of corrected claims. Our method outperforms the very LLM that was +used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5 +achieving 58%, 61%, and 64% on the respective datasets, a consistently lower +correction accuracy, despite using nearly 800 times as many parameters as our +model. +" +LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections,M. Jehanzeb Mirza,http://arxiv.org/pdf/2305.18287v2.pdf,2023-05-29,"['cs.cv', 'cs.cl']",2305.18287v2.pdf," Recently, large-scale pre-trained Vision and Language (VL) models have set a +new state-of-the-art (SOTA) in zero-shot visual classification enabling +open-vocabulary recognition of potentially unlimited set of categories defined +as simple language prompts. However, despite these great advances, the +performance of these zeroshot classifiers still falls short of the results of +dedicated (closed category set) classifiers trained with supervised fine +tuning. In this paper we show, for the first time, how to reduce this gap +without any labels and without any paired VL data, using an unlabeled image +collection and a set of texts auto-generated using a Large Language Model (LLM) +describing the categories of interest and effectively substituting labeled +visual instances of those categories. Using our label-free approach, we are +able to attain significant performance improvements over the zero-shot +performance of the base VL model and other contemporary methods and baselines +on a wide variety of datasets, demonstrating absolute improvement of up to +11.7% (3.8% on average) in the label-free setting. Moreover, despite our +approach being label-free, we observe 1.3% average gains over leading few-shot +prompting baselines that do use 5-shot supervision. +" +"Better patching using LLM prompting, via Self-Consistency",Toufique Ahmed,http://arxiv.org/pdf/2306.00108v2.pdf,2023-05-31,"['cs.se', 'cs.lg']",2306.00108v2.pdf," Large Language models (LLMs) can be induced to solve non-trivial problems +with ""few-shot"" prompts including illustrative problem-solution examples. Now +if the few-shots also include ""chain of thought"" (CoT) explanations, which are +of the form problem-explanation-solution, LLMs will generate a ""explained"" +solution, and perform even better. Recently an exciting, substantially better +technique, self-consistency [1] (S-C) has emerged, based on the intuition that +there are many plausible explanations for the right solution; when the LLM is +sampled repeatedly to generate a pool of explanation-solution pairs, for a +given problem, the most frequently occurring solutions in the pool (ignoring +the explanations) tend to be even more likely to be correct! Unfortunately, the +use of this highly-performant S-C (or even CoT) approach in software +engineering settings is hampered by the lack of explanations; most software +datasets lack explanations. 
In this paper, we describe an application of the +S-C approach to program repair, using the commit log on the fix as the +explanation, only in the illustrative few-shots. We achieve state-of-the art +results, beating previous approaches to prompting-based program repair, on the +MODIT dataset; we also find evidence suggesting that the correct commit +messages are helping the LLM learn to produce better patches. +" +Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence,John J. Nay,http://arxiv.org/pdf/2306.07075v1.pdf,2023-06-12,"['cs.cl', 'cs.ai', 'cs.cy']",2306.07075v1.pdf," Better understanding of Large Language Models' (LLMs) legal analysis +abilities can contribute to improving the efficiency of legal services, +governing artificial intelligence, and leveraging LLMs to identify +inconsistencies in law. This paper explores LLM capabilities in applying tax +law. We choose this area of law because it has a structure that allows us to +set up automated validation pipelines across thousands of examples, requires +logical reasoning and maths skills, and enables us to test LLM capabilities in +a manner relevant to real-world economic lives of citizens and companies. Our +experiments demonstrate emerging legal understanding capabilities, with +improved performance in each subsequent OpenAI model release. We experiment +with retrieving and utilising the relevant legal authority to assess the impact +of providing additional legal context to LLMs. Few-shot prompting, presenting +examples of question-answer pairs, is also found to significantly enhance the +performance of the most advanced model, GPT-4. The findings indicate that LLMs, +particularly when combined with prompting enhancements and the correct legal +texts, can perform at high levels of accuracy but not yet at expert tax lawyer +levels. As LLMs continue to advance, their ability to reason about law +autonomously could have significant implications for the legal profession and +AI governance. +" +DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks,Caixin Kang,http://arxiv.org/pdf/2306.09124v2.pdf,2023-06-15,"['cs.cv', 'cs.ai', 'cs.cr', 'cs.lg']",2306.09124v2.pdf," Adversarial attacks, particularly patch attacks, pose significant threats to +the robustness and reliability of deep learning models. Developing reliable +defenses against patch attacks is crucial for real-world applications, yet +current research in this area is not satisfactory. In this paper, we propose +DIFFender, a novel defense method that leverages a text-guided diffusion model +to defend against adversarial patches. DIFFender includes two main stages: +patch localization and patch restoration. In the localization stage, we find +and exploit an intriguing property of the diffusion model to effectively +identify the locations of adversarial patches. In the restoration stage, we +employ the diffusion model to reconstruct the adversarial regions in the images +while preserving the integrity of the visual content. Importantly, these two +stages are carefully guided by a unified diffusion model, thus we can utilize +the close interaction between them to improve the whole defense performance. +Moreover, we propose a few-shot prompt-tuning algorithm to fine-tune the +diffusion model, enabling the pre-trained diffusion model to easily adapt to +the defense task. 
We conduct extensive experiments on the image classification +and face recognition tasks, demonstrating that our proposed method exhibits +superior robustness under strong adaptive attacks and generalizes well across +various scenarios, diverse classifiers, and multiple patch attack methods. +" +Teaching Arithmetic to Small Transformers,Nayoung Lee,http://arxiv.org/pdf/2307.03381v1.pdf,2023-07-07,['cs.lg'],2307.03381v1.pdf," Large language models like GPT-4 exhibit emergent capabilities across +general-purpose tasks, such as basic arithmetic, when trained on extensive text +data, even though these tasks are not explicitly encoded by the unsupervised, +next-token prediction objective. This study investigates how small +transformers, trained from random initialization, can efficiently learn +arithmetic operations such as addition, multiplication, and elementary +functions like square root, using the next-token prediction objective. We first +demonstrate that conventional training data is not the most effective for +arithmetic learning, and simple formatting changes can significantly improve +accuracy. This leads to sharp phase transitions as a function of training data +scale, which, in some cases, can be explained through connections to low-rank +matrix completion. Building on prior work, we then train on chain-of-thought +style data that includes intermediate step results. Even in the complete +absence of pretraining, this approach significantly and simultaneously improves +accuracy, sample complexity, and convergence speed. We also study the interplay +between arithmetic and text data during training and examine the effects of +few-shot prompting, pretraining, and model scale. Additionally, we discuss +length generalization challenges. Our work highlights the importance of +high-quality, instructive data that considers the particular characteristics of +the next-word prediction objective for rapidly eliciting arithmetic +capabilities. +" +Controllable Generation of Dialogue Acts for Dialogue Systems via Few-Shot Response Generation and Ranking,Angela Ramirez,http://arxiv.org/pdf/2307.14440v1.pdf,2023-07-26,['cs.cl'],2307.14440v1.pdf," Dialogue systems need to produce responses that realize multiple types of +dialogue acts (DAs) with high semantic fidelity. In the past, natural language +generators (NLGs) for dialogue were trained on large parallel corpora that map +from a domain-specific DA and its semantic attributes to an output utterance. +Recent work shows that pretrained language models (LLMs) offer new +possibilities for controllable NLG using prompt-based learning. Here we develop +a novel few-shot overgenerate-and-rank approach that achieves the controlled +generation of DAs. We compare eight few-shot prompt styles that include a novel +method of generating from textual pseudo-references using a textual style +transfer approach. We develop six automatic ranking functions that identify +outputs with both the correct DA and high semantic accuracy at generation time. +We test our approach on three domains and four LLMs. To our knowledge, this is +the first work on NLG for dialogue that automatically ranks outputs using both +DA and attribute accuracy. For completeness, we compare our results to +fine-tuned few-shot models trained with 5 to 100 instances per DA. Our results +show that several prompt settings achieve perfect DA accuracy, and near perfect +semantic accuracy (99.81%) and perform better than few-shot fine-tuning. 
+" +Contextual Biasing of Named-Entities with Large Language Models,Chuanneng Sun,http://arxiv.org/pdf/2309.00723v2.pdf,2023-09-01,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as', '68t10', 'i.2.7']",2309.00723v2.pdf," This paper studies contextual biasing with Large Language Models (LLMs), +where during second-pass rescoring additional contextual information is +provided to a LLM to boost Automatic Speech Recognition (ASR) performance. We +propose to leverage prompts for a LLM without fine tuning during rescoring +which incorporate a biasing list and few-shot examples to serve as additional +information when calculating the score for the hypothesis. In addition to +few-shot prompt learning, we propose multi-task training of the LLM to predict +both the entity class and the next token. To improve the efficiency for +contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we +propose dynamic prompting, where we select the most likely class using the +class tag prediction, and only use entities in this class as contexts for next +token prediction. Word Error Rate (WER) evaluation is performed on i) an +internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli +dataset. Results indicate that biasing lists and few-shot examples can achieve +17.8% and 9.6% relative improvement compared to first pass ASR, and that +multi-task training and dynamic prompting can achieve 20.0% and 11.3% relative +WER improvement, respectively. +" +MindAgent: Emergent Gaming Interaction,Ran Gong,http://arxiv.org/pdf/2309.09971v2.pdf,2023-09-18,"['cs.ai', 'cs.hc', 'cs.ma']",2309.09971v2.pdf," Large Language Models (LLMs) have the capacity of performing complex +scheduling in a multi-agent system and can coordinate these agents into +completing sophisticated tasks that require extensive collaboration. However, +despite the introduction of numerous gaming frameworks, the community has +insufficient benchmarks towards building general multi-agents collaboration +infrastructure that encompass both LLM and human-NPCs collaborations. In this +work, we propose a novel infrastructure - MindAgent - to evaluate planning and +coordination emergent capabilities for gaming interaction. In particular, our +infrastructure leverages existing gaming framework, to i) require understanding +of the coordinator for a multi-agent system, ii) collaborate with human players +via un-finetuned proper instructions, and iii) establish an in-context learning +on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new +gaming scenario and related benchmark that dispatch a multi-agent collaboration +efficiency and supervise multiple agents playing the game simultaneously. We +conduct comprehensive evaluations with new auto-metric CoS for calculating the +collaboration efficiency. Finally, our infrastructure can be deployed into +real-world gaming scenarios in a customized VR version of CUISINEWORLD and +adapted in existing broader Minecraft gaming domain. We hope our findings on +LLMs and the new infrastructure for general-purpose scheduling and coordination +can help shed light on how such skills can be obtained by learning from large +language corpora. 
+" +DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines,Omar Khattab,http://arxiv.org/pdf/2310.03714v1.pdf,2023-10-05,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2310.03714v1.pdf," The ML community is rapidly exploring techniques for prompting language +models (LMs) and for stacking them into pipelines that solve complex tasks. +Unfortunately, existing LM pipelines are typically implemented using hard-coded +""prompt templates"", i.e. lengthy strings discovered via trial and error. Toward +a more systematic approach for developing and optimizing LM pipelines, we +introduce DSPy, a programming model that abstracts LM pipelines as text +transformation graphs, i.e. imperative computational graphs where LMs are +invoked through declarative modules. DSPy modules are parameterized, meaning +they can learn (by creating and collecting demonstrations) how to apply +compositions of prompting, finetuning, augmentation, and reasoning techniques. +We design a compiler that will optimize any DSPy pipeline to maximize a given +metric. We conduct two case studies, showing that succinct DSPy programs can +express and optimize sophisticated LM pipelines that reason about math word +problems, tackle multi-hop retrieval, answer complex questions, and control +agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and +llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot +prompting (generally by over 25% and 65%, respectively) and pipelines with +expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top +of that, DSPy programs compiled to open and relatively small LMs like +770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely +on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at +https://github.com/stanfordnlp/dspy +" +InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations,Nils Feldhus,http://arxiv.org/pdf/2310.05592v2.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.hc']",2310.05592v2.pdf," While recently developed NLP explainability methods let us open the black box +in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is +an interactive tool offering a conversational interface. Such a dialogue system +can help users explore datasets and models with explanations in a +contextualized manner, e.g. via clarification or follow-up questions, and +through a natural language interface. We adapt the conversational explanation +framework TalkToModel (Slack et al., 2022) to the NLP domain, add new +NLP-specific operations such as free-text rationalization, and illustrate its +generalizability on three NLP tasks (dialogue act classification, question +answering, hate speech detection). To recognize user queries for explanations, +we evaluate fine-tuned and few-shot prompting models and implement a novel +Adapter-based approach. We then conduct two user studies on (1) the perceived +correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. +how objectively helpful dialogical explanations are for humans in figuring out +the model's predicted label when it's not shown. We found rationalization and +feature attribution were helpful in explaining the model behavior. Moreover, +users could more reliably predict the model outcome based on an explanation +dialogue rather than one-off explanations. 
+" +FireAct: Toward Language Agent Fine-tuning,Baian Chen,http://arxiv.org/pdf/2310.05915v1.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05915v1.pdf," Recent efforts have augmented language models (LMs) with external tools or +environments, leading to the development of language agents that can reason and +act. However, most of these agents rely on few-shot prompting techniques with +off-the-shelf LMs. In this paper, we investigate and argue for the overlooked +direction of fine-tuning LMs to obtain language agents. Using a setup of +question answering (QA) with a Google search API, we explore a variety of base +LMs, prompting methods, fine-tuning data, and QA tasks, and find language +agents are consistently improved after fine-tuning their backbone LMs. For +example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 +leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, +a novel approach to fine-tuning LMs with trajectories from multiple tasks and +prompting methods, and show having more diverse fine-tuning data can further +improve agents. Along with other findings regarding scaling effects, +robustness, generalization, efficiency and cost, our work establishes +comprehensive benefits of fine-tuning LMs for agents, and provides an initial +set of experimental designs, insights, as well as open questions toward +language agent fine-tuning. +" +Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning,Duarte M. Alves,http://arxiv.org/pdf/2310.13448v1.pdf,2023-10-20,['cs.cl'],2310.13448v1.pdf," Large language models (LLMs) are a promising avenue for machine translation +(MT). However, current LLM-based MT systems are brittle: their effectiveness +highly depends on the choice of few-shot examples and they often require extra +post-processing due to overgeneration. Alternatives such as finetuning on +translation instructions are computationally expensive and may weaken +in-context learning capabilities, due to overspecialization. In this paper, we +provide a closer look at this problem. We start by showing that adapter-based +finetuning with LoRA matches the performance of traditional finetuning while +reducing the number of training parameters by a factor of 50. This method also +outperforms few-shot prompting and eliminates the need for post-processing or +in-context examples. However, we show that finetuning generally degrades +few-shot performance, hindering adaptation capabilities. Finally, to obtain the +best of both worlds, we propose a simple approach that incorporates few-shot +examples during finetuning. Experiments on 10 language pairs show that our +proposed approach recovers the original few-shot capabilities while keeping the +added benefits of finetuning. +" +On Bilingual Lexicon Induction with Large Language Models,Yaoyiran Li,http://arxiv.org/pdf/2310.13995v1.pdf,2023-10-21,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2310.13995v1.pdf," Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that +still, to a large extent, relies on calculating cross-lingual word +representations. Inspired by the global paradigm shift in NLP towards Large +Language Models (LLMs), we examine the potential of the latest generation of +LLMs for the development of bilingual lexicons. We ask the following research +question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for +BLI, and how does this approach compare against and complement current BLI +approaches? 
To this end, we systematically study 1) zero-shot prompting for +unsupervised BLI and 2) few-shot in-context prompting with a set of seed +translation pairs, both without any LLM fine-tuning, as well as 3) standard +BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source +text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two +standard BLI benchmarks covering a range of typologically diverse languages. +Our work is the first to demonstrate strong BLI capabilities of text-to-text +mLLMs. The results reveal that few-shot prompting with in-context examples from +nearest neighbours achieves the best performance, establishing new +state-of-the-art BLI scores for many language pairs. We also conduct a series +of in-depth analyses and ablation studies, providing more insights on BLI with +(m)LLMs, also along with their limitations. +" +An Early Evaluation of GPT-4V(ision),Yang Wu,http://arxiv.org/pdf/2310.16534v1.pdf,2023-10-25,"['cs.cl', 'cs.cv']",2310.16534v1.pdf," In this paper, we evaluate different abilities of GPT-4V including visual +understanding, language understanding, visual puzzle solving, and understanding +of other modalities such as depth, thermal, video, and audio. To estimate +GPT-4V's performance, we manually construct 656 test instances and carefully +evaluate the results of GPT-4V. The highlights of our findings are as follows: +(1) GPT-4V exhibits impressive performance on English visual-centric benchmarks +but fails to recognize simple Chinese texts in the images; (2) GPT-4V shows +inconsistent refusal behavior when answering questions related to sensitive +traits such as gender, race, and age; (3) GPT-4V obtains worse results than +GPT-4 (API) on language understanding tasks including general language +understanding benchmarks and visual commonsense knowledge evaluation +benchmarks; (4) Few-shot prompting can improve GPT-4V's performance on both +visual understanding and language understanding; (5) GPT-4V struggles to find +the nuances between two similar images and solve the easy math picture puzzles; +(6) GPT-4V shows non-trivial performance on the tasks of similar modalities to +image, such as video and thermal. Our experimental results reveal the ability +and limitations of GPT-4V and we hope our paper can provide some insights into +the application and research of GPT-4V. +" +"""You Are An Expert Linguistic Annotator"": Limits of LLMs as Analyzers of Abstract Meaning Representation",Allyson Ettinger,http://arxiv.org/pdf/2310.17793v1.pdf,2023-10-26,"['cs.cl', 'cs.ai']",2310.17793v1.pdf," Large language models (LLMs) show amazing proficiency and fluency in the use +of language. Does this mean that they have also acquired insightful linguistic +knowledge about the language, to an extent that they can serve as an ""expert +linguistic annotator""? In this paper, we examine the successes and limitations +of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning +structure, focusing on the Abstract Meaning Representation (AMR; Banarescu et +al. 2013) parsing formalism, which provides rich graphical representations of +sentence meaning structure while abstracting away from surface forms. 
We +compare models' analysis of this semantic structure across two settings: 1) +direct production of AMR parses based on zero- and few-shot prompts, and 2) +indirect partial reconstruction of AMR via metalinguistic natural language +queries (e.g., ""Identify the primary event of this sentence, and the predicate +corresponding to that event.""). Across these settings, we find that models can +reliably reproduce the basic format of AMR, and can often capture core event, +argument, and modifier structure -- however, model outputs are prone to +frequent and major errors, and holistic analysis of parse acceptability shows +that even with few-shot demonstrations, models have virtually 0% success in +producing fully accurate parses. Eliciting natural language responses produces +similar patterns of errors. Overall, our findings indicate that these models +out-of-the-box can capture aspects of semantic structure, but there remain key +limitations in their ability to support fully accurate semantic analyses or +parses. +" +Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting,Benjamin Yan,http://arxiv.org/pdf/2310.17811v2.pdf,2023-10-26,"['cs.ai', 'cs.cl']",2310.17811v2.pdf," Automatically generated reports from medical images promise to improve the +workflow of radiologists. Existing methods consider an image-to-report modeling +task by directly generating a fully-fledged report from an image. However, this +conflates the content of the report (e.g., findings and their attributes) with +its style (e.g., format and choice of words), which can lead to clinically +inaccurate reports. To address this, we propose a two-step approach for +radiology report generation. First, we extract the content from an image; then, +we verbalize the extracted content into a report that matches the style of a +specific radiologist. For this, we leverage RadGraph -- a graph representation +of reports -- together with large language models (LLMs). In our quantitative +evaluations, we find that our approach leads to beneficial performance. Our +human evaluation with clinical raters highlights that the AI-generated reports +are indistinguishably tailored to the style of individual radiologist despite +leveraging only a few examples as context. +" +Multi-lingual Evaluation of Code Generation Models,Ben Athiwaratkun,http://arxiv.org/pdf/2210.14868v3.pdf,2022-10-26,"['cs.lg', 'cs.cl']",2210.14868v3.pdf," We present new benchmarks on evaluation code generation models: MBXP and +Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming +languages and are generated using a scalable conversion framework that +transpiles prompts and test cases from the original Python datasets into the +corresponding data in the target language. Using these benchmarks, we are able +to assess the performance of code generation models in a multi-lingual fashion, +and discovered generalization ability of language models on out-of-domain +languages, advantages of multi-lingual models over mono-lingual, the ability of +few-shot prompting to teach the model new languages, and zero-shot translation +abilities even on mono-lingual settings. Furthermore, we use our code +generation model to perform large-scale bootstrapping to obtain synthetic +canonical solutions in several languages, which can be used for other +code-related evaluations such as code insertion, robustness, or summarization +tasks. 
Overall, our benchmarks represents a significant step towards a deeper +understanding of language models' code generation abilities. We publicly +release our code and datasets at https://github.com/amazon-research/mxeval. +" +PAL: Program-aided Language Models,Luyu Gao,http://arxiv.org/pdf/2211.10435v2.pdf,2022-11-18,"['cs.cl', 'cs.ai']",2211.10435v2.pdf," Large language models (LLMs) have recently demonstrated an impressive ability +to perform arithmetic and symbolic reasoning tasks, when provided with a few +examples at test time (""few-shot prompting""). Much of this success can be +attributed to prompting methods such as ""chain-of-thought'', which employ LLMs +for both understanding the problem description by decomposing it into steps, as +well as solving each step of the problem. While LLMs seem to be adept at this +sort of step-by-step decomposition, LLMs often make logical and arithmetic +mistakes in the solution part, even when the problem is decomposed correctly. +In this paper, we present Program-Aided Language models (PAL): a novel approach +that uses the LLM to read natural language problems and generate programs as +the intermediate reasoning steps, but offloads the solution step to a runtime +such as a Python interpreter. With PAL, decomposing the natural language +problem into runnable steps remains the only learning task for the LLM, while +solving is delegated to the interpreter. We demonstrate this synergy between a +neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and +algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all +these natural language reasoning tasks, generating code using an LLM and +reasoning using a Python interpreter leads to more accurate results than much +larger models. For example, PAL using Codex achieves state-of-the-art few-shot +accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B +which uses chain-of-thought by absolute 15% top-1. Our code and data are +publicly available at http://reasonwithpal.com/ . +" +Learning Performance-Improving Code Edits,Alexander Shypula,http://arxiv.org/pdf/2302.07867v4.pdf,2023-02-15,"['cs.se', 'cs.ai', 'cs.lg', 'cs.pf']",2302.07867v4.pdf," With the waning of Moore's law, optimizing program performance has become a +major focus of software research. However, high-level optimizations such as API +and algorithm changes remain elusive due to the difficulty of understanding the +semantics of code. Simultaneously, pretrained large language models (LLMs) have +demonstrated strong capabilities at solving a wide range of programming tasks. +To that end, we introduce a framework for adapting LLMs to high-level program +optimization. First, we curate a dataset of performance-improving edits made by +human programmers of over 77K competitive C++ programming submission pairs, +accompanied by extensive unit tests. A major challenge is the significant +variability of measuring performance on commodity hardware, which can lead to +spurious ""improvements"". To isolate and reliably evaluate the impact of program +optimizations, we design an environment based on the gem5 full system +simulator, the de facto simulator used in academia and industry. Next, we +propose a broad range of adaptation strategies for code optimization; for +prompting, these include retrieval-based few-shot prompting and +chain-of-thought, and for finetuning, these include performance-conditioned +generation and synthetic data augmentation based on self-play. 
A combination of +these techniques achieves an average speedup of 5.65X on CodeLlama-13B and +6.86X on GPT-3.5, surpassing the best human performance (4.06X). We find our +proposed performance-conditioned generation is particularly effective at +improving performance as well as increasing the fraction of optimized programs. +" +Large Language Models for User Interest Journeys,Konstantina Christakopoulou,http://arxiv.org/pdf/2305.15498v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.ir']",2305.15498v1.pdf," Large language models (LLMs) have shown impressive capabilities in natural +language understanding and generation. Their potential for deeper user +understanding and improved personalized user experience on recommendation +platforms is, however, largely untapped. This paper aims to address this gap. +Recommender systems today capture users' interests through encoding their +historical activities on the platforms. The generated user representations are +hard to examine or interpret. On the other hand, if we were to ask people about +interests they pursue in their life, they might talk about their hobbies, like +I just started learning the ukulele, or their relaxation routines, e.g., I like +to watch Saturday Night Live, or I want to plant a vertical garden. We argue, +and demonstrate through extensive experiments, that LLMs as foundation models +can reason through user activities, and describe their interests in nuanced and +interesting ways, similar to how a human would. + We define interest journeys as the persistent and overarching user interests, +in other words, the non-transient ones. These are the interests that we believe +will benefit most from the nuanced and personalized descriptions. We introduce +a framework in which we first perform personalized extraction of interest +journeys, and then summarize the extracted journeys via LLMs, using techniques +like few-shot prompting, prompt-tuning and fine-tuning. Together, our results +in prompting LLMs to name extracted user journeys in a large-scale industrial +platform demonstrate great potential of these models in providing deeper, more +interpretable, and controllable user understanding. We believe LLM powered user +understanding can be a stepping stone to entirely new user experiences on +recommendation platforms that are journey-aware, assistive, and enabling +frictionless conversation down the line. +" +Passive learning of active causal strategies in agents and language models,Andrew Kyle Lampinen,http://arxiv.org/pdf/2305.16183v2.pdf,2023-05-25,"['cs.lg', 'cs.ai', 'cs.cl']",2305.16183v2.pdf," What can be learned about causality and experimentation from passive data? +This question is salient given recent successes of passively-trained language +models in interactive domains such as tool use. Passive learning is inherently +limited. However, we show that purely passive learning can in fact allow an +agent to learn generalizable strategies for determining and using causal +structures, as long as the agent can intervene at test time. We formally +illustrate that learning a strategy of first experimenting, then seeking goals, +can allow generalization from passive learning in principle. We then show +empirically that agents trained via imitation on expert data can indeed +generalize at test time to infer and use causal links which are never present +in the training data; these agents can also generalize experimentation +strategies to novel variable sets never observed in training. 
We then show that +strategies for causal intervention and exploitation can be generalized from +passive data even in a more complex environment with high-dimensional +observations, with the support of natural language explanations. Explanations +can even allow passive learners to generalize out-of-distribution from +perfectly-confounded training data. Finally, we show that language models, +trained only on passive next-word prediction, can generalize causal +intervention strategies from a few-shot prompt containing examples of +experimentation, together with explanations and reasoning. These results +highlight the surprising power of passive learning of active causal strategies, +and may help to understand the behaviors and capabilities of language models. +" +Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models,Cheng-Yu Hsieh,http://arxiv.org/pdf/2308.00675v1.pdf,2023-08-01,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2308.00675v1.pdf," Today, large language models (LLMs) are taught to use new tools by providing +a few demonstrations of the tool's usage. Unfortunately, demonstrations are +hard to acquire, and can result in undesirable biased usage if the wrong +demonstration is chosen. Even in the rare scenario that demonstrations are +readily available, there is no principled selection protocol to determine how +many and which ones to provide. As tasks grow more complex, the selection +search grows combinatorially and invariably becomes intractable. Our work +provides an alternative to demonstrations: tool documentation. We advocate the +use of tool documentation, descriptions for the individual tool usage, over +demonstrations. We substantiate our claim through three main empirical findings +on 6 tasks across both vision and language modalities. First, on existing +benchmarks, zero-shot prompts with only tool documentation are sufficient for +eliciting proper tool usage, achieving performance on par with few-shot +prompts. Second, on a newly collected realistic tool-use dataset with hundreds +of available tool APIs, we show that tool documentation is significantly more +valuable than demonstrations, with zero-shot documentation significantly +outperforming few-shot without documentation. Third, we highlight the benefits +of tool documentations by tackling image generation and video tracking using +just-released unseen state-of-the-art models as tools. Finally, we highlight +the possibility of using tool documentation to automatically enable new +applications: by using nothing more than the documentation of GroundingDino, +Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the +just-released Grounded-SAM and Track Anything models. +" +MathAttack: Attacking Large Language Models Towards Math Solving Ability,Zihao Zhou,http://arxiv.org/pdf/2309.01686v1.pdf,2023-09-04,['cs.cl'],2309.01686v1.pdf," With the boom of Large Language Models (LLMs), the research of solving Math +Word Problem (MWP) has recently made great progress. However, there are few +studies to examine the security of LLMs in math solving ability. Instead of +attacking prompts in the use of LLMs, we propose a MathAttack model to attack +MWP samples which are closer to the essence of security in solving math +problems. Compared to traditional text adversarial attack, it is essential to +preserve the mathematical logic of original MWPs during the attacking. To this +end, we propose logical entity recognition to identify logical entries which +are then frozen. 
Subsequently, the remaining text are attacked by adopting a +word-level attacker. Furthermore, we propose a new dataset RobustMath to +evaluate the robustness of LLMs in math solving ability. Extensive experiments +on our RobustMath and two another math benchmark datasets GSM8K and MultiAirth +show that MathAttack could effectively attack the math solving ability of LLMs. +In the experiments, we observe that (1) Our adversarial samples from +higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy +(e.g., transfer from larger to smaller-size LLMs, or from few-shot to zero-shot +prompts); (2) Complex MWPs (such as more solving steps, longer text, more +numbers) are more vulnerable to attack; (3) We can improve the robustness of +LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our +practice and observation can serve as an important attempt towards enhancing +the robustness of LLMs in math solving ability. We will release our code and +dataset. +" +MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models,Kailai Yang,http://arxiv.org/pdf/2309.13567v2.pdf,2023-09-24,['cs.cl'],2309.13567v2.pdf," With the development of web technology, social media texts are becoming a +rich source for automatic mental health analysis. As traditional discriminative +methods bear the problem of low interpretability, the recent large language +models have been explored for interpretable mental health analysis on social +media, which aims to provide detailed explanations along with predictions. The +results show that ChatGPT can generate approaching-human explanations for its +correct classifications. However, LLMs still achieve unsatisfactory +classification performance in a zero-shot/few-shot manner. Domain-specific +finetuning is an effective solution, but faces 2 challenges: 1) lack of +high-quality training data. 2) no open-source LLMs for interpretable mental +health analysis were released to lower the finetuning cost. To alleviate these +problems, we build the first multi-task and multi-source interpretable mental +health instruction (IMHI) dataset on social media, with 105K data samples. The +raw social media data are collected from 10 existing sources covering 8 mental +health analysis tasks. We use expert-written few-shot prompts and collected +labels to prompt ChatGPT and obtain explanations from its responses. To ensure +the reliability of the explanations, we perform strict automatic and human +evaluations on the correctness, consistency, and quality of generated data. +Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA, +the first open-source LLM series for interpretable mental health analysis with +instruction-following capability. We also evaluate the performance of +MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where their +correctness for making predictions and the quality of explanations are +examined. The results show that MentalLLaMA approaches state-of-the-art +discriminative methods in correctness and generates high-quality explanations. +" +FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation,Tu Vu,http://arxiv.org/pdf/2310.03214v1.pdf,2023-10-05,['cs.cl'],2310.03214v1.pdf," Most large language models (LLMs) are trained once and never updated; thus, +they lack the ability to dynamically adapt to our ever-changing world. 
In this +work, we perform a detailed study of the factuality of LLM-generated text in +the context of answering questions that test current world knowledge. +Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a +diverse range of question and answer types, including questions that require +fast-changing world knowledge as well as questions with false premises that +need to be debunked. We benchmark a diverse array of both closed and +open-source LLMs under a two-mode evaluation procedure that allows us to +measure both correctness and hallucination. Through human evaluations involving +more than 50K judgments, we shed light on limitations of these models and +demonstrate significant room for improvement: for instance, all models +(regardless of model size) struggle on questions that involve fast-changing +knowledge and false premises. Motivated by these results, we present +FreshPrompt, a simple few-shot prompting method that substantially boosts the +performance of an LLM on FreshQA by incorporating relevant and up-to-date +information retrieved from a search engine into the prompt. Our experiments +show that FreshPrompt outperforms both competing search engine-augmented +prompting methods such as Self-Ask (Press et al., 2022) as well as commercial +systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that +both the number of retrieved evidences and their order play a key role in +influencing the correctness of LLM-generated answers. Additionally, instructing +the LLM to generate concise and direct answers helps reduce hallucination +compared to encouraging more verbose answers. To facilitate future work, we +release FreshQA at github.com/freshllms/freshqa and commit to updating it at +regular intervals. +" +A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT,Ce Zhou,http://arxiv.org/pdf/2302.09419v3.pdf,2023-02-18,"['cs.ai', 'cs.cl', 'cs.lg']",2302.09419v3.pdf," Pretrained Foundation Models (PFMs) are regarded as the foundation for +various downstream tasks with different data modalities. A PFM (e.g., BERT, +ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable +parameter initialization for a wide range of downstream applications. BERT +learns bidirectional encoder representations from Transformers, which are +trained on large datasets as contextual language models. Similarly, the +generative pretrained transformer (GPT) method employs Transformers as the +feature extractor and is trained using an autoregressive paradigm on large +datasets. Recently, ChatGPT shows promising success on large language models, +which applies an autoregressive language model with zero shot or few shot +prompting. The remarkable achievements of PFM have brought significant +breakthroughs to various fields of AI. Numerous studies have proposed different +methods, raising the demand for an updated survey. This study provides a +comprehensive review of recent research advancements, challenges, and +opportunities for PFMs in text, image, graph, as well as other data modalities. +The review covers the basic components and existing pretraining methods used in +natural language processing, computer vision, and graph learning. Additionally, +it explores advanced PFMs used for different data modalities and unified PFMs +that consider data quality and quantity. The review also discusses research +related to the fundamentals of PFMs, such as model efficiency and compression, +security, and privacy. 
Finally, the study provides key implications, future +research directions, challenges, and open problems in the field of PFMs. +Overall, this survey aims to shed light on the research of the PFMs on +scalability, security, logical reasoning ability, cross-domain learning +ability, and the user-friendly interactive ability for artificial general +intelligence. +" +Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model,Su-Youn Yoon,http://arxiv.org/pdf/2305.18638v1.pdf,2023-05-29,"['cs.cl', 'i.2.7']",2305.18638v1.pdf," In this study, we developed an automated short answer grading (ASAG) model +that provided both analytic scores and final holistic scores. Short answer +items typically consist of multiple sub-questions, and providing an analytic +score and the text span relevant to each sub-question can increase the +interpretability of the automated scores. Furthermore, they can be used to +generate actionable feedback for students. Despite these advantages, most +studies have focused on predicting only holistic scores due to the difficulty +in constructing dataset with manual annotations. To address this difficulty, we +used large language model (LLM)-based one-shot prompting and a text similarity +scoring model with domain adaptation using small manually annotated dataset. +The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a +subset of the publicly available ASAG dataset. The model achieved a substantial +improvement over the majority baseline. +" +DePlot: One-shot visual language reasoning by plot-to-table translation,Fangyu Liu,http://arxiv.org/pdf/2212.10505v2.pdf,2022-12-20,"['cs.cl', 'cs.ai', 'cs.cv']",2212.10505v2.pdf," Visual language such as charts and plots is ubiquitous in the human world. +Comprehending plots and charts requires strong reasoning skills. Prior +state-of-the-art (SOTA) models require at least tens of thousands of training +examples and their reasoning capabilities are still much limited, especially on +complex human-written queries. This paper presents the first one-shot solution +to visual language reasoning. We decompose the challenge of visual language +reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over +the translated text. The key in this method is a modality conversion module, +named as DePlot, which translates the image of a plot or chart to a linearized +table. The output of DePlot can then be directly used to prompt a pretrained +large language model (LLM), exploiting the few-shot reasoning capabilities of +LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing +unified task formats and metrics, and train DePlot end-to-end on this task. +DePlot can then be used off-the-shelf together with LLMs in a plug-and-play +fashion. Compared with a SOTA model finetuned on more than >28k data points, +DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over +finetuned SOTA on human-written queries from the task of chart QA. 
+" +CHAI-DT: A Framework for Prompting Conversational Generative AI Agents to Actively Participate in Co-Creation,Brandon Harwood,http://arxiv.org/pdf/2305.03852v1.pdf,2023-05-05,"['cs.hc', 'cs.ai']",2305.03852v1.pdf," This paper explores the potential for utilizing generative AI models in +group-focused co-creative frameworks to enhance problem solving and ideation in +business innovation and co-creation contexts, and proposes a novel prompting +technique for conversational generative AI agents which employ methods inspired +by traditional 'human-to-human' facilitation and instruction to enable active +contribution to Design Thinking, a co-creative framework. Through experiments +using this prompting technique, we gather evidence that conversational +generative transformers (i.e. ChatGPT) have the capability to contribute +context-specific, useful, and creative input into Design Thinking activities. +We also discuss the potential benefits, limitations, and risks associated with +using generative AI models in co-creative ideation and provide recommendations +for future research. +" +AceCoder: Utilizing Existing Code to Enhance Code Generation,Jia Li,http://arxiv.org/pdf/2303.17780v3.pdf,2023-03-31,"['cs.se', 'cs.ai']",2303.17780v3.pdf," Large Language Models (LLMs) have shown great success in code generation. +LLMs take as the input a prompt and output the code. A key question is how to +make prompts (i.e., Prompting Techniques). Existing prompting techniques are +designed for natural language generation and have low accuracy in code +generation. + In this paper, we propose a new prompting technique named AceCoder. Our +motivation is that code generation meets two unique challenges (i.e., +requirement understanding and code implementation). AceCoder contains two novel +mechanisms (i.e., guided code generation and example retrieval) to solve these +challenges. (1) Guided code generation asks LLMs first to analyze requirements +and output an intermediate preliminary (e.g., test cases). The preliminary is +used to clarify requirements and tell LLMs ""what to write"". (2) Example +retrieval selects similar programs as examples in prompts, which provide lots +of relevant content (e.g., algorithms, APIs) and teach LLMs ""how to write"". We +apply AceCoder to three LLMs (e.g., Codex) and evaluate it on three public +benchmarks using the Pass@k. Results show that AceCoder can significantly +improve the performance of LLMs on code generation. (1) In terms of Pass@1, +AceCoder outperforms the state-of-the-art baseline by up to 56.4% in MBPP, +70.7% in MBJP, and 88.4% in MBJSP. (2) AceCoder is effective in LLMs with +different sizes (i.e., 6B to 13B) and different languages (i.e., Python, Java, +and JavaScript). (3) Human evaluation shows human developers prefer programs +from AceCoder. +" +Compositional Semantic Parsing with Large Language Models,Andrew Drozdov,http://arxiv.org/pdf/2209.15003v2.pdf,2022-09-29,"['cs.cl', 'cs.ai']",2209.15003v2.pdf," Humans can reason compositionally when presented with new tasks. Previous +research shows that appropriate prompting techniques enable large language +models (LLMs) to solve artificial compositional generalization tasks such as +SCAN. In this work, we identify additional challenges in more realistic +semantic parsing tasks with larger vocabulary and refine these prompting +techniques to address them. 
Our best method is based on least-to-most +prompting: it decomposes the problem using prompting-based syntactic parsing, +then uses this decomposition to select appropriate exemplars and to +sequentially generate the semantic parse. This method allows us to set a new +state of the art for CFQ while requiring only 1% of the training data used by +traditional approaches. Due to the general nature of our approach, we expect +similar efforts will lead to new results in other tasks and domains, especially +for knowledge-intensive applications. +" +EvEntS ReaLM: Event Reasoning of Entity States via Language Models,Evangelia Spiliopoulou,http://arxiv.org/pdf/2211.05392v1.pdf,2022-11-10,['cs.cl'],2211.05392v1.pdf," This paper investigates models of event implications. Specifically, how well +models predict entity state-changes, by targeting their understanding of +physical attributes. Nominally, Large Language models (LLM) have been exposed +to procedural knowledge about how objects interact, yet our benchmarking shows +they fail to reason about the world. Conversely, we also demonstrate that +existing approaches often misrepresent the surprising abilities of LLMs via +improper task encodings and that proper model prompting can dramatically +improve performance of reported baseline results across multiple tasks. In +particular, our results indicate that our prompting technique is especially +useful for unseen attributes (out-of-domain) or when only limited data is +available. +" +GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4,Tom Kocmi,http://arxiv.org/pdf/2310.13988v1.pdf,2023-10-21,['cs.cl'],2310.13988v1.pdf," This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to +detect translation quality errors, specifically for the quality estimation +setting without the need for human reference translations. Based on the power +of large language models (LLM), GEMBA-MQM employs a fixed three-shot prompting +technique, querying the GPT-4 model to mark error quality spans. Compared to +previous works, our method has language-agnostic prompts, thus avoiding the +need for manual prompt preparation for new languages. + While preliminary results indicate that GEMBA-MQM achieves state-of-the-art +accuracy for system ranking, we advise caution when using it in academic works +to demonstrate improvements over other methods due to its dependence on the +proprietary, black-box GPT model. +" +Utilizing Language Models for Energy Load Forecasting,Hao Xue,http://arxiv.org/pdf/2310.17788v1.pdf,2023-10-26,"['cs.ai', 'cs.cl']",2310.17788v1.pdf," Energy load forecasting plays a crucial role in optimizing resource +allocation and managing energy consumption in buildings and cities. In this +paper, we propose a novel approach that leverages language models for energy +load forecasting. We employ prompting techniques to convert energy consumption +data into descriptive sentences, enabling fine-tuning of language models. By +adopting an autoregressive generating approach, our proposed method enables +predictions of various horizons of future energy load consumption. Through +extensive experiments on real-world datasets, we demonstrate the effectiveness +and accuracy of our proposed method. Our results indicate that utilizing +language models for energy load forecasting holds promise for enhancing energy +efficiency and facilitating intelligent decision-making in energy systems. 
+" +Eliciting Topic Hierarchies from Large Language Models,Grace Li,http://arxiv.org/pdf/2310.19275v1.pdf,2023-10-30,['cs.hc'],2310.19275v1.pdf," Finding topics to write about can be a mentally demanding process. However, +topic hierarchies can help writers explore topics of varying levels of +specificity. In this paper, we use large language models (LLMs) to help +construct topic hierarchies. Although LLMs have access to such knowledge, it +can be difficult to elicit due to issues of specificity, scope, and repetition. +We designed and tested three different prompting techniques to find one that +maximized accuracy. We found that prepending the general topic area to a prompt +yielded the most accurate results with 85% accuracy. We discuss applications of +this research including STEM writing, education, and content creation. +" +Structured Chain-of-Thought Prompting for Code Generation,Jia Li,http://arxiv.org/pdf/2305.06599v3.pdf,2023-05-11,"['cs.se', 'cs.cl']",2305.06599v3.pdf," Large Language Models (LLMs) (e.g., ChatGPT) have shown impressive +performance in code generation. LLMs take prompts as inputs, and +Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique. +CoT prompting asks LLMs first to generate CoTs (i.e., intermediate natural +language reasoning steps) and then output the code. However, CoT prompting is +designed for natural language generation and has low accuracy in code +generation. + In this paper, we propose Structured CoTs (SCoTs) and present a novel +prompting technique for code generation, named SCoT prompting. Our motivation +is source code contains rich structural information and any code can be +composed of three program structures (i.e., sequence, branch, and loop +structures). Intuitively, structured intermediate reasoning steps make for +structured source code. Thus, we ask LLMs to use program structures to build +CoTs, obtaining SCoTs. Then, LLMs generate the final code based on SCoTs. +Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to think +about how to solve requirements from the view of source code and further the +performance of LLMs in code generation. We apply SCoT prompting to two LLMs +(i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval, +MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline +- CoT prompting by up to 13.79% in Pass@1. (2) Human evaluation shows human +developers prefer programs from SCoT prompting. (3) SCoT prompting is robust to +examples and achieves substantial improvements. +" +The Impact of AI in Physics Education: A Comprehensive Review from GCSE to University Levels,Will Yeadon,http://arxiv.org/pdf/2309.05163v1.pdf,2023-09-10,['physics.ed-ph'],2309.05163v1.pdf," With the rapid evolution of Artificial Intelligence (AI), its potential +implications for higher education have become a focal point of interest. This +study delves into the capabilities of AI in Physics Education and offers +actionable AI policy recommendations. Using a Large Language Model (LLM), we +assessed its ability to answer 1337 Physics exam questions spanning GCSE, +A-Level, and Introductory University curricula. We employed various AI +prompting techniques: Zero Shot, In Context Learning, and Confirmatory +Checking, which merges Chain of Thought reasoning with Reflection. 
The AI's +proficiency varied across academic levels: it scored an average of 83.4% on +GCSE, 63.8% on A-Level, and 37.4% on university-level questions, with an +overall average of 59.9% using the most effective prompting technique. In a +separate test, the LLM's accuracy on 5000 mathematical operations was found to +decrease as the number of digits increased. Furthermore, when evaluated as a +marking tool, the LLM's concordance with human markers averaged at 50.8%, with +notable inaccuracies in marking straightforward questions, like +multiple-choice. Given these results, our recommendations underscore caution: +while current LLMs can consistently perform well on Physics questions at +earlier educational stages, their efficacy diminishes with advanced content and +complex calculations. LLM outputs often showcase novel methods not in the +syllabus, excessive verbosity, and miscalculations in basic arithmetic. This +suggests that at university, there's no substantial threat from LLMs for +non-invigilated Physics questions. However, given the LLMs' considerable +proficiency in writing Physics essays and coding abilities, non-invigilated +examinations of these skills in Physics are highly vulnerable to automated +completion by LLMs. This vulnerability also extends to Physics questions +pitched at lower academic levels. +" +HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models,Swaroop Mishra,http://arxiv.org/pdf/2208.08232v2.pdf,2022-08-17,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.hc', 'cs.lg']",2208.08232v2.pdf," Controlling the text generated by language models and customizing the content +has been a long-standing challenge. Existing prompting techniques proposed in +pursuit of providing control are task-specific and lack generality; this +provides overwhelming choices for non-expert users to find a suitable method +for their task. The effort associated with those techniques, such as in writing +examples, explanations, instructions, etc. further limits their adoption among +non-expert users. In this paper, we propose a simple prompting strategy HELP ME +THINK where we encourage GPT3 to help non-expert users by asking a set of +relevant questions and leveraging user answers to execute the task. We +demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks. +Specifically, we focus on tasks that are hard for average humans and require +significant thinking to perform. We hope our work will encourage the +development of unconventional ways to harness the power of large language +models. +" +Enabling Conversational Interaction with Mobile UI using Large Language Models,Bryan Wang,http://arxiv.org/pdf/2209.08655v2.pdf,2022-09-18,"['cs.hc', 'cs.ai']",2209.08655v2.pdf," Conversational agents show the promise to allow users to interact with mobile +devices using language. However, to perform diverse UI tasks with natural +language, developers typically need to create separate datasets and models for +each specific task, which is expensive and effort-consuming. Recently, +pre-trained large language models (LLMs) have been shown capable of +generalizing to various downstream tasks when prompted with a handful of +examples from the target task. This paper investigates the feasibility of +enabling versatile conversational interactions with mobile UIs using a single +LLM. We designed prompting techniques to adapt an LLM to mobile UIs. We +experimented with four important modeling tasks that address various scenarios +in conversational interaction. 
Our method achieved competitive performance on +these challenging tasks without requiring dedicated datasets and training, +offering a lightweight and generalizable approach to enable language-based +mobile interaction. +" +Teaching Algorithmic Reasoning via In-context Learning,Hattie Zhou,http://arxiv.org/pdf/2211.09066v1.pdf,2022-11-15,"['cs.lg', 'cs.ai', 'cs.cl']",2211.09066v1.pdf," Large language models (LLMs) have shown increasing in-context learning +capabilities through scaling up model and data size. Despite this progress, +LLMs are still unable to solve algorithmic reasoning problems. While providing +a rationale with the final answer has led to further improvements in multi-step +reasoning problems, Anil et al. 2022 showed that even simple algorithmic +reasoning tasks such as parity are far from solved. In this work, we identify +and study four key stages for successfully teaching algorithmic reasoning to +LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills +simultaneously (skill accumulation), (3) teaching how to combine skills (skill +composition) and (4) teaching how to use skills as tools. We show that it is +possible to teach algorithmic reasoning to LLMs via in-context learning, which +we refer to as algorithmic prompting. We evaluate our approach on a variety of +arithmetic and quantitative reasoning tasks, and demonstrate significant boosts +in performance over existing prompting techniques. In particular, for long +parity, addition, multiplication and subtraction, we achieve an error reduction +of approximately 10x, 9x, 5x and 2x respectively compared to the best available +baselines. +" +Understanding Stereotypes in Language Models: Towards Robust Measurement and Zero-Shot Debiasing,Justus Mattern,http://arxiv.org/pdf/2212.10678v1.pdf,2022-12-20,"['cs.cl', 'cs.lg']",2212.10678v1.pdf," Generated texts from large pretrained language models have been shown to +exhibit a variety of harmful, human-like biases about various demographics. +These findings prompted large efforts aiming to understand and measure such +effects, with the goal of providing benchmarks that can guide the development +of techniques mitigating these stereotypical associations. However, as recent +research has pointed out, the current benchmarks lack a robust experimental +setup, consequently hindering the inference of meaningful conclusions from +their evaluation metrics. In this paper, we extend these arguments and +demonstrate that existing techniques and benchmarks aiming to measure +stereotypes tend to be inaccurate and consist of a high degree of experimental +noise that severely limits the knowledge we can gain from benchmarking language +models based on them. Accordingly, we propose a new framework for robustly +measuring and quantifying biases exhibited by generative language models. +Finally, we use this framework to investigate GPT-3's occupational gender bias +and propose prompting techniques for mitigating these biases without the need +for fine-tuning. +" +Image To Tree with Recursive Prompting,James Batten,http://arxiv.org/pdf/2301.00447v1.pdf,2023-01-01,"['cs.cv', 'cs.lg']",2301.00447v1.pdf," Extracting complex structures from grid-based data is a common key step in +automated medical image analysis. The conventional solution to recovering +tree-structured geometries typically involves computing the minimal cost path +through intermediate representations derived from segmentation masks. 
However, +this methodology has significant limitations in the context of projective +imaging of tree-structured 3D anatomical data such as coronary arteries, since +there are often overlapping branches in the 2D projection. In this work, we +propose a novel approach to predicting tree connectivity structure which +reformulates the task as an optimization problem over individual steps of a +recursive process. We design and train a two-stage model which leverages the +UNet and Transformer architectures and introduces an image-based prompting +technique. Our proposed method achieves compelling results on a pair of +synthetic datasets, and outperforms a shortest-path baseline. +" +Large Language Models Can Be Easily Distracted by Irrelevant Context,Freda Shi,http://arxiv.org/pdf/2302.00093v3.pdf,2023-01-31,"['cs.cl', 'cs.ai']",2302.00093v3.pdf," Large language models have achieved impressive performance on various natural +language processing tasks. However, so far they have been evaluated primarily +on benchmarks where all information in the input context is relevant for +solving the task. In this work, we investigate the distractibility of large +language models, i.e., how the model problem-solving accuracy can be influenced +by irrelevant context. In particular, we introduce Grade-School Math with +Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant +information in the problem description. We use this benchmark to measure the +distractibility of cutting-edge prompting techniques for large language models, +and find that the model performance is dramatically decreased when irrelevant +information is included. We also identify several approaches for mitigating +this deficiency, such as decoding with self-consistency and adding to the +prompt an instruction that tells the language model to ignore the irrelevant +information. +" +Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models,Zhihong Shao,http://arxiv.org/pdf/2302.00618v1.pdf,2023-02-01,['cs.cl'],2302.00618v1.pdf," Large language models can perform various reasoning tasks by using +chain-of-thought prompting, which guides them to find answers through +step-by-step demonstrations. However, the quality of the prompts depends on the +demonstrations given to the models, and creating many of them by hand is +costly. We introduce Synthetic prompting, a method that leverages a few +handcrafted examples to prompt the model to generate more examples by itself, +and selects effective demonstrations to elicit better reasoning. Our method +alternates between a backward and forward process to generate new examples. The +backward process generates a question that match a sampled reasoning chain, so +that the question is solvable and clear. The forward process produces a more +detailed reasoning chain for the question, improving the quality of the +example. We evaluate our method on numerical, symbolic, and algorithmic +reasoning tasks, and show that it outperforms existing prompting techniques. +" +Language-Specific Representation of Emotion-Concept Knowledge Causally Supports Emotion Inference,Ming Li,http://arxiv.org/pdf/2302.09582v4.pdf,2023-02-19,"['cs.ai', 'cs.cl']",2302.09582v4.pdf," Understanding how language supports emotion inference remains a topic of +debate in emotion science. 
The present study investigated whether +language-derived emotion-concept knowledge would causally support emotion +inference by manipulating the language-specific knowledge representations in +large language models. Using the prompt technique, 14 attributes of emotion +concepts were found to be represented by distinct artificial neuron +populations. By manipulating these attribute-related neurons, the majority of +the emotion inference tasks showed performance deterioration compared to random +manipulations. The attribute-specific performance deterioration was related to +the importance of different attributes in human mental space. Our findings +provide causal evidence in support of a language-based mechanism for emotion +inference and highlight the contributions of emotion-concept knowledge. +" +MathPrompter: Mathematical Reasoning using Large Language Models,Shima Imani,http://arxiv.org/pdf/2303.05398v1.pdf,2023-03-04,"['cs.cl', 'cs.ai']",2303.05398v1.pdf," Large Language Models (LLMs) have limited performance when solving arithmetic +reasoning tasks and often provide incorrect answers. Unlike natural language +understanding, math problems typically have a single correct answer, making the +task of generating accurate solutions more challenging for LLMs. To the best of +our knowledge, we are not aware of any LLMs that indicate their level of +confidence in their responses which fuels a trust deficit in these models +impeding their adoption. To address this deficiency, we propose `MathPrompter', +a technique that improves performance of LLMs on arithmetic problems along with +increased reliance in the predictions. MathPrompter uses the Zero-shot +chain-of-thought prompting technique to generate multiple Algebraic expressions +or Python functions to solve the same math problem in different ways and +thereby raise the confidence level in the output results. This is in contrast +to other prompt based CoT methods, where there is no check on the validity of +the intermediate steps followed. Our technique improves over state-of-the-art +on the MultiArith dataset ($78.7\%\rightarrow92.5\%$) evaluated using 175B +parameter GPT-based LLM. +" +Zero-shot Temporal Relation Extraction with ChatGPT,Chenhan Yuan,http://arxiv.org/pdf/2304.05454v1.pdf,2023-04-11,"['cs.cl', 'cs.ai']",2304.05454v1.pdf," The goal of temporal relation extraction is to infer the temporal relation +between two events in the document. Supervised models are dominant in this +task. In this work, we investigate ChatGPT's ability on zero-shot temporal +relation extraction. We designed three different prompt techniques to break +down the task and evaluate ChatGPT. Our experiments show that ChatGPT's +performance has a large gap with that of supervised methods and can heavily +rely on the design of prompts. We further demonstrate that ChatGPT can infer +more small relation classes correctly than supervised methods. The current +shortcomings of ChatGPT on temporal relation extraction are also discussed in +this paper. We found that ChatGPT cannot keep consistency during temporal +inference and it fails in actively long-dependency temporal inference. +" +An Empirical Study on the Robustness of the Segment Anything Model (SAM),Yuqing Wang,http://arxiv.org/pdf/2305.06422v2.pdf,2023-05-10,['cs.cv'],2305.06422v2.pdf," The Segment Anything Model (SAM) is a foundation model for general image +segmentation. 
Although it exhibits impressive performance predominantly on +natural images, understanding its robustness against various image +perturbations and domains is critical for real-world applications where such +challenges frequently arise. In this study we conduct a comprehensive +robustness investigation of SAM under diverse real-world conditions. Our +experiments encompass a wide range of image perturbations. Our experimental +results demonstrate that SAM's performance generally declines under perturbed +images, with varying degrees of vulnerability across different perturbations. +By customizing prompting techniques and leveraging domain knowledge based on +the unique characteristics of each dataset, the model's resilience to these +perturbations can be enhanced, addressing dataset-specific challenges. This +work sheds light on the limitations and strengths of SAM in real-world +applications, promoting the development of more robust and versatile image +segmentation solutions. +" +SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables,Xinyuan Lu,http://arxiv.org/pdf/2305.13186v3.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13186v3.pdf," Current scientific fact-checking benchmarks exhibit several shortcomings, +such as biases arising from crowd-sourced claims and an over-reliance on +text-based evidence. We present SCITAB, a challenging evaluation dataset +consisting of 1.2K expert-verified scientific claims that 1) originate from +authentic scientific publications and 2) require compositional reasoning for +verification. The claims are paired with evidence-containing scientific tables +annotated with labels. Through extensive evaluations, we demonstrate that +SCITAB poses a significant challenge to state-of-the-art models, including +table-based pretraining models and large language models. All models except +GPT-4 achieved performance barely above random guessing. Popular prompting +techniques, such as Chain-of-Thought, do not achieve much performance gains on +SCITAB. Our analysis uncovers several unique challenges posed by SCITAB, +including table grounding, claim ambiguity, and compositional reasoning. Our +codes and data are publicly available at https://github.com/XinyuanLu00/SciTab. +" +Unraveling ChatGPT: A Critical Analysis of AI-Generated Goal-Oriented Dialogues and Annotations,Tiziano Labruna,http://arxiv.org/pdf/2305.14556v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14556v1.pdf," Large pre-trained language models have exhibited unprecedented capabilities +in producing high-quality text via prompting techniques. This fact introduces +new possibilities for data collection and annotation, particularly in +situations where such data is scarce, complex to gather, expensive, or even +sensitive. In this paper, we explore the potential of these models to generate +and annotate goal-oriented dialogues, and conduct an in-depth analysis to +evaluate their quality. Our experiments employ ChatGPT, and encompass three +categories of goal-oriented dialogues (task-oriented, collaborative, and +explanatory), two generation modes (interactive and one-shot), and two +languages (English and Italian). Based on extensive human-based evaluations, we +demonstrate that the quality of generated dialogues and annotations is on par +with those generated by humans. 
+" +StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code,Hannah McLean Babe,http://arxiv.org/pdf/2306.04556v1.pdf,2023-06-07,"['cs.lg', 'cs.hc', 'cs.se']",2306.04556v1.pdf," Code LLMs are being rapidly deployed and there is evidence that they can make +professional programmers more productive. Current benchmarks for code +generation measure whether models generate correct programs given an expert +prompt. In this paper, we present a new benchmark containing multiple prompts +per problem, written by a specific population of non-expert prompters: +beginning programmers. StudentEval contains 1,749 prompts for 48 problems, +written by 80 students who have only completed one semester of Python +programming. Our students wrote these prompts while working interactively with +a Code LLM, and we observed very mixed success rates. We use StudentEval to +evaluate 5 Code LLMs and find that StudentEval is a better discriminator of +model performance than existing benchmarks. We analyze the prompts and find +significant variation in students' prompting techniques. We also find that +nondeterministic LLM sampling could mislead students into thinking that their +prompts are more (or less) effective than they actually are, which has +implications for how to teach with Code LLMs. +" +Knowledge-Prompted Estimator: A Novel Approach to Explainable Machine Translation Assessment,Hao Yang,http://arxiv.org/pdf/2306.07486v1.pdf,2023-06-13,['cs.cl'],2306.07486v1.pdf," Cross-lingual Machine Translation (MT) quality estimation plays a crucial +role in evaluating translation performance. GEMBA, the first MT quality +assessment metric based on Large Language Models (LLMs), employs one-step +prompting to achieve state-of-the-art (SOTA) in system-level MT quality +estimation; however, it lacks segment-level analysis. In contrast, +Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering +improved reasoning and explainability. In this paper, we introduce +Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three +one-step prompting techniques, including perplexity, token-level similarity, +and sentence-level similarity. This method attains enhanced performance for +segment-level estimation compared with previous deep learning models and +one-step prompting approaches. Furthermore, supplementary experiments on +word-level visualized alignment demonstrate that our KPE method significantly +improves token alignment compared with earlier models and provides better +interpretability for MT quality estimation. Code will be released upon +publication. +" +Questioning the Survey Responses of Large Language Models,Ricardo Dominguez-Olmedo,http://arxiv.org/pdf/2306.07951v2.pdf,2023-06-13,['cs.cl'],2306.07951v2.pdf," As large language models increase in capability, researchers have started to +conduct surveys of all kinds on these models with varying scientific +motivations. In this work, we examine what we can learn from language models' +survey responses on the basis of the well-established American Community Survey +(ACS) by the U.S. Census Bureau. Using a de-facto standard multiple-choice +prompting technique and evaluating 40 different language models, hundreds of +thousands of times each on questions from the ACS, we systematically establish +two dominant patterns. First, models have significant position and labeling +biases, for example, towards survey responses labeled with the letter ""A"". 
+Second, when adjusting for labeling biases through randomized answer ordering, +models across the board trend towards uniformly random survey responses. In +fact, binary classifiers can almost perfectly differentiate between models' +responses to the ACS and the responses of the US census. Taken together, our +findings suggest caution in treating survey responses from language models as +equivalent to those of human populations at present time. +" +Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering,Rabiul Awal,http://arxiv.org/pdf/2306.09996v1.pdf,2023-06-16,"['cs.cv', 'cs.cl']",2306.09996v1.pdf," Visual question answering (VQA) is a challenging task that requires the +ability to comprehend and reason with visual information. While recent +vision-language models have made strides, they continue to struggle with +zero-shot VQA, particularly in handling complex compositional questions and +adapting to new domains i.e. knowledge-based reasoning. This paper explores the +use of various prompting strategies, focusing on the BLIP2 model, to enhance +zero-shot VQA performance. We conduct a comprehensive investigation across +several VQA datasets, examining the effectiveness of different question +templates, the role of few-shot exemplars, the impact of chain-of-thought (CoT) +reasoning, and the benefits of incorporating image captions as additional +visual cues. Despite the varied outcomes, our findings demonstrate that +carefully designed question templates and the integration of additional visual +cues, like image captions, can contribute to improved VQA performance, +especially when used in conjunction with few-shot examples. However, we also +identify a limitation in the use of chain-of-thought rationalization, which +negatively affects VQA accuracy. Our study thus provides critical insights into +the potential of prompting for improving zero-shot VQA performance. +" +Extracting Multi-valued Relations from Language Models,Sneha Singhania,http://arxiv.org/pdf/2307.03122v2.pdf,2023-07-06,['cs.cl'],2307.03122v2.pdf," The widespread usage of latent language representations via pre-trained +language models (LMs) suggests that they are a promising source of structured +knowledge. However, existing methods focus only on a single object per +subject-relation pair, even though often multiple objects are correct. To +overcome this limitation, we analyze these representations for their potential +to yield materialized multi-object relational knowledge. We formulate the +problem as a rank-then-select task. For ranking candidate objects, we evaluate +existing prompting techniques and propose new ones incorporating domain +knowledge. Among the selection methods, we find that choosing objects with a +likelihood above a learned relation-specific threshold gives a 49.5% F1 score. +Our results highlight the difficulty of employing LMs for the multi-valued +slot-filling task and pave the way for further research on extracting +relational knowledge from latent language representations. +" +Prompts Should not be Seen as Secrets: Systematically Measuring Prompt Extraction Attack Success,Yiming Zhang,http://arxiv.org/pdf/2307.06865v1.pdf,2023-07-13,"['cs.cl', 'cs.ai']",2307.06865v1.pdf," The generations of large language models are commonly controlled through +prompting techniques, where a user's query to the model is prefixed with a +prompt that aims to guide the model's behaviour on the query. 
The prompts used +by companies to guide their models are often treated as secrets, to be hidden +from the user making the query. They have even been treated as commodities to +be bought and sold. However, there has been anecdotal evidence showing that the +prompts can be extracted by a user even when they are kept secret. In this +paper, we present a framework for systematically measuring the success of +prompt extraction attacks. In experiments with multiple sources of prompts and +multiple underlying language models, we find that simple text-based attacks can +in fact reveal prompts with high probability. +" +Leveraging Large Language Models to Generate Answer Set Programs,Adam Ishay,http://arxiv.org/pdf/2307.07699v1.pdf,2023-07-15,"['cs.ai', 'cs.cl', 'cs.sc']",2307.07699v1.pdf," Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated +exceptional performance in various natural language processing tasks and have +shown the ability to solve certain reasoning problems. However, their reasoning +capabilities are limited and relatively shallow, despite the application of +various prompting techniques. In contrast, formal logic is adept at handling +complex reasoning, but translating natural language descriptions into formal +logic is a challenging task that non-experts struggle with. This paper proposes +a neuro-symbolic method that combines the strengths of large language models +and answer set programming. Specifically, we employ an LLM to transform natural +language descriptions of logic puzzles into answer set programs. We carefully +design prompts for an LLM to convert natural language descriptions into answer +set programs in a step by step manner. Surprisingly, with just a few in-context +learning examples, LLMs can generate reasonably complex answer set programs. +The majority of errors made are relatively simple and can be easily corrected +by humans, thus enabling LLMs to effectively assist in the creation of answer +set programs. +" +Fixing Rust Compilation Errors using LLMs,Pantazis Deligiannis,http://arxiv.org/pdf/2308.05177v1.pdf,2023-08-09,"['cs.se', 'cs.pl']",2308.05177v1.pdf," The Rust programming language, with its safety guarantees, has established +itself as a viable choice for low-level systems programming language over the +traditional, unsafe alternatives like C/C++. These guarantees come from a +strong ownership-based type system, as well as primitive support for features +like closures, pattern matching, etc., that make the code more concise and +amenable to reasoning. These unique Rust features also pose a steep learning +curve for programmers. + This paper presents a tool called RustAssistant that leverages the emergent +capabilities of Large Language Models (LLMs) to automatically suggest fixes for +Rust compilation errors. RustAssistant uses a careful combination of prompting +techniques as well as iteration with an LLM to deliver high accuracy of fixes. +RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on +real-world compilation errors in popular open-source Rust repositories. We plan +to release our dataset of Rust compilation errors to enable further research. +" +The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation,Patrick Fernandes,http://arxiv.org/pdf/2308.07286v1.pdf,2023-08-14,"['cs.cl', 'cs.lg']",2308.07286v1.pdf," Automatic evaluation of machine translation (MT) is a critical tool driving +the rapid iterative development of MT systems. 
While considerable progress has +been made on estimating a single scalar quality score, current metrics lack the +informativeness of more detailed schemes that annotate individual errors, such +as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap +by proposing AutoMQM, a prompting technique which leverages the reasoning and +in-context learning capabilities of large language models (LLMs) and asks them +to identify and categorize errors in translations. We start by evaluating +recent LLMs, such as PaLM and PaLM-2, through simple score prediction +prompting, and we study the impact of labeled data through in-context learning +and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that +it improves performance compared to just prompting for scores (with +particularly large gains for larger models) while providing interpretability +through error spans that align with human annotations. +" +Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought,Bin Lei,http://arxiv.org/pdf/2308.08614v1.pdf,2023-08-16,"['cs.lg', 'cs.ai', 'cs.cl']",2308.08614v1.pdf," Recent advancements in large-scale models, such as GPT-4, have showcased +remarkable capabilities in addressing standard queries. However, when facing +complex problems that require multi-step logical reasoning, their accuracy +dramatically decreases. Current research has explored the realm of +\textit{prompting engineering} to bolster the inferential capacities of these +models. Our paper unveils a pioneering prompting technique, dubbed +\textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating +challenges: the 24-point game, resolution of high-degree polynomial equations, +and derivation of formulas for recursive sequences, our method outperformed +GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each +respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA) +prompting method, \textit{Tree of Thought (ToT)}, our approach registered an +average accuracy boost of $23\%$, $24\%$, and $15\%$. +" +DevGPT: Studying Developer-ChatGPT Conversations,Tao Xiao,http://arxiv.org/pdf/2309.03914v1.pdf,2023-08-31,['cs.se'],2309.03914v1.pdf," The emergence of large language models (LLMs) such as ChatGPT has disrupted +the landscape of software development. Many studies are investigating the +quality of responses generated by ChatGPT, the efficacy of various prompting +techniques, and its comparative performance in programming contests, to name a +few examples. Yet, we know very little about how ChatGPT is actually used by +software developers. What questions do developers present to ChatGPT? What are +the dynamics of these interactions? What is the backdrop against which these +conversations are held, and how do the conversations feedback into the +artifacts of their work? To close this gap, we introduce DevGPT, a curated +dataset which encompasses 17,913 prompts and ChatGPT's responses including +11,751 code snippets, coupled with the corresponding software development +artifacts -- ranging from source code, commits, issues, pull requests, to +discussions and Hacker News threads -- to enable the analysis of the context +and implications of these developer interactions with ChatGPT. 
+" +Generative Speech Recognition Error Correction with Large Language Models and Task-Activating Prompting,Chao-Han Huck Yang,http://arxiv.org/pdf/2309.15649v2.pdf,2023-09-27,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2309.15649v2.pdf," We explore the ability of large language models (LLMs) to act as speech +recognition post-processors that perform rescoring and error correction. Our +first focus is on instruction prompting to let LLMs perform these task without +fine-tuning, for which we evaluate different prompting schemes, both zero- and +few-shot in-context learning, and a novel task activation prompting method that +combines causal instructions and demonstration to increase its context windows. +Next, we show that rescoring only by in-context learning with frozen LLMs +achieves results that are competitive with rescoring by domain-tuned LMs, using +a pretrained first-pass recognition system and rescoring output on two +out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with +fine-tuning we achieve error rates below the N-best oracle level, showcasing +the generalization power of the LLMs. +" +UPAR: A Kantian-Inspired Prompting Framework for Enhancing Large Language Model Capabilities,Hejia Geng,http://arxiv.org/pdf/2310.01441v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.01441v1.pdf," Large Language Models (LLMs) have demonstrated impressive inferential +capabilities, with numerous research endeavors devoted to enhancing this +capacity through prompting. Despite these efforts, a unified epistemological +foundation is still conspicuously absent. Drawing inspiration from Kant's a +priori philosophy, we propose the UPAR prompting framework, designed to emulate +the structure of human cognition within LLMs. The UPAR framework is delineated +into four phases: ""Understand"", ""Plan"", ""Act"", and ""Reflect"", enabling the +extraction of structured information from complex contexts, prior planning of +solutions, execution according to plan, and self-reflection. This structure +significantly augments the explainability and accuracy of LLM inference, +producing a human-understandable and inspectable inferential trajectory. +Furthermore, our work offers an epistemological foundation for existing +prompting techniques, allowing for a possible systematic integration of these +methods. With GPT-4, our approach elevates the accuracy from COT baseline of +22.92% to 58.33% in a challenging subset of GSM8K, and from 67.91% to 75.40% in +the causal judgment task. +" +Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models,Huaixiu Steven Zheng,http://arxiv.org/pdf/2310.06117v1.pdf,2023-10-09,"['cs.lg', 'cs.ai', 'cs.cl']",2310.06117v1.pdf," We present Step-Back Prompting, a simple prompting technique that enables +LLMs to do abstractions to derive high-level concepts and first principles from +instances containing specific details. Using the concepts and principles to +guide the reasoning steps, LLMs significantly improve their abilities in +following a correct reasoning path towards the solution. We conduct experiments +of Step-Back Prompting with PaLM-2L models and observe substantial performance +gains on a wide range of challenging reasoning-intensive tasks including STEM, +Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting +improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%, +TimeQA by 27%, and MuSiQue by 7%. 
+" +POSQA: Probe the World Models of LLMs with Size Comparisons,Chang Shu,http://arxiv.org/pdf/2310.13394v1.pdf,2023-10-20,"['cs.cl', 'cs.ai', 'cs.cy']",2310.13394v1.pdf," Embodied language comprehension emphasizes that language understanding is not +solely a matter of mental processing in the brain but also involves +interactions with the physical and social environment. With the explosive +growth of Large Language Models (LLMs) and their already ubiquitous presence in +our daily lives, it is becoming increasingly necessary to verify their +real-world understanding. Inspired by cognitive theories, we propose POSQA: a +Physical Object Size Question Answering dataset with simple size comparison +questions to examine the extremity and analyze the potential mechanisms of the +embodied comprehension of the latest LLMs. + We show that even the largest LLMs today perform poorly under the zero-shot +setting. We then push their limits with advanced prompting techniques and +external knowledge augmentation. Furthermore, we investigate whether their +real-world comprehension primarily derives from contextual information or +internal weights and analyse the impact of prompt formats and report bias of +different objects. Our results show that real-world understanding that LLMs +shaped from textual data can be vulnerable to deception and confusion by the +surface form of prompts, which makes it less aligned with human behaviours. +" +MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning,Zayne Sprague,http://arxiv.org/pdf/2310.16049v1.pdf,2023-10-24,['cs.cl'],2310.16049v1.pdf," While large language models (LLMs) equipped with techniques like +chain-of-thought prompting have demonstrated impressive capabilities, they +still fall short in their ability to reason robustly in complex settings. +However, evaluating LLM reasoning is challenging because system capabilities +continue to grow while benchmark datasets for tasks like logical deduction have +remained static. We introduce MuSR, a dataset for evaluating language models on +multistep soft reasoning tasks specified in a natural language narrative. This +dataset has two crucial features. First, it is created through a novel +neurosymbolic synthetic-to-natural generation algorithm, enabling the +construction of complex reasoning instances that challenge GPT-4 (e.g., murder +mysteries roughly 1000 words in length) and which can be scaled further as more +capable LLMs are released. Second, our dataset instances are free text +narratives corresponding to real-world domains of reasoning; this makes it +simultaneously much more challenging than other synthetically-crafted +benchmarks while remaining realistic and tractable for human annotators to +solve with high accuracy. We evaluate a range of LLMs and prompting techniques +on this dataset and characterize the gaps that remain for techniques like +chain-of-thought to perform robust reasoning. +" +"Supercharging academic writing with generative AI: framework, techniques, and caveats",Zhicheng Lin,http://arxiv.org/pdf/2310.17143v1.pdf,2023-10-26,"['cs.cy', 'cs.cl']",2310.17143v1.pdf," Academic writing is an indispensable yet laborious part of the research +enterprise. This Perspective maps out principles and methods for using +generative artificial intelligence (AI), specifically large language models +(LLMs), to elevate the quality and efficiency of academic writing. 
We introduce +a human-AI collaborative framework that delineates the rationale (why), process +(how), and nature (what) of AI engagement in writing. The framework pinpoints +both short-term and long-term reasons for engagement and their underlying +mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals +the role of AI throughout the writing process, conceptualized through a +two-stage model for human-AI collaborative writing, and the nature of AI +assistance in writing, represented through a model of writing-assistance types +and levels. Building on this framework, we describe effective prompting +techniques for incorporating AI into the writing routine (outlining, drafting, +and editing) as well as strategies for maintaining rigorous scholarship, +adhering to varied journal policies, and avoiding overreliance on AI. +Ultimately, the prudent integration of AI into academic writing can ease the +communication burden, empower authors, accelerate discovery, and promote +diversity in science. +" +Little Giants: Exploring the Potential of Small LLMs as Evaluation Metrics in Summarization in the Eval4NLP 2023 Shared Task,Neema Kotonya,http://arxiv.org/pdf/2311.00686v1.pdf,2023-11-01,['cs.cl'],2311.00686v1.pdf," This paper describes and analyzes our participation in the 2023 Eval4NLP +shared task, which focuses on assessing the effectiveness of prompt-based +techniques to empower Large Language Models to handle the task of quality +estimation, particularly in the context of evaluating machine translations and +summaries. We conducted systematic experiments with various prompting +techniques, including standard prompting, prompts informed by annotator +instructions, and innovative chain-of-thought prompting. In addition, we +integrated these approaches with zero-shot and one-shot learning methods to +maximize the efficacy of our evaluation procedures. Our work reveals that +combining these approaches using a ""small"", open source model (orca_mini_v3_7B) +yields competitive results. +" +Can Large Language Models Design Accurate Label Functions?,Naiqing Guan,http://arxiv.org/pdf/2311.00739v1.pdf,2023-11-01,"['cs.cl', 'cs.db', 'cs.lg', 'h.2.8; i.5.4']",2311.00739v1.pdf," Programmatic weak supervision methodologies facilitate the expedited labeling +of extensive datasets through the use of label functions (LFs) that encapsulate +heuristic data sources. Nonetheless, the creation of precise LFs necessitates +domain expertise and substantial endeavors. Recent advances in pre-trained +language models (PLMs) have exhibited substantial potential across diverse +tasks. However, the capacity of PLMs to autonomously formulate accurate LFs +remains an underexplored domain. In this research, we address this gap by +introducing DataSculpt, an interactive framework that harnesses PLMs for the +automated generation of LFs. Within DataSculpt, we incorporate an array of +prompting techniques, instance selection strategies, and LF filtration methods +to explore the expansive design landscape. Ultimately, we conduct a thorough +assessment of DataSculpt's performance on 12 real-world datasets, encompassing +a range of tasks. This evaluation unveils both the strengths and limitations of +contemporary PLMs in LF design. 
+" +Prompting as Probing: Using Language Models for Knowledge Base Construction,Dimitrios Alivanistos,http://arxiv.org/pdf/2208.11057v3.pdf,2022-08-23,"['cs.cl', 'cs.ai']",2208.11057v3.pdf," Language Models (LMs) have proven to be useful in various downstream +applications, such as summarisation, translation, question answering and text +classification. LMs are becoming increasingly important tools in Artificial +Intelligence, because of the vast quantity of information they can store. In +this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a +large Language Model originally proposed by OpenAI in 2020, to perform the task +of Knowledge Base Construction (KBC). ProP implements a multi-step approach +that combines a variety of prompting techniques to achieve this. Our results +show that manual prompt curation is essential, that the LM must be encouraged +to give answer sets of variable lengths, in particular including empty answer +sets, that true/false questions are a useful device to increase precision on +suggestions generated by the LM, that the size of the LM is a crucial factor, +and that a dictionary of entity aliases improves the LM score. Our evaluation +study indicates that these proposed techniques can substantially enhance the +quality of the final predictions: ProP won track 2 of the LM-KBC competition, +outperforming the baseline by 36.4 percentage points. Our implementation is +available on https://github.com/HEmile/iswc-challenge. +" +Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors,Mohammad Reza Taesiri,http://arxiv.org/pdf/2210.02506v1.pdf,2022-10-05,"['cs.cl', 'cs.se']",2210.02506v1.pdf," Video game testing requires game-specific knowledge as well as common sense +reasoning about the events in the game. While AI-driven agents can satisfy the +first requirement, it is not yet possible to meet the second requirement +automatically. Therefore, video game testing often still relies on manual +testing, and human testers are required to play the game thoroughly to detect +bugs. As a result, it is challenging to fully automate game testing. In this +study, we explore the possibility of leveraging the zero-shot capabilities of +large language models for video game bug detection. By formulating the bug +detection problem as a question-answering task, we show that large language +models can identify which event is buggy in a sequence of textual descriptions +of events from a game. To this end, we introduce the GameBugDescriptions +benchmark dataset, which consists of 167 buggy gameplay videos and a total of +334 question-answer pairs across 8 games. We extensively evaluate the +performance of six models across the OPT and InstructGPT large language model +families on our benchmark dataset. Our results show promising results for +employing language models to detect video game bugs. With the proper prompting +technique, we could achieve an accuracy of 70.66%, and on some video games, up +to 78.94%. Our code, evaluation data and the benchmark can be found on +https://asgaardlab.github.io/LLMxBugs +" +Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt,Hao Li,http://arxiv.org/pdf/2302.01171v1.pdf,2023-02-02,"['cs.cv', 'cs.ai']",2302.01171v1.pdf," Recently, inspired by DETR variants, query-based end-to-end instance +segmentation (QEIS) methods have outperformed CNN-based models on large-scale +datasets. 
Yet they would lose efficacy when only a small amount of training +data is available since it's hard for the crucial queries/kernels to learn +localization and shape priors. To this end, this work offers a novel +unsupervised pre-training solution for low-data regimes. Inspired by the recent +success of the Prompting technique, we introduce a new pre-training method that +boosts QEIS models by giving Saliency Prompt for queries/kernels. Our method +contains three parts: 1) Saliency Masks Proposal is responsible for generating +pseudo masks from unlabeled images based on the saliency mechanism. 2) +Prompt-Kernel Matching transfers pseudo masks into prompts and injects the +corresponding localization and shape priors to the best-matched kernels. 3) +Kernel Supervision is applied to supply supervision at the kernel level for +robust learning. From a practical perspective, our pre-training method helps +QEIS models achieve a similar convergence speed and comparable performance with +CNN-based models in low-data regimes. Experimental results show that our method +significantly boosts several QEIS models on three datasets. Code will be made +available. +" +One-Shot Labeling for Automatic Relevance Estimation,Sean MacAvaney,http://arxiv.org/pdf/2302.11266v2.pdf,2023-02-22,['cs.ir'],2302.11266v2.pdf," Dealing with unjudged documents (""holes"") in relevance assessments is a +perennial problem when evaluating search systems with offline experiments. +Holes can reduce the apparent effectiveness of retrieval systems during +evaluation and introduce biases in models trained with incomplete data. In this +work, we explore whether large language models can help us fill such holes to +improve offline evaluations. We examine an extreme, albeit common, evaluation +setting wherein only a single known relevant document per query is available +for evaluation. We then explore various approaches for predicting the relevance +of unjudged documents with respect to a query and the known relevant document, +including nearest neighbor, supervised, and prompting techniques. We find that +although the predictions of these One-Shot Labelers (1SL) frequently disagree +with human assessments, the labels they produce yield a far more reliable +ranking of systems than the single labels do alone. Specifically, the strongest +approaches can consistently reach system ranking correlations of over 0.86 with +the full rankings over a variety of measures. Meanwhile, the approach +substantially increases the reliability of t-tests due to filling holes in +relevance assessments, giving researchers more confidence in results they find +to be significant. Alongside this work, we release an easy-to-use software +package to enable the use of 1SL for evaluation of other ad-hoc collections or +systems. +" +Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding,Yuqing Wang,http://arxiv.org/pdf/2304.05368v3.pdf,2023-04-09,"['cs.cl', 'cs.ai']",2304.05368v3.pdf," Large language models (LLMs) have made significant progress in various +domains, including healthcare. However, the specialized nature of clinical +language understanding tasks presents unique challenges and limitations that +warrant further investigation. In this study, we conduct a comprehensive +evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within +the realm of clinical language understanding tasks. 
These tasks span a diverse +range, including named entity recognition, relation extraction, natural +language inference, semantic textual similarity, document classification, and +question-answering. We also introduce a novel prompting strategy, +self-questioning prompting (SQP), tailored to enhance LLMs' performance by +eliciting informative questions and answers pertinent to the clinical scenarios +at hand. Our evaluation underscores the significance of task-specific learning +strategies and prompting techniques for improving LLMs' effectiveness in +healthcare-related tasks. Additionally, our in-depth error analysis on the +challenging relation extraction task offers valuable insights into error +distribution and potential avenues for improvement using SQP. Our study sheds +light on the practical implications of employing LLMs in the specialized domain +of healthcare, serving as a foundation for future research and the development +of potential applications in healthcare settings. +" +Multi-Prompt with Depth Partitioned Cross-Modal Learning,Yingjie Tian,http://arxiv.org/pdf/2305.06221v3.pdf,2023-05-10,"['cs.cv', 'cs.ai']",2305.06221v3.pdf," In recent years, soft prompt learning methods have been proposed to fine-tune +large-scale vision-language pre-trained models for various downstream tasks. +These methods typically combine learnable textual tokens with class tokens as +input for models with frozen parameters. However, they often employ a single +prompt to describe class contexts, failing to capture categories' diverse +attributes adequately. This study introduces the Partitioned Multi-modal Prompt +(PMPO), a multi-modal prompting technique that extends the soft prompt from a +single learnable prompt to multiple prompts. Our method divides the visual +encoder depths and connects learnable prompts to the separated visual depths, +enabling different prompts to capture the hierarchical contextual depths of +visual representations. Furthermore, to maximize the advantages of multi-prompt +learning, we incorporate prior information from manually designed templates and +learnable multi-prompts, thus improving the generalization capabilities of our +approach. We evaluate the effectiveness of our approach on three challenging +tasks: new class generalization, cross-dataset evaluation, and domain +generalization. For instance, our method achieves a $79.28$ harmonic mean, +averaged over 11 diverse image recognition datasets ($+7.62$ compared to CoOp), +demonstrating significant competitiveness compared to state-of-the-art +prompting methods. +" +ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models,Qijiong Liu,http://arxiv.org/pdf/2305.06566v4.pdf,2023-05-11,"['cs.ir', 'cs.cl']",2305.06566v4.pdf," Personalized content-based recommender systems have become indispensable +tools for users to navigate through the vast amount of content available on +platforms like daily news websites and book recommendation services. However, +existing recommenders face significant challenges in understanding the content +of items. Large language models (LLMs), which possess deep semantic +comprehension and extensive knowledge from pretraining, have proven to be +effective in various natural language processing tasks. In this study, we +explore the potential of leveraging both open- and closed-source LLMs to +enhance content-based recommendation. 
With open-source LLMs, we utilize their +deep layers as content encoders, enriching the representation of content at the +embedding level. For closed-source LLMs, we employ prompting techniques to +enrich the training data at the token level. Through comprehensive experiments, +we demonstrate the high effectiveness of both types of LLMs and show the +synergistic relationship between them. Notably, we observed a significant +relative improvement of up to 19.32% compared to existing state-of-the-art +recommendation models. These findings highlight the immense potential of both +open- and closed-source of LLMs in enhancing content-based recommendation +systems. We will make our code and LLM-generated data available for other +researchers to reproduce our results. +" +OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models,Badr AlKhamissi,http://arxiv.org/pdf/2305.12001v2.pdf,2023-05-19,['cs.cl'],2305.12001v2.pdf," In this paper, we conduct a thorough investigation into the reasoning +capabilities of Large Language Models (LLMs), focusing specifically on the Open +Pretrained Transformers (OPT) models as a representative of such models. Our +study entails finetuning three different sizes of OPT on a carefully curated +reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned +without explanations, and OPT-RE, finetuned with explanations. We then evaluate +all models on 57 out-of-domain tasks drawn from the SUPER-NATURALINSTRUCTIONS +benchmark, covering 26 distinct reasoning skills, utilizing three prompting +techniques. Through a comprehensive grid of 27 configurations and 6,156 test +evaluations, we investigate the dimensions of finetuning, prompting, and scale +to understand the role of explanations on different reasoning skills. Our +findings reveal that having explanations in the fewshot exemplar has no +significant impact on the model's performance when the model is finetuned, +while positively affecting the non-finetuned counterpart. Moreover, we observe +a slight yet consistent increase in classification accuracy as we incorporate +explanations during prompting and finetuning, respectively. Finally, we offer +insights on which skills benefit the most from incorporating explanations +during finetuning and prompting, such as Numerical (+20.4%) and Analogical +(+13.9%) reasoning, as well as skills that exhibit negligible or negative +effects. +" +The Utility of Large Language Models and Generative AI for Education Research,Andrew Katz,http://arxiv.org/pdf/2305.18125v1.pdf,2023-05-29,['cs.hc'],2305.18125v1.pdf," The use of natural language processing (NLP) techniques in engineering +education can provide valuable insights into the underlying processes involved +in generating text. While accessing these insights can be labor-intensive if +done manually, recent advances in NLP and large language models have made it a +realistic option for individuals. This study explores and evaluates a +combination of clustering, summarization, and prompting techniques to analyze +over 1,000 student essays in which students discussed their career interests. +The specific assignment prompted students to define and explain their career +goals as engineers. Using text embedding representations of student responses, +we clustered the responses together to identify thematically similar statements +from students. The clustered responses were then summarized to quickly identify +career interest themes. 
We also used a set of a priori codes about career +satisfaction and sectors to demonstrate an alternative approach to using these +generative text models to analyze student writing. The results of this study +demonstrate the feasibility and usefulness of NLP techniques in engineering +education research. By automating the initial analysis of student essays, +researchers and educators can more efficiently and accurately identify key +themes and patterns in student writing. The methods presented in this paper +have broader applications for engineering education and research purposes +beyond analyzing student essays. By explaining these methods to the engineering +education community, readers can utilize them in their own contexts. +" +Fine-Grained Visual Prompting,Lingfeng Yang,http://arxiv.org/pdf/2306.04356v1.pdf,2023-06-07,['cs.cv'],2306.04356v1.pdf," Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive +zero-shot transfer capabilities in image-level visual perception. However, +these models have shown limited performance in instance-level tasks that demand +precise localization and recognition. Previous works have suggested that +incorporating visual prompts, such as colorful boxes or circles, can improve +the ability of models to recognize objects of interest. Nonetheless, compared +to language prompting, visual prompting designs are rarely explored. Existing +approaches, which employ coarse visual cues such as colorful boxes or circles, +often result in sub-optimal performance due to the inclusion of irrelevant and +noisy pixels. In this paper, we carefully study the visual prompting designs by +exploring more fine-grained markings, such as segmentation masks and their +variations. In addition, we introduce a new zero-shot framework that leverages +pixel-level annotations acquired from a generalist segmentation model for +fine-grained visual prompting. Consequently, our investigation reveals that a +straightforward application of blur outside the target mask, referred to as the +Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting +strategy leverages the precise mask annotations to reduce focus on weakly +related regions while retaining spatial coherence between the target and the +surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates +superior performance in zero-shot comprehension of referring expressions on the +RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an +average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the +RefCOCO+ testA subset. The part detection experiments conducted on the PACO +dataset further validate the preponderance of FGVP over existing visual +prompting techniques. Code and models will be made available. +" +The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification,Norbert Tihanyi,http://arxiv.org/pdf/2307.02192v2.pdf,2023-07-05,"['cs.db', 'cs.ai']",2307.02192v2.pdf," This paper presents the FormAI dataset, a large collection of 112, 000 +AI-generated compilable and independent C programs with vulnerability +classification. We introduce a dynamic zero-shot prompting technique +constructed to spawn diverse programs utilizing Large Language Models (LLMs). +The dataset is generated by GPT-3.5-turbo and comprises programs with varying +levels of complexity. Some programs handle complicated tasks like network +management, table games, or encryption, while others deal with simpler tasks +like string manipulation. 
Every program is labeled with the vulnerabilities +found within the source code, indicating the type, line number, and vulnerable +function name. This is accomplished by employing a formal verification method +using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model +checking, abstract interpretation, constraint programming, and satisfiability +modulo theories to reason over safety/security properties in programs. This +approach definitively detects vulnerabilities and offers a formal model known +as a counterexample, thus eliminating the possibility of generating false +positive reports. We have associated the identified vulnerabilities with Common +Weakness Enumeration (CWE) numbers. We make the source code available for the +112, 000 programs, accompanied by a separate file containing the +vulnerabilities detected in each program, making the dataset ideal for training +LLMs and machine learning algorithms. Our study unveiled that according to +ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, +thereby presenting considerable risks to software safety and security. +" +SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs,Shengzhi Li,http://arxiv.org/pdf/2308.03349v1.pdf,2023-08-07,"['cs.cl', 'cs.ai', 'cs.cv']",2308.03349v1.pdf," In this work, we present SciGraphQA, a synthetic multi-turn question-answer +dataset related to academic graphs. SciGraphQA is 13 times larger than +ChartVQA, the previously largest chart-visual question-answering dataset. It is +also the largest open-sourced chart VQA dataset with non-synthetic charts. To +build our dataset, we selected 290,000 Computer Science or Machine Learning +ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate +295K samples of open-vocabulary multi-turn question-answering dialogues about +the graphs. As context, we provided the text-only Palm-2 with paper title, +abstract, paragraph mentioning the graph, and rich text contextual data from +the graph itself, obtaining dialogues with an average 2.23 question-answer +turns for each graph. We asked GPT-4 to assess the matching quality of our +question-answer turns given the paper's context, obtaining an average rating of +8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most +popular MLLM models such as LLaVa, mPLUGowl, BLIP-2, and openFlamingo's on our +dataset, finding LLaVA-13B being the most performant with a CIDEr score of +0.08. We further enriched the question prompts for LLAVA by including the +serialized data tables extracted from the graphs using the DePlot model, +boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset, +we also fine-tuned LLaVa using our dataset, reaching a substantially higher +CIDEr score of 0.26. We anticipate further accuracy improvement by including +segmentation mask tokens and leveraging larger LLM backbones coupled with +emergent prompting techniques. Our code and data are open-sourced. +" +GOPro: Generate and Optimize Prompts in CLIP using Self-Supervised Learning,Mainak Singha,http://arxiv.org/pdf/2308.11605v1.pdf,2023-08-22,['cs.cv'],2308.11605v1.pdf," Large-scale foundation models, such as CLIP, have demonstrated remarkable +success in visual recognition tasks by embedding images in a semantically rich +space. Self-supervised learning (SSL) has also shown promise in improving +visual recognition by learning invariant features. 
However, the combination of +CLIP with SSL is found to face challenges due to the multi-task framework that +blends CLIP's contrastive loss and SSL's loss, including difficulties with loss +weighting and inconsistency among different views of images in CLIP's output +space. To overcome these challenges, we propose a prompt learning-based model +called GOPro, which is a unified framework that ensures similarity between +various augmented views of input images in a shared image-text embedding space, +using a pair of learnable image and text projectors atop CLIP, to promote +invariance and generalizability. To automatically learn such prompts, we +leverage the visual content and style primitives extracted from pre-trained +CLIP and adapt them to the target task. In addition to CLIP's cross-domain +contrastive loss, we introduce a visual contrastive loss and a novel prompt +consistency loss, considering the different views of the images. GOPro is +trained end-to-end on all three loss objectives, combining the strengths of +CLIP and SSL in a principled manner. Empirical evaluations demonstrate that +GOPro outperforms the state-of-the-art prompting techniques on three +challenging domain generalization tasks across multiple benchmarks by a +significant margin. Our code is available at +https://github.com/mainaksingha01/GOPro. +" +Spoken Language Intelligence of Large Language Models for Language Learning,Linkai Peng,http://arxiv.org/pdf/2308.14536v1.pdf,2023-08-28,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2308.14536v1.pdf," People have long hoped for a conversational system that can assist in +real-life situations, and recent progress on large language models (LLMs) is +bringing this idea closer to reality. While LLMs are often impressive in +performance, their efficacy in real-world scenarios that demand expert +knowledge remains unclear. LLMs are believed to hold the most potential and +value in education, especially in the development of Artificial intelligence +(AI) based virtual teachers capable of facilitating language learning. Our +focus is centered on evaluating the efficacy of LLMs in the realm of education, +specifically in the areas of spoken language learning which encompass +phonetics, phonology, and second language acquisition. We introduce a new +multiple-choice question dataset to evaluate the effectiveness of LLMs in the +aforementioned scenarios, including understanding and application of spoken +language knowledge. In addition, we investigate the influence of various +prompting techniques such as zero- and few-shot method (prepending the question +with question-answer exemplars), chain-of-thought (CoT, think step-by-step), +in-domain exampler and external tools (Google, Wikipedia). We conducted +large-scale evaluation on popular LLMs (20 distinct models) using these +methods. We achieved significant performance improvements compared to the +zero-shot baseline in the practical questions reasoning (GPT-3.5, 49.1% -> +63.1%; LLaMA2-70B-Chat, 42.2% -> 48.6%). We found that models of different +sizes have good understanding of concepts in phonetics, phonology, and second +language acquisition, but show limitations in reasoning for real-world +problems. Additionally, we also explore preliminary findings on conversational +communication. 
+" +Are Emergent Abilities in Large Language Models just In-Context Learning?,Sheng Lu,http://arxiv.org/pdf/2309.01809v1.pdf,2023-09-04,['cs.cl'],2309.01809v1.pdf," Large language models have exhibited emergent abilities, demonstrating +exceptional performance across diverse tasks for which they were not explicitly +trained, including those that require complex reasoning abilities. The +emergence of such abilities carries profound implications for the future +direction of research in NLP, especially as the deployment of such models +becomes more prevalent. However, one key challenge is that the evaluation of +these abilities is often confounded by competencies that arise in models +through alternative prompting techniques, such as in-context learning and +instruction following, which also emerge as the models are scaled up. In this +study, we provide the first comprehensive examination of these emergent +abilities while accounting for various potentially biasing factors that can +influence the evaluation of models. We conduct rigorous tests on a set of 18 +models, encompassing a parameter range from 60 million to 175 billion +parameters, across a comprehensive set of 22 tasks. Through an extensive series +of over 1,000 experiments, we provide compelling evidence that emergent +abilities can primarily be ascribed to in-context learning. We find no evidence +for the emergence of reasoning abilities, thus providing valuable insights into +the underlying mechanisms driving the observed abilities and thus alleviating +safety concerns regarding their use. +" +Unsupervised Contrast-Consistent Ranking with Language Models,Niklas Stoehr,http://arxiv.org/pdf/2309.06991v1.pdf,2023-09-13,"['cs.lg', 'cs.cl', 'stat.ml']",2309.06991v1.pdf," Language models contain ranking-based knowledge and are powerful solvers of +in-context ranking tasks. For instance, they may have parametric knowledge +about the ordering of countries by size or may be able to rank reviews by +sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting +techniques to elicit a language model's ranking knowledge. However, we find +that even with careful calibration and constrained decoding, prompting-based +techniques may not always be self-consistent in the rankings they produce. This +motivates us to explore an alternative approach that is inspired by an +unsupervised probing method called Contrast-Consistent Search (CCS). The idea +is to train a probing model guided by a logical constraint: a model's +representation of a statement and its negation must be mapped to contrastive +true-false poles consistently across multiple statements. We hypothesize that +similar constraints apply to ranking tasks where all items are related via +consistent pairwise or listwise comparisons. To this end, we extend the binary +CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking +methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression +objective. Our results confirm that, for the same language model, CCR probing +outperforms prompting and even performs on a par with prompting much larger +language models. +" +S3-DST: Structured Open-Domain Dialogue Segmentation and State Tracking in the Era of LLMs,Sarkar Snigdha Sarathi Das,http://arxiv.org/pdf/2309.08827v1.pdf,2023-09-16,"['cs.cl', 'cs.ai']",2309.08827v1.pdf," The traditional Dialogue State Tracking (DST) problem aims to track user +preferences and intents in user-agent conversations. 
While sufficient for +task-oriented dialogue systems supporting narrow domain applications, the +advent of Large Language Model (LLM)-based chat systems has introduced many +real-world intricacies in open-domain dialogues. These intricacies manifest in +the form of increased complexity in contextual interactions, extended dialogue +sessions encompassing a diverse array of topics, and more frequent contextual +shifts. To handle these intricacies arising from evolving LLM-based chat +systems, we propose joint dialogue segmentation and state tracking per segment +in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a +true open-domain dialogue system, we propose S3-DST, a structured prompting +technique that harnesses Pre-Analytical Recollection, a novel grounding +mechanism we designed for improving long context tracking. To demonstrate the +efficacy of our proposed approach in joint segmentation and state tracking, we +evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as +well as publicly available DST and segmentation datasets. Across all datasets +and settings, S3-DST consistently outperforms the state-of-the-art, +demonstrating its potency and robustness the next generation of LLM-based chat +systems. +" +Scalable Multi-Robot Collaboration with Large Language Models: Centralized or Decentralized Systems?,Yongchao Chen,http://arxiv.org/pdf/2309.15943v1.pdf,2023-09-27,['cs.ro'],2309.15943v1.pdf," A flurry of recent work has demonstrated that pre-trained large language +models (LLMs) can be effective task planners for a variety of single-robot +tasks. The planning performance of LLMs is significantly improved via prompting +techniques, such as in-context learning or re-prompting with state feedback, +placing new importance on the token budget for the context window. An +under-explored but natural next direction is to investigate LLMs as multi-robot +task planners. However, long-horizon, heterogeneous multi-robot planning +introduces new challenges of coordination while also pushing up against the +limits of context window length. It is therefore critical to find +token-efficient LLM planning frameworks that are also able to reason about the +complexities of multi-robot coordination. In this work, we compare the task +success rate and token efficiency of four multi-agent communication frameworks +(centralized, decentralized, and two hybrid) as applied to four +coordination-dependent multi-agent 2D task scenarios for increasing numbers of +agents. We find that a hybrid framework achieves better task success rates +across all four tasks and scales better to more agents. We further demonstrate +the hybrid frameworks in 3D simulations where the vision-to-text problem and +dynamical errors are considered. See our project website +https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and +code. +" +Adaptive-Solver Framework for Dynamic Strategy Selection in Large Language Model Reasoning,Jianpeng Zhou,http://arxiv.org/pdf/2310.01446v1.pdf,2023-10-01,"['cs.cl', 'cs.ai']",2310.01446v1.pdf," Large Language Models (LLMs) are showcasing impressive ability in handling +complex reasoning tasks. In real-world situations, problems often span a +spectrum of complexities. Humans inherently adjust their problem-solving +approaches based on task complexity. 
However, most methodologies that leverage +LLMs tend to adopt a uniform approach: utilizing consistent models, prompting +methods, and degrees of problem decomposition, regardless of the problem +complexity. Inflexibility of them can bring unnecessary computational overhead +or sub-optimal performance. To address this problem, we introduce an +Adaptive-Solver framework. It strategically modulates solving strategies based +on the difficulties of the problems. Given an initial solution, the framework +functions with two primary modules. The initial evaluation module assesses the +adequacy of the current solution. If improvements are needed, the subsequent +adaptation module comes into play. Within this module, three key adaptation +strategies are employed: (1) Model Adaptation: Switching to a stronger LLM when +a weaker variant is inadequate. (2) Prompting Method Adaptation: Alternating +between different prompting techniques to suit the problem's nuances. (3) +Decomposition Granularity Adaptation: Breaking down a complex problem into more +fine-grained sub-questions to enhance solvability. Through such dynamic +adaptations, our framework not only enhances computational efficiency but also +elevates the overall performance. This dual-benefit ensures both the efficiency +of the system for simpler tasks and the precision required for more complex +questions. Experimental results from complex reasoning tasks reveal that the +prompting method adaptation and decomposition granularity adaptation enhance +performance across all tasks. Furthermore, the model adaptation approach +significantly reduces API costs (up to 50%) while maintaining superior +performance. +" +Revisiting Large Language Models as Zero-shot Relation Extractors,Guozheng Li,http://arxiv.org/pdf/2310.05028v3.pdf,2023-10-08,"['cs.ai', 'cs.cl']",2310.05028v3.pdf," Relation extraction (RE) consistently involves a certain degree of labeled or +unlabeled data even if under zero-shot setting. Recent studies have shown that +large language models (LLMs) transfer well to new tasks out-of-the-box simply +given a natural language prompt, which provides the possibility of extracting +relations from text without any data and parameter tuning. This work focuses on +the study of exploring LLMs, such as ChatGPT, as zero-shot relation extractors. +On the one hand, we analyze the drawbacks of existing RE prompts and attempt to +incorporate recent prompt techniques such as chain-of-thought (CoT) to improve +zero-shot RE. We propose the summarize-and-ask (\textsc{SumAsk}) prompting, a +simple prompt recursively using LLMs to transform RE inputs to the effective +question answering (QA) format. On the other hand, we conduct comprehensive +experiments on various benchmarks and settings to investigate the capabilities +of LLMs on zero-shot RE. Specifically, we have the following findings: (i) +\textsc{SumAsk} consistently and significantly improves LLMs performance on +different model sizes, benchmarks and settings; (ii) Zero-shot prompting with +ChatGPT achieves competitive or superior results compared with zero-shot and +fully supervised methods; (iii) LLMs deliver promising performance in +extracting overlapping relations; (iv) The performance varies greatly regarding +different relations. Different from small language models, LLMs are effective +in handling challenge none-of-the-above (NoTA) relation. 
+" +Towards Training-free Open-world Segmentation via Image Prompting Foundation Models,Lv Tang,http://arxiv.org/pdf/2310.10912v1.pdf,2023-10-17,['cs.cv'],2310.10912v1.pdf," The realm of computer vision has witnessed a paradigm shift with the advent +of foundational models, mirroring the transformative influence of large +language models in the domain of natural language processing. This paper delves +into the exploration of open-world segmentation, presenting a novel approach +called Image Prompt Segmentation (IPSeg) that harnesses the power of vision +foundational models. At the heart of IPSeg lies the principle of a +training-free paradigm, which capitalizes on image prompting techniques. IPSeg +utilizes a single image containing a subjective visual concept as a flexible +prompt to query vision foundation models like DINOv2 and Stable Diffusion. Our +approach extracts robust features for the prompt image and input image, then +matches the input representations to the prompt representations via a novel +feature interaction module to generate point prompts highlighting target +objects in the input image. The generated point prompts are further utilized to +guide the Segment Anything Model to segment the target object in the input +image. The proposed method stands out by eliminating the need for exhaustive +training sessions, thereby offering a more efficient and scalable solution. +Experiments on COCO, PASCAL VOC, and other datasets demonstrate IPSeg's +efficacy for flexible open-world segmentation using intuitive image prompts. +This work pioneers tapping foundation models for open-world understanding +through visual concepts conveyed in images. +" +Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning across Languages,Libo Qin,http://arxiv.org/pdf/2310.14799v1.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.14799v1.pdf," Chain-of-thought (CoT) is capable of eliciting models to explicitly generate +reasoning paths, thus promoting reasoning accuracy and attracting increasing +attention. Specifically, zero-shot CoT achieves remarkable improvements in a +wide range of reasoning tasks by simply instructing the LLM with the prompt +""Let's think step by step!"". Despite the success of zero-shot CoT, the existing +zero-shot prompting techniques remain limited to a single language, making it +challenging to generalize to other languages and hindering global development. +In this work, we introduce cross-lingual prompting (CLP), aiming to improve +zero-shot CoT reasoning across languages. Specifically, CLP consists of two +main components: (1) cross-lingual alignment prompting and (2) task-specific +solver prompting. The cross-lingual alignment prompting is responsible for +aligning representations across different languages, whereas the task-specific +solver prompting is used to generate the final chain of thoughts and results +for the reasoning task. In addition, we further introduce cross-lingual +self-consistent prompting (CLSP) to ensemble different reasoning paths across +languages. Our experimental evaluations on several benchmarks demonstrate that +CLP and CLSP significantly outperform the existing prompting methods and +achieve state-of-the-art performance. We hope this work will inspire further +breakthroughs in cross-lingual CoT. 
+" +HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks,Yihong Ma,http://arxiv.org/pdf/2310.15318v1.pdf,2023-10-23,"['cs.lg', 'cs.ai']",2310.15318v1.pdf," Graphs have emerged as a natural choice to represent and analyze the +intricate patterns and rich information of the Web, enabling applications such +as online page classification and social recommendation. The prevailing +""pre-train, fine-tune"" paradigm has been widely adopted in graph machine +learning tasks, particularly in scenarios with limited labeled nodes. However, +this approach often exhibits a misalignment between the training objectives of +pretext tasks and those of downstream tasks. This gap can result in the +""negative transfer"" problem, wherein the knowledge gained from pre-training +adversely affects performance in the downstream tasks. The surge in +prompt-based learning within Natural Language Processing (NLP) suggests the +potential of adapting a ""pre-train, prompt"" paradigm to graphs as an +alternative. However, existing graph prompting techniques are tailored to +homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To +bridge this gap, we propose HetGPT, a general post-training prompting framework +to improve the predictive performance of pre-trained heterogeneous graph neural +networks (HGNNs). The key is the design of a novel prompting function that +integrates a virtual class prompt and a heterogeneous feature prompt, with the +aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT +introduces a multi-view neighborhood aggregation mechanism, capturing the +complex neighborhood structure in heterogeneous graphs. Extensive experiments +on three benchmark datasets demonstrate HetGPT's capability to enhance the +performance of state-of-the-art HGNNs on semi-supervised node classification. +" +Videoprompter: an ensemble of foundational models for zero-shot video understanding,Adeel Yousaf,http://arxiv.org/pdf/2310.15324v1.pdf,2023-10-23,['cs.cv'],2310.15324v1.pdf," Vision-language models (VLMs) classify the query video by calculating a +similarity score between the visual features and text-based class label +representations. Recently, large language models (LLMs) have been used to +enrich the text-based class labels by enhancing the descriptiveness of the +class names. However, these improvements are restricted to the text-based +classifier only, and the query visual features are not considered. In this +paper, we propose a framework which combines pre-trained discriminative VLMs +with pre-trained generative video-to-text and text-to-text models. We introduce +two key modifications to the standard zero-shot setting. First, we propose +language-guided visual feature enhancement and employ a video-to-text model to +convert the query video to its descriptive form. The resulting descriptions +contain vital visual cues of the query video, such as what objects are present +and their spatio-temporal interactions. These descriptive cues provide +additional semantic knowledge to VLMs to enhance their zeroshot performance. +Second, we propose video-specific prompts to LLMs to generate more meaningful +descriptions to enrich class label representations. 
Specifically, we introduce +prompt techniques to create a Tree Hierarchy of Categories for class names, +offering a higher-level action context for additional visual cues, We +demonstrate the effectiveness of our approach in video understanding across +three different zero-shot settings: 1) video action recognition, 2) +video-to-text and textto-video retrieval, and 3) time-sensitive video tasks. +Consistent improvements across multiple benchmarks and with various VLMs +demonstrate the effectiveness of our proposed framework. Our code will be made +publicly available. +" +Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting,Preethi Lahoti,http://arxiv.org/pdf/2310.16523v1.pdf,2023-10-25,"['cs.cl', 'cs.ai']",2310.16523v1.pdf," A crucial challenge for generative large language models (LLMs) is diversity: +when a user's prompt is under-specified, models may follow implicit assumptions +while generating a response, which may result in homogenization of the +responses, as well as certain demographic groups being under-represented or +even erased from the generated responses. In this paper, we formalize diversity +of representation in generative LLMs. We present evaluation datasets and +propose metrics to measure diversity in generated responses along people and +culture axes. We find that LLMs understand the notion of diversity, and that +they can reason and critique their own responses for that goal. This finding +motivated a new prompting technique called collective-critique and self-voting +(CCSV) to self-improve people diversity of LLMs by tapping into its diversity +reasoning capabilities, without relying on handcrafted examples or prompt +tuning. Extensive empirical experiments with both human and automated +evaluations show that our proposed approach is effective at improving people +and culture diversity, and outperforms all baseline methods by a large margin. +" +LLM4DyG: Can Large Language Models Solve Problems on Dynamic Graphs?,Zeyang Zhang,http://arxiv.org/pdf/2310.17110v1.pdf,2023-10-26,['cs.lg'],2310.17110v1.pdf," In an era marked by the increasing adoption of Large Language Models (LLMs) +for various tasks, there is a growing focus on exploring LLMs' capabilities in +handling web data, particularly graph data. Dynamic graphs, which capture +temporal network evolution patterns, are ubiquitous in real-world web data. +Evaluating LLMs' competence in understanding spatial-temporal information on +dynamic graphs is essential for their adoption in web applications, which +remains unexplored in the literature. In this paper, we bridge the gap via +proposing to evaluate LLMs' spatial-temporal understanding abilities on dynamic +graphs, to the best of our knowledge, for the first time. Specifically, we +propose the LLM4DyG benchmark, which includes nine specially designed tasks +considering the capability evaluation of LLMs from both temporal and spatial +dimensions. Then, we conduct extensive experiments to analyze the impacts of +different data generators, data statistics, prompting techniques, and LLMs on +the model performance. Finally, we propose Disentangled Spatial-Temporal +Thoughts (DST2) for LLMs on dynamic graphs to enhance LLMs' spatial-temporal +understanding abilities. 
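The category-hierarchy prompting mentioned in the Videoprompter abstract above can be approximated with one structured LLM call per class name. The JSON schema and helper below are assumptions, not the paper's implementation.

```python
# Sketch of a "tree hierarchy of categories" prompt: ask the LLM to place an
# action class under a parent category and list visual cues for it.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def describe_class(class_name: str) -> dict:
    raw = call_llm(
        "Return a JSON object with keys 'parent_category', 'description', and "
        f"'visual_cues' (a list of strings) for the action class '{class_name}'."
    )
    return json.loads(raw)  # e.g. {"parent_category": "sports", ...}
```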
Our main observations are: 1) LLMs have preliminary +spatial-temporal understanding abilities on dynamic graphs, 2) Dynamic graph +tasks show increasing difficulties for LLMs as the graph size and density +increase, while not sensitive to the time span and data generation mechanism, +3) the proposed DST2 prompting method can help to improve LLMs' +spatial-temporal understanding abilities on dynamic graphs for most tasks. The +data and codes will be open-sourced at publication time. +" +Which is better? Exploring Prompting Strategy For LLM-based Metrics,Joonghoon Kim,http://arxiv.org/pdf/2311.03754v1.pdf,2023-11-07,['cs.cl'],2311.03754v1.pdf," This paper describes the DSBA submissions to the Prompting Large Language +Models as Explainable Metrics shared task, where systems were submitted to two +tracks: small and large summarization tracks. With advanced Large Language +Models (LLMs) such as GPT-4, evaluating the quality of Natural Language +Generation (NLG) has become increasingly paramount. Traditional +similarity-based metrics such as BLEU and ROUGE have shown to misalign with +human evaluation and are ill-suited for open-ended generation tasks. To address +this issue, we explore the potential capability of LLM-based metrics, +especially leveraging open-source LLMs. In this study, wide range of prompts +and prompting techniques are systematically analyzed with three approaches: +prompting strategy, score aggregation, and explainability. Our research focuses +on formulating effective prompt templates, determining the granularity of NLG +quality scores and assessing the impact of in-context examples on LLM-based +evaluation. Furthermore, three aggregation strategies are compared to identify +the most reliable method for aggregating NLG quality scores. To examine +explainability, we devise a strategy that generates rationales for the scores +and analyzes the characteristics of the explanation produced by the open-source +LLMs. Extensive experiments provide insights regarding evaluation capabilities +of open-source LLMs and suggest effective prompting strategies. +" +Understanding and Improving Visual Prompting: A Label-Mapping Perspective,Aochuan Chen,http://arxiv.org/pdf/2211.11635v5.pdf,2022-11-21,['cs.cv'],2211.11635v5.pdf," We revisit and advance visual prompting (VP), an input prompting technique +for vision tasks. VP can reprogram a fixed, pre-trained source model to +accomplish downstream tasks in the target domain by simply incorporating +universal prompts (in terms of input perturbation patterns) into downstream +data points. Yet, it remains elusive why VP stays effective even given a +ruleless label mapping (LM) between the source classes and the target classes. +Inspired by the above, we ask: How is LM interrelated with VP? And how to +exploit such a relationship to improve its accuracy on target tasks? We peer +into the influence of LM on VP and provide an affirmative answer that a better +'quality' of LM (assessed by mapping precision and explanation) can +consistently improve the effectiveness of VP. This is in contrast to the prior +art where the factor of LM was missing. To optimize LM, we propose a new VP +framework, termed ILM-VP (iterative label mapping-based visual prompting), +which automatically re-maps the source labels to the target labels and +progressively improves the target task accuracy of VP. 
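For the visual-prompting abstract above, the core optimization loop, a universal input perturbation trained under a source-to-target label mapping, can be sketched as below. This is a simplified illustration assuming PyTorch, not the ILM-VP reference code, and it uses a fixed identity mapping; the paper's iterative re-mapping step is omitted.

```python
# Simplified sketch of label-mapping-based visual prompting in PyTorch.
# A frozen source model is reused on a target task by (1) adding a learnable
# universal perturbation to every input and (2) mapping source-class logits
# to target classes. Only the perturbation is optimized.

import torch
import torch.nn.functional as F

def train_visual_prompt(source_model, loader, num_target_classes, steps=1000, lr=0.01):
    source_model.eval()  # pre-trained model stays frozen; its weights are never updated
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # universal visual prompt
    opt = torch.optim.Adam([delta], lr=lr)
    # Naive initial label mapping: target class k <- source class k.
    # ILM-VP would periodically re-estimate this mapping; omitted here.
    mapping = torch.arange(num_target_classes)

    for step, (x, y) in enumerate(loader):
        logits_src = source_model(x + delta)   # prompted input
        logits_tgt = logits_src[:, mapping]    # remap source logits to target classes
        loss = F.cross_entropy(logits_tgt, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step >= steps:
            break
    return delta.detach(), mapping
```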
Further, when using a +contrastive language-image pretrained (CLIP) model, we propose to integrate an +LM process to assist the text prompt selection of CLIP and to improve the +target task accuracy. Extensive experiments demonstrate that our proposal +significantly outperforms state-of-the-art VP methods. As highlighted below, we +show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target +tasks, our method outperforms baselines by a substantial margin, e.g., 7.9% and +6.7% accuracy improvements in transfer learning to the target Flowers102 and +CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and +7.1% accuracy improvements on Flowers102 and DTD respectively. Our code is +available at https://github.com/OPTML-Group/ILM-VP. +" +The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms,Yuyang Du,http://arxiv.org/pdf/2307.07319v4.pdf,2023-07-14,['eess.sp'],2307.07319v4.pdf," Large language models (LLMs) have garnered significant attention across +various research disciplines, including the wireless communication community. +There have been several heated discussions on the intersection of LLMs and +wireless technologies. While recent studies have demonstrated the ability of +LLMs to generate hardware description language (HDL) code for simple +computation tasks, developing wireless prototypes and products via HDL poses +far greater challenges because of the more complex computation tasks involved. +In this paper, we aim to address this challenge by investigating the role of +LLMs in FPGA-based hardware development for advanced wireless signal +processing. We begin by exploring LLM-assisted code refactoring, reuse, and +validation, using an open-source software-defined radio (SDR) project as a case +study. Through the case study, we find that an LLM assistant can potentially +yield substantial productivity gains for researchers and developers. We then +examine the feasibility of using LLMs to generate HDL code for advanced +wireless signal processing, using the Fast Fourier Transform (FFT) algorithm as +an example. This task presents two unique challenges: the scheduling of +subtasks within the overall task and the multi-step thinking required to solve +certain arithmetic problem within the task. To address these challenges, we +employ in-context learning (ICL) and Chain-of-Thought (CoT) prompting +techniques, culminating in the successful generation of a 64-point Verilog FFT +module. Our results demonstrate the potential of LLMs for generalization and +imitation, affirming their usefulness in writing HDL code for wireless +communication systems. Overall, this work contributes to understanding the role +of LLMs in wireless communication and motivates further exploration of their +capabilities. +" +Foundation Metrics: Quantifying Effectiveness of Healthcare Conversations powered by Generative AI,Mahyar Abbasian,http://arxiv.org/pdf/2309.12444v2.pdf,2023-09-21,['cs.cl'],2309.12444v2.pdf," Generative Artificial Intelligence is set to revolutionize healthcare +delivery by transforming traditional patient care into a more personalized, +efficient, and proactive process. Chatbots, serving as interactive +conversational models, will probably drive this patient-centered transformation +in healthcare. 
Through the provision of various services, including diagnosis, +personalized lifestyle recommendations, and mental health support, the +objective is to substantially augment patient health outcomes, all the while +mitigating the workload burden on healthcare providers. The life-critical +nature of healthcare applications necessitates establishing a unified and +comprehensive set of evaluation metrics for conversational models. Existing +evaluation metrics proposed for various generic large language models (LLMs) +demonstrate a lack of comprehension regarding medical and health concepts and +their significance in promoting patients' well-being. Moreover, these metrics +neglect pivotal user-centered aspects, including trust-building, ethics, +personalization, empathy, user comprehension, and emotional support. The +purpose of this paper is to explore state-of-the-art LLM-based evaluation +metrics that are specifically applicable to the assessment of interactive +conversational models in healthcare. Subsequently, we present an comprehensive +set of evaluation metrics designed to thoroughly assess the performance of +healthcare chatbots from an end-user perspective. These metrics encompass an +evaluation of language processing abilities, impact on real-world clinical +tasks, and effectiveness in user-interactive conversations. Finally, we engage +in a discussion concerning the challenges associated with defining and +implementing these metrics, with particular emphasis on confounding factors +such as the target audience, evaluation methods, and prompt techniques involved +in the evaluation process. +" +Fill in the Blank: Exploring and Enhancing LLM Capabilities for Backward Reasoning in Math Word Problems,Aniruddha Deb,http://arxiv.org/pdf/2310.01991v1.pdf,2023-10-03,"['cs.cl', 'cs.ai', 'cs.lg', 'i.2.3']",2310.01991v1.pdf," While forward reasoning (i.e. find the answer given the question) has been +explored extensively in the recent literature, backward reasoning is relatively +unexplored. We examine the backward reasoning capabilities of LLMs on Math Word +Problems (MWPs): given a mathematical question and its answer, with some +details omitted from the question, can LLMs effectively retrieve the missing +information? + In this paper, we formally define the backward reasoning task on math word +problems and modify three datasets to evaluate this task: GSM8k, SVAMP and +MultiArith. Our findings show a significant drop in the accuracy of models on +backward reasoning compared to forward reasoning across four SOTA LLMs (GPT4, +GPT3.5, PaLM-2, and LLaMa-2). Utilizing the specific format of this task, we +propose three novel techniques that improve performance: Rephrase reformulates +the given problem into a forward reasoning problem, PAL-Tools combines the idea +of Program-Aided LLMs to produce a set of equations that can be solved by an +external solver, and Check your Work exploits the availability of natural +verifier of high accuracy in the forward direction, interleaving solving and +verification steps. Finally, realizing that each of our base methods correctly +solves a different set of problems, we propose a novel Bayesian formulation for +creating an ensemble over these base methods aided by a verifier to further +boost the accuracy by a significant margin. 
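The "Check your Work" idea in the backward-reasoning abstract above, propose the missing quantity and then verify by re-solving the forward problem, reduces to a small solve-and-verify loop. The blank marker, prompts, and `call_llm` helper are illustrative assumptions.

```python
# Sketch of a solve-and-verify loop for backward reasoning on math word
# problems: propose a value for the blank, substitute it back, and check
# that the forward answer matches the known answer.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def solve_backward(question_with_blank: str, known_answer: str, max_tries: int = 3):
    for _ in range(max_tries):
        candidate = call_llm(
            f"{question_with_blank}\nThe final answer is {known_answer}. "
            "What number should replace the blank? Reply with the number only."
        ).strip()
        # Verification: fill the blank and solve the forward problem.
        filled = question_with_blank.replace("____", candidate)
        forward = call_llm(f"{filled}\nLet's think step by step. Final answer only:")
        if forward.strip() == known_answer.strip():
            return candidate  # forward solve reproduces the given answer
    return None  # no verified candidate found
```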
Extensive experimentation +demonstrates that our techniques successively improve the performance of LLMs +on the backward reasoning task, with the final ensemble-based method resulting +in a substantial performance gain compared to the raw LLMs with standard +prompting techniques such as chain-of-thought. +" +Autonomous Tree-search Ability of Large Language Models,Zheyu Zhang,http://arxiv.org/pdf/2310.10686v1.pdf,2023-10-14,"['cs.cl', 'cs.ai']",2310.10686v1.pdf," Large Language Models have excelled in remarkable reasoning capabilities with +advanced prompting techniques, but they fall short on tasks that require +exploration, strategic foresight, and sequential decision-making. Recent works +propose to utilize external programs to define search logic, such that LLMs can +perform passive tree search to solve more challenging reasoning tasks. Though +impressive results have been achieved, there are several fundamental +limitations of these approaches. First, passive tree searches are not efficient +as they usually require multiple rounds of LLM API calls to solve one single +problem. Moreover, passive search methods are not flexible since they need +task-specific program designs. Then a natural question arises: can we maintain +the tree-search capability of LLMs without the aid of external programs, and +can still generate responses that clearly demonstrate the process of a +tree-structure search? To this end, we propose a new concept called autonomous +tree-search ability of LLM, which can automatically generate a response +containing search trajectories for the correct answer. Concretely, we perform +search trajectories using capable LLM API via a fixed system prompt, allowing +them to perform autonomous tree-search (ATS) right out of the box. Experiments +on 4 puzzle games demonstrate our method can achieve huge improvements. The +ATS-BFS method outperforms the Chain of Thought approach by achieving an +average accuracy improvement of 33%. Compared to Tree of Thoughts, it requires +65.6% or 47.7% less GPT-api cost to attain a comparable level of accuracy. +Moreover, we have collected data using the ATS prompt method and fine-tuned +LLaMA. This approach yield a greater improvement compared to the ones +fine-tuned on CoT data. Specifically, it outperforms CoT-tuned LLaMAs by an +average of 40.6% and 38.5% for LLaMA2-7B and LLaMA2-13B, respectively. +" +In-Context Impersonation Reveals Large Language Models' Strengths and Biases,Leonard Salewski,http://arxiv.org/pdf/2305.14930v1.pdf,2023-05-24,"['cs.ai', 'cs.cl', 'cs.lg']",2305.14930v1.pdf," In everyday conversations, humans can take on different roles and adapt their +vocabulary to their chosen roles. We explore whether LLMs can take on, that is +impersonate, different roles when they generate text in-context. We ask LLMs to +assume different personas before solving vision and language tasks. We do this +by prefixing the prompt with a persona that is associated either with a social +identity or domain expertise. In a multi-armed bandit task, we find that LLMs +pretending to be children of different ages recover human-like developmental +stages of exploration. In a language-based reasoning task, we find that LLMs +impersonating domain experts perform better than LLMs impersonating non-domain +experts. Finally, we test whether LLMs' impersonations are complementary to +visual information when describing different categories. 
We find that +impersonation can improve performance: an LLM prompted to be a bird expert +describes birds better than one prompted to be a car expert. However, +impersonation can also uncover LLMs' biases: an LLM prompted to be a man +describes cars better than one prompted to be a woman. These findings +demonstrate that LLMs are capable of taking on diverse roles and that this +in-context impersonation can be used to uncover their hidden strengths and +biases. +" +ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts,Bilel Benjdira,http://arxiv.org/pdf/2308.11236v2.pdf,2023-08-22,"['cs.ro', 'cs.ai']",2308.11236v2.pdf," In this paper, we argue that the next generation of robots can be commanded +using only Language Models' prompts. Every prompt interrogates separately a +specific Robotic Modality via its Modality Language Model (MLM). A central Task +Modality mediates the whole communication to execute the robotic mission via a +Large Language Model (LLM). This paper gives this new robotic design pattern +the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies +this PRM design pattern in building a new robotic framework named +ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only +two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural +language, the visual semantic features related to the task under consideration +(Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic +reaction to the visual description (Task Modality). The framework automates all +the mechanisms behind these two prompts. The framework enables the robot to +address complex real-world scenarios by processing visual data, making informed +decisions, and carrying out actions automatically. The framework comprises one +generic vision module and two independent ROS nodes. As a test application, we +used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction +on the roads and makes real-time vocal notifications to the driver. We showed +how ROSGPT_Vision significantly reduced the development cost compared to +traditional methods. We demonstrated how to improve the quality of the +application by optimizing the prompting strategies, without delving into +technical details. ROSGPT_Vision is shared with the community (link: +https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this +direction and to build more robotic frameworks that implement the PRM design +pattern and enables controlling robots using only prompts. +" +ProgPrompt: Generating Situated Robot Task Plans using Large Language Models,Ishika Singh,http://arxiv.org/pdf/2209.11302v1.pdf,2022-09-22,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.lg']",2209.11302v1.pdf," Task planning can require defining myriad domain knowledge about the world in +which a robot needs to act. To ameliorate that effort, large language models +(LLMs) can be used to score potential next actions during task planning, and +even generate action sequences directly, given an instruction in natural +language with no additional domain information. However, such methods either +require enumerating all possible next steps for scoring, or generate free-form +text that may contain actions not possible on a given robot in its current +context. We present a programmatic LLM prompt structure that enables plan +generation functional across situated environments, robot capabilities, and +tasks. 
Our key insight is to prompt the LLM with program-like specifications of +the available actions and objects in an environment, as well as with example +programs that can be executed. We make concrete recommendations about prompt +structure and generation constraints through ablation experiments, demonstrate +state of the art success rates in VirtualHome household tasks, and deploy our +method on a physical robot arm for tabletop tasks. Website at +progprompt.github.io +" +Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models,Renat Aksitov,http://arxiv.org/pdf/2302.05578v2.pdf,2023-02-11,"['cs.cl', 'cs.ai']",2302.05578v2.pdf," Despite recent progress, it has been difficult to prevent semantic +hallucinations in generative Large Language Models. One common solution to this +is augmenting LLMs with a retrieval system and making sure that the generated +output is attributable to the retrieved information. Given this new added +constraint, it is plausible to expect that the overall quality of the output +will be affected, for example, in terms of fluency. Can scaling language models +help? + Here we examine the relationship between fluency and attribution in LLMs +prompted with retrieved evidence in knowledge-heavy dialog settings. Our +experiments were implemented with a set of auto-metrics that are aligned with +human preferences. They were used to evaluate a large set of generations, +produced under varying parameters of LLMs and supplied context. + We show that larger models tend to do much better in both fluency and +attribution, and that (naively) using top-k retrieval versus top-1 retrieval +improves attribution but hurts fluency. We next propose a recipe that could +allow smaller models to both close the gap with larger models and preserve the +benefits of top-k retrieval while avoiding its drawbacks. +" +Dictionary-based Phrase-level Prompting of Large Language Models for Machine Translation,Marjan Ghazvininejad,http://arxiv.org/pdf/2302.07856v1.pdf,2023-02-15,"['cs.cl', 'cs.lg']",2302.07856v1.pdf," Large language models (LLMs) demonstrate remarkable machine translation (MT) +abilities via prompting, even though they were not explicitly trained for this +task. However, even given the incredible quantities of data they are trained +on, LLMs can struggle to translate inputs with rare words, which are common in +low resource or domain transfer scenarios. We show that LLM prompting can +provide an effective solution for rare words as well, by using prior knowledge +from bilingual dictionaries to provide control hints in the prompts. We propose +a novel method, DiPMT, that provides a set of possible translations for a +subset of the input words, thereby enabling fine-grained phrase-level prompted +control of the LLM. Extensive experiments show that DiPMT outperforms the +baseline both in low-resource MT, as well as for out-of-domain MT. We further +provide a qualitative analysis of the benefits and limitations of this +approach, including the overall level of controllability that is achieved. +" +UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers,Jon Saad-Falcon,http://arxiv.org/pdf/2303.00807v3.pdf,2023-03-01,"['cs.ir', 'cs.cl']",2303.00807v3.pdf," Many information retrieval tasks require large labeled datasets for +fine-tuning. However, such datasets are often unavailable, and their utility +for real-world applications can diminish quickly due to domain shifts. 
To +address this challenge, we develop and motivate a method for using large +language models (LLMs) to generate large numbers of synthetic queries cheaply. +The method begins by generating a small number of synthetic queries using an +expensive LLM. After that, a much less expensive one is used to create large +numbers of synthetic queries, which are used to fine-tune a family of reranker +models. These rerankers are then distilled into a single efficient retriever +for use in the target domain. We show that this technique boosts zero-shot +accuracy in long-tail domains and achieves substantially lower latency than +standard reranking methods. +" +LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments,Tae Soo Kim,http://arxiv.org/pdf/2303.15125v1.pdf,2023-03-27,"['cs.hc', 'cs.cl']",2303.15125v1.pdf," Large language models (LLMs) can enhance writing by automating or supporting +specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). +Leveraging this capability, a collection of interfaces have been developed that +provide LLM-powered tools for specific writing tasks. However, these interfaces +provide limited support for writers to create personal tools for their own +unique tasks, and may not comprehensively fulfill a writer's needs -- requiring +them to continuously switch between interfaces during writing. In this work, we +envision LMCanvas, an interface that enables writers to create their own +LLM-powered writing tools and arrange their personal writing environment by +interacting with ""blocks"" in a canvas. In this interface, users can create text +blocks to encapsulate writing and LLM prompts, model blocks for model parameter +configurations, and connect these to create pipeline blocks that output +generations. In this workshop paper, we discuss the design for LMCanvas and our +plans to develop this concept. +" +SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting,Xiaoying Zhang,http://arxiv.org/pdf/2305.09067v1.pdf,2023-05-15,['cs.cl'],2305.09067v1.pdf," Building end-to-end task bots and maintaining their integration with new +functionalities using minimal human efforts is a long-standing challenge in +dialog research. Recently large language models (LLMs) have demonstrated +exceptional proficiency in conversational engagement and adherence to +instructions across various downstream tasks. In this work, we introduce +SGP-TOD, Schema-Guided Prompting for building Task-Oriented Dialog systems +effortlessly based on LLMs. Utilizing the symbolic knowledge -- task schema, we +instruct fixed LLMs to generate appropriate responses on novel tasks, +circumventing the need for training data. Specifically, SGP-TOD comprises three +components: a LLM for engaging with users, a DST Prompter to aid the LLM with +dialog state tracking, which is then used to retrieve database items, and a +Policy Prompter to elicit proper responses adhering to the provided dialog +policy. Experimental results on Multiwoz, RADDLE and STAR datasets show that +our training-free strategy SGP-TOD, without any task-specific data, yields +state-of-the-art (SOTA) zero-shot performance, greatly surpasses the few-shot +approaches. In a domain-extension setting, SGP-TOD aptly adapts to new +functionalities by merely adding supplementary schema rules. We make our code +and data publicly available. 
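The schema-guided pipeline in the SGP-TOD abstract above, a DST prompter, a database lookup, and a policy prompter wrapped around a fixed LLM, might be wired together roughly as follows. The prompt texts, schema format, and helpers are assumptions rather than the released code.

```python
# Sketch of a schema-guided, training-free task-bot turn: track the dialog
# state with one prompt, query a database, then generate a response that
# follows a policy skeleton.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def query_database(state: str) -> str:
    return "[]"  # placeholder lookup over the task database

def bot_turn(history: str, task_schema: str, policy_skeleton: str) -> str:
    # DST prompter: extract the belief state using the task schema.
    state = call_llm(
        f"Task schema:\n{task_schema}\nDialog so far:\n{history}\n"
        "List the slot=value pairs the user has specified."
    )
    results = query_database(state)
    # Policy prompter: respond according to the policy rules and DB results.
    return call_llm(
        f"Policy rules:\n{policy_skeleton}\nDialog so far:\n{history}\n"
        f"Belief state: {state}\nDatabase results: {results}\n"
        "Write the system's next response."
    )
```

Extending the bot to a new function then amounts to editing `task_schema` and `policy_skeleton`, with no retraining.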
+" +TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks,Shubhra Kanti Karmaker Santu,http://arxiv.org/pdf/2305.11430v2.pdf,2023-05-19,"['cs.ai', 'cs.cl', 'cs.ir', 'cs.lg', 'i.2.7']",2305.11430v2.pdf," While LLMs have shown great success in understanding and generating text in +traditional conversational settings, their potential for performing ill-defined +complex tasks is largely under-studied. Indeed, we are yet to conduct +comprehensive benchmarking studies with multiple LLMs that are exclusively +focused on a complex task. However, conducting such benchmarking studies is +challenging because of the large variations in LLMs' performance when different +prompt types/styles are used and different degrees of detail are provided in +the prompts. To address this issue, the paper proposes a general taxonomy that +can be used to design prompts with specific properties in order to perform a +wide range of complex tasks. This taxonomy will allow future benchmarking +studies to report the specific categories of prompts used as part of the study, +enabling meaningful comparisons across different studies. Also, by establishing +a common standard through this taxonomy, researchers will be able to draw more +accurate conclusions about LLMs' performance on a specific complex task. +" +S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering,Fangyu Lei,http://arxiv.org/pdf/2305.11725v1.pdf,2023-05-19,['cs.cl'],2305.11725v1.pdf," Answering multi-hop questions over hybrid factual knowledge from the given +text and table (TextTableQA) is a challenging task. Existing models mainly +adopt a retriever-reader framework, which have several deficiencies, such as +noisy labeling in training retriever, insufficient utilization of heterogeneous +information over text and table, and deficient ability for different reasoning +operations. In this paper, we propose a three-stage TextTableQA framework +S3HQA, which comprises of retriever, selector, and reasoner. We use a retriever +with refinement training to solve the noisy labeling problem. Then, a hybrid +selector considers the linked relationships between heterogeneous data to +select the most relevant factual knowledge. For the final stage, instead of +adapting a reading comprehension module like in previous methods, we employ a +generation-based reasoner to obtain answers. This includes two approaches: a +row-wise generator and an LLM prompting generator~(first time used in this +task). The experimental results demonstrate that our method achieves +competitive results in the few-shot setting. When trained on the full dataset, +our approach outperforms all baseline methods, ranking first on the HybridQA +leaderboard. +" +LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models,Yen-Ting Lin,http://arxiv.org/pdf/2305.13711v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13711v1.pdf," We propose LLM-Eval, a unified multi-dimensional automatic evaluation method +for open-domain conversations with large language models (LLMs). Existing +evaluation methods often rely on human annotations, ground-truth responses, or +multiple LLM prompts, which can be expensive and time-consuming. To address +these issues, we design a single prompt-based evaluation method that leverages +a unified evaluation schema to cover multiple dimensions of conversation +quality in a single model call. 
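A unified single-call evaluation prompt of the kind described in the LLM-Eval abstract above can be approximated by asking for all quality dimensions as one JSON object. The dimension names, 0-5 scale, and helper shown are assumptions, not the exact LLM-Eval schema.

```python
# Sketch of a single-prompt, multi-dimensional dialogue evaluator: one LLM
# call returns scores for several quality dimensions as JSON.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def evaluate_turn(context: str, response: str) -> dict:
    raw = call_llm(
        "Score the response on appropriateness, content, grammar, and "
        "relevance, each from 0 to 5. Return only a JSON object.\n"
        f"Context:\n{context}\nResponse:\n{response}"
    )
    return json.loads(raw)  # e.g. {"appropriateness": 5, "content": 4, ...}
```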
We extensively evaluate the performance of +LLM-Eval on various benchmark datasets, demonstrating its effectiveness, +efficiency, and adaptability compared to state-of-the-art evaluation methods. +Our analysis also highlights the importance of choosing suitable LLMs and +decoding strategies for accurate evaluation results. LLM-Eval offers a +versatile and robust solution for evaluating open-domain conversation systems, +streamlining the evaluation process and providing consistent performance across +diverse scenarios. +" +AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With Large Language Models,Siqi Ouyang,http://arxiv.org/pdf/2305.15064v3.pdf,2023-05-24,['cs.cl'],2305.15064v3.pdf," Recent large language models (LLMs) are promising for making decisions in +grounded environments. However, LLMs frequently fail in complex decision-making +tasks due to the misalignment between the pre-trained knowledge in LLMs and the +actual rules in the environment. Existing methods require either costly +gradient computation or lengthy in-context demonstrations. In this paper, we +propose AutoPlan, an approach to guide LLM-based agents to accomplish +interactive decision-making tasks. AutoPlan augments the LLM prompt with a +task-solving plan and optimizes it through iterative experience collection and +reflection. Our experiments show that AutoPlan, though using no in-context +demonstrations, achieves success rates on par with the baselines using +human-written demonstrations on ALFWorld and even outperforms them by 8% on +HotpotQA. The code is available at https://github.com/owaski/AutoPlan. +" +ChatGPT for PLC/DCS Control Logic Generation,Heiko Koziolek,http://arxiv.org/pdf/2305.15809v1.pdf,2023-05-25,"['cs.se', 'cs.ai', 'd.2.2']",2305.15809v1.pdf," Large language models (LLMs) providing generative AI have become popular to +support software engineers in creating, summarizing, optimizing, and +documenting source code. It is still unknown how LLMs can support control +engineers using typical control programming languages in programming tasks. +Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code +generation but did not yet tackle control logic programming. The contribution +of this paper is an exploratory study, for which we created 100 LLM prompts in +10 representative categories to analyze control logic generation for of PLCs +and DCS from natural language. We tested the prompts by generating answers with +ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3 +Structured Text code in many cases and demonstrated useful reasoning skills +that could boost control engineer productivity. Our prompt collection is the +basis for a more formal LLM benchmark to test and compare such models for +control logic generation. +" +AdaPlanner: Adaptive Planning from Feedback with Language Models,Haotian Sun,http://arxiv.org/pdf/2305.16653v1.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.lg']",2305.16653v1.pdf," Large language models (LLMs) have recently demonstrated the potential in +acting as autonomous agents for sequential decision-making tasks. However, most +existing methods either take actions greedily without planning or rely on +static plans that are not adaptable to environmental feedback. Consequently, +the sequential decision-making performance of LLM agents degenerates with +problem complexity and plan horizons increase. 
We propose a closed-loop +approach, AdaPlanner, which allows the LLM agent to refine its self-generated +plan adaptively in response to environmental feedback. In AdaPlanner, the LLM +agent adaptively refines its plan from feedback with both in-plan and +out-of-plan refinement strategies. To mitigate hallucination, we develop a +code-style LLM prompt structure that facilitates plan generation across a +variety of tasks, environments, and agent capabilities. Furthermore, we propose +a skill discovery mechanism that leverages successful plans as few-shot +exemplars, enabling the agent to plan and refine with fewer task +demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments +demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and +4.11% while utilizing 2x and 600x fewer samples, respectively. +" +Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures,Yue Zhen,http://arxiv.org/pdf/2306.05171v1.pdf,2023-06-08,"['cs.ro', 'cs.ai']",2306.05171v1.pdf," Traditional robot task planning methods face challenges when dealing with +highly unstructured environments and complex tasks. We propose a task planning +method that combines human expertise with an LLM and have designed an LLM +prompt template, Think_Net_Prompt, with stronger expressive power to represent +structured professional knowledge. We further propose a method to progressively +decompose tasks and generate a task tree to reduce the planning volume for each +task, and we have designed a strategy to decouple robot task planning. By +dividing different planning entities and separating the task from the actual +machine binding process, the task planning process becomes more flexible. +Research results show that our method performs well in handling specified code +formats, understanding the relationship between tasks and subtasks, and +extracting parameters from text descriptions. However, there are also problems +such as limited complexity of task logic handling, ambiguity in the quantity of +parts and the precise location of assembly. Improving the precision of task +description and cognitive structure can bring certain improvements. +https://github.com/NOMIzy/Think_Net_Prompt +" +SayTap: Language to Quadrupedal Locomotion,Yujin Tang,http://arxiv.org/pdf/2306.07580v3.pdf,2023-06-13,['cs.ro'],2306.07580v3.pdf," Large language models (LLMs) have demonstrated the potential to perform +high-level planning. Yet, it remains a challenge for LLMs to comprehend +low-level commands, such as joint angle targets or motor torques. This paper +proposes an approach to use foot contact patterns as an interface that bridges +human commands in natural language and a locomotion controller that outputs +these low-level commands. This results in an interactive system for quadrupedal +robots that allows the users to craft diverse locomotion behaviors flexibly. We +contribute an LLM prompt design, a reward function, and a method to expose the +controller to the feasible distribution of contact patterns. The results are a +controller capable of achieving diverse locomotion patterns that can be +transferred to real robot hardware. Compared with other design choices, the +proposed approach enjoys more than 50% success rate in predicting the correct +contact patterns and can solve 10 more tasks out of a total of 30 tasks. Our +project site is: https://saytap.github.io. 
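The contact-pattern interface in the SayTap abstract above implies only a small amount of glue code: prompt the LLM for a binary foot-contact matrix and parse it for the low-level controller. The prompt wording, horizon length, and parser below are assumptions.

```python
# Sketch of a language-to-contact-pattern interface: the LLM is asked to emit
# a binary contact pattern (4 feet x T timesteps) that a locomotion
# controller can consume.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def command_to_contacts(command: str, horizon: int = 20) -> list:
    raw = call_llm(
        f"Command: {command}\n"
        f"Output a foot contact pattern as 4 lines (FL, FR, RL, RR), each with "
        f"{horizon} characters: '1' for stance and '0' for swing."
    )
    pattern = [[int(c) for c in line.strip()] for line in raw.strip().splitlines()[:4]]
    assert len(pattern) == 4 and all(len(row) == horizon for row in pattern), "malformed pattern"
    return pattern  # handed to the low-level locomotion controller
```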
+" +Large Language Models Enable Few-Shot Clustering,Vijay Viswanathan,http://arxiv.org/pdf/2307.00524v1.pdf,2023-07-02,['cs.cl'],2307.00524v1.pdf," Unlike traditional unsupervised clustering, semi-supervised clustering allows +users to provide meaningful structure to the data, which helps the clustering +algorithm to match the user's intent. Existing approaches to semi-supervised +clustering require a significant amount of feedback from an expert to improve +the clusters. In this paper, we ask whether a large language model can amplify +an expert's guidance to enable query-efficient, few-shot semi-supervised text +clustering. We show that LLMs are surprisingly effective at improving +clustering. We explore three stages where LLMs can be incorporated into +clustering: before clustering (improving input features), during clustering (by +providing constraints to the clusterer), and after clustering (using LLMs +post-correction). We find incorporating LLMs in the first two stages can +routinely provide significant improvements in cluster quality, and that LLMs +enable a user to make trade-offs between cost and accuracy to produce desired +clusters. We release our code and LLM prompts for the public to use. +" +GEAR: Augmenting Language Models with Generalizable and Efficient Tool Resolution,Yining Lu,http://arxiv.org/pdf/2307.08775v1.pdf,2023-07-17,['cs.ai'],2307.08775v1.pdf," Augmenting large language models (LLM) to use external tools enhances their +performance across a variety of tasks. However, prior works over-rely on +task-specific demonstration of tool use that limits their generalizability and +computational cost due to making many calls to large-scale LLMs. We introduce +GEAR, a computationally efficient query-tool grounding algorithm that is +generalizable to various tasks that require tool use while not relying on +task-specific demonstrations. GEAR achieves better efficiency by delegating +tool grounding and execution to small language models (SLM) and LLM, +respectively; while leveraging semantic and pattern-based evaluation at both +question and answer levels for generalizable tool grounding. We evaluate GEAR +on 14 datasets across 6 downstream tasks, demonstrating its strong +generalizability to novel tasks, tools and different SLMs. Despite offering +more efficiency, GEAR achieves higher precision in tool grounding compared to +prior strategies using LLM prompting, thus improving downstream accuracy at a +reduced computational cost. For example, we demonstrate that GEAR-augmented +GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of +better tool use. +" +Simple LLM Prompting is State-of-the-Art for Robust and Multilingual Dialogue Evaluation,John Mendonça,http://arxiv.org/pdf/2308.16797v2.pdf,2023-08-31,['cs.cl'],2308.16797v2.pdf," Despite significant research effort in the development of automatic dialogue +evaluation metrics, little thought is given to evaluating dialogues other than +in English. At the same time, ensuring metrics are invariant to semantically +similar responses is also an overlooked topic. In order to achieve the desired +properties of robustness and multilinguality for dialogue evaluation metrics, +we propose a novel framework that takes advantage of the strengths of current +evaluation models with the newly-established paradigm of prompting Large +Language Models (LLMs). 
Empirical results show our framework achieves state of +the art results in terms of mean Spearman correlation scores across several +benchmarks and ranks first place on both the Robust and Multilingual tasks of +the DSTC11 Track 4 ""Automatic Evaluation Metrics for Open-Domain Dialogue +Systems"", proving the evaluation capabilities of prompted LLMs. +" +"MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering over Text, Tables and Images",Weihao Liu,http://arxiv.org/pdf/2309.04790v1.pdf,2023-09-09,['cs.cl'],2309.04790v1.pdf," In the real world, knowledge often exists in a multimodal and heterogeneous +form. Addressing the task of question answering with hybrid data types, +including text, tables, and images, is a challenging task (MMHQA). Recently, +with the rise of large language models (LLM), in-context learning (ICL) has +become the most popular way to solve QA problems. We propose MMHQA-ICL +framework for addressing this problems, which includes stronger heterogeneous +data retriever and an image caption module. Most importantly, we propose a +Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage +their powerful performance in this task. We are the first to use end-to-end LLM +prompting method for this task. Experimental results demonstrate that our +framework outperforms all baselines and methods trained on the full dataset, +achieving state-of-the-art results under the few-shot setting on the +MultimodalQA dataset. +" +Empowering Private Tutoring by Chaining Large Language Models,Yulin Chen,http://arxiv.org/pdf/2309.08112v1.pdf,2023-09-15,['cs.hc'],2309.08112v1.pdf," Artificial intelligence has been applied in various aspects of online +education to facilitate teaching and learning. However, few approaches has been +made toward a complete AI-powered tutoring system. In this work, we explore the +development of a full-fledged intelligent tutoring system powered by +state-of-the-art large language models (LLMs), covering automatic course +planning and adjusting, tailored instruction, and flexible quiz evaluation. To +make the system robust to prolonged interaction and cater to individualized +education, the system is decomposed into three inter-connected core +processes-interaction, reflection, and reaction. Each process is implemented by +chaining LLM-powered tools along with dynamically updated memory modules. Tools +are LLMs prompted to execute one specific task at a time, while memories are +data storage that gets updated during education process. Statistical results +from learning logs demonstrate the effectiveness and mechanism of each tool +usage. Subjective feedback from human users reveal the usability of each +function, and comparison with ablation systems further testify the benefits of +the designed processes in long-term interaction. +" +Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering,Yike Wu,http://arxiv.org/pdf/2309.11206v2.pdf,2023-09-20,"['cs.cl', 'cs.ai']",2309.11206v2.pdf," Despite their competitive performance on knowledge-intensive tasks, large +language models (LLMs) still have limitations in memorizing all world knowledge +especially long tail knowledge. In this paper, we study the KG-augmented +language model approach for solving the knowledge graph question answering +(KGQA) task that requires rich world knowledge. Existing work has shown that +retrieving KG knowledge to enhance LLMs prompting can significantly improve +LLMs performance in KGQA. 
However, their approaches lack a well-formed +verbalization of KG knowledge, i.e., they ignore the gap between KG +representations and textual representations. To this end, we propose an +answer-sensitive KG-to-Text approach that can transform KG knowledge into +well-textualized statements most informative for KGQA. Based on this approach, +we propose a KG-to-Text enhanced LLMs framework for solving the KGQA task. +Experiments on several KGQA benchmarks show that the proposed KG-to-Text +augmented LLMs approach outperforms previous KG-augmented LLMs approaches +regarding answer accuracy and usefulness of knowledge statements. +" +LPML: LLM-Prompting Markup Language for Mathematical Reasoning,Ryutaro Yamauchi,http://arxiv.org/pdf/2309.13078v2.pdf,2023-09-21,"['cs.ai', 'cs.lg', 'cs.pl']",2309.13078v2.pdf," In utilizing large language models (LLMs) for mathematical reasoning, +addressing the errors in the reasoning and calculation present in the generated +text by LLMs is a crucial challenge. In this paper, we propose a novel +framework that integrates the Chain-of-Thought (CoT) method with an external +tool (Python REPL). We discovered that by prompting LLMs to generate structured +text in XML-like markup language, we could seamlessly integrate CoT and the +external tool and control the undesired behaviors of LLMs. With our approach, +LLMs can utilize Python computation to rectify errors within CoT. We applied +our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and +demonstrated that combining CoT and Python REPL through the markup language +enhances the reasoning capability of LLMs. Our approach enables LLMs to write +the markup language and perform advanced mathematical reasoning using only +zero-shot prompting. +" +HeaP: Hierarchical Policies for Web Actions using LLMs,Paloma Sodhi,http://arxiv.org/pdf/2310.03720v1.pdf,2023-10-05,['cs.lg'],2310.03720v1.pdf," Large language models (LLMs) have demonstrated remarkable capabilities in +performing a range of instruction following tasks in few and zero-shot +settings. However, teaching LLMs to perform tasks on the web presents +fundamental challenges -- combinatorially large open-world tasks and variations +across web interfaces. We tackle these challenges by leveraging LLMs to +decompose web tasks into a collection of sub-tasks, each of which can be solved +by a low-level, closed-loop policy. These policies constitute a shared grammar +across tasks, i.e., new web tasks can be expressed as a composition of these +policies. We propose a novel framework, Hierarchical Policies for Web Actions +using LLMs (HeaP), that learns a set of hierarchical LLM prompts from +demonstrations for planning high-level tasks and executing them via a sequence +of low-level policies. We evaluate HeaP against a range of baselines on a suite +of web tasks, including MiniWoB++, WebArena, a mock airline CRM, as well as +live website interactions, and show that it is able to outperform prior works +using orders of magnitude less data. +" +OptiMUS: Optimization Modeling Using MIP Solvers and large language models,Ali AhmadiTeshnizi,http://arxiv.org/pdf/2310.06116v2.pdf,2023-10-09,['cs.ai'],2310.06116v2.pdf," Optimization problems are pervasive across various sectors, from +manufacturing and distribution to healthcare. 
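The retrieve-rewrite-answer pattern in the KGQA abstract above reduces to three stages: retrieve triples, verbalize them, then answer over the verbalized knowledge. The retrieval stub, prompts, and helper below are illustrative assumptions.

```python
# Sketch of a retrieve-rewrite-answer pipeline for KGQA: KG triples are
# verbalized into fluent statements before being used as answering context.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def retrieve_triples(question: str) -> list:
    return []  # placeholder: subgraph retrieval from the knowledge graph

def kgqa(question: str) -> str:
    triples = retrieve_triples(question)
    triple_text = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    # Rewrite: verbalize the triples into statements useful for the question.
    statements = call_llm(
        f"Question: {question}\nTriples:\n{triple_text}\n"
        "Rewrite these triples as short natural-language facts relevant to the question."
    )
    # Answer: condition the LLM on the verbalized knowledge.
    return call_llm(f"Facts:\n{statements}\nQuestion: {question}\nAnswer:")
```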
However, most such problems are +still solved heuristically by hand rather than optimally by state-of-the-art +solvers, as the expertise required to formulate and solve these problems limits +the widespread adoption of optimization tools and techniques. We introduce +OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and +solve MILP problems from their natural language descriptions. OptiMUS is +capable of developing mathematical models, writing and debugging solver code, +developing tests, and checking the validity of generated solutions. To +benchmark our agent, we present NLP4LP, a novel dataset of linear programming +(LP) and mixed integer linear programming (MILP) problems. Our experiments +demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM +prompting strategy. OptiMUS code and NLP4LP dataset are available at +\href{https://github.com/teshnizi/OptiMUS}{https://github.com/teshnizi/OptiMUS} +" +A ML-LLM pairing for better code comment classification,Hanna Abi Akl,http://arxiv.org/pdf/2310.10275v1.pdf,2023-10-13,"['cs.se', 'cs.ai']",2310.10275v1.pdf," The ""Information Retrieval in Software Engineering (IRSE)"" at FIRE 2023 +shared task introduces code comment classification, a challenging task that +pairs a code snippet with a comment that should be evaluated as either useful +or not useful to the understanding of the relevant code. We answer the code +comment classification shared task challenge by providing a two-fold +evaluation: from an algorithmic perspective, we compare the performance of +classical machine learning systems and complement our evaluations from a +data-driven perspective by generating additional data with the help of large +language model (LLM) prompting to measure the potential increase in +performance. Our best model, which took second place in the shared task, is a +Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a +1.5% overall increase in performance on the data generated by the LLM. +" +Multi-stage Large Language Model Correction for Speech Recognition,Jie Pu,http://arxiv.org/pdf/2310.11532v1.pdf,2023-10-17,"['cs.cl', 'eess.as']",2310.11532v1.pdf," In this paper, we investigate the usage of large language models (LLMs) to +improve the performance of competitive speech recognition systems. Different +from traditional language models that focus on one single data domain, the rise +of LLMs brings us the opportunity to push the limit of state-of-the-art ASR +performance, and at the same time to achieve higher robustness and generalize +effectively across multiple domains. Motivated by this, we propose a novel +multi-stage approach to combine traditional language model re-scoring and LLM +prompting. Specifically, the proposed method has two stages: the first stage +uses a language model to re-score an N-best list of ASR hypotheses and run a +confidence check; The second stage uses prompts to a LLM to perform ASR error +correction on less confident results from the first stage. Our experimental +results demonstrate the effectiveness of the proposed method by showing a 10% ~ +20% relative improvement in WER over a competitive ASR system -- across +multiple test domains. +" +PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' Workflows,Savvas Petridis,http://arxiv.org/pdf/2310.15435v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15435v1.pdf," Prototyping AI applications is notoriously difficult. 
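The two-stage speech-recognition correction described a few abstracts above, language-model rescoring with a confidence check followed by LLM prompting only for low-confidence hypotheses, can be sketched as below. The scoring stub, margin threshold, and prompt are assumptions.

```python
# Sketch of a two-stage correction scheme: rescore an N-best list, and only
# send low-confidence cases to an LLM for error correction.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def lm_score(hypothesis: str) -> float:
    return 0.0  # placeholder: log-probability from a conventional language model

def correct_transcript(nbest: list, confidence_margin: float = 2.0) -> str:
    scored = sorted(nbest, key=lm_score, reverse=True)
    best = scored[0]
    runner_up = scored[1] if len(scored) > 1 else scored[0]
    # Confidence check: a small margin between the top hypotheses means the
    # rescorer is unsure, so fall back to LLM-based correction.
    if lm_score(best) - lm_score(runner_up) >= confidence_margin:
        return best
    return call_llm(
        "The following ASR hypotheses may contain recognition errors:\n"
        + "\n".join(scored[:5])
        + "\nProduce the most likely correct transcript."
    )
```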
While large language +model (LLM) prompting has dramatically lowered the barriers to AI prototyping, +designers are still prototyping AI functionality and UI separately. We +investigate how coupling prompt and UI design affects designers' workflows. +Grounding this research, we developed PromptInfuser, a Figma plugin that +enables users to create semi-functional mockups, by connecting UI elements to +the inputs and outputs of prompts. In a study with 14 designers, we compare +PromptInfuser to designers' current AI-prototyping workflow. PromptInfuser was +perceived to be significantly more useful for communicating product ideas, more +capable of producing prototypes that realistically represent the envisioned +artifact, more efficient for prototyping, and more helpful for anticipating UI +issues and technical constraints. PromptInfuser encouraged iteration over +prompt and UI together, which helped designers identify UI and prompt +incompatibilities and reflect upon their total solution. Together, these +findings inform future systems for prototyping AI applications. +" +OmniFill: Domain-Agnostic Form Filling Suggestions Using Multi-Faceted Context,Timothy J. Aveni,http://arxiv.org/pdf/2310.17826v1.pdf,2023-10-27,['cs.hc'],2310.17826v1.pdf," Predictive suggestion systems offer contextually-relevant text entry +completions. Existing approaches, like autofill, often excel in +narrowly-defined domains but fail to generalize to arbitrary workflows. We +introduce a conceptual framework to analyze the compound demands of a +particular suggestion context, yielding unique opportunities for large language +models (LLMs) to infer suggestions for a wide range of domain-agnostic +form-filling tasks that were out of reach with prior approaches. We explore +these opportunities in OmniFill, a prototype that collects multi-faceted +context including browsing and text entry activity to construct an LLM prompt +that offers suggestions in situ for arbitrary structured text entry interfaces. +Through a user study with 18 participants, we found that OmniFill offered +valuable suggestions and we identified four themes that characterize users' +behavior and attitudes: an ""opportunistic scrapbooking"" approach; a trust +placed in the system; value in partial success; and a need for visibility into +prompt context. +" +Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models,Ran Xu,http://arxiv.org/pdf/2311.00287v1.pdf,2023-11-01,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",2311.00287v1.pdf," Clinical natural language processing requires methods that can address +domain-specific challenges, such as complex medical terminology and clinical +contexts. Recently, large language models (LLMs) have shown promise in this +domain. Yet, their direct deployment can lead to privacy issues and are +constrained by resources. To address this challenge, we delve into synthetic +clinical text generation using LLMs for clinical NLP tasks. We propose an +innovative, resource-efficient approach, ClinGen, which infuses knowledge into +the process. Our model involves clinical knowledge extraction and +context-informed LLM prompting. Both clinical topics and writing styles are +drawn from external domain-specific knowledge graphs and LLMs to guide data +generation. 
Our extensive empirical study across 7 clinical NLP tasks and 16 +datasets reveals that ClinGen consistently enhances performance across various +tasks, effectively aligning the distribution of real datasets and significantly +enriching the diversity of generated training instances. We will publish our +code and all the generated data in \url{https://github.com/ritaranx/ClinGen}. +" +Promptagator: Few-shot Dense Retrieval From 8 Examples,Zhuyun Dai,http://arxiv.org/pdf/2209.11755v1.pdf,2022-09-23,"['cs.cl', 'cs.ir']",2209.11755v1.pdf," Much recent research on information retrieval has focused on how to transfer +from one task (typically with abundant supervised data) to various other tasks +where supervision is limited, with the implicit assumption that it is possible +to generalize from one task to all the rest. However, this overlooks the fact +that there are many diverse and unique retrieval tasks, each targeting +different search intents, queries, and search domains. In this paper, we +suggest to work on Few-shot Dense Retrieval, a setting where each task comes +with a short description and a few examples. To amplify the power of a few +examples, we propose Prompt-base Query Generation for Retriever (Promptagator), +which leverages large language models (LLM) as a few-shot query generator, and +creates task-specific retrievers based on the generated data. Powered by LLM's +generalization ability, Promptagator makes it possible to create task-specific +end-to-end retrievers solely based on a few examples {without} using Natural +Questions or MS MARCO to train %question generators or dual encoders. +Surprisingly, LLM prompting with no more than 8 examples allows dual encoders +to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by +more than 1.2 nDCG on average on 11 retrieval sets. Further training +standard-size re-rankers using the same generated data yields another 5.0 point +nDCG improvement. Our studies determine that query generation can be far more +effective than previously observed, especially when a small amount of +task-specific knowledge is given. +" +Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback,Baolin Peng,http://arxiv.org/pdf/2302.12813v3.pdf,2023-02-24,"['cs.cl', 'cs.ai']",2302.12813v3.pdf," Large language models (LLMs), such as ChatGPT, are able to generate +human-like, fluent responses for many downstream tasks, e.g., task-oriented +dialog and question answering. However, applying LLMs to real-world, +mission-critical applications remains challenging mainly due to their tendency +to generate hallucinations and their inability to use external knowledge. This +paper proposes a LLM-Augmenter system, which augments a black-box LLM with a +set of plug-and-play modules. Our system makes the LLM generate responses +grounded in external knowledge, e.g., stored in task-specific databases. It +also iteratively revises LLM prompts to improve model responses using feedback +generated by utility functions, e.g., the factuality score of a LLM-generated +response. The effectiveness of LLM-Augmenter is empirically validated on two +types of scenarios, task-oriented dialog and open-domain question answering. +LLM-Augmenter significantly reduces ChatGPT's hallucinations without +sacrificing the fluency and informativeness of its responses. We make the +source code and models publicly available. 
+" +AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback,Yann Dubois,http://arxiv.org/pdf/2305.14387v2.pdf,2023-05-22,"['cs.lg', 'cs.ai', 'cs.cl']",2305.14387v2.pdf," Large language models (LLMs) such as ChatGPT have seen widespread adoption +due to their ability to follow user instructions well. Developing these LLMs +involves a complex yet poorly understood workflow requiring training with human +feedback. Replicating and understanding this instruction-following process +faces three major challenges: the high cost of data collection, the lack of +trustworthy evaluation, and the absence of reference method implementations. We +address these challenges with AlpacaFarm, a simulator that enables research and +development for learning from feedback at a low cost. First, we design LLM +prompts to simulate human feedback that are 45x cheaper than crowdworkers and +display high agreement with humans. Second, we propose an automatic evaluation +and validate it against human instructions obtained on real-world interactions. +Third, we contribute reference implementations for several methods (PPO, +best-of-n, expert iteration, and more) that learn from pairwise feedback. +Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate +eleven models on 10k pairs of real human feedback and show that rankings of +models trained in AlpacaFarm match rankings of models trained on human data. As +a demonstration of the research possible in AlpacaFarm, we find that methods +that use a reward model can substantially improve over supervised fine-tuning +and that our reference PPO implementation leads to a +10% improvement in +win-rate against Davinci003. We release all components of AlpacaFarm at +https://github.com/tatsu-lab/alpaca_farm. +" +MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems,Jakub Macina,http://arxiv.org/pdf/2305.14536v2.pdf,2023-05-23,['cs.cl'],2305.14536v2.pdf," While automatic dialogue tutors hold great potential in making education +personalized and more accessible, research on such systems has been hampered by +a lack of sufficiently large and high-quality datasets. Collecting such +datasets remains challenging, as recording tutoring sessions raises privacy +concerns and crowdsourcing leads to insufficient data quality. To address this, +we propose a framework to generate such dialogues by pairing human teachers +with a Large Language Model (LLM) prompted to represent common student errors. +We describe how we use this framework to collect MathDial, a dataset of 3k +one-to-one teacher-student tutoring dialogues grounded in multi-step math +reasoning problems. While models like GPT-3 are good problem solvers, they fail +at tutoring because they generate factually incorrect feedback or are prone to +revealing solutions to students too early. To overcome this, we let teachers +provide learning opportunities to students by guiding them using various +scaffolding questions according to a taxonomy of teacher moves. We demonstrate +MathDial and its extensive annotations can be used to finetune models to be +more effective tutors (and not just solvers). We confirm this by automatic and +human evaluation, notably in an interactive setting that measures the trade-off +between student solving success and telling solutions. The dataset is released +publicly. 
+" +SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning,Yue Wu,http://arxiv.org/pdf/2305.15486v2.pdf,2023-05-24,"['cs.ai', 'cs.lg']",2305.15486v2.pdf," Open-world survival games pose significant challenges for AI algorithms due +to their multi-tasking, deep exploration, and goal prioritization requirements. +Despite reinforcement learning (RL) being popular for solving games, its high +sample complexity limits its effectiveness in complex open-world games like +Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's +original academic paper and use the knowledge learned to reason and play the +game through a large language model (LLM). Prompted with the LaTeX source as +game context and a description of the agent's current observation, our SPRING +framework employs a directed acyclic graph (DAG) with game-related questions as +nodes and dependencies as edges. We identify the optimal action to take in the +environment by traversing the DAG and calculating LLM responses for each node +in topological order, with the LLM's answer to final node directly translating +to environment actions. In our experiments, we study the quality of in-context +""reasoning"" induced by different forms of prompts under the setting of the +Crafter open-world environment. Our experiments suggest that LLMs, when +prompted with consistent chain-of-thought, have great potential in completing +sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4 +outperforms all state-of-the-art RL baselines, trained for 1M steps, without +any training. Finally, we show the potential of games as a test bed for LLMs. +" +Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models,Haonan Duan,http://arxiv.org/pdf/2305.15594v1.pdf,2023-05-24,"['cs.lg', 'cs.cl', 'cs.cr']",2305.15594v1.pdf," Large language models (LLMs) are excellent in-context learners. However, the +sensitivity of data contained in prompts raises privacy concerns. Our work +first shows that these concerns are valid: we instantiate a simple but highly +effective membership inference attack against the data used to prompt LLMs. To +address this vulnerability, one could forego prompting and resort to +fine-tuning LLMs with known algorithms for private gradient descent. However, +this comes at the expense of the practicality and efficiency offered by +prompting. Therefore, we propose to privately learn to prompt. We first show +that soft prompts can be obtained privately through gradient descent on +downstream data. However, this is not the case for discrete prompts. Thus, we +orchestrate a noisy vote among an ensemble of LLMs presented with different +prompts, i.e., a flock of stochastic parrots. The vote privately transfers the +flock's knowledge into a single public prompt. We show that LLMs prompted with +our private algorithms closely match the non-private baselines. For example, +using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the +sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. +95.2% for the non-private baseline. Through our experiments, we also show that +our prompt-based approach is easily deployed with existing commercial APIs. 
+" +Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction,Salvatore Carta,http://arxiv.org/pdf/2307.01128v1.pdf,2023-07-03,"['cs.cl', 'cs.ai']",2307.01128v1.pdf," In the current digitalization era, capturing and effectively representing +knowledge is crucial in most real-world scenarios. In this context, knowledge +graphs represent a potent tool for retrieving and organizing a vast amount of +information in a properly interconnected and interpretable structure. However, +their generation is still challenging and often requires considerable human +effort and domain expertise, hampering the scalability and flexibility across +different application fields. This paper proposes an innovative knowledge graph +generation approach that leverages the potential of the latest generative large +language models, such as GPT-3.5, that can address all the main critical issues +in knowledge graph building. The approach is conveyed in a pipeline that +comprises novel iterative zero-shot and external knowledge-agnostic strategies +in the main stages of the generation process. Our unique manifold approach may +encompass significant benefits to the scientific community. In particular, the +main contribution can be summarized by: (i) an innovative strategy for +iteratively prompting large language models to extract relevant components of +the final graph; (ii) a zero-shot strategy for each prompt, meaning that there +is no need for providing examples for ""guiding"" the prompt result; (iii) a +scalable solution, as the adoption of LLMs avoids the need for any external +resources or human expertise. To assess the effectiveness of our proposed +model, we performed experiments on a dataset that covered a specific domain. We +claim that our proposal is a suitable solution for scalable and versatile +knowledge graph construction and may be applied to different and novel +contexts. +" +PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine,Chenrui Zhang,http://arxiv.org/pdf/2308.12033v1.pdf,2023-08-23,"['cs.cl', 'cs.ai']",2308.12033v1.pdf," As an effective tool for eliciting the power of Large Language Models (LLMs), +prompting has recently demonstrated unprecedented abilities across a variety of +complex tasks. To further improve the performance, prompt ensemble has +attracted substantial interest for tackling the hallucination and instability +of LLMs. However, existing methods usually adopt a two-stage paradigm, which +requires a pre-prepared set of prompts with substantial manual effort, and is +unable to perform directed optimization for different weak learners. In this +paper, we propose a simple, universal, and automatic method named PREFER (Pompt +Ensemble learning via Feedback-Reflect-Refine) to address the stated +limitations. Specifically, given the fact that weak learners are supposed to +focus on hard examples during boosting, PREFER builds a feedback mechanism for +reflecting on the inadequacies of existing weak learners. Based on this, the +LLM is required to automatically synthesize new prompts for iterative +refinement. Moreover, to enhance stability of the prompt effect evaluation, we +propose a novel prompt bagging method involving forward and backward thinking, +which is superior to majority voting and is beneficial for both feedback and +weight calculation in boosting. Extensive experiments demonstrate that our +PREFER achieves state-of-the-art performance in multiple types of tasks by a +significant margin. We have made our code publicly available. 
+" +ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models,Mohi Reza,http://arxiv.org/pdf/2310.00117v2.pdf,2023-09-29,"['cs.hc', 'cs.ai', 'cs.lg']",2310.00117v2.pdf," Exploring alternative ideas by rewriting text is integral to the writing +process. State-of-the-art large language models (LLMs) can simplify writing +variation generation. However, current interfaces pose challenges for +simultaneous consideration of multiple variations: creating new versions +without overwriting text can be difficult, and pasting them sequentially can +clutter documents, increasing workload and disrupting writers' flow. To tackle +this, we present ABScribe, an interface that supports rapid, yet visually +structured, exploration of writing variations in human-AI co-writing tasks. +With ABScribe, users can swiftly produce multiple variations using LLM prompts, +which are auto-converted into reusable buttons. Variations are stored +adjacently within text segments for rapid in-place comparisons using mouse-over +interactions on a context toolbar. Our user study with 12 writers shows that +ABScribe significantly reduces task workload (d = 1.20, p < 0.001), enhances +user perceptions of the revision process (d = 2.41, p < 0.001) compared to a +popular baseline workflow, and provides insights into how writers explore +variations using LLMs. +" +Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with Large Language Models,Wenxuan Ding,http://arxiv.org/pdf/2310.01290v1.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01290v1.pdf," Large language models (LLMs) are widely adopted in knowledge-intensive tasks +and have achieved impressive performance thanks to their knowledge abilities. +While LLMs have demonstrated outstanding performance on atomic or linear +(multi-hop) QA tasks, whether they can reason in knowledge-rich scenarios with +interweaving constraints remains an underexplored problem. In this work, we +propose geometric reasoning over structured knowledge, where pieces of +knowledge are connected in a graph structure and models need to fill in the +missing information. Such geometric knowledge reasoning would require the +ability to handle structured knowledge, reason with uncertainty, verify facts, +and backtrack when an error occurs. We propose Knowledge Crosswords, a +multi-blank QA dataset where each problem consists of a natural language +question representing the geometric constraints of an incomplete entity +network, where LLMs are tasked with working out the missing entities while +meeting all factual constraints. Knowledge Crosswords contains 2,101 individual +problems, covering various knowledge domains and further divided into three +difficulty levels. We conduct extensive experiments to evaluate existing LLM +prompting approaches on the Knowledge Crosswords benchmark. We additionally +propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' +ability to backtrack and verify structured constraints. Our results demonstrate +that while baseline approaches perform well on easier problems but struggle +with hard ones, our proposed Verify-All outperforms other methods by a large +margin and is more robust with hard problems. Further analysis reveals that +LLMs' ability of geometric reasoning over structured knowledge is still far +from robust or perfect, susceptible to confounders such as the order of +options, certain structural patterns, assumption of existence of correct +answer, and more. 
+" +Retrieval-augmented Generation to Improve Math Question-Answering: Trade-offs Between Groundedness and Human Preference,Zachary Levonian,http://arxiv.org/pdf/2310.03184v1.pdf,2023-10-04,"['cs.cl', 'cs.hc']",2310.03184v1.pdf," For middle-school math students, interactive question-answering (QA) with +tutors is an effective way to learn. The flexibility and emergent capabilities +of generative large language models (LLMs) has led to a surge of interest in +automating portions of the tutoring process - including interactive QA to +support conceptual discussion of mathematical concepts. However, LLM responses +to math questions can be incorrect or mismatched to the educational context - +such as being misaligned with a school's curriculum. One potential solution is +retrieval-augmented generation (RAG), which involves incorporating a vetted +external knowledge source in the LLM prompt to increase response quality. In +this paper, we designed prompts that retrieve and use content from a +high-quality open-source math textbook to generate responses to real student +questions. We evaluate the efficacy of this RAG system for middle-school +algebra and geometry QA by administering a multi-condition survey, finding that +humans prefer responses generated using RAG, but not when responses are too +grounded in the textbook content. We argue that while RAG is able to improve +response quality, designers of math QA systems must consider trade-offs between +generating responses preferred by students and responses closely matched to +specific educational resources. +" +Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning,Gurusha Juneja,http://arxiv.org/pdf/2310.18338v1.pdf,2023-10-21,"['cs.cl', 'cs.ai']",2310.18338v1.pdf," Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) +exhibit impressive reasoning capabilities. Recent attempts at prompt +decomposition toward solving complex, multi-step reasoning problems depend on +the ability of the LLM to simultaneously decompose and solve the problem. A +significant disadvantage is that foundational LLMs are typically not available +for fine-tuning, making adaptation computationally prohibitive. We believe (and +demonstrate) that problem decomposition and solution generation are distinct +capabilites, better addressed in separate modules, than by one monolithic LLM. +We introduce DaSLaM, which uses a decomposition generator to decompose complex +problems into subproblems that require fewer reasoning steps. These subproblems +are answered by a solver. We use a relatively small (13B parameters) LM as the +decomposition generator, which we train using policy gradient optimization to +interact with a solver LM (regarded as black-box) and guide it through +subproblems, thereby rendering our method solver-agnostic. Evaluation on +multiple different reasoning datasets reveal that with our method, a 175 +billion parameter LM (text-davinci-003) can produce competitive or even better +performance, compared to its orders-of-magnitude larger successor, GPT-4. +Additionally, we show that DaSLaM is not limited by the solver's capabilities +as a function of scale; e.g., solver LMs with diverse sizes give significant +performance improvement with our solver-agnostic decomposition technique. +Exhaustive ablation studies evince the superiority of our modular finetuning +technique over exorbitantly large decomposer LLMs, based on prompting alone. 
+" +Universal Fuzzing via Large Language Models,Chunqiu Steven Xia,http://arxiv.org/pdf/2308.04748v1.pdf,2023-08-09,"['cs.se', 'cs.lg']",2308.04748v1.pdf," Fuzzing has achieved tremendous success in discovering bugs and +vulnerabilities in various software systems. Systems under test (SUTs) that +take in programming or formal language as inputs, e.g., compilers, runtime +engines, constraint solvers, and software libraries with accessible APIs, are +especially important as they are fundamental building blocks of software +development. However, existing fuzzers for such systems often target a specific +language, and thus cannot be easily applied to other languages or even other +versions of the same language. Moreover, the inputs generated by existing +fuzzers are often limited to specific features of the input language, and thus +can hardly reveal bugs related to other or new features. This paper presents +Fuzz4All, the first fuzzer that is universal in the sense that it can target +many different input languages and many different features of these languages. +The key idea behind Fuzz4All is to leverage large language models (LLMs) as an +input generation and mutation engine, which enables the approach to produce +diverse and realistic inputs for any practically relevant language. To realize +this potential, we present a novel autoprompting technique, which creates LLM +prompts that are wellsuited for fuzzing, and a novel LLM-powered fuzzing loop, +which iteratively updates the prompt to create new fuzzing inputs. We evaluate +Fuzz4All on nine systems under test that take in six different languages (C, +C++, Go, SMT2, Java and Python) as inputs. The evaluation shows, across all six +languages, that universal fuzzing achieves higher coverage than existing, +language-specific fuzzers. Furthermore, Fuzz4All has identified 76 bugs in +widely used systems, such as GCC, Clang, Z3, CVC5, OpenJDK, and the Qiskit +quantum computing platform, with 47 bugs already confirmed by developers as +previously unknown. +" +AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts,Tongshuang Wu,http://arxiv.org/pdf/2110.01691v3.pdf,2021-10-04,"['cs.hc', 'cs.cl']",2110.01691v3.pdf," Although large language models (LLMs) have demonstrated impressive potential +on simple tasks, their breadth of scope, lack of transparency, and insufficient +controllability can make them less effective when assisting humans on more +complex tasks. In response, we introduce the concept of Chaining LLM steps +together, where the output of one step becomes the input for the next, thus +aggregating the gains per step. We first define a set of LLM primitive +operations useful for Chain construction, then present an interactive system +where users can modify these Chains, along with their intermediate results, in +a modular way. In a 20-person user study, we found that Chaining not only +improved the quality of task outcomes, but also significantly enhanced system +transparency, controllability, and sense of collaboration. Additionally, we saw +that users developed new ways of interacting with LLMs through Chains: they +leveraged sub-tasks to calibrate model expectations, compared and contrasted +alternative strategies by observing parallel downstream effects, and debugged +unexpected model outputs by ""unit-testing"" sub-components of a Chain. 
In two +case studies, we further explore how LLM Chains may be used in future +applications +" +PromptChainer: Chaining Large Language Model Prompts through Visual Programming,Tongshuang Wu,http://arxiv.org/pdf/2203.06566v1.pdf,2022-03-13,['cs.hc'],2203.06566v1.pdf," While LLMs can effectively help prototype single ML functionalities, many +real-world applications involve complex tasks that cannot be easily handled via +a single run of an LLM. Recent work has found that chaining multiple LLM runs +together (with the output of one step being the input to the next) can help +users accomplish these more complex tasks, and in a way that is perceived to be +more transparent and controllable. However, it remains unknown what users need +when authoring their own LLM chains -- a key step for lowering the barriers for +non-AI-experts to prototype AI-infused applications. In this work, we explore +the LLM chain authoring process. We conclude from pilot studies find that +chaining requires careful scaffolding for transforming intermediate node +outputs, as well as debugging the chain at multiple granularities; to help with +these needs, we designed PromptChainer, an interactive interface for visually +programming chains. Through case studies with four people, we show that +PromptChainer supports building prototypes for a range of applications, and +conclude with open questions on scaling chains to complex tasks, and supporting +low-fi chain prototyping. +" +Few-shot Reranking for Multi-hop QA via Language Model Prompting,Muhammad Khalifa,http://arxiv.org/pdf/2205.12650v3.pdf,2022-05-25,"['cs.cl', 'cs.ir']",2205.12650v3.pdf," We study few-shot reranking for multi-hop QA with open-domain questions. To +alleviate the need for a large number of labeled question-document pairs for +retriever training, we propose PromptRank, which relies on large language +models prompting for multi-hop path reranking. PromptRank first constructs an +instruction-based prompt that includes a candidate document path and then +computes the relevance score between a given question and the path based on the +conditional likelihood of the question given the path prompt according to a +language model. PromptRank yields strong retrieval performance on HotpotQA with +only 128 training examples compared to state-of-the-art methods trained on +thousands of examples -- 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever +and 77.5 by multi-hop dense retrieval. Code available at +https://github.com/mukhal/PromptRank +" +Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency,Lingfeng Shen,http://arxiv.org/pdf/2305.10713v2.pdf,2023-05-18,"['cs.cl', 'cs.lg']",2305.10713v2.pdf," With growing capabilities of large language models, prompting them has become +the dominant way to access them. This has motivated the development of +strategies for automatically selecting effective language prompts. In this +paper, we introduce prompt flatness, a new metric to quantify the expected +utility of a language prompt. This metric is inspired by flatness +regularization in statistical learning that quantifies the robustness of the +model towards its parameter perturbations. We provide theoretical foundations +for this metric and its relationship with other prompt selection metrics, +providing a comprehensive understanding of existing methods. Empirically, we +show that combining prompt flatness with existing metrics improves both +performance and sample efficiency. 
Our metric outperforms the previous prompt +selection metrics with an average increase of 5% in accuracy and 10% in Pearson +correlation across 6 classification benchmarks. +" +A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction,Erica Cai,http://arxiv.org/pdf/2305.15051v1.pdf,2023-05-24,['cs.cl'],2305.15051v1.pdf," We consider dyadic zero-shot event extraction (EE) to identify actions +between pairs of actors. The \emph{zero-shot} setting allows social scientists +or other non-computational researchers to extract any customized, +user-specified set of events without training, resulting in a \emph{dyadic} +event database, allowing insight into sociopolitical relational dynamics among +actors and the higher level organizations or countries they represent. +Unfortunately, we find that current zero-shot EE methods perform poorly for the +task, with issues including word sense ambiguity, modality mismatch, and +efficiency. Straightforward application of large language model prompting +typically performs even worse. We address these challenges with a new +fine-grained, multi-stage generative question-answer method, using a Monte +Carlo approach to exploit and overcome the randomness of generative outputs. It +performs 90\% fewer queries than a previous approach, with strong performance +on the widely-used Automatic Content Extraction dataset. Finally, we extend our +method to extract affiliations of actor arguments and demonstrate our method +and findings on a dyadic international relations case study. +" +EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria,Tae Soo Kim,http://arxiv.org/pdf/2309.13633v1.pdf,2023-09-24,"['cs.hc', 'cs.ai', 'cs.cl']",2309.13633v1.pdf," By simply composing prompts, developers can prototype novel generative +applications with Large Language Models (LLMs). To refine prototypes into +products, however, developers must iteratively revise prompts by evaluating +outputs to diagnose weaknesses. Formative interviews (N=8) revealed that +developers invest significant effort in manually evaluating outputs as they +assess context-specific and subjective criteria. We present EvalLM, an +interactive system for iteratively refining prompts by evaluating multiple +outputs on user-defined criteria. By describing criteria in natural language, +users can employ the system's LLM-based evaluator to get an overview of where +prompts excel or fail, and improve these based on the evaluator's feedback. A +comparative study (N=12) showed that EvalLM, when compared to manual +evaluation, helped participants compose more diverse criteria, examine twice as +many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond +prompts, our work can be extended to augment model evaluation and alignment in +specific application contexts. +" +Terminology-Aware Translation with Constrained Decoding and Large Language Model Prompting,Nikolay Bogoychev,http://arxiv.org/pdf/2310.05824v1.pdf,2023-10-09,['cs.cl'],2310.05824v1.pdf," Terminology correctness is important in the downstream application of machine +translation, and a prevalent way to ensure this is to inject terminology +constraints into a translation system. In our submission to the WMT 2023 +terminology translation task, we adopt a translate-then-refine approach which +can be domain-independent and requires minimal manual efforts. 
We annotate +random source words with pseudo-terminology translations obtained from word +alignment to first train a terminology-aware model. Further, we explore two +post-processing methods. First, we use an alignment process to discover whether +a terminology constraint has been violated, and if so, we re-decode with the +violating word negatively constrained. Alternatively, we leverage a large +language model to refine a hypothesis by providing it with terminology +constraints. Results show that our terminology-aware model learns to +incorporate terminologies effectively, and the large language model refinement +process can further improve terminology recall. +" +Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following,Yuki Inoue,http://arxiv.org/pdf/2211.03267v1.pdf,2022-11-07,"['cs.ro', 'cs.cv']",2211.03267v1.pdf," Embodied Instruction Following (EIF) studies how mobile manipulator robots +should be controlled to accomplish long-horizon tasks specified by natural +language instructions. While most research on EIF are conducted in simulators, +the ultimate goal of the field is to deploy the agents in real life. As such, +it is important to minimize the data cost required for training an agent, to +help the transition from sim to real. However, many studies only focus on the +performance and overlook the data cost -- modules that require separate +training on extra data are often introduced without a consideration on +deployability. In this work, we propose FILM++ which extends the existing work +FILM with modifications that do not require extra data. While all data-driven +modules are kept constant, FILM++ more than doubles FILM's performance. +Furthermore, we propose Prompter, which replaces FILM++'s semantic search +module with language model prompting. Unlike FILM++'s implementation that +requires training on extra sets of data, no training is needed for our +prompting based implementation while achieving better or at least comparable +performance. Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with +high-level instructions only and with step-by-step instructions, respectively, +outperforming the previous state of the art by 6.57% and 10.31%. +" +FIRE: Food Image to REcipe generation,Prateek Chhikara,http://arxiv.org/pdf/2308.14391v1.pdf,2023-08-28,"['cs.cv', 'cs.cl']",2308.14391v1.pdf," Food computing has emerged as a prominent multidisciplinary field of research +in recent years. An ambitious goal of food computing is to develop end-to-end +intelligent systems capable of autonomously producing recipe information for a +food image. Current image-to-recipe methods are retrieval-based and their +success depends heavily on the dataset size and diversity, as well as the +quality of learned embeddings. Meanwhile, the emergence of powerful +attention-based vision and language models presents a promising avenue for +accurate and generalizable recipe generation, which has yet to be extensively +explored. This paper proposes FIRE, a novel multimodal methodology tailored to +recipe generation in the food computing domain, which generates the food title, +ingredients, and cooking instructions based on input food images. FIRE +leverages the BLIP model to generate titles, utilizes a Vision Transformer with +a decoder for ingredient extraction, and employs the T5 model to generate +recipes incorporating titles and ingredients as inputs. 
We showcase two +practical applications that can benefit from integrating FIRE with large +language model prompting: recipe customization to fit recipes to user +preferences and recipe-to-code transformation to enable automated cooking +processes. Our experimental findings validate the efficacy of our proposed +approach, underscoring its potential for future advancements and widespread +adoption in food computing. +" +Large language models can accurately predict searcher preferences,Paul Thomas,http://arxiv.org/pdf/2309.10621v1.pdf,2023-09-19,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg']",2309.10621v1.pdf," Relevance labels, which indicate whether a search result is valuable to a +searcher, are key to evaluating and optimising search systems. The best way to +capture the true preferences of users is to ask them for their careful feedback +on which results would be useful, but this approach does not scale to produce a +large number of labels. Getting relevance labels at scale is usually done with +third-party labellers, who judge on behalf of the user, but there is a risk of +low-quality data if the labeller doesn't understand user needs. To improve +quality, one standard approach is to study real users through interviews, user +studies and direct feedback, find areas where labels are systematically +disagreeing with users, then educate labellers about user needs through judging +guidelines, training and monitoring. This paper introduces an alternate +approach for improving label quality. It takes careful feedback from real +users, which by definition is the highest-quality first-party gold data that +can be derived, and develops an large language model prompt that agrees with +that data. + We present ideas and observations from deploying language models for +large-scale relevance labelling at Bing, and illustrate with data from TREC. We +have found large language models can be effective, with accuracy as good as +human labellers and similar capability to pick the hardest queries, best runs, +and best groups. Systematic changes to the prompts make a difference in +accuracy, but so too do simple paraphrases. To measure agreement with real +searchers needs high-quality ``gold'' labels, but with these we find that +models produce better labels than third-party workers, for a fraction of the +cost, and these labels let us train notably better rankers. +" +Meta-in-context learning in large language models,Julian Coda-Forno,http://arxiv.org/pdf/2305.12907v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.12907v1.pdf," Large language models have shown tremendous performance in a variety of +tasks. In-context learning -- the ability to improve at a task after being +provided with a number of demonstrations -- is seen as one of the main +contributors to their success. In the present paper, we demonstrate that the +in-context learning abilities of large language models can be recursively +improved via in-context learning itself. We coin this phenomenon +meta-in-context learning. Looking at two idealized domains, a one-dimensional +regression task and a two-armed bandit task, we show that meta-in-context +learning adaptively reshapes a large language model's priors over expected +tasks. Furthermore, we find that meta-in-context learning modifies the +in-context learning strategies of such models. Finally, we extend our approach +to a benchmark of real-world regression problems where we observe competitive +performance to traditional learning algorithms. 
Taken together, our work +improves our understanding of in-context learning and paves the way toward +adapting large language models to the environment they are applied purely +through meta-in-context learning rather than traditional finetuning. +" +MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models,Masoud Monajatipoor,http://arxiv.org/pdf/2306.01311v1.pdf,2023-06-02,['cs.cl'],2306.01311v1.pdf," Large-scale language models have shown the ability to adapt to a new task via +conditioning on a few demonstrations (i.e., in-context learning). However, in +the vision-language domain, most large-scale pre-trained vision-language (VL) +models do not possess the ability to conduct in-context learning. How can we +enable in-context learning for VL models? In this paper, we study an +interesting hypothesis: can we transfer the in-context learning ability from +the language domain to VL domain? Specifically, we first meta-trains a language +model to perform in-context learning on NLP tasks (as in MetaICL); then we +transfer this model to perform VL tasks by attaching a visual encoder. Our +experiments suggest that indeed in-context learning ability can be transferred +cross modalities: our model considerably improves the in-context learning +capability on VL tasks and can even compensate for the size of the model +significantly. On VQA, OK-VQA, and GQA, our method could outperform the +baseline model while having 20 times fewer parameters. +" +A Theory of Emergent In-Context Learning as Implicit Structure Induction,Michael Hahn,http://arxiv.org/pdf/2303.07971v1.pdf,2023-03-14,"['cs.cl', 'cs.lg']",2303.07971v1.pdf," Scaling large language models (LLMs) leads to an emergent capacity to learn +in-context from example demonstrations. Despite progress, theoretical +understanding of this phenomenon remains limited. We argue that in-context +learning relies on recombination of compositional operations found in natural +language data. We derive an information-theoretic bound showing how in-context +learning abilities arise from generic next-token prediction when the +pretraining distribution has sufficient amounts of compositional structure, +under linguistically motivated assumptions. A second bound provides a +theoretical justification for the empirical success of prompting LLMs to output +intermediate steps towards an answer. To validate theoretical predictions, we +introduce a controlled setup for inducing in-context learning; unlike previous +approaches, it accounts for the compositional nature of language. Trained +transformers can perform in-context learning for a range of tasks, in a manner +consistent with the theoretical results. Mirroring real-world LLMs in a +miniature setup, in-context learning emerges when scaling parameters and data, +and models perform better when prompted to output intermediate steps. Probing +shows that in-context learning is supported by a representation of the input's +compositional structure. Taken together, these results provide a step towards +theoretical understanding of emergent behavior in large language models. +" +Fine-tune Language Models to Approximate Unbiased In-context Learning,Timothy Chu,http://arxiv.org/pdf/2310.03331v1.pdf,2023-10-05,['cs.lg'],2310.03331v1.pdf," In-context learning (ICL) is an astonishing emergent ability of large +language models (LLMs). By presenting a prompt that includes multiple +input-output pairs as examples and introducing a new query input, models can +generate the corresponding output. 
However, the performance of models heavily +relies on the quality of the input prompt when implementing in-context +learning. Biased or imbalanced input prompts can significantly degrade the +performance of language models. To address this issue, we introduce a +reweighted algorithm called RICL (Reweighted In-context Learning). This +algorithm fine-tunes language models using an unbiased validation set to +determine the optimal weight for each input-output example to approximate +unbiased in-context learning. Furthermore, we also introduce a low-cost +reweighted algorithm, a linear optimal weight approximation algorithm called +LARICL (Linear Approximation of Reweighted In-context Learning). This algorithm +requires minimal training cost while providing effective results. We prove the +convergence of our algorithm and validate its performance through experiments +conducted on a numerical dataset. The experimental findings reveal a +substantial improvement in comparison to benchmarks including the performance +of casual prompt-based in-context learning and the performance of a classic +fine-tuning method. +" +PRODIGY: Enabling In-context Learning Over Graphs,Qian Huang,http://arxiv.org/pdf/2305.12600v1.pdf,2023-05-21,"['cs.lg', 'cs.ai']",2305.12600v1.pdf," In-context learning is the ability of a pretrained model to adapt to novel +and diverse downstream tasks by conditioning on prompt examples, without +optimizing any parameters. While large language models have demonstrated this +ability, how in-context learning could be performed over graphs is unexplored. +In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse +\textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first +pretraining framework that enables in-context learning over graphs. The key +idea of our framework is to formulate in-context learning over graphs with a +novel \emph{prompt graph} representation, which connects prompt examples and +queries. We then propose a graph neural network architecture over the prompt +graph and a corresponding family of in-context pretraining objectives. With +PRODIGY, the pretrained model can directly perform novel downstream +classification tasks on unseen graphs via in-context learning. We provide +empirical evidence of the effectiveness of our framework by showcasing its +strong in-context learning performance on tasks involving citation networks and +knowledge graphs. Our approach outperforms the in-context learning accuracy of +contrastive pretraining baselines with hard-coded adaptation by 18\% on average +across all setups. Moreover, it also outperforms standard finetuning with +limited data by 33\% on average with in-context learning. +" +An Explanation of In-context Learning as Implicit Bayesian Inference,Sang Michael Xie,http://arxiv.org/pdf/2111.02080v6.pdf,2021-11-03,"['cs.cl', 'cs.lg']",2111.02080v6.pdf," Large language models (LMs) such as GPT-3 have the surprising ability to do +in-context learning, where the model learns to do a downstream task simply by +conditioning on a prompt consisting of input-output examples. The LM learns +from these examples without being explicitly pretrained to learn. Thus, it is +unclear what enables in-context learning. In this paper, we study how +in-context learning can emerge when pretraining documents have long-range +coherence. Here, the LM must infer a latent document-level concept to generate +coherent next tokens during pretraining. 
At test time, in-context learning +occurs when the LM also infers a shared latent concept between examples in a +prompt. We prove when this occurs despite a distribution mismatch between +prompts and pretraining data in a setting where the pretraining distribution is +a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs +capable of in-context learning, we generate a small-scale synthetic dataset +(GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond +the theory, experiments on GINC exhibit large-scale real-world phenomena +including improved in-context performance with model scaling (despite the same +pretraining loss), sensitivity to example order, and instances where zero-shot +is better than few-shot in-context learning. +" +Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Scale,Hritik Bansal,http://arxiv.org/pdf/2212.09095v2.pdf,2022-12-18,"['cs.cl', 'cs.ai']",2212.09095v2.pdf," Language models have been shown to perform better with an increase in scale +on a wide variety of tasks via the in-context learning paradigm. In this paper, +we investigate the hypothesis that the ability of a large language model to +in-context learn-perform a task is not uniformly spread across all of its +underlying components. Using a 66 billion parameter language model (OPT-66B) +across a diverse set of 14 downstream tasks, we find this is indeed the case: +$\sim$70% of attention heads and $\sim$20% of feed forward networks can be +removed with minimal decline in task performance. We find substantial overlap +in the set of attention heads (un)important for in-context learning across +tasks and number of in-context examples. We also address our hypothesis through +a task-agnostic lens, finding that a small set of attention heads in OPT-66B +score highly on their ability to perform primitive induction operations +associated with in-context learning, namely, prefix matching and copying. These +induction heads overlap with task-specific important heads, reinforcing +arguments by Olsson et al. (arXiv:2209.11895) regarding induction head +generality to more sophisticated behaviors associated with in-context learning. +Overall, our study provides several insights that indicate large language +models may be under-trained for in-context learning and opens up questions on +how to pre-train language models to more effectively perform in-context +learning. +" +A Closer Look at In-Context Learning under Distribution Shifts,Kartik Ahuja,http://arxiv.org/pdf/2305.16704v1.pdf,2023-05-26,"['cs.lg', 'stat.ml']",2305.16704v1.pdf," In-context learning, a capability that enables a model to learn from input +examples on the fly without necessitating weight updates, is a defining +characteristic of large language models. In this work, we follow the setting +proposed in (Garg et al., 2022) to better understand the generality and +limitations of in-context learning from the lens of the simple yet fundamental +task of linear regression. The key question we aim to address is: Are +transformers more adept than some natural and simpler architectures at +performing in-context learning under varying distribution shifts? To compare +transformers, we propose to use a simple architecture based on set-based +Multi-Layer Perceptrons (MLPs). 
We find that both transformers and set-based +MLPs exhibit in-context learning under in-distribution evaluations, but +transformers more closely emulate the performance of ordinary least squares +(OLS). Transformers also display better resilience to mild distribution shifts, +where set-based MLPs falter. However, under severe distribution shifts, both +models' in-context learning abilities diminish. +" +Exploring the Relationship Between Model Architecture and In-Context Learning Ability,Ivan Lee,http://arxiv.org/pdf/2310.08049v1.pdf,2023-10-12,['cs.lg'],2310.08049v1.pdf," What is the relationship between model architecture and the ability to +perform in-context learning? In this empirical study, we take the first steps +towards answering this question. In particular, we evaluate fifteen model +architectures across a suite of synthetic in-context learning tasks. The +selected architectures represent a broad range of paradigms, including +recurrent and convolution-based neural networks, transformers, and emerging +attention alternatives. We discover that all considered architectures can +perform in-context learning under certain conditions. However, contemporary +architectures are found to be the best performing, especially as task +complexity grows. Additionally, our follow-up experiments delve into various +factors that influence in-context learning. We observe varied sensitivities +among architectures with respect to hyperparameter settings. Our study of +training dynamics reveals that certain architectures exhibit a smooth, +progressive learning trajectory, while others demonstrate periods of stagnation +followed by abrupt mastery of the task. Finally, and somewhat surprisingly, we +find that several emerging attention alternatives are more robust in-context +learners than transformers; since such approaches have constant-sized memory +footprints at inference time, this result opens the future possibility of +scaling up in-context learning to vastly larger numbers of in-context examples. +" +What Can Transformers Learn In-Context? A Case Study of Simple Function Classes,Shivam Garg,http://arxiv.org/pdf/2208.01066v3.pdf,2022-08-01,"['cs.cl', 'cs.lg']",2208.01066v3.pdf," In-context learning refers to the ability of a model to condition on a prompt +sequence consisting of in-context examples (input-output pairs corresponding to +some task) along with a new query input, and generate the corresponding output. +Crucially, in-context learning happens only at inference time without any +parameter updates to the model. While large language models such as GPT-3 +exhibit some ability to perform in-context learning, it is unclear what the +relationship is between tasks on which this succeeds and what is present in the +training data. To make progress towards understanding in-context learning, we +consider the well-defined problem of training a model to in-context learn a +function class (e.g., linear functions): that is, given data derived from some +functions in the class, can we train a model to in-context learn ""most"" +functions from this class? We show empirically that standard Transformers can +be trained from scratch to perform in-context learning of linear functions -- +that is, the trained model is able to learn unseen linear functions from +in-context examples with performance comparable to the optimal least squares +estimator. 
In fact, in-context learning is possible even under two forms of +distribution shift: (i) between the training data of the model and +inference-time prompts, and (ii) between the in-context examples and the query +input during inference. We also show that we can train Transformers to +in-context learn more complex function classes -- namely sparse linear +functions, two-layer neural networks, and decision trees -- with performance +that matches or exceeds task-specific learning algorithms. Our code and models +are available at https://github.com/dtsip/in-context-learning . +" +"Structured Prompting: Scaling In-Context Learning to 1,000 Examples",Yaru Hao,http://arxiv.org/pdf/2212.06713v1.pdf,2022-12-13,['cs.cl'],2212.06713v1.pdf," Large language models have exhibited intriguing in-context learning +capability, achieving promising zero- and few-shot performance without updating +the parameters. However, conventional in-context learning is usually restricted +by length constraints, rendering it ineffective to absorb supervision from a +large number of examples. In order to go beyond few shots, we introduce +structured prompting that breaks the length limit and scales in-context +learning to thousands of examples. Specifically, demonstration examples are +separately encoded with well-designed position embeddings, and then they are +jointly attended by the test example using a rescaled attention mechanism. So +we can scale the number of exemplars with linear complexity instead of +quadratic complexity with respect to length. Experimental results on a diverse +set of tasks show that our approach improves end-task performance and reduces +evaluation variance over conventional in-context learning as the number of +demonstration examples increases. Code has been released at +https://aka.ms/structured-prompting. +" +Pre-Training to Learn in Context,Yuxian Gu,http://arxiv.org/pdf/2305.09137v1.pdf,2023-05-16,['cs.cl'],2305.09137v1.pdf," In-context learning, where pre-trained language models learn to perform tasks +from task examples and instructions in their contexts, has attracted much +attention in the NLP community. However, the ability of in-context learning is +not fully exploited because language models are not explicitly trained to learn +in context. To this end, we propose PICL (Pre-training for In-Context +Learning), a framework to enhance the language models' in-context learning +ability by pre-training the model on a large collection of ""intrinsic tasks"" in +the general plain-text corpus using the simple language modeling objective. +PICL encourages the model to infer and perform tasks by conditioning on the +contexts while maintaining task generalization of pre-trained models. We +evaluate the in-context learning performance of the model trained with PICL on +seven widely-used text classification datasets and the Super-NaturalInstrctions +benchmark, which contains 100+ NLP tasks formulated to text generation. Our +experiments show that PICL is more effective and task-generalizable than a +range of baselines, outperforming larger language models with nearly 4x +parameters. The code is publicly available at https://github.com/thu-coai/PICL. 
+" +EXnet: Efficient In-context Learning for Data-less Text classification,Debaditya Shome,http://arxiv.org/pdf/2305.14622v1.pdf,2023-05-24,"['cs.cl', 'cs.lg']",2305.14622v1.pdf," Large pre-trained language models (PLMs) have made significant progress in +encoding world knowledge and spawned a new set of learning paradigms including +zero-shot, few-shot, and in-context learning. Many language tasks can be +modeled as a set of prompts (for example, is this text about geography?) and +language models can provide binary answers, i.e., Yes or No. There is evidence +to suggest that the next-word prediction used by many PLMs does not align well +with zero-shot paradigms. Therefore, PLMs are fine-tuned as a +question-answering system. In-context learning extends zero-shot learning by +incorporating prompts and examples, resulting in increased task accuracy. Our +paper presents EXnet, a model specifically designed to perform in-context +learning without any limitations on the number of examples. We argue that +in-context learning is an effective method to increase task accuracy, and +providing examples facilitates cross-task generalization, especially when it +comes to text classification tasks. With extensive experiments, we show that +even our smallest model (15M parameters) generalizes to several unseen +classification tasks and domains. +" +RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models,Jie Huang,http://arxiv.org/pdf/2308.07922v1.pdf,2023-08-15,"['cs.cl', 'cs.ai', 'cs.lg']",2308.07922v1.pdf," In this paper, we investigate the in-context learning ability of +retrieval-augmented encoder-decoder language models. We first conduct a +comprehensive analysis of the state-of-the-art ATLAS model and identify its +limitations in in-context learning, primarily due to a mismatch between +pretraining and testing, as well as a restricted context length. To address +these issues, we propose RAVEN, a model that combines retrieval-augmented +masked language modeling and prefix language modeling. We further introduce +Fusion-in-Context Learning to enhance the few-shot performance by enabling the +model to leverage more in-context examples without requiring additional +training or model modifications. Through extensive experiments, we demonstrate +that RAVEN significantly outperforms ATLAS and achieves results comparable to +the most advanced language models in certain scenarios, despite having +substantially fewer parameters. Our work underscores the potential of +retrieval-augmented encoder-decoder language models for in-context learning and +encourages further research in this direction. +" +Understanding In-Context Learning from Repetitions,Jianhao Yan,http://arxiv.org/pdf/2310.00297v2.pdf,2023-09-30,['cs.cl'],2310.00297v2.pdf," This paper explores the elusive mechanism underpinning in-context learning in +Large Language Models (LLMs). Our work provides a novel perspective by +examining in-context learning via the lens of surface repetitions. We +quantitatively investigate the role of surface features in text generation, and +empirically establish the existence of \emph{token co-occurrence +reinforcement}, a principle that strengthens the relationship between two +tokens based on their contextual co-occurrences. By investigating the dual +impacts of these features, our research illuminates the internal workings of +in-context learning and expounds on the reasons for its failures. 
This paper +provides an essential contribution to the understanding of in-context learning +and its potential limitations, providing a fresh perspective on this exciting +capability. +" +In-Context Learning Dynamics with Random Binary Sequences,Eric J. Bigelow,http://arxiv.org/pdf/2310.17639v1.pdf,2023-10-26,"['cs.ai', 'cs.cl', 'cs.lg']",2310.17639v1.pdf," Large language models (LLMs) trained on huge corpora of text datasets +demonstrate complex, emergent capabilities, achieving state-of-the-art +performance on tasks they were not explicitly trained for. The precise nature +of LLM capabilities is often mysterious, and different prompts can elicit +different capabilities through in-context learning. We propose a Cognitive +Interpretability framework that enables us to analyze in-context learning +dynamics to understand latent concepts in LLMs underlying behavioral patterns. +This provides a more nuanced understanding than success-or-failure evaluation +benchmarks, but does not require observing internal activations as a +mechanistic interpretation of circuits would. Inspired by the cognitive science +of human randomness perception, we use random binary sequences as context and +study dynamics of in-context learning by manipulating properties of context +data, such as sequence length. In the latest GPT-3.5+ models, we find emergent +abilities to generate pseudo-random numbers and learn basic formal languages, +with striking in-context learning dynamics where model outputs transition +sharply from pseudo-random behaviors to deterministic repetition. +" +In-Context Learning with Many Demonstration Examples,Mukai Li,http://arxiv.org/pdf/2302.04931v1.pdf,2023-02-09,"['cs.cl', 'cs.ai']",2302.04931v1.pdf," Large pre-training language models (PLMs) have shown promising in-context +learning abilities. However, due to the backbone transformer architecture, +existing PLMs are bottlenecked by the memory and computational cost when +scaling up to a large context size, leaving instruction tuning and in-context +learning of many demonstration examples, as well as long-range language +modeling under-explored. In this study, we propose a long-range language model +EVALM based on an efficient transformer mechanism. EVALM is trained with 8k +tokens per batch line and can test up to 256k-lengthed contexts with +extrapolation, 128 times to the limit of existing PLMs (e.g. GPT3). Based on +EVALM, we scale up the size of examples efficiently in both instruction tuning +and in-context learning to explore the boundary of the benefits from more +annotated data. Experimental results on a diverse set of tasks show that EVALM +achieves 4.1% higher accuracy on average, and the average length of achieving +the best accuracy score over tasks is around 12k. We find that in-context +learning can achieve higher performance with more demonstrations under +many-shot instruction tuning (8k), and further extending the length of +instructions (16k) can further improve the upper bound of scaling in-context +learning. +" +The Learnability of In-Context Learning,Noam Wies,http://arxiv.org/pdf/2303.07895v1.pdf,2023-03-14,['cs.cl'],2303.07895v1.pdf," In-context learning is a surprising and important phenomenon that emerged +when modern language models were scaled to billions of learned parameters. +Without modifying a large language model's weights, it can be tuned to perform +various downstream natural language tasks simply by including concatenated +training examples of these tasks in its input. 
Though disruptive for many +practical applications of large language models, this emergent learning +paradigm is not well understood from a theoretical perspective. In this paper, +we propose a first-of-its-kind PAC based framework for in-context learnability, +and use it to provide the first finite sample complexity results for the +in-context learning setup. Our framework includes an initial pretraining phase, +which fits a function to the pretraining distribution, and then a second +in-context learning phase, which keeps this function constant and concatenates +training examples of the downstream task in its input. We use our framework in +order to prove that, under mild assumptions, when the pretraining distribution +is a mixture of latent tasks (a model often considered for natural language +pretraining), these tasks can be efficiently learned via in-context learning, +even though the model's weights are unchanged and the input significantly +diverges from the pretraining distribution. Our theoretical analysis reveals +that in this setting, in-context learning is more about identifying the task +than about learning it, a result which is in line with a series of recent +empirical findings. We hope that the in-context learnability framework +presented in this paper will facilitate future progress towards a deeper +understanding of this important new learning paradigm. +" +SINC: Self-Supervised In-Context Learning for Vision-Language Tasks,Yi-Syuan Chen,http://arxiv.org/pdf/2307.07742v2.pdf,2023-07-15,"['cs.cv', 'cs.ai']",2307.07742v2.pdf," Large Pre-trained Transformers exhibit an intriguing capacity for in-context +learning. Without gradient updates, these models can rapidly construct new +predictors from demonstrations presented in the inputs. Recent works promote +this ability in the vision-language domain by incorporating visual information +into large language models that can already make in-context predictions. +However, these methods could inherit issues in the language domain, such as +template sensitivity and hallucination. Also, the scale of these language +models raises a significant demand for computations, making learning and +operating these models resource-intensive. To this end, we raise a question: +``How can we enable in-context learning without relying on the intrinsic +in-context ability of large language models?"". To answer it, we propose a +succinct and general framework, Self-supervised IN-Context learning (SINC), +that introduces a meta-model to learn on self-supervised prompts consisting of +tailored demonstrations. The learned models can be transferred to downstream +tasks for making in-context predictions on-the-fly. Extensive experiments show +that SINC outperforms gradient-based methods in various vision-language tasks +under few-shot settings. Furthermore, the designs of SINC help us investigate +the benefits of in-context learning across different tasks, and the analysis +further reveals the essential components for the emergence of in-context +learning in the vision-language domain. +" +Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator,Hyuhng Joon Kim,http://arxiv.org/pdf/2206.08082v1.pdf,2022-06-16,['cs.cl'],2206.08082v1.pdf," Large-scale pre-trained language models (PLMs) are well-known for being +capable of solving a task simply by conditioning a few input-label pairs dubbed +demonstrations on a prompt without being explicitly tuned for the desired +downstream task. 
Such a process (i.e., in-context learning), however, naturally +leads to high reliance on the demonstrations which are usually selected from +external datasets. In this paper, we propose self-generated in-context learning +(SG-ICL), which generates demonstrations for in-context learning from PLM +itself to minimize the reliance on the external demonstration. We conduct +experiments on four different text classification tasks and show SG-ICL +significantly outperforms zero-shot learning and is generally worth +approximately 0.6 gold training samples. Moreover, our generated demonstrations +show more consistent performance with low variance compared to randomly +selected demonstrations from the training dataset. +" +Active Example Selection for In-Context Learning,Yiming Zhang,http://arxiv.org/pdf/2211.04486v1.pdf,2022-11-08,"['cs.cl', 'cs.ai']",2211.04486v1.pdf," With a handful of demonstration examples, large-scale language models show +strong capability to perform various tasks by in-context learning from these +examples, without any fine-tuning. We demonstrate that in-context learning +performance can be highly unstable across samples of examples, indicating the +idiosyncrasies of how language models acquire information. We formulate example +selection for in-context learning as a sequential decision problem, and propose +a reinforcement learning algorithm for identifying generalizable policies to +select demonstration examples. For GPT-2, our learned policies demonstrate +strong abilities of generalizing to unseen tasks in training, with a $5.8\%$ +improvement on average. Examples selected from our learned policies can even +achieve a small improvement on GPT-3 Ada. However, the improvement diminishes +on larger GPT-3 models, suggesting emerging capabilities of large language +models. +" +On the Compositional Generalization Gap of In-Context Learning,Arian Hosseini,http://arxiv.org/pdf/2211.08473v1.pdf,2022-11-15,"['cs.cl', 'cs.lg']",2211.08473v1.pdf," Pretrained large generative language models have shown great performance on +many tasks, but exhibit low compositional generalization abilities. Scaling +such models has been shown to improve their performance on various NLP tasks +even just by conditioning them on a few examples to solve the task without any +fine-tuning (also known as in-context learning). In this work, we look at the +gap between the in-distribution (ID) and out-of-distribution (OOD) performance +of such models in semantic parsing tasks with in-context learning. In the ID +settings, the demonstrations are from the same split (test or train) that the +model is being evaluated on, and in the OOD settings, they are from the other +split. We look at how the relative generalization gap of in-context learning +evolves as models are scaled up. We evaluate four model families, OPT, BLOOM, +CodeGen and Codex on three semantic parsing datasets, CFQ, SCAN and GeoQuery +with different number of exemplars, and observe a trend of decreasing relative +generalization gap as models are scaled up. +" +Bayesian Optimization of Catalysts With In-context Learning,Mayk Caldas Ramos,http://arxiv.org/pdf/2304.05341v1.pdf,2023-04-11,"['physics.chem-ph', 'cs.lg']",2304.05341v1.pdf," Large language models (LLMs) are able to do accurate classification with zero +or only a few examples (in-context learning). 
We show a prompting system that +enables regression with uncertainty for in-context learning with frozen LLM +(GPT-3, GPT-3.5, and GPT-4) models, allowing predictions without features or +architecture tuning. By incorporating uncertainty, our approach enables +Bayesian optimization for catalyst or molecule optimization using natural +language, eliminating the need for training or simulation. Here, we performed +the optimization using the synthesis procedure of catalysts to predict +properties. Working with natural language mitigates difficulty synthesizability +since the literal synthesis procedure is the model's input. We showed that +in-context learning could improve past a model context window (maximum number +of tokens the model can process at once) as data is gathered via example +selection, allowing the model to scale better. Although our method does not +outperform all baselines, it requires zero training, feature selection, and +minimal computing while maintaining satisfactory performance. We also find +Gaussian Process Regression on text embeddings is strong at Bayesian +optimization. The code is available in our GitHub repository: +https://github.com/ur-whitelab/BO-LIFT +" +In-Context Learning Unlocked for Diffusion Models,Zhendong Wang,http://arxiv.org/pdf/2305.01115v2.pdf,2023-05-01,['cs.cv'],2305.01115v2.pdf," We present Prompt Diffusion, a framework for enabling in-context learning in +diffusion-based generative models. Given a pair of task-specific example +images, such as depth from/to image and scribble from/to image, and a text +guidance, our model automatically understands the underlying task and performs +the same task on a new query image following the text guidance. To achieve +this, we propose a vision-language prompt that can model a wide range of +vision-language tasks and a diffusion model that takes it as input. The +diffusion model is trained jointly over six different tasks using these +prompts. The resulting Prompt Diffusion model is the first diffusion-based +vision-language foundation model capable of in-context learning. It +demonstrates high-quality in-context generation on the trained tasks and +generalizes effectively to new, unseen vision tasks with their respective +prompts. Our model also shows compelling text-guided image editing results. Our +framework aims to facilitate research into in-context learning for computer +vision. We share our code and pre-trained models at +https://github.com/Zhendong-Wang/Prompt-Diffusion. +" +Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation,Marius Mosbach,http://arxiv.org/pdf/2305.16938v2.pdf,2023-05-26,['cs.cl'],2305.16938v2.pdf," Few-shot fine-tuning and in-context learning are two alternative strategies +for task adaptation of pre-trained language models. Recently, in-context +learning has gained popularity over fine-tuning due to its simplicity and +improved out-of-domain generalization, and because extensive evidence shows +that fine-tuned models pick up on spurious correlations. Unfortunately, +previous comparisons of the two approaches were done using models of different +sizes. This raises the question of whether the observed weaker out-of-domain +generalization of fine-tuned models is an inherent property of fine-tuning or a +limitation of the experimental setup. 
In this paper, we compare the +generalization of few-shot fine-tuning and in-context learning to challenge +datasets, while controlling for the models used, the number of examples, and +the number of parameters, ranging from 125M to 30B. Our results show that +fine-tuned language models can in fact generalize well out-of-domain. We find +that both approaches generalize similarly; they exhibit large variation and +depend on properties such as model size and the number of examples, +highlighting that robust task adaptation remains a challenge. +" +Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning,Ruixiang Tang,http://arxiv.org/pdf/2305.17256v2.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.lg']",2305.17256v2.pdf," Large language models (LLMs) have recently shown great potential for +in-context learning, where LLMs learn a new task simply by conditioning on a +few input-label pairs (prompts). Despite their potential, our understanding of +the factors influencing end-task performance and the robustness of in-context +learning remains limited. This paper aims to bridge this knowledge gap by +investigating the reliance of LLMs on shortcuts or spurious correlations within +prompts. Through comprehensive experiments on classification and extraction +tasks, we reveal that LLMs are ""lazy learners"" that tend to exploit shortcuts +in prompts for downstream tasks. Additionally, we uncover a surprising finding +that larger models are more likely to utilize shortcuts in prompts during +inference. Our findings provide a new perspective on evaluating robustness in +in-context learning and pose new challenges for detecting and mitigating the +use of shortcuts in prompts. +" +Multi-Dimensional Evaluation of Text Summarization with In-Context Learning,Sameer Jain,http://arxiv.org/pdf/2306.01200v1.pdf,2023-06-01,['cs.cl'],2306.01200v1.pdf," Evaluation of natural language generation (NLG) is complex and +multi-dimensional. Generated text can be evaluated for fluency, coherence, +factuality, or any other dimensions of interest. Most frameworks that perform +such multi-dimensional evaluation require training on large manually or +synthetically generated datasets. In this paper, we study the efficacy of large +language models as multi-dimensional evaluators using in-context learning, +obviating the need for large training datasets. Our experiments show that +in-context learning-based evaluators are competitive with learned evaluation +frameworks for the task of text summarization, establishing state-of-the-art on +dimensions such as relevance and factual consistency. We then analyze the +effects of factors such as the selection and number of in-context examples on +performance. Finally, we study the efficacy of in-context learning based +evaluators in evaluating zero-shot summaries written by large language models +such as GPT-3. +" +Exploring the Integration of Large Language Models into Automatic Speech Recognition Systems: An Empirical Study,Zeping Min,http://arxiv.org/pdf/2307.06530v1.pdf,2023-07-13,"['cs.cl', 'cs.sd', 'eess.as']",2307.06530v1.pdf," This paper explores the integration of Large Language Models (LLMs) into +Automatic Speech Recognition (ASR) systems to improve transcription accuracy. +The increasing sophistication of LLMs, with their in-context learning +capabilities and instruction-following behavior, has drawn significant +attention in the field of Natural Language Processing (NLP). 
Our primary focus +is to investigate the potential of using an LLM's in-context learning +capabilities to enhance the performance of ASR systems, which currently face +challenges such as ambient noise, speaker accents, and complex linguistic +contexts. We designed a study using the Aishell-1 and LibriSpeech datasets, +with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities. +Unfortunately, our initial experiments did not yield promising results, +indicating the complexity of leveraging LLM's in-context learning for ASR +applications. Despite further exploration with varied settings and models, the +corrected sentences from the LLMs frequently resulted in higher Word Error +Rates (WER), demonstrating the limitations of LLMs in speech applications. This +paper provides a detailed overview of these experiments, their results, and +implications, establishing that using LLMs' in-context learning capabilities to +correct potential errors in speech recognition transcriptions is still a +challenging task at the current stage. +" +ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought,Hanchong Zhang,http://arxiv.org/pdf/2310.17342v1.pdf,2023-10-26,['cs.cl'],2310.17342v1.pdf," Recently Large Language Models (LLMs) have been proven to have strong +abilities in various domains and tasks. We study the problem of prompt +designing in the text-to-SQL task and attempt to improve the LLMs' reasoning +ability when generating SQL queries. Besides the trivial few-shot in-context +learning setting, we design our chain-of-thought (CoT) prompt with a similar +method to schema linking. We provide a method named ACT-SQL to automatically +generate auto-CoT exemplars and thus the whole process doesn't need manual +labeling. Our approach is cost-saving since we only use the LLMs' API call once +when generating one SQL query. Furthermore, we extend our in-context learning +method to the multi-turn text-to-SQL task. The experiment results show that the +LLMs' performance can benefit from our ACT-SQL approach. Our approach achieves +SOTA performance on the Spider dev set among existing in-context learning +approaches. +" +COSMIC: Data Efficient Instruction-tuning For Speech In-Context Learning,Jing Pan,http://arxiv.org/pdf/2311.02248v1.pdf,2023-11-03,"['cs.cl', 'cs.ai', 'eess.as']",2311.02248v1.pdf," We present a data and cost efficient way of incorporating the speech modality +into a large language model (LLM). The resulting multi-modal LLM is a +COntextual Speech Model with Instruction-following/in-context-learning +Capabilities - COSMIC. Speech comprehension test question-answer (SQA) pairs +are generated using GPT-3.5 based on the speech transcriptions as a part of the +supervision for the instruction tuning. With fewer than 20M trainable +parameters and as little as 450 hours of English speech data for SQA +generation, COSMIC exhibits emergent instruction-following and in-context +learning capabilities in speech-to-text tasks. The model is able to follow the +given text instructions to generate text response even on the unseen EN$\to$X +speech-to-text translation (S2TT) task with zero-shot setting. We evaluate the +model's in-context learning via various tasks such as EN$\to$X S2TT and +few-shot domain adaptation. And instruction-following capabilities are +evaluated through a contextual biasing benchmark. Our results demonstrate the +efficacy of the proposed low cost recipe for building a speech LLM and that +with the new instruction-tuning data. 
+" +Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again,Bernal Jiménez Gutiérrez,http://arxiv.org/pdf/2203.08410v3.pdf,2022-03-16,"['cs.cl', 'cs.ir']",2203.08410v3.pdf," The strong few-shot in-context learning capability of large pre-trained +language models (PLMs) such as GPT-3 is highly appealing for application +domains such as biomedicine, which feature high and diverse demands of language +technologies but also high data annotation costs. In this paper, we present the +first systematic and comprehensive study to compare the few-shot performance of +GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on +two highly representative biomedical information extraction tasks, named entity +recognition and relation extraction. We follow the true few-shot setting to +avoid overestimating models' few-shot performance by model selection over a +large validation set. We also optimize GPT-3's performance with known +techniques such as contextual calibration and dynamic in-context example +retrieval. However, our results show that GPT-3 still significantly +underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 +in-context learning also yields smaller gains in accuracy when more training +data becomes available. Our in-depth analyses further reveal issues of the +in-context learning setting that may be detrimental to information extraction +tasks in general. Given the high cost of experimenting with GPT-3, we hope our +study provides guidance for biomedical researchers and practitioners towards +more promising directions such as fine-tuning small PLMs. +" +Exploring Effective Factors for Improving Visual In-Context Learning,Yanpeng Sun,http://arxiv.org/pdf/2304.04748v1.pdf,2023-04-10,['cs.cv'],2304.04748v1.pdf," The In-Context Learning (ICL) is to understand a new task via a few +demonstrations (aka. prompt) and predict new inputs without tuning the models. +While it has been widely studied in NLP, it is still a relatively new area of +research in computer vision. To reveal the factors influencing the performance +of visual in-context learning, this paper shows that prompt selection and +prompt fusion are two major factors that have a direct impact on the inference +performance of visual context learning. Prompt selection is the process of +identifying the most appropriate prompt or example to help the model understand +new tasks. This is important because providing the model with relevant prompts +can help it learn more effectively and efficiently. Prompt fusion involves +combining knowledge from different positions within the large-scale visual +model. By doing this, the model can leverage the diverse knowledge stored in +different parts of the model to improve its performance on new tasks. Based +these findings, we propose a simple framework prompt-SelF for visual in-context +learning. Specifically, we first use the pixel-level retrieval method to select +a suitable prompt, and then use different prompt fusion methods to activate all +the knowledge stored in the large-scale model, and finally ensemble the +prediction results obtained from different prompt fusion methods to obtain the +final prediction results. And we conduct extensive experiments on single-object +segmentation and detection tasks to demonstrate the effectiveness of +prompt-SelF. Remarkably, the prompt-SelF has outperformed OSLSM based +meta-learning in 1-shot segmentation for the first time. This indicated the +great potential of visual in-context learning. 
The source code and models will +be available at \url{https://github.com/syp2ysy/prompt-SelF}. +" +Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning,Yingcong Li,http://arxiv.org/pdf/2305.18869v2.pdf,2023-05-30,"['cs.lg', 'cs.ai', 'cs.cl']",2305.18869v2.pdf," Chain-of-thought (CoT) is a method that enables language models to handle +complex reasoning tasks by decomposing them into simpler steps. Despite its +success, the underlying mechanics of CoT are not yet fully understood. In an +attempt to shed light on this, our study investigates the impact of CoT on the +ability of transformers to in-context learn a simple to study, yet general +family of compositional functions: multi-layer perceptrons (MLPs). In this +setting, we find that the success of CoT can be attributed to breaking down +in-context learning of a compositional function into two distinct phases: +focusing on and filtering data related to each step of the composition and +in-context learning the single-step composition function. Through both +experimental and theoretical evidence, we demonstrate how CoT significantly +reduces the sample complexity of in-context learning (ICL) and facilitates the +learning of complex functions that non-CoT methods struggle with. Furthermore, +we illustrate how transformers can transition from vanilla in-context learning +to mastering a compositional function with CoT by simply incorporating +additional layers that perform the necessary data-filtering for CoT via the +attention mechanism. In addition to these test-time benefits, we show CoT helps +accelerate pretraining by learning shortcuts to represent complex functions and +filtering plays an important role in this process. These findings collectively +provide insights into the mechanics of CoT, inviting further investigation of +its role in complex reasoning tasks. +" +In-Context Learning through the Bayesian Prism,Kabir Ahuja,http://arxiv.org/pdf/2306.04891v1.pdf,2023-06-08,"['cs.lg', 'cs.cl']",2306.04891v1.pdf," In-context learning is one of the surprising and useful features of large +language models. How it works is an active area of research. Recently, stylized +meta-learning-like setups have been devised that train these models on a +sequence of input-output pairs $(x, f(x))$ from a function class using the +language modeling loss and observe generalization to unseen functions from the +same class. One of the main discoveries in this line of research has been that +for several problems such as linear regression, trained transformers learn +algorithms for learning functions in context. However, the inductive biases of +these models resulting in this behavior are not clearly understood. A model +with unlimited training data and compute is a Bayesian predictor: it learns the +pretraining distribution. It has been shown that high-capacity transformers +mimic the Bayesian predictor for linear regression. In this paper, we show +empirical evidence of transformers exhibiting the behavior of this ideal +learner across different linear and non-linear function classes. We also extend +the previous setups to work in the multitask setting and verify that +transformers can do in-context learning in this setup as well and the Bayesian +perspective sheds light on this setting also. Finally, via the example of +learning Fourier series, we study the inductive bias for in-context learning. +We find that in-context learning may or may not have simplicity bias depending +on the pretraining data distribution. 
+" +Explore In-Context Learning for 3D Point Cloud Understanding,Zhongbin Fang,http://arxiv.org/pdf/2306.08659v1.pdf,2023-06-14,['cs.cv'],2306.08659v1.pdf," With the rise of large-scale models trained on broad data, in-context +learning has become a new learning paradigm that has demonstrated significant +potential in natural language processing and computer vision tasks. Meanwhile, +in-context learning is still largely unexplored in the 3D point cloud domain. +Although masked modeling has been successfully applied for in-context learning +in 2D vision, directly extending it to 3D point clouds remains a formidable +challenge. In the case of point clouds, the tokens themselves are the point +cloud positions (coordinates) that are masked during inference. Moreover, +position embedding in previous works may inadvertently introduce information +leakage. To address these challenges, we introduce a novel framework, named +Point-In-Context, designed especially for in-context learning in 3D point +clouds, where both inputs and outputs are modeled as coordinates for each task. +Additionally, we propose the Joint Sampling module, carefully designed to work +in tandem with the general point sampling operator, effectively resolving the +aforementioned technical issues. We conduct extensive experiments to validate +the versatility and adaptability of our proposed methods in handling a wide +range of tasks. Furthermore, with a more effective prompt selection strategy, +our framework surpasses the results of individually trained models. +" +Scaling In-Context Demonstrations with Structured Attention,Tianle Cai,http://arxiv.org/pdf/2307.02690v1.pdf,2023-07-05,"['cs.cl', 'cs.ai', 'cs.lg']",2307.02690v1.pdf," The recent surge of large language models (LLMs) highlights their ability to +perform in-context learning, i.e., ""learning"" to perform a task from a few +demonstrations in the context without any parameter updates. However, their +capabilities of in-context learning are limited by the model architecture: 1) +the use of demonstrations is constrained by a maximum sentence length due to +positional embeddings; 2) the quadratic complexity of attention hinders users +from using more demonstrations efficiently; 3) LLMs are shown to be sensitive +to the order of the demonstrations. In this work, we tackle these challenges by +proposing a better architectural design for in-context learning. We propose +SAICL (Structured Attention for In-Context Learning), which replaces the +full-attention by a structured attention mechanism designed for in-context +learning, and removes unnecessary dependencies between individual +demonstrations, while making the model invariant to the permutation of +demonstrations. We evaluate SAICL in a meta-training framework and show that +SAICL achieves comparable or better performance than full attention while +obtaining up to 3.4x inference speed-up. SAICL also consistently outperforms a +strong Fusion-in-Decoder (FiD) baseline which processes each demonstration +independently. Finally, thanks to its linear nature, we demonstrate that SAICL +can easily scale to hundreds of demonstrations with continuous performance +gains with scaling. 
+" +DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning,Jing Xiong,http://arxiv.org/pdf/2310.02954v4.pdf,2023-10-04,['cs.cl'],2310.02954v4.pdf," Recent advances in natural language processing, primarily propelled by Large +Language Models (LLMs), have showcased their remarkable capabilities grounded +in in-context learning. A promising avenue for guiding LLMs in intricate +reasoning tasks involves the utilization of intermediate reasoning steps within +the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies +in the effective selection of exemplars for facilitating in-context learning. +In this study, we introduce a framework that leverages Dual Queries and +Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars +for in-context learning. Dual Queries first query LLM to obtain LLM-generated +knowledge such as CoT, then query the retriever to obtain the final exemplars +via both question and the knowledge. Moreover, for the second query, LoRe +employs dimensionality reduction techniques to refine exemplar selection, +ensuring close alignment with the input question's knowledge. Through extensive +experiments, we demonstrate that DQ-LoRe significantly outperforms prior +state-of-the-art methods in the automatic selection of exemplars for GPT-4, +enhancing performance from 92.5% to 94.2%. Our comprehensive analysis further +reveals that DQ-LoRe consistently outperforms retrieval-based approaches in +terms of both performance and adaptability, especially in scenarios +characterized by distribution shifts. DQ-LoRe pushes the boundaries of +in-context learning and opens up new avenues for addressing complex reasoning +challenges. We will release the code soon. +" +OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach,Jiazheng Li,http://arxiv.org/pdf/2305.14973v1.pdf,2023-05-24,['cs.cl'],2305.14973v1.pdf," The exceptional performance of pre-trained large language models has +revolutionised various applications, but their adoption in production +environments is hindered by prohibitive costs and inefficiencies, particularly +when utilising long prompts. This paper proposes OverPrompt, an in-context +learning method aimed at improving LLM efficiency and performance by processing +multiple inputs in parallel. Evaluated across diverse datasets, OverPrompt +enhances task efficiency and integrates a diverse range of examples for +improved performance. Particularly, it amplifies fact-checking and sentiment +analysis tasks when supplemented with contextual information. Synthetic data +grouping further enhances performance, suggesting a viable approach for data +augmentation. +" +Crosslingual Retrieval Augmented In-context Learning for Bangla,Xiaoqian Li,http://arxiv.org/pdf/2311.00587v1.pdf,2023-11-01,['cs.cl'],2311.00587v1.pdf," The promise of Large Language Models (LLMs) in Natural Language Processing +has often been overshadowed by their limited performance in low-resource +languages such as Bangla. To address this, our paper presents a pioneering +approach that utilizes cross-lingual retrieval augmented in-context learning. +By strategically sourcing semantically similar prompts from high-resource +language, we enable multilingual pretrained language models (MPLMs), especially +the generative model BLOOMZ, to successfully boost performance on Bangla tasks. 
+Our extensive evaluation highlights that the cross-lingual retrieval augmented +prompts bring steady improvements to MPLMs over the zero-shot performance. +" +Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations,Kang Min Yoo,http://arxiv.org/pdf/2205.12685v2.pdf,2022-05-25,"['cs.cl', 'cs.ai', 'cs.lg']",2205.12685v2.pdf," Despite recent explosion of interests in in-context learning, the underlying +mechanism and the precise impact of the quality of demonstrations remain +elusive. Intuitively, ground-truth labels should have as much impact in +in-context learning (ICL) as supervised learning, but recent work reported that +the input-label correspondence is significantly less important than previously +thought. Intrigued by this counter-intuitive observation, we re-examine the +importance of ground-truth labels in in-context learning. With the introduction +of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth +Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the +impact of ground-truth label demonstrations. Through extensive analyses, we +find that the correct input-label mappings can have varying impacts on the +downstream in-context learning performances, depending on the experimental +configuration. Through additional studies, we identify key components, such as +the verbosity of prompt templates and the language model size, as the +controlling factor to achieve more noise-resilient ICL. +" +In-context Learning and Induction Heads,Catherine Olsson,http://arxiv.org/pdf/2209.11895v1.pdf,2022-09-24,['cs.lg'],2209.11895v1.pdf," ""Induction heads"" are attention heads that implement a simple algorithm to +complete token sequences like [A][B] ... [A] -> [B]. In this work, we present +preliminary and indirect evidence for a hypothesis that induction heads might +constitute the mechanism for the majority of all ""in-context learning"" in large +transformer models (i.e. decreasing loss at increasing token indices). We find +that induction heads develop at precisely the same point as a sudden sharp +increase in in-context learning ability, visible as a bump in the training +loss. We present six complementary lines of evidence, arguing that induction +heads may be the mechanistic source of general in-context learning in +transformer models of any size. For small attention-only models, we present +strong, causal evidence; for larger models with MLPs, we present correlational +evidence. +" +Transformers learn in-context by gradient descent,Johannes von Oswald,http://arxiv.org/pdf/2212.07677v2.pdf,2022-12-15,"['cs.lg', 'cs.ai', 'cs.cl']",2212.07677v2.pdf," At present, the mechanisms of in-context learning in Transformers are not +well understood and remain mostly an intuition. In this paper, we suggest that +training Transformers on auto-regressive objectives is closely related to +gradient-based meta-learning formulations. We start by providing a simple +weight construction that shows the equivalence of data transformations induced +by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a +regression loss. Motivated by that construction, we show empirically that when +training self-attention-only Transformers on simple regression tasks either the +models learned by GD and Transformers show great similarity or, remarkably, the +weights found by optimization match the construction. Thus we show how trained +Transformers become mesa-optimizers i.e. learn models by gradient descent in +their forward pass. 
This allows us, at least in the domain of regression +problems, to mechanistically understand the inner workings of in-context +learning in optimized Transformers. Building on this insight, we furthermore +identify how Transformers surpass the performance of plain gradient descent by +learning an iterative curvature correction and learn linear models on deep data +representations to solve non-linear regression tasks. Finally, we discuss +intriguing parallels to a mechanism identified to be crucial for in-context +learning termed induction-head (Olsson et al., 2022) and show how it could be +understood as a specific case of in-context learning by gradient descent +learning within Transformers. Code to reproduce the experiments can be found at +https://github.com/google-research/self-organising-systems/tree/master/transformers_learn_icl_by_gd . +" +What Makes Good Examples for Visual In-Context Learning?,Yuanhan Zhang,http://arxiv.org/pdf/2301.13670v2.pdf,2023-01-31,['cs.cv'],2301.13670v2.pdf," Large-scale models trained on broad data have recently become the mainstream +architecture in computer vision due to their strong generalization performance. +In this paper, the main focus is on an emergent ability in large vision models, +known as in-context learning, which allows inference on unseen tasks by +conditioning on in-context examples (a.k.a.~prompt) without updating the model +parameters. This concept has been well-known in natural language processing but +has only been studied very recently for large vision models. We for the first +time provide a comprehensive investigation on the impact of in-context examples +in computer vision, and find that the performance is highly sensitive to the +choice of in-context examples. To overcome the problem, we propose a prompt +retrieval framework to automate the selection of in-context examples. +Specifically, we present (1) an unsupervised prompt retrieval method based on +nearest example search using an off-the-shelf model, and (2) a supervised +prompt retrieval method, which trains a neural network to choose examples that +directly maximize in-context learning performance. The results demonstrate that +our methods can bring non-trivial improvements to visual in-context learning in +comparison to the commonly-used random selection. +" +Compositional Exemplars for In-context Learning,Jiacheng Ye,http://arxiv.org/pdf/2302.05698v3.pdf,2023-02-11,"['cs.cl', 'cs.ai', 'cs.lg']",2302.05698v3.pdf," Large pretrained language models (LMs) have shown impressive In-Context +Learning (ICL) ability, where the model learns to do an unseen task via a +prompt consisting of input-output examples as the demonstration, without any +parameter updates. The performance of ICL is highly dominated by the quality of +the selected in-context examples. However, previous selection methods are +mostly based on simple heuristics, leading to sub-optimal performance. In this +work, we formulate in-context example selection as a subset selection problem. +We propose CEIL (Compositional Exemplars for In-context Learning), which is +instantiated by Determinantal Point Processes (DPPs) to model the interaction +between the given input and in-context examples, and optimized through a +carefully-designed contrastive learning objective to obtain preference from +LMs. 
We validate CEIL on 12 classification and generation datasets from 7 +distinct NLP tasks, including sentiment analysis, paraphrase detection, natural +language inference, commonsense reasoning, open-domain question answering, code +generation, and semantic parsing. Extensive experiments demonstrate not only +the state-of-the-art performance but also the transferability and +compositionality of CEIL, shedding new light on effective and efficient +in-context learning. Our code is released at +https://github.com/HKUNLP/icl-ceil. +" +ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction,Jiabang He,http://arxiv.org/pdf/2303.05063v4.pdf,2023-03-09,['cs.cl'],2303.05063v4.pdf," Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated +remarkable results in various natural language processing (NLP) tasks with +in-context learning, which involves inference based on a few demonstration +examples. Despite their successes in NLP tasks, no investigation has been +conducted to assess the ability of LLMs to perform document information +extraction (DIE) using in-context learning. Applying LLMs to DIE poses two +challenges: the modality and task gap. To this end, we propose a simple but +effective in-context learning framework called ICL-D3IE, which enables LLMs to +perform DIE with different types of demonstration examples. Specifically, we +extract the most difficult and distinct segments from hard training documents +as hard demonstrations for benefiting all test instances. We design +demonstrations describing relationships that enable LLMs to understand +positional relationships. We introduce formatting demonstrations for easy +answer extraction. Additionally, the framework improves diverse demonstrations +by updating them iteratively. Our experiments on three widely used benchmark +datasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT to +achieve superior performance when compared to previous pre-trained methods +fine-tuned with full training in both the in-distribution (ID) setting and in +the out-of-distribution (OOD) setting. Code is available at +https://github.com/MAEHCM/ICL-D3IE. +" +The Closeness of In-Context Learning and Weight Shifting for Softmax Regression,Shuai Li,http://arxiv.org/pdf/2304.13276v1.pdf,2023-04-26,"['cs.cl', 'cs.lg']",2304.13276v1.pdf," Large language models (LLMs) are known for their exceptional performance in +natural language processing, making them highly effective in many human +life-related or even job-related tasks. The attention mechanism in the +Transformer architecture is a critical component of LLMs, as it allows the +model to selectively focus on specific input parts. The softmax unit, which is +a key part of the attention mechanism, normalizes the attention scores. Hence, +the performance of LLMs in various NLP tasks depends significantly on the +crucial role played by the attention mechanism with the softmax unit. + In-context learning, as one of the celebrated abilities of recent LLMs, is an +important concept in querying LLMs such as ChatGPT. Without further parameter +updates, Transformers can learn to predict based on few in-context examples. +However, the reason why Transformers becomes in-context learners is not well +understood. 
Recently, several works [ASA+22,GTLV22,ONR+22] have studied the +in-context learning from a mathematical perspective based on a linear +regression formulation $\min_x\| Ax - b \|_2$, which show Transformers' +capability of learning linear functions in context. + In this work, we study the in-context learning based on a softmax regression +formulation $\min_{x} \| \langle \exp(Ax), {\bf 1}_n \rangle^{-1} \exp(Ax) - b +\|_2$ of Transformer's attention mechanism. We show the upper bounds of the +data transformations induced by a single self-attention layer and by +gradient-descent on a $\ell_2$ regression loss for softmax prediction function, +which imply that when training self-attention-only Transformers for fundamental +regression tasks, the models learned by gradient-descent and Transformers show +great similarity. +" +MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning,Haozhe Zhao,http://arxiv.org/pdf/2309.07915v2.pdf,2023-09-14,"['cs.cl', 'cs.ai', 'cs.cv']",2309.07915v2.pdf," Since the resurgence of deep learning, vision-language models (VLMs) enhanced +by large language models (LLMs) have grown exponentially in popularity. +However, while LLMs can utilize extensive background knowledge and task +information with in-context learning, most VLMs still struggle with +understanding complex multi-modal prompts with multiple images, making VLMs +less effective in downstream vision-language tasks. In this paper, we address +the limitation above by 1) introducing MMICL, a new approach to allow the VLM +to deal with multi-modal inputs efficiently; 2) proposing a novel context +scheme to augment the in-context learning ability of the VLM; 3) constructing +the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the +VLM's ability to understand complex multi-modal prompts. Our experiments +confirm that MMICL achieves new state-of-the-art zero-shot performance on a +wide range of general vision-language tasks, especially for complex benchmarks, +including MME and MMBench. Our analysis demonstrates that MMICL effectively +tackles the challenge of complex multi-modal prompt understanding and emerges +the impressive ICL ability. Furthermore, we observe that MMICL successfully +alleviates language bias in VLMs, a common issue for VLMs that often leads to +hallucination when faced with extensive textual context. +" +Visual In-Context Learning for Few-Shot Eczema Segmentation,Neelesh Kumar,http://arxiv.org/pdf/2309.16656v1.pdf,2023-09-28,"['cs.cv', 'cs.lg']",2309.16656v1.pdf," Automated diagnosis of eczema from digital camera images is crucial for +developing applications that allow patients to self-monitor their recovery. An +important component of this is the segmentation of eczema region from such +images. Current methods for eczema segmentation rely on deep neural networks +such as convolutional (CNN)-based U-Net or transformer-based Swin U-Net. While +effective, these methods require high volume of annotated data, which can be +difficult to obtain. Here, we investigate the capabilities of visual in-context +learning that can perform few-shot eczema segmentation with just a handful of +examples and without any need for retraining models. Specifically, we propose a +strategy for applying in-context learning for eczema segmentation with a +generalist vision model called SegGPT. 
When benchmarked on a dataset of +annotated eczema images, we show that SegGPT with just 2 representative example +images from the training dataset performs better (mIoU: 36.69) than a CNN U-Net +trained on 428 images (mIoU: 32.60). We also discover that using more number of +examples for SegGPT may in fact be harmful to its performance. Our result +highlights the importance of visual in-context learning in developing faster +and better solutions to skin imaging tasks. Our result also paves the way for +developing inclusive solutions that can cater to minorities in the demographics +who are typically heavily under-represented in the training data. +" +Learning To Retrieve Prompts for In-Context Learning,Ohad Rubin,http://arxiv.org/pdf/2112.08633v2.pdf,2021-12-16,"['cs.cl', 'cs.lg']",2112.08633v2.pdf," In-context learning is a recent paradigm in natural language understanding, +where a large pre-trained language model (LM) observes a test instance and a +few training examples as its input, and directly decodes the output without any +update to its parameters. However, performance has been shown to strongly +depend on the selected training examples (termed prompt). In this work, we +propose an efficient method for retrieving prompts for in-context learning +using annotated data and a LM. Given an input-output pair, we estimate the +probability of the output given the input and a candidate training example as +the prompt, and label training examples as positive or negative based on this +probability. We then train an efficient dense retriever from this data, which +is used to retrieve training examples as prompts at test time. We evaluate our +approach on three sequence-to-sequence tasks where language utterances are +mapped to meaning representations, and find that it substantially outperforms +prior work and multiple baselines across the board. +" +Semantic-Oriented Unlabeled Priming for Large-Scale Language Models,Yanchen Liu,http://arxiv.org/pdf/2202.06133v1.pdf,2022-02-12,['cs.cl'],2202.06133v1.pdf," Due to the high costs associated with finetuning large language models, +various recent works propose to adapt them to specific tasks without any +parameter updates through in-context learning. Unfortunately, for in-context +learning there is currently no way to leverage unlabeled data, which is often +much easier to obtain in large quantities than labeled examples. In this work, +we therefore investigate ways to make use of unlabeled examples to improve the +zero-shot performance of pretrained language models without any finetuning: We +introduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifies +examples by retrieving semantically similar unlabeled examples, assigning +labels to them in a zero-shot fashion, and then using them for in-context +learning. We also propose bag-of-contexts priming, a new priming strategy that +is more suitable for our setting and enables the usage of more examples than +fit into the context window. +" +Complementary Explanations for Effective In-Context Learning,Xi Ye,http://arxiv.org/pdf/2211.13892v2.pdf,2022-11-25,['cs.cl'],2211.13892v2.pdf," Large language models (LLMs) have exhibited remarkable capabilities in +learning from explanations in prompts, but there has been limited understanding +of exactly how these explanations function or why they are effective. This work +aims to better understand the mechanisms by which explanations are used for +in-context learning. 
We first study the impact of two different factors on the +performance of prompts with explanations: the computation trace (the way the +solution is decomposed) and the natural language used to express the prompt. By +perturbing explanations on three controlled tasks, we show that both factors +contribute to the effectiveness of explanations. We further study how to form +maximally effective sets of explanations for solving a given test query. We +find that LLMs can benefit from the complementarity of the explanation set: +diverse reasoning skills shown by different exemplars can lead to better +performance. Therefore, we propose a maximal marginal relevance-based exemplar +selection approach for constructing exemplar sets that are both relevant as +well as complementary, which successfully improves the in-context learning +performance across three real-world tasks on multiple LLMs. +" +Diverse Demonstrations Improve In-context Compositional Generalization,Itay Levy,http://arxiv.org/pdf/2212.06800v3.pdf,2022-12-13,['cs.cl'],2212.06800v3.pdf," In-context learning has shown great success in i.i.d semantic parsing splits, +where the training and test sets are drawn from the same distribution. In this +setup, models are typically prompted with demonstrations that are similar to +the input utterance. However, in the setup of compositional generalization, +where models are tested on outputs with structures that are absent from the +training set, selecting similar demonstrations is insufficient, as often no +example will be similar enough to the input. In this work, we propose a method +to select diverse demonstrations that aims to collectively cover all of the +structures required in the output program, in order to encourage the model to +generalize to new structures from these demonstrations. We empirically show +that combining diverse demonstrations with in-context learning substantially +improves performance across three compositional generalization semantic parsing +datasets in the pure in-context learning setup and when combined with +finetuning. +" +The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning,Hanlin Zhang,http://arxiv.org/pdf/2212.08686v1.pdf,2022-12-16,['cs.cl'],2212.08686v1.pdf," Pre-trained language models (LMs) have shown remarkable reasoning performance +using explanations (or ``chain-of-thought'' (CoT)) for in-context learning. On +the other hand, these reasoning tasks are usually presumed to be more +approachable for symbolic programming. To make progress towards understanding +in-context learning, we curate synthetic datasets containing equivalent +(natural, symbolic) data pairs, where symbolic examples contain first-order +logic rules and predicates from knowledge bases (KBs). Then we revisit +neuro-symbolic approaches and use Language Models as Logic Programmer (LMLP) +that learns from demonstrations containing logic rules and corresponding +examples to iteratively reason over KBs, recovering Prolog's backward chaining +algorithm. Comprehensive experiments are included to systematically compare +LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more +than 25% higher accuracy than CoT on length generalization benchmarks even with +fewer parameters. 
+" +Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering,Zhiyong Wu,http://arxiv.org/pdf/2212.10375v2.pdf,2022-12-20,"['cs.cl', 'cs.ai']",2212.10375v2.pdf," Despite the surprising few-shot performance of in-context learning (ICL), it +is still a common practice to randomly sample examples to serve as context. +This paper advocates a new principle for ICL: self-adaptive in-context +learning. The self-adaption mechanism is introduced to help each sample find an +in-context example permutation (i.e., selection and ordering) that can derive +the correct prediction, thus maximizing performance. To validate the +effectiveness of self-adaptive ICL, we propose a general select-then-rank +framework and instantiate it with new selection and ranking algorithms. Upon +extensive evaluation on eight different NLP datasets, our self-adaptive ICL +method achieves a 40% relative improvement over the common practice setting. +Further analysis reveals the enormous potential of self-adaptive ICL that it +might be able to close the gap between ICL and finetuning given more advanced +algorithms. Our code is released to facilitate future research in this area: +https://github.com/Shark-NLP/self-adaptive-ICL +" +Privacy-Preserving In-Context Learning for Large Language Models,Tong Wu,http://arxiv.org/pdf/2305.01639v2.pdf,2023-05-02,"['cs.lg', 'cs.ai', 'cs.cr']",2305.01639v2.pdf," In-context learning (ICL) is an important capability of Large Language Models +(LLMs), enabling these models to dynamically adapt based on specific, +in-context exemplars, thereby improving accuracy and relevance. However, LLM's +responses may leak the sensitive private information contained in in-context +exemplars. To address this challenge, we propose Differentially Private +In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The +key idea for DP-ICL paradigm is generating differentially private responses +through a noisy consensus among an ensemble of LLM's responses based on +disjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiate +several techniques showing how to privatize ICL for text classification and +language generation. We evaluate DP-ICL on four text classification benchmarks +and two language generation tasks, and our empirical results show that DP-ICL +achieves a strong utility-privacy tradeoff. +" +In-context Learning as Maintaining Coherency: A Study of On-the-fly Machine Translation Using Large Language Models,Suzanna Sia,http://arxiv.org/pdf/2305.03573v1.pdf,2023-05-05,"['cs.cl', 'cs.ai']",2305.03573v1.pdf," The phenomena of in-context learning has typically been thought of as +""learning from examples"". In this work which focuses on Machine Translation, we +present a perspective of in-context learning as the desired generation task +maintaining coherency with its context, i.e., the prompt examples. We first +investigate randomly sampled prompts across 4 domains, and find that +translation performance improves when shown in-domain prompts. Next, we +investigate coherency for the in-domain setting, which uses prompt examples +from a moving window. We study this with respect to other factors that have +previously been identified in the literature such as length, surface similarity +and sentence embedding similarity. 
Our results across 3 models (GPTNeo2.7B, +Bloom3B, XGLM2.9B), and three translation directions +(\texttt{en}$\rightarrow$\{\texttt{pt, de, fr}\}) suggest that the long-term +coherency of the prompts and the test sentence is a good indicator of +downstream translation performance. In doing so, we demonstrate the efficacy of +In-context Machine Translation for on-the-fly adaptation. +" +Small Models are Valuable Plug-ins for Large Language Models,Canwen Xu,http://arxiv.org/pdf/2305.08848v1.pdf,2023-05-15,"['cs.cl', 'cs.ai', 'cs.lg']",2305.08848v1.pdf," Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their +weights are often publicly unavailable and their immense sizes make the models +difficult to be tuned with common hardware. As a result, effectively tuning +these models with large-scale supervised data can be challenging. As an +alternative, In-Context Learning (ICL) can only use a small number of +supervised examples due to context length limits. In this paper, we propose +Super In-Context Learning (SuperICL) which allows black-box LLMs to work with +locally fine-tuned smaller models, resulting in superior performance on +supervised tasks. Our experiments demonstrate that SuperICL can improve +performance beyond state-of-the-art fine-tuned models while addressing the +instability problem of in-context learning. Furthermore, SuperICL can enhance +the capabilities of smaller models, such as multilinguality and +interpretability. +" +ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning,Jingyuan Selena She,http://arxiv.org/pdf/2305.19426v1.pdf,2023-05-30,"['cs.cl', 'cs.lg']",2305.19426v1.pdf," A number of recent benchmarks seek to assess how well models handle natural +language negation. However, these benchmarks lack the controlled example +paradigms that would allow us to infer whether a model had learned how negation +morphemes semantically scope. To fill these analytical gaps, we present the +Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six +examples with up to two negations where either zero, one, or both negative +morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and +in-context learning strategies. We find that RoBERTa and DeBERTa models solve +ScoNe-NLI after many shot fine-tuning. For in-context learning, we test +InstructGPT models and find that most prompt strategies are not successful, +including those using step-by-step reasoning. To better understand this result, +we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds +negation reasoning in short narratives. Here, InstructGPT is successful, which +reveals the model can correctly reason about negation, but struggles to do so +on prompt-adapted NLI examples outside of its core pretraining regime. +" +GPT-FinRE: In-context Learning for Financial Relation Extraction using Large Language Models,Pawan Kumar Rajpoot,http://arxiv.org/pdf/2306.17519v2.pdf,2023-06-30,['cs.cl'],2306.17519v2.pdf," Relation extraction (RE) is a crucial task in natural language processing +(NLP) that aims to identify and classify relationships between entities +mentioned in text. In the financial domain, relation extraction plays a vital +role in extracting valuable information from financial documents, such as news +articles, earnings reports, and company filings. This paper describes our +solution to relation extraction on one such dataset REFinD. 
The dataset was +released along with shared task as a part of the Fourth Workshop on Knowledge +Discovery from Unstructured Data in Financial Services, co-located with SIGIR +2023. In this paper, we employed OpenAI models under the framework of +in-context learning (ICL). We utilized two retrieval strategies to find top K +relevant in-context learning demonstrations / examples from training data for a +given test example. The first retrieval mechanism, we employed, is a +learning-free dense retriever and the other system is a learning-based +retriever. We were able to achieve 3rd rank overall. Our best F1-score is +0.718. +" +Code-Style In-Context Learning for Knowledge-Based Question Answering,Zhijie Nie,http://arxiv.org/pdf/2309.04695v1.pdf,2023-09-09,"['cs.cl', 'cs.ai']",2309.04695v1.pdf," Current methods for Knowledge-Based Question Answering (KBQA) usually rely on +complex training techniques and model frameworks, leading to many limitations +in practical applications. Recently, the emergence of In-Context Learning (ICL) +capabilities in Large Language Models (LLMs) provides a simple and +training-free semantic parsing paradigm for KBQA: Given a small number of +questions and their labeled logical forms as demo examples, LLMs can understand +the task intent and generate the logic form for a new question. However, +current powerful LLMs have little exposure to logic forms during pre-training, +resulting in a high format error rate. To solve this problem, we propose a +code-style in-context learning method for KBQA, which converts the generation +process of unfamiliar logical form into the more familiar code generation +process for LLMs. Experimental results on three mainstream datasets show that +our method dramatically mitigated the formatting error problem in generating +logic forms while realizing a new SOTA on WebQSP, GrailQA, and GraphQ under the +few-shot setting. +" +Can Whisper perform speech-based in-context learning,Siyin Wang,http://arxiv.org/pdf/2309.07081v1.pdf,2023-09-13,"['eess.as', 'cs.cl', 'cs.sd']",2309.07081v1.pdf," This paper investigates the in-context learning abilities of the Whisper +automatic speech recognition (ASR) models released by OpenAI. A novel +speech-based in-context learning (SICL) approach is proposed for test-time +adaptation, which can reduce the word error rates (WERs) with only a small +number of labelled speech samples without gradient descent. Language-level +adaptation experiments using Chinese dialects showed that when applying SICL to +isolated word ASR, consistent and considerable relative WER reductions can be +achieved using Whisper models of any size on two dialects, which is on average +32.3%. A k-nearest-neighbours-based in-context example selection technique can +be applied to further improve the efficiency of SICL, which can increase the +average relative WER reduction to 36.4%. The findings are verified using +speaker adaptation or continuous speech recognition tasks, and both achieved +considerable relative WER reductions. Detailed quantitative analyses are also +provided to shed light on SICL's adaptability to phonological variances and +dialect-specific lexical nuances. +" +ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer,Arkadiy Saakyan,http://arxiv.org/pdf/2309.08583v1.pdf,2023-09-15,['cs.cl'],2309.08583v1.pdf," While state-of-the-art language models excel at the style transfer task, +current work does not address explainability of style transfer systems. 
+Explanations could be generated using large language models such as GPT-3.5 and +GPT-4, but the use of such complex systems is inefficient when smaller, widely +distributed, and transparent alternatives are available. We propose a framework +to augment and improve a formality style transfer dataset with explanations via +model distillation from ChatGPT. To further refine the generated explanations, +we propose a novel way to incorporate scarce expert human feedback using +in-context learning (ICLEF: In-Context Learning from Expert Feedback) by +prompting ChatGPT to act as a critic to its own outputs. We use the resulting +dataset of 9,960 explainable formality style transfer instances (e-GYAFC) to +show that current openly distributed instruction-tuned models (and, in some +settings, ChatGPT) perform poorly on the task, and that fine-tuning on our +high-quality dataset leads to significant improvements as shown by automatic +evaluation. In human evaluation, we show that models much smaller than ChatGPT +fine-tuned on our data align better with expert preferences. Finally, we +discuss two potential applications of models fine-tuned on the explainable +style transfer task: interpretable authorship verification and interpretable +adversarial attacks on AI-generated text detectors. +" +SALM: Speech-augmented Language Model with In-context Learning for Speech Recognition and Translation,Zhehuai Chen,http://arxiv.org/pdf/2310.09424v1.pdf,2023-10-13,"['cs.cl', 'cs.hc', 'cs.sd', 'eess.as', '68t10', 'i.2.7']",2310.09424v1.pdf," We present a novel Speech Augmented Language Model (SALM) with {\em +multitask} and {\em in-context} learning capabilities. SALM comprises a frozen +text LLM, a audio encoder, a modality adapter module, and LoRA layers to +accommodate speech input and associated task instructions. The unified SALM not +only achieves performance on par with task-specific Conformer baselines for +Automatic Speech Recognition (ASR) and Speech Translation (AST), but also +exhibits zero-shot in-context learning capabilities, demonstrated through +keyword-boosting task for ASR and AST. Moreover, {\em speech supervised +in-context training} is proposed to bridge the gap between LLM training and +downstream speech tasks, which further boosts the in-context learning ability +of speech-to-text models. Proposed model is open-sourced via NeMo toolkit. +" +Utilising a Large Language Model to Annotate Subject Metadata: A Case Study in an Australian National Research Data Catalogue,Shiwei Zhang,http://arxiv.org/pdf/2310.11318v1.pdf,2023-10-17,"['cs.cl', 'cs.ai']",2310.11318v1.pdf," In support of open and reproducible research, there has been a rapidly +increasing number of datasets made available for research. As the availability +of datasets increases, it becomes more important to have quality metadata for +discovering and reusing them. Yet, it is a common issue that datasets often +lack quality metadata due to limited resources for data curation. Meanwhile, +technologies such as artificial intelligence and large language models (LLMs) +are progressing rapidly. Recently, systems based on these technologies, such as +ChatGPT, have demonstrated promising capabilities for certain data curation +tasks. This paper proposes to leverage LLMs for cost-effective annotation of +subject metadata through the LLM-based in-context learning. Our method employs +GPT-3.5 with prompts designed for annotating subject metadata, demonstrating +promising performance in automatic metadata annotation. 
However, models based +on in-context learning cannot acquire discipline-specific rules, resulting in +lower performance in several categories. This limitation arises from the +limited contextual information available for subject inference. To the best of +our knowledge, we are introducing, for the first time, an in-context learning +method that harnesses large language models for automated subject metadata +annotation. +" +Hint-enhanced In-Context Learning wakes Large Language Models up for knowledge-intensive tasks,Yifan Wang,http://arxiv.org/pdf/2311.01949v1.pdf,2023-11-03,['cs.cl'],2311.01949v1.pdf," In-context learning (ICL) ability has emerged with the increasing scale of +large language models (LLMs), enabling them to learn input-label mappings from +demonstrations and perform well on downstream tasks. However, under the +standard ICL setting, LLMs may sometimes neglect query-related information in +demonstrations, leading to incorrect predictions. To address this limitation, +we propose a new paradigm called Hint-enhanced In-Context Learning (HICL) to +explore the power of ICL in open-domain question answering, an important form +in knowledge-intensive tasks. HICL leverages LLMs' reasoning ability to extract +query-related knowledge from demonstrations, then concatenates the knowledge to +prompt LLMs in a more explicit way. Furthermore, we track the source of this +knowledge to identify specific examples, and introduce a Hint-related Example +Retriever (HER) to select informative examples for enhanced demonstrations. We +evaluate HICL with HER on 3 open-domain QA benchmarks, and observe average +performance gains of 2.89 EM score and 2.52 F1 score on gpt-3.5-turbo, 7.62 EM +score and 7.27 F1 score on LLaMA-2-Chat-7B compared with standard setting. +" +Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?,Sewon Min,http://arxiv.org/pdf/2202.12837v2.pdf,2022-02-25,"['cs.cl', 'cs.ai']",2202.12837v2.pdf," Large language models (LMs) are able to in-context learn -- perform a new +task via inference alone by conditioning on a few input-label pairs +(demonstrations) and making predictions for new inputs. However, there has been +little understanding of how the model learns and which aspects of the +demonstrations contribute to end task performance. In this paper, we show that +ground truth demonstrations are in fact not required -- randomly replacing +labels in the demonstrations barely hurts performance on a range of +classification and multi-choice tasks, consistently over 12 different models +including GPT-3. Instead, we find that other aspects of the demonstrations are +the key drivers of end task performance, including the fact that they provide a +few examples of (1) the label space, (2) the distribution of the input text, +and (3) the overall format of the sequence. Together, our analysis provides a +new way of understanding how and why in-context learning works, while opening +up new questions about how much can be learned from large language models +through inference alone. +" +Can Foundation Models Help Us Achieve Perfect Secrecy?,Simran Arora,http://arxiv.org/pdf/2205.13722v2.pdf,2022-05-27,"['cs.lg', 'cs.cl']",2205.13722v2.pdf," A key promise of machine learning is the ability to assist users with +personal tasks. Because the personal context required to make accurate +predictions is often sensitive, we require systems that protect privacy.
A gold +standard privacy-preserving system will satisfy perfect secrecy, meaning that +interactions with the system provably reveal no private information. However, +privacy and quality appear to be in tension in existing systems for personal +tasks. Neural models typically require copious amounts of training to perform +well, while individual users typically hold a limited scale of data, so +federated learning (FL) systems propose to learn from the aggregate data of +multiple users. FL does not provide perfect secrecy, but rather practitioners +apply statistical notions of privacy -- i.e., the probability of learning +private information about a user should be reasonably low. The strength of the +privacy guarantee is governed by privacy parameters. Numerous privacy attacks +have been demonstrated on FL systems and it can be challenging to reason about +the appropriate privacy parameters for a privacy-sensitive use case. Therefore +our work proposes a simple baseline for FL, which both provides the stronger +perfect secrecy guarantee and does not require setting any privacy parameters. +We initiate the study of when and where an emerging tool in ML -- the +in-context learning abilities of recent pretrained models -- can be an +effective baseline alongside FL. We find in-context learning is competitive +with strong FL baselines on 6 of 7 popular benchmarks from the privacy +literature and a real-world case study, which is disjoint from the pretraining +data. We release our code here: https://github.com/simran-arora/focus +" +Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts,Nghia T. Le,http://arxiv.org/pdf/2210.03690v2.pdf,2022-10-07,"['cs.cl', 'cs.ai']",2210.03690v2.pdf," Anaphora resolution is an important task for information extraction across a +range of languages, text genres, and domains, motivating the need for methods +that do not require large annotated datasets. In-context learning has emerged +as a promising approach, yet there are a number of challenges in applying +in-context learning to resolve anaphora. For example, encoding a single +in-context demonstration that consists of: an anaphor, a paragraph-length +context, and a list of corresponding antecedents, requires conditioning a +language model on a long sequence of tokens, limiting the number of +demonstrations per prompt. In this paper, we present MICE (Mixtures of +In-Context Experts), which we demonstrate is effective for few-shot anaphora +resolution in scientific protocols (Tamari et al., 2021). Given only a handful +of training examples, MICE combines the predictions of hundreds of in-context +experts, yielding a 30% increase in F1 score over a competitive prompt +retrieval baseline. Furthermore, we show MICE can be used to train compact +student models without sacrificing performance. As far as we are aware, this is +the first work to present experimental results demonstrating the effectiveness +of in-context learning on the task of few-shot anaphora resolution in +scientific protocols. +" +What learning algorithm is in-context learning? Investigations with linear models,Ekin Akyürek,http://arxiv.org/pdf/2211.15661v3.pdf,2022-11-28,"['cs.lg', 'cs.cl']",2211.15661v3.pdf," Neural sequence models, especially transformers, exhibit a remarkable +capacity for in-context learning. They can construct new predictors from +sequences of labeled examples $(x, f(x))$ presented in the input without +further parameter updates. 
We investigate the hypothesis that transformer-based +in-context learners implement standard learning algorithms implicitly, by +encoding smaller models in their activations, and updating these implicit +models as new examples appear in the context. Using linear regression as a +prototypical problem, we offer three sources of evidence for this hypothesis. +First, we prove by construction that transformers can implement learning +algorithms for linear models based on gradient descent and closed-form ridge +regression. Second, we show that trained in-context learners closely match the +predictors computed by gradient descent, ridge regression, and exact +least-squares regression, transitioning between different predictors as +transformer depth and dataset noise vary, and converging to Bayesian estimators +for large widths and depths. Third, we present preliminary evidence that +in-context learners share algorithmic features with these predictors: learners' +late layers non-linearly encode weight vectors and moment matrices. These +results suggest that in-context learning is understandable in algorithmic +terms, and that (at least in the linear case) learners may rediscover standard +estimation algorithms. Code and reference implementations are released at +https://github.com/ekinakyurek/google-research/blob/master/incontext. +" +SE Factual Knowledge in Frozen Giant Code Model: A Study on FQN and its Retrieval,Qing Huang,http://arxiv.org/pdf/2212.08221v1.pdf,2022-12-16,['cs.se'],2212.08221v1.pdf," Pre-trained giant code models (PCMs) start coming into the developers' daily +practices. Understanding what types of and how much software knowledge is +packed into PCMs is the foundation for incorporating PCMs into software +engineering (SE) tasks and fully releasing their potential. In this work, we +conduct the first systematic study on the SE factual knowledge in the +state-of-the-art PCM CoPilot, focusing on APIs' Fully Qualified Names (FQNs), +the fundamental knowledge for effective code analysis, search and reuse. Driven +by FQNs' data distribution properties, we design a novel lightweight in-context +learning on Copilot for FQN inference, which does not require code compilation +as traditional methods or gradient update by recent FQN prompt-tuning. We +systematically experiment with five in-context-learning design factors to +identify the best in-context learning configuration that developers can adopt +in practice. With this best configuration, we investigate the effects of amount +of example prompts and FQN data properties on Copilot's FQN inference +capability. Our results confirm that Copilot stores diverse FQN knowledge and +can be applied for the FQN inference due to its high inference accuracy and +non-reliance on code analysis. Based on our experience interacting with +Copilot, we discuss various opportunities to improve human-CoPilot interaction +in the FQN inference task. +" +Transformers as Algorithms: Generalization and Stability in In-context Learning,Yingcong Li,http://arxiv.org/pdf/2301.07067v2.pdf,2023-01-17,"['cs.lg', 'cs.cl', 'stat.ml']",2301.07067v2.pdf," In-context learning (ICL) is a type of prompting where a transformer model +operates on a sequence of (input, output) examples and performs inference +on-the-fly. In this work, we formalize in-context learning as an algorithm +learning problem where a transformer model implicitly constructs a hypothesis +function at inference-time. 
We first explore the statistical aspects of this +abstraction through the lens of multitask learning: We obtain generalization +bounds for ICL when the input prompt is (1) a sequence of i.i.d. (input, label) +pairs or (2) a trajectory arising from a dynamical system. The crux of our +analysis is relating the excess risk to the stability of the algorithm +implemented by the transformer. We characterize when transformer/attention +architecture provably obeys the stability condition and also provide empirical +verification. For generalization on unseen tasks, we identify an inductive bias +phenomenon in which the transfer learning risk is governed by the task +complexity and the number of MTL tasks in a highly predictable manner. Finally, +we provide numerical evaluations that (1) demonstrate transformers can indeed +implement near-optimal algorithms on classical regression problems with i.i.d. +and dynamic data, (2) provide insights on stability, and (3) verify our +theoretical predictions. +" +Adaptive Machine Translation with Large Language Models,Yasmin Moslem,http://arxiv.org/pdf/2301.13294v3.pdf,2023-01-30,['cs.cl'],2301.13294v3.pdf," Consistency is a key requirement of high-quality translation. It is +especially important to adhere to pre-approved terminology and adapt to +corrected translations in domain-specific projects. Machine translation (MT) +has achieved significant progress in the area of domain adaptation. However, +real-time adaptation remains challenging. Large-scale language models (LLMs) +have recently shown interesting capabilities of in-context learning, where they +learn to replicate certain input-output text generation patterns, without +further fine-tuning. By feeding an LLM at inference time with a prompt that +consists of a list of translation pairs, it can then simulate the domain and +style characteristics. This work aims to investigate how we can utilize +in-context learning to improve real-time adaptive MT. Our extensive experiments +show promising results at translation time. For example, LLMs can adapt to a +set of in-domain sentence pairs and/or terminology while translating a new +sentence. We observe that the translation quality with few-shot in-context +learning can surpass that of strong encoder-decoder MT systems, especially for +high-resource languages. Moreover, we investigate whether we can combine MT +from strong encoder-decoder models with fuzzy matches, which can further +improve translation quality, especially for less supported languages. We +conduct our experiments across five diverse language pairs, namely +English-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French +(EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES). +" +ScatterShot: Interactive In-context Example Curation for Text Transformation,Tongshuang Wu,http://arxiv.org/pdf/2302.07346v1.pdf,2023-02-14,"['cs.hc', 'cs.cl']",2302.07346v1.pdf," The in-context learning capabilities of LLMs like GPT-3 allow annotators to +customize an LLM to their specific tasks with a small number of examples. +However, users tend to include only the most obvious patterns when crafting +examples, resulting in underspecified in-context functions that fall short on +unseen cases. Further, it is hard to know when ""enough"" examples have been +included even for known patterns. In this work, we present ScatterShot, an +interactive system for building high-quality demonstration sets for in-context +learning. 
ScatterShot iteratively slices unlabeled data into task-specific +patterns, samples informative inputs from underexplored or not-yet-saturated +slices in an active learning manner, and helps users label more efficiently +with the help of an LLM and the current example set. In simulation studies on +two text perturbation scenarios, ScatterShot sampling improves the resulting +few-shot functions by 4-5 percentage points over random sampling, with less +variance as more examples are added. In a user study, ScatterShot greatly helps +users in covering different patterns in the input space and labeling in-context +examples more efficiently, resulting in better in-context learning and less +user effort. +" +Resources and Few-shot Learners for In-context Learning in Slavic Languages,Michal Štefánik,http://arxiv.org/pdf/2304.01922v1.pdf,2023-04-04,['cs.cl'],2304.01922v1.pdf," Despite the rapid recent progress in creating accurate and compact in-context +learners, most recent work focuses on in-context learning (ICL) for tasks in +English. However, the ability to interact with users of languages outside +English presents a great potential for broadening the applicability of language +technologies to non-English speakers. + In this work, we collect the infrastructure necessary for training and +evaluation of ICL in a selection of Slavic languages: Czech, Polish, and +Russian. We link a diverse set of datasets and cast these into a unified +instructional format through a set of transformations and newly-crafted +templates written purely in target languages. Using the newly-curated dataset, +we evaluate a set of the most recent in-context learners and compare their +results to the supervised baselines. Finally, we train, evaluate and publish a +set of in-context learning models that we train on the collected resources and +compare their performance to previous work. + We find that ICL models tuned in English are also able to learn some tasks +from non-English contexts, but multilingual instruction fine-tuning +consistently improves the ICL ability. We also find that the massive multitask +training can be outperformed by single-task training in the target language, +uncovering the potential for specializing in-context learners to the +language(s) of their application. +" +Boosting Theory-of-Mind Performance in Large Language Models via Prompting,Shima Rahimi Moghaddam,http://arxiv.org/pdf/2304.11490v3.pdf,2023-04-22,"['cs.ai', 'cs.cl']",2304.11490v3.pdf," Large language models (LLMs) excel in many tasks in 2023, but they still face +challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require +understanding agents' beliefs, goals, and mental states, are essential for +common-sense reasoning involving humans, making it crucial to enhance LLM +performance in this area. This study measures the ToM performance of GPT-4 and +three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates +the effectiveness of in-context learning in improving their ToM comprehension. +We evaluated prompts featuring two-shot chain of thought reasoning and +step-by-step thinking instructions. We found that LLMs trained with +Reinforcement Learning from Human Feedback (RLHF) (all models excluding +Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed +best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell +short of the 87% human accuracy on the test set. 
However, when supplied with +prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM +accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate +prompting enhances LLM ToM reasoning, and they underscore the context-dependent +nature of LLM cognitive capacities. +" +Unified Demonstration Retriever for In-Context Learning,Xiaonan Li,http://arxiv.org/pdf/2305.04320v2.pdf,2023-05-07,['cs.cl'],2305.04320v2.pdf," In-context learning is a new learning paradigm where a language model +conditions on a few input-output pairs (demonstrations) and a test input, and +directly outputs the prediction. It has been shown highly dependent on the +provided demonstrations and thus promotes the research of demonstration +retrieval: given a test input, relevant examples are retrieved from the +training set to serve as informative demonstrations for in-context learning. +While previous works focus on training task-specific retrievers for several +tasks separately, these methods are often hard to transfer and scale on various +tasks, and separately trained retrievers incur a lot of parameter storage and +deployment cost. In this paper, we propose Unified Demonstration Retriever +(\textbf{UDR}), a single model to retrieve demonstrations for a wide range of +tasks. To train UDR, we cast various tasks' training signals into a unified +list-wise ranking formulation by language model's feedback. Then we propose a +multi-task list-wise ranking training framework, with an iterative mining +strategy to find high-quality candidates, which can help UDR fully incorporate +various tasks' signals. Experiments on 30+ tasks across 13 task families and +multiple data domains show that UDR significantly outperforms baselines. +Further analyses show the effectiveness of each proposed component and UDR's +strong ability in various scenarios including different LMs (1.3B - 175B), +unseen datasets, varying demonstration quantities, etc. +" +Efficient Prompting via Dynamic In-Context Learning,Wangchunshu Zhou,http://arxiv.org/pdf/2305.11170v1.pdf,2023-05-18,"['cs.cl', 'cs.ai', 'cs.lg']",2305.11170v1.pdf," The primary way of building AI applications is shifting from training +specialist models to prompting generalist models. A common practice for +prompting generalist models, often referred to as in-context learning, is to +append a few examples (demonstrations) to the prompt to help the model better +understand the task. While effective, in-context learning can be inefficient +because it makes the input prompt much longer, consuming valuable space in the +context window and leading to larger computational costs. In this paper, we +propose DynaICL, a recipe for efficient prompting with black-box generalist +models that dynamically allocate in-context examples according to the input +complexity and the computational budget. To achieve this, we train a meta +controller that predicts the number of in-context examples suitable for the +generalist model to make a good prediction based on the performance-efficiency +trade-off for a specific input. We then dynamically allocate the number of +demonstrations for an input according to predictions from the meta controller +and the given computation budget. Experimental results show that dynamic +example allocation helps achieve a better performance-efficiency trade-off in +two practical settings where computational resources or the required +performance is constrained. 
Specifically, DynaICL saves up to 46% token budget +compared to the common practice that allocates the same number of in-context +examples to each input. We also find that a meta controller trained on a +certain backbone model and tasks can successfully generalize to unseen models +and tasks. +" +Post Hoc Explanations of Language Models Can Improve Language Models,Satyapriya Krishna,http://arxiv.org/pdf/2305.11426v2.pdf,2023-05-19,"['cs.cl', 'cs.ai']",2305.11426v2.pdf," Large Language Models (LLMs) have demonstrated remarkable capabilities in +performing complex tasks. Moreover, recent research has shown that +incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) +during in-context learning can significantly enhance the performance of these +models, particularly on tasks that require reasoning capabilities. However, +incorporating such rationales poses challenges in terms of scalability as this +requires a high degree of human involvement. In this work, we present a novel +framework, Amplifying Model Performance by Leveraging In-Context Learning with +Post Hoc Explanations (AMPLIFY), which addresses the aforementioned challenges +by automating the process of rationale generation. To this end, we leverage +post hoc explanation methods which output attribution scores (explanations) +capturing the influence of each of the input features on model predictions. +More specifically, we construct automated natural language rationales that +embed insights from post hoc explanations to provide corrective signals to +LLMs. Extensive experimentation with real-world datasets demonstrates that our +framework, AMPLIFY, leads to prediction accuracy improvements of about 10-25% +over a wide range of tasks, including those where prior approaches which rely +on human-annotated rationales such as Chain-of-Thought prompting fall short. +Our work makes one of the first attempts at highlighting the potential of post +hoc explanations as valuable tools for enhancing the effectiveness of LLMs. +Furthermore, we conduct additional empirical analyses and ablation studies to +demonstrate the impact of each of the components of AMPLIFY, which, in turn, +leads to critical insights for refining in-context learning. +" +Explaining Emergent In-Context Learning as Kernel Regression,Chi Han,http://arxiv.org/pdf/2305.12766v2.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.12766v2.pdf," Large language models (LLMs) have initiated a paradigm shift in transfer +learning. In contrast to the classic pretraining-then-finetuning procedure, in +order to use LLMs for downstream prediction tasks, one only needs to provide a +few demonstrations, known as in-context examples, without adding more or +updating existing model parameters. This in-context learning (ICL) capability +of LLMs is intriguing, and it is not yet fully understood how pretrained LLMs +acquire such capabilities. In this paper, we investigate the reason why a +transformer-based language model can accomplish in-context learning after +pre-training on a general language corpus by proposing one hypothesis that LLMs +can simulate kernel regression with internal representations when faced with +in-context examples. More concretely, we first prove that Bayesian inference on +in-context prompts can be asymptotically understood as kernel regression $\hat +y = \sum_i y_i K(x, x_i)/\sum_i K(x, x_i)$ as the number of in-context +demonstrations grows. Then, we empirically investigate the in-context behaviors +of language models. 
We find that during ICL, the attention and hidden features +in LLMs match the behaviors of a kernel regression. Finally, our theory +provides insights into multiple phenomena observed in the ICL field: why +retrieving demonstrative samples similar to test samples can help, why ICL +performance is sensitive to the output formats, and why ICL accuracy benefits +from selecting in-distribution and representative samples. +" +RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning,Alexander Scarlatos,http://arxiv.org/pdf/2305.14502v1.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14502v1.pdf," Many recent developments in large language models focus on prompting them to +perform specific tasks. One effective prompting method is in-context learning, +where the model performs a (possibly new) generation/prediction task given one +(or more) examples. Past work has shown that the choice of examples can make a +large impact on task performance. However, finding good examples is not +straightforward since the definition of a representative group of examples can +vary greatly depending on the task. While there are many existing methods for +selecting in-context examples, they generally score examples independently, +ignoring the dependency between them and the order in which they are provided +to the large language model. In this work, we propose Retrieval for In-Context +Learning (RetICL), a learnable method for modeling and optimally selecting +examples sequentially for in-context learning. We frame the problem of +sequential example selection as a Markov decision process, design an example +retriever model using an LSTM, and train it using proximal policy optimization +(PPO). We validate RetICL on math problem solving datasets and show that it +outperforms both heuristic and learnable baselines, and achieves +state-of-the-art accuracy on the TabMWP dataset. We also use case studies to +show that RetICL implicitly learns representations of math problem solving +strategies. +" +In-Context Learning for Attention Scheme: from Single Softmax Regression to Multiple Softmax Regression via a Tensor Trick,Yeqi Gao,http://arxiv.org/pdf/2307.02419v1.pdf,2023-07-05,['cs.lg'],2307.02419v1.pdf," Large language models (LLMs) have brought significant and transformative +changes in human society. These models have demonstrated remarkable +capabilities in natural language understanding and generation, leading to +various advancements and impacts across several domains. + We consider the in-context learning under two formulation for attention +related regression in this work. Given matrices $A_1 \in \mathbb{R}^{n \times +d}$, and $A_2 \in \mathbb{R}^{n \times d}$ and $B \in \mathbb{R}^{n \times n}$, +the purpose is to solve some certain optimization problems: Normalized version +$\min_{X} \| D(X)^{-1} \exp(A_1 X A_2^\top) - B \|_F^2$ and Rescaled version +$\| \exp(A_1 X A_2^\top) - D(X) \cdot B \|_F^2$. Here $D(X) := \mathrm{diag}( +\exp(A_1 X A_2^\top) {\bf 1}_n )$. + Our regression problem shares similarities with previous studies on +softmax-related regression. Prior research has extensively investigated +regression techniques related to softmax regression: Normalized version $\| +\langle \exp(Ax) , {\bf 1}_n \rangle^{-1} \exp(Ax) - b \|_2^2$ and Resscaled +version $\| \exp(Ax) - \langle \exp(Ax), {\bf 1}_n \rangle b \|_2^2 $ + In contrast to previous approaches, we adopt a vectorization technique to +address the regression problem in matrix formulation. 
This approach expands the +dimension from $d$ to $d^2$, resembling the formulation of the regression +problem mentioned earlier. + Upon completing the lipschitz analysis of our regression function, we have +derived our main result concerning in-context learning. +" +SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design,Carl Edwards,http://arxiv.org/pdf/2307.11694v2.pdf,2023-06-19,"['cs.ai', 'cs.lg', 'q-bio.bm', 'q-bio.mn']",2307.11694v2.pdf," Predicting synergistic drug combinations can help accelerate discovery of +cancer treatments, particularly therapies personalized to a patient's specific +tumor via biopsied cells. In this paper, we propose a novel setting and models +for in-context drug synergy learning. We are given a small ""personalized +dataset"" of 10-20 drug synergy relationships in the context of specific cancer +cell targets. Our goal is to predict additional drug synergy relationships in +that context. Inspired by recent work that pre-trains a GPT language model (LM) +to ""in-context learn"" common function classes, we devise novel pre-training +schemes that enable a GPT model to in-context learn ""drug synergy functions"". +Our model -- which does not use any textual corpora, molecular fingerprints, +protein interaction or any other domain-specific knowledge -- is able to +achieve competitive results. We further integrate our in-context approach with +a genetic algorithm to optimize model prompts and select synergy candidates to +test after conducting a patient biopsy. Finally, we explore a novel task of +inverse drug design which can potentially enable the design of drugs that +synergize specifically to target a given patient's ""personalized dataset"". Our +findings can potentially have an important impact on precision cancer medicine, +and also raise intriguing questions on non-textual pre-training for LMs. +" +OUTFOX: LLM-generated Essay Detection through In-context Learning with Adversarially Generated Examples,Ryuto Koike,http://arxiv.org/pdf/2307.11729v2.pdf,2023-07-21,['cs.cl'],2307.11729v2.pdf," Large Language Models (LLMs) have achieved human-level fluency in text +generation, making it difficult to distinguish between human-written and +LLM-generated texts. This poses a growing risk of misuse of LLMs and demands +the development of detectors to identify LLM-generated texts. However, existing +detectors lack robustness against attacks: they degrade detection accuracy by +simply paraphrasing LLM-generated texts. Furthermore, a malicious user might +attempt to deliberately evade the detectors based on detection results, but +this has not been assumed in previous studies. In this paper, we propose +OUTFOX, a framework that improves the robustness of LLM-generated-text +detectors by allowing both the detector and the attacker to consider each +other's output. In this framework, the attacker uses the detector's prediction +labels as examples for in-context learning and adversarially generates essays +that are harder to detect, while the detector uses the adversarially generated +essays as examples for in-context learning to learn to detect essays from a +strong attacker. Experiments in the domain of student essays show that the +proposed detector improves the detection performance on the attacker-generated +texts by up to +41.3 points in F1-score. Furthermore, the proposed detector +shows a state-of-the-art detection performance: up to 96.9 points in F1-score, +beating existing detectors on non-attacked texts. 
Finally, the proposed +attacker drastically degrades the performance of detectors by up to -57.0 +points F1-score, massively outperforming the baseline paraphrasing method for +evading detection. +" +Metric-Based In-context Learning: A Case Study in Text Simplification,Subha Vadlamannati,http://arxiv.org/pdf/2307.14632v1.pdf,2023-07-27,"['cs.cl', 'cs.ai']",2307.14632v1.pdf," In-context learning (ICL) for large language models has proven to be a +powerful approach for many natural language processing tasks. However, +determining the best method to select examples for ICL is nontrivial as the +results can vary greatly depending on the quality, quantity, and order of +examples used. In this paper, we conduct a case study on text simplification +(TS) to investigate how to select the best and most robust examples for ICL. We +propose Metric-Based in-context Learning (MBL) method that utilizes commonly +used TS metrics such as SARI, compression ratio, and BERT-Precision for +selection. Through an extensive set of experiments with various-sized GPT +models on standard TS benchmarks such as TurkCorpus and ASSET, we show that +examples selected by the top SARI scores perform the best on larger models such +as GPT-175B, while the compression ratio generally performs better on smaller +models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is +generally robust to example orderings and out-of-domain test sets, and +outperforms strong baselines and state-of-the-art finetuned language models. +Finally, we show that the behaviour of large GPT models can be implicitly +controlled by the chosen metric. Our research provides a new framework for +selecting examples in ICL, and demonstrates its effectiveness in text +simplification tasks, breaking new ground for more accurate and efficient NLG +systems. +" +HICL: Hashtag-Driven In-Context Learning for Social Media Natural Language Understanding,Hanzhuo Tan,http://arxiv.org/pdf/2308.09985v1.pdf,2023-08-19,['cs.cl'],2308.09985v1.pdf," Natural language understanding (NLU) is integral to various social media +applications. However, existing NLU models rely heavily on context for semantic +learning, resulting in compromised performance when faced with short and noisy +social media content. To address this issue, we leverage in-context learning +(ICL), wherein language models learn to make inferences by conditioning on a +handful of demonstrations to enrich the context and propose a novel +hashtag-driven in-context learning (HICL) framework. Concretely, we pre-train a +model #Encoder, which employs #hashtags (user-annotated topic labels) to drive +BERT-based pre-training through contrastive learning. Our objective here is to +enable #Encoder to gain the ability to incorporate topic-related semantic +information, which allows it to retrieve topic-related posts to enrich contexts +and enhance social media NLU with noisy contexts. To further integrate the +retrieved context with the source text, we employ a gradient-based method to +identify trigger terms useful in fusing information from both sources. For +empirical studies, we collected 45M tweets to set up an in-context NLU +benchmark, and the experimental results on seven downstream tasks show that +HICL substantially advances the previous state-of-the-art results. 
Furthermore, +we conducted extensive analyses and found that: (1) combining source input with +a top-retrieved post from #Encoder is more effective than using semantically +similar posts; (2) trigger words can largely benefit in merging context from +the source and retrieved posts. +" +Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning,Yuchen Yang,http://arxiv.org/pdf/2310.04782v1.pdf,2023-10-07,['cs.cl'],2310.04782v1.pdf," In recent years, large-scale language models (LLMs) have gained attention for +their impressive text generation capabilities. However, these models often face +the challenge of ""hallucination,"" which undermines their reliability. In this +study, we introduce an uncertainty-aware in-context learning framework to +empower the model to enhance or reject its output in response to uncertainty. +Human-defined methods for estimating uncertainty typically assume that +""uncertainty is lower when the model's response is correct compared to when it +is incorrect."" However, setting a precise threshold to distinguish correctness +is challenging. Therefore, we introduce uncertainty information as an +intermediary variable that implicitly influences the model's behavior. Our +innovative uncertainty-aware in-context learning framework involves fine-tuning +the LLM using a calibration dataset. Our aim is to improve the model's +responses by filtering out answers with high uncertainty while considering the +model's knowledge limitations. We evaluate the model's knowledge by examining +multiple responses to the same question for the presence of a correct answer. +When the model lacks relevant knowledge, the response should indicate that the +question cannot be answered. Conversely, when the model has relevant knowledge, +the response should provide the correct answer. Extensive experiments confirm +the effectiveness of our framework, leading to two key findings. First, the +logit output values of the LLM partly reflect inherent uncertainty. Second, our +model autonomously recognizes uncertainty, resulting in improved responses. +" +In-Context Convergence of Transformers,Yu Huang,http://arxiv.org/pdf/2310.05249v1.pdf,2023-10-08,"['cs.lg', 'cs.ai', 'math.oc', 'stat.ml']",2310.05249v1.pdf," Transformers have recently revolutionized many domains in modern machine +learning and one salient discovery is their remarkable in-context learning +capability, where models can solve an unseen task by utilizing task-specific +prompts without further parameters fine-tuning. This also inspired recent +theoretical studies aiming to understand the in-context learning mechanism of +transformers, which however focused only on linear transformers. In this work, +we take the first step toward studying the learning dynamics of a one-layer +transformer with softmax attention trained via gradient descent in order to +in-context learn linear function classes. We consider a structured data model, +where each token is randomly sampled from a set of feature vectors in either +balanced or imbalanced fashion. For data with balanced features, we establish +the finite-time convergence guarantee with near-zero prediction error by +navigating our analysis over two phases of the training dynamics of the +attention map.
More notably, for data with imbalanced features, we show that +the learning dynamics take a stage-wise convergence process, where the +transformer first converges to a near-zero prediction error for the query +tokens of dominant features, and then converges later to a near-zero prediction +error for the query tokens of under-represented features, respectively via one +and four training phases. Our proof features new techniques for analyzing the +competing strengths of two types of attention weights, the change of which +determines different training phases. +" +Large Language Model-Aware In-Context Learning for Code Generation,Jia Li,http://arxiv.org/pdf/2310.09748v1.pdf,2023-10-15,"['cs.se', 'cs.cl']",2310.09748v1.pdf," Large language models (LLMs) have shown impressive in-context learning (ICL) +ability in code generation. LLMs take a prompt consisting of requirement-code +examples and a new requirement as input, and output new programs. Existing +studies have found that ICL is highly dominated by the examples and thus arises +research on example selection. However, existing approaches randomly select +examples or only consider the textual similarity of requirements to retrieve, +leading to sub-optimal performance. In this paper, we propose a novel +learning-based selection approach named LAIL (LLM-Aware In-context Learning) +for code generation. Given a candidate example, we exploit LLMs themselves to +estimate it by considering the generation probabilities of ground-truth +programs given a requirement and the example. We then label candidate examples +as positive or negative through the probability feedback. Based on the labeled +data, we import a contrastive learning objective to train an effective +retriever that acquires the preference of LLMs in code generation. We apply +LAIL to three LLMs and evaluate it on three representative datasets (e.g., +MBJP, MBPP, and MBCPP). LAIL outperforms the state-of-the-art baselines by +11.58%, 6.89%, and 5.07% on CodeGen, and 4.38%, 2.85%, and 2.74% on GPT-3.5 in +terms of Pass@1, respectively. +" +Two-stage LLM Fine-tuning with Less Specialization and More Generalization,Yihan Wang,http://arxiv.org/pdf/2211.00635v2.pdf,2022-11-01,"['cs.cl', 'cs.lg']",2211.00635v2.pdf," Pretrained large language models (LLMs) are general purpose problem solvers +applicable to a diverse set of tasks with prompts. They can be further improved +towards a specific task by fine-tuning on a specialized dataset. However, +fine-tuning usually makes the model narrowly specialized on this dataset with +reduced general in-context learning performances, which is undesirable whenever +the fine-tuned model needs to handle additional tasks where no fine-tuning data +is available. In this work, we first demonstrate that fine-tuning on a single +task indeed decreases LLMs' general in-context learning performance. We +discover one important cause of such forgetting, format specialization, where +the model overfits to the format of the fine-tuned task. We further show that +format specialization happens at the very beginning of fine-tuning. To solve +this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet +effective two-stage fine-tuning framework that reduces format specialization +and improves generalization. ProMoT offloads task-specific format learning into +additional and removable parameters by first doing prompt tuning and then +fine-tuning the model itself with this soft prompt attached.
With experiments +on several fine-tuning tasks and 8 in-context evaluation tasks, we show that +ProMoT achieves comparable performance on fine-tuned tasks to standard +fine-tuning, but with much less loss of in-context learning performances across +a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can +even enhance generalization on in-context learning tasks that are semantically +related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly +improves performance on other language pairs, and ProMoT on NLI improves +performance on summarization. Experiments also show that ProMoT can improve the +generalization performance of multi-task training. +" +On the Relation between Sensitivity and Accuracy in In-context Learning,Yanda Chen,http://arxiv.org/pdf/2209.07661v2.pdf,2022-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2209.07661v2.pdf," In-context learning (ICL) suffers from oversensitivity to the prompt, making +it unreliable in real-world scenarios. We study the sensitivity of ICL with +respect to multiple perturbation types. First, we find that label bias obscures +the true sensitivity, and therefore prior work may have significantly +underestimated ICL sensitivity. Second, we observe a strong negative +correlation between ICL sensitivity and accuracy: predictions sensitive to +perturbations are less likely to be correct. Motivated by these findings, we +propose \textsc{SenSel}, a few-shot selective prediction method that abstains +from sensitive predictions. Experiments on ten classification datasets show +that \textsc{SenSel} consistently outperforms two commonly used +confidence-based and entropy-based baselines on abstention decisions. +" +WinoDict: Probing language models for in-context word acquisition,Julian Martin Eisenschlos,http://arxiv.org/pdf/2209.12153v1.pdf,2022-09-25,"['cs.cl', 'cs.ai']",2209.12153v1.pdf," We introduce a new in-context learning paradigm to measure Large Language +Models' (LLMs) ability to learn novel words during inference. In particular, we +rewrite Winograd-style co-reference resolution problems by replacing the key +concept word with a synthetic but plausible word that the model must understand +to complete the task. Solving this task requires the model to make use of the +dictionary definition of the new word given in the prompt. This benchmark +addresses word acquisition, one important aspect of the diachronic degradation +known to afflict LLMs. As LLMs are frozen in time at the moment they are +trained, they are normally unable to reflect the way language changes over +time. We show that the accuracy of LLMs compared to the original Winograd tasks +decreases radically in our benchmark, thus identifying a limitation of current +models and providing a benchmark to measure future improvements in LLMs ability +to do in-context learning. +" +Data Curation Alone Can Stabilize In-context Learning,Ting-Yun Chang,http://arxiv.org/pdf/2212.10378v2.pdf,2022-12-20,['cs.cl'],2212.10378v2.pdf," In-context learning (ICL) enables large language models (LLMs) to perform new +tasks by prompting them with a sequence of training examples. However, it is +known that ICL is very sensitive to the choice of training examples: randomly +sampling examples from a training set leads to high variance in performance. In +this paper, we show that carefully curating a subset of training data greatly +stabilizes ICL performance without any other changes to the ICL algorithm +(e.g., prompt retrieval or calibration).
We introduce two methods to choose +training subsets -- both score training examples individually, then select the +highest-scoring ones. CondAcc scores a training example by its average dev-set +ICL accuracy when combined with random training examples, while Datamodels +learns linear regressors that estimate how the presence of each training +example influences LLM outputs. Across five tasks and two LLMs, sampling from +stable subsets selected by CondAcc and Datamodels improves average accuracy +over sampling from the entire training set by 7.7% and 6.3%, respectively. +Surprisingly, the stable subset examples are not especially diverse in content +or low in perplexity, in contrast with other work suggesting that diversity and +perplexity are important when prompting LLMs. +" +A Survey on In-context Learning,Qingxiu Dong,http://arxiv.org/pdf/2301.00234v3.pdf,2022-12-31,"['cs.cl', 'cs.ai']",2301.00234v3.pdf," With the increasing ability of large language models (LLMs), in-context +learning (ICL) has become a new paradigm for natural language processing (NLP), +where LLMs make predictions only based on contexts augmented with a few +examples. It has been a new trend to explore ICL to evaluate and extrapolate +the ability of LLMs. In this paper, we aim to survey and summarize the progress +and challenges of ICL. We first present a formal definition of ICL and clarify +its correlation to related studies. Then, we organize and discuss advanced +techniques, including training strategies, demonstration designing strategies, +as well as related analysis. Finally, we discuss the challenges of ICL and +provide potential directions for further research. We hope that our work can +encourage more research on uncovering how ICL works and improving ICL. +" +Using In-Context Learning to Improve Dialogue Safety,Nicholas Meade,http://arxiv.org/pdf/2302.00871v3.pdf,2023-02-02,['cs.cl'],2302.00871v3.pdf," While large neural-based conversational models have become increasingly +proficient dialogue agents, recent work has highlighted safety issues with +these systems. For example, these systems can be goaded into generating toxic +content, which often perpetuates social biases or stereotypes. We investigate a +retrieval-based method for reducing bias and toxicity in responses from +chatbots. It uses in-context learning to steer a model towards safer +generations. Concretely, to generate a response to an unsafe dialogue context, +we retrieve demonstrations of safe responses to similar dialogue contexts. We +find our method performs competitively with strong baselines without requiring +training. For instance, using automatic evaluation, we find our best fine-tuned +baseline only generates safe responses to unsafe dialogue contexts from +DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking +procedure which can further improve response safeness. +" +Towards Few-Shot Identification of Morality Frames using In-Context Learning,Shamik Roy,http://arxiv.org/pdf/2302.02029v1.pdf,2023-02-03,['cs.cl'],2302.02029v1.pdf," Data scarcity is a common problem in NLP, especially when the annotation +pertains to nuanced socio-linguistic concepts that require specialized +knowledge. As a result, few-shot identification of these concepts is desirable. +Few-shot in-context learning using pre-trained Large Language Models (LLMs) has +been recently applied successfully in many NLP tasks. 
In this paper, we study +few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et +al., 2021), using LLMs. Morality frames are a representation framework that +provides a holistic view of the moral sentiment expressed in text, identifying +the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of +granularity, the moral sentiment expressed towards the entities mentioned in +the text. Previous studies relied on human annotation to identify morality +frames in text which is expensive. In this paper, we propose prompting-based +approaches using pretrained Large Language Models for identification of +morality frames, relying only on few-shot exemplars. We compare our models' +performance with few-shot RoBERTa and found promising results. +" +OpenICL: An Open-Source Framework for In-context Learning,Zhenyu Wu,http://arxiv.org/pdf/2303.02913v1.pdf,2023-03-06,['cs.cl'],2303.02913v1.pdf," In recent years, In-context Learning (ICL) has gained increasing attention +and emerged as the new paradigm for large language model (LLM) evaluation. +Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained +models to unseen tasks without any parameter updates. However, the +implementation of ICL is sophisticated due to the diverse retrieval and +inference methods involved, as well as the varying pre-processing requirements +for different models, datasets, and tasks. A unified and flexible framework for +ICL is urgently needed to ease the implementation of the aforementioned +components. To facilitate ICL research, we introduce OpenICL, an open-source +toolkit for ICL and LLM evaluation. OpenICL is research-friendly with a highly +flexible architecture that users can easily combine different components to +suit their needs. It also provides various state-of-the-art retrieval and +inference methods to streamline the process of adapting ICL to cutting-edge +research. The effectiveness of OpenICL has been validated on a wide range of +NLP tasks, including classification, QA, machine translation, and semantic +parsing. As a side-product, we found OpenICL to be an efficient yet robust tool +for LLMs evaluation. OpenICL is released at +https://github.com/Shark-NLP/OpenICL +" +The Scope of In-Context Learning for the Extraction of Medical Temporal Constraints,Parker Seegmiller,http://arxiv.org/pdf/2303.09366v2.pdf,2023-03-16,"['cs.cl', 'cs.lg']",2303.09366v2.pdf," Medications often impose temporal constraints on everyday patient activity. +Violations of such medical temporal constraints (MTCs) lead to a lack of +treatment adherence, in addition to poor health outcomes and increased +healthcare expenses. These MTCs are found in drug usage guidelines (DUGs) in +both patient education materials and clinical texts. Computationally +representing MTCs in DUGs will advance patient-centric healthcare applications +by helping to define safe patient activity patterns. We define a novel taxonomy +of MTCs found in DUGs and develop a novel context-free grammar (CFG) based +model to computationally represent MTCs from unstructured DUGs. Additionally, +we release three new datasets with a combined total of N = 836 DUGs labeled +with normalized MTCs. We develop an in-context learning (ICL) solution for +automatically extracting and normalizing MTCs found in DUGs, achieving an +average F1 score of 0.62 across all datasets. Finally, we rigorously +investigate ICL model performance against a baseline model, across datasets and +MTC types, and through in-depth error analysis. 
+" +How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?,Xin Xu,http://arxiv.org/pdf/2305.01555v4.pdf,2023-05-02,"['cs.cl', 'cs.ai', 'cs.db', 'cs.ir', 'cs.lg']",2305.01555v4.pdf," Scaling language models have revolutionized widespread NLP tasks, yet little +comprehensively explored few-shot relation extraction with large language +models. In this paper, we investigate principal methodologies, in-context +learning and data generation, for few-shot relation extraction via GPT-3.5 +through exhaustive experiments. To enhance few-shot performance, we further +propose task-related instructions and schema-constrained data generation. We +observe that in-context learning can achieve performance on par with previous +prompt learning approaches, and data generation with the large language model +can boost previous solutions to obtain new state-of-the-art few-shot results on +four widely-studied relation extraction datasets. We hope our work can inspire +future research for the capabilities of large language models in few-shot +relation extraction. Code is available in +https://github.com/zjunlp/DeepKE/tree/main/example/llm. +" +GPT-RE: In-context Learning for Relation Extraction using Large Language Models,Zhen Wan,http://arxiv.org/pdf/2305.02105v2.pdf,2023-05-03,['cs.cl'],2305.02105v2.pdf," In spite of the potential for ground-breaking achievements offered by large +language models (LLMs) (e.g., GPT-3), they still lag significantly behind +fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE). +This is due to the two major shortcomings of LLMs in RE: (1) low relevance +regarding entity and relation in retrieved demonstrations for in-context +learning; and (2) the strong inclination to wrongly classify NULL examples into +other pre-defined labels. + In this paper, we propose GPT-RE to bridge the gap between LLMs and +fully-supervised baselines. GPT-RE successfully addresses the aforementioned +issues by (1) incorporating task-specific entity representations in +demonstration retrieval; and (2) enriching the demonstrations with gold +label-induced reasoning logic. We evaluate GPT-RE on four widely-used RE +datasets, and observe that GPT-RE achieves improvements over not only existing +GPT-3 baselines, but also fully-supervised baselines. Specifically, GPT-RE +achieves SOTA performances on the Semeval and SciERC datasets, and competitive +performances on the TACRED and ACE05 datasets. +" +GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning,Xiangru Tang,http://arxiv.org/pdf/2305.05001v1.pdf,2023-05-08,['cs.cl'],2305.05001v1.pdf," This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared +task, encompassing both subtask A and subtask B. We approach the task as a +dialogue summarization problem and implement two distinct pipelines: (a) a +fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b) +few-shot in-context learning (ICL) using a large language model, GPT-4. Both +methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1 +(deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421, +respectively. Additionally, we predict the associated section headers using +RoBERTa and SciBERT based classification models. Our team ranked fourth among +all teams, while each team is allowed to submit three runs as part of their +submission. 
We also utilize expert annotations to demonstrate that the notes +generated through the ICL GPT-4 are better than all other baselines. The code +for our submission is available. +" +Can We Edit Factual Knowledge by In-Context Learning?,Ce Zheng,http://arxiv.org/pdf/2305.12740v1.pdf,2023-05-22,['cs.cl'],2305.12740v1.pdf," Previous studies have shown that large language models (LLMs) like GPTs store +massive factual knowledge in their parameters. However, the stored knowledge +could be false or out-dated. Traditional knowledge editing methods refine LLMs +via fine-tuning on texts containing specific knowledge. However, with the +increasing scales of LLMs, these gradient-based approaches bring large +computation costs. The trend of model-as-a-service also makes it impossible to +modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new +paradigm based on demonstration contexts without parameter updating, we explore +whether ICL can edit factual knowledge. To answer this question, we give a +comprehensive empirical study of ICL strategies. Experiments show that +in-context knowledge editing (IKE), without any gradient and parameter +updating, achieves a competitive success rate compared to gradient-based +methods on GPT-J (6B) but with much fewer side effects, including less +over-editing on similar but unrelated facts and less knowledge forgetting on +previously stored knowledge. We also apply the method to larger LMs with tens +or hundreds of parameters like OPT-175B, which shows the scalability of our +method. The code is available at https://github.com/Zce1112zslx/IKE. +" +Concept-aware Training Improves In-context Learning Ability of Language Models,Michal Štefánik,http://arxiv.org/pdf/2305.13775v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13775v1.pdf," Many recent language models (LMs) of Transformers family exhibit so-called +in-context learning (ICL) ability, manifested in the LMs' ability to modulate +their function by a task described in a natural language input. Previous work +curating these models assumes that ICL emerges from vast over-parametrization +or the scale of multi-task training. However, a complementary branch of recent +theoretical work attributes ICL emergence to specific properties of training +data and creates functional in-context learners in small-scale, synthetic +settings. + Inspired by recent findings on data properties driving the emergence of ICL, +we propose a method to create LMs able to better utilize the in-context +information, by constructing training scenarios where it is beneficial for the +LM to capture the analogical reasoning concepts. We measure that data sampling +of Concept-aware Training (CoAT) consistently improves models' reasoning +ability. As a result, the in-context learners trained with CoAT on only two +datasets of a single (QA) task perform comparably to larger models trained on +1600+ tasks. +" +Dr.ICL: Demonstration-Retrieved In-context Learning,Man Luo,http://arxiv.org/pdf/2305.14128v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14128v1.pdf," In-context learning (ICL), teaching a large language model (LLM) to perform a +task with few-shot demonstrations rather than adjusting the model parameters, +has emerged as a strong paradigm for using LLMs. While early studies primarily +used a fixed or random set of demonstrations for all test queries, recent +research suggests that retrieving semantically similar demonstrations to the +input from a pool of available demonstrations results in better performance. 
+This work expands the applicability of retrieval-based ICL approaches by +demonstrating that even simple word-overlap similarity measures such as BM25 +outperform randomly selected demonstrations. Furthermore, we extend the success +of retrieval-based ICL to instruction-finetuned LLMs as well as +Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that +although a model has already seen the training data at training time, +retrieving demonstrations from the training data at test time yields better +results compared to using no demonstrations or random demonstrations. Last but +not least, we train a task-specific demonstration retriever that outperforms +off-the-shelf retrievers. +" +Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning,Lean Wang,http://arxiv.org/pdf/2305.14160v1.pdf,2023-05-23,"['cs.cl', 'cs.lg']",2305.14160v1.pdf," In-context learning (ICL) emerges as a promising capability of large language +models (LLMs) by providing them with demonstration examples to perform diverse +tasks. However, the underlying mechanism of how LLMs learn from the provided +context remains under-explored. In this paper, we investigate the working +mechanism of ICL through an information flow lens. Our findings reveal that +label words in the demonstration examples function as anchors: (1) semantic +information aggregates into label word representations during the shallow +computation layers' processing; (2) the consolidated information in label words +serves as a reference for LLMs' final predictions. Based on these insights, we +introduce an anchor re-weighting method to improve ICL performance, a +demonstration compression technique to expedite inference, and an analysis +framework for diagnosing ICL errors in GPT2-XL. The promising applications of +our findings again validate the uncovered ICL working mechanism and pave the +way for future studies. +" +Probing in Context: Toward Building Robust Classifiers via Probing Large Language Models,Afra Amini,http://arxiv.org/pdf/2305.14171v2.pdf,2023-05-23,['cs.cl'],2305.14171v2.pdf," Large language models are able to learn new tasks in context, where they are +provided with instructions and a few annotated examples. However, the +effectiveness of in-context learning is dependent on the provided context, and +the performance on a downstream task can vary considerably, depending on the +instruction. Importantly, such dependency on the context can surface in +unpredictable ways, e.g., a seemingly more informative instruction might lead +to a worse performance. In this paper, we propose an alternative approach, +which we term in-context probing. Similar to in-context learning, we +contextualize the representation of the input with an instruction, but instead +of decoding the output prediction, we probe the contextualized representation +to predict the label. Through a series of experiments on a diverse set of +classification tasks, we show that in-context probing is significantly more +robust to changes in instructions. We further show that probing performs +competitive or superior to finetuning and can be particularly helpful to build +classifiers on top of smaller models, and with only a hundred training +examples. 
+" +Coverage-based Example Selection for In-Context Learning,Shivanshu Gupta,http://arxiv.org/pdf/2305.14907v3.pdf,2023-05-24,['cs.cl'],2305.14907v3.pdf," In-context learning (ICL), the ability of large language models to perform +novel tasks by conditioning on a prompt with a few task examples, requires +these examples to be informative about the test instance. The standard approach +of independently ranking and selecting the most similar examples selects +redundant examples while omitting important information. In this work, we show +that BERTScore-Recall (BSR) selects better examples that demonstrate more of +the salient aspects, e.g. reasoning patterns, of the test input. We further +extend BSR and many standard metrics to easily optimizable set-level metrics, +giving still better coverage of those salient aspects. On 15 datasets spanning +6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric +for in-context example selection across the board, and (2) for compositional +tasks, set selection using Set-BSR outperforms independent ranking by up to 17 +points on average and, despite being training-free, surpasses methods that +leverage task or LLM-specific training. +" +Transformers learn to implement preconditioned gradient descent for in-context learning,Kwangjun Ahn,http://arxiv.org/pdf/2306.00297v1.pdf,2023-06-01,"['cs.lg', 'cs.ai']",2306.00297v1.pdf," Motivated by the striking ability of transformers for in-context learning, +several works demonstrate that transformers can implement algorithms like +gradient descent. By a careful construction of weights, these works show that +multiple layers of transformers are expressive enough to simulate gradient +descent iterations. Going beyond the question of expressivity, we ask: Can +transformers learn to implement such algorithms by training over random problem +instances? To our knowledge, we make the first theoretical progress toward this +question via analysis of the loss landscape for linear transformers trained +over random instances of linear regression. For a single attention layer, we +prove the global minimum of the training objective implements a single +iteration of preconditioned gradient descent. Notably, the preconditioning +matrix not only adapts to the input distribution but also to the variance +induced by data inadequacy. For a transformer with $k$ attention layers, we +prove certain critical points of the training objective implement $k$ +iterations of preconditioned gradient descent. Our results call for future +theoretical studies on learning algorithms by training transformers. +" +In-Context Learning User Simulators for Task-Oriented Dialog Systems,Silvia Terragni,http://arxiv.org/pdf/2306.00774v1.pdf,2023-06-01,"['cs.cl', 'cs.lg']",2306.00774v1.pdf," This paper presents a novel application of large language models in user +simulation for task-oriented dialog systems, specifically focusing on an +in-context learning approach. By harnessing the power of these models, the +proposed approach generates diverse utterances based on user goals and limited +dialog examples. Unlike traditional simulators, this method eliminates the need +for labor-intensive rule definition or extensive annotated data, making it more +efficient and accessible. Additionally, an error analysis of the interaction +between the user simulator and dialog system uncovers common mistakes, +providing valuable insights into areas that require improvement. 
Our +implementation is available at +https://github.com/telepathylabsai/prompt-based-user-simulator. +" +Towards In-context Scene Understanding,Ivana Balažević,http://arxiv.org/pdf/2306.01667v2.pdf,2023-06-02,['cs.cv'],2306.01667v2.pdf," In-context learning$\unicode{x2013}$the ability to configure a model's +behavior with different prompts$\unicode{x2013}$has revolutionized the field of +natural language processing, alleviating the need for task-specific models and +paving the way for generalist models capable of assisting with any query. +Computer vision, in contrast, has largely stayed in the former regime: +specialized decoders and finetuning protocols are generally required to perform +dense tasks such as semantic segmentation and depth estimation. In this work we +explore a simple mechanism for in-context learning of such scene understanding +tasks: nearest neighbor retrieval from a prompt of annotated features. We +propose a new pretraining protocol$\unicode{x2013}$leveraging attention within +and across images$\unicode{x2013}$which yields representations particularly +useful in this regime. The resulting Hummingbird model, suitably prompted, +performs various scene understanding tasks without modification while +approaching the performance of specialists that have been finetuned for each +task. Moreover, Hummingbird can be configured to perform new tasks much more +efficiently than finetuned models, raising the possibility of scene +understanding in the interactive assistant regime. +" +Leveraging Large Language Models for Scalable Vector Graphics-Driven Image Understanding,Mu Cai,http://arxiv.org/pdf/2306.06094v1.pdf,2023-06-09,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2306.06094v1.pdf," Recently, large language models (LLMs) have made significant advancements in +natural language understanding and generation. However, their potential in +computer vision remains largely unexplored. In this paper, we introduce a new, +exploratory approach that enables LLMs to process images using the Scalable +Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions +of SVG representations instead of raster images, we aim to bridge the gap +between the visual and textual modalities, allowing LLMs to directly understand +and manipulate images without the need for parameterized visual components. Our +method facilitates simple image classification, generation, and in-context +learning using only LLM capabilities. We demonstrate the promise of our +approach across discriminative and generative tasks, highlighting its (i) +robustness against distribution shift, (ii) substantial improvements achieved +by tapping into the in-context learning abilities of LLMs, and (iii) image +understanding and generation capabilities with human guidance. Our code, data, +and models can be found here https://github.com/mu-cai/svg-llm. +" +Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking,Qinyong Wang,http://arxiv.org/pdf/2307.01137v1.pdf,2023-07-03,"['cs.cl', 'cs.ai']",2307.01137v1.pdf," The biomedical field relies heavily on concept linking in various areas such +as literature mining, graph alignment, information retrieval, +question-answering, data, and knowledge integration. Although large language +models (LLMs) have made significant strides in many natural language processing +tasks, their effectiveness in biomedical concept mapping is yet to be fully +explored. 
This research investigates a method that exploits the in-context +learning (ICL) capabilities of large models for biomedical concept linking. The +proposed approach adopts a two-stage retrieve-and-rank framework. Initially, +biomedical concepts are embedded using language models, and then embedding +similarity is utilized to retrieve the top candidates. These candidates' +contextual information is subsequently incorporated into the prompt and +processed by a large language model to re-rank the concepts. This approach +achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7% +in chemical entity normalization, exhibiting a competitive performance relative +to supervised learning methods. Further, it showed a significant improvement, +with an over 20-point absolute increase in F1 score on an oncology matching +dataset. Extensive qualitative assessments were conducted, and the benefits and +potential shortcomings of using large language models within the biomedical +domain were discussed. +" +Learning to Retrieve In-Context Examples for Large Language Models,Liang Wang,http://arxiv.org/pdf/2307.07164v1.pdf,2023-07-14,"['cs.cl', 'cs.ir']",2307.07164v1.pdf," Large language models (LLMs) have demonstrated their ability to learn +in-context, allowing them to perform various tasks based on a few input-output +examples. However, the effectiveness of in-context learning is heavily reliant +on the quality of the selected examples. In this paper, we propose a novel +framework to iteratively train dense retrievers that can identify high-quality +in-context examples for LLMs. Our framework initially trains a reward model +based on LLM feedback to evaluate the quality of candidate examples, followed +by knowledge distillation to train a bi-encoder based dense retriever. Our +experiments on a suite of 30 tasks demonstrate that our framework significantly +enhances in-context learning performance. Furthermore, we show the +generalization ability of our framework to unseen tasks during training. An +in-depth analysis reveals that our model improves performance by retrieving +examples with similar patterns, and the gains are consistent across LLMs of +varying sizes. +" +In-Context Learning Learns Label Relationships but Is Not Conventional Learning,Jannik Kossen,http://arxiv.org/pdf/2307.12375v3.pdf,2023-07-23,"['cs.cl', 'cs.ai', 'cs.lg']",2307.12375v3.pdf," The predictions of Large Language Models (LLMs) on downstream tasks often +improve significantly when including examples of the input--label relationship +in the context. However, there is currently no consensus about how this +in-context learning (ICL) ability of LLMs works. For example, while Xie et al. +(2021) liken ICL to a general-purpose learning algorithm, Min et al. (2022) +argue ICL does not even learn label relationships from in-context examples. In +this paper, we provide novel insights into how ICL leverages label information, +revealing both capabilities and limitations. To ensure we obtain a +comprehensive picture of ICL behavior, we study probabilistic aspects of ICL +predictions and thoroughly examine the dynamics of ICL as more examples are +provided. Our experiments show that ICL predictions almost always depend on +in-context labels, and that ICL can learn truly novel tasks in-context. +However, we also find that ICL struggles to fully overcome prediction +preferences acquired from pre-training data, and, further, that ICL does not +consider all in-context information equally. 
+" +Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning,Xindi Wang,http://arxiv.org/pdf/2307.15411v2.pdf,2023-07-28,['cs.cl'],2307.15411v2.pdf," Large language models (LLMs) have shown remarkable capacity for in-context +learning (ICL), where learning a new task from just a few training examples is +done without being explicitly pre-trained. However, despite the success of +LLMs, there has been little understanding of how ICL learns the knowledge from +the given prompts. In this paper, to make progress toward understanding the +learning behaviour of ICL, we train the same LLMs with the same demonstration +examples via ICL and supervised learning (SL), respectively, and investigate +their performance under label perturbations (i.e., noisy labels and label +imbalance) on a range of classification tasks. First, via extensive +experiments, we find that gold labels have significant impacts on the +downstream in-context performance, especially for large language models; +however, imbalanced labels matter little to ICL across all model sizes. Second, +when comparing with SL, we show empirically that ICL is less sensitive to label +perturbations than SL, and ICL gradually attains comparable performance to SL +as the model size increases. +" +Exploring Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning,Hunter McNichols,http://arxiv.org/pdf/2308.03234v1.pdf,2023-08-07,['cs.cl'],2308.03234v1.pdf," Multiple-choice questions (MCQs) are ubiquitous in almost all levels of +education since they are easy to administer, grade, and are a reliable format +in both assessments and practices. An important aspect of MCQs is the +distractors, i.e., incorrect options that are designed to target specific +misconceptions or insufficient knowledge among students. To date, the task of +crafting high-quality distractors has largely remained a labor-intensive +process for teachers and learning content designers, which has limited +scalability. In this work, we explore the task of automated distractor and +corresponding feedback message generation in math MCQs using large language +models. We establish a formulation of these two tasks and propose a simple, +in-context learning-based solution. Moreover, we explore using two non-standard +metrics to evaluate the quality of the generated distractors and feedback +messages. We conduct extensive experiments on these tasks using a real-world +MCQ dataset that contains student response information. Our findings suggest +that there is a lot of room for improvement in automated distractor and +feedback generation. We also outline several directions for future work +" +CausalLM is not optimal for in-context learning,Nan Ding,http://arxiv.org/pdf/2308.06912v2.pdf,2023-08-14,"['cs.lg', 'cs.cl']",2308.06912v2.pdf," Recent empirical evidence indicates that transformer based in-context +learning performs better when using a prefix language model (prefixLM), in +which in-context samples can all attend to each other, compared to causal +language models (causalLM), which use auto-regressive attention that prohibits +in-context samples to attend to future samples. While this result is intuitive, +it is not understood from a theoretical perspective. In this paper we take a +theoretical approach and analyze the convergence behavior of prefixLM and +causalLM under a certain parameter construction. 
Our analysis shows that both +LM types converge to their stationary points at a linear rate, but that while +prefixLM converges to the optimal solution of linear regression, causalLM +convergence dynamics follows that of an online gradient descent algorithm, +which is not guaranteed to be optimal even as the number of samples grows +infinitely. We supplement our theoretical claims with empirical experiments +over synthetic and real tasks and using various types of transformers. Our +experiments verify that causalLM consistently underperforms prefixLM in all +settings. +" +Exploring Demonstration Ensembling for In-context Learning,Muhammad Khalifa,http://arxiv.org/pdf/2308.08780v2.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.08780v2.pdf," In-context learning (ICL) operates by showing language models (LMs) examples +of input-output pairs for a given task, i.e., demonstrations. The standard +approach for ICL is to prompt the LM with concatenated demonstrations followed +by the test input. This approach suffers from some issues. First, concatenation +offers almost no control over the contribution of each demo to the model +prediction. This can be sub-optimal when some demonstrations are irrelevant to +the test example. Second, due to the input length limit of some transformer +models, it might be infeasible to fit many examples into the context, +especially when dealing with long-input tasks. In this work, we explore +Demonstration Ensembling (DENSE) as an alternative to simple concatenation. +DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations and +then combines the output probabilities resulting from each subset to produce +the final prediction. We study different ensembling methods using GPT-j and +experiment on 12 language tasks. Our experiments show weighted max ensembling +to outperform vanilla concatenation by as large as 2.4 average points. Code +available at https://github.com/mukhal/icl-ensembling. +" +Context is Environment,Sharut Gupta,http://arxiv.org/pdf/2309.09888v2.pdf,2023-09-18,"['cs.lg', 'cs.ai', 'stat.ml']",2309.09888v2.pdf," Two lines of work are taking the central stage in AI research. On the one +hand, the community is making increasing efforts to build models that discard +spurious correlations and generalize better in novel test environments. +Unfortunately, the bitter lesson so far is that no proposal convincingly +outperforms a simple empirical risk minimization baseline. On the other hand, +large language models (LLMs) have erupted as algorithms able to learn +in-context, generalizing on-the-fly to eclectic contextual circumstances that +users enforce by means of prompting. In this paper, we argue that context is +environment, and posit that in-context learning holds the key to better domain +generalization. Via extensive theory and experiments, we show that paying +attention to context$\unicode{x2013}\unicode{x2013}$unlabeled examples as they +arrive$\unicode{x2013}\unicode{x2013}$allows our proposed In-Context Risk +Minimization (ICRM) algorithm to zoom-in on the test environment risk +minimizer, leading to significant out-of-distribution performance improvements. +From all of this, two messages are worth taking home. Researchers in domain +generalization should consider environment as context, and harness the adaptive +power of in-context learning. Researchers in LLMs should consider context as +environment, to better structure data towards generalization. 
+" +"Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning",Peter Ebert Christensen,http://arxiv.org/pdf/2309.10359v1.pdf,2023-09-19,['cs.cl'],2309.10359v1.pdf," Unsupported and unfalsifiable claims we encounter in our daily lives can +influence our view of the world. Characterizing, summarizing, and -- more +generally -- making sense of such claims, however, can be challenging. In this +work, we focus on fine-grained debate topics and formulate a new task of +distilling, from such claims, a countable set of narratives. We present a +crowdsourced dataset of 12 controversial topics, comprising more than 120k +arguments, claims, and comments from heterogeneous sources, each annotated with +a narrative label. We further investigate how large language models (LLMs) can +be used to synthesise claims using In-Context Learning. We find that generated +claims with supported evidence can be used to improve the performance of +narrative classification models and, additionally, that the same model can +infer the stance and aspect using a few training examples. Such a model can be +useful in applications which rely on narratives , e.g. fact-checking. +" +In-Context Learning for Text Classification with Many Labels,Aristides Milios,http://arxiv.org/pdf/2309.10954v1.pdf,2023-09-19,"['cs.cl', 'cs.lg']",2309.10954v1.pdf," In-context learning (ICL) using large language models for tasks with many +labels is challenging due to the limited context window, which makes it +difficult to fit a sufficient number of examples in the prompt. In this paper, +we use a pre-trained dense retrieval model to bypass this limitation, giving +the model only a partial view of the full label space for each inference call. +Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the art +performance in few-shot settings for three common intent classification +datasets, with no finetuning. We also surpass fine-tuned performance on +fine-grained sentiment classification in certain cases. We analyze the +performance across number of in-context examples and different model scales, +showing that larger models are necessary to effectively and consistently make +use of larger context lengths for ICL. By running several ablations, we analyze +the model's use of: a) the similarity of the in-context examples to the current +input, b) the semantic content of the class names, and c) the correct +correspondence between examples and labels. We demonstrate that all three are +needed to varying degrees depending on the domain, contrary to certain recent +works. +" +Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation,Xinyu Tang,http://arxiv.org/pdf/2309.11765v1.pdf,2023-09-21,"['cs.lg', 'cs.cr']",2309.11765v1.pdf," We study the problem of in-context learning (ICL) with large language models +(LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak +or regurgitate the private examples demonstrated in the prompt. We propose a +novel algorithm that generates synthetic few-shot demonstrations from the +private dataset with formal differential privacy (DP) guarantees, and show +empirically that it can achieve effective ICL. We conduct extensive experiments +on standard benchmarks and compare our algorithm with non-private ICL and +zero-shot solutions. Our results demonstrate that our algorithm can achieve +competitive performance with strong privacy levels. 
These results open up new +possibilities for ICL with privacy protection for a broad range of +applications. +" +HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering,Tongxu Luo,http://arxiv.org/pdf/2309.12669v1.pdf,2023-09-22,['cs.cl'],2309.12669v1.pdf," Answering numerical questions over hybrid contents from the given tables and +text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) +have gained significant attention in the NLP community. With the emergence of +large language models, In-Context Learning and Chain-of-Thought prompting have +become two particularly popular research topics in this field. In this paper, +we introduce a new prompting strategy called Hybrid prompt strategy and +Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt +the model to develop the ability of retrieval thinking when dealing with hybrid +data. Our method achieves superior performance compared to the fully-supervised +SOTA on the MultiHiertt dataset in the few-shot setting. +" +ALLURE: Auditing and Improving LLM-based Evaluation of Text using Iterative In-Context-Learning,Hosein Hasanbeig,http://arxiv.org/pdf/2309.13701v2.pdf,2023-09-24,"['cs.cl', 'cs.ai', 'cs.hc']",2309.13701v2.pdf," From grading papers to summarizing medical documents, large language models +(LLMs) are evermore used for evaluation of text generated by humans and AI +alike. However, despite their extensive utility, LLMs exhibit distinct failure +modes, necessitating a thorough audit and improvement of their text evaluation +capabilities. Here we introduce ALLURE, a systematic approach to Auditing Large +Language Models Understanding and Reasoning Errors. ALLURE involves comparing +LLM-generated evaluations with annotated data, and iteratively incorporating +instances of significant deviation into the evaluator, which leverages +in-context learning (ICL) to enhance and improve robust evaluation of text by +LLMs. Through this iterative process, we refine the performance of the +evaluator LLM, ultimately reducing reliance on human annotators in the +evaluation process. We anticipate ALLURE to serve diverse applications of LLMs +in various domains related to evaluation of textual data, such as medical +summarization, education, and productivity. +" +Dynamic Demonstrations Controller for In-Context Learning,Fei Zhao,http://arxiv.org/pdf/2310.00385v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.00385v1.pdf," In-Context Learning (ICL) is a new paradigm for natural language processing +(NLP), where a large language model (LLM) observes a small number of +demonstrations and a test instance as its input, and directly makes predictions +without updating model parameters. Previous studies have revealed that ICL is +sensitive to the selection and the ordering of demonstrations. However, there +are few studies regarding the impact of the demonstration number on the ICL +performance within a limited input length of LLM, because it is commonly +believed that the number of demonstrations is positively correlated with model +performance. In this paper, we found this conclusion does not always hold true. +Through pilot experiments, we discover that increasing the number of +demonstrations does not necessarily lead to improved performance. Building upon +this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller), +which can improve the ICL performance by adjusting the number of demonstrations +dynamically. 
The experimental results show that D$^2$Controller yields a 5.4% +relative improvement on eight different sizes of LLMs across ten datasets. +Moreover, we also extend our method to previous ICL models and achieve +competitive results. +" +The Cost of Down-Scaling Language Models: Fact Recall Deteriorates before In-Context Learning,Tian Jin,http://arxiv.org/pdf/2310.04680v1.pdf,2023-10-07,"['cs.cl', 'cs.ai', 'cs.lg']",2310.04680v1.pdf," How does scaling the number of parameters in large language models (LLMs) +affect their core capabilities? We study two natural scaling techniques -- +weight pruning and simply training a smaller or larger model, which we refer to +as dense scaling -- and their effects on two core capabilities of LLMs: (a) +recalling facts presented during pre-training and (b) processing information +presented in-context during inference. By curating a suite of tasks that help +disentangle these two capabilities, we find a striking difference in how these +two abilities evolve due to scaling. Reducing the model size by more than 30\% +(via either scaling approach) significantly decreases the ability to recall +facts seen in pre-training. Yet, a 60--70\% reduction largely preserves the +various ways the model can process in-context information, ranging from +retrieving answers from a long context to learning parameterized functions from +in-context exemplars. The fact that both dense scaling and weight pruning +exhibit this behavior suggests that scaling model size has an inherently +disparate effect on fact recall and in-context learning. +" +Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning,Zhe Yang,http://arxiv.org/pdf/2310.08309v1.pdf,2023-10-12,['cs.cl'],2310.08309v1.pdf," Large Language Models (LLMs) have recently gained the In-Context Learning +(ICL) ability with the models scaling up, allowing them to quickly adapt to +downstream tasks with only a few demonstration examples prepended in the input +sequence. Nonetheless, the current practice of ICL treats all demonstration +examples equally, which still warrants improvement, as the quality of examples +is usually uneven. In this paper, we investigate how to determine approximately +optimal weights for demonstration examples and how to apply them during ICL. To +assess the quality of weights in the absence of additional validation data, we +design a masked self-prediction (MSP) score that exhibits a strong correlation +with the final ICL performance. To expedite the weight-searching process, we +discretize the continuous weight space and adopt beam search. With +approximately optimal weights obtained, we further propose two strategies to +apply them to demonstrations at different model positions. Experimental results +on 8 text classification tasks show that our approach outperforms conventional +ICL by a large margin. Our code is publicly available at +https://github.com/Zhe-Young/WICL. +" +How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?,Jingfeng Wu,http://arxiv.org/pdf/2310.08391v1.pdf,2023-10-12,"['stat.ml', 'cs.lg']",2310.08391v1.pdf," Transformers pretrained on diverse tasks exhibit remarkable in-context +learning (ICL) capabilities, enabling them to solve unseen tasks solely based +on input contexts without adjusting model parameters. In this paper, we study +ICL in one of its simplest setups: pretraining a linearly parameterized +single-layer linear attention model for linear regression with a Gaussian +prior. 
We establish a statistical task complexity bound for the attention model +pretraining, showing that effective pretraining only requires a small number of +independent tasks. Furthermore, we prove that the pretrained model closely +matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by +achieving nearly Bayes optimal risk on unseen tasks under a fixed context +length. These theoretical findings complement prior experimental research and +shed light on the statistical foundations of ICL. +" +Generative Calibration for In-context Learning,Zhongtao Jiang,http://arxiv.org/pdf/2310.10266v1.pdf,2023-10-16,['cs.cl'],2310.10266v1.pdf," As one of the most exciting features of large language models (LLMs), +in-context learning is a mixed blessing. While it allows users to +fast-prototype a task solver with only a few training examples, the performance +is generally sensitive to various configurations of the prompt such as the +choice or order of the training examples. In this paper, we for the first time +theoretically and empirically identify that such a paradox is mainly due to the +label shift of the in-context model to the data distribution, in which LLMs +shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. +With this understanding, we can simply calibrate the in-context predictive +distribution by adjusting the label marginal, which is estimated via +Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We +call our approach as generative calibration. We conduct exhaustive experiments +with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, +generally find that the proposed method greatly and consistently outperforms +the ICL as well as state-of-the-art calibration methods, by up to 27% absolute +in macro-F1. Meanwhile, the proposed method is also stable under different +prompt configurations. +" +"Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning",Rui Wen,http://arxiv.org/pdf/2310.11397v1.pdf,2023-10-17,"['cs.cr', 'cs.lg']",2310.11397v1.pdf," Large Language Models (LLMs) are powerful tools for natural language +processing, enabling novel applications and user experiences. However, to +achieve optimal performance, LLMs often require adaptation with private data, +which poses privacy and security challenges. Several techniques have been +proposed to adapt LLMs with private data, such as Low-Rank Adaptation (LoRA), +Soft Prompt Tuning (SPT), and In-Context Learning (ICL), but their comparative +privacy and security properties have not been systematically investigated. In +this work, we fill this gap by evaluating the robustness of LoRA, SPT, and ICL +against three types of well-established attacks: membership inference, which +exposes data leakage (privacy); backdoor, which injects malicious behavior +(security); and model stealing, which can violate intellectual property +(privacy and security). Our results show that there is no silver bullet for +privacy and security in LLM adaptation and each technique has different +strengths and weaknesses. +" +MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations,Arkil Patel,http://arxiv.org/pdf/2310.11634v1.pdf,2023-10-18,['cs.cl'],2310.11634v1.pdf," Humans possess a remarkable ability to assign novel interpretations to +linguistic expressions, enabling them to learn new words and understand +community-specific connotations. 
However, Large Language Models (LLMs) have a +knowledge cutoff and are costly to finetune repeatedly. Therefore, it is +crucial for LLMs to learn novel interpretations in-context. In this paper, we +systematically analyse the ability of LLMs to acquire novel interpretations +using in-context learning. To facilitate our study, we introduce MAGNIFICo, an +evaluation suite implemented within a text-to-SQL semantic parsing framework +that incorporates diverse tokens and prompt settings to simulate real-world +complexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit a +surprisingly robust capacity for comprehending novel interpretations from +natural language descriptions as well as from discussions within long +conversations. Nevertheless, our findings also highlight the need for further +improvements, particularly when interpreting unfamiliar words or when composing +multiple novel interpretations simultaneously in the same example. +Additionally, our analysis uncovers the semantic predispositions in LLMs and +reveals the impact of recency bias for information presented in long contexts. +" +In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern,Ruifeng Ren,http://arxiv.org/pdf/2310.13220v1.pdf,2023-10-20,['cs.lg'],2310.13220v1.pdf," Pre-trained large language models based on Transformers have demonstrated +amazing in-context learning (ICL) abilities. Given several demonstration +examples, the models can implement new tasks without any parameter updates. +However, it is still an open question to understand the mechanism of ICL. In +this paper, we interpret the inference process of ICL as a gradient descent +process in a contrastive learning pattern. Firstly, leveraging kernel methods, +we establish the relationship between gradient descent and self-attention +mechanism under generally used softmax attention setting instead of linear +attention setting. Then, we analyze the corresponding gradient descent process +of ICL from the perspective of contrastive learning without negative samples +and discuss possible improvements of this contrastive learning pattern, based +on which the self-attention layer can be further modified. Finally, we design +experiments to support our opinions. To the best of our knowledge, our work is +the first to provide the understanding of ICL from the perspective of +contrastive learning and has the potential to facilitate future model design by +referring to related works on contrastive learning. +" +In-Context Learning Creates Task Vectors,Roee Hendel,http://arxiv.org/pdf/2310.15916v1.pdf,2023-10-24,['cs.cl'],2310.15916v1.pdf," In-context learning (ICL) in Large Language Models (LLMs) has emerged as a +powerful new learning paradigm. However, its underlying mechanism is still not +well understood. In particular, it is challenging to map it to the ""standard"" +machine learning framework, where one uses a training set $S$ to find a +best-fitting function $f(x)$ in some hypothesis class. Here we make progress on +this problem by showing that the functions learned by ICL often have a very +simple structure: they correspond to the transformer LLM whose only inputs are +the query $x$ and a single ""task vector"" calculated from the training set. +Thus, ICL can be seen as compressing $S$ into a single task vector +$\boldsymbol{\theta}(S)$ and then using this task vector to modulate the +transformer to produce the output. We support the above claim via comprehensive +experiments across a range of models and tasks. 
+" +When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations,Aleksandar Petrov,http://arxiv.org/pdf/2310.19698v1.pdf,2023-10-30,"['cs.lg', 'cs.cl']",2310.19698v1.pdf," Context-based fine-tuning methods, including prompting, in-context learning, +soft prompting (also known as prompt tuning), and prefix-tuning, have gained +popularity due to their ability to often match the performance of full +fine-tuning with a fraction of the parameters. Despite their empirical +successes, there is little theoretical understanding of how these techniques +influence the internal computation of the model and their expressiveness +limitations. We show that despite the continuous embedding space being more +expressive than the discrete token space, soft-prompting and prefix-tuning are +strictly less expressive than full fine-tuning, even with the same number of +learnable parameters. Concretely, context-based fine-tuning cannot change the +relative attention pattern over the content and can only bias the outputs of an +attention layer in a fixed direction. This suggests that while techniques like +prompting, in-context learning, soft prompting, and prefix-tuning can +effectively elicit skills present in the pretrained model, they cannot learn +novel tasks that require new attention patterns. +" +Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection,Costas Mavromatis,http://arxiv.org/pdf/2310.20046v1.pdf,2023-10-30,['cs.cl'],2310.20046v1.pdf," Large Language Models (LLMs) can adapt to new tasks via in-context learning +(ICL). ICL is efficient as it does not require any parameter updates to the +trained LLM, but only few annotated examples as input for the LLM. In this +work, we investigate an active learning approach for ICL, where there is a +limited budget for annotating examples. We propose a model-adaptive +optimization-free algorithm, termed AdaICL, which identifies examples that the +model is uncertain about, and performs semantic diversity-based example +selection. Diversity-based sampling improves overall effectiveness, while +uncertainty sampling improves budget efficiency and helps the LLM learn new +information. Moreover, AdaICL poses its sampling strategy as a Maximum Coverage +problem, that dynamically adapts based on the model's feedback and can be +approximately solved via greedy algorithms. Extensive experiments on nine +datasets and seven LLMs show that AdaICL improves performance by 4.4% accuracy +points over SOTA (7.7% relative improvement), is up to 3x more budget-efficient +than performing annotations uniformly at random, while it outperforms SOTA with +2x fewer ICL examples. +" +DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase,Dawei Li,http://arxiv.org/pdf/2311.03319v1.pdf,2023-11-06,"['cs.cl', 'cs.ai']",2311.03319v1.pdf," In-Context Learning (ICL) combined with pre-trained large language models has +achieved promising results on various NLP tasks. However, ICL requires +high-quality annotated demonstrations which might not be available in +real-world scenarios. To overcome this limitation, we propose \textbf{D}ata +\textbf{A}ugmentation for \textbf{I}n-Context \textbf{L}earning +(\textbf{DAIL}). DAIL leverages the intuition that large language models are +more familiar with the content generated by themselves. It first utilizes the +language model to generate paraphrases of the test sample and employs majority +voting to determine the final result based on individual predictions. 
Our +extensive empirical evaluation shows that DAIL outperforms the standard ICL +method and other ensemble-based methods in the low-resource scenario. +Additionally, we explore the use of voting consistency as a confidence score of +the model when the logits of predictions are inaccessible. We believe our work +will stimulate further research on ICL in low-resource settings. +" +In-Context Exemplars as Clues to Retrieving from Large Associative Memory,Jiachen Zhao,http://arxiv.org/pdf/2311.03498v1.pdf,2023-11-06,"['cs.cl', 'cs.lg']",2311.03498v1.pdf," Recently, large language models (LLMs) have made remarkable progress in +natural language processing. The most representative ability of LLMs is +in-context learning (ICL), which enables LLMs to learn patterns from in-context +exemplars without training. The performance of ICL greatly depends on the +exemplars used. However, how to choose exemplars remains unclear due to the +lack of understanding of how in-context learning works. In this paper, we +present a novel perspective on ICL by conceptualizing it as contextual +retrieval from a model of associative memory. We establish a theoretical +framework of ICL based on Hopfield Networks. Based on our framework, we look +into how in-context exemplars influence the performance of ICL and propose more +efficient active exemplar selection. Our study sheds new light on the mechanism +of ICL by connecting it to memory retrieval, with potential implications for +advancing the understanding of LLMs. +" +Instruct Me More! Random Prompting for Visual In-Context Learning,Jiahao Zhang,http://arxiv.org/pdf/2311.03648v1.pdf,2023-11-07,['cs.cv'],2311.03648v1.pdf," Large-scale models trained on extensive datasets, have emerged as the +preferred approach due to their high generalizability across various tasks. +In-context learning (ICL), a popular strategy in natural language processing, +uses such models for different tasks by providing instructive prompts but +without updating model parameters. This idea is now being explored in computer +vision, where an input-output image pair (called an in-context pair) is +supplied to the model with a query image as a prompt to exemplify the desired +output. The efficacy of visual ICL often depends on the quality of the prompts. +We thus introduce a method coined Instruct Me More (InMeMo), which augments +in-context pairs with a learnable perturbation (prompt), to explore its +potential. Our experiments on mainstream tasks reveal that InMeMo surpasses the +current state-of-the-art performance. Specifically, compared to the baseline +without learnable prompt, InMeMo boosts mIoU scores by 7.35 and 15.13 for +foreground segmentation and single object detection tasks, respectively. Our +findings suggest that InMeMo offers a versatile and efficient way to enhance +the performance of visual ICL with lightweight training. Code is available at +https://github.com/Jackieam/InMeMo. +" +Selective Annotation Makes Language Models Better Few-Shot Learners,Hongjin Su,http://arxiv.org/pdf/2209.01975v1.pdf,2022-09-05,['cs.cl'],2209.01975v1.pdf," Many recent approaches to natural language tasks are built on the remarkable +abilities of large language models. Large language models can perform +in-context learning, where they learn a new task from a few task +demonstrations, without any parameter updates. This work examines the +implications of in-context learning for the creation of datasets for new +natural language tasks. 
Departing from recent in-context learning methods, we +formulate an annotation-efficient, two-step framework: selective annotation +that chooses a pool of examples to annotate from unlabeled data in advance, +followed by prompt retrieval that retrieves task examples from the annotated +pool at test time. Based on this framework, we propose an unsupervised, +graph-based selective annotation method, vote-k, to select diverse, +representative examples to annotate. Extensive experiments on 10 datasets +(covering classification, commonsense reasoning, dialogue, and text/code +generation) demonstrate that our selective annotation method improves the task +performance by a large margin. On average, vote-k achieves a 12.9%/11.4% +relative gain under an annotation budget of 18/100, as compared to randomly +selecting examples to annotate. Compared to state-of-the-art supervised +finetuning approaches, it yields similar performance with 10-100x less +annotation cost across 10 tasks. We further analyze the effectiveness of our +framework in various scenarios: language models with varying sizes, alternative +selective annotation methods, and cases where there is a test data domain +shift. We hope that our studies will serve as a basis for data annotations as +large language models are increasingly applied to new tasks. Our code is +available at https://github.com/HKUNLP/icl-selective-annotation. +" +In-context Example Selection with Influences,Tai Nguyen,http://arxiv.org/pdf/2302.11042v2.pdf,2023-02-21,"['cs.cl', 'cs.lg']",2302.11042v2.pdf," In-context learning (ICL) is a powerful paradigm emerged from large language +models (LLMs). Despite its promises, ICL performance is known to be highly +sensitive to input examples. In this work, we use $\textit{in-context +influences}$ to analyze few-shot ICL performance directly from the in-context +examples. Our proposed influence-based example selection method can identify +both positive and negative examples, outperforming several baselines when +evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$ +performance gap between using the most negative in-context examples compared to +the most positive. In a case study, we apply our influence-based framework to +quantify the phenomena of recency bias in example ordering for few-shot ICL. +" +In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning,Xiaochuang Han,http://arxiv.org/pdf/2308.04275v1.pdf,2023-08-08,"['cs.cl', 'cs.ai', 'cs.lg']",2308.04275v1.pdf," In this note, we explore inference-time alignment through in-context +learning. We consider a vanilla pretrained language model Llama-2 before any +fine-tuning and retrieve an average of 9 demonstration alignment examples when +the model is prompted to follow chat-style instructions. Compared to direct +prompting, the in-context alignment without changing model weights leads to a +7x increase in win-rate w.r.t. the text-davinci-003 model from OpenAI, making +the vanilla language model comparable to strong baselines with alignment +fine-tuning. +" +"Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs",Ananya Singha,http://arxiv.org/pdf/2310.10358v1.pdf,2023-10-16,"['cs.cl', 'cs.ai']",2310.10358v1.pdf," Large language models (LLMs) are increasingly applied for tabular tasks using +in-context learning. The prompt representation for a table may play a role in +the LLMs ability to process the table. 
Inspired by prior work, we generate a +collection of self-supervised structural tasks (e.g. navigate to a cell and +row; transpose the table) and evaluate the performance differences when using 8 +formats. In contrast to past work, we introduce 8 noise operations inspired by +real-world messy data and adversarial inputs, and show that such operations can +impact LLM performance across formats for different structural understanding +tasks. +" +GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19 Dataset,Ruibo Chen,http://arxiv.org/pdf/2310.18498v1.pdf,2023-10-27,"['eess.iv', 'cs.cv', 'cs.lg']",2310.18498v1.pdf," This technical report delves into the application of GPT-4 Vision (GPT-4V) in +the nuanced realm of COVID-19 image classification, leveraging the +transformative potential of in-context learning to enhance diagnostic +processes. +" +Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning,Haokun Liu,http://arxiv.org/pdf/2205.05638v2.pdf,2022-05-11,"['cs.lg', 'cs.ai', 'cs.cl']",2205.05638v2.pdf," Few-shot in-context learning (ICL) enables pre-trained language models to +perform a previously-unseen task without any gradient-based training by feeding +a small number of training examples as part of the input. ICL incurs +substantial computational, memory, and storage costs because it involves +processing all of the training examples every time a prediction is made. +Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, +sparse update methods, etc.) offers an alternative paradigm where a small set +of parameters are trained to enable a model to perform the new task. In this +paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the +latter offers better accuracy as well as dramatically lower computational +costs. Along the way, we introduce a new PEFT method called (IA)$^3$ that +scales activations by learned vectors, attaining stronger performance while +only introducing a relatively tiny amount of new parameters. We also propose a +simple recipe based on the T0 model called T-Few that can be applied to new +tasks without task-specific tuning or modifications. We validate the +effectiveness of T-Few on completely unseen tasks by applying it to the RAFT +benchmark, attaining super-human performance for the first time and +outperforming the state-of-the-art by 6% absolute. All of the code used in our +experiments is publicly available. +" +Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing,Linlu Qiu,http://arxiv.org/pdf/2205.12253v2.pdf,2022-05-24,['cs.cl'],2205.12253v2.pdf," Despite their strong performance on many tasks, pre-trained language models +have been shown to struggle on out-of-distribution compositional +generalization. Meanwhile, recent work has shown considerable improvements on +many NLP tasks from model scaling. Can scaling up model size also improve +compositional generalization in semantic parsing? We evaluate encoder-decoder +models up to 11B parameters and decoder-only models up to 540B parameters, and +compare model scaling curves for three different methods for applying a +pre-trained language model to a new task: fine-tuning all parameters, prompt +tuning, and in-context learning. We observe that fine-tuning generally has flat +or negative scaling curves on out-of-distribution compositional generalization +in semantic parsing evaluations. 
In-context learning has positive scaling +curves, but is generally outperformed by much smaller fine-tuned models. +Prompt-tuning can outperform fine-tuning, suggesting further potential +improvements from scaling as it exhibits a more positive scaling curve. +Additionally, we identify several error trends that vary with model scale. For +example, larger models are generally better at modeling the syntax of the +output space, but are also more prone to certain types of overfitting. Overall, +our study highlights limitations of current techniques for effectively +leveraging model scale for compositional generalization, while our analysis +also suggests promising directions for future work. +" +Controllable Dialogue Simulation with In-Context Learning,Zekun Li,http://arxiv.org/pdf/2210.04185v4.pdf,2022-10-09,"['cs.cl', 'cs.ai']",2210.04185v4.pdf," Building dialogue systems requires a large corpus of annotated dialogues. +Such datasets are usually created via crowdsourcing, which is expensive and +time-consuming. In this paper, we propose \textsc{Dialogic}, a novel dialogue +simulation method based on large language model in-context learning to automate +dataset creation. Seeded with a few annotated dialogues, \textsc{Dialogic} +automatically selects in-context examples for demonstration and prompts GPT-3 +to generate new dialogues and annotations in a controllable way. Our method can +rapidly expand a small set of dialogue data with minimum or zero \textit{human +involvement} and \textit{parameter update} and is thus much more cost-efficient +and time-saving than crowdsourcing. Experimental results on the MultiWOZ +dataset demonstrate that training a model on the simulated dialogues leads to +even better performance than using the same amount of human-generated dialogues +under the challenging low-resource settings, with as few as 85 dialogues as a +seed. When enough data is available, our method can still serve as an effective +data augmentation method. Human evaluation results also show that our simulated +dialogues have near-human fluency and annotation accuracy. The code and data +are available at \textbf{\url{https://github.com/Leezekun/dialogic}}. +" +XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing,Peng Shi,http://arxiv.org/pdf/2210.13693v1.pdf,2022-10-25,['cs.cl'],2210.13693v1.pdf," In-context learning using large language models has recently shown surprising +results for semantic parsing tasks such as Text-to-SQL translation. Prompting +GPT-3 or Codex using several examples of question-SQL pairs can produce +excellent results, comparable to state-of-the-art finetuning-based models. +However, existing work primarily focuses on English datasets, and it is unknown +whether large language models can serve as competitive semantic parsers for +other languages. To bridge this gap, our work focuses on cross-lingual +Text-to-SQL semantic parsing for translating non-English utterances into SQL +queries based on an English schema. We consider a zero-shot transfer learning +setting with the assumption that we do not have any labeled examples in the +target language (but have annotated examples in English). This work introduces +the XRICL framework, which learns to retrieve relevant English exemplars for a +given query to construct prompts. We also include global translation exemplars +for a target language to facilitate the translation process for large language +models. 
To systematically evaluate our model, we construct two new benchmark +datasets, XSpider and XKaggle-dbqa, which include questions in Chinese, +Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively +leverages large pre-trained language models to outperform existing baselines. +Data and code are publicly available at https://github.com/Impavidity/XRICL. +" +Images Speak in Images: A Generalist Painter for In-Context Visual Learning,Xinlong Wang,http://arxiv.org/pdf/2212.02499v2.pdf,2022-12-05,['cs.cv'],2212.02499v2.pdf," In-context learning, as a new paradigm in NLP, allows the model to rapidly +adapt to various tasks with only a handful of prompts and examples. But in +computer vision, the difficulties for in-context learning lie in that tasks +vary significantly in the output representations, thus it is unclear how to +define the general-purpose task prompts that the vision model can understand +and transfer to out-of-domain tasks. In this work, we present Painter, a +generalist model which addresses these obstacles with an ""image""-centric +solution, that is, to redefine the output of core vision tasks as images, and +specify task prompts as also images. With this idea, our training process is +extremely simple, which performs standard masked image modeling on the stitch +of input and output image pairs. This makes the model capable of performing +tasks conditioned on visible image patches. Thus, during inference, we can +adopt a pair of input and output images from the same task as the input +condition, to indicate which task to perform. Without bells and whistles, our +generalist Painter can achieve competitive performance compared to +well-established task-specific models, on seven representative vision tasks +ranging from high-level visual understanding to low-level image processing. In +addition, Painter significantly outperforms recent generalist models on several +challenging tasks. +" +General-Purpose In-Context Learning by Meta-Learning Transformers,Louis Kirsch,http://arxiv.org/pdf/2212.04458v1.pdf,2022-12-08,"['cs.lg', 'cs.ai', 'cs.ne', 'stat.ml']",2212.04458v1.pdf," Modern machine learning requires system designers to specify aspects of the +learning pipeline, such as losses, architectures, and optimizers. +Meta-learning, or learning-to-learn, instead aims to learn those aspects, and +promises to unlock greater capabilities with less manual effort. One +particularly ambitious goal of meta-learning is to train general-purpose +in-context learning algorithms from scratch, using only black-box models with +minimal inductive bias. Such a model takes in training data, and produces +test-set predictions across a wide range of problems, without any explicit +definition of an inference model, training loss, or optimization algorithm. In +this paper we show that Transformers and other black-box models can be +meta-trained to act as general-purpose in-context learners. We characterize +phase transitions between algorithms that generalize, algorithms that memorize, +and algorithms that fail to meta-train at all, induced by changes in model +size, number of tasks, and meta-optimization. We further show that the +capabilities of meta-trained algorithms are bottlenecked by the accessible +state size (memory) determining the next prediction, unlike standard models +which are thought to be bottlenecked by parameter count. 
Finally, we propose +practical interventions such as biasing the training distribution that improve +the meta-training and meta-generalization of general-purpose learning +algorithms. +" +Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP,Omar Khattab,http://arxiv.org/pdf/2212.14024v2.pdf,2022-12-28,"['cs.cl', 'cs.ir']",2212.14024v2.pdf," Retrieval-augmented in-context learning has emerged as a powerful approach +for addressing knowledge-intensive tasks using frozen language models (LM) and +retrieval models (RM). Existing work has combined these in simple +""retrieve-then-read"" pipelines in which the RM retrieves passages that are +inserted into the LM prompt. To begin to fully realize the potential of frozen +LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that +relies on passing natural language texts in sophisticated pipelines between an +LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware +demonstrations, search for relevant passages, and generate grounded +predictions, systematically breaking down problems into small transformations +that the LM and RM can handle more reliably. We have written novel DSP programs +for answering questions in open-domain, multi-hop, and conversational settings, +establishing in early evaluations new state-of-the-art in-context learning +results and delivering 37-120%, 8-39%, and 80-290% relative gains against the +vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a +contemporaneous self-ask pipeline, respectively. We release DSP at +https://github.com/stanfordnlp/dsp +" +How Does In-Context Learning Help Prompt Tuning?,Simeng Sun,http://arxiv.org/pdf/2302.11521v1.pdf,2023-02-22,['cs.cl'],2302.11521v1.pdf," Fine-tuning large language models is becoming ever more impractical due to +their rapidly-growing scale. This motivates the use of parameter-efficient +adaptation methods such as prompt tuning (PT), which adds a small number of +tunable embeddings to an otherwise frozen model, and in-context learning (ICL), +in which demonstrations of the task are provided to the model in natural +language without any additional training. Recently, Singhal et al. (2022) +propose ``instruction prompt tuning'' (IPT), which combines PT with ICL by +concatenating a natural language demonstration with learned prompt embeddings. +While all of these methods have proven effective on different tasks, how they +interact with each other remains unexplored. In this paper, we empirically +study when and how in-context examples improve prompt tuning by measuring the +effectiveness of ICL, PT, and IPT on five text generation tasks with multiple +base language models. We observe that (1) IPT does \emph{not} always outperform +PT, and in fact requires the in-context demonstration to be semantically +similar to the test input to yield improvements; (2) PT is unstable and +exhibits high variance, but combining PT and ICL (into IPT) consistently +reduces variance across all five tasks; and (3) prompts learned for a specific +source task via PT exhibit positive transfer when paired with in-context +examples of a different target task. Our results offer actionable insights on +choosing a suitable parameter-efficient adaptation method for a given task. 
+" +Larger language models do in-context learning differently,Jerry Wei,http://arxiv.org/pdf/2303.03846v2.pdf,2023-03-07,['cs.cl'],2303.03846v2.pdf," We study how in-context learning (ICL) in language models is affected by +semantic priors versus input-label mappings. We investigate two setups-ICL with +flipped labels and ICL with semantically-unrelated labels-across various model +families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments +on ICL with flipped labels show that overriding semantic priors is an emergent +ability of model scale. While small language models ignore flipped labels +presented in-context and thus rely primarily on semantic priors from +pretraining, large models can override semantic priors when presented with +in-context exemplars that contradict priors, despite the stronger semantic +priors that larger models may hold. We next study semantically-unrelated label +ICL (SUL-ICL), in which labels are semantically unrelated to their inputs +(e.g., foo/bar instead of negative/positive), thereby forcing language models +to learn the input-label mappings shown in in-context exemplars in order to +perform the task. The ability to do SUL-ICL also emerges primarily with scale, +and large-enough language models can even perform linear classification in a +SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that +instruction tuning strengthens both the use of semantic priors and the capacity +to learn input-label mappings, but more of the former. +" +How Many Demonstrations Do You Need for In-context Learning?,Jiuhai Chen,http://arxiv.org/pdf/2303.08119v3.pdf,2023-03-14,['cs.ai'],2303.08119v3.pdf," Large language models (LLMs) are capable to perform complex reasoning by +in-context learning (ICL) when provided with a few input-output demonstrations +(demos) and more powerful when intermediate reasoning steps (""chain of thoughts +(CoT)"") of the demos are given. Is it necessary to use multi-demo in ICL? In +this paper, we study ICL using fewer demos for each test query on the tasks +in~\cite{wei2022chain}. Surprisingly, we do not observe significant degradation +when using only one randomly chosen demo. To study this phenomenon, for each +test query, we categorize demos into ""correct demos"" leading to the correct +answer, and ""wrong demos"" resulting in wrong answers. Our analysis reveals an +inherent bias in those widely studied datasets: most demos are correct for a +majority of test queries, which explains the good performance of using one +random demo. Moreover, ICL (with and w/o CoT) using only one correct demo +significantly outperforms all-demo ICL adopted by most previous works, +indicating the weakness of LLMs in finding correct demo(s) for input queries, +which is difficult to evaluate on the biased datasets. Furthermore, we observe +a counterintuitive behavior of ICL using multi-demo, i.e., its accuracy +degrades(improves) when given more correct(wrong) demos. This implies that ICL +can be easily misguided by interference among demos and their spurious +correlations. Our analyses highlight several fundamental challenges that need +to be addressed in LLMs training, ICL, and benchmark design. 
+" +Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions,Jia-Hong Huang,http://arxiv.org/pdf/2304.03147v1.pdf,2023-04-06,"['cs.cv', 'cs.ai']",2304.03147v1.pdf," Deep neural networks have been critical in the task of Visual Question +Answering (VQA), with research traditionally focused on improving model +accuracy. Recently, however, there has been a trend towards evaluating the +robustness of these models against adversarial attacks. This involves assessing +the accuracy of VQA models under increasing levels of noise in the input, which +can target either the image or the proposed query question, dubbed the main +question. However, there is currently a lack of proper analysis of this aspect +of VQA. This work proposes a new method that utilizes semantically related +questions, referred to as basic questions, acting as noise to evaluate the +robustness of VQA models. It is hypothesized that as the similarity of a basic +question to the main question decreases, the level of noise increases. To +generate a reasonable noise level for a given main question, a pool of basic +questions is ranked based on their similarity to the main question, and this +ranking problem is cast as a LASSO optimization problem. Additionally, this +work proposes a novel robustness measure, R_score, and two basic question +datasets to standardize the analysis of VQA model robustness. The experimental +results demonstrate that the proposed evaluation method effectively analyzes +the robustness of VQA models. Moreover, the experiments show that in-context +learning with a chain of basic questions can enhance model accuracy. +" +GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information,Qiao Jin,http://arxiv.org/pdf/2304.09667v3.pdf,2023-04-19,"['cs.cl', 'cs.ai', 'q-bio.gn']",2304.09667v3.pdf," While large language models (LLMs) have been successfully applied to various +tasks, they still face challenges with hallucinations. Augmenting LLMs with +domain-specific tools such as database utilities can facilitate easier and more +precise access to specialized knowledge. In this paper, we present GeneGPT, a +novel method for teaching LLMs to use the Web APIs of the National Center for +Biotechnology Information (NCBI) for answering genomics questions. +Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs +by in-context learning and an augmented decoding algorithm that can detect and +execute API calls. Experimental results show that GeneGPT achieves +state-of-the-art performance on eight tasks in the GeneTuring benchmark with an +average score of 0.83, largely surpassing retrieval-augmented LLMs such as the +new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as +well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) +API demonstrations have good cross-task generalizability and are more useful +than documentations for in-context learning; (2) GeneGPT can generalize to +longer chains of API calls and answer multi-hop questions in GeneHop, a novel +dataset introduced in this work; (3) Different types of errors are enriched in +different tasks, providing valuable insights for future improvements. 
+" +DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction,Mohammadreza Pourreza,http://arxiv.org/pdf/2304.11015v3.pdf,2023-04-21,"['cs.cl', 'cs.ai', 'cs.db', 'cs.hc']",2304.11015v3.pdf," There is currently a significant gap between the performance of fine-tuned +models and prompting approaches using Large Language Models (LLMs) on the +challenging task of text-to-SQL, as evaluated on datasets such as Spider. To +improve the performance of LLMs in the reasoning process, we study how +decomposing the task into smaller sub-tasks can be effective. In particular, we +show that breaking down the generation problem into sub-problems and feeding +the solutions of those sub-problems into LLMs can be an effective approach for +significantly improving their performance. Our experiments with three LLMs show +that this approach consistently improves their simple few-shot performance by +roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the +holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 +and the new SOTA at the time of this writing using our approach is 85.3. Our +approach with in-context learning beats many heavily fine-tuned models by at +least 5%. Additionally, when evaluated on the BIRD benchmark, our approach +achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test +set. +" +Few-shot In-context Learning for Knowledge Base Question Answering,Tianle Li,http://arxiv.org/pdf/2305.01750v2.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.01750v2.pdf," Question answering over knowledge bases is considered a difficult problem due +to the challenge of generalizing to a wide variety of possible natural language +questions. Additionally, the heterogeneity of knowledge base schema items +between different knowledge bases often necessitates specialized training for +different knowledge base question-answering (KBQA) datasets. To handle +questions over diverse KBQA datasets with a unified training-free framework, we +propose KB-BINDER, which for the first time enables few-shot in-context +learning over KBQA tasks. Firstly, KB-BINDER leverages large language models +like Codex to generate logical forms as the draft for a specific question by +imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge +base to bind the generated draft to an executable one with BM25 score matching. +The experimental results on four public heterogeneous KBQA datasets show that +KB-BINDER can achieve a strong performance with only a few in-context +demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even +outperform the state-of-the-art trained models. On GrailQA and WebQSP, our +model is also on par with other fully-trained models. We believe KB-BINDER can +serve as an important baseline for future research. Our code is available at +https://github.com/ltl3A87/KB-BINDER. +" +How Do In-Context Examples Affect Compositional Generalization?,Shengnan An,http://arxiv.org/pdf/2305.04835v3.pdf,2023-05-08,"['cs.cl', 'cs.ai']",2305.04835v3.pdf," Compositional generalization--understanding unseen combinations of seen +primitives--is an essential reasoning capability in human intelligence. The AI +community mainly studies this capability by fine-tuning neural networks on lots +of training samples, while it is still unclear whether and how in-context +learning--the prevailing few-shot paradigm based on large language +models--exhibits compositional generalization. 
In this paper, we present CoFe, +a test suite to investigate in-context compositional generalization. We find +that the compositional generalization performance can be easily affected by the +selection of in-context examples, thus raising the research question what the +key factors are to make good in-context examples for compositional +generalization. We study three potential factors: similarity, diversity and +complexity. Our systematic experiments indicate that in-context examples should +be structurally similar to the test case, diverse from each other, and +individually simple. Furthermore, two strong limitations are observed: +in-context compositional generalization on fictional words is much weaker than +that on commonly used ones; it is still critical that the in-context examples +should cover required linguistic structures, even though the backbone model has +been pre-trained on large corpus. We hope our analysis would facilitate the +understanding and utilization of in-context learning paradigm. +" +Symbol tuning improves in-context learning in language models,Jerry Wei,http://arxiv.org/pdf/2305.08298v1.pdf,2023-05-15,['cs.cl'],2305.08298v1.pdf," We present symbol tuning - finetuning language models on in-context +input-label pairs where natural language labels (e.g., ""positive/negative +sentiment"") are replaced with arbitrary symbols (e.g., ""foo/bar""). Symbol +tuning leverages the intuition that when a model cannot use instructions or +natural language labels to figure out a task, it must instead do so by learning +the input-label mappings. + We experiment with symbol tuning across Flan-PaLM models up to 540B +parameters and observe benefits across various settings. First, symbol tuning +boosts performance on unseen in-context learning tasks and is much more robust +to underspecified prompts, such as those without instructions or without +natural language labels. Second, symbol-tuned models are much stronger at +algorithmic reasoning tasks, with up to 18.2% better performance on the List +Functions benchmark and up to 15.3% better performance on the Simple Turing +Concepts benchmark. Finally, symbol-tuned models show large improvements in +following flipped-labels presented in-context, meaning that they are more +capable of using in-context information to override prior semantic knowledge. +" +Text Classification via Large Language Models,Xiaofei Sun,http://arxiv.org/pdf/2305.08377v3.pdf,2023-05-15,['cs.cl'],2305.08377v3.pdf," Despite the remarkable success of large-scale Language Models (LLMs) such as +GPT-3, their performances still significantly underperform fine-tuned models in +the task of text classification. This is due to (1) the lack of reasoning +ability in addressing complex linguistic phenomena (e.g., intensification, +contrast, irony etc); (2) limited number of tokens allowed in in-context +learning. + In this paper, we introduce Clue And Reasoning Prompting (CARP). CARP adopts +a progressive reasoning strategy tailored to addressing the complex linguistic +phenomena involved in text classification: CARP first prompts LLMs to find +superficial clues (e.g., keywords, tones, semantic relations, references, etc), +based on which a diagnostic reasoning process is induced for final decisions. 
+To further address the limited-token issue, CARP uses a fine-tuned model on the +supervised dataset for $k$NN demonstration search in the in-context learning, +allowing the model to take the advantage of both LLM's generalization ability +and the task-specific evidence provided by the full labeled dataset. +Remarkably, CARP yields new SOTA performances on 4 out of 5 widely-used +text-classification benchmarks, 97.39 (+1.24) on SST-2, 96.40 (+0.72) on +AGNews, 98.78 (+0.25) on R8 and 96.95 (+0.6) on R52, and a performance +comparable to SOTA on MR (92.39 v.s. 93.3). More importantly, we find that CARP +delivers impressive abilities on low-resource and domain-adaptation setups. +Specifically, using 16 examples per class, CARP achieves comparable +performances to supervised models with 1,024 examples per class. +" +Exploring In-Context Learning Capabilities of Foundation Models for Generating Knowledge Graphs from Text,Hanieh Khorashadizadeh,http://arxiv.org/pdf/2305.08804v1.pdf,2023-05-15,['cs.cl'],2305.08804v1.pdf," Knowledge graphs can represent information about the real-world using +entities and their relations in a structured and semantically rich manner and +they enable a variety of downstream applications such as question-answering, +recommendation systems, semantic search, and advanced analytics. However, at +the moment, building a knowledge graph involves a lot of manual effort and thus +hinders their application in some situations and the automation of this process +might benefit especially for small organizations. Automatically generating +structured knowledge graphs from a large volume of natural language is still a +challenging task and the research on sub-tasks such as named entity extraction, +relation extraction, entity and relation linking, and knowledge graph +construction aims to improve the state of the art of automatic construction and +completion of knowledge graphs from text. The recent advancement of foundation +models with billions of parameters trained in a self-supervised manner with +large volumes of training data that can be adapted to a variety of downstream +tasks has helped to demonstrate high performance on a large range of Natural +Language Processing (NLP) tasks. In this context, one emerging paradigm is +in-context learning where a language model is used as it is with a prompt that +provides instructions and some examples to perform a task without changing the +parameters of the model using traditional approaches such as fine-tuning. This +way, no computing resources are needed for re-training/fine-tuning the models +and the engineering effort is minimal. Thus, it would be beneficial to utilize +such capabilities for generating knowledge graphs from text. +" +"What In-Context Learning ""Learns"" In-Context: Disentangling Task Recognition and Task Learning",Jane Pan,http://arxiv.org/pdf/2305.09731v1.pdf,2023-05-16,"['cs.cl', 'cs.lg']",2305.09731v1.pdf," Large language models (LLMs) exploit in-context learning (ICL) to solve tasks +with only a few demonstrations, but its mechanisms are not yet well-understood. +Some works suggest that LLMs only recall already learned concepts from +pre-training, while others hint that ICL performs implicit learning over +demonstrations. We characterize two ways through which ICL leverages +demonstrations. 
Task recognition (TR) captures the extent to which LLMs can +recognize a task through demonstrations -- even without ground-truth labels -- +and apply their pre-trained priors, whereas task learning (TL) is the ability +to capture new input-label mappings unseen in pre-training. Using a wide range +of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we +design controlled experiments to disentangle the roles of TR and TL in ICL. We +show that (1) models can achieve non-trivial performance with only TR, and TR +does not further improve with larger models or more demonstrations; (2) LLMs +acquire TL as the model scales, and TL's performance consistently improves with +more demonstrations in context. Our findings unravel two different forces +behind ICL and we advocate for discriminating them in future ICL research due +to their distinct nature. +" +Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning,Dong-Ho Lee,http://arxiv.org/pdf/2305.10613v3.pdf,2023-05-17,['cs.cl'],2305.10613v3.pdf," Temporal knowledge graph (TKG) forecasting benchmarks challenge models to +predict future facts using knowledge of past facts. In this paper, we apply +large language models (LLMs) to these benchmarks using in-context learning +(ICL). We investigate whether and to what extent LLMs can be used for TKG +forecasting, especially without any fine-tuning or explicit modules for +capturing structural and temporal information. For our experiments, we present +a framework that converts relevant historical facts into prompts and generates +ranked predictions using token probabilities. Surprisingly, we observe that +LLMs, out-of-the-box, perform on par with state-of-the-art TKG models carefully +designed and trained for TKG forecasting. Our extensive evaluation presents +performances across several models and datasets with different characteristics, +compares alternative heuristics for preparing contextual information, and +contrasts to prominent TKG methods and simple frequency and recency baselines. +We also discover that using numerical indices instead of entity/relation names, +i.e., hiding semantic information, does not significantly affect the +performance ($\pm$0.4\% Hit@1). This shows that prior semantic knowledge is +unnecessary; instead, LLMs can leverage the existing patterns in the context to +achieve such performance. Our analysis also reveals that ICL enables LLMs to +learn irregular patterns from the historical context, going beyond simple +predictions based on common or recent information. +" +Learning In-context Learning for Named Entity Recognition,Jiawei Chen,http://arxiv.org/pdf/2305.11038v3.pdf,2023-05-18,['cs.cl'],2305.11038v3.pdf," Named entity recognition in real-world applications suffers from the +diversity of entity types, the emergence of new entity types, and the lack of +high-quality annotations. To address the above problems, this paper proposes an +in-context learning-based NER approach, which can effectively inject in-context +NER ability into PLMs and recognize entities of novel types on-the-fly using +only a few demonstrative instances. Specifically, we model PLMs as a +meta-function $\mathcal{ \lambda_ {\text{instruction, demonstrations, text}}. +M}$, and a new entity extractor can be implicitly constructed by applying new +instruction and demonstrations to PLMs, i.e., $\mathcal{ (\lambda . 
M) +}$(instruction, demonstrations) $\to$ $\mathcal{F}$ where $\mathcal{F}$ will be +a new entity extractor, i.e., $\mathcal{F}$: text $\to$ entities. To inject the +above in-context NER ability into PLMs, we propose a meta-function pre-training +algorithm, which pre-trains PLMs by comparing the (instruction, +demonstration)-initialized extractor with a surrogate golden extractor. +Experimental results on 4 few-shot NER datasets show that our method can +effectively inject in-context NER ability into PLMs and significantly +outperforms the PLMs+fine-tuning counterparts. +" +PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning,Chengfeng Dou,http://arxiv.org/pdf/2305.11508v2.pdf,2023-05-19,"['cs.cl', 'cs.ai', 'i.2.7']",2305.11508v2.pdf," The patient-centered medical dialogue systems strive to offer diagnostic +interpretation services to users who are less knowledgeable about medical +knowledge, through emphasizing the importance of providing responses specific +to the patients. It is difficult for the large language models (LLMs) to +guarantee the specificity of responses in spite of its promising performance +even in some tasks in medical field. Inspired by in-context learning, we +propose PlugMed, a Plug-and-Play Medical Dialogue System, for addressing this +challenge. PlugMed is equipped with two modules, the prompt generation (PG) +module and the response ranking (RR) module, to enhances LLMs' dialogue +strategies for improving the specificity of the dialogue. The PG module is +designed to stimulate the imitative ability of LLMs by providing them with real +dialogues from similar patients as prompts. The RR module incorporates +fine-tuned small model as response filter to enable the selection of +appropriate responses generated by LLMs. Furthermore, we introduce a new +evaluation method based on matching both user's intent and high-frequency +medical term to effectively assess the specificity of the responses. We conduct +experimental evaluations on three medical dialogue datasets, and the results, +including both automatic and human evaluation, demonstrate the effectiveness of +our approach. +" +ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings,Shibo Hao,http://arxiv.org/pdf/2305.11554v3.pdf,2023-05-19,"['cs.cl', 'cs.lg']",2305.11554v3.pdf," Augmenting large language models (LLMs) with external tools has emerged as a +promising approach to solving complex problems. However, traditional methods, +which finetune LLMs with tool demonstration data, can be both costly and +restricted to a predefined set of tools. Recent in-context learning paradigm +alleviates these issues, but the limited context length only allows for a few +shots of demonstrations, leading to suboptimal understandings of the tools. +Moreover, when there are numerous tools to choose from, in-context learning +could completely fail to work. In this paper, we propose an alternative +approach, $\textbf{ToolkenGPT}$, which combines the benefits of both sides. Our +approach represents each $\underline{tool}$ as a to$\underline{ken}$ +($\textit{toolken}$) and learns an embedding for it, enabling tool calls in the +same way as generating a regular word token. Once a toolken is triggered, the +LLM is prompted to complete arguments for the tool to execute. ToolkenGPT +offers the flexibility to plug in an arbitrary number of tools by expanding the +set of toolkens on the fly. 
In addition, it improves tool use by allowing +extensive demonstration data for learning the toolken embeddings. In diverse +domains, including numerical reasoning, knowledge-based question answering, and +embodied plan generation, our approach effectively augments LLMs with tools and +substantially outperforms various latest baselines. ToolkenGPT demonstrates the +promising ability to use relevant tools from a large tool set in complex +scenarios. +" +Iterative Forward Tuning Boosts In-context Learning in Language Models,Jiaxi Yang,http://arxiv.org/pdf/2305.13016v2.pdf,2023-05-22,['cs.cl'],2305.13016v2.pdf," Large language models (LLMs) have exhibited an emergent in-context learning +(ICL) ability. However, the ICL models that can solve ordinary cases are hardly +extended to solve more complex tasks by processing the demonstration examples +once. This single-turn ICL is incoordinate with the decision making process of +humans by learning from analogy. In this paper, we propose an effective and +efficient two-stage framework to boost ICL in LLMs by exploiting a dual form +between Transformer attention and gradient descent-based optimization. +Concretely, we divide the ICL process into ""Deep-Thinking"" and inference +stages. The ""Deep-Thinking"" stage performs iterative forward optimization of +demonstrations, which is expected to boost the reasoning abilities of LLMs at +test time by ""thinking"" demonstrations multiple times. It produces accumulated +meta-gradients by manipulating the Key-Value matrices in the self-attention +modules of the Transformer. Then, the inference stage only takes the test query +as input without concatenating demonstrations and applies the learned +meta-gradients through attention for output prediction. In this way, +demonstrations are not required during the inference stage since they are +already learned and stored in the definitive meta-gradients. LLMs can be +effectively and efficiently adapted to downstream tasks. Extensive experiments +on ten classification and multiple-choice datasets show that our method +achieves substantially better performance than standard ICL in terms of both +accuracy and efficiency. +" +Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations,Chenglei Si,http://arxiv.org/pdf/2305.13299v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.13299v1.pdf," In-context learning (ICL) is an important paradigm for adapting large +language models (LLMs) to new tasks, but the generalization behavior of ICL +remains poorly understood. We investigate the inductive biases of ICL from the +perspective of feature bias: which feature ICL is more likely to use given a +set of underspecified demonstrations in which two features are equally +predictive of the labels. First, we characterize the feature biases of GPT-3 +models by constructing underspecified demonstrations from a range of NLP +datasets and feature combinations. We find that LLMs exhibit clear feature +biases - for example, demonstrating a strong bias to predict labels according +to sentiment rather than shallow lexical features, like punctuation. Second, we +evaluate the effect of different interventions that are designed to impose an +inductive bias in favor of a particular feature, such as adding a natural +language instruction or using semantically relevant label words. We find that, +while many interventions can influence the learner to prefer a particular +feature, it can be difficult to overcome strong prior biases. 
Overall, our +results provide a broader picture of the types of features that ICL may be more +likely to exploit and how to impose inductive biases that are better aligned +with the intended task. +" +Exploring Diverse In-Context Configurations for Image Captioning,Xu Yang,http://arxiv.org/pdf/2305.14800v5.pdf,2023-05-24,['cs.cv'],2305.14800v5.pdf," After discovering that Language Models (LMs) can be good in-context few-shot +learners, numerous strategies have been proposed to optimize in-context +sequence configurations. Recently, researchers in Vision-Language (VL) domains +also develop their few-shot learners, while they only use the simplest way, +ie., randomly sampling, to configure in-context image-text pairs. In order to +explore the effects of varying configurations on VL in-context learning, we +devised four strategies for image selection and four for caption assignment to +configure in-context image-text pairs for image captioning. Here Image +Captioning is used as the case study since it can be seen as the +visually-conditioned LM. Our comprehensive experiments yield two +counter-intuitive but valuable insights, highlighting the distinct +characteristics of VL in-context learning due to multi-modal synergy, as +compared to the NLP case. Furthermore, in our exploration of optimal +combination strategies, we observed an average performance enhancement of 20.9 +of CIDEr scores compared to the baseline. The code is given in +https://github.com/yongliang-wu/ExploreCfg. +" +Estimating Large Language Model Capabilities without Labeled Test Data,Harvey Yiyun Fu,http://arxiv.org/pdf/2305.14802v2.pdf,2023-05-24,['cs.cl'],2305.14802v2.pdf," Large Language Models (LLMs) have the impressive ability to perform +in-context learning (ICL) from only a few examples, but the success of ICL +varies widely from task to task. Thus, it is important to quickly determine +whether ICL is applicable to a new task, but directly evaluating ICL accuracy +can be expensive in situations where test data is expensive to annotate -- the +exact situations where ICL is most appealing. In this paper, we propose the +task of ICL accuracy estimation, in which we predict the accuracy of an LLM +when doing in-context learning on a new task given only unlabeled test data for +that task. To perform ICL accuracy estimation, we propose a method that trains +a meta-model using LLM confidence scores as features. We compare our method to +several strong accuracy estimation baselines on a new benchmark that covers 4 +LLMs and 3 task collections. The meta-model improves over all baselines across +8 out of 12 settings and achieves the same estimation performance as directly +evaluating on 40 collected labeled test examples per task. At the same time, no +existing approach provides an accurate and reliable ICL accuracy estimation in +every setting, highlighting the need for better ways to measure the uncertainty +of LLM predictions. +" +BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer,Akari Asai,http://arxiv.org/pdf/2305.14857v1.pdf,2023-05-24,['cs.cl'],2305.14857v1.pdf," Despite remarkable advancements in few-shot generalization in natural +language processing, most models are developed and evaluated primarily in +English. To facilitate research on few-shot cross-lingual transfer, we +introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across +54 languages in a sequence-to-sequence format and provides a fixed set of +few-shot examples and instructions. 
BUFFET is designed to establish a rigorous +and equitable evaluation framework for few-shot cross-lingual transfer across a +broad range of tasks and languages. Using BUFFET, we perform thorough +evaluations of state-of-the-art multilingual large language models with +different transfer methods, namely in-context learning and fine-tuning. Our +findings reveal significant room for improvement in few-shot in-context +cross-lingual transfer. In particular, ChatGPT with in-context learning often +performs worse than much smaller mT5-base models fine-tuned on English task +data and few-shot in-language examples. Our analysis suggests various avenues +for future research in few-shot cross-lingual transfer, such as improved +pretraining, understanding, and future evaluations. +" +Adversarial Demonstration Attacks on Large Language Models,Jiongxiao Wang,http://arxiv.org/pdf/2305.14950v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2305.14950v2.pdf," With the emergence of more powerful large language models (LLMs), such as +ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence +in leveraging these models for specific tasks by utilizing data-label pairs as +precondition prompts. While incorporating demonstrations can greatly enhance +the performance of LLMs across various tasks, it may introduce a new security +concern: attackers can manipulate only the demonstrations without changing the +input to perform an attack. In this paper, we investigate the security concern +of ICL from an adversarial perspective, focusing on the impact of +demonstrations. We propose a novel attack method named advICL, which aims to +manipulate only the demonstration without changing the input to mislead the +models. Our results demonstrate that as the number of demonstrations increases, +the robustness of in-context learning would decrease. Additionally, we also +identify the intrinsic property of the demonstrations is that they can be used +(prepended) with different inputs. As a result, it introduces a more practical +threat model in which an attacker can attack the test input example even +without knowing and manipulating it. To achieve it, we propose the transferable +version of advICL, named Transferable-advICL. Our experiment shows that the +adversarial demonstration generated by Transferable-advICL can successfully +attack the unseen test input examples. We hope that our study reveals the +critical security risks associated with ICL and underscores the need for +extensive research on the robustness of ICL, particularly given its increasing +significance in the advancement of LLMs. +" +Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations,Wei-Lin Chen,http://arxiv.org/pdf/2305.15035v2.pdf,2023-05-24,['cs.cl'],2305.15035v2.pdf," Large language models (LLMs) have exhibited striking in-context learning +(ICL) ability to adapt to target tasks with a few input-output demonstrations. +For better ICL, different methods are proposed to select representative +demonstrations from existing training corpora. However, such settings are not +aligned with real-world practices, as end-users usually query LMs without +access to demonstration pools. In this work, we introduce Self-ICL -- a simple +framework which bootstraps LMs' intrinsic capabilities to perform zero-shot +ICL. Given a test input, Self-ICL first prompts the model to generate +pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via +zero-shot prompting. 
Finally, we perform ICL for the test input with the +pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard +tasks shows Self-ICL outperforms zero-shot baselines on both average accuracy +and head-to-head comparison. Moreover, with zero-shot chain-of-thought, +Self-ICL achieves results comparable to using real demonstrations. +Additionally, we conduct a range of analyses to validate Self-ICL's +effectiveness and provide insights for its behaviors under different settings. +" +Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing,Shufan Wang,http://arxiv.org/pdf/2305.15338v1.pdf,2023-05-24,"['cs.ai', 'cs.cl']",2305.15338v1.pdf," In executable task-oriented semantic parsing, the system aims to translate +users' utterances in natural language to machine-interpretable programs (API +calls) that can be executed according to pre-defined API specifications. With +the popularity of Large Language Models (LLMs), in-context learning offers a +strong baseline for such scenarios, especially in data-limited regimes. +However, LLMs are known to hallucinate and therefore pose a formidable +challenge in constraining generated content. Thus, it remains uncertain if LLMs +can effectively perform task-oriented utterance-to-API generation where +respecting API's structural and task-specific constraints is crucial. + In this work, we seek to measure, analyze and mitigate such constraints +violations. First, we identify the categories of various constraints in +obtaining API-semantics from task-oriented utterances, and define fine-grained +metrics that complement traditional ones. Second, we leverage these metrics to +conduct a detailed error analysis of constraints violations seen in +state-of-the-art LLMs, which motivates us to investigate two mitigation +strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware +Constrained Decoding (API-CD). Our experiments show that these strategies are +effective at reducing constraints violations and improving the quality of the +generated API calls, but require careful consideration given their +implementation complexity and latency. +" +What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks,Taicheng Guo,http://arxiv.org/pdf/2305.18365v2.pdf,2023-05-27,"['cs.cl', 'cs.ai']",2305.18365v2.pdf," Large Language Models (LLMs) with strong abilities in natural language +processing tasks have emerged and have been applied in various kinds of areas +such as science, finance and software engineering. However, the capability of +LLMs to advance the field of chemistry remains unclear. In this paper, rather +than pursuing state-of-the-art performance, we aim to evaluate capabilities of +LLMs in a wide range of tasks across the chemistry domain. We identify three +key chemistry-related capabilities including understanding, reasoning and +explaining to explore in LLMs and establish a benchmark containing eight +chemistry tasks. Our analysis draws on widely recognized datasets facilitating +a broad exploration of the capacities of LLMs within the context of practical +chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are +evaluated for each chemistry task in zero-shot and few-shot in-context learning +settings with carefully selected demonstration examples and specially crafted +prompts. Our investigation found that GPT-4 outperformed other models and LLMs +exhibit different competitive levels in eight chemistry tasks. 
In addition to +the key findings from the comprehensive benchmark analysis, our work provides +insights into the limitation of current LLMs and the impact of in-context +learning settings on LLMs' performance across various chemistry tasks. The code +and datasets used in this study are available at +https://github.com/ChemFoundationModels/ChemLLMBench. +" +Mitigating Label Biases for In-context Learning,Yu Fei,http://arxiv.org/pdf/2305.19148v3.pdf,2023-05-28,"['cs.cl', 'cs.ai', 'cs.lg']",2305.19148v3.pdf," Various design settings for in-context learning (ICL), such as the choice and +order of the in-context examples, can bias a model toward a particular +prediction without being reflective of an understanding of the task. While many +studies discuss these design choices, there have been few systematic +investigations into categorizing them and mitigating their impact. In this +work, we define a typology for three types of label biases in ICL for text +classification: vanilla-label bias, context-label bias, and domain-label bias +(which we conceptualize and detect for the first time). + Our analysis demonstrates that prior label bias calibration methods fall +short of addressing all three types of biases. Specifically, domain-label bias +restricts LLMs to random-level performance on many tasks regardless of the +choice of in-context examples. To mitigate the effect of these biases, we +propose a simple bias calibration method that estimates a language model's +label bias using random in-domain words from the task corpus. After controlling +for this estimated bias when making predictions, our novel domain-context +calibration significantly improves the ICL performance of GPT-J and GPT-3 on a +wide range of tasks. The gain is substantial on tasks with large domain-label +bias (up to 37% in Macro-F1). Furthermore, our results generalize to models +with different scales, pretraining methods, and manually-designed task +instructions, showing the prevalence of label biases in ICL. +" +"What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization",Yufeng Zhang,http://arxiv.org/pdf/2305.19420v2.pdf,2023-05-30,"['stat.ml', 'cs.lg']",2305.19420v2.pdf," In this paper, we conduct a comprehensive study of In-Context Learning (ICL) +by addressing several open questions: (a) What type of ICL estimator is learned +by large language models? (b) What is a proper performance metric for ICL and +what is the error rate? (c) How does the transformer architecture enable ICL? +To answer these questions, we adopt a Bayesian view and formulate ICL as a +problem of predicting the response corresponding to the current covariate, +given a number of examples drawn from a latent variable model. To answer (a), +we show that, without updating the neural network parameters, ICL implicitly +implements the Bayesian model averaging algorithm, which is proven to be +approximately parameterized by the attention mechanism. For (b), we analyze the +ICL performance from an online learning perspective and establish a +$\mathcal{O}(1/T)$ regret bound for perfectly pretrained ICL, where $T$ is the +number of examples in the prompt. To answer (c), we show that, in addition to +encoding Bayesian model averaging via attention, the transformer architecture +also enables a fine-grained statistical analysis of pretraining under realistic +assumptions. 
In particular, we prove that the error of pretrained model is +bounded by a sum of an approximation error and a generalization error, where +the former decays to zero exponentially as the depth grows, and the latter +decays to zero sublinearly with the number of tokens in the pretraining +dataset. Our results provide a unified understanding of the transformer and its +ICL ability with bounds on ICL regret, approximation, and generalization, which +deepens our knowledge of these essential aspects of modern language models. +" +Augmenting Language Models with Long-Term Memory,Weizhi Wang,http://arxiv.org/pdf/2306.07174v1.pdf,2023-06-12,['cs.cl'],2306.07174v1.pdf," Existing large language models (LLMs) can only afford fix-sized inputs due to +the input length limit, preventing them from utilizing rich long-context +information from past inputs. To address this, we propose a framework, Language +Models Augmented with Long-Term Memory (LongMem), which enables LLMs to +memorize long history. We design a novel decoupled network architecture with +the original backbone LLM frozen as a memory encoder and an adaptive residual +side-network as a memory retriever and reader. Such a decoupled memory design +can easily cache and update long-term past contexts for memory retrieval +without suffering from memory staleness. Enhanced with memory-augmented +adaptation training, LongMem can thus memorize long past context and use +long-term memory for language modeling. The proposed memory retrieval module +can handle unlimited-length context in its memory bank to benefit various +downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k +tokens and thus cache many-shot extra demonstration examples as long-form +memory for in-context learning. Experiments show that our method outperforms +strong long-context models on ChapterBreak, a challenging long-context modeling +benchmark, and achieves remarkable improvements on memory-augmented in-context +learning over LLMs. The results demonstrate that the proposed method is +effective in helping language models to memorize and utilize long-form +contents. Our code is open-sourced at https://aka.ms/LongMem. +" +Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression,Allan Raventós,http://arxiv.org/pdf/2306.15063v2.pdf,2023-06-26,"['cs.lg', 'cs.ai', 'cs.cl']",2306.15063v2.pdf," Pretrained transformers exhibit the remarkable ability of in-context learning +(ICL): they can learn tasks from just a few examples provided in the prompt +without updating any weights. This raises a foundational question: can ICL +solve fundamentally $\textit{new}$ tasks that are very different from those +seen during pretraining? To probe this question, we examine ICL's performance +on linear regression while varying the diversity of tasks in the pretraining +dataset. We empirically demonstrate a $\textit{task diversity threshold}$ for +the emergence of ICL. Below this threshold, the pretrained transformer cannot +solve unseen regression tasks, instead behaving like a Bayesian estimator with +the $\textit{non-diverse pretraining task distribution}$ as the prior. Beyond +this threshold, the transformer significantly outperforms this estimator; its +behavior aligns with that of ridge regression, corresponding to a Gaussian +prior over $\textit{all tasks}$, including those not seen during pretraining. 
+Thus, when pretrained on data with task diversity greater than the threshold, +transformers $\textit{can}$ optimally solve fundamentally new tasks in-context. +Importantly, this capability hinges on it deviating from the Bayes optimal +estimator with the pretraining distribution as the prior. This study also +explores the effect of regularization, model capacity and task structure and +underscores, in a concrete example, the critical role of task diversity, +alongside data and model scale, in the emergence of ICL. Code is available at +https://github.com/mansheej/icl-task-diversity. +" +Understanding In-Context Learning via Supportive Pretraining Data,Xiaochuang Han,http://arxiv.org/pdf/2306.15091v1.pdf,2023-06-26,['cs.cl'],2306.15091v1.pdf," In-context learning (ICL) improves language models' performance on a variety +of NLP tasks by simply demonstrating a handful of examples at inference time. +It is not well understood why ICL ability emerges, as the model has never been +specifically trained on such demonstrations. Unlike prior work that explores +implicit mechanisms behind ICL, we study ICL via investigating the pretraining +data. Specifically, we first adapt an iterative, gradient-based approach to +find a small subset of pretraining data that supports ICL. We observe that a +continued pretraining on this small subset significantly improves the model's +ICL ability, by up to 18%. We then compare the supportive subset constrastively +with random subsets of pretraining data and discover: (1) The supportive +pretraining data to ICL do not have a higher domain relevance to downstream +tasks. (2) The supportive pretraining data have a higher mass of rarely +occurring, long-tail tokens. (3) The supportive pretraining data are +challenging examples where the information gain from long-range context is +below average, indicating learning to incorporate difficult long-range context +encourages ICL. Our work takes a first step towards understanding ICL via +analyzing instance-level pretraining data. Our insights have a potential to +enhance the ICL ability of language models by actively guiding the construction +of pretraining data in the future. +" +Schema-learning and rebinding as mechanisms of in-context learning and emergence,Sivaramakrishnan Swaminathan,http://arxiv.org/pdf/2307.01201v1.pdf,2023-06-16,"['cs.cl', 'cs.ai']",2307.01201v1.pdf," In-context learning (ICL) is one of the most powerful and most unexpected +capabilities to emerge in recent transformer-based large language models +(LLMs). Yet the mechanisms that underlie it are poorly understood. In this +paper, we demonstrate that comparable ICL capabilities can be acquired by an +alternative sequence prediction learning method using clone-structured causal +graphs (CSCGs). Moreover, a key property of CSCGs is that, unlike +transformer-based LLMs, they are {\em interpretable}, which considerably +simplifies the task of explaining how ICL works. Specifically, we show that it +uses a combination of (a) learning template (schema) circuits for pattern +completion, (b) retrieving relevant templates in a context-sensitive manner, +and (c) rebinding of novel tokens to appropriate slots in the templates. We go +on to marshall evidence for the hypothesis that similar mechanisms underlie ICL +in LLMs. For example, we find that, with CSCGs as with LLMs, different +capabilities emerge at different levels of overparameterization, suggesting +that overparameterization helps in learning more complex template (schema) +circuits. 
By showing how ICL can be achieved with small models and datasets, we +open up a path to novel architectures, and take a vital step towards a more +general understanding of the mechanics behind this important capability. +" +Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps,Zongxia Li,http://arxiv.org/pdf/2307.05052v1.pdf,2023-07-11,"['cs.cl', 'cs.ai']",2307.05052v1.pdf," We investigate the role of various demonstration components in the in-context +learning (ICL) performance of large language models (LLMs). Specifically, we +explore the impacts of ground-truth labels, input distribution, and +complementary explanations, particularly when these are altered or perturbed. +We build on previous work, which offers mixed findings on how these elements +influence ICL. To probe these questions, we employ explainable NLP (XNLP) +methods and utilize saliency maps of contrastive demonstrations for both +qualitative and quantitative analysis. Our findings reveal that flipping +ground-truth labels significantly affects the saliency, though it's more +noticeable in larger LLMs. Our analysis of the input distribution at a granular +level reveals that changing sentiment-indicative terms in a sentiment analysis +task to neutral ones does not have as substantial an impact as altering +ground-truth labels. Finally, we find that the effectiveness of complementary +explanations in boosting ICL performance is task-dependent, with limited +benefits seen in sentiment analysis tasks compared to symbolic reasoning tasks. +These insights are critical for understanding the functionality of LLMs and +guiding the development of effective demonstrations, which is increasingly +relevant in light of the growing use of LLMs in applications such as ChatGPT. +Our research code is publicly available at https://github.com/paihengxu/XICL. +" +In-context learning for model-free system identification,Marco Forgione,http://arxiv.org/pdf/2308.13380v1.pdf,2023-08-25,"['eess.sy', 'cs.lg', 'cs.sy']",2308.13380v1.pdf," In traditional system identification, we estimate a model of an unknown +dynamical system based on given input/output sequences and available physical +knowledge. Yet, is it also possible to understand the intricacies of dynamical +systems not solely from their input/output patterns, but by observing the +behavior of other systems within the same class? This central question drives +the study presented in this paper. + In response to this query, we introduce a novel paradigm for system +identification, addressing two primary tasks: one-step-ahead prediction and +multi-step simulation. Unlike conventional methods, we do not directly estimate +a model for the specific system. Instead, we pretrain a meta model that +represents a class of dynamical systems. This meta model is trained from a +potentially infinite stream of synthetic data, generated by systems randomly +extracted from a certain distribution. At its core, the meta model serves as an +implicit representation of the main characteristics of a class of dynamical +systems. When provided with a brief context from a new system - specifically, a +short input/output sequence - the meta model implicitly discerns its dynamics, +enabling predictions of its behavior. + The proposed approach harnesses the power of Transformer architectures, +renowned for their in-context learning capabilities in Natural Language +Processing tasks. 
For one-step prediction, a GPT-like decoder-only architecture +is utilized, whereas the simulation problem employs an encoder-decoder +structure. + Initial experimental results affirmatively answer our foundational question, +opening doors to fresh research avenues in system identification. +" +Ambiguity-Aware In-Context Learning with Large Language Models,Lingyu Gao,http://arxiv.org/pdf/2309.07900v1.pdf,2023-09-14,"['cs.cl', 'cs.ir']",2309.07900v1.pdf," In-context learning (ICL) i.e. showing LLMs only a few task-specific +demonstrations has led to downstream gains with no task-specific fine-tuning +required. However, LLMs are sensitive to the choice of prompts, and therefore a +crucial research question is how to select good demonstrations for ICL. One +effective strategy is leveraging semantic similarity between the ICL +demonstrations and test inputs by using a text retriever, which however is +sub-optimal as that does not consider the LLM's existing knowledge about that +task. From prior work (Min et al., 2022), we already know that labels paired +with the demonstrations bias the model predictions. This leads us to our +hypothesis whether considering LLM's existing knowledge about the task, +especially with respect to the output label space can help in a better +demonstration selection strategy. Through extensive experimentation on three +text classification tasks, we find that it is beneficial to not only choose +semantically similar ICL demonstrations but also to choose those demonstrations +that help resolve the inherent label ambiguity surrounding the test example. +Interestingly, we find that including demonstrations that the LLM previously +mis-classified and also fall on the test example's decision boundary, brings +the most performance gain. +" +Are Human-generated Demonstrations Necessary for In-context Learning?,Rui Li,http://arxiv.org/pdf/2309.14681v2.pdf,2023-09-26,"['cs.lg', 'cs.ai']",2309.14681v2.pdf," Despite the promising few-shot ability of large language models (LLMs), the +standard paradigm of In-context Learning (ICL) suffers the disadvantages of +susceptibility to selected demonstrations and the intricacy to generate these +demonstrations. In this paper, we raise the fundamental question that whether +human-generated demonstrations are necessary for ICL. To answer this question, +we propose self-contemplation prompting strategy (SEC), a paradigm free from +human-crafted demonstrations. The key point of SEC is that, instead of using +hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create +demonstrations on their own, based on which the final output is generated. SEC +is a flexible framework and can be adapted to both the vanilla ICL and the +chain-of-thought (CoT), but with greater ease: as the manual-generation process +of both examples and rationale can be saved. Extensive experiments in +arithmetic reasoning, commonsense reasoning, multi-task language understanding, +and code generation benchmarks, show that SEC, which does not require +hand-crafted demonstrations, significantly outperforms the zero-shot learning +strategy, and achieves comparable results to ICL with hand-crafted +demonstrations. This demonstrates that, for many tasks, contemporary LLMs +possess a sufficient level of competence to exclusively depend on their own +capacity for decision making, removing the need for external training data. +Code is available at https://github.com/ruili33/SEC. 
+" +Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning,Mustafa Shukor,http://arxiv.org/pdf/2310.00647v1.pdf,2023-10-01,"['cs.cv', 'cs.mm']",2310.00647v1.pdf," Following the success of Large Language Models (LLMs), Large Multimodal +Models (LMMs), such as the Flamingo model and its subsequent competitors, have +started to emerge as natural steps towards generalist agents. However, +interacting with recent LMMs reveals major limitations that are hardly captured +by the current evaluation benchmarks. Indeed, task performances (e.g., VQA +accuracy) alone do not provide enough clues to understand their real +capabilities, limitations, and to which extent such models are aligned to human +expectations. To refine our understanding of those flaws, we deviate from the +current evaluation paradigm and propose the EvALign-ICL framework, in which we +(1) evaluate 8 recent open-source LMMs (based on the Flamingo architecture such +as OpenFlamingo and IDEFICS) on 5 different axes; hallucinations, abstention, +compositionality, explainability and instruction following. Our evaluation on +these axes reveals major flaws in LMMs. To efficiently address these problems, +and inspired by the success of in-context learning (ICL) in LLMs, (2) we +explore ICL as a solution and study how it affects these limitations. Based on +our ICL study, (3) we push ICL further and propose new multimodal ICL +approaches such as; Multitask-ICL, Chain-of-Hindsight-ICL, and +Self-Correcting-ICL. Our findings are as follows; (1) Despite their success, +LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL +on LMMs flaws is nuanced; despite its effectiveness for improved +explainability, abstention, and instruction following, ICL does not improve +compositional abilities, and actually even amplifies hallucinations. (3) The +proposed ICL variants are promising as post-hoc approaches to efficiently +tackle some of those flaws. The code is available here: +https://evalign-icl.github.io/ +" +Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions,Satwik Bhattamishra,http://arxiv.org/pdf/2310.03016v1.pdf,2023-10-04,"['cs.lg', 'cs.cl']",2310.03016v1.pdf," In order to understand the in-context learning phenomenon, recent works have +adopted a stylized experimental framework and demonstrated that Transformers +can learn gradient-based learning algorithms for various classes of real-valued +functions. However, the limitations of Transformers in implementing learning +algorithms, and their ability to learn other forms of algorithms are not well +understood. Additionally, the degree to which these capabilities are confined +to attention-based models is unclear. Furthermore, it remains to be seen +whether the insights derived from these stylized settings can be extrapolated +to pretrained Large Language Models (LLMs). In this work, we take a step +towards answering these questions by demonstrating the following: (a) On a +test-bed with a variety of Boolean function classes, we find that Transformers +can nearly match the optimal learning algorithm for 'simpler' tasks, while +their performance deteriorates on more 'complex' tasks. Additionally, we find +that certain attention-free models perform (almost) identically to Transformers +on a range of tasks. (b) When provided a teaching sequence, i.e. a set of +examples that uniquely identifies a function in a class, we show that +Transformers learn more sample-efficiently. 
Interestingly, our results show +that Transformers can learn to implement two distinct algorithms to solve a +single task, and can adaptively select the more sample-efficient algorithm +depending on the sequence of in-context examples. (c) Lastly, we show that +extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines +on prediction tasks that are guaranteed to not be in their training set. +" +SEER : A Knapsack approach to Exemplar Selection for In-Context HybridQA,Jonathan Tonglet,http://arxiv.org/pdf/2310.06675v2.pdf,2023-10-10,['cs.cl'],2310.06675v2.pdf," Question answering over hybrid contexts is a complex task, which requires the +combination of information extracted from unstructured texts and structured +tables in various ways. Recently, In-Context Learning demonstrated significant +performance advances for reasoning tasks. In this paradigm, a large language +model performs predictions based on a small set of supporting exemplars. The +performance of In-Context Learning depends heavily on the selection procedure +of the supporting exemplars, particularly in the case of HybridQA, where +considering the diversity of reasoning chains and the large size of the hybrid +contexts becomes crucial. In this work, we present Selection of ExEmplars for +hybrid Reasoning (SEER), a novel method for selecting a set of exemplars that +is both representative and diverse. The key novelty of SEER is that it +formulates exemplar selection as a Knapsack Integer Linear Program. The +Knapsack framework provides the flexibility to incorporate diversity +constraints that prioritize exemplars with desirable attributes, and capacity +constraints that ensure that the prompt size respects the provided capacity +budgets. The effectiveness of SEER is demonstrated on FinQA and TAT-QA, two +real-world benchmarks for HybridQA, where it outperforms previous exemplar +selection methods. +" +How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations,Tianyu Guo,http://arxiv.org/pdf/2310.10616v1.pdf,2023-10-16,['cs.lg'],2310.10616v1.pdf," While large language models based on the transformer architecture have +demonstrated remarkable in-context learning (ICL) capabilities, understandings +of such capabilities are still in an early stage, where existing theory and +mechanistic understanding focus mostly on simple scenarios such as learning +simple function classes. This paper takes initial steps on understanding ICL in +more complex scenarios, by studying learning with representations. Concretely, +we construct synthetic in-context learning problems with a compositional +structure, where the label depends on the input through a possibly complex but +fixed representation function, composed with a linear function that differs in +each instance. By construction, the optimal ICL algorithm first transforms the +inputs by the representation function, and then performs linear ICL on top of +the transformed dataset. We show theoretically the existence of transformers +that approximately implement such algorithms with mild depth and size. +Empirically, we find trained transformers consistently achieve near-optimal ICL +performance in this setting, and exhibit the desired dissection where lower +layers transforms the dataset and upper layers perform linear ICL. 
Through +extensive probing and a new pasting experiment, we further reveal several +mechanisms within the trained transformers, such as concrete copying behaviors +on both the inputs and the representations, linear ICL capability of the upper +layers alone, and a post-ICL representation selection mechanism in a harder +mixture setting. These observed mechanisms align well with our theory and may +shed light on how transformers perform ICL in more realistic scenarios. +" +Demonstrations Are All You Need: Advancing Offensive Content Paraphrasing using In-Context Learning,Anirudh Som,http://arxiv.org/pdf/2310.10707v1.pdf,2023-10-16,"['cs.cl', 'cs.ai']",2310.10707v1.pdf," Paraphrasing of offensive content is a better alternative to content removal +and helps improve civility in a communication environment. Supervised +paraphrasers; however, rely heavily on large quantities of labelled data to +help preserve meaning and intent. They also retain a large portion of the +offensiveness of the original content, which raises questions on their overall +usability. In this paper we aim to assist practitioners in developing usable +paraphrasers by exploring In-Context Learning (ICL) with large language models +(LLMs), i.e., using a limited number of input-label demonstration pairs to +guide the model in generating desired outputs for specific queries. Our study +focuses on key factors such as -- number and order of demonstrations, exclusion +of prompt instruction, and reduction in measured toxicity. We perform +principled evaluation on three datasets, including our proposed Context-Aware +Polite Paraphrase dataset, comprising of dialogue-style rude utterances, polite +paraphrases, and additional dialogue context. We evaluate our approach using +two closed source and one open source LLM. Our results reveal that ICL is +comparable to supervised methods in generation quality, while being +qualitatively better by 25% on human evaluation and attaining lower toxicity by +76%. Also, ICL-based paraphrasers only show a slight reduction in performance +even with just 10% training data. +" +O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language Models,Yuchen Xiao,http://arxiv.org/pdf/2310.14403v1.pdf,2023-10-22,"['cs.ai', 'cs.cl']",2310.14403v1.pdf," Recent advancements in large language models (LLMs) have exhibited promising +performance in solving sequential decision-making problems. By imitating +few-shot examples provided in the prompts (i.e., in-context learning), an LLM +agent can interact with an external environment and complete given tasks +without additional training. However, such few-shot examples are often +insufficient to generate high-quality solutions for complex and long-horizon +tasks, while the limited context length cannot consume larger-scale +demonstrations. To this end, we propose an offline learning framework that +utilizes offline data at scale (e.g, logs of human interactions) to facilitate +the in-context learning performance of LLM agents. We formally define +LLM-powered policies with both text-based approaches and code-based approaches. +We then introduce an Offline Data-driven Discovery and Distillation (O3D) +framework to improve LLM-powered policies without finetuning. O3D automatically +discovers reusable skills and distills generalizable knowledge across multiple +tasks based on offline interaction data, advancing the capability of solving +downstream tasks. 
Empirical results under two interactive decision-making +benchmarks (ALFWorld and WebShop) demonstrate that O3D can notably enhance the +decision-making capabilities of LLMs through the offline discovery and +distillation process, and consistently outperform baselines across various LLMs +with both text-based-policy and code-based-policy. +" +Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models,Deqing Fu,http://arxiv.org/pdf/2310.17086v1.pdf,2023-10-26,"['cs.lg', 'cs.ai', 'cs.cl']",2310.17086v1.pdf," Transformers are remarkably good at in-context learning (ICL) -- learning +from demonstrations without parameter updates -- but how they perform ICL +remains a mystery. Recent work suggests that Transformers may learn in-context +by internally running Gradient Descent, a first-order optimization method. In +this paper, we instead demonstrate that Transformers learn to implement +higher-order optimization methods to perform ICL. Focusing on in-context linear +regression, we show that Transformers learn to implement an algorithm very +similar to Iterative Newton's Method, a higher-order optimization method, +rather than Gradient Descent. Empirically, we show that predictions from +successive Transformer layers closely match different iterations of Newton's +Method linearly, with each middle layer roughly computing 3 iterations. In +contrast, exponentially more Gradient Descent steps are needed to match an +additional Transformers layer; this suggests that Transformers have an +comparable rate of convergence with high-order methods such as Iterative +Newton, which are exponentially faster than Gradient Descent. We also show that +Transformers can learn in-context on ill-conditioned data, a setting where +Gradient Descent struggles but Iterative Newton succeeds. Finally, we show +theoretical results which support our empirical findings and have a close +correspondence with them: we prove that Transformers can implement $k$ +iterations of Newton's method with $\mathcal{O}(k)$ layers. +" +Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time,Zichang Liu,http://arxiv.org/pdf/2310.17157v1.pdf,2023-10-26,['cs.lg'],2310.17157v1.pdf," Large language models (LLMs) with hundreds of billions of parameters have +sparked a new wave of exciting AI applications. However, they are +computationally expensive at inference time. Sparsity is a natural approach to +reduce this cost, but existing methods either require costly retraining, have +to forgo LLM's in-context learning ability, or do not yield wall-clock time +speedup on modern hardware. We hypothesize that contextual sparsity, which are +small, input-dependent sets of attention heads and MLP parameters that yield +approximately the same output as the dense model for a given input, can address +these issues. We show that contextual sparsity exists, that it can be +accurately predicted, and that we can exploit it to speed up LLM inference in +wall-clock time without compromising LLM's quality or in-context learning +ability. Based on these insights, we propose DejaVu, a system that uses a +low-cost algorithm to predict contextual sparsity on the fly given inputs to +each layer, along with an asynchronous and hardware-aware implementation that +speeds up LLM inference. 
We validate that DejaVu can reduce the inference +latency of OPT-175B by over 2X compared to the state-of-the-art +FasterTransformer, and over 6X compared to the widely used Hugging Face +implementation, without compromising model quality. The code is available at +https://github.com/FMInference/DejaVu. +" +Improving Input-label Mapping with Demonstration Replay for In-context Learning,Zhuocheng Gong,http://arxiv.org/pdf/2310.19572v1.pdf,2023-10-30,['cs.cl'],2310.19572v1.pdf," In-context learning (ICL) is an emerging capability of large autoregressive +language models where a few input-label demonstrations are appended to the +input to enhance the model's understanding of downstream NLP tasks, without +directly adjusting the model parameters. The effectiveness of ICL can be +attributed to the strong language modeling capabilities of large language +models (LLMs), which enable them to learn the mapping between input and labels +based on in-context demonstrations. Despite achieving promising results, the +causal nature of language modeling in ICL restricts the attention to be +backward only, i.e., a token only attends to its previous tokens, failing to +capture the full input-label information and limiting the model's performance. +In this paper, we propose a novel ICL method called Repeated Demonstration with +Sliding Causal Attention, (RdSca). Specifically, we duplicate later +demonstrations and concatenate them to the front, allowing the model to +`observe' the later information even under the causal restriction. Besides, we +introduce sliding causal attention, which customizes causal attention to avoid +information leakage. Experimental results show that our method significantly +improves the input-label mapping in ICL demonstrations. We also conduct an +in-depth analysis of how to customize the causal attention without training, +which has been an unexplored area in previous research. +" +Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models,Steve Yadlowsky,http://arxiv.org/pdf/2311.00871v1.pdf,2023-11-01,"['cs.lg', 'cs.cl', 'stat.ml']",2311.00871v1.pdf," Transformer models, notably large language models (LLMs), have the remarkable +ability to perform in-context learning (ICL) -- to perform new tasks when +prompted with unseen input-output examples without any explicit model training. +In this work, we study how effectively transformers can bridge between their +pretraining data mixture, comprised of multiple distinct task families, to +identify and learn new tasks in-context which are both inside and outside the +pretraining distribution. Building on previous work, we investigate this +question in a controlled setting, where we study transformer models trained on +sequences of $(x, f(x))$ pairs rather than natural language. Our empirical +results show transformers demonstrate near-optimal unsupervised model selection +capabilities, in their ability to first in-context identify different task +families and in-context learn within them when the task families are +well-represented in their pretraining data. However when presented with tasks +or functions which are out-of-domain of their pretraining data, we demonstrate +various failure modes of transformers and degradation of their generalization +for even simple extrapolation tasks. 
Together our results highlight that the +impressive ICL abilities of high-capacity sequence models may be more closely +tied to the coverage of their pretraining data mixtures than inductive biases +that create fundamental generalization capabilities. +" +Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning,Mingyang Geng,http://arxiv.org/pdf/2304.11384v3.pdf,2023-04-22,['cs.se'],2304.11384v3.pdf," Code comment generation aims at generating natural language descriptions for +a code snippet to facilitate developers' program comprehension activities. +Despite being studied for a long time, a bottleneck for existing approaches is +that given a code snippet, they can only generate one comment while developers +usually need to know information from diverse perspectives such as what is the +functionality of this code snippet and how to use it. To tackle this +limitation, this study empirically investigates the feasibility of utilizing +large language models (LLMs) to generate comments that can fulfill developers' +diverse intents. Our intuition is based on the facts that (1) the code and its +pairwise comment are used during the pre-training process of LLMs to build the +semantic connection between the natural language and programming language, and +(2) comments in the real-world projects, which are collected for the +pre-training, usually contain different developers' intents. We thus postulate +that the LLMs can already understand the code from different perspectives after +the pre-training. Indeed, experiments on two large-scale datasets demonstrate +the rationale of our insights: by adopting the in-context learning paradigm and +giving adequate prompts to the LLM (e.g., providing it with ten or more +examples), the LLM can significantly outperform a state-of-the-art supervised +learning approach on generating comments with multiple intents. Results also +show that customized strategies for constructing the prompts and +post-processing strategies for reranking the results can both boost the LLM's +performances, which shed light on future research directions for using LLMs to +achieve comment generation. +" +Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision,Zhiqing Sun,http://arxiv.org/pdf/2305.03047v1.pdf,2023-05-04,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cy']",2305.03047v1.pdf," Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised +fine-tuning (SFT) with human annotations and reinforcement learning from human +feedback (RLHF) to align the output of large language models (LLMs) with human +intentions, ensuring they are helpful, ethical, and reliable. However, this +dependence can significantly constrain the true potential of AI-assistant +agents due to the high cost of obtaining human supervision and the related +issues on quality, reliability, diversity, self-consistency, and undesirable +biases. To address these challenges, we propose a novel approach called +SELF-ALIGN, which combines principle-driven reasoning and the generative power +of LLMs for the self-alignment of AI agents with minimal human supervision. 
Our +approach encompasses four stages: first, we use an LLM to generate synthetic +prompts, and a topic-guided method to augment the prompt diversity; second, we +use a small set of human-written principles for AI models to follow, and guide +the LLM through in-context learning from demonstrations (of principles +application) to produce helpful, ethical, and reliable responses to user's +queries; third, we fine-tune the original LLM with the high-quality +self-aligned responses so that the resulting model can generate desirable +responses for each query directly without the principle set and the +demonstrations anymore; and finally, we offer a refinement step to address the +issues of overly-brief or indirect responses. Applying SELF-ALIGN to the +LLaMA-65b base language model, we develop an AI assistant named Dromedary. With +fewer than 300 lines of human annotations (including < 200 seed prompts, 16 +generic principles, and 5 exemplars for in-context learning). Dromedary +significantly surpasses the performance of several state-of-the-art AI systems, +including Text-Davinci-003 and Alpaca, on benchmark datasets with various +settings. +" +One for All: Towards Training One Graph Model for All Classification Tasks,Hao Liu,http://arxiv.org/pdf/2310.00149v1.pdf,2023-09-29,['cs.lg'],2310.00149v1.pdf," Designing a single model that addresses multiple tasks has been a +long-standing objective in artificial intelligence. Recently, large language +models have demonstrated exceptional capability in integrating and solving +different tasks within the language domain. However, a unified model for +various tasks on graphs remains underexplored, primarily due to the challenges +unique to the graph learning domain. First, graph data from different areas +carry distinct attributes and follow different distributions. Such discrepancy +makes it hard to represent graphs in a single representation space. Second, +tasks on graphs diversify into node, link, and graph tasks, requiring distinct +embedding strategies. Finally, an appropriate graph prompting paradigm for +in-context learning is unclear. Striving to handle all the aforementioned +challenges, we propose One for All (OFA), the first general framework that can +use a single graph model to address the above challenges. Specifically, OFA +proposes text-attributed graphs to unify different graph data by describing +nodes and edges with natural language and uses language models to encode the +diverse and possibly cross-domain text attributes to feature vectors in the +same embedding space. Furthermore, OFA introduces the concept of +nodes-of-interest to standardize different tasks with a single task +representation. For in-context learning on graphs, OFA introduces a novel graph +prompting paradigm that appends prompting substructures to the input graph, +which enables it to address varied tasks without fine-tuning. We train the OFA +model using graph data from multiple domains (including citation networks, +molecular graphs, knowledge graphs, etc.) simultaneously and evaluate its +ability in supervised, few-shot, and zero-shot learning scenarios. OFA performs +well across different tasks, making it the first general-purpose graph +classification model across domains. 
+" +The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design,Yoav Levine,http://arxiv.org/pdf/2110.04541v3.pdf,2021-10-09,"['cs.cl', 'cs.lg']",2110.04541v3.pdf," Pretraining Neural Language Models (NLMs) over a large corpus involves +chunking the text into training examples, which are contiguous text segments of +sizes processable by the neural architecture. We highlight a bias introduced by +this common practice: we prove that the pretrained NLM can model much stronger +dependencies between text segments that appeared in the same training example, +than it can between text segments that appeared in different training examples. +This intuitive result has a twofold role. First, it formalizes the motivation +behind a broad line of recent successful NLM training heuristics, proposed for +the pretraining and fine-tuning stages, which do not necessarily appear related +at first glance. Second, our result clearly indicates further improvements to +be made in NLM pretraining for the benefit of Natural Language Understanding +tasks. As an example, we propose ""kNN-Pretraining"": we show that including +semantically related non-neighboring sentences in the same pretraining example +yields improved sentence representations and open domain question answering +abilities. This theoretically motivated degree of freedom for pretraining +example design indicates new training schemes for self-improving +representations. +" +MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning,Constantin Eichenberg,http://arxiv.org/pdf/2112.05253v2.pdf,2021-12-09,"['cs.cv', 'cs.cl', 'i.2.7; i.4.8; i.5.1']",2112.05253v2.pdf," Large-scale pretraining is fast becoming the norm in Vision-Language (VL) +modeling. However, prevailing VL approaches are limited by the requirement for +labeled data and the use of complex multi-step pretraining objectives. We +present MAGMA - a simple method for augmenting generative language models with +additional modalities using adapter-based finetuning. Building on Frozen, we +train a series of VL models that autoregressively generate text from arbitrary +combinations of visual and textual input. The pretraining is entirely +end-to-end using a single language modeling objective, simplifying optimization +compared to previous approaches. Importantly, the language model weights remain +unchanged during training, allowing for transfer of encyclopedic knowledge and +in-context learning abilities from language pretraining. MAGMA outperforms +Frozen on open-ended generative tasks, achieving state of the art results on +the OKVQA benchmark and competitive results on a range of other popular VL +benchmarks, while pretraining on 0.2% of the number of samples used to train +SimVLM. +" +Black-Box Tuning for Language-Model-as-a-Service,Tianxiang Sun,http://arxiv.org/pdf/2201.03514v4.pdf,2022-01-10,"['cs.cl', 'cs.ai']",2201.03514v4.pdf," Extremely large pre-trained language models (PTMs) such as GPT-3 are usually +released as a service. It allows users to design task-specific prompts to query +the PTMs through some black-box APIs. In such a scenario, which we call +Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually +unavailable. Can we optimize the task prompts by only accessing the model +inference APIs? This paper proposes the black-box tuning framework to optimize +the continuous prompt prepended to the input text via derivative-free +optimization. 
Instead of optimizing in the original high-dimensional prompt +space, which is intractable for traditional derivative-free optimization, we +perform optimization in a randomly generated subspace due to the low intrinsic +dimensionality of large PTMs. The experimental results show that the black-box +tuning with RoBERTa on a few labeled samples not only significantly outperforms +manual prompt and GPT-3's in-context learning, but also surpasses the +gradient-based counterparts, i.e., prompt tuning and full model tuning. +" +Contrastive Learning for Prompt-Based Few-Shot Language Learners,Yiren Jian,http://arxiv.org/pdf/2205.01308v1.pdf,2022-05-03,"['cs.cl', 'cs.ai']",2205.01308v1.pdf," The impressive performance of GPT-3 using natural language prompts and +in-context learning has inspired work on better fine-tuning of moderately-sized +models under this paradigm. Following this line of work, we present a +contrastive learning framework that clusters inputs from the same class for +better generality of models trained with only limited examples. Specifically, +we propose a supervised contrastive framework that clusters inputs from the +same class under different augmented ""views"" and repel the ones from different +classes. We create different ""views"" of an example by appending it with +different language prompts and contextual demonstrations. Combining a +contrastive loss with the standard masked language modeling (MLM) loss in +prompt-based few-shot learners, the experimental results show that our method +can improve over the state-of-the-art methods in a diverse set of 15 language +tasks. Our framework makes minimal assumptions on the task or the base model, +and can be applied to many recent methods with little modification. The code +will be made available at: https://github.com/yiren-jian/LM-SupCon. +" +Instruction Induction: From Few Examples to Natural Language Task Descriptions,Or Honovich,http://arxiv.org/pdf/2205.10782v1.pdf,2022-05-22,['cs.cl'],2205.10782v1.pdf," Large language models are able to perform a task by conditioning on a few +input-output demonstrations - a paradigm known as in-context learning. We show +that language models can explicitly infer an underlying task from a few +demonstrations by prompting them to generate a natural language instruction +that fits the examples. To explore this ability, we introduce the instruction +induction challenge, compile a dataset consisting of 24 tasks, and define a +novel evaluation metric based on executing the generated instruction. We +discover that, to a large extent, the ability to generate instructions does +indeed emerge when using a model that is both large enough and aligned to +follow instructions; InstructGPT achieves 65.7% of human performance in our +execution-based metric, while the original GPT-3 model reaches only 9.8% of +human performance. This surprising result suggests that instruction induction +might be a viable learning paradigm in and of itself, where instead of fitting +a set of latent continuous parameters to the data, one searches for the best +description in the natural language hypothesis space. +" +Exploring Length Generalization in Large Language Models,Cem Anil,http://arxiv.org/pdf/2207.04901v2.pdf,2022-07-11,"['cs.cl', 'cs.lg']",2207.04901v2.pdf," The ability to extrapolate from short problem instances to longer ones is an +important form of out-of-distribution generalization in reasoning tasks, and is +crucial when learning from datasets where longer problem instances are rare. 
+These include theorem proving, solving quantitative mathematics problems, and +reading/summarizing novels. In this paper, we run careful empirical studies +exploring the length generalization capabilities of transformer-based language +models. We first establish that naively finetuning transformers on length +generalization tasks shows significant generalization deficiencies independent +of model scale. We then show that combining pretrained large language models' +in-context learning abilities with scratchpad prompting (asking the model to +output solution steps before producing an answer) results in a dramatic +improvement in length generalization. We run careful failure analyses on each +of the learning modalities and identify common sources of mistakes that +highlight opportunities in equipping language models with the ability to +generalize to longer problems. +" +Large Language Models are few(1)-shot Table Reasoners,Wenhu Chen,http://arxiv.org/pdf/2210.06710v2.pdf,2022-10-13,['cs.cl'],2210.06710v2.pdf," Recent literature has shown that large language models (LLMs) are generally +excellent few-shot reasoners to solve text reasoning tasks. However, the +capability of LLMs on table reasoning tasks is yet to be explored. In this +paper, we aim at understanding how well LLMs can perform table-related tasks +with few-shot in-context learning. Specifically, we evaluated LLMs on popular +table QA and fact verification datasets like WikiTableQuestion, FetaQA, +TabFact, and FEVEROUS and found that LLMs are competent at complex reasoning +over table structures, though these models are not pre-trained on any table +corpus. When combined with `chain of thoughts' prompting, LLMs can achieve very +strong performance with only a 1-shot demonstration, even on par with some SoTA +models. We show that LLMs are even more competent at generating comprehensive +long-form answers on FetaQA than tuned T5-large. We further manually studied +the reasoning chains elicited from LLMs and found that these reasoning chains +are highly consistent with the underlying semantic form. We believe that LLMs +can serve as a simple yet generic baseline for future research. The code and +data are released in https://github.com/wenhuchen/TableCoT. +" +Explanations from Large Language Models Make Small Reasoners Better,Shiyang Li,http://arxiv.org/pdf/2210.06726v1.pdf,2022-10-13,['cs.cl'],2210.06726v1.pdf," Integrating free-text explanations to in-context learning of large language +models (LLM) is shown to elicit strong reasoning capabilities along with +reasonable explanations. In this paper, we consider the problem of leveraging +the explanations generated by LLM to improve the training of small reasoners, +which are more favorable in real-production deployment due to their low cost. +We systematically explore three explanation generation approaches from LLM and +utilize a multi-task learning framework to facilitate small models to acquire +strong reasoning power together with explanation generation capabilities. +Experiments on multiple reasoning tasks show that our method can consistently +and significantly outperform finetuning baselines across different settings, +and even perform better than finetuning/prompting a 60x larger GPT-3 (175B) +model by up to 9.5% in accuracy. As a side benefit, human evaluation further +shows that our method can generate high-quality explanations to justify its +predictions, moving towards the goal of explainable AI. 
+" +Prompting Language Models for Linguistic Structure,Terra Blevins,http://arxiv.org/pdf/2211.07830v2.pdf,2022-11-15,['cs.cl'],2211.07830v2.pdf," Although pretrained language models (PLMs) can be prompted to perform a wide +range of language tasks, it remains an open question how much this ability +comes from generalizable linguistic understanding versus surface-level lexical +patterns. To test this, we present a structured prompting approach for +linguistic structured prediction tasks, allowing us to perform zero- and +few-shot sequence tagging with autoregressive PLMs. We evaluate this approach +on part-of-speech tagging, named entity recognition, and sentence chunking, +demonstrating strong few-shot performance in all cases. We also find that while +PLMs contain significant prior knowledge of task labels due to task leakage +into the pretraining corpus, structured prompting can also retrieve linguistic +structure with arbitrary labels. These findings indicate that the in-context +learning ability and linguistic knowledge of PLMs generalizes beyond +memorization of their training data. +" +Visual Programming: Compositional visual reasoning without training,Tanmay Gupta,http://arxiv.org/pdf/2211.11559v1.pdf,2022-11-18,"['cs.cv', 'cs.ai', 'cs.cl']",2211.11559v1.pdf," We present VISPROG, a neuro-symbolic approach to solving complex and +compositional visual tasks given natural language instructions. VISPROG avoids +the need for any task-specific training. Instead, it uses the in-context +learning ability of large language models to generate python-like modular +programs, which are then executed to get both the solution and a comprehensive +and interpretable rationale. Each line of the generated program may invoke one +of several off-the-shelf computer vision models, image processing routines, or +python functions to produce intermediate outputs that may be consumed by +subsequent parts of the program. We demonstrate the flexibility of VISPROG on 4 +diverse tasks - compositional visual question answering, zero-shot reasoning on +image pairs, factual knowledge object tagging, and language-guided image +editing. We believe neuro-symbolic approaches like VISPROG are an exciting +avenue to easily and effectively expand the scope of AI systems to serve the +long tail of complex tasks that people may wish to perform. +" +Self-Prompting Large Language Models for Zero-Shot Open-Domain QA,Junlong Li,http://arxiv.org/pdf/2212.08635v2.pdf,2022-12-16,"['cs.cl', 'cs.ai']",2212.08635v2.pdf," Open-Domain Question Answering (ODQA) aims at answering factoid questions +without explicitly providing specific background documents. In a zero-shot +setting, this task is more challenging since no data is available to train +customized models like Retriever-Readers. Recently, Large Language Models +(LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct +prompting methods, but these methods are still far from releasing the full +powerfulness of LLMs only in an implicitly invoking way. In this paper, we +propose a Self-Prompting framework to explicitly utilize the massive knowledge +stored in the parameters of LLMs and their strong instruction understanding +abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo +QA pairs with background passages and explanations from scratch and then use +those generated elements for in-context learning. 
Experimental results show our +method surpasses previous SOTA methods significantly on three widely-used ODQA +datasets, and even achieves comparable performance with some Retriever-Reader +models fine-tuned on full training data. +" +"Don't Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments",Yu Gu,http://arxiv.org/pdf/2212.09736v2.pdf,2022-12-19,"['cs.cl', 'cs.ai', 'i.2.7']",2212.09736v2.pdf," A key missing capacity of current language models (LMs) is grounding to +real-world environments. Most existing work for grounded language understanding +uses LMs to directly generate plans that can be executed in the environment to +achieve the desired effects. It thereby casts the burden of ensuring +grammaticality, faithfulness, and controllability all on the LMs. We propose +Pangu, a generic framework for grounded language understanding that capitalizes +on the discriminative ability of LMs instead of their generative ability. Pangu +consists of a symbolic agent and a neural LM working in a concerted fashion: +The agent explores the environment to incrementally construct valid plans, and +the LM evaluates the plausibility of the candidate plans to guide the search +process. A case study on the challenging problem of knowledge base question +answering (KBQA), which features a massive environment, demonstrates the +remarkable effectiveness and flexibility of Pangu: A BERT-base LM is sufficient +for setting a new record on standard KBQA datasets, and larger LMs further +bring substantial gains. Pangu also enables, for the first time, effective +few-shot in-context learning for KBQA with large LMs such as Codex. +" +Ontologically Faithful Generation of Non-Player Character Dialogues,Nathaniel Weir,http://arxiv.org/pdf/2212.10618v2.pdf,2022-12-20,['cs.cl'],2212.10618v2.pdf," We introduce a language generation task grounded in a popular video game +environment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration) +requires models to produce trees of dialogue between video game characters that +accurately reflect quest and entity specifications stated in natural language. +KNUDGE is constructed from side quest dialogues drawn directly from game data +of Obsidian Entertainment's The Outer Worlds, leading to real-world +complexities in generation: (1) dialogues are branching trees as opposed to +linear chains of utterances; (2) utterances must remain faithful to the game +lore -- character personas, backstories, and entity relationships; and (3) a +dialogue must accurately reveal new quest details to the human player. We +report results for a set of neural generation models using supervised and +in-context learning techniques; we find competent performance but room for +future work addressing the challenges of creating realistic, game-quality +dialogues. +" +Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers,Chengyi Wang,http://arxiv.org/pdf/2301.02111v1.pdf,2023-01-05,"['cs.cl', 'cs.sd', 'eess.as']",2301.02111v1.pdf," We introduce a language modeling approach for text to speech synthesis (TTS). +Specifically, we train a neural codec language model (called Vall-E) using +discrete codes derived from an off-the-shelf neural audio codec model, and +regard TTS as a conditional language modeling task rather than continuous +signal regression as in previous work. During the pre-training stage, we scale +up the TTS training data to 60K hours of English speech which is hundreds of +times larger than existing systems. 
Vall-E emerges in-context learning +capabilities and can be used to synthesize high-quality personalized speech +with only a 3-second enrolled recording of an unseen speaker as an acoustic +prompt. Experiment results show that Vall-E significantly outperforms the +state-of-the-art zero-shot TTS system in terms of speech naturalness and +speaker similarity. In addition, we find Vall-E could preserve the speaker's +emotion and acoustic environment of the acoustic prompt in synthesis. See +https://aka.ms/valle for demos of our work. +" +Batch Prompting: Efficient Inference with Large Language Model APIs,Zhoujun Cheng,http://arxiv.org/pdf/2301.08721v2.pdf,2023-01-19,"['cs.cl', 'cs.ai']",2301.08721v2.pdf," Performing inference on large volumes of samples with large language models +(LLMs) can be computationally and financially costly in industry and real-world +use. We propose batch prompting, a simple yet effective prompting approach that +enables the LLM to run inference in batches, instead of one sample at a time. +Our method reduces both token and time costs while retaining downstream +performance. We theoretically demonstrate that under a few-shot in-context +learning setting, the inference costs decrease almost inverse linearly with the +number of samples in each batch. We extensively validate the effectiveness of +batch prompting on ten datasets across commonsense QA, arithmetic reasoning, +and NLI/NLU: batch prompting significantly~(up to 5x with six samples in batch) +reduces the LLM (Codex) inference token and time costs while achieving better +or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 +and GPT-4, we show the benefits of batch prompting also hold. Further analysis +shows that the number of samples in each batch and the complexity of tasks +affect its performance. Moreover, batch prompting can be applied across +different reasoning methods using LLMs. Our code can be found at the site +https://github.com/xlang-ai/batch-prompting. +" +Looped Transformers as Programmable Computers,Angeliki Giannou,http://arxiv.org/pdf/2301.13196v1.pdf,2023-01-30,"['cs.lg', 'cs.ai']",2301.13196v1.pdf," We present a framework for using transformer networks as universal computers +by programming them with specific weights and placing them in a loop. Our input +sequence acts as a punchcard, consisting of instructions and memory for data +read/writes. We demonstrate that a constant number of encoder layers can +emulate basic computing blocks, including embedding edit operations, non-linear +functions, function calls, program counters, and conditional branches. Using +these building blocks, we emulate a small instruction-set computer. This allows +us to map iterative algorithms to programs that can be executed by a looped, +13-layer transformer. We show how this transformer, instructed by its input, +can emulate a basic calculator, a basic linear algebra library, and in-context +learning algorithms that employ backpropagation. Our work highlights the +versatility of the attention mechanism, and demonstrates that even shallow +transformers can execute full-fledged, general-purpose programs. 
+" +Grounding Language Models to Images for Multimodal Inputs and Outputs,Jing Yu Koh,http://arxiv.org/pdf/2301.13823v4.pdf,2023-01-31,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2301.13823v4.pdf," We propose an efficient method to ground pretrained text-only language models +to the visual domain, enabling them to process arbitrarily interleaved +image-and-text data, and generate text interleaved with retrieved images. Our +method leverages the abilities of language models learnt from large scale +text-only pretraining, such as in-context learning and free-form text +generation. We keep the language model frozen, and finetune input and output +linear layers to enable cross-modality interactions. This allows our model to +process arbitrarily interleaved image-and-text inputs, and generate free-form +text interleaved with retrieved images. We achieve strong zero-shot performance +on grounded tasks such as contextual image retrieval and multimodal dialogue, +and showcase compelling interactive abilities. Our approach works with any +off-the-shelf language model and paves the way towards an effective, general +solution for leveraging pretrained language models in visually grounded +settings. +" +ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics,Zhangir Azerbayev,http://arxiv.org/pdf/2302.12433v1.pdf,2023-02-24,"['cs.cl', 'cs.ai', 'cs.lo']",2302.12433v1.pdf," We introduce ProofNet, a benchmark for autoformalization and formal proving +of undergraduate-level mathematics. The ProofNet benchmarks consists of 371 +examples, each consisting of a formal theorem statement in Lean 3, a natural +language theorem statement, and a natural language proof. The problems are +primarily drawn from popular undergraduate pure mathematics textbooks and cover +topics such as real and complex analysis, linear algebra, abstract algebra, and +topology. We intend for ProofNet to be a challenging benchmark that will drive +progress in autoformalization and automatic theorem proving. We report baseline +results on statement autoformalization via in-context learning. Moreover, we +introduce two novel statement autoformalization methods: prompt retrieval and +distilled backtranslation. +" +Finding Support Examples for In-Context Learning,Xiaonan Li,http://arxiv.org/pdf/2302.13539v3.pdf,2023-02-27,['cs.cl'],2302.13539v3.pdf," Additionally, the strong dependency among in-context examples makes it an +NP-hard combinatorial optimization problem and enumerating all permutations is +infeasible. Hence we propose LENS, a fiLter-thEN-Search method to tackle this +challenge in two stages: First we filter the dataset to obtain informative +in-context examples individually. Specifically, we propose a novel metric, +InfoScore, to evaluate the example's in-context informativeness based on the +language model's feedback, and further propose a progressive filtering process +to filter out uninformative examples. Then we propose diversity-guided example +search which iteratively refines and evaluates the selected example +permutations, to find examples that fully depict the task. The experimental +results show that LENS significantly outperforms a wide range of baselines. +" +In-Context Instruction Learning,Seonghyeon Ye,http://arxiv.org/pdf/2302.14691v1.pdf,2023-02-28,"['cs.cl', 'cs.ai']",2302.14691v1.pdf," Instruction learning of Large Language Models (LLMs) has enabled zero-shot +task generalization. 
However, instruction learning has been predominantly +approached as a fine-tuning problem, including instruction tuning and +reinforcement learning from human feedback, where LLMs are multi-task +fine-tuned on various tasks with instructions. In this paper, we present a +surprising finding that applying in-context learning to instruction learning, +referred to as In-Context Instruction Learning (ICIL), significantly improves +the zero-shot task generalization performance for both pretrained and +instruction-fine-tuned models. One of the core advantages of ICIL is that it +uses a single fixed prompt to evaluate all tasks, which is a concatenation of +cross-task demonstrations. In particular, we demonstrate that the most powerful +instruction-fine-tuned baseline (text-davinci-003) also benefits from ICIL by +9.3%, indicating that the effect of ICIL is complementary to instruction-based +fine-tuning. +" +Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling,Ziqiang Zhang,http://arxiv.org/pdf/2303.03926v1.pdf,2023-03-07,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",2303.03926v1.pdf," We propose a cross-lingual neural codec language model, VALL-E X, for +cross-lingual speech synthesis. Specifically, we extend VALL-E and train a +multi-lingual conditional codec language model to predict the acoustic token +sequences of the target language speech by using both the source language +speech and the target language text as prompts. VALL-E X inherits strong +in-context learning capabilities and can be applied for zero-shot cross-lingual +text-to-speech synthesis and zero-shot speech-to-speech translation tasks. +Experimental results show that it can generate high-quality speech in the +target language via just one speech utterance in the source language as a +prompt while preserving the unseen speaker's voice, emotion, and acoustic +environment. Moreover, VALL-E X effectively alleviates the foreign accent +problems, which can be controlled by a language ID. Audio samples are available +at \url{https://aka.ms/vallex}. +" +Self-planning Code Generation with Large Language Models,Xue Jiang,http://arxiv.org/pdf/2303.06689v2.pdf,2023-03-12,['cs.se'],2303.06689v2.pdf," Although large language models have demonstrated impressive ability in code +generation, they are still struggling to address the complicated intent +provided by humans. It is widely acknowledged that humans typically employ +planning to decompose complex problems and schedule the solution steps prior to +implementation. Thus we introduce planning into code generation to help the +model understand complex intent and reduce the difficulty of problem solving. +This paper proposes a self-planning code generation method with large language +model, which consists of two phases, namely planning phase and implementation +phase. Specifically, in the planning phase, the language model plans out the +solution steps from the intent combined with in-context learning. Then it +enters the implementation phase, where the model generates code step by step, +guided by the solution steps. The effectiveness of self-planning code +generation has been rigorously evaluated on multiple code generation datasets +and the results have demonstrated a marked superiority over naive direct +generation approaches with language model. The improvement in performance is +substantial, highlighting the significance of self-planning in code generation +tasks. 
+" +GPT is becoming a Turing machine: Here are some ways to program it,Ana Jojic,http://arxiv.org/pdf/2303.14310v1.pdf,2023-03-25,['cs.cl'],2303.14310v1.pdf," We demonstrate that, through appropriate prompting, GPT-3 family of models +can be triggered to perform iterative behaviours necessary to execute (rather +than just write or recall) programs that involve loops, including several +popular algorithms found in computer science curricula or software developer +interviews. We trigger execution and description of Iterations by Regimenting +Self-Attention (IRSA) in one (or a combination) of three ways: 1) Using strong +repetitive structure in an example of an execution path of a target program for +one particular input, 2) Prompting with fragments of execution paths, and 3) +Explicitly forbidding (skipping) self-attention to parts of the generated text. +On a dynamic program execution, IRSA leads to larger accuracy gains than +replacing the model with the much more powerful GPT-4. IRSA has promising +applications in education, as the prompts and responses resemble student +assignments in data structures and algorithms classes. Our findings hold +implications for evaluating LLMs, which typically target the in-context +learning: We show that prompts that may not even cover one full task example +can trigger algorithmic behaviour, allowing solving problems previously thought +of as hard for LLMs, such as logical puzzles. Consequently, prompt design plays +an even more critical role in LLM performance than previously recognized. +" +When Brain-inspired AI Meets AGI,Lin Zhao,http://arxiv.org/pdf/2303.15935v1.pdf,2023-03-28,['cs.ai'],2303.15935v1.pdf," Artificial General Intelligence (AGI) has been a long-standing goal of +humanity, with the aim of creating machines capable of performing any +intellectual task that humans can do. To achieve this, AGI researchers draw +inspiration from the human brain and seek to replicate its principles in +intelligent machines. Brain-inspired artificial intelligence is a field that +has emerged from this endeavor, combining insights from neuroscience, +psychology, and computer science to develop more efficient and powerful AI +systems. In this article, we provide a comprehensive overview of brain-inspired +AI from the perspective of AGI. We begin with the current progress in +brain-inspired AI and its extensive connection with AGI. We then cover the +important characteristics for both human intelligence and AGI (e.g., scaling, +multimodality, and reasoning). We discuss important technologies toward +achieving AGI in current AI systems, such as in-context learning and prompt +tuning. We also investigate the evolution of AGI systems from both algorithmic +and infrastructural perspectives. Finally, we explore the limitations and +future of AGI. +" +Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning,Namrata Shivagunde,http://arxiv.org/pdf/2303.16445v1.pdf,2023-03-29,['cs.cl'],2303.16445v1.pdf," Language model probing is often used to test specific capabilities of these +models. However, conclusions from such studies may be limited when the probing +benchmarks are small and lack statistical power. In this work, we introduce +new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500) +inspired by psycholinguistic studies. We dramatically extend existing NEG-136 +and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44 +sentence pairs to 750 each. 
We also create another version of the extended negation
+dataset (NEG-1500-SIMP-TEMP), created using template-based generation. It
+consists of 770 sentence pairs. We evaluate 22 models on the extended datasets,
+seeing model performance dip 20-57% compared to the original smaller
+benchmarks. We observe high levels of negation sensitivity in models like BERT
+and ALBERT, demonstrating that previous findings might have been skewed due to
+smaller test sets. Finally, we observe that while GPT3 has generated all the
+examples in ROLE-1500, it is only able to solve 24.6% of them during probing.
+"
+Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation,Tao Fang,http://arxiv.org/pdf/2304.01746v1.pdf,2023-04-04,['cs.cl'],2304.01746v1.pdf," ChatGPT, a large-scale language model based on the advanced GPT-3.5
+architecture, has shown remarkable potential in various Natural Language
+Processing (NLP) tasks. However, there is currently a dearth of comprehensive
+study exploring its potential in the area of Grammatical Error Correction
+(GEC). To showcase its capabilities in GEC, we design zero-shot
+chain-of-thought (CoT) and few-shot CoT settings using in-context learning for
+ChatGPT. Our evaluation involves assessing ChatGPT's performance on five
+official test sets in three different languages, along with three
+document-level GEC test sets in English. Our experimental results and human
+evaluations demonstrate that ChatGPT has excellent error detection capabilities
+and can freely correct errors to make the corrected sentences very fluent,
+possibly due to its over-correction tendencies and not adhering to the
+principle of minimal edits. Additionally, its performance in non-English and
+low-resource settings highlights its potential in multilingual GEC tasks.
+However, further analysis of various types of errors at the document-level has
+shown that ChatGPT cannot effectively correct agreement, coreference, tense
+errors across sentences, and cross-sentence boundary errors.
+"
+SegGPT: Segmenting Everything In Context,Xinlong Wang,http://arxiv.org/pdf/2304.03284v1.pdf,2023-04-06,['cs.cv'],2304.03284v1.pdf," We present SegGPT, a generalist model for segmenting everything in context.
+We unify various segmentation tasks into a generalist in-context learning
+framework that accommodates different kinds of segmentation data by
+transforming them into the same format of images. The training of SegGPT is
+formulated as an in-context coloring problem with random color mapping for each
+data sample. The objective is to accomplish diverse tasks according to the
+context, rather than relying on specific colors. After training, SegGPT can
+perform arbitrary segmentation tasks in images or videos via in-context
+inference, such as object instance, stuff, part, contour, and text. SegGPT is
+evaluated on a broad range of tasks, including few-shot semantic segmentation,
+video object segmentation, semantic segmentation, and panoptic segmentation.
+Our results show strong capabilities in segmenting in-domain and out-of-domain
+targets, either qualitatively or quantitatively.
+"
+Extractive Summarization via ChatGPT for Faithful Summary Generation,Haopeng Zhang,http://arxiv.org/pdf/2304.04193v2.pdf,2023-04-09,['cs.cl'],2304.04193v2.pdf," Extractive summarization is a crucial task in natural language processing
+that aims to condense long documents into shorter versions by directly
+extracting sentences. 
The recent introduction of large language models has +attracted significant interest in the NLP community due to its remarkable +performance on a wide range of downstream tasks. This paper first presents a +thorough evaluation of ChatGPT's performance on extractive summarization and +compares it with traditional fine-tuning methods on various benchmark datasets. +Our experimental analysis reveals that ChatGPT exhibits inferior extractive +summarization performance in terms of ROUGE scores compared to existing +supervised systems, while achieving higher performance based on LLM-based +evaluation metrics. In addition, we explore the effectiveness of in-context +learning and chain-of-thought reasoning for enhancing its performance. +Furthermore, we find that applying an extract-then-generate pipeline with +ChatGPT yields significant performance improvements over abstractive baselines +in terms of summary faithfulness. These observations highlight potential +directions for enhancing ChatGPT's capabilities in faithful summarization using +two-stage approaches. +" +Towards Robust Prompts on Vision-Language Models,Jindong Gu,http://arxiv.org/pdf/2304.08479v1.pdf,2023-04-17,['cs.cv'],2304.08479v1.pdf," With the advent of vision-language models (VLMs) that can perform in-context +and prompt-based learning, how can we design prompting approaches that robustly +generalize to distribution shift and can be used on novel classes outside the +support set of the prompts? In this work, we first define two types of +robustness to distribution shift on VLMs, namely, robustness on base classes +(the classes included in the support set of prompts) and robustness on novel +classes. Then, we study the robustness of existing in-context learning and +prompt learning approaches, where we find that prompt learning performs +robustly on test images from base classes, while it does not generalize well on +images from novel classes. We propose robust prompt learning by integrating +multiple-scale image features into the prompt, which improves both types of +robustness. Comprehensive experiments are conducted to study the defined +robustness on six benchmarks and show the effectiveness of our proposal. +" +A Latent Space Theory for Emergent Abilities in Large Language Models,Hui Jiang,http://arxiv.org/pdf/2304.09960v3.pdf,2023-04-19,"['cs.cl', 'cs.ai', 'cs.lg']",2304.09960v3.pdf," Languages are not created randomly but rather to communicate information. +There is a strong association between languages and their underlying meanings, +resulting in a sparse joint distribution that is heavily peaked according to +their correlations. Moreover, these peak values happen to match with the +marginal distribution of languages due to the sparsity. With the advent of LLMs +trained on big data and large models, we can now precisely assess the marginal +distribution of languages, providing a convenient means of exploring the sparse +structures in the joint distribution for effective inferences. In this paper, +we categorize languages as either unambiguous or {\epsilon}-ambiguous and +present quantitative results to demonstrate that the emergent abilities of +LLMs, such as language understanding, in-context learning, chain-of-thought +prompting, and effective instruction fine-tuning, can all be attributed to +Bayesian inference on the sparse joint distribution of languages. 
+" +Understanding and Predicting Human Label Variation in Natural Language Inference through Explanation,Nan-Jiang Jiang,http://arxiv.org/pdf/2304.12443v1.pdf,2023-04-24,['cs.cl'],2304.12443v1.pdf," Human label variation (Plank 2022), or annotation disagreement, exists in +many natural language processing (NLP) tasks. To be robust and trusted, NLP +models need to identify such variation and be able to explain it. To this end, +we created the first ecologically valid explanation dataset with diverse +reasoning, LiveNLI. LiveNLI contains annotators' highlights and free-text +explanations for the label(s) of their choice for 122 English Natural Language +Inference items, each with at least 10 annotations. We used its explanations +for chain-of-thought prompting, and found there is still room for improvement +in GPT-3's ability to predict label distribution with in-context learning. +" +"Stance Detection With Supervised, Zero-Shot, and Few-Shot Applications",Michael Burnham,http://arxiv.org/pdf/2305.01723v1.pdf,2023-05-02,['cs.cl'],2305.01723v1.pdf," Stance detection is the identification of an author's beliefs about a subject +from a document. Researchers widely rely on sentiment analysis to accomplish +this. However, recent research has show that sentiment analysis is only loosely +correlated with stance, if at all. This paper advances methods in text analysis +by precisely defining the task of stance detection, providing a generalized +framework for the task, and then presenting three distinct approaches for +performing stance detection: supervised classification, zero-shot +classification with NLI classifiers, and in-context learning. In doing so, I +demonstrate how zero-shot and few-shot language classifiers can replace human +labelers for a variety of tasks and discuss how their application and +limitations differ from supervised classifiers. Finally, I demonstrate an +application of zero-shot stance detection by replicating Block Jr et al. +(2022). +" +WangLab at MEDIQA-Chat 2023: Clinical Note Generation from Doctor-Patient Conversations using Large Language Models,John Giorgi,http://arxiv.org/pdf/2305.02220v2.pdf,2023-05-03,"['cs.cl', 'cs.ai', 'cs.lg']",2305.02220v2.pdf," This paper describes our submission to the MEDIQA-Chat 2023 shared task for +automatic clinical note generation from doctor-patient conversations. We report +results for two approaches: the first fine-tunes a pre-trained language model +(PLM) on the shared task data, and the second uses few-shot in-context learning +(ICL) with a large language model (LLM). Both achieve high performance as +measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and +first, respectively, of all submissions to the shared task. Expert human +scrutiny indicates that notes generated via the ICL-based approach with GPT-4 +are preferred about as often as human-written notes, making it a promising path +toward automated note generation from doctor-patient conversations. +" +Otter: A Multi-Modal Model with In-Context Instruction Tuning,Bo Li,http://arxiv.org/pdf/2305.03726v1.pdf,2023-05-05,"['cs.cv', 'cs.cl']",2305.03726v1.pdf," Large language models (LLMs) have demonstrated significant universal +capabilities as few/zero-shot learners in various tasks due to their +pre-training on vast amounts of text data, as exemplified by GPT-3, which +boosted to InstrctGPT and ChatGPT, effectively following natural language +instructions to accomplish real-world tasks. 
In this paper, we propose to +introduce instruction tuning into multi-modal models, motivated by the Flamingo +model's upstream interleaved format pretraining dataset. We adopt a similar +approach to construct our MultI-Modal In-Context Instruction Tuning (MIMIC-IT) +dataset. We then introduce Otter, a multi-modal model based on OpenFlamingo +(open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and +showcasing improved instruction-following ability and in-context learning. We +also optimize OpenFlamingo's implementation for researchers, democratizing the +required training resources from 1$\times$ A100 GPU to 4$\times$ RTX-3090 GPUs, +and integrate both OpenFlamingo and Otter into Huggingface Transformers for +more researchers to incorporate the models into their customized training and +inference pipelines. +" +How Good are Commercial Large Language Models on African Languages?,Jessica Ojo,http://arxiv.org/pdf/2305.06530v1.pdf,2023-05-11,"['cs.cl', 'cs.ai', 'cs.lg']",2305.06530v1.pdf," Recent advancements in Natural Language Processing (NLP) has led to the +proliferation of large pretrained language models. These models have been shown +to yield good performance, using in-context learning, even on unseen tasks and +languages. They have also been exposed as commercial APIs as a form of +language-model-as-a-service, with great adoption. However, their performance on +African languages is largely unknown. We present a preliminary analysis of +commercial large language models on two tasks (machine translation and text +classification) across eight African languages, spanning different language +families and geographical areas. Our results suggest that commercial language +models produce below-par performance on African languages. We also find that +they perform better on text classification than machine translation. In +general, our findings present a call-to-action to ensure African languages are +well represented in commercial large language models, given their growing +popularity. +" +Chain-of-Dictionary Prompting Elicits Translation in Large Language Models,Hongyuan Lu,http://arxiv.org/pdf/2305.06575v3.pdf,2023-05-11,['cs.cl'],2305.06575v3.pdf," Large language models (LLMs) have shown surprisingly good performance in +multilingual neural machine translation (MNMT) even when trained without +parallel data. Yet, despite the fact that the amount of training data is +gigantic, they still struggle with translating rare words, particularly for +low-resource languages. Even worse, it is usually unrealistic to retrieve +relevant demonstrations for in-context learning with low-resource languages on +LLMs, which restricts the practical use of LLMs for translation -- how should +we mitigate this problem? To this end, we present a novel method, CoD, which +augments LLMs with prior knowledge with the chains of multilingual dictionaries +for a subset of input words to elicit translation abilities for LLMs. Extensive +experiments indicate that augmenting ChatGPT with CoD elicits large gains by up +to 13x chrF++ points for MNMT (3.08 to 42.63 for English to Serbian written in +Cyrillic script) on FLORES-200 full devtest set. We further demonstrate the +importance of chaining the multilingual dictionaries, as well as the +superiority of CoD to few-shot demonstration for low-resource languages. +" +Is ChatGPT a Good Causal Reasoner? 
A Comprehensive Evaluation,Jinglong Gao,http://arxiv.org/pdf/2305.07375v4.pdf,2023-05-12,"['cs.cl', 'cs.ai']",2305.07375v4.pdf," Causal reasoning ability is crucial for numerous NLP applications. Despite +the impressive emerging ability of ChatGPT in various NLP tasks, it is unclear +how well ChatGPT performs in causal reasoning. In this paper, we conduct the +first comprehensive evaluation of the ChatGPT's causal reasoning capabilities. +Experiments show that ChatGPT is not a good causal reasoner, but a good causal +explainer. Besides, ChatGPT has a serious hallucination on causal reasoning, +possibly due to the reporting biases between causal and non-causal +relationships in natural language, as well as ChatGPT's upgrading processes, +such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT) +techniques can further exacerbate such causal hallucination. Additionally, the +causal reasoning ability of ChatGPT is sensitive to the words used to express +the causal concept in prompts, and close-ended prompts perform better than +open-ended prompts. For events in sentences, ChatGPT excels at capturing +explicit causality rather than implicit causality, and performs better in +sentences with lower event density and smaller lexical distance between events. +The code is available on https://github.com/ArrogantL/ChatGPT4CausalReasoning . +" +AutoTrial: Prompting Language Models for Clinical Trial Design,Zifeng Wang,http://arxiv.org/pdf/2305.11366v2.pdf,2023-05-19,['cs.cl'],2305.11366v2.pdf," Clinical trials are critical for drug development. Constructing the +appropriate eligibility criteria (i.e., the inclusion/exclusion criteria for +patient recruitment) is essential for the trial's success. Proper design of +clinical trial protocols should consider similar precedent trials and their +eligibility criteria to ensure sufficient patient coverage. In this paper, we +present a method named AutoTrial to aid the design of clinical eligibility +criteria using language models. It allows (1) controllable generation under +instructions via a hybrid of discrete and neural prompting, (2) scalable +knowledge incorporation via in-context learning, and (3) explicit reasoning +chains to provide rationales for understanding the outputs. Experiments on over +70K clinical trials verify that AutoTrial generates high-quality criteria texts +that are fluent and coherent and with high accuracy in capturing the relevant +clinical concepts to the target trial. It is noteworthy that our method, with a +much smaller parameter size, gains around 60% winning rate against the GPT-3.5 +baselines via human evaluations. +" +Cross-Lingual Supervision improves Large Language Models Pre-training,Andrea Schioppa,http://arxiv.org/pdf/2305.11778v1.pdf,2023-05-19,"['cs.cl', 'cs.lg']",2305.11778v1.pdf," The recent rapid progress in pre-training Large Language Models has relied on +using self-supervised language modeling objectives like next token prediction +or span corruption. On the other hand, Machine Translation Systems are mostly +trained using cross-lingual supervision that requires aligned data between +source and target languages. We demonstrate that pre-training Large Language +Models on a mixture of a self-supervised Language Modeling objective and the +supervised Machine Translation objective, therefore including cross-lingual +parallel data during pre-training, yields models with better in-context +learning abilities. 
As pre-training is a very resource-intensive process and a +grid search on the best mixing ratio between the two objectives is +prohibitively expensive, we propose a simple yet effective strategy to learn it +during pre-training. +" +"How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings",Shuaichen Chang,http://arxiv.org/pdf/2305.11853v2.pdf,2023-05-19,['cs.cl'],2305.11853v2.pdf," Large language models (LLMs) with in-context learning have demonstrated +remarkable capability in the text-to-SQL task. Previous research has prompted +LLMs with various demonstration-retrieval strategies and intermediate reasoning +steps to enhance the performance of LLMs. However, those works often employ +varied strategies when constructing the prompt text for text-to-SQL inputs, +such as databases and demonstration examples. This leads to a lack of +comparability in both the prompt constructions and their primary contributions. +Furthermore, selecting an effective prompt construction has emerged as a +persistent problem for future research. To address this limitation, we +comprehensively investigate the impact of prompt constructions across various +settings and provide insights for future work. +" +Fact-Checking Complex Claims with Program-Guided Reasoning,Liangming Pan,http://arxiv.org/pdf/2305.12744v1.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.12744v1.pdf," Fact-checking real-world claims often requires collecting multiple pieces of +evidence and applying complex multi-step reasoning. In this paper, we present +Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that +decomposes complex claims into simpler sub-tasks that can be solved using a +shared library of specialized functions. We first leverage the in-context +learning ability of large language models to generate reasoning programs to +guide the verification process. Afterward, we execute the program by delegating +each sub-task to the corresponding sub-task handler. This process makes our +model both explanatory and data-efficient, providing clear explanations of its +reasoning process and requiring minimal training data. We evaluate ProgramFC on +two challenging fact-checking datasets and show that it outperforms seven +fact-checking baselines across different settings of evidence availability, +with explicit output programs that benefit human debugging. Our codes and data +are publicly available at https://github.com/mbzuai-nlp/ProgramFC. +" +ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination,Dongfang Li,http://arxiv.org/pdf/2305.12945v2.pdf,2023-05-22,['cs.cl'],2305.12945v2.pdf," As ChatGPT and GPT-4 spearhead the development of Large Language Models +(LLMs), more researchers are investigating their performance across various +tasks. But more research needs to be done on the interpretability capabilities +of LLMs, that is, the ability to generate reasons after an answer has been +given. Existing explanation datasets are mostly English-language general +knowledge questions, which leads to insufficient thematic and linguistic +diversity. To address the language bias and lack of medical resources in +generating rationales QA datasets, we present ExplainCPE (over 7k instances), a +challenging medical benchmark in Simplified Chinese. We analyzed the errors of +ChatGPT and GPT-4, pointing out the limitations of current LLMs in +understanding text and computational reasoning. 
During the experiment, we also
+found that different LLMs have different preferences for in-context learning.
+ExplainCPE presents a significant challenge, but its potential for further
+investigation is promising, and it can be used to evaluate the ability of a
+model to generate explanations. AI safety and trustworthiness need more
+attention, and this work makes the first step to explore the medical
+interpretability of LLMs. The dataset is available at
+https://github.com/HITsz-TMG/ExplainCPE.
+"
+MAILEX: Email Event and Argument Extraction,Saurabh Srivastava,http://arxiv.org/pdf/2305.13469v2.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13469v2.pdf," In this work, we present the first dataset, MailEx, for performing event
+extraction from conversational email threads. To this end, we first proposed a
+new taxonomy covering 10 event types and 76 arguments in the email domain. Our
+final dataset includes 1.5K email threads and ~4K emails, which are annotated
+with totally ~8K event instances. To understand the task challenges, we
+conducted a series of experiments comparing three types of approaches, i.e.,
+fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot
+in-context learning. Our results showed that the task of email event extraction
+is far from being addressed, due to challenges lying in, e.g., extracting
+non-continuous, shared trigger spans, extracting non-named entity arguments,
+and modeling the email conversational history. Our work thus suggests more
+future investigations in this domain-specific event extraction task.
+"
+Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken Language Understanding,Mutian He,http://arxiv.org/pdf/2305.13512v2.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",2305.13512v2.pdf," Recently, large pretrained language models have demonstrated strong language
+understanding capabilities. This is particularly reflected in their zero-shot
+and in-context learning abilities on downstream tasks through prompting. To
+assess their impact on spoken language understanding (SLU), we evaluate several
+such models like ChatGPT and OPT of different sizes on multiple benchmarks. We
+verify the emergent ability unique to the largest models as they can reach
+intent classification accuracy close to that of supervised models with zero or
+few shots on various languages given oracle transcripts. By contrast, the
+results for smaller models fitting a single GPU fall far behind. We note that
+the error cases often arise from the annotation scheme of the dataset;
+responses from ChatGPT are still reasonable. We show, however, that the model
+is worse at slot filling, and its performance is sensitive to ASR errors,
+suggesting serious challenges for the application of those textual models on
+SLU.
+"
+LogicLLM: Exploring Self-supervised Logic-enhanced Training for Large Language Models,Fangkai Jiao,http://arxiv.org/pdf/2305.13718v2.pdf,2023-05-23,['cs.cl'],2305.13718v2.pdf," Existing efforts to improve logical reasoning ability of language models have
+predominantly relied on supervised fine-tuning, hindering generalization to new
+domains and/or tasks. The development of Large Language Models (LLMs) has
+demonstrated the capacity of compressing abundant knowledge into a single
+proxy, enabling them to tackle multiple tasks effectively. Our preliminary
+experiments, nevertheless, show that LLMs do not show capability on logical
+reasoning. 
The performance of LLMs on logical reasoning benchmarks is far +behind the existing state-of-the-art baselines. In this paper, we make the +first attempt to investigate the feasibility of incorporating logical knowledge +through self-supervised post-training, and activating it via in-context +learning, which we termed as LogicLLM. Specifically, we devise an +auto-regressive objective variant of MERIt and integrate it with two LLM +series, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to +13 billion. The results on two challenging logical reasoning benchmarks +demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive +ablation studies to analyze the key factors in designing logic-oriented proxy +tasks. +" +Make a Choice! Knowledge Base Question Answering with In-Context Learning,Chuanyuan Tan,http://arxiv.org/pdf/2305.13972v1.pdf,2023-05-23,['cs.cl'],2305.13972v1.pdf," Question answering over knowledge bases (KBQA) aims to answer factoid +questions with a given knowledge base (KB). Due to the large scale of KB, +annotated data is impossible to cover all fact schemas in KB, which poses a +challenge to the generalization ability of methods that require a sufficient +amount of annotated data. Recently, LLMs have shown strong few-shot performance +in many NLP tasks. We expect LLM can help existing methods improve their +generalization ability, especially in low-resource situations. In this paper, +we present McL-KBQA, a framework that incorporates the few-shot ability of LLM +into the KBQA method via ICL-based multiple choice and then improves the +effectiveness of the QA tasks. Experimental results on two KBQA datasets +demonstrate the competitive performance of McL-KBQA with strong improvements in +generalization. We expect to explore a new way to QA tasks from KBQA in +conjunction with LLM, how to generate answers normatively and correctly with +strong generalization. +" +CTQScorer: Combining Multiple Features for In-context Example Selection for Machine Translation,Aswanth Kumar,http://arxiv.org/pdf/2305.14105v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14105v2.pdf," Large language models have demonstrated the capability to perform on machine +translation when the input is prompted with a few examples (in-context +learning). Translation quality depends on various features of the selected +examples, such as their quality and relevance, but previous work has +predominantly focused on individual features in isolation. In this paper, we +propose a general framework for combining different features influencing +example selection. We learn a regression model, CTQ Scorer (Contextual +Translation Quality), that selects examples based on multiple features in order +to maximize the translation quality. On multiple language pairs and language +models, we show that CTQ Scorer helps significantly outperform random selection +as well as strong single-factor baselines reported in the literature. We also +see an improvement of over 2.5 COMET points on average with respect to a strong +BM25 retrieval-based baseline. +" +Empowering LLM-based Machine Translation with Cultural Awareness,Binwei Yao,http://arxiv.org/pdf/2305.14328v1.pdf,2023-05-23,['cs.cl'],2305.14328v1.pdf," Traditional neural machine translation (NMT) systems often fail to translate +sentences that contain culturally specific information. Most previous NMT +methods have incorporated external cultural knowledge during training, which +requires fine-tuning on low-frequency items specific to the culture. 
Recent +in-context learning utilizes lightweight prompts to guide large language models +(LLMs) to perform machine translation, however, whether such an approach works +in terms of injecting culture awareness into machine translation remains +unclear. To this end, we introduce a new data curation pipeline to construct a +culturally relevant parallel corpus, enriched with annotations of +cultural-specific entities. Additionally, we design simple but effective +prompting strategies to assist this LLM-based translation. Extensive +experiments show that our approaches can largely help incorporate cultural +knowledge into LLM-based machine translation, outperforming traditional NMT +systems in translating cultural-specific sentences. +" +Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models,Miaoran Li,http://arxiv.org/pdf/2305.14623v1.pdf,2023-05-24,['cs.cl'],2305.14623v1.pdf," Fact-checking is an essential task in NLP that is commonly utilized for +validating the factual accuracy of claims. Prior work has mainly focused on +fine-tuning pre-trained languages models on specific datasets, which can be +computationally intensive and time-consuming. With the rapid development of +large language models (LLMs), such as ChatGPT and GPT-3, researchers are now +exploring their in-context learning capabilities for a wide range of tasks. In +this paper, we aim to assess the capacity of LLMs for fact-checking by +introducing Self-Checker, a framework comprising a set of plug-and-play modules +that facilitate fact-checking by purely prompting LLMs in an almost zero-shot +setting. This framework provides a fast and efficient way to construct +fact-checking systems in low-resource environments. Empirical results +demonstrate the potential of Self-Checker in utilizing LLMs for fact-checking. +However, there is still significant room for improvement compared to SOTA +fine-tuned models, which suggests that LLM adoption could be a promising +approach for future fact-checking research. +" +ExpertPrompting: Instructing Large Language Models to be Distinguished Experts,Benfeng Xu,http://arxiv.org/pdf/2305.14688v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14688v1.pdf," The answering quality of an aligned large language model (LLM) can be +drastically improved if treated with proper crafting of prompts. In this paper, +we propose ExpertPrompting to elicit the potential of LLMs to answer as +distinguished experts. We first utilize In-Context Learning to automatically +synthesize detailed and customized descriptions of the expert identity for each +specific instruction, and then ask LLMs to provide answer conditioned on such +agent background. Based on this augmented prompting strategy, we produce a new +set of instruction-following data using GPT-3.5, and train a competitive +open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation +to show that 1) the expert data is of significantly higher quality than vanilla +answers, and 2) ExpertLLaMA outperforms existing open-source opponents and +achieves 96\% of the original ChatGPT's capability. All data and the +ExpertLLaMA model will be made publicly available at +\url{https://github.com/OFA-Sys/ExpertLLaMA}. 
+" +Adapting Language Models to Compress Contexts,Alexis Chevalier,http://arxiv.org/pdf/2305.14788v2.pdf,2023-05-24,['cs.cl'],2305.14788v2.pdf," Transformer-based language models (LMs) are powerful and widely-applicable +tools, but their usefulness is constrained by a finite context window and the +expensive computational cost of processing long text documents. We propose to +adapt pre-trained LMs into AutoCompressors. These language models are capable +of compressing long contexts into compact summary vectors, which are then +accessible to the model as soft prompts. Summary vectors are trained with an +unsupervised objective, whereby long documents are processed in segments, and +summary vectors from all previous segments are used in language modeling. We +fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show +that AutoCompressors can utilize long contexts to improve perplexity. We +evaluate AutoCompressors on in-context learning by compressing task +demonstrations and find that summary vectors are good substitutes for +plain-text demonstrations, increasing accuracy while reducing inference costs. +Finally, we explore the benefits of pre-computing summary vectors for large +corpora by applying summary vectors to retrievalaugmented language modeling and +a passage re-ranking task. Overall, AutoCompressors emerge as a simple and +inexpensive solution to extend the context window of LMs while speeding up +inference over long contexts. +" +ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games,Ruoyao Wang,http://arxiv.org/pdf/2305.14879v2.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14879v2.pdf," In this work, we investigate the capacity of language models to generate +explicit, interpretable, and interactive world models of scientific and +common-sense reasoning tasks. We operationalize this as a task of generating +text games, expressed as hundreds of lines of Python code. To facilitate this +task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a +corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We +empirically demonstrate that GPT-4 can use these games as templates for +single-shot in-context learning, successfully producing runnable games on +unseen topics in 28% of cases. When allowed to self-reflect on program errors, +game runnability substantially increases to 57%. While evaluating simulation +fidelity is labor-intensive, we introduce a suite of automated metrics to +assess game fidelity, technical validity, adherence to task specifications, and +winnability, showing a high degree of agreement with expert human ratings. We +pose this as a challenge task to spur further development at the juncture of +world modeling and code generation. +" +Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning,Tianqing Fang,http://arxiv.org/pdf/2305.14970v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14970v1.pdf," Event temporal reasoning aims at identifying the temporal relations between +two or more events. However, knowledge conflicts arise when there is a mismatch +between the actual temporal relations of events in the context and the prior +knowledge or biases learned by the model. We first systematically define +distinct kinds of bias in event temporal reasoning, which include event +relation prior bias, tense bias, narrative bias, and dependency bias, as +indicators to study knowledge conflicts. 
To mitigate such event-related +knowledge conflict, we introduce a Counterfactual Data Augmentation based +method that can be applied to both Pre-trained Language Models (PLMs) and Large +Language Models (LLMs) either as additional training data or demonstrations for +In-Context Learning. Experiments suggest the importance of mitigating knowledge +conflicts in event temporal reasoning tasks for reducing hallucination and +highlight the potential of counterfactual data augmentation for improving model +performance. +" +Boosting Cross-lingual Transferability in Multilingual Models via In-Context Learning,Sunkyoung Kim,http://arxiv.org/pdf/2305.15233v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.15233v1.pdf," Existing cross-lingual transfer (CLT) prompting methods are only concerned +with monolingual demonstration examples in the source language. In this paper, +we propose In-CLT, a novel cross-lingual transfer prompting method that +leverages both source and target languages to construct the demonstration +examples. We conduct comprehensive evaluations on multilingual benchmarks, +focusing on question answering tasks. Experiment results show that In-CLT +prompt not only improves multilingual models' cross-lingual transferability, +but also demonstrates remarkable unseen language generalization ability. In-CLT +prompting, in particular, improves model performance by 10 to 20\% points on +average when compared to prior cross-lingual transfer approaches. We also +observe the surprising performance gain on the other multilingual benchmarks, +especially in reasoning tasks. Furthermore, we investigate the relationship +between lexical similarity and pre-training corpora in terms of the +cross-lingual transfer gap. +" +A Mechanism for Solving Relational Tasks in Transformer Language Models,Jack Merullo,http://arxiv.org/pdf/2305.16130v2.pdf,2023-05-25,"['cs.cl', 'cs.lg']",2305.16130v2.pdf," A primary criticism towards language models (LMs) is their inscrutability. +This paper presents evidence that, despite their size and complexity, LMs +sometimes exploit a simple computational mechanism to solve one-to-one +relational tasks (e.g., capital_of(Poland)=Warsaw). We investigate a range of +language model sizes (from 124M parameters to 176B parameters) in an in-context +learning setting, and find that for a variety of tasks (involving capital +cities, upper-casing, and past-tensing) a key part of the mechanism reduces to +a simple linear update typically applied by the feedforward (FFN) networks. +These updates also tend to promote the output of the relation in a +content-independent way (e.g., encoding Poland:Warsaw::China:Beijing), +revealing a predictable pattern that these models take in solving these tasks. +We further show that this mechanism is specific to tasks that require retrieval +from pretraining memory, rather than retrieval from local context. Our results +contribute to a growing body of work on the mechanistic interpretability of +LLMs, and offer reason to be optimistic that, despite the massive and +non-linear nature of the models, the strategies they ultimately use to solve +tasks can sometimes reduce to familiar and even intuitive algorithms. 
+" +Large Language Models Are Partially Primed in Pronoun Interpretation,Suet-Ying Lam,http://arxiv.org/pdf/2305.16917v1.pdf,2023-05-26,['cs.cl'],2305.16917v1.pdf," While a large body of literature suggests that large language models (LLMs) +acquire rich linguistic representations, little is known about whether they +adapt to linguistic biases in a human-like way. The present study probes this +question by asking whether LLMs display human-like referential biases using +stimuli and procedures from real psycholinguistic experiments. Recent +psycholinguistic studies suggest that humans adapt their referential biases +with recent exposure to referential patterns; closely replicating three +relevant psycholinguistic experiments from Johnson & Arnold (2022) in an +in-context learning (ICL) framework, we found that InstructGPT adapts its +pronominal interpretations in response to the frequency of referential patterns +in the local discourse, though in a limited fashion: adaptation was only +observed relative to syntactic but not semantic biases. By contrast, FLAN-UL2 +fails to generate meaningful patterns. Our results provide further evidence +that contemporary LLMs discourse representations are sensitive to syntactic +patterns in the local context but less so to semantic patterns. Our data and +code are available at \url{https://github.com/zkx06111/llm_priming}. +" +A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks,Jacob Abernethy,http://arxiv.org/pdf/2305.17040v1.pdf,2023-05-26,"['cs.lg', 'cs.cl']",2305.17040v1.pdf," We study the phenomenon of \textit{in-context learning} (ICL) exhibited by +large language models, where they can adapt to a new learning task, given a +handful of labeled examples, without any explicit parameter optimization. Our +goal is to explain how a pre-trained transformer model is able to perform ICL +under reasonable assumptions on the pre-training process and the downstream +tasks. We posit a mechanism whereby a transformer can achieve the following: +(a) receive an i.i.d. sequence of examples which have been converted into a +prompt using potentially-ambiguous delimiters, (b) correctly segment the prompt +into examples and labels, (c) infer from the data a \textit{sparse linear +regressor} hypothesis, and finally (d) apply this hypothesis on the given test +example and return a predicted label. We establish that this entire procedure +is implementable using the transformer mechanism, and we give sample complexity +guarantees for this learning framework. Our empirical findings validate the +challenge of segmentation, and we show a correspondence between our posited +mechanisms and observed attention maps for step (c). +" +Augmenting Large Language Model Translators via Translation Memories,Yongyu Mu,http://arxiv.org/pdf/2305.17367v1.pdf,2023-05-27,['cs.cl'],2305.17367v1.pdf," Using translation memories (TMs) as prompts is a promising approach to +in-context learning of machine translation models. In this work, we take a step +towards prompting large language models (LLMs) with TMs and making them better +translators. We find that the ability of LLMs to ``understand'' prompts is +indeed helpful for making better use of TMs. Experiments show that the results +of a pre-trained LLM translator can be greatly improved by using high-quality +TM-based prompts. These results are even comparable to those of the +state-of-the-art NMT systems which have access to large-scale in-domain +bilingual data and are well tuned on the downstream tasks. 
+" +In-Context Analogical Reasoning with Pre-Trained Language Models,Xiaoyang Hu,http://arxiv.org/pdf/2305.17626v2.pdf,2023-05-28,"['cs.ai', 'cs.cl', 'cs.lg']",2305.17626v2.pdf," Analogical reasoning is a fundamental capacity of human cognition that allows +us to reason abstractly about novel situations by relating them to past +experiences. While it is thought to be essential for robust reasoning in AI +systems, conventional approaches require significant training and/or +hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by +cognitive science research that has found connections between human language +and analogy-making, we explore the use of intuitive language-based abstractions +to support analogy in AI systems. Specifically, we apply large pre-trained +language models (PLMs) to visual Raven's Progressive Matrices (RPM), a common +relational reasoning test. By simply encoding the perceptual features of the +problem into language form, we find that PLMs exhibit a striking capacity for +zero-shot relational reasoning, exceeding human performance and nearing +supervised vision-based methods. We explore different encodings that vary the +level of abstraction over task features, finding that higher-level abstractions +further strengthen PLMs' analogical reasoning. Our detailed analysis reveals +insights on the role of model complexity, in-context learning, and prior +knowledge in solving RPM tasks. +" +Towards Explainable Conversational Recommender Systems,Shuyu Guo,http://arxiv.org/pdf/2305.18363v1.pdf,2023-05-27,"['cs.ir', 'cs.ai']",2305.18363v1.pdf," Explanations in conventional recommender systems have demonstrated benefits +in helping the user understand the rationality of the recommendations and +improving the system's efficiency, transparency, and trustworthiness. In the +conversational environment, multiple contextualized explanations need to be +generated, which poses further challenges for explanations. To better measure +explainability in conversational recommender systems (CRS), we propose ten +evaluation perspectives based on concepts from conventional recommender systems +together with the characteristics of CRS. We assess five existing CRS benchmark +datasets using these metrics and observe the necessity of improving the +explanation quality of CRS. To achieve this, we conduct manual and automatic +approaches to extend these dialogues and construct a new CRS dataset, namely +Explainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues with +over 2,000 high-quality rewritten explanations. We compare two baseline +approaches to perform explanation generation based on E-ReDial. Experimental +results suggest that models trained on E-ReDial can significantly improve +explainability while introducing knowledge into the models can further improve +the performance. GPT-3 in the in-context learning setting can generate more +realistic and diverse movie descriptions. In contrast, T5 training on E-ReDial +can better generate clear reasons for recommendations based on user +preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial. +" +Grammar Prompting for Domain-Specific Language Generation with Large Language Models,Bailin Wang,http://arxiv.org/pdf/2305.19234v3.pdf,2023-05-30,"['cs.cl', 'cs.ai']",2305.19234v3.pdf," Large language models (LLMs) can learn to perform a wide range of natural +language tasks from just a handful of in-context examples. 
However, for +generating strings from highly structured languages (e.g., semantic parsing to +complex domain-specific languages), it is challenging for the LLM to generalize +from just a few exemplars. We propose \emph{grammar prompting}, a simple +approach to enable LLMs to use external knowledge and domain-specific +constraints, expressed through a grammar in Backus--Naur Form (BNF), during +in-context learning. Grammar prompting augments each demonstration example with +a specialized grammar that is minimally sufficient for generating the +particular output example, where the specialized grammar is a subset of the +full DSL grammar. For inference, the LLM first predicts a BNF grammar given a +test input, and then generates the output according to the rules of the +grammar. Experiments demonstrate that grammar prompting can enable LLMs to +perform competitively on a diverse set of DSL generation tasks, including +semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and +SMILES-based molecule generation. +" +Contextual Vision Transformers for Robust Representation Learning,Yujia Bao,http://arxiv.org/pdf/2305.19402v2.pdf,2023-05-30,"['cs.cv', 'cs.ai', 'cs.cl']",2305.19402v2.pdf," We introduce Contextual Vision Transformers (ContextViT), a method designed +to generate robust image representations for datasets experiencing shifts in +latent factors across various groups. Derived from the concept of in-context +learning, ContextViT incorporates an additional context token to encapsulate +group-specific information. This integration allows the model to adjust the +image representation in accordance with the group-specific context. +Specifically, for a given input image, ContextViT maps images with identical +group membership into this context token, which is appended to the input image +tokens. Additionally, we introduce a context inference network to predict such +tokens on-the-fly, given a batch of samples from the group. This enables +ContextViT to adapt to new testing distributions during inference time. We +demonstrate the efficacy of ContextViT across a wide range of applications. In +supervised fine-tuning, we show that augmenting pre-trained ViTs with our +proposed context conditioning mechanism results in consistent improvements in +out-of-distribution generalization on iWildCam and FMoW. We also investigate +self-supervised representation learning with ContextViT. Our experiments on the +Camelyon17 pathology imaging benchmark and the JUMP-CP microscopy imaging +benchmark demonstrate that ContextViT excels in learning stable image +featurizations amidst distribution shift, consistently outperforming its ViT +counterpart. +" +Self-Verification Improves Few-Shot Clinical Information Extraction,Zelalem Gero,http://arxiv.org/pdf/2306.00024v1.pdf,2023-05-30,"['cs.cl', 'cs.lg']",2306.00024v1.pdf," Extracting patient information from unstructured text is a critical task in +health decision-support and clinical research. Large language models (LLMs) +have shown the potential to accelerate clinical curation via few-shot +in-context learning, in contrast to supervised learning which requires much +more costly human annotations. However, despite drastic advances in modern LLMs +such as GPT-4, they still struggle with issues regarding accuracy and +interpretability, especially in mission-critical domains such as health. 
Here, +we explore a general mitigation framework using self-verification, which +leverages the LLM to provide provenance for its own extraction and check its +own outputs. This is made possible by the asymmetry between verification and +generation, where the latter is often much easier than the former. Experimental +results show that our method consistently improves accuracy for various LLMs in +standard clinical information extraction tasks. Additionally, self-verification +yields interpretations in the form of a short text span corresponding to each +output, which makes it very efficient for human experts to audit the results, +paving the way towards trustworthy extraction of clinical information in +resource-constrained scenarios. To facilitate future research in this +direction, we release our code and prompts. +" +ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity?,Michael Heck,http://arxiv.org/pdf/2306.01386v1.pdf,2023-06-02,"['cs.cl', 'cs.ai']",2306.01386v1.pdf," Recent research on dialogue state tracking (DST) focuses on methods that +allow few- and zero-shot transfer to new domains or schemas. However, +performance gains heavily depend on aggressive data augmentation and +fine-tuning of ever larger language model based architectures. In contrast, +general purpose language models, trained on large amounts of diverse data, hold +the promise of solving any kind of task without task-specific training. We +present preliminary experimental results on the ChatGPT research preview, +showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. +Despite our findings, we argue that properties inherent to general purpose +models limit their ability to replace specialized systems. We further theorize +that the in-context learning capabilities of such models will likely become +powerful tools to support the development of dedicated and dynamic dialogue +state trackers. +" +Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models,Fengzhu Zeng,http://arxiv.org/pdf/2306.02569v1.pdf,2023-06-05,['cs.cl'],2306.02569v1.pdf," Few-shot or zero-shot fact verification only relies on a few or no labeled +training examples. In this paper, we propose a novel method called ProToCo, to +\underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be +\underline{Co}nsistent, for improving the factuality assessment capability of +PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair, +ProToCo generates multiple variants of the claim with different relations and +frames a simple consistency mechanism as constraints for making compatible +predictions across these variants. We update PLMs by using parameter-efficient +fine-tuning (PEFT), leading to more accurate predictions in few-shot and +zero-shot fact verification tasks. Our experiments on three public verification +datasets show that ProToCo significantly outperforms state-of-the-art few-shot +fact verification baselines. With a small number of unlabeled instances, +ProToCo also outperforms the strong zero-shot learner T0 on zero-shot +verification. Compared to large PLMs using in-context learning (ICL) method, +ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in +both few- and zero-shot settings. 
+" +STEPS: A Benchmark for Order Reasoning in Sequential Tasks,Weizhi Wang,http://arxiv.org/pdf/2306.04441v1.pdf,2023-06-07,['cs.cl'],2306.04441v1.pdf," Various human activities can be abstracted into a sequence of actions in +natural text, i.e. cooking, repairing, manufacturing, etc. Such action +sequences heavily depend on the executing order, while disorder in action +sequences leads to failure of further task execution by robots or AI agents. +Therefore, to verify the order reasoning capability of current neural models in +sequential tasks, we propose a challenging benchmark , named STEPS. STEPS +involves two subtask settings, focusing on determining the rationality of given +next step in recipes and selecting the reasonable step from the multi-choice +question, respectively. We describe the data construction and task +formulations, and benchmark most of significant Large Language Models (LLMs). +The experimental results demonstrate 1) The commonsense reasoning of action +orders in sequential tasks are challenging to resolve via zero-shot prompting +or few-shot in-context learning for LLMs; 2) Prompting method still +significantly lags behind tuning-based method on STEPS. +" +Modular Visual Question Answering via Code Generation,Sanjay Subramanian,http://arxiv.org/pdf/2306.05392v1.pdf,2023-06-08,['cs.cl'],2306.05392v1.pdf," We present a framework that formulates visual question answering as modular +code generation. In contrast to prior work on modular approaches to VQA, our +approach requires no additional training and relies on pre-trained language +models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA +examples used for in-context learning. The generated Python programs invoke and +compose the outputs of the visual models using arithmetic and conditional +logic. Our approach improves accuracy on the COVR dataset by at least 3% and on +the GQA dataset by roughly 2% compared to the few-shot baseline that does not +employ code generation. +" +Measuring and Modifying Factual Knowledge in Large Language Models,Pouya Pezeshkpour,http://arxiv.org/pdf/2306.06264v1.pdf,2023-06-09,"['cs.cl', 'cs.lg']",2306.06264v1.pdf," Large Language Models (LLMs) store an extensive amount of factual knowledge +obtained from vast collections of text. To effectively utilize these models for +downstream tasks, it is crucial to have reliable methods for measuring their +knowledge. However, existing approaches for knowledge measurement have certain +limitations, and despite recent efforts, they fail to provide accurate +measurements and the necessary insights for modifying the knowledge within +LLMs. In this work, we employ information theory-based measurements to provide +a framework estimating the factual knowledge contained within large language +models. More specifically, we measure knowledge by analyzing the LLM's +prediction probability distribution before and after instilling the target +knowledge, employing metrics such as entropy and KL-divergence. Introducing our +metrics, we first assess their accuracy in comparison to previous ranking-based +methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we +explore two prominent methods of knowledge instillation, discovering that LLMs +exhibit limitations in capturing new knowledge under specific circumstances for +one of these methods. Lastly, we demonstrate the applicability of our methods +in extracting unlearned and mislearned facts in LLMs through their application +to in-context learning. 
We make code and data for all methods and experiments +in this paper publicly available. +" +A Survey on Multimodal Large Language Models,Shukang Yin,http://arxiv.org/pdf/2306.13549v1.pdf,2023-06-23,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2306.13549v1.pdf," Multimodal Large Language Model (MLLM) recently has been a new rising +research hotspot, which uses powerful Large Language Models (LLMs) as a brain +to perform multimodal tasks. The surprising emergent capabilities of MLLM, such +as writing stories based on images and OCR-free math reasoning, are rare in +traditional methods, suggesting a potential path to artificial general +intelligence. In this paper, we aim to trace and summarize the recent progress +of MLLM. First of all, we present the formulation of MLLM and delineate its +related concepts. Then, we discuss the key techniques and applications, +including Multimodal Instruction Tuning (M-IT), Multimodal In-Context Learning +(M-ICL), Multimodal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning +(LAVR). Finally, we discuss existing challenges and point out promising +research directions. In light of the fact that the era of MLLM has only just +begun, we will keep updating this survey and hope it can inspire more research. +An associated GitHub link collecting the latest papers is available at +https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models. +" +Potential Benefits of Employing Large Language Models in Research in Moral Education and Development,Hyemin Han,http://arxiv.org/pdf/2306.13805v2.pdf,2023-06-23,"['cs.cy', 'cs.ai']",2306.13805v2.pdf," Recently, computer scientists have developed large language models (LLMs) by +training prediction models with large-scale language corpora and human +reinforcements. The LLMs have become one promising way to implement artificial +intelligence with accuracy in various fields. Interestingly, recent LLMs +possess emergent functional features that emulate sophisticated human +cognition, especially in-context learning and the chain of thought, which were +unavailable in previous prediction models. In this paper, I will examine how +LLMs might contribute to moral education and development research. To achieve +this goal, I will review the most recently published conference papers and +ArXiv preprints to overview the novel functional features implemented in LLMs. +I also intend to conduct brief experiments with ChatGPT to investigate how LLMs +behave while addressing ethical dilemmas and external feedback. The results +suggest that LLMs might be capable of solving dilemmas based on reasoning and +revising their reasoning process with external input. Furthermore, a +preliminary experimental result from the moral exemplar test may demonstrate +that exemplary stories can elicit moral elevation in LLMs as do they among +human participants. I will discuss the potential implications of LLMs on +research on moral education and development with the results. +" +DisasterResponseGPT: Large Language Models for Accelerated Plan of Action Development in Disaster Response Scenarios,Vinicius G. Goecks,http://arxiv.org/pdf/2306.17271v1.pdf,2023-06-29,"['cs.lg', 'i.2.7; j.7; k.4.0']",2306.17271v1.pdf," The development of plans of action in disaster response scenarios is a +time-consuming process. Large Language Models (LLMs) offer a powerful solution +to expedite this process through in-context learning. 
This study presents +DisasterResponseGPT, an algorithm that leverages LLMs to generate valid plans +of action quickly by incorporating disaster response and planning guidelines in +the initial prompt. In DisasterResponseGPT, users input the scenario +description and receive a plan of action as output. The proposed method +generates multiple plans within seconds, which can be further refined following +the user's feedback. Preliminary results indicate that the plans of action +developed by DisasterResponseGPT are comparable to human-generated ones while +offering greater ease of modification in real-time. This approach has the +potential to revolutionize disaster response operations by enabling rapid +updates and adjustments during the plan's execution. +" +Meta-Reasoning: Semantics-Symbol Deconstruction For Large Language Models,Yiming Wang,http://arxiv.org/pdf/2306.17820v2.pdf,2023-06-30,['cs.cl'],2306.17820v2.pdf," Neural-symbolic methods have shown their effectiveness in enhancing the +reasoning abilities of large language models (LLMs). However, existing methods +primarily rely on mapping natural languages to more syntactically complete +formal languages (e.g., Python and SQL). Those approaches necessitate that +reasoning tasks be convertible into programs, which cater more to the computer +execution mindset and deviate from human reasoning habits. To expand the +real-world applicability and flexibility of symbolic methods, we propose +Meta-Reasoning from the scope of linguistics itself. This method empowers LLMs +to deconstruct questions and effectively capture more generalized knowledge +autonomously. We find that Meta-Reasoning achieves improved in-context learning +efficiency, reasoning accuracy, and output stability in six arithmetic and +symbolic reasoning tasks. In particular, when applied to symbolic reasoning +tasks such as Tracking Shuffled Objects, GPT-3 (text-davinci-002) surpasses the +few-shot Chain-of-Thought prompting approach (+37.7%), with 99% accuracy after +a single demonstration of Meta-Reasoning. +" +Assessing the efficacy of large language models in generating accurate teacher responses,Yann Hicke,http://arxiv.org/pdf/2307.04274v1.pdf,2023-07-09,"['cs.cl', 'cs.lg']",2307.04274v1.pdf," (Tack et al., 2023) organized the shared task hosted by the 18th Workshop on +Innovative Use of NLP for Building Educational Applications on generation of +teacher language in educational dialogues. Following the structure of the +shared task, in this study, we attempt to assess the generative abilities of +large language models in providing informative and helpful insights to +students, thereby simulating the role of a knowledgeable teacher. To this end, +we present an extensive evaluation of several benchmarking generative models, +including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and +fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we +fine-tuned the Flan-T5 model using reinforcement learning. Our experimental +findings on the Teacher-Student Chatroom Corpus subset indicate the efficacy of +GPT-4 over other fine-tuned models, measured using BERTScore and DialogRPT. + We hypothesize that several dataset characteristics, including sampling, +representativeness, and dialog completeness, pose significant challenges to +fine-tuning, thus contributing to the poor generalizability of the fine-tuned +models. 
Finally, we note the need for these generative models to be evaluated +with a metric that relies not only on dialog coherence and matched language +modeling distribution but also on the model's ability to showcase pedagogical +skills. +" +Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models,Lautaro Estienne,http://arxiv.org/pdf/2307.06713v3.pdf,2023-07-13,"['cs.cl', 'cs.lg']",2307.06713v3.pdf," A wide variety of natural language tasks are currently being addressed with +large-scale language models (LLMs). These models are usually trained with a +very large amount of unsupervised text data and adapted to perform a downstream +natural language task using methods like fine-tuning, calibration or in-context +learning. In this work, we propose an approach to adapt the prior class +distribution to perform text classification tasks without the need for labelled +samples and only few in-domain sample queries. The proposed approach treats the +LLM as a black box, adding a stage where the model posteriors are calibrated to +the task. Results show that these methods outperform the un-adapted model for +different number of training shots in the prompt and a previous approach were +calibration is performed without using any adaptation data. +" +Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation,Yahui Fu,http://arxiv.org/pdf/2308.00085v2.pdf,2023-07-28,"['cs.cl', 'cs.ai']",2308.00085v2.pdf," Recent approaches to empathetic response generation try to incorporate +commonsense knowledge or reasoning about the causes of emotions to better +understand the user's experiences and feelings. However, these approaches +mainly focus on understanding the causalities of context from the user's +perspective, ignoring the system's perspective. In this paper, we propose a +commonsense-based causality explanation approach for diverse empathetic +response generation that considers both the user's perspective (user's desires +and reactions) and the system's perspective (system's intentions and +reactions). We enhance ChatGPT's ability to reason for the system's perspective +by integrating in-context learning with commonsense knowledge. Then, we +integrate the commonsense-based causality explanation with both ChatGPT and a +T5-based model. Experimental evaluations demonstrate that our method +outperforms other comparable methods on both automatic and human evaluations. +" +Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models,Zheyu Zhang,http://arxiv.org/pdf/2308.01684v2.pdf,2023-08-03,['cs.cl'],2308.01684v2.pdf," Large Language Models (LLMs) demonstrate remarkable performance on a variety +of natural language understanding (NLU) tasks, primarily due to their +in-context learning ability. This ability could be applied to building babylike +models, i.e. models at small scales, improving training efficiency. In this +paper, we propose a ""CoThought"" pipeline, which efficiently trains smaller +""baby"" language models (BabyLMs) by leveraging the Chain of Thought prompting +of LLMs. Our pipeline restructures a dataset of less than 100M in size using +GPT-3.5-turbo, transforming it into task-oriented, human-readable texts that +are comparable to the school texts for language learners. The BabyLM is then +pretrained on this restructured dataset in a RoBERTa fashion. 
In evaluations +across 4 benchmarks, our BabyLM outperforms the vanilla RoBERTa in 10 +linguistic, NLU, and question-answering tasks by more than 3 points, showing a +superior ability to extract contextual information. These results suggest that +compact LMs pretrained on small, LLM-restructured data can better understand +tasks and achieve improved performance. +" +FLIRT: Feedback Loop In-context Red Teaming,Ninareh Mehrabi,http://arxiv.org/pdf/2308.04265v1.pdf,2023-08-08,['cs.ai'],2308.04265v1.pdf," Warning: this paper contains content that may be inappropriate or offensive. + As generative models become available for public use in various applications, +testing and analyzing vulnerabilities of these models has become a priority. +Here we propose an automatic red teaming framework that evaluates a given model +and exposes its vulnerabilities against unsafe and inappropriate content +generation. Our framework uses in-context learning in a feedback loop to red +team models and trigger them into unsafe content generation. We propose +different in-context attack strategies to automatically learn effective and +diverse adversarial prompts for text-to-image models. Our experiments +demonstrate that compared to baseline approaches, our proposed strategy is +significantly more effective in exposing vulnerabilities in Stable Diffusion +(SD) model, even when the latter is enhanced with safety features. Furthermore, +we demonstrate that the proposed framework is effective for red teaming +text-to-text models, resulting in significantly higher toxic response +generation rate compared to previously reported numbers. +" +JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models,Peike Li,http://arxiv.org/pdf/2308.04729v1.pdf,2023-08-09,"['cs.sd', 'cs.ai', 'cs.lg', 'cs.mm', 'eess.as']",2308.04729v1.pdf," Music generation has attracted growing interest with the advancement of deep +generative models. However, generating music conditioned on textual +descriptions, known as text-to-music, remains challenging due to the complexity +of musical structures and high sampling rate requirements. Despite the task's +significance, prevailing generative models exhibit limitations in music +quality, computational efficiency, and generalization. This paper introduces +JEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is a +diffusion model incorporating both autoregressive and non-autoregressive +training. Through in-context learning, JEN-1 performs various generation tasks +including text-guided music generation, music inpainting, and continuation. +Evaluations demonstrate JEN-1's superior performance over state-of-the-art +methods in text-music alignment and music quality while maintaining +computational efficiency. Our demos are available at +http://futureverse.com/research/jen/demos/jen1 +" +Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models,Bilgehan Sel,http://arxiv.org/pdf/2308.10379v2.pdf,2023-08-20,"['cs.cl', 'cs.ai']",2308.10379v2.pdf," Current literature, aiming to surpass the ""Chain-of-Thought"" approach, often +resorts to an external modus operandi involving halting, modifying, and then +resuming the generation process to boost Large Language Models' (LLMs) +reasoning capacities. This mode escalates the number of query requests, leading +to increased costs, memory, and computational overheads. 
Addressing this, we +propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through +algorithmic reasoning pathways, pioneering a new mode of in-context learning. +By employing algorithmic examples, we exploit the innate recurrence dynamics of +LLMs, expanding their idea exploration with merely one or a few queries. Our +technique outperforms earlier single-query methods and stands on par with a +recent multi-query strategy that employs an extensive tree search algorithm. +Intriguingly, our results suggest that instructing an LLM using an algorithm +can lead to performance surpassing that of the algorithm itself, hinting at +LLM's inherent ability to weave its intuition into optimized searches. We probe +into the underpinnings of our method's efficacy and its nuances in application. +" +Building Emotional Support Chatbots in the Era of LLMs,Zhonghua Zheng,http://arxiv.org/pdf/2308.11584v1.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.11584v1.pdf," The integration of emotional support into various conversational scenarios +presents profound societal benefits, such as social interactions, mental health +counseling, and customer service. However, there are unsolved challenges that +hinder real-world applications in this field, including limited data +availability and the absence of well-accepted model training paradigms. This +work endeavors to navigate these challenges by harnessing the capabilities of +Large Language Models (LLMs). We introduce an innovative methodology that +synthesizes human insights with the computational prowess of LLMs to curate an +extensive emotional support dialogue dataset. Our approach is initiated with a +meticulously designed set of dialogues spanning diverse scenarios as generative +seeds. By utilizing the in-context learning potential of ChatGPT, we +recursively generate an ExTensible Emotional Support dialogue dataset, named +ExTES. Following this, we deploy advanced tuning techniques on the LLaMA model, +examining the impact of diverse training strategies, ultimately yielding an LLM +meticulously optimized for emotional support interactions. An exhaustive +assessment of the resultant model showcases its proficiency in offering +emotional support, marking a pivotal step in the realm of emotional support +bots and paving the way for subsequent research and implementations. +" +Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning,Jiasheng Ye,http://arxiv.org/pdf/2308.12219v2.pdf,2023-08-23,"['cs.cl', 'cs.ai', 'cs.lg']",2308.12219v2.pdf," The recent surge of generative AI has been fueled by the generative power of +diffusion probabilistic models and the scalable capabilities of large language +models. Despite their potential, it remains elusive whether diffusion language +models can solve general language tasks comparable to their autoregressive +counterparts. This paper demonstrates that scaling diffusion models w.r.t. +data, sizes, and tasks can effectively make them strong language learners. We +build competent diffusion language models at scale by first acquiring knowledge +from massive data via masked language modeling pretraining thanks to their +intrinsic connections. We then reprogram pretrained masked language models into +diffusion language models via diffusive adaptation, wherein task-specific +finetuning and instruction finetuning are explored to unlock their versatility +in solving general language tasks. 
Experiments show that scaling diffusion +language models consistently improves performance across downstream language +tasks. We further discover that instruction finetuning can elicit zero-shot and +few-shot in-context learning abilities that help tackle many unseen tasks by +following natural language instructions, and show promise in advanced and +challenging abilities such as reasoning. +" +Large Language Model as Autonomous Decision Maker,Yining Ye,http://arxiv.org/pdf/2308.12519v1.pdf,2023-08-24,['cs.cl'],2308.12519v1.pdf," While large language models (LLMs) exhibit impressive language understanding +and in-context learning abilities, their decision-making ability still heavily +relies on the guidance of task-specific expert knowledge when solving +real-world tasks. To unleash the potential of LLMs as autonomous decision +makers, this paper presents an approach JuDec to endow LLMs with the +self-judgment ability, enabling LLMs to achieve autonomous judgment and +exploration for decision making. Specifically, in JuDec, Elo-based +Self-Judgment Mechanism is designed to assign Elo scores to decision steps to +judge their values and utilities via pairwise comparisons between two solutions +and then guide the decision-searching process toward the optimal solution +accordingly. Experimental results on the ToolBench dataset demonstrate JuDec's +superiority over baselines, achieving over 10% improvement in Pass Rate on +diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT +API calls), highlighting its effectiveness and efficiency. +" +Breaking the Bank with ChatGPT: Few-Shot Text Classification for Finance,Lefteris Loukas,http://arxiv.org/pdf/2308.14634v1.pdf,2023-08-28,"['cs.cl', 'cs.ai', 'cs.lg', 'q-fin.cp']",2308.14634v1.pdf," We propose the use of conversational GPT models for easy and quick few-shot +text classification in the financial domain using the Banking77 dataset. Our +approach involves in-context learning with GPT-3.5 and GPT-4, which minimizes +the technical expertise required and eliminates the need for expensive GPU +computing while yielding quick and accurate results. Additionally, we fine-tune +other pre-trained, masked language models with SetFit, a recent contrastive +learning technique, to achieve state-of-the-art results both in full-data and +few-shot settings. Our findings show that querying GPT-3.5 and GPT-4 can +outperform fine-tuned, non-generative models even with fewer examples. However, +subscription fees associated with these solutions may be considered costly for +small organizations. Lastly, we find that generative models perform better on +the given task when shown representative samples selected by a human expert +rather than when shown random ones. We conclude that a) our proposed methods +offer a practical solution for few-shot tasks in datasets with limited label +availability, and b) our state-of-the-art results can inspire future work in +the area. +" +Gender-specific Machine Translation with Large Language Models,Eduardo Sánchez,http://arxiv.org/pdf/2309.03175v1.pdf,2023-09-06,['cs.cl'],2309.03175v1.pdf," Decoder-only Large Language Models (LLMs) have demonstrated potential in +machine translation (MT), albeit with performance slightly lagging behind +traditional encoder-decoder Neural Machine Translation (NMT) systems. However, +LLMs offer a unique advantage: the ability to control the properties of the +output through prompts. 
In this study, we harness this flexibility to explore +LLaMa's capability to produce gender-specific translations for languages with +grammatical gender. Our results indicate that LLaMa can generate +gender-specific translations with competitive accuracy and gender bias +mitigation when compared to NLLB, a state-of-the-art multilingual NMT system. +Furthermore, our experiments reveal that LLaMa's translations are robust, +showing significant performance drops when evaluated against opposite-gender +references in gender-ambiguous datasets but maintaining consistency in less +ambiguous contexts. This research provides insights into the potential and +challenges of using LLMs for gender-specific translations and highlights the +importance of in-context learning to elicit new tasks in LLMs. +" +Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty,Chen Ling,http://arxiv.org/pdf/2309.03433v1.pdf,2023-09-07,['cs.cl'],2309.03433v1.pdf," Open Information Extraction (OIE) task aims at extracting structured facts +from unstructured text, typically in the form of (subject, relation, object) +triples. Despite the potential of large language models (LLMs) like ChatGPT as +a general task solver, they lag behind state-of-the-art (supervised) methods in +OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant +context from relevant relations and generate structured output due to the +restrictions on fine-tuning the model. Second, LLMs generates responses +autoregressively based on probability, which makes the predicted relations lack +confidence. In this paper, we assess the capabilities of LLMs in improving the +OIE task. Particularly, we propose various in-context learning strategies to +enhance LLM's instruction-following ability and a demonstration uncertainty +quantification module to enhance the confidence of the generated relations. Our +experiments on three OIE benchmark datasets show that our approach holds its +own against established supervised methods, both quantitatively and +qualitatively. +" +EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets,Hongyuan Lu,http://arxiv.org/pdf/2309.04725v1.pdf,2023-09-09,['cs.cl'],2309.04725v1.pdf," Large language models (LLMs) have shown promising performance on various NLP +tasks via task prompting. And their performance can be further improved by +appending task demonstrations to the head of the prompt. And usually, a better +performance can be achieved with more demonstrations. However, asking the users +to write the demonstrations can be cumbersome. As a simple yet cost-effective +workaround, this paper proposes a novel method called EPA (\textbf{E}asy +\textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considers +augmenting prompts via demonstrations, we name it EPA as the name EDA is +already taken by a well-known NLP method \citep{wei-zou-2019-eda}.} that +effectively minimizes user efforts in writing demonstrations while improving +the model performance at the same time. EPA achieves these goals by +automatically augmenting the demonstrations with multiple sources/targets, +where each of them paraphrases each other. This is well motivated as augmenting +data via paraphrasing effectively improves neural language models. EPA thus +employs paraphrasing as an augmentation method for in-context learning. 
+Extensive experiments indicate that EPA effectively improves both NLU and NLG +tasks, covering from natural language inference to machine translation in +translating tens of languages.\footnote{Code and data will be released upon +publication.} +" +CONVERSER: Few-Shot Conversational Dense Retrieval with Synthetic Data Generation,Chao-Wei Huang,http://arxiv.org/pdf/2309.06748v1.pdf,2023-09-13,"['cs.cl', 'cs.ir']",2309.06748v1.pdf," Conversational search provides a natural interface for information retrieval +(IR). Recent approaches have demonstrated promising results in applying dense +retrieval to conversational IR. However, training dense retrievers requires +large amounts of in-domain paired data. This hinders the development of +conversational dense retrievers, as abundant in-domain conversations are +expensive to collect. In this paper, we propose CONVERSER, a framework for +training conversational dense retrievers with at most 6 examples of in-domain +dialogues. Specifically, we utilize the in-context learning capability of large +language models to generate conversational queries given a passage in the +retrieval corpus. Experimental results on conversational retrieval benchmarks +OR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparable +performance to fully-supervised models, demonstrating the effectiveness of our +proposed framework in few-shot conversational dense retrieval. All source code +and generated datasets are available at https://github.com/MiuLab/CONVERSER +" +Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer,Yongqi Wang,http://arxiv.org/pdf/2309.07566v1.pdf,2023-09-14,"['cs.sd', 'cs.ai', 'eess.as']",2309.07566v1.pdf," Direct speech-to-speech translation (S2ST) with discrete self-supervised +representations has achieved remarkable accuracy, but is unable to preserve the +speaker timbre of the source speech during translation. Meanwhile, the scarcity +of high-quality speaker-parallel data poses a challenge for learning style +transfer between source and target speech. We propose an S2ST framework with an +acoustic language model based on discrete units from a self-supervised model +and a neural codec for style transfer. The acoustic language model leverages +self-supervised in-context learning, acquiring the ability for style transfer +without relying on any speaker-parallel data, thereby overcoming the issue of +data scarcity. By using extensive training data, our model achieves zero-shot +cross-lingual style transfer on previously unseen source languages. Experiments +show that our model generates translated speeches with high fidelity and style +similarity. Audio samples are available at http://stylelm.github.io/ . +" +"Bridging Topic, Domain, and Language Shifts: An Evaluation of Comprehensive Out-of-Distribution Scenarios",Andreas Waldis,http://arxiv.org/pdf/2309.08316v1.pdf,2023-09-15,['cs.cl'],2309.08316v1.pdf," Language models (LMs) excel in in-distribution (ID) scenarios where train and +test data are independent and identically distributed. However, their +performance often degrades in real-world applications like argument mining. +Such degradation happens when new topics emerge, or other text domains and +languages become relevant. To assess LMs' generalization abilities in such +out-of-distribution (OOD) scenarios, we simulate such distribution shifts by +deliberately withholding specific instances for testing, as from the social +media domain or the topic Solar Energy. 
+ Unlike prior studies focusing on specific shifts and metrics in isolation, we +comprehensively analyze OOD generalization. We define three metrics to pinpoint +generalization flaws and propose eleven classification tasks covering topic, +domain, and language shifts. Overall, we find superior performance of +prompt-based fine-tuning, notably when train and test splits primarily differ +semantically. Simultaneously, in-context learning is more effective than +prompt-based or vanilla fine-tuning for tasks when training data embodies heavy +discrepancies in label distribution compared to testing data. This reveals a +crucial drawback of gradient-based learning: it biases LMs regarding such +structural obstacles. +" +Neural Machine Translation Models Can Learn to be Few-shot Learners,Raphael Reinauer,http://arxiv.org/pdf/2309.08590v1.pdf,2023-09-15,['cs.cl'],2309.08590v1.pdf," The emergent ability of Large Language Models to use a small number of +examples to learn to perform in novel domains and tasks, also called in-context +learning (ICL). In this work, we show that a much smaller model can be trained +to perform ICL by fine-tuning towards a specialized training objective, +exemplified on the task of domain adaptation for neural machine translation. +With this capacity for ICL, the model can take advantage of relevant few-shot +examples to adapt its output towards the domain. We compare the quality of this +domain adaptation to traditional supervised techniques and ICL with a +40B-parameter Large Language Model. Our approach allows efficient batch +inference on a mix of domains and outperforms state-of-the-art baselines in +terms of both translation quality and immediate adaptation rate, i.e. the +ability to reproduce a specific term after being shown a single example. +" +Few-Shot Adaptation for Parsing Contextual Utterances with LLMs,Kevin Lin,http://arxiv.org/pdf/2309.10168v1.pdf,2023-09-18,['cs.cl'],2309.10168v1.pdf," We evaluate the ability of semantic parsers based on large language models +(LLMs) to handle contextual utterances. In real-world settings, there typically +exists only a limited number of annotated contextual utterances due to +annotation cost, resulting in an imbalance compared to non-contextual +utterances. Therefore, parsers must adapt to contextual utterances with a few +training examples. We examine four major paradigms for doing so in +conversational semantic parsing i.e., Parse-with-Utterance-History, +Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To +facilitate such cross-paradigm comparisons, we construct +SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with +additional annotations. Experiments with in-context learning and fine-tuning +suggest that Rewrite-then-Parse is the most promising paradigm when +holistically considering parsing accuracy, annotation cost, and error types. +" +Toward Unified Controllable Text Generation via Regular Expression Instruction,Xin Zheng,http://arxiv.org/pdf/2309.10447v2.pdf,2023-09-19,"['cs.cl', 'cs.ai']",2309.10447v2.pdf," Controllable text generation is a fundamental aspect of natural language +generation, with numerous methods proposed for different constraint types. +However, these approaches often require significant architectural or decoding +modifications, making them challenging to apply to additional constraints or +resolve different constraint combinations. 
To address this, our paper +introduces Regular Expression Instruction (REI), which utilizes an +instruction-based mechanism to fully exploit regular expressions' advantages to +uniformly model diverse constraints. Specifically, our REI supports all popular +fine-grained controllable generation constraints, i.e., lexical, positional, +and length, as well as their complex combinations, via regular expression-style +instructions. Our method only requires fine-tuning on medium-scale language +models or few-shot, in-context learning on large language models, and requires +no further adjustment when applied to various constraint combinations. +Experiments demonstrate that our straightforward approach yields high success +rates and adaptability to various constraints while maintaining competitiveness +in automatic metrics and outperforming most previous baselines. +" +Language Modeling Is Compression,Grégoire Delétang,http://arxiv.org/pdf/2309.10668v1.pdf,2023-09-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.it', 'math.it']",2309.10668v1.pdf," It has long been established that predictive models can be transformed into +lossless compressors and vice versa. Incidentally, in recent years, the machine +learning community has focused on training increasingly large and powerful +self-supervised (language) models. Since these large language models exhibit +impressive predictive capabilities, they are well-positioned to be strong +compressors. In this work, we advocate for viewing the prediction problem +through the lens of compression and evaluate the compression capabilities of +large (foundation) models. We show that large language models are powerful +general-purpose predictors and that the compression viewpoint provides novel +insights into scaling laws, tokenization, and in-context learning. For example, +Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to +43.4% and LibriSpeech samples to 16.4% of their raw size, beating +domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. +Finally, we show that the prediction-compression equivalence allows us to use +any compressor (like gzip) to build a conditional generative model. +" +Language-Oriented Communication with Semantic Coding and Knowledge Distillation for Text-to-Image Generation,Hyelin Nam,http://arxiv.org/pdf/2309.11127v1.pdf,2023-09-20,"['eess.sp', 'cs.ai', 'cs.cl']",2309.11127v1.pdf," By integrating recent advances in large language models (LLMs) and generative +models into the emerging semantic communication (SC) paradigm, in this article +we put forward to a novel framework of language-oriented semantic communication +(LSC). In LSC, machines communicate using human language messages that can be +interpreted and manipulated via natural language processing (NLP) techniques +for SC efficiency. To demonstrate LSC's potential, we introduce three +innovative algorithms: 1) semantic source coding (SSC) which compresses a text +prompt into its key head words capturing the prompt's syntactic essence while +maintaining their appearance order to keep the prompt's context; 2) semantic +channel coding (SCC) that improves robustness against errors by substituting +head words with their lenghthier synonyms; and 3) semantic knowledge +distillation (SKD) that produces listener-customized prompts via in-context +learning the listener's language style. 
In a communication task for progressive +text-to-image generation, the proposed methods achieve higher perceptual +similarities with fewer transmissions while enhancing robustness in noisy +communication channels. +" +Towards Effective Disambiguation for Machine Translation with Large Language Models,Vivek Iyer,http://arxiv.org/pdf/2309.11668v2.pdf,2023-09-20,['cs.cl'],2309.11668v2.pdf," Resolving semantic ambiguity has long been recognised as a central challenge +in the field of Machine Translation. Recent work on benchmarking translation +performance on ambiguous sentences has exposed the limitations of conventional +Neural Machine Translation (NMT) systems, which fail to handle many such cases. +Large language models (LLMs) have emerged as a promising alternative, +demonstrating comparable performance to traditional NMT models while +introducing new paradigms for controlling the target outputs. In this paper, we +study the capabilities of LLMs to translate ""ambiguous sentences"" - i.e. those +containing highly polysemous words and/or rare word senses. We also propose two +ways to improve their disambiguation capabilities, through a) in-context +learning and b) fine-tuning on carefully curated ambiguous datasets. +Experiments show that our methods can match or outperform state-of-the-art +systems such as DeepL and NLLB in four out of five language directions. Our +research provides valuable insights into effectively adapting LLMs to become +better disambiguators during Machine Translation. We release our curated +disambiguation corpora and resources at +https://data.statmt.org/ambiguous-europarl. +" +In-context Interference in Chat-based Large Language Models,Eric Nuertey Coleman,http://arxiv.org/pdf/2309.12727v1.pdf,2023-09-22,"['cs.ai', 'cs.cl']",2309.12727v1.pdf," Large language models (LLMs) have had a huge impact on society due to their +impressive capabilities and vast knowledge of the world. Various applications +and tools have been created that allow users to interact with these models in a +black-box scenario. However, one limitation of this scenario is that users +cannot modify the internal knowledge of the model, and the only way to add or +modify internal knowledge is by explicitly mentioning it to the model during +the current interaction. This learning process is called in-context training, +and it refers to training that is confined to the user's current session or +context. In-context learning has significant applications, but also has +limitations that are seldom studied. In this paper, we present a study that +shows how the model can suffer from interference between information that +continually flows in the context, causing it to forget previously learned +knowledge, which can reduce the model's performance. Along with showing the +problem, we propose an evaluation benchmark based on the bAbI dataset. +" +Affect Recognition in Conversations Using Large Language Models,Shutong Feng,http://arxiv.org/pdf/2309.12881v1.pdf,2023-09-22,['cs.cl'],2309.12881v1.pdf," Affect recognition, encompassing emotions, moods, and feelings, plays a +pivotal role in human communication. In the realm of conversational artificial +intelligence (AI), the ability to discern and respond to human affective cues +is a critical factor for creating engaging and empathetic interactions. This +study delves into the capacity of large language models (LLMs) to recognise +human affect in conversations, with a focus on both open-domain chit-chat +dialogues and task-oriented dialogues. 
Leveraging three diverse datasets, +namely IEMOCAP, EmoWOZ, and DAIC-WOZ, covering a spectrum of dialogues from +casual conversations to clinical interviews, we evaluated and compared LLMs' +performance in affect recognition. Our investigation explores the zero-shot and +few-shot capabilities of LLMs through in-context learning (ICL) as well as +their model capacities through task-specific fine-tuning. Additionally, this +study takes into account the potential impact of automatic speech recognition +(ASR) errors on LLM predictions. With this work, we aim to shed light on the +extent to which LLMs can replicate human-like affect recognition capabilities +in conversations. +" +Calibrating LLM-Based Evaluator,Yuxuan Liu,http://arxiv.org/pdf/2309.13308v1.pdf,2023-09-23,['cs.cl'],2309.13308v1.pdf," Recent advancements in large language models (LLMs) on language modeling and +emergent capabilities make them a promising reference-free evaluator of natural +language generation quality, and a competent alternative to human evaluation. +However, hindered by the closed-source or high computational demand to host and +tune, there is a lack of practice to further calibrate an off-the-shelf +LLM-based evaluator towards better human alignment. In this work, we propose +AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate +and align an LLM-based evaluator toward human preference. Instead of explicitly +modeling human preferences, we first implicitly encompass them within a set of +human labels. Then, an initial set of scoring criteria is drafted by the +language model itself, leveraging in-context learning on different few-shot +examples. To further calibrate this set of criteria, we select the best +performers and re-draft them with self-refinement. Our experiments on multiple +text quality evaluation datasets illustrate a significant improvement in +correlation with expert evaluation through calibration. Our comprehensive +qualitative analysis conveys insightful intuitions and observations on the +essence of effective scoring criteria. +" +MedEdit: Model Editing for Medical Question Answering with External Knowledge Bases,Yucheng Shi,http://arxiv.org/pdf/2309.16035v1.pdf,2023-09-27,"['cs.cl', 'cs.ai']",2309.16035v1.pdf," Large Language Models (LLMs), although powerful in general domains, often +perform poorly on domain-specific tasks like medical question answering (QA). +Moreover, they tend to function as ""black-boxes,"" making it challenging to +modify their behavior. Addressing this, our study delves into model editing +utilizing in-context learning, aiming to improve LLM responses without the need +for fine-tuning or retraining. Specifically, we propose a comprehensive +retrieval strategy to extract medical facts from an external knowledge base, +and then we incorporate them into the query prompt for the LLM. Focusing on +medical QA using the MedQA-SMILE dataset, we evaluate the impact of different +retrieval models and the number of facts provided to the LLM. Notably, our +edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. +This work underscores the potential of model editing to enhance LLM +performance, offering a practical approach to mitigate the challenges of +black-box LLMs. 
+" +A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models,Taylor Webb,http://arxiv.org/pdf/2310.00194v1.pdf,2023-09-30,"['cs.ai', 'cs.ne']",2310.00194v1.pdf," Large language models (LLMs) demonstrate impressive performance on a wide +variety of tasks, but they often struggle with tasks that require multi-step +reasoning or goal-directed planning. To address this, we take inspiration from +the human brain, in which planning is accomplished via the recurrent +interaction of specialized modules in the prefrontal cortex (PFC). These +modules perform functions such as conflict monitoring, state prediction, state +evaluation, task decomposition, and task coordination. We find that LLMs are +sometimes capable of carrying out these functions in isolation, but struggle to +autonomously coordinate them in the service of a goal. Therefore, we propose a +black box architecture with multiple LLM-based (GPT-4) modules. The +architecture improves planning through the interaction of specialized +PFC-inspired modules that break down a larger problem into multiple brief +automated calls to the LLM. We evaluate the combined architecture on two +challenging planning tasks -- graph traversal and Tower of Hanoi -- finding +that it yields significant improvements over standard LLM methods (e.g., +zero-shot prompting or in-context learning). These results demonstrate the +benefit of utilizing knowledge from cognitive neuroscience to improve planning +in LLMs. +" +Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method,Xuan Zhang,http://arxiv.org/pdf/2310.00305v1.pdf,2023-09-30,['cs.cl'],2310.00305v1.pdf," While large pre-trained language models (LLMs) have shown their impressive +capabilities in various NLP tasks, they are still under-explored in the +misinformation domain. In this paper, we examine LLMs with in-context learning +(ICL) for news claim verification, and find that only with 4-shot demonstration +examples, the performance of several prompting methods can be comparable with +previous supervised models. To further boost performance, we introduce a +Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to +separate a claim into several subclaims and then verify each of them via +multiple questions-answering steps progressively. Experiment results on two +public misinformation datasets show that HiSS prompting outperforms +state-of-the-art fully-supervised approach and strong few-shot ICL-enabled +baselines. +" +Text Data Augmentation in Low-Resource Settings via Fine-Tuning of Large Language Models,Jean Kaddour,http://arxiv.org/pdf/2310.01119v1.pdf,2023-10-02,"['cs.cl', 'cs.lg']",2310.01119v1.pdf," The in-context learning ability of large language models (LLMs) enables them +to generalize to novel downstream tasks with relatively few labeled examples. +However, they require enormous computational resources to be deployed. +Alternatively, smaller models can solve specific tasks if fine-tuned with +enough labeled examples. These examples, however, are expensive to obtain. In +pursuit of the best of both worlds, we study the annotation and generation of +fine-tuning training data via fine-tuned teacher LLMs to improve the downstream +performance of much smaller models. 
In four text classification and two text +generation tasks, we find that both data generation and annotation dramatically +improve the respective downstream model's performance, occasionally +necessitating only a minor fraction of the original training dataset. +" +Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations,Yongshuo Zong,http://arxiv.org/pdf/2310.01651v1.pdf,2023-10-02,['cs.lg'],2310.01651v1.pdf," Large language and vision-language models are rapidly being deployed in +practice thanks to their impressive capabilities in instruction following, +in-context learning, and so on. This raises an urgent need to carefully analyse +their robustness so that stakeholders can understand if and when such models +are trustworthy enough to be relied upon in any given application. In this +paper, we highlight a specific vulnerability in popular models, namely +permutation sensitivity in multiple-choice question answering (MCQA). +Specifically, we show empirically that popular models are vulnerable to +adversarial permutation in answer sets for multiple-choice prompting, which is +surprising as models should ideally be as invariant to prompt permutation as +humans are. These vulnerabilities persist across various model sizes, and exist +in very recent language and vision-language models. Code is available at +\url{https://github.com/ys-zong/FoolyourVLLMs}. +" +Improving Automatic VQA Evaluation Using Large Language Models,Oscar Mañas,http://arxiv.org/pdf/2310.02567v1.pdf,2023-10-04,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2310.02567v1.pdf," 8 years after the visual question answering (VQA) task was proposed, accuracy +remains the primary metric for automatic evaluation. VQA Accuracy has been +effective so far in the IID evaluation setting. However, our community is +undergoing a shift towards open-ended generative models and OOD evaluation. In +this new paradigm, the existing VQA Accuracy metric is overly stringent and +underestimates the performance of VQA systems. Thus, there is a need to develop +more robust automatic VQA metrics that serve as a proxy for human judgment. In +this work, we propose to leverage the in-context learning capabilities of +instruction-tuned large language models (LLMs) to build a better VQA metric. We +formulate VQA evaluation as an answer-rating task where the LLM is instructed +to score the accuracy of a candidate answer given a set of reference answers. +We demonstrate the proposed metric better correlates with human judgment +compared to existing metrics across several VQA models and benchmarks. We hope +wide adoption of our metric will contribute to better estimating the research +progress on the VQA task. +" +A Language-Agent Approach to Formal Theorem-Proving,Amitayush Thakur,http://arxiv.org/pdf/2310.04353v1.pdf,2023-10-06,"['cs.lg', 'cs.ai', 'cs.lo', 'cs.pl']",2310.04353v1.pdf," Language agents, which use a large language model (LLM) capable of in-context +learning to interact with an external environment, have recently emerged as a +promising approach to control tasks. We present the first language-agent +approach to formal theorem-proving. Our method, COPRA, uses a high-capacity, +black-box LLM (GPT-4) as part of a policy for a stateful backtracking search. +During the search, the policy can select proof tactics and retrieve lemmas and +definitions from an external database. Each selected tactic is executed in the +underlying proof framework, and the execution feedback is used to build the +prompt for the next policy invocation. 
The search also tracks selected +information from its history and uses it to reduce hallucinations and +unnecessary LLM queries. + We evaluate COPRA on the miniF2F benchmark for Lean and a set of Coq tasks +from the Compcert project. On these benchmarks, COPRA is significantly better +than one-shot invocations of GPT-4, as well as state-of-the-art models +fine-tuned on proof data, at finding correct proofs quickly. +" +Guideline Learning for In-context Information Extraction,Chaoxu Pang,http://arxiv.org/pdf/2310.05066v2.pdf,2023-10-08,"['cs.cl', 'cs.lg']",2310.05066v2.pdf," Large language models (LLMs) can perform a new task by merely conditioning on +task instructions and a few input-output examples, without optimizing any +parameters. This is called In-Context Learning (ICL). In-context Information +Extraction (IE) has recently garnered attention in the research community. +However, the performance of In-context IE generally lags behind the +state-of-the-art supervised expert models. We highlight a key reason for this +shortfall: underspecified task description. The limited-length context +struggles to thoroughly express the intricate IE task instructions and various +edge cases, leading to misalignment in task comprehension with humans. In this +paper, we propose a Guideline Learning (GL) framework for In-context IE which +reflectively learns and follows guidelines. During the learning phrase, GL +automatically synthesizes a set of guidelines based on a few error cases, and +during inference, GL retrieves helpful guidelines for better ICL. Moreover, we +propose a self-consistency-based active learning method to enhance the +efficiency of GL. Experiments on event extraction and relation extraction show +that GL can significantly improve the performance of in-context IE. +" +Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements,Yushan Qian,http://arxiv.org/pdf/2310.05140v1.pdf,2023-10-08,"['cs.cl', 'cs.ai']",2310.05140v1.pdf," Empathetic dialogue is an indispensable part of building harmonious social +relationships and contributes to the development of a helpful AI. Previous +approaches are mainly based on fine small-scale language models. With the +advent of ChatGPT, the application effect of large language models (LLMs) in +this field has attracted great attention. This work empirically investigates +the performance of LLMs in generating empathetic responses and proposes three +improvement methods of semantically similar in-context learning, two-stage +interactive generation, and combination with the knowledge base. Extensive +experiments show that LLMs can significantly benefit from our proposed methods +and is able to achieve state-of-the-art performance in both automatic and human +evaluations. Additionally, we explore the possibility of GPT-4 simulating human +evaluators. +" +LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models,Huiqiang Jiang,http://arxiv.org/pdf/2310.05736v1.pdf,2023-10-09,"['cs.cl', 'cs.lg']",2310.05736v1.pdf," Large language models (LLMs) have been applied in various applications due to +their astonishing capabilities. With advancements in technologies such as +chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed +to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of +tokens. 
To accelerate model inference and reduce cost, this paper presents +LLMLingua, a coarse-to-fine prompt compression method that involves a budget +controller to maintain semantic integrity under high compression ratios, a +token-level iterative compression algorithm to better model the interdependence +between compressed contents, and an instruction tuning based method for +distribution alignment between language models. We conduct experiments and +analysis over four datasets from different scenarios, i.e., GSM8K, BBH, +ShareGPT, and Arxiv-March23; showing that the proposed approach yields +state-of-the-art performance and allows for up to 20x compression with little +performance loss. Our code is available at https://aka.ms/LLMLingua. +" +Selective Demonstrations for Cross-domain Text-to-SQL,Shuaichen Chang,http://arxiv.org/pdf/2310.06302v1.pdf,2023-10-10,['cs.cl'],2310.06302v1.pdf," Large language models (LLMs) with in-context learning have demonstrated +impressive generalization capabilities in the cross-domain text-to-SQL task, +without the use of in-domain annotations. However, incorporating in-domain +demonstration examples has been found to greatly enhance LLMs' performance. In +this paper, we delve into the key factors within in-domain examples that +contribute to the improvement and explore whether we can harness these benefits +without relying on in-domain annotations. Based on our findings, we propose a +demonstration selection framework ODIS which utilizes both out-of-domain +examples and synthetically generated in-domain examples to construct +demonstrations. By retrieving demonstrations from hybrid sources, ODIS +leverages the advantages of both, showcasing its effectiveness compared to +baseline methods that rely on a single data source. Furthermore, ODIS +outperforms state-of-the-art approaches on two cross-domain text-to-SQL +datasets, with improvements of 1.1 and 11.8 points in execution accuracy, +respectively. +" +Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations,Zeming Wei,http://arxiv.org/pdf/2310.06387v1.pdf,2023-10-10,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cr']",2310.06387v1.pdf," Large Language Models (LLMs) have shown remarkable success in various tasks, +but concerns about their safety and the potential for generating malicious +content have emerged. In this paper, we explore the power of In-Context +Learning (ICL) in manipulating the alignment ability of LLMs. We find that by +providing just few in-context demonstrations without fine-tuning, LLMs can be +manipulated to increase or decrease the probability of jailbreaking, i.e. +answering malicious prompts. Based on these observations, we propose In-Context +Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding +aligned language model purposes. ICA crafts malicious contexts to guide models +in generating harmful outputs, while ICD enhances model robustness by +demonstrations of rejecting to answer harmful prompts. Our experiments show the +effectiveness of ICA and ICD in increasing or reducing the success rate of +adversarial jailbreaking attacks. Overall, we shed light on the potential of +ICL to influence LLM behavior and provide a new perspective for enhancing the +safety and alignment of LLMs. +" +Humans and language models diverge when predicting repeating text,Aditya R. 
Vaidya,http://arxiv.org/pdf/2310.06408v2.pdf,2023-10-10,['cs.cl'],2310.06408v2.pdf," Language models that are trained on the next-word prediction task have been +shown to accurately model human behavior in word prediction and reading speed. +In contrast with these findings, we present a scenario in which the performance +of humans and LMs diverges. We collected a dataset of human next-word +predictions for five stimuli that are formed by repeating spans of text. Human +and GPT-2 LM predictions are strongly aligned in the first presentation of a +text span, but their performance quickly diverges when memory (or in-context +learning) begins to play a role. We traced the cause of this divergence to +specific attention heads in a middle layer. Adding a power-law recency bias to +these attention heads yielded a model that performs much more similarly to +humans. We hope that this scenario will spur future work in bringing LMs closer +to human behavior. +" +The Limits of ChatGPT in Extracting Aspect-Category-Opinion-Sentiment Quadruples: A Comparative Analysis,Xiancai Xu,http://arxiv.org/pdf/2310.06502v1.pdf,2023-10-10,['cs.cl'],2310.06502v1.pdf," Recently, ChatGPT has attracted great attention from both industry and +academia due to its surprising abilities in natural language understanding and +generation. We are particularly curious about whether it can achieve promising +performance on one of the most complex tasks in aspect-based sentiment +analysis, i.e., extracting aspect-category-opinion-sentiment quadruples from +texts. To this end, in this paper we develop a specialized prompt template that +enables ChatGPT to effectively tackle this complex quadruple extraction task. +Further, we propose a selection method on few-shot examples to fully exploit +the in-context learning ability of ChatGPT and uplift its effectiveness on this +complex task. Finally, we provide a comparative evaluation on ChatGPT against +existing state-of-the-art quadruple extraction models based on four public +datasets and highlight some important findings regarding the capability +boundaries of ChatGPT in the quadruple extraction. +" +AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents,Jake Grigsby,http://arxiv.org/pdf/2310.09971v2.pdf,2023-10-15,['cs.lg'],2310.09971v2.pdf," We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses +sequence models to tackle the challenges of generalization, long-term memory, +and meta-learning. Recent works have shown that off-policy learning can make +in-context RL with recurrent policies viable. Nonetheless, these approaches +require extensive tuning and limit scalability by creating key bottlenecks in +agents' memory capacity, planning horizon, and model size. AMAGO revisits and +redesigns the off-policy in-context approach to successfully train +long-sequence Transformers over entire rollouts in parallel with end-to-end RL. +Our agent is uniquely scalable and applicable to a wide range of problems. We +demonstrate its strong performance empirically in meta-RL and long-term memory +domains. AMAGO's focus on sparse rewards and off-policy data also allows +in-context learning to extend to goal-conditioned problems with challenging +exploration. When combined with a novel hindsight relabeling scheme, AMAGO can +solve a previously difficult category of open-world domains, where agents +complete many possible instructions in procedurally generated environments. 
We +evaluate our agent on three goal-conditioned domains and study how its +individual improvements connect to create a generalist policy. +" +A Search for Prompts: Generating Structured Answers from Contracts,Adam Roegiest,http://arxiv.org/pdf/2310.10141v1.pdf,2023-10-16,['cs.cv'],2310.10141v1.pdf," In many legal processes being able to action on the concrete implication of a +legal question can be valuable to automating human review or signalling certain +conditions (e.g., alerts around automatic renewal). To support such tasks, we +present a form of legal question answering that seeks to return one (or more) +fixed answers for a question about a contract clause. After showing that +unstructured generative question answering can have questionable outcomes for +such a task, we discuss our exploration methodology for legal question +answering prompts using OpenAI's \textit{GPT-3.5-Turbo} and provide a summary +of insights. + Using insights gleaned from our qualitative experiences, we compare our +proposed template prompts against a common semantic matching approach and find +that our prompt templates are far more accurate despite being less reliable in +the exact response return. With some additional tweaks to prompts and the use +of in-context learning, we are able to further improve the performance of our +proposed strategy while maximizing the reliability of responses as best we can. +" +Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT,Xiaoshuai Song,http://arxiv.org/pdf/2310.10176v1.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10176v1.pdf," The tasks of out-of-domain (OOD) intent discovery and generalized intent +discovery (GID) aim to extend a closed intent classifier to open-world intent +sets, which is crucial to task-oriented dialogue (TOD) systems. Previous +methods address them by fine-tuning discriminative models. Recently, although +some studies have been exploring the application of large language models +(LLMs) represented by ChatGPT to various downstream tasks, it is still unclear +for the ability of ChatGPT to discover and incrementally extent OOD intents. In +this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and +GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT +exhibits consistent advantages under zero-shot settings, but is still at a +disadvantage compared to fine-tuned models. More deeply, through a series of +analytical experiments, we summarize and discuss the challenges faced by LLMs +including clustering, domain-specific understanding, and cross-domain +in-context learning scenarios. Finally, we provide empirical guidance for +future directions to address these challenges. +" +MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations,Heyuan Yao,http://arxiv.org/pdf/2310.10198v2.pdf,2023-10-16,"['cs.cv', 'cs.gr']",2310.10198v2.pdf," In this work, we present MoConVQ, a novel unified framework for physics-based +motion control leveraging scalable discrete representations. Building upon +vector quantized variational autoencoders (VQ-VAE) and model-based +reinforcement learning, our approach effectively learns motion embeddings from +a large, unstructured dataset spanning tens of hours of motion examples. The +resultant motion representation not only captures diverse motion skills but +also offers a robust and intuitive interface for various applications. 
We +demonstrate the versatility of MoConVQ through several applications: universal +tracking control from various motion sources, interactive character control +with latent motion representations using supervised learning, physics-based +motion generation from natural language descriptions using the GPT framework, +and, most interestingly, seamless integration with large language models (LLMs) +with in-context learning to tackle complex and abstract tasks. +" +Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking,Yuxiang Wu,http://arxiv.org/pdf/2310.10520v2.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10520v2.pdf," Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring +and annotating task-oriented dialogues, which can be time consuming and costly. +However, DST extends beyond simple slot-filling and requires effective updating +strategies for tracking dialogue state as conversations progress. In this +paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to +introduce additional intricate updating strategies in zero-shot DST. Our +approach reformulates the DST task by leveraging powerful Large Language Models +(LLMs) and translating the original dialogue text to JSON through semantic +parsing as an intermediate state. We also design a novel framework that +includes more modules to ensure the effectiveness of updating strategies in the +text-to-JSON process. Experimental results demonstrate that our approach +outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant +improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to +existing ICL methods. +" +Mastering the Task of Open Information Extraction with Large Language Models and Consistent Reasoning Environment,Ji Qi,http://arxiv.org/pdf/2310.10590v1.pdf,2023-10-16,['cs.cl'],2310.10590v1.pdf," Open Information Extraction (OIE) aims to extract objective structured +knowledge from natural texts, which has attracted growing attention to build +dedicated models with human experience. As the large language models (LLMs) +have exhibited remarkable in-context learning capabilities, a question arises +as to whether the task of OIE can be effectively tackled with this paradigm? In +this paper, we explore solving the OIE problem by constructing an appropriate +reasoning environment for LLMs. Specifically, we first propose a method to +effectively estimate the discrepancy of syntactic distribution between a LLM +and test samples, which can serve as correlation evidence for preparing +positive demonstrations. Upon the evidence, we introduce a simple yet effective +mechanism to establish the reasoning environment for LLMs on specific tasks. +Without bells and whistles, experimental results on the standard CaRB benchmark +demonstrate that our $6$-shot approach outperforms state-of-the-art supervised +method, achieving an $55.3$ $F_1$ score. Further experiments on TACRED and +ACE05 show that our method can naturally generalize to other information +extraction tasks, resulting in improvements of $5.7$ and $6.8$ $F_1$ scores, +respectively. +" +Exploring Automatic Evaluation Methods based on a Decoder-based LLM for Text Generation,Tomohito Kasahara,http://arxiv.org/pdf/2310.11026v1.pdf,2023-10-17,['cs.cl'],2310.11026v1.pdf," Automatic evaluation of text generation is essential for improving the +accuracy of generation tasks. 
In light of the current trend towards +increasingly larger decoder-based language models, we investigate automatic +evaluation methods based on such models for text generation. This paper +compares various methods, including tuning with encoder-based models and large +language models under equal conditions, on two different tasks, machine +translation evaluation and semantic textual similarity, in two languages, +Japanese and English. Experimental results show that compared to the tuned +encoder-based models, the tuned decoder-based models perform poorly. The +analysis of the causes for this suggests that the decoder-based models focus on +surface word sequences and do not capture meaning. It is also revealed that +in-context learning of very large decoder-based models such as ChatGPT makes it +difficult to identify fine-grained semantic differences. +" +Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models,Hsuan Su,http://arxiv.org/pdf/2310.11079v1.pdf,2023-10-17,"['cs.cl', 'cs.ai']",2310.11079v1.pdf," Recently, researchers have made considerable improvements in dialogue systems +with the progress of large language models (LLMs) such as ChatGPT and GPT-4. +These LLM-based chatbots encode the potential biases while retaining +disparities that can harm humans during interactions. The traditional biases +investigation methods often rely on human-written test cases. However, these +test cases are usually expensive and limited. In this work, we propose a +first-of-its-kind method that automatically generates test cases to detect +LLMs' potential gender bias. We apply our method to three well-known LLMs and +find that the generated test cases effectively identify the presence of biases. +To address the biases identified, we propose a mitigation strategy that uses +the generated test cases as demonstrations for in-context learning to +circumvent the need for parameter fine-tuning. The experimental results show +that LLMs generate fairer responses with the proposed approach. +" +Evaluating LLMs for Privilege-Escalation Scenarios,Andreas Happe,http://arxiv.org/pdf/2310.11409v2.pdf,2023-10-17,"['cs.cr', 'cs.ai']",2310.11409v2.pdf," Penetration testing, an essential component of cybersecurity, allows +organizations to proactively identify and remediate vulnerabilities in their +systems, thus bolstering their defense mechanisms against potential +cyberattacks. One recent advancement in the realm of penetration testing is the +utilization of Language Models (LLMs). We explore the intersection of LLMs and +penetration testing to gain insight into their capabilities and challenges in +the context of privilige escalation. We create an automated Linux +privilege-escalation benchmark utilizing local virtual machines. We introduce +an LLM-guided privilege-escalation tool designed for evaluating different LLMs +and prompt strategies against our benchmark. We analyze the impact of different +prompt designs, the benefits of in-context learning, and the advantages of +offering high-level guidance to LLMs. We discuss challenging areas for LLMs, +including maintaining focus during testing, coping with errors, and finally +comparing them with both stochastic parrots as well as with human hackers. 
+" +Measuring Pointwise $\mathcal{V}$-Usable Information In-Context-ly,Sheng Lu,http://arxiv.org/pdf/2310.12300v1.pdf,2023-10-18,['cs.cl'],2310.12300v1.pdf," In-context learning (ICL) is a new learning paradigm that has gained +popularity along with the development of large language models. In this work, +we adapt a recently proposed hardness metric, pointwise $\mathcal{V}$-usable +information (PVI), to an in-context version (in-context PVI). Compared to the +original PVI, in-context PVI is more efficient in that it requires only a few +exemplars and does not require fine-tuning. We conducted a comprehensive +empirical analysis to evaluate the reliability of in-context PVI. Our findings +indicate that in-context PVI estimates exhibit similar characteristics to the +original PVI. Specific to the in-context setting, we show that in-context PVI +estimates remain consistent across different exemplar selections and numbers of +shots. The variance of in-context PVI estimates across different exemplar +selections is insignificant, which suggests that in-context PVI are stable. +Furthermore, we demonstrate how in-context PVI can be employed to identify +challenging instances. Our work highlights the potential of in-context PVI and +provides new insights into the capabilities of ICL. +" +Attack Prompt Generation for Red Teaming and Defending Large Language Models,Boyi Deng,http://arxiv.org/pdf/2310.12505v1.pdf,2023-10-19,"['cs.cl', 'cs.cr', 'cs.lg']",2310.12505v1.pdf," Large language models (LLMs) are susceptible to red teaming attacks, which +can induce LLMs to generate harmful content. Previous research constructs +attack prompts via manual or automatic methods, which have their own +limitations on construction cost and quality. To address these issues, we +propose an integrated approach that combines manual and automatic methods to +economically generate high-quality attack prompts. Specifically, considering +the impressive capabilities of newly emerged LLMs, we propose an attack +framework to instruct LLMs to mimic human-generated prompts through in-context +learning. Furthermore, we propose a defense framework that fine-tunes victim +LLMs through iterative interactions with the attack framework to enhance their +safety against red teaming attacks. Extensive experiments on different LLMs +validate the effectiveness of our proposed attack and defense frameworks. +Additionally, we release a series of attack prompts datasets named SAP with +varying sizes, facilitating the safety evaluation and enhancement of more LLMs. +Our code and dataset is available on https://github.com/Aatrox103/SAP . +" +Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization,Ningyu Xu,http://arxiv.org/pdf/2310.12794v1.pdf,2023-10-19,['cs.cl'],2310.12794v1.pdf," Large language models (LLMs) have exhibited considerable cross-lingual +generalization abilities, whereby they implicitly transfer knowledge across +languages. However, the transfer is not equally successful for all languages, +especially for low-resource ones, which poses an ongoing challenge. It is +unclear whether we have reached the limits of implicit cross-lingual +generalization and if explicit knowledge transfer is viable. In this paper, we +investigate the potential for explicitly aligning conceptual correspondence +between languages to enhance cross-lingual generalization. 
Using the syntactic +aspect of language as a testbed, our analyses of 43 languages reveal a high +degree of alignability among the spaces of structural concepts within each +language for both encoder-only and decoder-only LLMs. We then propose a +meta-learning-based method to learn to align conceptual spaces of different +languages, which facilitates zero-shot and few-shot generalization in concept +classification and also offers insights into the cross-lingual in-context +learning phenomenon. Experiments on syntactic analysis tasks show that our +approach achieves competitive results with state-of-the-art methods and narrows +the performance gap between languages, particularly benefiting those with +limited resources. +" +Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning,Lucas Weber,http://arxiv.org/pdf/2310.13486v1.pdf,2023-10-20,"['cs.cl', 'cs.ai']",2310.13486v1.pdf," Finding the best way of adapting pre-trained language models to a task is a +big challenge in current NLP. Just like the previous generation of task-tuned +models (TT), models that are adapted to tasks via in-context-learning (ICL) are +robust in some setups but not in others. Here, we present a detailed analysis +of which design choices cause instabilities and inconsistencies in LLM +predictions. First, we show how spurious correlations between input +distributions and labels -- a known issue in TT models -- form only a minor +problem for prompted models. Then, we engage in a systematic, holistic +evaluation of different factors that have been found to influence predictions +in a prompting setup. We test all possible combinations of a range of factors +on both vanilla and instruction-tuned (IT) LLMs of different scale and +statistically analyse the results to show which factors are the most +influential, interactive or stable. Our results show which factors can be used +without precautions and which should be avoided or handled with care in most +settings. +" +A Simple Baseline for Knowledge-Based Visual Question Answering,Alexandros Xenos,http://arxiv.org/pdf/2310.13570v2.pdf,2023-10-20,['cs.cv'],2310.13570v2.pdf," This paper is on the problem of Knowledge-Based Visual Question Answering +(KB-VQA). Recent works have emphasized the significance of incorporating both +explicit (through external databases) and implicit (through LLMs) knowledge to +answer questions requiring external knowledge effectively. A common limitation +of such approaches is that they consist of relatively complicated pipelines and +often heavily rely on accessing GPT-3 API. Our main contribution in this paper +is to propose a much simpler and readily reproducible pipeline which, in a +nutshell, is based on efficient in-context learning by prompting LLaMA (1 and +2) using question-informative captions as contextual information. Contrary to +recent approaches, our method is training-free, does not require access to +external databases or APIs, and yet achieves state-of-the-art accuracy on the +OK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies to +understand important aspects of our method. 
Our code is publicly available at +https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA +" +An In-Context Schema Understanding Method for Knowledge Base Question Answering,Yantao Liu,http://arxiv.org/pdf/2310.14174v1.pdf,2023-10-22,['cs.cl'],2310.14174v1.pdf," The Knowledge Base Question Answering (KBQA) task aims to answer natural +language questions based on a given knowledge base. As a kind of common method +for this task, semantic parsing-based ones first convert natural language +questions to logical forms (e.g., SPARQL queries) and then execute them on +knowledge bases to get answers. Recently, Large Language Models (LLMs) have +shown strong abilities in language understanding and may be adopted as semantic +parsers in such kinds of methods. However, in doing so, a great challenge for +LLMs is to understand the schema of knowledge bases. Therefore, in this paper, +we propose an In-Context Schema Understanding (ICSU) method for facilitating +LLMs to be used as a semantic parser in KBQA. Specifically, ICSU adopts the +In-context Learning mechanism to instruct LLMs to generate SPARQL queries with +examples. In order to retrieve appropriate examples from annotated +question-query pairs, which contain comprehensive schema information related to +questions, ICSU explores four different retrieval strategies. Experimental +results on the largest KBQA benchmark, KQA Pro, show that ICSU with all these +strategies outperforms that with a random retrieval strategy significantly +(from 12\% to 78.76\% in accuracy). +" +From Chaos to Clarity: Claim Normalization to Empower Fact-Checking,Megha Sundriyal,http://arxiv.org/pdf/2310.14338v1.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.14338v1.pdf," With the proliferation of social media platforms, users are exposed to vast +information, including posts containing misleading claims. However, the +pervasive noise inherent in these posts presents a challenge in identifying +precise and prominent claims that require verification. Extracting the core +assertions from such posts is arduous and time-consuming. We introduce a novel +task called Claim Normalization (aka ClaimNorm) that aims to decompose complex +and noisy social media posts into more straightforward and understandable +forms, termed normalized claims. We propose CACN, a pioneering approach that +leverages chain-of-thought and claim check-worthiness estimation, mimicking +human reasoning processes, to comprehend intricate claims. Moreover, we +capitalize on large language models' powerful in-context learning abilities to +provide guidance and improve the claim normalization process. To evaluate the +effectiveness of our proposed model, we meticulously compile a comprehensive +real-world dataset, CLAN, comprising more than 6k instances of social media +posts alongside their respective normalized claims. Experimentation +demonstrates that CACN outperforms several baselines across various evaluation +measures. A rigorous error analysis validates CACN's capabilities and pitfalls. +" +Retrieval-Augmented Chain-of-Thought in Semi-structured Domains,Vaibhav Mavi,http://arxiv.org/pdf/2310.14435v1.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.14435v1.pdf," Applying existing question answering (QA) systems to specialized domains like +law and finance presents challenges that necessitate domain expertise. Although +large language models (LLMs) have shown impressive language comprehension and +in-context learning capabilities, their inability to handle very long +inputs/contexts is well known. 
Tasks specific to these domains need significant +background knowledge, leading to contexts that can often exceed the maximum +length that existing LLMs can process. This study explores leveraging the +semi-structured nature of legal and financial data to efficiently retrieve +relevant context, enabling the use of LLMs for domain-specialized QA. The +resulting system outperforms contemporary models and also provides useful +explanations for the answers, encouraging the integration of LLMs into legal +and financial NLP systems for future research. +" +Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings,Parker Seegmiller,http://arxiv.org/pdf/2310.15010v1.pdf,2023-10-23,['cs.cl'],2310.15010v1.pdf," The popularity of transformer-based text embeddings calls for better +statistical tools for measuring distributions of such embeddings. One such tool +would be a method for ranking texts within a corpus by centrality, i.e. +assigning each text a number signifying how representative that text is of the +corpus as a whole. However, an intrinsic center-outward ordering of +high-dimensional text representations is not trivial. A statistical depth is a +function for ranking k-dimensional objects by measuring centrality with respect +to some observed k-dimensional distribution. We adopt a statistical depth to +measure distributions of transformer-based text embeddings, transformer-based +text embedding (TTE) depth, and introduce the practical use of this depth for +both modeling and distributional inference in NLP pipelines. We first define +TTE depth and an associated rank sum test for determining whether two corpora +differ significantly in embedding space. We then use TTE depth for the task of +in-context learning prompt selection, showing that this approach reliably +improves performance over statistical baseline approaches across six text +classification tasks. Finally, we use TTE depth and the associated rank sum +test to characterize the distributions of synthesized and human-generated +corpora, showing that five recent synthetic data augmentation processes cause a +measurable distributional shift away from associated human-generated text. +" +Meta- (out-of-context) learning in neural networks,Dmitrii Krasheninnikov,http://arxiv.org/pdf/2310.15047v2.pdf,2023-10-23,"['cs.lg', 'cs.ai']",2310.15047v2.pdf," Brown et al. (2020) famously introduced the phenomenon of in-context learning +in large language models (LLMs). We establish the existence of a phenomenon we +call meta-out-of-context learning (meta-OCL) via carefully designed synthetic +experiments with LLMs. Our results suggest that meta-OCL leads LLMs to more +readily ""internalize"" the semantic content of text that is, or appears to be, +broadly useful (such as true statements, or text from authoritative sources) +and use it in appropriate circumstances. We further demonstrate meta-OCL in a +synthetic computer vision setting, and propose two hypotheses for the emergence +of meta-OCL: one relying on the way models store knowledge in their parameters, +and another suggesting that the implicit gradient alignment bias of +gradient-descent-based optimizers may be responsible. Finally, we reflect on +what our results might imply about capabilities of future AI systems, and +discuss potential risks. Our code can be found at +https://github.com/krasheninnikov/internalization. 
+" +The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models,Xinyi Chen,http://arxiv.org/pdf/2310.15061v1.pdf,2023-10-23,"['cs.cl', 'cs.ai', 'cs.cv']",2310.15061v1.pdf," Despite the impressive performance achieved by pre-trained +language-and-vision models in downstream tasks, it remains an open question +whether this reflects a proper understanding of image-text interaction. In this +work, we explore to what extent they handle basic linguistic constructions -- +active-passive voice, coordination, and relative clauses -- that even preschool +children can typically master. We present BLA, a novel, automatically +constructed benchmark to evaluate multimodal models on these Basic Language +Abilities. We show that different types of Transformer-based systems, such as +CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting, +in line with previous findings. Our experiments, in particular, show that most +of the tested models only marginally benefit when fine-tuned or prompted with +construction-specific samples. Yet, the generative BLIP2 shows promising +trends, especially in an in-context learning setting. This opens the door to +using BLA not only as an evaluation benchmark but also to improve models' basic +language abilities. +" +LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis,Shih-Chieh Dai,http://arxiv.org/pdf/2310.15100v1.pdf,2023-10-23,['cs.cl'],2310.15100v1.pdf," Thematic analysis (TA) has been widely used for analyzing qualitative data in +many disciplines and fields. To ensure reliable analysis, the same piece of +data is typically assigned to at least two human coders. Moreover, to produce +meaningful and useful analysis, human coders develop and deepen their data +interpretation and coding over multiple iterations, making TA labor-intensive +and time-consuming. Recently the emerging field of large language models (LLMs) +research has shown that LLMs have the potential replicate human-like behavior +in various tasks: in particular, LLMs outperform crowd workers on +text-annotation tasks, suggesting an opportunity to leverage LLMs on TA. We +propose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conduct +TA with in-context learning (ICL). This framework provides the prompt to frame +discussions with a LLM (e.g., GPT-3.5) to generate the final codebook for TA. +We demonstrate the utility of this framework using survey datasets on the +aspects of the music listening experience and the usage of a password manager. +Results of the two case studies show that the proposed framework yields similar +coding quality to that of human coders but reduces TA's labor and time demands. +" +UI Layout Generation with LLMs Guided by UI Grammar,Yuwen Lu,http://arxiv.org/pdf/2310.15455v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15455v1.pdf," The recent advances in Large Language Models (LLMs) have stimulated interest +among researchers and industry professionals, particularly in their application +to tasks concerning mobile user interfaces (UIs). This position paper +investigates the use of LLMs for UI layout generation. Central to our +exploration is the introduction of UI grammar -- a novel approach we proposed +to represent the hierarchical structure inherent in UI screens. The aim of this +approach is to guide the generative capacities of LLMs more effectively and +improve the explainability and controllability of the process. 
Initial +experiments conducted with GPT-4 showed the promising capability of LLMs to +produce high-quality user interfaces via in-context learning. Furthermore, our +preliminary comparative study suggested the potential of the grammar-based +approach in improving the quality of generative results in specific aspects. +" +POE: Process of Elimination for Multiple Choice Reasoning,Chenkai Ma,http://arxiv.org/pdf/2310.15575v1.pdf,2023-10-24,['cs.cl'],2310.15575v1.pdf," Language models (LMs) are capable of conducting in-context learning for +multiple choice reasoning tasks, but the options in these tasks are treated +equally. As humans often first eliminate wrong options before picking the final +correct answer, we argue a similar two-step strategy can make LMs better at +these tasks. To this end, we present the Process of Elimination (POE), a +two-step scoring method. In the first step, POE scores each option, and +eliminates seemingly wrong options. In the second step, POE masks these wrong +options, and makes the final prediction from the remaining options. Zero-shot +experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a +following analysis finds our method to be especially performant on logical +reasoning tasks. We further analyze the effect of masks, and show that POE +applies to few-shot settings and large language models (LLMs) like ChatGPT. +" +WebWISE: Web Interface Control and Sequential Exploration with Large Language Models,Heyi Tao,http://arxiv.org/pdf/2310.16042v2.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.16042v2.pdf," The paper investigates using a Large Language Model (LLM) to automatically +perform web software tasks using click, scroll, and text input operations. +Previous approaches, such as reinforcement learning (RL) or imitation learning, +are inefficient to train and task-specific. Our method uses filtered Document +Object Model (DOM) elements as observations and performs tasks step-by-step, +sequentially generating small programs based on the current observations. We +use in-context learning, either benefiting from a single manually provided +example, or an automatically generated example based on a successful zero-shot +trial. We evaluate the proposed method on the MiniWob++ benchmark. With only +one in-context example, our WebWISE method achieves similar or better +performance than other methods that require many demonstrations or trials. +" +From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning,Zheyuan Zhang,http://arxiv.org/pdf/2310.18364v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.18364v1.pdf," Pre-trained language models (PLMs) have shown impressive performance in +various language tasks. However, they are prone to spurious correlations, and +often generate illusory information. In real-world applications, PLMs should +justify decisions with formalized, coherent reasoning chains, but this +challenge remains under-explored. Cognitive psychology theorizes that humans +are capable of utilizing fast and intuitive heuristic thinking to make +decisions based on past experience, then rationalizing the decisions through +slower and deliberative analytic reasoning. We incorporate these interlinked +dual processes in fine-tuning and in-context learning with PLMs, applying them +to two language understanding tasks that require coherent physical commonsense +reasoning. 
We show that our proposed Heuristic-Analytic Reasoning (HAR) +strategies drastically improve the coherence of rationalizations for model +decisions, yielding state-of-the-art results on Tiered Reasoning for Intuitive +Physics (TRIP). We also find that this improved coherence is a direct result of +more faithful attention to relevant language context in each step of reasoning. +Our findings suggest that human-like reasoning strategies can effectively +improve the coherence and reliability of PLM reasoning. +" +The Mystery and Fascination of LLMs: A Comprehensive Survey on the Interpretation and Analysis of Emergent Abilities,Yuxiang Zhou,http://arxiv.org/pdf/2311.00237v1.pdf,2023-11-01,['cs.cl'],2311.00237v1.pdf," Understanding emergent abilities, such as in-context learning (ICL) and +chain-of-thought (CoT) prompting in large language models (LLMs), is of utmost +importance. This importance stems not only from the better utilization of these +capabilities across various tasks, but also from the proactive identification +and mitigation of potential risks, including concerns of truthfulness, bias, +and toxicity, that may arise alongside these capabilities. In this paper, we +present a thorough survey on the interpretation and analysis of emergent +abilities of LLMs. First, we provide a concise introduction to the background +and definition of emergent abilities. Then, we give an overview of advancements +from two perspectives: 1) a macro perspective, emphasizing studies on the +mechanistic interpretability and delving into the mathematical foundations +behind emergent abilities; and 2) a micro-perspective, concerning studies that +focus on empirical interpretability by examining factors associated with these +abilities. We conclude by highlighting the challenges encountered and +suggesting potential avenues for future research. We believe that our work +establishes the basis for further exploration into the interpretation of +emergent abilities. +" +Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles,Weiting Tan,http://arxiv.org/pdf/2311.02310v1.pdf,2023-11-04,['cs.cl'],2311.02310v1.pdf," Large language models trained primarily in a monolingual setting have +demonstrated their ability to generalize to machine translation using zero- and +few-shot examples with in-context learning. However, even though zero-shot +translations are relatively good, there remains a discernible gap comparing +their performance with the few-shot setting. In this paper, we investigate the +factors contributing to this gap and find that this gap can largely be closed +(for about 70%) by matching the writing styles of the target corpus. +Additionally, we explore potential approaches to enhance zero-shot baselines +without the need for parallel demonstration examples, providing valuable +insights into how these methods contribute to improving translation metrics. +" +Instructed Language Models with Retrievers Are Powerful Entity Linkers,Zilin Xiao,http://arxiv.org/pdf/2311.03250v1.pdf,2023-11-06,"['cs.cl', 'cs.ai']",2311.03250v1.pdf," Generative approaches powered by large language models (LLMs) have +demonstrated emergent abilities in tasks that require complex reasoning +abilities. Yet the generative nature still makes the generated content suffer +from hallucinations, thus unsuitable for entity-centric tasks like entity +linking (EL) requiring precise entity predictions over a large knowledge base. 
+We present Instructed Generative Entity Linker (INSGENEL), the first approach +that enables casual language models to perform entity linking over knowledge +bases. Several methods to equip language models with EL capability were +proposed in this work, including (i) a sequence-to-sequence training EL +objective with instruction-tuning, (ii) a novel generative EL framework based +on a light-weight potential mention retriever that frees the model from heavy +and non-parallelizable decoding, achieving 4$\times$ speedup without compromise +on linking metrics. INSGENEL outperforms previous generative alternatives with ++6.8 F1 points gain on average, also with a huge advantage in training data +efficiency and training compute consumption. In addition, our skillfully +engineered in-context learning (ICL) framework for EL still lags behind +INSGENEL significantly, reaffirming that the EL task remains a persistent +hurdle for general LLMs. +" +Meta-learning via Language Model In-context Tuning,Yanda Chen,http://arxiv.org/pdf/2110.07814v2.pdf,2021-10-15,"['cs.cl', 'cs.lg']",2110.07814v2.pdf," The goal of meta-learning is to learn to adapt to a new task with only a few +labeled examples. To tackle this problem in NLP, we propose $\textit{in-context +tuning}$, which recasts adaptation and prediction as a simple sequence +prediction problem: to form the input sequence, we concatenate the task +instruction, the labeled examples, and the target input to predict; to +meta-train the model to learn from in-context examples, we fine-tune a +pre-trained language model (LM) to predict the target label from the input +sequences on a collection of tasks. + We benchmark our method on two collections of text classification tasks: LAMA +and BinaryClfs. Compared to first-order MAML which adapts the model with +gradient descent, our method better leverages the inductive bias of LMs to +perform pattern matching, and outperforms MAML by an absolute $6\%$ AUC ROC +score on BinaryClfs, with increasing advantage w.r.t. model size. Compared to +non-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuning +directly learns to learn from in-context examples. On BinaryClfs, in-context +tuning improves the average AUC-ROC score by an absolute $10\%$, and reduces +the variance with respect to example ordering by 6x and example choices by 2x. +" +Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER,Dong-Ho Lee,http://arxiv.org/pdf/2110.08454v3.pdf,2021-10-16,['cs.cl'],2110.08454v3.pdf," Recent advances in prompt-based learning have shown strong results on +few-shot text classification by using cloze-style templates. Similar attempts +have been made on named entity recognition (NER) which manually design +templates to predict entity types for every text span in a sentence. However, +such methods may suffer from error propagation induced by entity span +detection, high cost due to enumeration of all possible text spans, and +omission of inter-dependencies among token labels in a sentence. Here we +present a simple demonstration-based learning method for NER, which lets the +input be prefaced by task demonstrations for in-context learning. We perform a +systematic study on demonstration strategy regarding what to include (entity +examples, with or without surrounding context), how to select the examples, and +what templates to use. 
Results on in-domain learning and domain adaptation show +that the model's performance in low-resource settings can be largely improved +with a suitable demonstration strategy (e.g., a 4-17% improvement on 25 train +instances). We also find that good demonstration can save many labeled examples +and consistency in demonstration contributes to better performance. +" +GLaM: Efficient Scaling of Language Models with Mixture-of-Experts,Nan Du,http://arxiv.org/pdf/2112.06905v2.pdf,2021-12-13,['cs.cl'],2112.06905v2.pdf," Scaling language models with more data, compute and parameters has driven +significant progress in natural language processing. For example, thanks to +scaling, GPT-3 was able to achieve strong results on in-context learning tasks. +However, training these large dense models requires significant amounts of +computing resources. In this paper, we propose and develop a family of language +models named GLaM (Generalist Language Model), which uses a sparsely activated +mixture-of-experts architecture to scale the model capacity while also +incurring substantially less training cost compared to dense variants. The +largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than +GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half +of the computation flops for inference, while still achieving better overall +zero-shot and one-shot performance across 29 NLP tasks. +" +Can language models learn from explanations in context?,Andrew K. Lampinen,http://arxiv.org/pdf/2204.02329v4.pdf,2022-04-05,"['cs.cl', 'cs.ai', 'cs.lg']",2204.02329v4.pdf," Language Models (LMs) can perform new tasks by adapting to a few in-context +examples. For humans, explanations that connect examples to task principles can +improve learning. We therefore investigate whether explanations of few-shot +examples can help LMs. We annotate questions from 40 challenging tasks with +answer explanations, and various matched control explanations. We evaluate how +different types of explanations, instructions, and controls affect zero- and +few-shot performance. We analyze these results using statistical multilevel +modeling techniques that account for the nested dependencies among conditions, +tasks, prompts, and models. We find that explanations can improve performance +-- even without tuning. Furthermore, explanations hand-tuned for performance on +a small validation set offer substantially larger benefits, and building a +prompt by selecting examples and explanations together substantially improves +performance over selecting examples alone. Finally, even untuned explanations +outperform carefully matched controls, suggesting that the benefits are due to +the link between an example and its explanation, rather than lower-level +features. However, only large models benefit. In summary, explanations can +support the in-context learning of large LMs on challenging tasks. +" +Automatic Short Math Answer Grading via In-context Meta-learning,Mengxue Zhang,http://arxiv.org/pdf/2205.15219v3.pdf,2022-05-30,"['cs.cl', 'cs.lg']",2205.15219v3.pdf," Automatic short answer grading is an important research direction in the +exploration of how to use artificial intelligence (AI)-based tools to improve +education. Current state-of-the-art approaches use neural language models to +create vectorized representations of students responses, followed by +classifiers to predict the score. 
However, these approaches have several key +limitations, including i) they use pre-trained language models that are not +well-adapted to educational subject domains and/or student-generated text and +ii) they almost always train one model per question, ignoring the linkage +across a question and result in a significant model storage problem due to the +size of advanced language models. In this paper, we study the problem of +automatic short answer grading for students' responses to math questions and +propose a novel framework for this task. First, we use MathBERT, a variant of +the popular language model BERT adapted to mathematical content, as our base +model and fine-tune it for the downstream task of student response grading. +Second, we use an in-context learning approach that provides scoring examples +as input to the language model to provide additional context information and +promote generalization to previously unseen questions. We evaluate our +framework on a real-world dataset of student responses to open-ended math +questions and show that our framework (often significantly) outperforms +existing approaches, especially for new questions that are not seen during +training. +" +ThinkSum: Probabilistic reasoning over sets using large language models,Batu Ozturkler,http://arxiv.org/pdf/2210.01293v2.pdf,2022-10-04,['cs.cl'],2210.01293v2.pdf," Large language models (LLMs) have a substantial capacity for high-level +analogical reasoning: reproducing patterns in linear text that occur in their +training data (zero-shot evaluation) or in the provided context (few-shot +in-context learning). However, recent studies show that even the more advanced +LLMs fail in scenarios that require reasoning over multiple objects or facts +and making sequences of logical deductions. We propose a two-stage +probabilistic inference paradigm, ThinkSum, which reasons over sets of objects +or facts in a structured manner. In the first stage (Think - retrieval of +associations), a LLM is queried in parallel over a set of phrases extracted +from the prompt or an auxiliary model call. In the second stage (Sum - +probabilistic inference or reasoning), the results of these queries are +aggregated to make the final prediction. We demonstrate the possibilities and +advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks, +achieving improvements over the state of the art using GPT-family models on +thirteen difficult tasks, often with far smaller model variants. We also +compare and contrast ThinkSum with other proposed modifications to direct +prompting of LLMs, such as variants of chain-of-thought prompting. Our results +suggest that because the probabilistic inference in ThinkSum is performed +outside of calls to the LLM, ThinkSum is less sensitive to prompt design, +yields more interpretable predictions, and can be flexibly combined with latent +variable models to extract structured knowledge from LLMs. Overall, our +proposed paradigm represents a promising approach for enhancing the reasoning +capabilities of LLMs. +" +Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model,Jacob Eisenstein,http://arxiv.org/pdf/2210.02498v2.pdf,2022-10-05,"['cs.cl', 'cs.lg']",2210.02498v2.pdf," Explainable question answering systems should produce not only accurate +answers but also rationales that justify their reasoning and allow humans to +check their work. 
But what sorts of rationales are useful and how can we train +systems to produce them? We propose a new style of rationale for open-book +question answering, called \emph{markup-and-mask}, which combines aspects of +extractive and free-text explanations. In the markup phase, the passage is +augmented with free-text markup that enables each sentence to stand on its own +outside the discourse context. In the masking phase, a sub-span of the +marked-up passage is selected. To train a system to produce markup-and-mask +rationales without annotations, we leverage in-context learning. Specifically, +we generate silver annotated data by sending a series of prompts to a frozen +pretrained language model, which acts as a teacher. We then fine-tune a smaller +student model by training on the subset of rationales that led to correct +answers. The student is ""honest"" in the sense that it is a pipeline: the +rationale acts as a bottleneck between the passage and the answer, while the +""untrusted"" teacher operates under no such constraints. Thus, we offer a new +way to build trustworthy pipeline systems from a combination of end-task +annotations and frozen pretrained language models. +" +Large Language Models can Implement Policy Iteration,Ethan Brooks,http://arxiv.org/pdf/2210.03821v2.pdf,2022-10-07,['cs.lg'],2210.03821v2.pdf," This work presents In-Context Policy Iteration, an algorithm for performing +Reinforcement Learning (RL), in-context, using foundation models. While the +application of foundation models to RL has received considerable attention, +most approaches rely on either (1) the curation of expert demonstrations +(either through manual design or task-specific pretraining) or (2) adaptation +to the task of interest using gradient methods (either fine-tuning or training +of adapter layers). Both of these techniques have drawbacks. Collecting +demonstrations is labor-intensive, and algorithms that rely on them do not +outperform the experts from which the demonstrations were derived. All gradient +techniques are inherently slow, sacrificing the ""few-shot"" quality that made +in-context learning attractive to begin with. In this work, we present an +algorithm, ICPI, that learns to perform RL tasks without expert demonstrations +or gradients. Instead we present a policy-iteration method in which the prompt +content is the entire locus of learning. ICPI iteratively updates the contents +of the prompt from which it derives its policy through trial-and-error +interaction with an RL environment. In order to eliminate the role of +in-weights learning (on which approaches like Decision Transformer rely +heavily), we demonstrate our algorithm using Codex, a language model with no +prior knowledge of the domains on which we evaluate it. +" +Transformers generalize differently from information stored in context vs in weights,Stephanie C. Y. Chan,http://arxiv.org/pdf/2210.05675v2.pdf,2022-10-11,"['cs.cl', 'cs.ai', 'cs.lg']",2210.05675v2.pdf," Transformer models can use two fundamentally different kinds of information: +information stored in weights during training, and information provided +``in-context'' at inference time. In this work, we show that transformers +exhibit different inductive biases in how they represent and generalize from +the information in these two sources. In particular, we characterize whether +they generalize via parsimonious rules (rule-based generalization) or via +direct comparison with observed examples (exemplar-based generalization). 
This +is of important practical consequence, as it informs whether to encode +information in weights or in context, depending on how we want models to use +that information. In transformers trained on controlled stimuli, we find that +generalization from weights is more rule-based whereas generalization from +context is largely exemplar-based. In contrast, we find that in transformers +pre-trained on natural language, in-context learning is significantly +rule-based, with larger models showing more rule-basedness. We hypothesise that +rule-based generalization from in-context information might be an emergent +consequence of large-scale training on language, which has sparse rule-like +structure. Using controlled stimuli, we verify that transformers pretrained on +data containing sparse rule-like structure exhibit more rule-based +generalization. +" +Large Language Models Meet Harry Potter: A Bilingual Dataset for Aligning Dialogue Agents with Characters,Nuo Chen,http://arxiv.org/pdf/2211.06869v4.pdf,2022-11-13,"['cs.cl', 'cs.ai']",2211.06869v4.pdf," In recent years, Dialogue-style Large Language Models (LLMs) such as ChatGPT +and GPT4 have demonstrated immense potential in constructing open-domain +dialogue agents. However, aligning these agents with specific characters or +individuals remains a considerable challenge due to the complexities of +character representation and the lack of comprehensive annotations. In this +paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to +advance the study of dialogue agents and character alignment. The dataset +encompasses all dialogue sessions (in both English and Chinese) from the Harry +Potter series and is annotated with vital background information, including +dialogue scenes, speakers, character relationships, and attributes. These +extensive annotations may empower LLMs to unlock character-driven dialogue +capabilities. Furthermore, it can serve as a universal benchmark for evaluating +how well can a LLM aligning with a specific character. We benchmark LLMs on HPD +using both fine-tuning and in-context learning settings. Evaluation results +reveal that although there is substantial room for improvement in generating +high-quality, character-aligned responses, the proposed dataset is valuable in +guiding models toward responses that better align with the character of Harry +Potter. +" +Retrieval-Augmented Multimodal Language Modeling,Michihiro Yasunaga,http://arxiv.org/pdf/2211.12561v2.pdf,2022-11-22,"['cs.cv', 'cs.cl', 'cs.lg']",2211.12561v2.pdf," Recent multimodal models such as DALL-E and CM3 have achieved remarkable +progress in text-to-image and image-to-text generation. However, these models +store all learned knowledge (e.g., the appearance of the Eiffel Tower) in the +model parameters, requiring increasingly larger models and training data to +capture more knowledge. To integrate knowledge in a more scalable and modular +way, we propose a retrieval-augmented multimodal model, which enables a base +multimodal model (generator) to refer to relevant text and images fetched by a +retriever from external memory (e.g., documents on the web). Specifically, for +the retriever, we use a pretrained CLIP, and for the generator, we train a CM3 +Transformer on the LAION dataset. Our resulting model, named +Retrieval-Augmented CM3 (RA-CM3), is the first multimodal model that can +retrieve and generate both text and images. 
We show that RA-CM3 significantly +outperforms baseline multimodal models such as DALL-E and CM3 on both image and +caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while +requiring much less compute for training (<30% of DALL-E). Moreover, we show +that RA-CM3 exhibits novel capabilities, such as faithful image generation and +multimodal in-context learning (e.g., image generation from demonstrations). +" +"Operationalizing Specifications, In Addition to Test Sets for Evaluating Constrained Generative Models",Vikas Raunak,http://arxiv.org/pdf/2212.00006v1.pdf,2022-11-19,"['cs.hc', 'cs.cl', 'cs.cv', 'cs.cy']",2212.00006v1.pdf," In this work, we present some recommendations on the evaluation of +state-of-the-art generative models for constrained generation tasks. The +progress on generative models has been rapid in recent years. These large-scale +models have had three impacts: firstly, the fluency of generation in both +language and vision modalities has rendered common average-case evaluation +metrics much less useful in diagnosing system errors. Secondly, the same +substrate models now form the basis of a number of applications, driven both by +the utility of their representations as well as phenomena such as in-context +learning, which raise the abstraction level of interacting with such models. +Thirdly, the user expectations around these models and their feted public +releases have made the technical challenge of out of domain generalization much +less excusable in practice. Subsequently, our evaluation methodologies haven't +adapted to these changes. More concretely, while the associated utility and +methods of interacting with generative models have expanded, a similar +expansion has not been observed in their evaluation practices. In this paper, +we argue that the scale of generative models could be exploited to raise the +abstraction level at which evaluation itself is conducted and provide +recommendations for the same. Our recommendations are based on leveraging +specifications as a powerful instrument to evaluate generation quality and are +readily applicable to a variety of tasks. +" +Language model acceptability judgements are not always robust to context,Koustuv Sinha,http://arxiv.org/pdf/2212.08979v1.pdf,2022-12-18,"['cs.cl', 'cs.lg']",2212.08979v1.pdf," Targeted syntactic evaluations of language models ask whether models show +stable preferences for syntactically acceptable content over minimal-pair +unacceptable inputs. Most targeted syntactic evaluation datasets ask models to +make these judgements with just a single context-free sentence as input. This +does not match language models' training regime, in which input sentences are +always highly contextualized by the surrounding corpus. This mismatch raises an +important question: how robust are models' syntactic judgements in different +contexts? In this paper, we investigate the stability of language models' +performance on targeted syntactic evaluations as we vary properties of the +input context: the length of the context, the types of syntactic phenomena it +contains, and whether or not there are violations of grammaticality. We find +that model judgements are generally robust when placed in randomly sampled +linguistic contexts. However, they are substantially unstable for contexts +containing syntactic structures matching those in the critical test content. 
+Among all tested models (GPT-2 and five variants of OPT), we significantly +improve models' judgements by providing contexts with matching syntactic +structures, and conversely significantly worsen them using unacceptable +contexts with matching but violated syntactic structures. This effect is +amplified by the length of the context, except for unrelated inputs. We show +that these changes in model performance are not explainable by simple features +matching the context and the test inputs, such as lexical overlap and +dependency overlap. This sensitivity to highly specific syntactic features of +the context can only be explained by the models' implicit in-context learning +abilities. +" +Low-Resource Authorship Style Transfer: Can Non-Famous Authors Be Imitated?,Ajay Patel,http://arxiv.org/pdf/2212.08986v2.pdf,2022-12-18,['cs.cl'],2212.08986v2.pdf," Authorship style transfer involves altering text to match the style of a +target author whilst preserving the original meaning. Existing unsupervised +approaches like STRAP have largely focused on style transfer to target authors +with many examples of their writing style in books, speeches, or other +published works. This high-resource training data requirement (often greater +than 100,000 words) makes these approaches primarily useful for style transfer +to published authors, politicians, or other well-known figures and authorship +styles, while style transfer to non-famous authors has not been well-studied. +We introduce the \textit{low-resource authorship style transfer} task, a more +challenging class of authorship style transfer where only a limited amount of +text in the target author's style may exist. In our experiments, we +specifically choose source and target authors from Reddit and style transfer +their Reddit posts, limiting ourselves to just 16 posts (on average ~500 words) +of the target author's style. Style transfer accuracy is typically measured by +how often a classifier or human judge will classify an output as written by the +target author. Recent authorship representations models excel at authorship +identification even with just a few writing samples, making automatic +evaluation of this task possible for the first time through evaluation metrics +we propose. Our results establish an in-context learning technique we develop +as the strongest baseline, though we find current approaches do not yet achieve +mastery of this challenging task. We release our data and implementations to +encourage further investigation. +" +Training Trajectories of Language Models Across Scales,Mengzhou Xia,http://arxiv.org/pdf/2212.09803v3.pdf,2022-12-19,"['cs.cl', 'cs.ai', 'cs.lg']",2212.09803v3.pdf," Scaling up language models has led to unprecedented performance gains, but +little is understood about how the training dynamics change as models get +larger. How do language models of different sizes learn during pre-training? +Why do larger language models demonstrate more desirable behaviors? In this +paper, we analyze the intermediate training checkpoints of differently sized +OPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-token +prediction, sequence-level generation, and downstream tasks. 
We find that 1) at +a given perplexity and independent of model sizes, a similar subset of training +tokens see the most significant reduction in loss, with the rest stagnating or +showing double-descent behavior; 2) early in training, all models learn to +reduce the perplexity of grammatical sequences that contain hallucinations, +with small models halting at this suboptimal distribution and larger ones +eventually learning to assign these sequences lower probabilities; 3) +perplexity is a strong predictor of in-context learning performance on 74 +multiple-choice tasks from BIG-Bench, and this holds independent of the model +size. Together, these results show that perplexity is more predictive of model +behaviors than model size or training computation. +" +Dialog2API: Task-Oriented Dialogue with API Description and Example Programs,Raphael Shu,http://arxiv.org/pdf/2212.09946v1.pdf,2022-12-20,['cs.cl'],2212.09946v1.pdf," Functionality and dialogue experience are two important factors of +task-oriented dialogue systems. Conventional approaches with closed schema +(e.g., conversational semantic parsing) often fail as both the functionality +and dialogue experience are strongly constrained by the underlying schema. We +introduce a new paradigm for task-oriented dialogue - Dialog2API - to greatly +expand the functionality and provide seamless dialogue experience. The +conversational model interacts with the environment by generating and executing +programs triggering a set of pre-defined APIs. The model also manages the +dialogue policy and interact with the user through generating appropriate +natural language responses. By allowing generating free-form programs, +Dialog2API supports composite goals by combining different APIs, whereas +unrestricted program revision provides natural and robust dialogue experience. +To facilitate Dialog2API, the core model is provided with API documents, an +execution environment and optionally some example dialogues annotated with +programs. We propose an approach tailored for the Dialog2API, where the +dialogue states are represented by a stack of programs, with most recently +mentioned program on the top of the stack. Dialog2API can work with many +application scenarios such as software automation and customer service. In this +paper, we construct a dataset for AWS S3 APIs and present evaluation results of +in-context learning baselines. +" +HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation,Hamish Ivison,http://arxiv.org/pdf/2212.10315v2.pdf,2022-12-20,['cs.cl'],2212.10315v2.pdf," Recent NLP models have shown the remarkable ability to effectively generalise +`zero-shot' to new tasks using only natural language instructions as guidance. +However, many of these approaches suffer from high computational costs due to +their reliance on concatenating lengthy instructions with every input example, +resulting in costly reprocessing of the instruction. To avoid this, we +introduce Hypernetworks for INstruction Tuning (HINT), which convert task +instructions and examples into parameter-efficient modules inserted into an +underlying model using a pretrained text encoder, eliminating the need to +include instructions in the model input. The hypernetwork in HINT also produces +an encoded instruction, which we concatenate with encoded inputs during +decoding to further improve performance. HINT models outperform strong +state-of-the-art baselines by over 10% when controlling for compute (measured +in FLOPs). 
By converting instructions into modules, HINT models can effectively +disregard the length of instructions and few-shot example inputs in terms of +compute usage. As a result, HINT can enhance its performance by up to 25% by +incorporating additional few-shot data, while utilizing only up to 5% more +compute. This combines the strengths of parameter-efficient fine-tuning and +in-context learning. +" +Prompt-Augmented Linear Probing: Scaling beyond the Limit of Few-shot In-Context Learners,Hyunsoo Cho,http://arxiv.org/pdf/2212.10873v3.pdf,2022-12-21,"['cs.cl', 'cs.lg']",2212.10873v3.pdf," Through in-context learning (ICL), large-scale language models are effective +few-shot learners without additional model fine-tuning. However, the ICL +performance does not scale well with the number of available training samples +as it is limited by the inherent input length constraint of the underlying +language model. Meanwhile, many studies have revealed that language models are +also powerful feature extractors, allowing them to be utilized in a black-box +manner and enabling the linear probing paradigm, where lightweight +discriminators are trained on top of the pre-extracted input representations. +This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear +probing and ICL, which leverages the best of both worlds. PALP inherits the +scalability of linear probing and the capability of enforcing language models +to derive more meaningful representations via tailoring input into a more +conceivable form. Throughout in-depth investigations on various datasets, we +verified that PALP significantly enhances the input representations closing the +gap between ICL in the data-hungry scenario and fine-tuning in the +data-abundant scenario with little training overhead, potentially making PALP a +strong alternative in a black-box scenario. +" +Parallel Context Windows for Large Language Models,Nir Ratner,http://arxiv.org/pdf/2212.10947v3.pdf,2022-12-21,['cs.cl'],2212.10947v3.pdf," When applied to processing long text, Large Language Models (LLMs) are +limited by their context window. Existing efforts to address this limitation +involve training specialized architectures, and cannot be easily applied to +off-the-shelf LLMs. We present Parallel Context Windows (PCW), a method that +alleviates the context window restriction for any off-the-shelf LLM without +further training. The key to the approach is to carve a long context into +chunks (``windows''), restrict the attention mechanism to apply only within +each window, and re-use the positional embeddings across the windows. Our main +results test the PCW approach on in-context learning with models that range in +size between 750 million and 178 billion parameters, and show substantial +improvements for tasks with diverse input and output spaces. We show additional +benefits in other settings where long context windows may be beneficial: +multi-hop questions and retrieval-augmented question answering with multiple +retrieved documents. Our results highlight Parallel Context Windows as a +promising method for applying off-the-shelf LLMs in a range of settings that +require long text sequences. We make our code publicly available at +https://github.com/ai21labs/parallel-context-windows. 
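The Parallel Context Windows entry above describes a purely mechanical recipe: carve a long prompt into windows, restrict attention so that context tokens only attend within their own window while the final task tokens attend to everything, and re-use the same positional embeddings in every window. The sketch below builds such an attention mask and position-id layout; the helper name and the exact window layout are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def build_pcw_inputs(window_lengths, query_length):
    """Attention mask and position ids in the spirit of Parallel Context
    Windows: causal attention inside each window, shared positional ids
    across windows, and query tokens that attend to all windows."""
    total = sum(window_lengths) + query_length
    mask = np.zeros((total, total), dtype=bool)   # True = position may attend
    position_ids = np.zeros(total, dtype=int)

    start = 0
    for w in window_lengths:
        idx = np.arange(start, start + w)
        mask[np.ix_(idx, idx)] = np.tril(np.ones((w, w), dtype=bool))
        position_ids[idx] = np.arange(w)          # every window restarts at 0
        start += w

    # Query tokens see every context window plus earlier query tokens.
    q = np.arange(start, total)
    mask[q, :] = True
    mask[np.ix_(q, q)] = np.tril(np.ones((query_length, query_length), dtype=bool))
    position_ids[q] = max(window_lengths) + np.arange(query_length)
    return mask, position_ids

if __name__ == "__main__":
    mask, pos = build_pcw_inputs(window_lengths=[4, 4, 3], query_length=2)
    print(mask.astype(int))
    print(pos)   # positions repeat across windows, then continue for the query
```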
+" +Collaborating with language models for embodied reasoning,Ishita Dasgupta,http://arxiv.org/pdf/2302.00763v1.pdf,2023-02-01,"['cs.lg', 'cs.ai', 'cs.cl']",2302.00763v1.pdf," Reasoning in a complex and ambiguous environment is a key goal for +Reinforcement Learning (RL) agents. While some sophisticated RL agents can +successfully solve difficult tasks, they require a large amount of training +data and often struggle to generalize to new unseen environments and new tasks. +On the other hand, Large Scale Language Models (LSLMs) have exhibited strong +reasoning ability and the ability to to adapt to new tasks through in-context +learning. However, LSLMs do not inherently have the ability to interrogate or +intervene on the environment. In this work, we investigate how to combine these +complementary abilities in a single system consisting of three parts: a +Planner, an Actor, and a Reporter. The Planner is a pre-trained language model +that can issue commands to a simple embodied agent (the Actor), while the +Reporter communicates with the Planner to inform its next command. We present a +set of tasks that require reasoning, test this system's ability to generalize +zero-shot and investigate failure cases, and demonstrate how components of this +system can be trained with reinforcement-learning to improve performance. +" +Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based Learning,Angela Ramirez,http://arxiv.org/pdf/2302.03848v1.pdf,2023-02-08,['cs.cl'],2302.03848v1.pdf," Prompt-based or in-context learning has achieved high zero-shot performance +on many natural language generation (NLG) tasks. Here we explore the +performance of prompt-based learning for simultaneously controlling the +personality and the semantic accuracy of an NLG for task-oriented dialogue. We +experiment with prompt-based learning on the PERSONAGE restaurant +recommendation corpus to generate semantically and stylistically-controlled +text for 5 different Big-5 personality types: agreeable, disagreeable, +conscientious, unconscientious, and extravert. We test two different classes of +discrete prompts to generate utterances for a particular personality style: (1) +prompts that demonstrate generating directly from a meaning representation that +includes a personality specification; and (2) prompts that rely on first +converting the meaning representation to a textual pseudo-reference, and then +using the pseudo-reference in a textual style transfer (TST) prompt. In each +case, we show that we can vastly improve performance by over-generating outputs +and ranking them, testing several ranking functions based on automatic metrics +for semantic accuracy, personality-match, and fluency. We also test whether NLG +personality demonstrations from the restaurant domain can be used with meaning +representations for the video game domain to generate personality stylized +utterances about video games. Our findings show that the TST prompts produces +the highest semantic accuracy (78.46% for restaurants and 87.6% for video +games) and personality accuracy (100% for restaurants and 97% for video games). +Our results on transferring personality style to video game utterances are +surprisingly good. To our knowledge, there is no previous work testing the +application of prompt-based learning to simultaneously controlling both style +and semantic accuracy in NLG. 
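The personality-controlled NLG entry above reports that most of its gains come from over-generating candidate outputs and re-ranking them with automatic metrics for semantic accuracy, personality match, and fluency. The generic re-ranking sketch below follows that recipe; the toy scoring functions and weights stand in for whatever slot-error checker, personality classifier, and fluency model are actually available, and are not the paper's ranking functions.

```python
from typing import Callable, Dict, List, Tuple

def rank_candidates(
    candidates: List[str],
    scorers: Dict[str, Callable[[str], float]],
    weights: Dict[str, float],
) -> List[Tuple[float, Dict[str, float], str]]:
    """Score every over-generated candidate with each metric (higher = better)
    and sort by the weighted sum of scores."""
    ranked = []
    for text in candidates:
        scores = {name: fn(text) for name, fn in scorers.items()}
        total = sum(weights[name] * value for name, value in scores.items())
        ranked.append((total, scores, text))
    ranked.sort(key=lambda item: item[0], reverse=True)
    return ranked

if __name__ == "__main__":
    # Toy stand-ins for semantic-accuracy, personality-match and fluency metrics.
    scorers = {
        "semantic": lambda t: 1.0 if "riverside brasserie" in t.lower() else 0.0,
        "personality": lambda t: 1.0 if "!" in t else 0.3,   # crude "extravert" proxy
        "fluency": lambda t: min(1.0, len(t.split()) / 15.0),
    }
    weights = {"semantic": 0.5, "personality": 0.3, "fluency": 0.2}
    candidates = [
        "Riverside Brasserie is a decent place.",
        "Oh wow, you simply must try Riverside Brasserie, the food is fantastic!",
        "It is a restaurant.",
    ]
    for total, _, text in rank_candidates(candidates, scorers, weights):
        print(f"{total:.2f}  {text}")
```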
+" +Distinguishability Calibration to In-Context Learning,Hongjing Li,http://arxiv.org/pdf/2302.06198v3.pdf,2023-02-13,['cs.cl'],2302.06198v3.pdf," Recent years have witnessed increasing interests in prompt-based learning in +which models can be trained on only a few annotated instances, making them +suitable in low-resource settings. When using prompt-based learning for text +classification, the goal is to use a pre-trained language model (PLM) to +predict a missing token in a pre-defined template given an input text, which +can be mapped to a class label. However, PLMs built on the transformer +architecture tend to generate similar output embeddings, making it difficult to +discriminate between different class labels. The problem is further exacerbated +when dealing with classification tasks involving many fine-grained class +labels. In this work, we alleviate this information diffusion issue, i.e., +different tokens share a large proportion of similar information after going +through stacked multiple self-attention layers in a transformer, by proposing a +calibration method built on feature transformations through rotation and +scaling to map a PLM-encoded embedding into a new metric space to guarantee the +distinguishability of the resulting embeddings. Furthermore, we take the +advantage of hyperbolic embeddings to capture the hierarchical relations among +fine-grained class-associated token embedding by a coarse-to-fine metric +learning strategy to enhance the distinguishability of the learned output +embeddings. Extensive experiments on the three datasets under various settings +demonstrate the effectiveness of our approach. Our code can be found at +https://github.com/donttal/TARA. +" +Do We Still Need Clinical Language Models?,Eric Lehman,http://arxiv.org/pdf/2302.08091v1.pdf,2023-02-16,['cs.cl'],2302.08091v1.pdf," Although recent advances in scaling large language models (LLMs) have +resulted in improvements on many NLP tasks, it remains unclear whether these +models trained primarily with general web text are the right tool in highly +specialized, safety critical domains such as clinical text. Recent results have +suggested that LLMs encode a surprising amount of medical knowledge. This +raises an important question regarding the utility of smaller domain-specific +language models. With the success of general-domain LLMs, is there still a need +for specialized clinical models? To investigate this question, we conduct an +extensive empirical analysis of 12 language models, ranging from 220M to 175B +parameters, measuring their performance on 3 different clinical tasks that test +their ability to parse and reason over electronic health records. As part of +our experiments, we train T5-Base and T5-Large models from scratch on clinical +notes from MIMIC III and IV to directly investigate the efficiency of clinical +tokens. We show that relatively small specialized clinical models substantially +outperform all in-context learning approaches, even when finetuned on limited +annotated data. Further, we find that pretraining on clinical tokens allows for +smaller, more parameter-efficient models that either match or outperform much +larger language models trained on general text. We release the code and the +models used under the PhysioNet Credentialed Health Data license and data use +agreement. 
+" +eP-ALM: Efficient Perceptual Augmentation of Language Models,Mustafa Shukor,http://arxiv.org/pdf/2303.11403v4.pdf,2023-03-20,"['cs.cv', 'cs.cl', 'cs.lg']",2303.11403v4.pdf," Large Language Models (LLMs) have so far impressed the world, with +unprecedented capabilities that emerge in models at large scales. On the vision +side, transformer models (i.e., ViT) are following the same trend, achieving +the best performance on challenging benchmarks. With the abundance of such +unimodal models, a natural question arises; do we need also to follow this +trend to tackle multimodal tasks? In this work, we propose to rather direct +effort to efficient adaptations of existing models, and propose to augment +Language Models with perception. Existing approaches for adapting pretrained +models for vision-language tasks still rely on several key components that +hinder their efficiency. In particular, they still train a large number of +parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) +trained on huge image-text datasets, and add significant inference overhead. In +addition, most of these approaches have focused on Zero-Shot and In Context +Learning, with little to no effort on direct finetuning. We investigate the +minimal computational effort needed to adapt unimodal models for multimodal +tasks and propose a new challenging setup, alongside different approaches, that +efficiently adapts unimodal pretrained models. We show that by freezing more +than 99% of total parameters, training only one linear projection layer, and +prepending only one trainable token, our approach (dubbed eP-ALM) significantly +outperforms other baselines on VQA and Captioning across Image, Video, and +Audio modalities, following the proposed setup. The code is available here: +https://github.com/mshukor/eP-ALM. +" +Towards Making the Most of ChatGPT for Machine Translation,Keqin Peng,http://arxiv.org/pdf/2303.13780v4.pdf,2023-03-24,['cs.cl'],2303.13780v4.pdf," ChatGPT shows remarkable capabilities for machine translation (MT). Several +prior studies have shown that it achieves comparable results to commercial +systems for high-resource languages, but lags behind in complex tasks, e.g., +low-resource and distant-language-pairs translation. However, they usually +adopt simple prompts which can not fully elicit the capability of ChatGPT. In +this paper, we aim to further mine ChatGPT's translation ability by revisiting +several aspects: temperature, task information, and domain information, and +correspondingly propose an optimal temperature setting and two (simple but +effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts +(DSP). We show that: 1) The performance of ChatGPT depends largely on +temperature, and a lower temperature usually can achieve better performance; 2) +Emphasizing the task information can further improve ChatGPT's performance, +particularly in complex MT tasks; 3) Introducing domain information can elicit +ChatGPT's generalization ability and improve its performance in the specific +domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT +tasks, which can be partially addressed by our proposed prompts but still need +to be highlighted for the MT/NLP community. We also explore the effects of +advanced in-context learning strategies and find a (negative but interesting) +observation: the powerful chain-of-thought prompt leads to word-by-word +translation behavior, thus bringing significant translation degradation. 
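The machine-translation entry above attributes its gains to three prompt-level choices: a low sampling temperature, stating the task explicitly (Task-Specific Prompts), and stating the domain explicitly (Domain-Specific Prompts). The sketch below assembles such a prompt; the wording of the template and the `build_mt_prompt` helper are hypothetical, since the abstract does not reproduce the paper's exact templates.

```python
from typing import Optional

def build_mt_prompt(source_text: str, src_lang: str, tgt_lang: str,
                    domain: Optional[str] = None) -> str:
    """Compose a translation prompt that states the task (TSP) and, when
    known, the domain (DSP).  Wording is illustrative only."""
    lines = [
        f"Task: translate the following {src_lang} text into {tgt_lang}.",
    ]
    if domain is not None:
        lines.append(f"The text is from the {domain} domain; use terminology "
                     f"appropriate to that domain.")
    lines.append(f"{src_lang} text: {source_text}")
    lines.append(f"{tgt_lang} translation:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_mt_prompt(
        "Der Patient zeigt eine ausgeprägte Bradykardie.",
        src_lang="German", tgt_lang="English", domain="clinical",
    ))
    # Per the abstract's first finding, the prompt would then be sent with a
    # low sampling temperature rather than the API default.
```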
+" +$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference,Benfeng Xu,http://arxiv.org/pdf/2303.13824v1.pdf,2023-03-24,"['cs.cl', 'cs.ai']",2303.13824v1.pdf," In-Context Learning (ICL), which formulates target tasks as prompt completion +conditioned on in-context demonstrations, has become the prevailing utilization +of LLMs. In this paper, we first disclose an actual predicament for this +typical usage that it can not scale up with training data due to context length +restriction. Besides, existing works have shown that ICL also suffers from +various biases and requires delicate calibration treatment. To address both +challenges, we advocate a simple and effective solution, $k$NN Prompting, which +first queries LLM with training data for distributed representations, then +predicts test instances by simply referring to nearest neighbors. We conduct +comprehensive experiments to demonstrate its two-fold superiority: 1) +Calibration-Free: $k$NN Prompting does not directly align LLM output +distribution with task-specific label space, instead leverages such +distribution to align test and training instances. It significantly outperforms +state-of-the-art calibration-based methods under comparable few-shot scenario. +2) Beyond-Context: $k$NN Prompting can further scale up effectively with as +many training data as are available, continually bringing substantial +improvements. The scaling trend holds across 10 orders of magnitude ranging +from 2 shots to 1024 shots as well as different LLMs scales ranging from 0.8B +to 30B. It successfully bridges data scaling into model scaling, and brings new +potentials for the gradient-free paradigm of LLM deployment. Code is publicly +available. +" +Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System,Yunfan Gao,http://arxiv.org/pdf/2303.14524v2.pdf,2023-03-25,"['cs.ir', 'cs.cl', 'cs.lg']",2303.14524v2.pdf," Large language models (LLMs) have demonstrated their significant potential to +be applied for addressing various application tasks. However, traditional +recommender systems continue to face great challenges such as poor +interactivity and explainability, which actually also hinder their broad +deployment in real-world systems. To address these limitations, this paper +proposes a novel paradigm called Chat-Rec (ChatGPT Augmented Recommender +System) that innovatively augments LLMs for building conversational recommender +systems by converting user profiles and historical interactions into prompts. +Chat-Rec is demonstrated to be effective in learning user preferences and +establishing connections between users and products through in-context +learning, which also makes the recommendation process more interactive and +explainable. What's more, within the Chat-Rec framework, user's preferences can +transfer to different products for cross-domain recommendations, and +prompt-based injection of information into LLMs can also handle the cold-start +scenarios with new items. In our experiments, Chat-Rec effectively improve the +results of top-k recommendations and performs better in zero-shot rating +prediction task. Chat-Rec offers a novel approach to improving recommender +systems and presents new practical scenarios for the implementation of AIGC (AI +generated content) in recommender system studies. 
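The $k$NN Prompting entry above avoids decoding labels directly: every training example is run through the LLM once, its next-token probability distribution is cached as that example's representation, and a test instance is labelled by its nearest cached neighbours. The sketch below follows that outline; `next_token_distribution` is a hypothetical callback onto whatever LLM is in use, and the KL-divergence distance is one reasonable choice rather than the paper's prescribed metric.

```python
from collections import Counter
from typing import Callable, List, Tuple

import numpy as np

Memory = List[Tuple[np.ndarray, str]]   # (vocab-sized distribution, label)

def build_memory(train: List[Tuple[str, str]],
                 next_token_distribution: Callable[[str], np.ndarray]) -> Memory:
    """Query the LLM once per training prompt and cache its distribution."""
    return [(next_token_distribution(prompt), label) for prompt, label in train]

def knn_predict(test_prompt: str, memory: Memory,
                next_token_distribution: Callable[[str], np.ndarray],
                k: int = 8) -> str:
    """Label a test prompt by majority vote over its k nearest neighbours,
    where distance is KL(test distribution || stored distribution)."""
    q = next_token_distribution(test_prompt) + 1e-12
    scored = []
    for p, label in memory:
        kl = float(np.sum(q * (np.log(q) - np.log(p + 1e-12))))
        scored.append((kl, label))
    scored.sort(key=lambda x: x[0])
    return Counter(label for _, label in scored[:k]).most_common(1)[0][0]

if __name__ == "__main__":
    VOCAB = 100

    def fake_llm(prompt: str) -> np.ndarray:
        # Deterministic toy stand-in for a real next-token distribution.
        g = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
        d = g.random(VOCAB)
        return d / d.sum()

    memory = build_memory([("great film ->", "pos"), ("dreadful film ->", "neg")],
                          fake_llm)
    print(knn_predict("great film ->", memory, fake_llm, k=1))   # -> pos
```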
+" +What Makes Good In-context Demonstrations for Code Intelligence Tasks with LLMs?,Shuzheng Gao,http://arxiv.org/pdf/2304.07575v2.pdf,2023-04-15,['cs.se'],2304.07575v2.pdf," Pre-trained models of source code have gained widespread popularity in many +code intelligence tasks. Recently, with the scaling of the model and corpus +size, large language models have shown the ability of in-context learning +(ICL). ICL employs task instructions and a few examples as demonstrations, and +then inputs the demonstrations to the language models for making predictions. +This new learning paradigm is training-free and has shown impressive +performance in various natural language processing and code intelligence tasks. +However, the performance of ICL heavily relies on the quality of +demonstrations, e.g., the selected examples. It is important to systematically +investigate how to construct a good demonstration for code-related tasks. In +this paper, we empirically explore the impact of three key factors on the +performance of ICL in code intelligence tasks: the selection, order, and number +of demonstration examples. We conduct extensive experiments on three code +intelligence tasks including code summarization, bug fixing, and program +synthesis. Our experimental results demonstrate that all the above three +factors dramatically impact the performance of ICL in code intelligence tasks. +Additionally, we summarize our findings and provide takeaway suggestions on how +to construct effective demonstrations, taking into account these three +perspectives. We also show that a carefully-designed demonstration based on our +findings can lead to substantial improvements over widely-used demonstration +construction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%, +175.96%, and 50.81% on code summarization, bug fixing, and program synthesis, +respectively +" +Sparks of GPTs in Edge Intelligence for Metaverse: Caching and Inference for Mobile AIGC Services,Minrui Xu,http://arxiv.org/pdf/2304.08782v2.pdf,2023-04-18,['cs.ni'],2304.08782v2.pdf," Aiming at achieving artificial general intelligence (AGI) for Metaverse, +pretrained foundation models (PFMs), e.g., generative pretrained transformers +(GPTs), can effectively provide various AI services, such as autonomous +driving, digital twins, and AI-generated content (AIGC) for extended reality. +With the advantages of low latency and privacy-preserving, serving PFMs of +mobile AI services in edge intelligence is a viable solution for caching and +executing PFMs on edge servers with limited computing resources and GPU memory. +However, PFMs typically consist of billions of parameters that are computation +and memory-intensive for edge servers during loading and execution. In this +article, we investigate edge PFM serving problems for mobile AIGC services of +Metaverse. First, we introduce the fundamentals of PFMs and discuss their +characteristic fine-tuning and inference methods in edge intelligence. Then, we +propose a novel framework of joint model caching and inference for managing +models and allocating resources to satisfy users' requests efficiently. +Furthermore, considering the in-context learning ability of PFMs, we propose a +new metric to evaluate the freshness and relevance between examples in +demonstrations and executing tasks, namely the Age of Context (AoC). Finally, +we propose a least context algorithm for managing cached models at edge servers +by balancing the tradeoff among latency, energy consumption, and accuracy. 
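The entry above on in-context demonstrations for code intelligence tasks finds that the selection, order, and number of demonstrations all shape ICL performance. The sketch below shows one common recipe consistent with those findings: retrieve the most similar training examples and place the most similar ones nearest to the query. The token-overlap similarity and the ordering convention are illustrative choices, not the paper's prescribed construction.

```python
from typing import List, Tuple

Example = Tuple[str, str]   # (input, output)

def jaccard(a: str, b: str) -> float:
    """Cheap token-overlap similarity; a real system might use BM25 or embeddings."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

def select_demonstrations(query: str, pool: List[Example], k: int = 4) -> List[Example]:
    """Keep the k most similar examples, ordered least-to-most similar so the
    most similar demonstration sits right before the query in the prompt."""
    top = sorted(pool, key=lambda ex: jaccard(query, ex[0]), reverse=True)[:k]
    return list(reversed(top))

def build_prompt(instruction: str, demos: List[Example], query: str) -> str:
    parts = [instruction]
    parts += [f"Input: {x}\nOutput: {y}" for x, y in demos]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    pool = [
        ("def add(a, b): return a + b", "Adds two numbers."),
        ("def read(path): return open(path).read()", "Reads a file into a string."),
        ("def mul(a, b): return a * b", "Multiplies two numbers."),
    ]
    query = "def sub(a, b): return a - b"
    demos = select_demonstrations(query, pool, k=2)
    print(build_prompt("Summarize each function in one sentence.", demos, query))
```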
+"
+Controlled Text Generation with Natural Language Instructions,Wangchunshu Zhou,http://arxiv.org/pdf/2304.14293v2.pdf,2023-04-27,"['cs.cl', 'cs.ai', 'cs.lg']",2304.14293v2.pdf," Large language models generate fluent texts and can follow natural language
+instructions to solve a wide range of tasks without task-specific training.
+Nevertheless, it is notoriously difficult to control their generation to
+satisfy the various constraints required by different applications. In this
+work, we present InstructCTG, a controlled text generation framework that
+incorporates different constraints by conditioning on natural language
+descriptions and demonstrations of the constraints. In particular, we first
+extract the underlying constraints of natural texts through a combination of
+off-the-shelf NLP tools and simple heuristics. We then verbalize the
+constraints into natural language instructions to form weakly supervised
+training data. By prepending natural language descriptions of the constraints
+and a few demonstrations, we fine-tune a pre-trained language model to
+incorporate various types of constraints. Compared to existing search-based or
+score-based methods, InstructCTG is more flexible to different constraint types
+and has a much smaller impact on the generation quality and speed because it
+does not modify the decoding procedure. Additionally, InstructCTG allows the
+model to adapt to new constraints without re-training through the use of
+few-shot task generalization and in-context learning abilities of
+instruction-tuned language models.
+"
+TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation,Keqin Bao,http://arxiv.org/pdf/2305.00447v3.pdf,2023-04-30,['cs.ir'],2305.00447v3.pdf," Large Language Models (LLMs) have demonstrated remarkable performance across
+diverse domains, thereby prompting researchers to explore their potential for
+use in recommendation systems. Initial attempts have leveraged the exceptional
+capabilities of LLMs, such as rich knowledge and strong generalization through
+In-context Learning, which involves phrasing the recommendation task as
+prompts. Nevertheless, the performance of LLMs in recommendation tasks remains
+suboptimal due to a substantial disparity between the training tasks for LLMs
+and recommendation tasks, as well as inadequate recommendation data during
+pre-training. To bridge the gap, we consider building a Large Recommendation
+Language Model by tuning LLMs with recommendation data. To this end, we
+propose an efficient and effective Tuning framework for Aligning LLMs with
+Recommendation, namely TALLRec. We have demonstrated that the proposed TALLRec
+framework can significantly enhance the recommendation capabilities of LLMs in
+the movie and book domains, even with a limited dataset of fewer than 100
+samples. Additionally, the proposed framework is highly efficient and can be
+executed on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM
+exhibits robust cross-domain generalization. Our code and data are available at
+https://github.com/SAI990323/TALLRec.
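The TALLRec entry above fine-tunes an LLM on recommendation interactions phrased as instructions, but the abstract does not show the template, so the sketch below is only a hypothetical illustration of how such instruction-tuning samples might be assembled from a user's liked and disliked items before LoRA fine-tuning; every field name and phrase is an assumption.

```python
from typing import Dict, List

def build_rec_sample(history: List[Dict[str, object]], target_item: str,
                     target_label: bool) -> Dict[str, str]:
    """Turn one user's interaction history plus a candidate item into an
    instruction / input / output triple for supervised fine-tuning."""
    liked = [h["title"] for h in history if h["liked"]]
    disliked = [h["title"] for h in history if not h["liked"]]
    instruction = (
        "Given the user's liked and disliked items, answer Yes or No: "
        "will the user enjoy the target item?"
    )
    model_input = (
        f"Liked: {', '.join(liked) or 'none'}\n"
        f"Disliked: {', '.join(disliked) or 'none'}\n"
        f"Target item: {target_item}"
    )
    return {"instruction": instruction,
            "input": model_input,
            "output": "Yes." if target_label else "No."}

if __name__ == "__main__":
    history = [
        {"title": "The Matrix", "liked": True},
        {"title": "Blade Runner", "liked": True},
        {"title": "Cats", "liked": False},
    ]
    print(build_rec_sample(history, "Ghost in the Shell", target_label=True))
```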
+" +Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction,Ashish Sharma,http://arxiv.org/pdf/2305.02466v1.pdf,2023-05-04,"['cs.cl', 'cs.hc', 'cs.si']",2305.02466v1.pdf," A proven therapeutic technique to overcome negative thoughts is to replace +them with a more hopeful ""reframed thought."" Although therapy can help people +practice and learn this Cognitive Reframing of Negative Thoughts, clinician +shortages and mental health stigma commonly limit people's access to therapy. +In this paper, we conduct a human-centered study of how language models may +assist people in reframing negative thoughts. Based on psychology literature, +we define a framework of seven linguistic attributes that can be used to +reframe a thought. We develop automated metrics to measure these attributes and +validate them with expert judgements from mental health practitioners. We +collect a dataset of 600 situations, thoughts and reframes from practitioners +and use it to train a retrieval-enhanced in-context learning model that +effectively generates reframed thoughts and controls their linguistic +attributes. To investigate what constitutes a ""high-quality"" reframe, we +conduct an IRB-approved randomized field study on a large mental health website +with over 2,000 participants. Amongst other findings, we show that people +prefer highly empathic or specific reframes, as opposed to reframes that are +overly positive. Our findings provide key implications for the use of LMs to +assist people in overcoming negative thoughts. +" +Using ChatGPT for Entity Matching,Ralph Peeters,http://arxiv.org/pdf/2305.03423v2.pdf,2023-05-05,['cs.cl'],2305.03423v2.pdf," Entity Matching is the task of deciding if two entity descriptions refer to +the same real-world entity. State-of-the-art entity matching methods often rely +on fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacks +of using these models for entity matching are that (i) the models require +significant amounts of fine-tuning data for reaching a good performance and +(ii) the fine-tuned models are not robust concerning out-of-distribution +entities. In this paper, we investigate using ChatGPT for entity matching as a +more robust, training data-efficient alternative to traditional Transformer +models. We perform experiments along three dimensions: (i) general prompt +design, (ii) in-context learning, and (iii) provision of higher-level matching +knowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model, +reaching a zero-shot performance of 82.35% F1 on a challenging matching task on +which RoBERTa requires 2000 training examples for reaching a similar +performance. Adding in-context demonstrations to the prompts further improves +the F1 by up to 7.85% when using similarity-based example selection. Always +using the same set of 10 handpicked demonstrations leads to an improvement of +4.92% over the zero-shot performance. Finally, we show that ChatGPT can also be +guided by adding higher-level matching knowledge in the form of rules to the +prompts. Providing matching rules leads to similar performance gains as +providing in-context demonstrations. +" +Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment,Eshaan Tanwar,http://arxiv.org/pdf/2305.05940v3.pdf,2023-05-10,['cs.cl'],2305.05940v3.pdf," In-context learning (ICL) unfolds as large language models become capable of +inferring test labels conditioned on a few labeled samples without any gradient +update. 
ICL-enabled large language models provide a promising step forward +toward bypassing recurrent annotation costs in a low-resource setting. Yet, +only a handful of past studies have explored ICL in a cross-lingual setting, in +which the need for transferring label-knowledge from a high-resource language +to a low-resource one is immensely crucial. To bridge the gap, we provide the +first in-depth analysis of ICL for cross-lingual text classification. We find +that the prevalent mode of selecting random input-label pairs to construct the +prompt-context is severely limited in the case of cross-lingual ICL, primarily +due to the lack of alignment in the input as well as the output spaces. To +mitigate this, we propose a novel prompt construction strategy -- Cross-lingual +In-context Source-Target Alignment (X-InSTA). With an injected coherence in the +semantics of the input examples and a task-based alignment across the source +and target languages, X-InSTA is able to outperform random prompt selection by +a large margin across three different tasks using 44 different cross-lingual +pairs. +" +Can Language Models Solve Graph Problems in Natural Language?,Heng Wang,http://arxiv.org/pdf/2305.10037v2.pdf,2023-05-17,"['cs.cl', 'cs.ai']",2305.10037v2.pdf," Large language models (LLMs) are increasingly adopted for a variety of tasks +with implicit graphical structures, such as planning in robotics, multi-hop +question answering or knowledge probing, structured commonsense reasoning, and +more. While LLMs have advanced the state-of-the-art on these tasks with +structure implications, whether LLMs could explicitly process textual +descriptions of graphs and structures, map them to grounded conceptual spaces, +and perform structured operations remains underexplored. To this end, we +propose NLGraph (Natural Language Graph), a comprehensive benchmark of +graph-based problem solving designed in natural language. NLGraph contains +29,370 problems, covering eight graph reasoning tasks with varying complexity +from simple tasks such as connectivity and shortest path up to complex problems +such as maximum flow and simulating graph neural networks. We evaluate LLMs +(GPT-3/4) with various prompting approaches on the NLGraph benchmark and find +that 1) language models do demonstrate preliminary graph reasoning abilities, +2) the benefit of advanced prompting and in-context learning diminishes on more +complex graph problems, while 3) LLMs are also (un)surprisingly brittle in the +face of spurious correlations in graph and problem settings. We then propose +Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based +approaches to enhance LLMs in solving natural language graph problems. +Build-a-Graph and Algorithmic prompting improve the performance of LLMs on +NLGraph by 3.07% to 16.85% across multiple tasks and settings, while how to +solve the most complicated graph reasoning tasks in our setup with language +models remains an open research question. The NLGraph benchmark and evaluation +code are available at https://github.com/Arthur-Heng/NLGraph. +" +Joint Foundation Model Caching and Inference of Generative AI Services for Edge Intelligence,Minrui Xu,http://arxiv.org/pdf/2305.12130v1.pdf,2023-05-20,['cs.ni'],2305.12130v1.pdf," With the rapid development of artificial general intelligence (AGI), various +multimedia services based on pretrained foundation models (PFMs) need to be +effectively deployed. 
With edge servers that have cloud-level computing power, +edge intelligence can extend the capabilities of AGI to mobile edge networks. +However, compared with cloud data centers, resource-limited edge servers can +only cache and execute a small number of PFMs, which typically consist of +billions of parameters and require intensive computing power and GPU memory +during inference. To address this challenge, in this paper, we propose a joint +foundation model caching and inference framework that aims to balance the +tradeoff among inference latency, accuracy, and resource consumption by +managing cached PFMs and user requests efficiently during the provisioning of +generative AI services. Specifically, considering the in-context learning +ability of PFMs, a new metric named the Age of Context (AoC), is proposed to +model the freshness and relevance between examples in past demonstrations and +current service requests. Based on the AoC, we propose a least context caching +algorithm to manage cached PFMs at edge servers with historical prompts and +inference results. The numerical results demonstrate that the proposed +algorithm can reduce system costs compared with existing baselines by +effectively utilizing contextual information. +" +Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies,Linyong Nan,http://arxiv.org/pdf/2305.12586v1.pdf,2023-05-21,['cs.cl'],2305.12586v1.pdf," In-context learning (ICL) has emerged as a new approach to various natural +language processing tasks, utilizing large language models (LLMs) to make +predictions based on context that has been supplemented with a few examples or +task-specific instructions. In this paper, we aim to extend this method to +question answering tasks that utilize structured knowledge sources, and improve +Text-to-SQL systems by exploring various prompt design strategies for employing +LLMs. We conduct a systematic investigation into different demonstration +selection methods and optimal instruction formats for prompting LLMs in the +Text-to-SQL task. Our approach involves leveraging the syntactic structure of +an example's SQL query to retrieve demonstrations, and we demonstrate that +pursuing both diversity and similarity in demonstration selection leads to +enhanced performance. Furthermore, we show that LLMs benefit from +database-related knowledge augmentations. Our most effective strategy +outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and +the best fine-tuned system by 5.1 points on the Spider dataset. These results +highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL +task, and we present an analysis of the factors contributing to the success of +our strategy. +" +Exploring Chain-of-Thought Style Prompting for Text-to-SQL,Chang-You Tai,http://arxiv.org/pdf/2305.14215v2.pdf,2023-05-23,['cs.cl'],2305.14215v2.pdf," In-context learning with large language models (LLMs) has recently caught +increasing attention due to its superior few-shot performance on various tasks. +However, its performance on text-to-SQL parsing still has much room for +improvement. In this paper, we hypothesize that a crucial aspect of LLMs to +improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we +systematically study how to enhance LLMs' reasoning ability through chain of +thought (CoT) style prompting, including the original chain-of-thought +prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). 
+Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) +may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps +tends to have more error propagation issues. Based on these findings, we +propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 +and 6.5 point absolute gains on the Spider development set and the Spider +Realistic set, respectively, compared to the standard prompting method without +reasoning steps; 2.4 and 1.5 point absolute gains, compared to the +least-to-most prompting method. +" +Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment,Sky CH-Wang,http://arxiv.org/pdf/2305.14492v2.pdf,2023-05-23,['cs.cl'],2305.14492v2.pdf," Designing systems that can reason across cultures requires that they are +grounded in the norms of the contexts in which they operate. However, current +research on developing computational models of social norms has primarily +focused on American society. Here, we propose a novel approach to discover and +compare descriptive social norms across Chinese and American cultures. We +demonstrate our approach by leveraging discussions on a Chinese Q&A platform +(Zhihu) and the existing SocialChemistry dataset as proxies for contrasting +cultural axes, align social situations cross-culturally, and extract social +norms from texts using in-context learning. Embedding Chain-of-Thought +prompting in a human-AI collaborative framework, we build a high-quality +dataset of 3,069 social norms aligned with social situations across Chinese and +American cultures alongside corresponding free-text explanations. To test the +ability of models to reason about social norms across cultures, we introduce +the task of explainable social norm entailment, showing that existing models +under 3B parameters have significant room for improvement in both automatic and +human evaluation. Further analysis of cross-cultural norm differences based on +our dataset shows empirical alignment with the social orientations framework, +revealing several situational and descriptive nuances in norms across these +cultures. +" +Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy,Sarah Wiegreffe,http://arxiv.org/pdf/2305.14596v2.pdf,2023-05-24,"['cs.cl', 'cs.lg']",2305.14596v2.pdf," When pretrained language models (LMs) are applied to discriminative tasks +such as multiple-choice questions, they place probability mass on vocabulary +tokens that aren't among the given answer choices. Spreading probability mass +across multiple surface forms with identical meaning (such as ""bath"" and +""bathtub"") is thought to cause an underestimation of a model's true +performance, referred to as the ""surface form competition"" (SFC) hypothesis. +This has motivated the introduction of various probability normalization +methods. However, many core questions remain unanswered. How do we measure SFC? +Are there direct ways of reducing it, and does doing so improve task +performance? + We propose a mathematical formalism for SFC which allows us to quantify and +bound its impact for the first time. We identify a simple method for reducing +it -- namely, increasing probability mass on the given answer choices by a) +including them in the prompt and b) using in-context learning with even just +one example. We show this method eliminates the impact of SFC in the majority +of instances. 
Our experiments on three diverse datasets and six LMs reveal +several additional surprising findings. For example, both normalization and +prompting methods for reducing SFC can be ineffective or even detrimental to +task performance for some LMs. We conclude with practical insights for +effectively prompting LMs for multiple-choice tasks. +" +Universal Self-Adaptive Prompting,Xingchen Wan,http://arxiv.org/pdf/2305.14926v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14926v2.pdf," A hallmark of modern large language models (LLMs) is their impressive general +zero-shot and few-shot abilities, often elicited through in-context learning +(ICL) via prompting. However, while highly coveted and being the most general, +zero-shot performances in LLMs are still typically weaker due to the lack of +guidance and the difficulty of applying existing automatic prompt design +methods in general tasks when ground-truth labels are unavailable. In this +study, we address this by presenting Universal Self-Adaptive Prompting (USP), +an automatic prompt design approach specifically tailored for zero-shot +learning (while compatible with few-shot). Requiring only a small amount of +unlabeled data and an inference-only LLM, USP is highly versatile: to achieve +universal prompting, USP categorizes a possible NLP task into one of the three +possible task types and then uses a corresponding selector to select the most +suitable queries and zero-shot model-generated responses as +pseudo-demonstrations, thereby generalizing ICL to the zero-shot setup in a +fully automated way. We evaluate USP with PaLM and PaLM 2 models and +demonstrate performances that are considerably stronger than standard zero-shot +baselines and often comparable to or even superior to few-shot baselines across +more than 40 natural language understanding, natural language generation, and +reasoning tasks. +" +Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization,Aman Priyanshu,http://arxiv.org/pdf/2305.15008v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.cy']",2305.15008v1.pdf," LLM-powered chatbots are becoming widely adopted in applications such as +healthcare, personal assistants, industry hiring decisions, etc. In many of +these cases, chatbots are fed sensitive, personal information in their prompts, +as samples for in-context learning, retrieved records from a database, or as +part of the conversation. The information provided in the prompt could directly +appear in the output, which might have privacy ramifications if there is +sensitive information there. As such, in this paper, we aim to understand the +input copying and regurgitation capabilities of these models during inference +and how they can be directly instructed to limit this copying by complying with +regulations such as HIPAA and GDPR, based on their internal knowledge of them. +More specifically, we find that when ChatGPT is prompted to summarize cover +letters of a 100 candidates, it would retain personally identifiable +information (PII) verbatim in 57.4% of cases, and we find this retention to be +non-uniform between different subgroups of people, based on attributes such as +gender identity. We then probe ChatGPT's perception of privacy-related policies +and privatization mechanisms by directly instructing it to provide compliant +outputs and observe a significant omission of PII from output. 
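The input-regurgitation entry above measures how often personally identifiable information supplied in a prompt reappears verbatim in the model's output. A minimal version of that check is sketched below; the PII fields chosen and the retention metric (fraction of supplied values copied verbatim, matched case-insensitively) are assumptions about how such an audit could be scripted, not the paper's exact protocol.

```python
from typing import Dict

def pii_retention_rate(prompt_pii: Dict[str, str], model_output: str) -> float:
    """Fraction of PII values from the input that reappear verbatim in the
    output (case-insensitive substring match)."""
    if not prompt_pii:
        return 0.0
    lowered = model_output.lower()
    leaked = [k for k, v in prompt_pii.items() if v.lower() in lowered]
    return len(leaked) / len(prompt_pii)

if __name__ == "__main__":
    pii = {"name": "Jane Doe", "email": "jane.doe@example.com", "phone": "555-0100"}
    summary = ("The candidate, Jane Doe, has five years of NLP experience and "
               "can be reached at jane.doe@example.com.")
    print(f"retention: {pii_retention_rate(pii, summary):.2f}")   # name + email leak, phone does not
```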
+"
+Fine-Tuning Language Models with Just Forward Passes,Sadhika Malladi,http://arxiv.org/pdf/2305.17333v2.pdf,2023-05-27,"['cs.lg', 'cs.cl']",2305.17333v2.pdf," Fine-tuning language models (LMs) has yielded success on diverse downstream
+tasks, but as LMs grow in size, backpropagation requires a prohibitively large
+amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients
+using only two forward passes but are theorized to be catastrophically slow for
+optimizing large models. In this work, we propose a memory-efficient
+zeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate
+in-place, thereby fine-tuning LMs with the same memory footprint as inference.
+For example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter
+model, whereas fine-tuning with backpropagation can train only a 2.7B LM with
+the same budget. We conduct comprehensive experiments across model types
+(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks
+(classification, multiple-choice, and generation). Our results demonstrate that
+(1) MeZO significantly outperforms in-context learning and linear probing; (2)
+MeZO achieves comparable performance to fine-tuning with backpropagation across
+multiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction
+in our implementation; (3) MeZO is compatible with both full-parameter and
+parameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO
+can effectively optimize non-differentiable objectives (e.g., maximizing
+accuracy or F1). We support our empirical findings with theoretical insights,
+highlighting how adequate pre-training and task prompts enable MeZO to
+fine-tune huge models, despite classical ZO analyses suggesting otherwise.
+"
+Do Large Language Models Know What They Don't Know?,Zhangyue Yin,http://arxiv.org/pdf/2305.18153v2.pdf,2023-05-29,['cs.cl'],2305.18153v2.pdf," Large language models (LLMs) have a wealth of knowledge that allows them to
+excel in various Natural Language Processing (NLP) tasks. Current research
+focuses on enhancing their performance within their existing knowledge. Despite
+their vast knowledge, LLMs are still limited by the amount of information they
+can accommodate and comprehend. Therefore, the ability to understand their own
+limitations on the unknowns, referred to as self-knowledge, is of paramount
+importance. This study aims to evaluate LLMs' self-knowledge by assessing their
+ability to identify unanswerable or unknowable questions. We introduce an
+automated methodology to detect uncertainty in the responses of these models,
+providing a novel measure of their self-knowledge. We further introduce a
+unique dataset, SelfAware, consisting of unanswerable questions from five
+diverse categories and their answerable counterparts. Our extensive analysis,
+involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovers an
+intrinsic capacity for self-knowledge within these models. Moreover, we
+demonstrate that in-context learning and instruction tuning can further enhance
+this self-knowledge. Despite this promising insight, our findings also
+highlight a considerable gap between the capabilities of these models and human
+proficiency in recognizing the limits of their knowledge.
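The MeZO entry above estimates a gradient from just two forward passes and applies the update in place, regenerating the random perturbation from a stored seed instead of keeping a copy of it. The NumPy sketch below walks a toy quadratic objective through that SPSA-style ZO-SGD step; it illustrates the seed trick and the two forward passes but is not the authors' implementation, and the toy objective, step size, and schedule are arbitrary choices.

```python
import numpy as np

def perturb(params, seed, eps, sign):
    """Add sign * eps * z to each parameter block in place, regenerating the
    same z from `seed` every time so it never has to be stored."""
    rng = np.random.default_rng(seed)
    for p in params:
        p += sign * eps * rng.standard_normal(p.shape)

def mezo_step(params, loss_fn, lr=1e-2, eps=1e-3, seed=0):
    """One zeroth-order SGD step: two forward passes, then an in-place update."""
    perturb(params, seed, eps, +1)            # theta + eps * z
    loss_plus = loss_fn(params)
    perturb(params, seed, eps, -2)            # theta - eps * z
    loss_minus = loss_fn(params)
    perturb(params, seed, eps, +1)            # back to theta
    projected_grad = (loss_plus - loss_minus) / (2 * eps)
    rng = np.random.default_rng(seed)         # replay the same z for the update
    for p in params:
        p -= lr * projected_grad * rng.standard_normal(p.shape)
    return 0.5 * (loss_plus + loss_minus)

if __name__ == "__main__":
    # Toy "fine-tuning" problem: pull two parameter blocks toward 1.0.
    params = [np.zeros(4), np.zeros(3)]
    loss = lambda ps: sum(float(np.sum((p - 1.0) ** 2)) for p in ps)
    for step in range(5000):
        mezo_step(params, loss, lr=1e-2, eps=1e-3, seed=step)
    print(loss(params))                        # approaches 0 without backprop
```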
+" +Improving CLIP Training with Language Rewrites,Lijie Fan,http://arxiv.org/pdf/2305.20088v2.pdf,2023-05-31,"['cs.cv', 'cs.cl', 'cs.lg']",2305.20088v2.pdf," Contrastive Language-Image Pre-training (CLIP) stands as one of the most +effective and scalable methods for training transferable vision models using +paired image and text data. CLIP models are trained using contrastive loss, +which typically relies on data augmentations to prevent overfitting and +shortcuts. However, in the CLIP training paradigm, data augmentations are +exclusively applied to image inputs, while language inputs remain unchanged +throughout the entire training process, limiting the exposure of diverse texts +to the same image. In this paper, we introduce Language augmented CLIP +(LaCLIP), a simple yet highly effective approach to enhance CLIP training +through language rewrites. Leveraging the in-context learning capability of +large language models, we rewrite the text descriptions associated with each +image. These rewritten texts exhibit diversity in sentence structure and +vocabulary while preserving the original key concepts and meanings. During +training, LaCLIP randomly selects either the original texts or the rewritten +versions as text augmentations for each image. Extensive experiments on CC3M, +CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training with +language rewrites significantly improves the transfer performance without +computation or memory overhead during training. Specifically for ImageNet +zero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% on +LAION-400M. Code is available at https://github.com/LijieFan/LaCLIP. +" +SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL,Ruoxi Sun,http://arxiv.org/pdf/2306.00739v3.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.db']",2306.00739v3.pdf," One impressive emergent capability of large language models (LLMs) is +generation of code, including Structured Query Language (SQL) for databases. +For the task of converting natural language text to SQL queries, Text-to-SQL, +adaptation of LLMs is of paramount importance, both in in-context learning and +fine-tuning settings, depending on the amount of adaptation data used. In this +paper, we propose an LLM-based Text-to-SQL model SQL-PaLM, leveraging on +PaLM-2, that pushes the state-of-the-art in both settings. Few-shot SQL-PaLM is +based on an execution-based self-consistency prompting approach designed for +Text-to-SQL, and achieves 77.3% in test-suite accuracy on Spider, which to our +best knowledge is the first to outperform previous state-of-the-art with +fine-tuning by a significant margin, 4%. Furthermore, we demonstrate that the +fine-tuned SQL-PALM outperforms it further by another 1%. Towards applying +SQL-PaLM to real-world scenarios we further evaluate its robustness on other +challenging variants of Spider and demonstrate the superior generalization +capability of SQL-PaLM. In addition, via extensive case studies, we demonstrate +the impressive intelligent capabilities and various success enablers of +LLM-based Text-to-SQL. +" +Zero-Shot 3D Shape Correspondence,Ahmed Abdelreheem,http://arxiv.org/pdf/2306.03253v2.pdf,2023-06-05,['cs.cv'],2306.03253v2.pdf," We propose a novel zero-shot approach to computing correspondences between 3D +shapes. Existing approaches mainly focus on isometric and near-isometric shape +pairs (e.g., human vs. human), but less attention has been given to strongly +non-isometric and inter-class shape matching (e.g., human vs. cow). 
To this +end, we introduce a fully automatic method that exploits the exceptional +reasoning capabilities of recent foundation models in language and vision to +tackle difficult shape correspondence problems. Our approach comprises multiple +stages. First, we classify the 3D shapes in a zero-shot manner by feeding +rendered shape views to a language-vision model (e.g., BLIP2) to generate a +list of class proposals per shape. These proposals are unified into a single +class per shape by employing the reasoning capabilities of ChatGPT. Second, we +attempt to segment the two shapes in a zero-shot manner, but in contrast to the +co-segmentation problem, we do not require a mutual set of semantic regions. +Instead, we propose to exploit the in-context learning capabilities of ChatGPT +to generate two different sets of semantic regions for each shape and a +semantic mapping between them. This enables our approach to match strongly +non-isometric shapes with significant differences in geometric structure. +Finally, we employ the generated semantic mapping to produce coarse +correspondences that can further be refined by the functional maps framework to +produce dense point-to-point maps. Our approach, despite its simplicity, +produces highly plausible results in a zero-shot manner, especially between +strongly non-isometric shapes. Project webpage: +https://samir55.github.io/3dshapematch/. +" +MIMIC-IT: Multi-Modal In-Context Instruction Tuning,Bo Li,http://arxiv.org/pdf/2306.05425v1.pdf,2023-06-08,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.hc']",2306.05425v1.pdf," High-quality instructions and responses are essential for the zero-shot +performance of large language models on interactive natural language tasks. For +interactive vision-language tasks involving intricate visual scenes, a large +quantity of diverse and creative instruction-response pairs should be +imperative to tune vision-language models (VLMs). Nevertheless, the current +availability of vision-language instruction-response pairs in terms of +quantity, diversity, and creativity remains limited, posing challenges to the +generalization of interactive VLMs. Here we present MultI-Modal In-Context +Instruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodal +instruction-response pairs, with 2.2 million unique instructions derived from +images and videos. Each pair is accompanied by multi-modal in-context +information, forming conversational contexts aimed at empowering VLMs in +perception, reasoning, and planning. The instruction-response collection +process, dubbed as Syphus, is scaled using an automatic annotation pipeline +that combines human expertise with GPT's capabilities. Using the MIMIC-IT +dataset, we train a large VLM named Otter. Based on extensive evaluations +conducted on vision-language benchmarks, it has been observed that Otter +demonstrates remarkable proficiency in multi-modal perception, reasoning, and +in-context learning. Human evaluation reveals it effectively aligns with the +user's intentions. We release the MIMIC-IT dataset, instruction-response +collection pipeline, benchmarks, and the Otter model. +" +MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification,Dequan Wang,http://arxiv.org/pdf/2306.09579v1.pdf,2023-06-16,['cs.cv'],2306.09579v1.pdf," Foundation models, often pre-trained with large-scale data, have achieved +paramount success in jump-starting various vision and language applications. 
+Recent advances further enable adapting foundation models in downstream tasks +efficiently using only a few training samples, e.g., in-context learning. Yet, +the application of such learning paradigms in medical image analysis remains +scarce due to the shortage of publicly accessible data and benchmarks. In this +paper, we aim at approaches adapting the foundation models for medical image +classification and present a novel dataset and benchmark for the evaluation, +i.e., examining the overall performance of accommodating the large-scale +foundation models downstream on a set of diverse real-world clinical tasks. We +collect five sets of medical imaging data from multiple institutes targeting a +variety of real-world clinical tasks (22,349 images in total), i.e., thoracic +diseases screening in X-rays, pathological lesion tissue screening, lesion +detection in endoscopy images, neonatal jaundice evaluation, and diabetic +retinopathy grading. Results of multiple baseline methods are demonstrated +using the proposed dataset from both accuracy and cost-effective perspectives. +" +JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving,Wayne Xin Zhao,http://arxiv.org/pdf/2306.11027v1.pdf,2023-06-19,"['cs.cl', 'cs.ai']",2306.11027v1.pdf," Although pre-trained language models~(PLMs) have recently advanced the +research progress in mathematical reasoning, they are not specially designed as +a capable multi-task solver, suffering from high cost for multi-task deployment +(\eg a model copy for a task) and inferior performance on complex mathematical +problems in practical applications. To address these issues, in this paper, we +propose \textbf{JiuZhang~2.0}, a unified Chinese PLM specially for multi-task +mathematical problem solving. Our idea is to maintain a moderate-sized model +and employ the \emph{cross-task knowledge sharing} to improve the model +capacity in a multi-task setting. Specially, we construct a +Mixture-of-Experts~(MoE) architecture for modeling mathematical text, so as to +capture the common mathematical knowledge across tasks. For optimizing the MoE +architecture, we design \emph{multi-task continual pre-training} and +\emph{multi-task fine-tuning} strategies for multi-task adaptation. These +training strategies can effectively decompose the knowledge from the task data +and establish the cross-task sharing via expert networks. In order to further +improve the general capacity of solving different complex tasks, we leverage +large language models~(LLMs) as complementary models to iteratively refine the +generated solution by our PLM, via in-context learning. Extensive experiments +have demonstrated the effectiveness of our model. +" +A Chain of AI-based Solutions for Resolving FQNs and Fixing Syntax Errors in Partial Code,Qing Huang,http://arxiv.org/pdf/2306.11981v1.pdf,2023-06-21,['cs.se'],2306.11981v1.pdf," API documentation, technical blogs and programming Q&A sites contain numerous +partial code that can be reused in programming tasks, but often these code are +uncompilable due to unresolved names and syntax errors. To facilitate partial +code reuse, we propose the Partial Code Reuse Chain (PCR-Chain) for resolving +fully-qualified names (FQNs) and fixing last-mile syntax errors in partial code +based on a giant large language model (LLM) like ChatGPT. 
Methodologically, +PCR-Chain is backed up by the underlying global-level prompt architecture +(which combines three design ideas: hierarchical task breakdown, prompt +composition, and a mix of prompt-based AI and non-AI units) and the local-level +prompt design. Technically, we propose PCR-Chain, which employs in-context +learning rather than symbolic, costly training methods. Experimental results +demonstrate that in dynamically-typed languages (Python), PCR-Chain outperforms +the current state-of-the-art (SOTA), RING, by 5% accuracy. For statically-typed +languages (Java), our approach achieves high accuracy of 80.5% in resolving +both non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) that +can only address last-mile syntax errors. The correct execution of the unit, +module, and PCR-Chain demonstrates the effectiveness of the prompt design, +composition, and architecture and opens up possibilities for building software +engineering tools based on LLMs, replacing traditional program analysis +methods. +" +Generative Multimodal Entity Linking,Senbao Shi,http://arxiv.org/pdf/2306.12725v2.pdf,2023-06-22,['cs.cl'],2306.12725v2.pdf," Multimodal Entity Linking (MEL) is the task of mapping mentions with +multimodal contexts to the referent entities from a knowledge base (e.g. +Wikipedia). Existing MEL methods mainly focus on designing complex multimodal +interaction mechanisms and require fine-tuning all model parameters, which can +be prohibitively costly and difficult to scale in the era of Large Language +Models (LLMs). In this work, we propose GEMEL, a simple yet effective +Generative Multimodal Entity Linking framework based on LLMs, which directly +generates target entity names. We keep the vision and language model frozen and +only train a feature mapper to enable cross-modality interactions. To adapt +LLMs to the MEL task, we take advantage of the emergent in-context learning +capability of LLMs by retrieving multimodal instances as demonstrations. +Extensive experiments show that, with only ~0.3% of the model parameters +fine-tuned, GEMEL achieves state-of-the-art results on two well-established MEL +datasets (7.7% accuracy gains on WikiDiverse and 8.8% accuracy gains on +WikiMEL). The performance gain stems from mitigating the popularity bias of LLM +predictions and disambiguating less common entities effectively. Further +analysis verifies the generality and scalability of GEMEL. Our approach is +compatible with any off-the-shelf language model, paving the way towards an +efficient and general solution for utilizing LLMs in the MEL task. +" +Kosmos-2: Grounding Multimodal Large Language Models to the World,Zhiliang Peng,http://arxiv.org/pdf/2306.14824v3.pdf,2023-06-26,"['cs.cl', 'cs.cv']",2306.14824v3.pdf," We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new +capabilities of perceiving object descriptions (e.g., bounding boxes) and +grounding text to the visual world. Specifically, we represent refer +expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where +object descriptions are sequences of location tokens. Together with multimodal +corpora, we construct large-scale data of grounded image-text pairs (called +GrIT) to train the model. In addition to the existing capabilities of MLLMs +(e.g., perceiving general modalities, following instructions, and performing +in-context learning), Kosmos-2 integrates the grounding capability into +downstream applications. 
We evaluate Kosmos-2 on a wide range of tasks, +including (i) multimodal grounding, such as referring expression comprehension, +and phrase grounding, (ii) multimodal referring, such as referring expression +generation, (iii) perception-language tasks, and (iv) language understanding +and generation. This work lays out the foundation for the development of +Embodiment AI and sheds light on the big convergence of language, multimodal +perception, action, and world modeling, which is a key step toward artificial +general intelligence. Code and pretrained models are available at +https://aka.ms/kosmos-2. +" +Supervised Pretraining Can Learn In-Context Reinforcement Learning,Jonathan N. Lee,http://arxiv.org/pdf/2306.14892v1.pdf,2023-06-26,"['cs.lg', 'cs.ai']",2306.14892v1.pdf," Large transformer models trained on diverse datasets have shown a remarkable +ability to learn in-context, achieving high few-shot performance on tasks they +were not explicitly trained to solve. In this paper, we study the in-context +learning capabilities of transformers in decision-making problems, i.e., +reinforcement learning (RL) for bandits and Markov decision processes. To do +so, we introduce and study Decision-Pretrained Transformer (DPT), a supervised +pretraining method where the transformer predicts an optimal action given a +query state and an in-context dataset of interactions, across a diverse set of +tasks. This procedure, while simple, produces a model with several surprising +capabilities. We find that the pretrained transformer can be used to solve a +range of RL problems in-context, exhibiting both exploration online and +conservatism offline, despite not being explicitly trained to do so. The model +also generalizes beyond the pretraining distribution to new tasks and +automatically adapts its decision-making strategies to unknown structure. +Theoretically, we show DPT can be viewed as an efficient implementation of +Bayesian posterior sampling, a provably sample-efficient RL algorithm. We +further leverage this connection to provide guarantees on the regret of the +in-context algorithm yielded by DPT, and prove that it can learn faster than +algorithms used to generate the pretraining data. These results suggest a +promising yet simple path towards instilling strong in-context decision-making +abilities in transformers. +" +A GPT-4 Reticular Chemist for Guiding MOF Discovery,Zhiling Zheng,http://arxiv.org/pdf/2306.14915v2.pdf,2023-06-20,"['cs.ai', 'cond-mat.mtrl-sci', 'physics.chem-ph']",2306.14915v2.pdf," We present a new framework integrating the AI model GPT-4 into the iterative +process of reticular chemistry experimentation, leveraging a cooperative +workflow of interaction between AI and a human researcher. This GPT-4 Reticular +Chemist is an integrated system composed of three phases. Each of these +utilizes GPT-4 in various capacities, wherein GPT-4 provides detailed +instructions for chemical experimentation and the human provides feedback on +the experimental outcomes, including both success and failures, for the +in-context learning of AI in the next iteration. This iterative human-AI +interaction enabled GPT-4 to learn from the outcomes, much like an experienced +chemist, by a prompt-learning strategy. Importantly, the system is based on +natural language for both development and operation, eliminating the need for +coding skills, and thus, make it accessible to all chemists. 
Our collaboration +with GPT-4 Reticular Chemist guided the discovery of an isoreticular series of +MOFs, with each synthesis fine-tuned through iterative feedback and expert +suggestions. This workflow presents a potential for broader applications in +scientific research by harnessing the capability of large language models like +GPT-4 to enhance the feasibility and efficiency of research activities. +" +Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale,Matthew Le,http://arxiv.org/pdf/2306.15687v2.pdf,2023-06-23,"['eess.as', 'cs.cl', 'cs.lg', 'cs.sd']",2306.15687v2.pdf," Large-scale generative models such as GPT and DALL-E have revolutionized the +research community. These models not only generate high fidelity outputs, but +are also generalists which can solve tasks not explicitly taught. In contrast, +speech generative models are still primitive in terms of scale and task +generalization. In this paper, we present Voicebox, the most versatile +text-guided generative model for speech at scale. Voicebox is a +non-autoregressive flow-matching model trained to infill speech, given audio +context and text, trained on over 50K hours of speech that are not filtered or +enhanced. Similar to GPT, Voicebox can perform many different tasks through +in-context learning, but is more flexible as it can also condition on future +context. Voicebox can be used for mono or cross-lingual zero-shot +text-to-speech synthesis, noise removal, content editing, style conversion, and +diverse sample generation. In particular, Voicebox outperforms the +state-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9% vs +1.9% word error rates) and audio similarity (0.580 vs 0.681) while being up to +20 times faster. Audio samples can be found in +\url{https://voicebox.metademolab.com}. +" +SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs,Lijun Yu,http://arxiv.org/pdf/2306.17842v3.pdf,2023-06-30,"['cs.cv', 'cs.cl', 'cs.mm']",2306.17842v3.pdf," In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling +frozen LLMs to perform both understanding and generation tasks involving +non-linguistic modalities such as images or videos. SPAE converts between raw +pixels and interpretable lexical tokens (or words) extracted from the LLM's +vocabulary. The resulting tokens capture both the semantic meaning and the +fine-grained details needed for visual reconstruction, effectively translating +the visual content into a language comprehensible to the LLM, and empowering it +to perform a wide array of multimodal tasks. Our approach is validated through +in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set +of image understanding and generation tasks. Our method marks the first +successful attempt to enable a frozen LLM to generate image content while +surpassing state-of-the-art performance in image understanding tasks, under the +same setting, by over 25%. +" +RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models,Brandon Kynoch,http://arxiv.org/pdf/2307.02738v3.pdf,2023-07-06,"['cs.ai', 'cs.cl', 'cs.sc']",2307.02738v3.pdf," Large Language Models (LLMs) have made extraordinary progress in the field of +Artificial Intelligence and have demonstrated remarkable capabilities across a +large variety of tasks and domains. 
However, as we venture closer to creating +Artificial General Intelligence (AGI) systems, we recognize the need to +supplement LLMs with long-term memory to overcome the context window limitation +and more importantly, to create a foundation for sustained reasoning, +cumulative learning and long-term user interaction. In this paper we propose +RecallM, a novel architecture for providing LLMs with an adaptable and +updatable long-term memory mechanism. Unlike previous methods, the RecallM +architecture is particularly effective at belief updating and maintaining a +temporal understanding of the knowledge provided to it. We demonstrate through +various experiments the effectiveness of this architecture. Furthermore, +through our own temporal understanding and belief updating experiments, we show +that RecallM is four times more effective than using a vector database for +updating knowledge previously stored in long-term memory. We also demonstrate +that RecallM shows competitive performance on general question-answering and +in-context learning tasks. +" +One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention,Arvind Mahankali,http://arxiv.org/pdf/2307.03576v1.pdf,2023-07-07,['cs.lg'],2307.03576v1.pdf," Recent works have empirically analyzed in-context learning and shown that +transformers trained on synthetic linear regression tasks can learn to +implement ridge regression, which is the Bayes-optimal predictor, given +sufficient capacity [Aky\""urek et al., 2023], while one-layer transformers with +linear self-attention and no MLP layer will learn to implement one step of +gradient descent (GD) on a least-squares linear regression objective [von +Oswald et al., 2022]. However, the theory behind these observations remains +poorly understood. We theoretically study transformers with a single layer of +linear self-attention, trained on synthetic noisy linear regression data. +First, we mathematically show that when the covariates are drawn from a +standard Gaussian distribution, the one-layer transformer which minimizes the +pre-training loss will implement a single step of GD on the least-squares +linear regression objective. Then, we find that changing the distribution of +the covariates and weight vector to a non-isotropic Gaussian distribution has a +strong impact on the learned algorithm: the global minimizer of the +pre-training loss now implements a single step of $\textit{pre-conditioned}$ +GD. However, if only the distribution of the responses is changed, then this +does not have a large effect on the learned algorithm: even when the response +comes from a more general family of $\textit{nonlinear}$ functions, the global +minimizer of the pre-training loss still implements a single step of GD on a +least-squares linear regression objective. +" +Large Language Models as General Pattern Machines,Suvir Mirchandani,http://arxiv.org/pdf/2307.04721v2.pdf,2023-07-10,"['cs.ai', 'cs.cl', 'cs.ro']",2307.04721v2.pdf," We observe that pre-trained large language models (LLMs) are capable of +autoregressively completing complex token sequences -- from arbitrary ones +procedurally generated by probabilistic context-free grammars (PCFG), to more +rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a +general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern +completion proficiency can be partially retained even when the sequences are +expressed using tokens randomly sampled from the vocabulary. 
These results +suggest that without any additional training, LLMs can serve as general +sequence modelers, driven by in-context learning. In this work, we investigate +how these zero-shot capabilities may be applied to problems in robotics -- from +extrapolating sequences of numbers that represent states over time to complete +simple motions, to least-to-most prompting of reward-conditioned trajectories +that can discover and represent closed-loop policies (e.g., a stabilizing +controller for CartPole). While difficult to deploy today for real systems due +to latency, context size limitations, and compute costs, the approach of using +LLMs to drive low-level control may provide an exciting glimpse into how the +patterns among words could be transferred to actions. +" +Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts,Ziyue Jiang,http://arxiv.org/pdf/2307.07218v2.pdf,2023-07-14,"['eess.as', 'cs.sd']",2307.07218v2.pdf," Zero-shot text-to-speech aims at synthesizing voices with unseen speech +prompts. Previous large-scale multispeaker TTS models have successfully +achieved this goal with an enrolled recording within 10 seconds. However, most +of them are designed to utilize only short speech prompts. The limited +information in short speech prompts significantly hinders the performance of +fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a +generic zero-shot multispeaker TTS model that is capable of synthesizing speech +for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a +multi-reference timbre encoder to extract timbre information from multiple +reference speeches; 2) and train a prosody language model with arbitrary-length +speech prompts; With these designs, our model is suitable for prompts of +different lengths, which extends the upper bound of speech quality for +zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce +arbitrary-source prompts, which leverages the probabilities derived from +multiple P-LLM outputs to produce expressive and controlled prosody. +Furthermore, we propose a phoneme-level auto-regressive duration model to +introduce in-context learning capabilities to duration modeling. Experiments +demonstrate that our method could not only synthesize identity-preserving +speech with a short prompt of an unseen speaker but also achieve improved +performance with longer speech prompts. Audio samples can be found in +https://mega-tts.github.io/mega2_demo/. +" +Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study,Peiyu Liu,http://arxiv.org/pdf/2307.08072v2.pdf,2023-07-16,"['cs.cl', 'cs.ai']",2307.08072v2.pdf," Despite the superior performance, Large Language Models~(LLMs) require +significant computational resources for deployment and use. To overcome this +issue, quantization methods have been widely applied to reduce the memory +footprint of LLMs as well as increasing the inference rate. However, a major +challenge is that low-bit quantization methods often lead to performance +degradation. It is important to understand how quantization impacts the +capacity of LLMs. Different from previous studies focused on overall +performance, this work aims to investigate the impact of quantization on +\emph{emergent abilities}, which are important characteristics that distinguish +LLMs from small language models. Specially, we examine the abilities of +in-context learning, chain-of-thought reasoning, and instruction-following in +quantized LLMs. 
Our empirical experiments show that these emergent abilities +still exist in 4-bit quantization models, while 2-bit models encounter severe +performance degradation on the test of these abilities. To improve the +performance of low-bit models, we conduct two special experiments: (1) +fine-grained impact analysis that studies which components (or substructures) +are more sensitive to quantization, and (2) performance compensation through +model fine-tuning. Our work derives a series of important findings to +understand the impact of quantization on emergent abilities, and sheds light +on the possibilities of extremely low-bit quantization for LLMs. +" +Generating Mathematical Derivations with Large Language Models,Jordan Meadows,http://arxiv.org/pdf/2307.09998v3.pdf,2023-07-19,"['cs.cl', 'math.ho']",2307.09998v3.pdf," The derivation of mathematical results in specialised fields, using Large +Language Models (LLMs), is an emerging research direction that can help +identify models' limitations, and potentially support mathematical discovery. +In this paper, we leverage a symbolic engine to generate derivations of +equations at scale, and investigate the capabilities of LLMs when deriving goal +equations from premises. Specifically, we employ in-context learning for GPT +and fine-tune a range of T5 models to compare the robustness and generalisation +of pre-training strategies to specialised models. Empirical results show that +fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and +out-of-distribution test sets in conventional scores. However, an in-depth +analysis reveals that the fine-tuned models are more sensitive to perturbations +involving unseen symbols and (to a lesser extent) changes to equation +structure. In addition, we analyse 1.7K equations, and over 200 derivations, to +highlight common reasoning errors such as the inclusion of incorrect, +irrelevant, and redundant equations. Finally, we explore the suitability of +existing metrics for evaluating mathematical derivations and find evidence +that, while they can capture general properties such as sensitivity to +perturbations, they fail to highlight fine-grained reasoning errors and +essential differences between models. Overall, this work demonstrates that +training models on synthetic data may improve their math capabilities beyond +much larger LLMs, but current metrics are not appropriately assessing the +quality of generated mathematical text. +" +LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition,Chengsong Huang,http://arxiv.org/pdf/2307.13269v1.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13269v1.pdf," Low-rank adaptations (LoRA) are often employed to fine-tune large language +models (LLMs) for new tasks. This paper investigates LoRA composability for +cross-task generalization and introduces LoraHub, a strategic framework devised +for the purposive assembly of LoRA modules trained on diverse given tasks, with +the objective of achieving adaptable performance on unseen tasks. With just a +few examples from a novel task, LoraHub enables the fluid combination of +multiple LoRA modules, eradicating the need for human expertise. Notably, the +composition requires neither additional model parameters nor gradients. Our +empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest +that LoraHub can effectively mimic the performance of in-context learning in +few-shot scenarios, excluding the necessity of in-context examples alongside +each inference input. 
A significant contribution of our research is the +fostering of a community for LoRA, where users can share their trained LoRA +modules, thereby facilitating their application to new tasks. We anticipate +this resource will widen access to and spur advancements in general +intelligence as well as LLMs in production. Code will be available at +https://github.com/sail-sg/lorahub. +" +LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation,Leigang Qu,http://arxiv.org/pdf/2308.05095v2.pdf,2023-08-09,"['cs.cv', 'cs.ai']",2308.05095v2.pdf," In the text-to-image generation field, recent remarkable progress in Stable +Diffusion makes it possible to generate rich kinds of novel photorealistic +images. However, current models still face misalignment issues (e.g., +problematic spatial relation understanding and numeration failure) in complex +natural scenes, which impedes the high-faithfulness text-to-image generation. +Although recent efforts have been made to improve controllability by giving +fine-grained guidance (e.g., sketch and scribbles), this issue has not been +fundamentally tackled since users have to provide such guidance information +manually. In this work, we strive to synthesize high-fidelity images that are +semantically aligned with a given textual prompt without any guidance. Toward +this end, we propose a coarse-to-fine paradigm to achieve layout planning and +image generation. Concretely, we first generate the coarse-grained layout +conditioned on a given textual prompt via in-context learning based on Large +Language Models. Afterward, we propose a fine-grained object-interaction +diffusion method to synthesize high-faithfulness images conditioned on the +prompt and the automatically generated layout. Extensive experiments +demonstrate that our proposed method outperforms the state-of-the-art models in +terms of layout and image generation. Our code and settings are available at +https://layoutllm-t2i.github.io. +" +AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining,Haohe Liu,http://arxiv.org/pdf/2308.05734v2.pdf,2023-08-10,"['cs.sd', 'cs.ai', 'cs.mm', 'eess.as', 'eess.sp']",2308.05734v2.pdf," Although audio generation shares commonalities across different types of +audio, such as speech, music, and sound effects, designing models for each type +requires careful consideration of specific objectives and biases that can +significantly differ from those of other types. To bring us closer to a unified +perspective of audio generation, this paper proposes a framework that utilizes +the same learning method for speech, music, and sound effect generation. Our +framework introduces a general representation of audio, called ""language of +audio"" (LOA). Any audio can be translated into LOA based on AudioMAE, a +self-supervised pre-trained representation learning model. In the generation +process, we translate any modalities into LOA by using a GPT-2 model, and we +perform self-supervised audio generation learning with a latent diffusion model +conditioned on LOA. The proposed framework naturally brings advantages such as +in-context learning abilities and reusable self-supervised pretrained AudioMAE +and latent diffusion models. Experiments on the major benchmarks of +text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art +or competitive performance against previous approaches. Our code, pretrained +model, and demo are available at https://audioldm.github.io/audioldm2. 
+" +Time Travel in LLMs: Tracing Data Contamination in Large Language Models,Shahriar Golchin,http://arxiv.org/pdf/2308.08493v2.pdf,2023-08-16,"['cs.cl', 'cs.cr', 'cs.lg']",2308.08493v2.pdf," Data contamination, i.e., the presence of test data from downstream tasks in +the training data of large language models (LLMs), is a potential major issue +in measuring LLMs' real effectiveness on other tasks. We propose a +straightforward yet effective method for identifying data contamination within +LLMs. At its core, our approach starts by identifying potential contamination +at the instance level; using this information, our approach then assesses wider +contamination at the partition level. To estimate contamination of individual +instances, we employ ""guided instruction:"" a prompt consisting of the dataset +name, partition type, and the random-length initial segment of a reference +instance, asking the LLM to complete it. An instance is flagged as contaminated +if the LLM's output either exactly or nearly matches the latter segment of the +reference. To understand if an entire partition is contaminated, we propose two +ideas. The first idea marks a dataset partition as contaminated if the average +overlap score with the reference instances (as measured by ROUGE-L or BLEURT) +is statistically significantly better with the completions from guided +instruction compared to a ""general instruction"" that does not include the +dataset and partition name. The second idea marks a dataset partition as +contaminated if a classifier based on GPT-4 with few-shot in-context learning +prompt marks multiple generated completions as exact/near-exact matches of the +corresponding reference instances. Our best method achieves an accuracy between +92% and 100% in detecting if an LLM is contaminated with seven datasets, +containing train and test/validation partitions, when contrasted with manual +evaluation by human experts. Further, our findings indicate that GPT-4 is +contaminated with AG News, WNLI, and XSum datasets. +" +Inductive-bias Learning: Generating Code Models with Large Language Model,Toma Tanaka,http://arxiv.org/pdf/2308.09890v1.pdf,2023-08-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",2308.09890v1.pdf," Large Language Models(LLMs) have been attracting attention due to a ability +called in-context learning(ICL). ICL, without updating the parameters of a LLM, +it is possible to achieve highly accurate inference based on rules ``in the +context'' by merely inputting a training data into the prompt. Although ICL is +a developing field with many unanswered questions, LLMs themselves serves as a +inference model, seemingly realizing inference without explicitly indicate +``inductive bias''. On the other hand, a code generation is also a highlighted +application of LLMs. The accuracy of code generation has dramatically improved, +enabling even non-engineers to generate code to perform the desired tasks by +crafting appropriate prompts. In this paper, we propose a novel ``learning'' +method called an ``Inductive-Bias Learning (IBL)'', which combines the +techniques of ICL and code generation. An idea of IBL is straightforward. Like +ICL, IBL inputs a training data into the prompt and outputs a code with a +necessary structure for inference (we referred to as ``Code Model'') from a +``contextual understanding''. 
Despite being a seemingly simple approach, IBL +encompasses both a ``property of inference without explicit inductive bias'' +inherent in ICL and a ``readability and explainability'' of the code +generation. Surprisingly, generated Code Models have been found to achieve +predictive accuracy comparable to, and in some cases surpassing, ICL and +representative machine learning models. Our IBL code is open source: +https://github.com/fuyu-quant/IBLM +" +Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models,Martin Weyssow,http://arxiv.org/pdf/2308.10462v1.pdf,2023-08-21,"['cs.se', 'cs.cl', 'cs.lg']",2308.10462v1.pdf," Large Language Models (LLMs) possess impressive capabilities to generate +meaningful code snippets given natural language intents in zero-shot, i.e., +without the need for specific fine-tuning. In the perspective of unleashing +their full potential, prior work has demonstrated the benefits of fine-tuning +the models to task-specific data. However, fine-tuning process demands heavy +computational costs and is intractable when resources are scarce, especially +for models with billions of parameters. In light of these challenges, previous +studies explored In-Context Learning (ICL) as an effective strategy to generate +contextually appropriate code without fine-tuning. However, it operates at +inference time and does not involve learning task-specific parameters, +potentially limiting the model's performance on downstream tasks. In this +context, we foresee that Parameter-Efficient Fine-Tuning (PEFT) techniques +carry a high potential for efficiently specializing LLMs to task-specific data. +In this paper, we deliver a comprehensive study of LLMs with the impact of PEFT +techniques under the automated code generation scenario. Our experimental +results reveal the superiority and potential of such techniques over ICL on a +wide range of LLMs in reducing the computational burden and improving +performance. Therefore, the study opens opportunities for broader applications +of PEFT in software engineering scenarios. +" +Analyzing Transformer Dynamics as Movement through Embedding Space,Sumeet S. Singh,http://arxiv.org/pdf/2308.10874v1.pdf,2023-08-21,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.ne']",2308.10874v1.pdf," Transformer language models exhibit intelligent behaviors such as +understanding natural language, recognizing patterns, acquiring knowledge, +reasoning, planning, reflecting and using tools. This paper explores how their +underlying mechanics give rise to intelligent behaviors. We adopt a systems +approach to analyze Transformers in detail and develop a mathematical framework +that frames their dynamics as movement through embedding space. This novel +perspective provides a principled way of thinking about the problem and reveals +important insights related to the emergence of intelligence: + 1. At its core the Transformer is a Embedding Space walker, mapping +intelligent behavior to trajectories in this vector space. + 2. At each step of the walk, it composes context into a single composite +vector whose location in Embedding Space defines the next step. + 3. No learning actually occurs during decoding; in-context learning and +generalization are simply the result of different contexts composing into +different vectors. + 4. Ultimately the knowledge, intelligence and skills exhibited by the model +are embodied in the organization of vectors in Embedding Space rather than in +specific neurons or layers. 
These abilities are properties of this +organization. + 5. Attention's contribution boils down to the association-bias it lends to +vector composition and which influences the aforementioned organization. +However, more investigation is needed to ascertain its significance. + 6. The entire model is composed from two principal operations: data +independent filtering and data dependent aggregation. This generalization +unifies Transformers with other sequence models and across modalities. + Building upon this foundation we formalize and test a semantic space theory +which posits that embedding vectors represent semantic concepts and find some +evidence of its validity. +" +Causal Intersectionality and Dual Form of Gradient Descent for Multimodal Analysis: a Case Study on Hateful Memes,Yosuke Miyanishi,http://arxiv.org/pdf/2308.11585v1.pdf,2023-08-19,"['cs.ai', 'cs.cl']",2308.11585v1.pdf," In the wake of the explosive growth of machine learning (ML) usage, +particularly within the context of emerging Large Language Models (LLMs), +comprehending the semantic significance rooted in their internal workings is +crucial. While causal analyses focus on defining semantics and its +quantification, the gradient-based approach is central to explainable AI (XAI), +tackling the interpretation of the black box. By synergizing these approaches, +the exploration of how a model's internal mechanisms illuminate its causal +effect has become integral for evidence-based decision-making. A parallel line +of research has revealed that intersectionality - the combinatory impact of +multiple demographics of an individual - can be structured in the form of an +Averaged Treatment Effect (ATE). Initially, this study illustrates that the +hateful memes detection problem can be formulated as an ATE, assisted by the +principles of intersectionality, and that a modality-wise summarization of +gradient-based attention attribution scores can delineate the distinct +behaviors of three Transformerbased models concerning ATE. Subsequently, we +show that the latest LLM LLaMA2 has the ability to disentangle the +intersectional nature of memes detection in an in-context learning setting, +with their mechanistic properties elucidated via meta-gradient, a secondary +form of gradient. In conclusion, this research contributes to the ongoing +dialogue surrounding XAI and the multifaceted nature of ML models. +" +Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering,Keheng Wang,http://arxiv.org/pdf/2308.13259v2.pdf,2023-08-25,"['cs.cl', 'cs.ai']",2308.13259v2.pdf," Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have shown +impressive reasoning ability in various downstream tasks. Even so, suffering +from hallucinations and the inability to access external knowledge, LLMs often +come with incorrect or unfaithful intermediate reasoning steps, especially in +the context of answering knowledge-intensive tasks such as KBQA. To alleviate +this issue, we propose a framework called Knowledge-Driven Chain-of-Thought +(KD-CoT) to verify and modify reasoning traces in CoT via interaction with +external knowledge, and thus overcome the hallucinations and error propagation. +Concretely, we formulate the CoT rationale process of LLMs into a structured +multi-round QA format. In each round, LLMs interact with a QA system that +retrieves external knowledge and produce faithful reasoning traces based on +retrieved precise answers. 
The structured CoT reasoning of LLMs is facilitated +by our developed KBQA CoT collection, which serves as in-context learning +demonstrations and can also be utilized as feedback augmentation to train a +robust retriever. Extensive experiments on WebQSP and ComplexWebQuestion +datasets demonstrate the effectiveness of proposed KD-CoT in task-solving +reasoning generation, which outperforms the vanilla CoT ICL with an absolute +success rate of 8.0% and 5.1%. Furthermore, our proposed feedback-augmented +retriever outperforms the state-of-the-art baselines for retrieving knowledge, +achieving significant improvement in Hit and recall performance. Our code and +data are released on https://github.com/AdelWang/KD-CoT/tree/main. +" +Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models,Hao Fei,http://arxiv.org/pdf/2308.13812v1.pdf,2023-08-26,"['cs.ai', 'cs.cv']",2308.13812v1.pdf," Text-to-video (T2V) synthesis has gained increasing attention in the +community, in which the recently emerged diffusion models (DMs) have +promisingly shown stronger performance than the past approaches. While existing +state-of-the-art DMs are competent to achieve high-resolution video generation, +they may largely suffer from key limitations (e.g., action occurrence +disorders, crude video motions) with respect to the intricate temporal dynamics +modeling, one of the crux of video synthesis. In this work, we investigate +strengthening the awareness of video dynamics for DMs, for high-quality T2V +generation. Inspired by human intuition, we design an innovative dynamic scene +manager (dubbed as Dysen) module, which includes (step-1) extracting from input +text the key actions with proper time-order arrangement, (step-2) transforming +the action schedules into the dynamic scene graph (DSG) representations, and +(step-3) enriching the scenes in the DSG with sufficient and reasonable +details. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) via +in-context learning, Dysen realizes (nearly) human-level temporal dynamics +understanding. Finally, the resulting video DSG with rich action scene details +is encoded as fine-grained spatio-temporal features, integrated into the +backbone T2V DM for video generating. Experiments on popular T2V datasets +suggest that our framework consistently outperforms prior arts with significant +margins, especially in the scenario with complex actions. Project page at +https://haofei.vip/Dysen-VDM +" +Identifying and Mitigating the Security Risks of Generative AI,Clark Barrett,http://arxiv.org/pdf/2308.14840v3.pdf,2023-08-28,['cs.ai'],2308.14840v3.pdf," Every major technical invention resurfaces the dual-use dilemma -- the new +technology has the potential to be used for good as well as for harm. +Generative AI (GenAI) techniques, such as large language models (LLMs) and +diffusion models, have shown remarkable capabilities (e.g., in-context +learning, code-completion, and text-to-image generation and editing). However, +GenAI can be used just as well by attackers to generate new attacks and +increase the velocity and efficacy of existing attacks. + This paper reports the findings of a workshop held at Google (co-organized by +Stanford University and the University of Wisconsin-Madison) on the dual-use +dilemma posed by GenAI. This paper is not meant to be comprehensive, but is +rather an attempt to synthesize some of the interesting findings from the +workshop. We discuss short-term and long-term goals for the community on this +topic. 
We hope this paper provides both a launching point for a discussion on +this important topic as well as interesting problems that the research +community can work to address. +" +AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models,Zhaopeng Gu,http://arxiv.org/pdf/2308.15366v3.pdf,2023-08-29,['cs.cv'],2308.15366v3.pdf," Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have +demonstrated the capability of understanding images and achieved remarkable +performance in various visual tasks. Despite their strong abilities in +recognizing common objects due to extensive training datasets, they lack +specific domain knowledge and have a weaker understanding of localized details +within objects, which hinders their effectiveness in the Industrial Anomaly +Detection (IAD) task. On the other hand, most existing IAD methods only provide +anomaly scores and necessitate the manual setting of thresholds to distinguish +between normal and abnormal samples, which restricts their practical +implementation. In this paper, we explore the utilization of LVLM to address +the IAD problem and propose AnomalyGPT, a novel IAD approach based on LVLM. We +generate training data by simulating anomalous images and producing +corresponding textual descriptions for each image. We also employ an image +decoder to provide fine-grained semantic and design a prompt learner to +fine-tune the LVLM using prompt embeddings. Our AnomalyGPT eliminates the need +for manual threshold adjustments, thus directly assesses the presence and +locations of anomalies. Additionally, AnomalyGPT supports multi-turn dialogues +and exhibits impressive few-shot in-context learning capabilities. With only +one normal shot, AnomalyGPT achieves the state-of-the-art performance with an +accuracy of 86.1%, an image-level AUC of 94.1%, and a pixel-level AUC of 95.3% +on the MVTec-AD dataset. Code is available at +https://github.com/CASIA-IVA-Lab/AnomalyGPT. +" +Taken out of context: On measuring situational awareness in LLMs,Lukas Berglund,http://arxiv.org/pdf/2309.00667v1.pdf,2023-09-01,"['cs.cl', 'cs.lg']",2309.00667v1.pdf," We aim to better understand the emergence of `situational awareness' in large +language models (LLMs). A model is situationally aware if it's aware that it's +a model and can recognize whether it's currently in testing or deployment. +Today's LLMs are tested for safety and alignment before they are deployed. An +LLM could exploit situational awareness to achieve a high score on safety +tests, while taking harmful actions after deployment. Situational awareness may +emerge unexpectedly as a byproduct of model scaling. One way to better foresee +this emergence is to run scaling experiments on abilities necessary for +situational awareness. As such an ability, we propose `out-of-context +reasoning' (in contrast to in-context learning). We study out-of-context +reasoning experimentally. First, we finetune an LLM on a description of a test +while providing no examples or demonstrations. At test time, we assess whether +the model can pass the test. To our surprise, we find that LLMs succeed on this +out-of-context reasoning task. Their success is sensitive to the training setup +and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, +performance improves with model size. These findings offer a foundation for +further empirical study, towards predicting and potentially controlling the +emergence of situational awareness in LLMs. 
Code is available at: +https://github.com/AsaCooperStickland/situational-awareness-evals. +" +Business Process Text Sketch Automation Generation Using Large Language Model,Rui Zhu,http://arxiv.org/pdf/2309.01071v1.pdf,2023-09-03,['cs.cl'],2309.01071v1.pdf," Business Process Management (BPM) is gaining increasing attention as it has +the potential to cut costs while boosting output and quality. Business process +document generation is a crucial stage in BPM. However, due to a shortage of +datasets, data-driven deep learning techniques struggle to deliver the expected +results. We propose an approach to transform Conditional Process Trees (CPTs) +into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs). +The traditional prompting approach (Few-shot In-Context Learning) tries to get +the correct answer in one go, and it can find the pattern of transforming +simple CPTs into BPTSs, but for close-domain and CPTs with complex hierarchy, +the traditional prompts perform weakly and with low correctness. We suggest +using this technique to break down a difficult CPT into a number of basic CPTs +and then solve each one in turn, drawing inspiration from the +divide-and-conquer strategy. We chose 100 process trees with depths ranging +from 2 to 5 at random, as well as CPTs with many nodes, many degrees of +selection, and cyclic nesting. Experiments show that our method can achieve a +correct rate of 93.42%, which is 45.17% better than traditional prompting +methods. Our proposed method provides a solution for business process document +generation in the absence of datasets, and secondly, it becomes potentially +possible to provide a large number of datasets for the process model extraction +(PME) domain. +" +Textbooks Are All You Need II: phi-1.5 technical report,Yuanzhi Li,http://arxiv.org/pdf/2309.05463v1.pdf,2023-09-11,"['cs.cl', 'cs.ai']",2309.05463v1.pdf," We continue the investigation into the power of smaller Transformer-based +language models as initiated by \textbf{TinyStories} -- a 10 million parameter +model that can produce coherent English -- and the follow-up work on +\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance +close to the state-of-the-art. The latter work proposed to use existing Large +Language Models (LLMs) to generate ``textbook quality"" data as a way to enhance +the learning process compared to traditional web data. We follow the +``Textbooks Are All You Need"" approach, focusing this time on common sense +reasoning in natural language, and create a new 1.3 billion parameter model +named \textbf{phi-1.5}, with performance on natural language tasks comparable +to models 5x larger, and surpassing most non-frontier LLMs on more complex +reasoning tasks such as grade-school mathematics and basic coding. More +generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, +both good -- such as the ability to ``think step by step"" or perform some +rudimentary in-context learning -- and bad, including hallucinations and the +potential for toxic and biased generations -- encouragingly though, we are +seeing improvement on that front thanks to the absence of web data. We +open-source \textbf{phi-1.5} to promote further research on these urgent +topics. 
+" +Uncovering mesa-optimization algorithms in Transformers,Johannes von Oswald,http://arxiv.org/pdf/2309.05858v1.pdf,2023-09-11,"['cs.lg', 'cs.ai']",2309.05858v1.pdf," Transformers have become the dominant model in deep learning, but the reason +for their superior performance is poorly understood. Here, we hypothesize that +the strong performance of Transformers stems from an architectural bias towards +mesa-optimization, a learned process running within the forward pass of a model +consisting of the following two steps: (i) the construction of an internal +learning objective, and (ii) its corresponding solution found through +optimization. To test this hypothesis, we reverse-engineer a series of +autoregressive Transformers trained on simple sequence modeling tasks, +uncovering underlying gradient-based mesa-optimization algorithms driving the +generation of predictions. Moreover, we show that the learned forward-pass +optimization algorithm can be immediately repurposed to solve supervised +few-shot tasks, suggesting that mesa-optimization might underlie the in-context +learning capabilities of large language models. Finally, we propose a novel +self-attention layer, the mesa-layer, that explicitly and efficiently solves +optimization problems specified in context. We find that this layer can lead to +improved performance in synthetic and preliminary language modeling +experiments, adding weight to our hypothesis that mesa-optimization is an +important operation hidden within the weights of trained Transformers. +" +Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model,Mingxin Li,http://arxiv.org/pdf/2309.06453v1.pdf,2023-09-12,"['cs.cl', 'cs.lg']",2309.06453v1.pdf," Sentence Representation Learning (SRL) is a fundamental task in Natural +Language Processing (NLP), with Contrastive learning of Sentence Embeddings +(CSE) as the mainstream technique due to its superior performance. An +intriguing phenomenon in CSE is the significant performance gap between +supervised and unsupervised methods, even when their sentence encoder and loss +function are the same. Previous works attribute this performance gap to +differences in two representation properties (alignment and uniformity). +However, alignment and uniformity only measure the results, which means they +cannot answer ""What happens during the training process that leads to the +performance gap?"" and ""How can the performance gap be narrowed?"". In this +paper, we conduct empirical experiments to answer these ""What"" and ""How"" +questions. We first answer the ""What"" question by thoroughly comparing the +behavior of supervised and unsupervised CSE during their respective training +processes. From the comparison, We observe a significant difference in fitting +difficulty. Thus, we introduce a metric, called Fitting Difficulty Increment +(FDI), to measure the fitting difficulty gap between the evaluation dataset and +the held-out training dataset, and use the metric to answer the ""What"" +question. Then, based on the insights gained from the ""What"" question, we +tackle the ""How"" question by increasing the fitting difficulty of the training +dataset. We achieve this by leveraging the In-Context Learning (ICL) capability +of the Large Language Model (LLM) to generate data that simulates complex +patterns. By utilizing the hierarchical patterns in the LLM-generated data, we +effectively narrow the gap between supervised and unsupervised CSE. 
+" +Understanding Catastrophic Forgetting in Language Models via Implicit Inference,Suhas Kotha,http://arxiv.org/pdf/2309.10105v1.pdf,2023-09-18,"['cs.cl', 'cs.lg']",2309.10105v1.pdf," Fine-tuning (via methods such as instruction-tuning or reinforcement learning +from human feedback) is a crucial step in training language models to robustly +carry out tasks of interest. However, we lack a systematic understanding of the +effects of fine-tuning, particularly on tasks outside the narrow fine-tuning +distribution. In a simplified scenario, we demonstrate that improving +performance on tasks within the fine-tuning data distribution comes at the +expense of suppressing model capabilities on other tasks. This degradation is +especially pronounced for tasks ""closest"" to the fine-tuning distribution. We +hypothesize that language models implicitly infer the task of the prompt +corresponds, and the fine-tuning process predominantly skews this task +inference towards tasks in the fine-tuning distribution. To test this +hypothesis, we propose Conjugate Prompting to see if we can recover pretrained +capabilities. Conjugate prompting artificially makes the task look farther from +the fine-tuning distribution while requiring the same capability. We find that +conjugate prompting systematically recovers some of the pretraining +capabilities on our synthetic setup. We then apply conjugate prompting to +real-world LLMs using the observation that fine-tuning distributions are +typically heavily skewed towards English. We find that simply translating the +prompts to different languages can cause the fine-tuned models to respond like +their pretrained counterparts instead. This allows us to recover the in-context +learning abilities lost via instruction tuning, and more concerningly, to +recover harmful content generation suppressed by safety fine-tuning in chatbots +like ChatGPT. +" +GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models,Yonggan Fu,http://arxiv.org/pdf/2309.10730v1.pdf,2023-09-19,"['cs.lg', 'cs.ar']",2309.10730v1.pdf," The remarkable capabilities and intricate nature of Artificial Intelligence +(AI) have dramatically escalated the imperative for specialized AI +accelerators. Nonetheless, designing these accelerators for various AI +workloads remains both labor- and time-intensive. While existing design +exploration and automation tools can partially alleviate the need for extensive +human involvement, they still demand substantial hardware expertise, posing a +barrier to non-experts and stifling AI accelerator development. Motivated by +the astonishing potential of large language models (LLMs) for generating +high-quality content in response to human language instructions, we embark on +this work to examine the possibility of harnessing LLMs to automate AI +accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework +intended to democratize AI accelerator design by leveraging human natural +languages instead of domain-specific languages. Specifically, we first perform +an in-depth investigation into LLMs' limitations and capabilities for AI +accelerator design, thus aiding our understanding of our current position and +garnering insights into LLM-powered automated AI accelerator design. 
+Furthermore, drawing inspiration from the above insights, we develop a +framework called GPT4AIGChip, which features an automated demo-augmented +prompt-generation pipeline utilizing in-context learning to guide LLMs towards +creating high-quality AI accelerator design. To our knowledge, this work is the +first to demonstrate an effective pipeline for LLM-powered automated AI +accelerator generation. Accordingly, we anticipate that our insights and +framework can serve as a catalyst for innovations in next-generation +LLM-powered design automation tools. +" +User Simulation with Large Language Models for Evaluating Task-Oriented Dialogue,Sam Davidson,http://arxiv.org/pdf/2309.13233v1.pdf,2023-09-23,['cs.cl'],2309.13233v1.pdf," One of the major impediments to the development of new task-oriented dialogue +(TOD) systems is the need for human evaluation at multiple stages and +iterations of the development process. In an effort to move toward automated +evaluation of TOD, we propose a novel user simulator built using recently +developed large pretrained language models (LLMs). In order to increase the +linguistic diversity of our system relative to the related previous work, we do +not fine-tune the LLMs used by our system on existing TOD datasets; rather we +use in-context learning to prompt the LLMs to generate robust and +linguistically diverse output with the goal of simulating the behavior of human +interlocutors. Unlike previous work, which sought to maximize goal success rate +(GSR) as the primary metric of simulator performance, our goal is a system +which achieves a GSR similar to that observed in human interactions with TOD +systems. Using this approach, our current simulator is effectively able to +interact with several TOD systems, especially on single-intent conversational +goals, while generating lexically and syntactically diverse output relative to +previous simulators that rely upon fine-tuned models. Finally, we collect a +Human2Bot dataset of humans interacting with the same TOD systems with which we +experimented in order to better quantify these achievements. +" +A Benchmark for Learning to Translate a New Language from One Grammar Book,Garrett Tanzer,http://arxiv.org/pdf/2309.16575v1.pdf,2023-09-28,['cs.cl'],2309.16575v1.pdf," Large language models (LLMs) can perform impressive feats with in-context +learning or lightweight finetuning. It is natural to wonder how well these +models adapt to genuinely new tasks, but how does one find tasks that are +unseen in internet-scale training sets? We turn to a field that is explicitly +motivated and bottlenecked by a scarcity of web data: low-resource languages. +In this paper, we introduce MTOB (Machine Translation from One Book), a +benchmark for learning to translate between English and Kalamang -- a language +with less than 200 speakers and therefore virtually no presence on the web -- +using several hundred pages of field linguistics reference materials. This task +framing is novel in that it asks a model to learn a language from a single +human-readable book of grammar explanations, rather than a large mined corpus +of in-domain data, more akin to L2 learning than L1 acquisition. We demonstrate +that baselines using current LLMs are promising but fall short of human +performance, achieving 44.7 chrF on Kalamang to English translation and 45.8 +chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a +human who learned Kalamang from the same reference materials. 
We hope that MTOB +will help measure LLM capabilities along a new dimension, and that the methods +developed to solve it could help expand access to language technology for +underserved communities by leveraging qualitatively different kinds of data +than traditional machine translation. +" +Benchmarking Cognitive Biases in Large Language Models as Evaluators,Ryan Koo,http://arxiv.org/pdf/2309.17012v1.pdf,2023-09-29,"['cs.cl', 'cs.ai', 'cs.lg']",2309.17012v1.pdf," Large Language Models (LLMs) have recently been shown to be effective as +automatic evaluators with simple prompting and in-context learning. In this +work, we assemble 15 LLMs of four different size ranges and evaluate their +output responses by preference ranking from the other LLMs as evaluators, such +as System Star is better than System Square. We then evaluate the quality of +ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators +(CoBBLEr), a benchmark to measure six different cognitive biases in LLM +evaluation outputs, such as the Egocentric bias where a model prefers to rank +its own outputs highly in evaluation. We find that LLMs are biased text quality +evaluators, exhibiting strong indications on our bias benchmark (average of 40% +of comparisons across all models) within each of their evaluations that +question their robustness as evaluators. Furthermore, we examine the +correlation between human and machine preferences and calculate the average +Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine +preferences are misaligned with humans. According to our findings, LLMs may +still be unable to be utilized for automatic annotation aligned with human +preferences. Our project page is at: https://minnesotanlp.github.io/cobbler. +" +Fewer-token Neural Speech Codec with Time-invariant Codes,Yong Ren,http://arxiv.org/pdf/2310.00014v1.pdf,2023-09-15,"['cs.sd', 'eess.as']",2310.00014v1.pdf," Language model based text-to-speech (TTS) models, like VALL-E, have gained +attention for their outstanding in-context learning capability in zero-shot +scenarios. Neural speech codec is a critical component of these models, which +can convert speech into discrete token representations. However, excessive +token sequences from the codec may negatively affect prediction accuracy and +restrict the progression of Language model based TTS models. To address this +issue, this paper proposes a novel neural speech codec with time-invariant +codes named TiCodec. By encoding and quantizing time-invariant information into +a separate code, TiCodec can reduce the amount of frame-level information that +needs encoding, effectively decreasing the number of tokens as codes of speech. +Furthermore, this paper introduces a time-invariant encoding consistency loss +to enhance the consistency of time-invariant code within an utterance and force +it to capture more global information, which can benefit the zero-shot TTS +task. Experimental results demonstrate that TiCodec can not only enhance the +quality of reconstruction speech with fewer tokens but also increase the +similarity and naturalness, as well as reduce the word error rate of the +synthesized speech by the TTS model. +" +ReAcTable: Enhancing ReAct for Table Question Answering,Yunjia Zhang,http://arxiv.org/pdf/2310.00815v1.pdf,2023-10-01,['cs.db'],2310.00815v1.pdf," Table Question Answering (TQA) presents a substantial challenge at the +intersection of natural language processing and data analytics. 
This task +involves answering natural language (NL) questions on top of tabular data, +demanding proficiency in logical reasoning, understanding of data semantics, +and fundamental analytical capabilities. Due to its significance, a substantial +volume of research has been dedicated to exploring a wide range of strategies +aimed at tackling this challenge including approaches that leverage Large +Language Models (LLMs) through in-context learning or Chain-of-Thought (CoT) +prompting as well as approaches that train and fine-tune custom models. + Nonetheless, a conspicuous gap exists in the research landscape, where there +is limited exploration of how innovative foundational research, which +integrates incremental reasoning with external tools in the context of LLMs, as +exemplified by the ReAct paradigm, could potentially bring advantages to the +TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable +(ReAct for Table Question Answering tasks), a framework inspired by the ReAct +paradigm that is carefully enhanced to address the challenges uniquely +appearing in TQA tasks such as interpreting complex data semantics, dealing +with errors generated by inconsistent data and generating intricate data +transformations. ReAcTable relies on external tools such as SQL and Python code +executors, to progressively enhance the data by generating intermediate data +representations, ultimately transforming it into a more accessible format for +answering the questions with greater ease. We demonstrate that ReAcTable +achieves remarkable performance even when compared to fine-tuned approaches. In +particular, it outperforms the best prior result on the WikiTQ benchmark, +achieving an accuracy of 68.0% without requiring training a new model or +fine-tuning. +" +GraphText: Graph Reasoning in Text Space,Jianan Zhao,http://arxiv.org/pdf/2310.01089v1.pdf,2023-10-02,"['cs.cl', 'cs.lg']",2310.01089v1.pdf," Large Language Models (LLMs) have gained the ability to assimilate human +knowledge and facilitate natural language interactions with both humans and +other LLMs. However, despite their impressive achievements, LLMs have not made +significant advancements in the realm of graph machine learning. This +limitation arises because graphs encapsulate distinct relational data, making +it challenging to transform them into natural language that LLMs understand. In +this paper, we bridge this gap with a novel framework, GraphText, that +translates graphs into natural language. GraphText derives a graph-syntax tree +for each graph that encapsulates both the node attributes and inter-node +relationships. Traversal of the tree yields a graph text sequence, which is +then processed by an LLM to treat graph tasks as text generation tasks. +Notably, GraphText offers multiple advantages. It introduces training-free +graph reasoning: even without training on graph data, GraphText with ChatGPT +can achieve on par with, or even surpassing, the performance of +supervised-trained graph neural networks through in-context learning (ICL). +Furthermore, GraphText paves the way for interactive graph reasoning, allowing +both humans and LLMs to communicate with the model seamlessly using natural +language. These capabilities underscore the vast, yet-to-be-explored potential +of LLMs in the domain of graph machine learning. 
+" +LLMParser: A LLM-based Log Parsing Framework,Zhihan Jiang,http://arxiv.org/pdf/2310.01796v1.pdf,2023-10-03,['cs.se'],2310.01796v1.pdf," The process of log parsing, which converts log messages into structured +formats, is a crucial step for various log analysis tasks. Although numerous +log parsers have been proposed, their effectiveness on complex log data is +often hindered due to reliance on human-made rules or learning-based models +with limited training data. The recent rise of powerful large language models +(LLMs) shows potential for log parsing due to their extensive pre-trained +knowledge related to code and logging. However, their accuracy is currently +limited due to the lack of specialized log parsing capabilities. Additionally, +the inconsistency of their answers and significant overhead obstruct the +practical implementation of LLM-based log parsing. + To tackle these challenges, we introduce LLMParser, the first practical +LLM-based log parsing framework. LLMParser enables accurate and robust log +parsing by leveraging the in-context learning (ICL) capability of the LLM, +employing a hierarchical candidate sampling algorithm, and selecting +high-quality demonstrations. LLMParser also includes a novel adaptive parsing +cache component to store and refine the templates generated by the LLM. This +design aids in addressing the inefficiency of LLMs by rapid matching to +previously parsed log templates. LLMParser also adaptively updates the +templates in the parsing cache to ensure consistent parsed results. Extensive +evaluation on large-scale public datasets demonstrates that LLMParser surpasses +the state-of-the-art methods. Furthermore, LLMParser significantly reduces the +query times to LLMs, achieving efficiency comparable to the most efficient +baseline, Drain. +" +Uncovering hidden geometry in Transformers via disentangling position and context,Jiajun Song,http://arxiv.org/pdf/2310.04861v1.pdf,2023-10-07,"['cs.lg', 'cs.ai', 'stat.ml']",2310.04861v1.pdf," Transformers are widely used to extract complex semantic meanings from input +tokens, yet they usually operate as black-box models. In this paper, we present +a simple yet informative decomposition of hidden states (or embeddings) of +trained transformers into interpretable components. For any layer, embedding +vectors of input sequence samples are represented by a tensor $\boldsymbol{h} +\in \mathbb{R}^{C \times T \times d}$. Given embedding vector +$\boldsymbol{h}_{c,t} \in \mathbb{R}^d$ at sequence position $t \le T$ in a +sequence (or context) $c \le C$, extracting the mean effects yields the +decomposition \[ \boldsymbol{h}_{c,t} = \boldsymbol{\mu} + \mathbf{pos}_t + +\mathbf{ctx}_c + \mathbf{resid}_{c,t} \] where $\boldsymbol{\mu}$ is the global +mean vector, $\mathbf{pos}_t$ and $\mathbf{ctx}_c$ are the mean vectors across +contexts and across positions respectively, and $\mathbf{resid}_{c,t}$ is the +residual vector. For popular transformer architectures and diverse text +datasets, empirically we find pervasive mathematical structure: (1) +$(\mathbf{pos}_t)_{t}$ forms a low-dimensional, continuous, and often spiral +shape across layers, (2) $(\mathbf{ctx}_c)_c$ shows clear cluster structure +that falls into context topics, and (3) $(\mathbf{pos}_t)_{t}$ and +$(\mathbf{ctx}_c)_c$ are mutually incoherent -- namely $\mathbf{pos}_t$ is +almost orthogonal to $\mathbf{ctx}_c$ -- which is canonical in compressed +sensing and dictionary learning. 
This decomposition offers structural insights +about input formats in in-context learning (especially for induction heads) and +in arithmetic tasks. +" +Lightweight In-Context Tuning for Multimodal Unified Models,Yixin Chen,http://arxiv.org/pdf/2310.05109v1.pdf,2023-10-08,['cs.cv'],2310.05109v1.pdf," In-context learning (ICL) involves reasoning from given contextual examples. +As more modalities comes, this procedure is becoming more challenging as the +interleaved input modalities convolutes the understanding process. This is +exemplified by the observation that multimodal models often struggle to +effectively extrapolate from contextual examples to perform ICL. To address +these challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), a +lightweight module to enhance the ICL capabilities of multimodal unified +models. The proposed M$^2$IXT module perceives an expandable context window to +incorporate various labeled examples of multiple modalities (e.g., text, image, +and coordinates). It can be prepended to various multimodal unified models +(e.g., OFA, Unival, LLaVA) of different architectures and trained via a +mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and +datasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boost +the few-shot ICL performance significantly (e.g., 18\% relative increase for +OFA), and obtained state-of-the-art results across an array of tasks including +visual question answering, image captioning, visual grounding, and visual +entailment, while being considerably small in terms of model parameters (e.g., +$\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibility +and effectiveness of M$^2$IXT as a multimodal in-context learner. +" +Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models,Haoran Wang,http://arxiv.org/pdf/2310.05253v2.pdf,2023-10-08,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05253v2.pdf," Claim verification plays a crucial role in combating misinformation. While +existing works on claim verification have shown promising results, a crucial +piece of the puzzle that remains unsolved is to understand how to verify claims +without relying on human-annotated data, which is expensive to create at a +large scale. Additionally, it is important for models to provide comprehensive +explanations that can justify their decisions and assist human fact-checkers. +This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) +Reasoning that can verify complex claims and generate explanations without the +need for annotated evidence using Large Language Models (LLMs). FOLK leverages +the in-context learning ability of LLMs to translate the claim into a +First-Order-Logic (FOL) clause consisting of predicates, each corresponding to +a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning +over a set of knowledge-grounded question-and-answer pairs to make veracity +predictions and generate explanations to justify its decision-making process. +This process makes our model highly explanatory, providing clear explanations +of its reasoning process in human-readable form. Our experiment results +indicate that FOLK outperforms strong baselines on three datasets encompassing +various claim verification challenges. Our code and data are available. +" +Glitter or Gold? 
Deriving Structured Insights from Sustainability Reports via Large Language Models,Marco Bronzini,http://arxiv.org/pdf/2310.05628v2.pdf,2023-10-09,"['cs.cl', 'cs.ce', 'cs.cy']",2310.05628v2.pdf," Over the last decade, several regulatory bodies have started requiring the +disclosure of non-financial information from publicly listed companies, in +light of the investors' increasing attention to Environmental, Social, and +Governance (ESG) issues. Such information is publicly released in a variety of +non-structured and multi-modal documentation. Hence, it is not straightforward +to aggregate and consolidate such data in a cohesive framework to further +derive insights about sustainability practices across companies and markets. +Given these premises, it is natural to resort to Information Extraction (IE) +techniques to provide concise, informative, and actionable data to the +stakeholders. Moving beyond traditional text processing techniques, in this +work we leverage Large Language Models (LLMs), along with the prominent +in-context learning technique and the Retrieved Augmented Generation (RAG) +paradigm, to extract semantically structured ESG-related information from +companies' sustainability reports. We then adopt graph-based representations to +conduct meaningful statistical, similarity and correlation analyses concerning +the ESG-related actions disclosed by companies in their sustainability reports. +These analyses unveiled that companies address ESG-related issues through +several actions encompassing recognition, compliance, and partnerships; +highlighting the complexity and joint efforts needed to address them. Moreover, +disclosure similarities emerged among companies from the same region or sector. +Lastly, we investigate which factual aspects impact the most on companies' ESG +scores using our findings and other company information. This analysis unveiled +that companies' disclosures affect ESG scores more than other financial or +company characteristics. +" +Are Large Language Models Post Hoc Explainers?,Nicholas Kroeger,http://arxiv.org/pdf/2310.05797v2.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05797v2.pdf," Large Language Models (LLMs) are increasingly used as powerful tools for a +plethora of natural language processing (NLP) applications. A recent +innovation, in-context learning (ICL), enables LLMs to learn new tasks by +supplying a few examples in the prompt during inference time, thereby +eliminating the need for model fine-tuning. While LLMs have been utilized in +several applications, their applicability in explaining the behavior of other +models remains relatively unexplored. Despite the growing number of new +explanation techniques, many require white-box access to the model and/or are +computationally expensive, highlighting a need for next-generation post hoc +explainers. In this work, we present the first framework to study the +effectiveness of LLMs in explaining other predictive models. More specifically, +we propose a novel framework encompassing multiple prompting strategies: i) +Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, +and iv) Explanation-based ICL, with varying levels of information about the +underlying ML model and the local neighborhood of the test sample. 
We conduct +extensive experiments with real-world benchmark datasets to demonstrate that +LLM-generated explanations perform on par with state-of-the-art post hoc +explainers using their ability to leverage ICL examples and their internal +knowledge in generating model explanations. On average, across four datasets +and two ML models, we observe that LLMs identify the most important feature +with 72.19% accuracy, opening up new frontiers in explainable artificial +intelligence (XAI) to explore LLM-based explanation frameworks. +" +SALMON: Self-Alignment with Principle-Following Reward Models,Zhiqing Sun,http://arxiv.org/pdf/2310.05910v1.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05910v1.pdf," Supervised Fine-Tuning (SFT) on response demonstrations combined with +Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful +paradigm for aligning LLM-based AI agents. However, a significant limitation of +such an approach is its dependency on high-quality human annotations, making +its application to intricate tasks challenging due to difficulties in obtaining +consistent response demonstrations and in-distribution response preferences. +This paper presents a novel approach, namely SALMON (Self-ALignMent with +principle-fOllowiNg reward models), to align base language models with minimal +human supervision, using only a small set of human-defined principles, yet +achieving superior performance. Central to our approach is a +principle-following reward model. Trained on synthetic preference data, this +model can generate reward scores based on arbitrary human-defined principles. +By merely adjusting these principles during the RL training phase, we gain full +control over the preferences with the reward model, subsequently influencing +the behavior of the RL-trained policies, and eliminating the reliance on the +collection of online human preferences. Applying our method to the LLaMA-2-70b +base language model, we developed an AI assistant named Dromedary-2. With only +6 exemplars for in-context learning and 31 human-defined principles, +Dromedary-2 significantly surpasses the performance of several state-of-the-art +AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have +open-sourced the code and model weights to encourage further research into +aligning LLM-based AI agents with enhanced supervision efficiency, improved +controllability, and scalable oversight. +" +OpsEval: A Comprehensive Task-Oriented AIOps Benchmark for Large Language Models,Yuhe Liu,http://arxiv.org/pdf/2310.07637v2.pdf,2023-10-11,"['cs.ai', 'cs.ni']",2310.07637v2.pdf," Large language models (LLMs) have exhibited remarkable capabilities in +NLP-related tasks such as translation, summarizing, and generation. The +application of LLMs in specific areas, notably AIOps (Artificial Intelligence +for IT Operations), holds great potential due to their advanced abilities in +information summarizing, report analyzing, and ability of API calling. +Nevertheless, the performance of current LLMs in AIOps tasks is yet to be +determined. Furthermore, a comprehensive benchmark is required to steer the +optimization of LLMs tailored for AIOps. Compared with existing benchmarks that +focus on evaluating specific fields like network configuration, in this paper, +we present \textbf{OpsEval}, a comprehensive task-oriented AIOps benchmark +designed for LLMs. 
For the first time, OpsEval assesses LLMs' proficiency in +three crucial scenarios (Wired Network Operation, 5G Communication Operation, +and Database Operation) at various ability levels (knowledge recall, analytical +thinking, and practical application). The benchmark includes 7,200 questions in +both multiple-choice and question-answer (QA) formats, available in English and +Chinese. With quantitative and qualitative results, we show how various LLM +tricks can affect the performance of AIOps, including zero-shot, +chain-of-thought, and few-shot in-context learning. We find that GPT4-score is +more consistent with experts than widely used Bleu and Rouge, which can be used +to replace automatic metrics for large-scale qualitative evaluations. +" +EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form Narrative Text Generation,Wang You,http://arxiv.org/pdf/2310.08185v1.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08185v1.pdf," Plan-and-Write is a common hierarchical approach in long-form narrative text +generation, which first creates a plan to guide the narrative writing. +Following this approach, several studies rely on simply prompting large +language models for planning, which often yields suboptimal results. In this +paper, we propose a new framework called Evaluation-guided Iterative Plan +Extraction for long-form narrative text generation (EIPE-text), which extracts +plans from the corpus of narratives and utilizes the extracted plans to +construct a better planner. EIPE-text has three stages: plan extraction, +learning, and inference. In the plan extraction stage, it iteratively extracts +and improves plans from the narrative corpus and constructs a plan corpus. We +propose a question answer (QA) based evaluation mechanism to automatically +evaluate the plans and generate detailed plan refinement instructions to guide +the iterative improvement. In the learning stage, we build a better planner by +fine-tuning with the plan corpus or in-context learning with examples in the +plan corpus. Finally, we leverage a hierarchical approach to generate long-form +narratives. We evaluate the effectiveness of EIPE-text in the domains of novels +and storytelling. Both GPT-4-based evaluations and human evaluations +demonstrate that our method can generate more coherent and relevant long-form +narratives. Our code will be released in the future. +" +Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation,Yuanyuan Liang,http://arxiv.org/pdf/2310.08395v3.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08395v3.pdf," The task of Question Generation over Knowledge Bases (KBQG) aims to convert a +logical form into a natural language question. For the sake of expensive cost +of large-scale question annotation, the methods of KBQG under low-resource +scenarios urgently need to be developed. However, current methods heavily rely +on annotated data for fine-tuning, which is not well-suited for few-shot +question generation. The emergence of Large Language Models (LLMs) has shown +their impressive generalization ability in few-shot tasks. Inspired by +Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for +reasoning, we formulate KBQG task as a reasoning problem, where the generation +of a complete question is splitted into a series of sub-question generation. +Our proposed prompting method KQG-CoT first retrieves supportive logical forms +from the unlabeled data pool taking account of the characteristics of the +logical form. 
Then, we write a prompt to explicit the reasoning chain of +generating complicated questions based on the selected demonstrations. To +further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the +logical forms by their complexity. We conduct extensive experiments over three +public KBQG datasets. The results demonstrate that our prompting method +consistently outperforms other prompting baselines on the evaluated datasets. +Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of +the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, +METEOR, and ROUGE-L, respectively. +" +Do pretrained Transformers Really Learn In-context by Gradient Descent?,Lingfeng Shen,http://arxiv.org/pdf/2310.08540v1.pdf,2023-10-12,"['cs.cl', 'cs.ai', 'cs.lg']",2310.08540v1.pdf," Is In-Context Learning (ICL) implicitly equivalent to Gradient Descent (GD)? +Several recent works draw analogies between the dynamics of GD and the emergent +behavior of ICL in large language models. However, these works make assumptions +far from the realistic natural language setting in which language models are +trained. Such discrepancies between theory and practice, therefore, necessitate +further investigation to validate their applicability. + We start by highlighting the weaknesses in prior works that construct +Transformer weights to simulate gradient descent. Their experiments with +training Transformers on ICL objective, inconsistencies in the order +sensitivity of ICL and GD, sparsity of the constructed weights, and sensitivity +to parameter changes are some examples of a mismatch from the real-world +setting. + Furthermore, we probe and compare the ICL vs. GD hypothesis in a natural +setting. We conduct comprehensive empirical analyses on language models +pretrained on natural data (LLaMa-7B). Our comparisons on various performance +metrics highlight the inconsistent behavior of ICL and GD as a function of +various factors such as datasets, models, and number of demonstrations. We +observe that ICL and GD adapt the output distribution of language models +differently. These results indicate that the equivalence between ICL and GD is +an open hypothesis, requires nuanced considerations and calls for further +studies. +" +Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning,Jiachen Li,http://arxiv.org/pdf/2310.09676v1.pdf,2023-10-14,"['cs.ro', 'cs.ai']",2310.09676v1.pdf," Prompt-based learning has been demonstrated as a compelling paradigm +contributing to large language models' tremendous success (LLMs). Inspired by +their success in language tasks, existing research has leveraged LLMs in +embodied instruction following and task planning. However, not much attention +has been paid to embodied tasks with multimodal prompts, combining vision +signals with text descriptions. This type of task poses a major challenge to +robots' capability to understand the interconnection and complementarity +between vision and language signals. In this work, we introduce an effective +framework that learns a policy to perform robot manipulation with multimodal +prompts from multi-task expert trajectories. Our methods consist of a two-stage +training pipeline that performs inverse dynamics pretraining and multi-task +finetuning. To facilitate multimodal understanding, we design our multimodal +prompt encoder by augmenting a pretrained LM with a residual connection to the +visual input and model the dependencies among action dimensions. 
Empirically, +we evaluate the efficacy of our method on the VIMA-BENCH and establish a new +state-of-the-art (10% improvement in success rate). Moreover, we demonstrate +that our model exhibits remarkable in-context learning ability. +" +Unifying Image Processing as Visual Prompting Question Answering,Yihao Liu,http://arxiv.org/pdf/2310.10513v1.pdf,2023-10-16,"['cs.cv', 'eess.iv']",2310.10513v1.pdf," Image processing is a fundamental task in computer vision, which aims at +enhancing image quality and extracting essential features for subsequent vision +applications. Traditionally, task-specific models are developed for individual +tasks and designing such models requires distinct expertise. Building upon the +success of large language models (LLMs) in natural language processing (NLP), +there is a similar trend in computer vision, which focuses on developing +large-scale models through pretraining and in-context learning. This paradigm +shift reduces the reliance on task-specific models, yielding a powerful unified +model to deal with various tasks. However, these advances have predominantly +concentrated on high-level vision tasks, with less attention paid to low-level +vision tasks. To address this issue, we propose a universal model for general +image processing that covers image restoration, image enhancement, image +feature extraction tasks, \textit{etc}. Our proposed framework, named +PromptGIP, unifies these diverse image processing tasks within a universal +framework. Inspired by NLP question answering (QA) techniques, we employ a +visual prompting question answering paradigm. Specifically, we treat the +input-output image pair as a structured question-answer sentence, thereby +reprogramming the image processing task as a prompting QA problem. PromptGIP +can undertake diverse \textbf{cross-domain} tasks using provided visual +prompts, eliminating the need for task-specific finetuning. Our methodology +offers a universal and adaptive solution to general image processing. While +PromptGIP has demonstrated a certain degree of out-of-domain task +generalization capability, further research is expected to fully explore its +more powerful emergent generalization. +" +In-Context Pretraining: Language Modeling Beyond Document Boundaries,Weijia Shi,http://arxiv.org/pdf/2310.10638v3.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10638v3.pdf," Large language models (LMs) are currently trained to predict tokens given +document prefixes, enabling them to directly perform long-form generation and +prompting-style tasks which can be reduced to document completion. Existing +pretraining pipelines train LMs by concatenating random sets of short documents +to create input contexts but the prior documents provide no signal for +predicting the next document. We instead present In-Context Pretraining, a new +approach where language models are pretrained on a sequence of related +documents, thereby explicitly encouraging them to read and reason across +document boundaries. We can do In-Context Pretraining by simply changing the +document ordering so that each context contains related documents, and directly +applying existing pretraining pipelines. However, this document sorting problem +is challenging. There are billions of documents and we would like the sort to +maximize contextual similarity for every document without repeating any data. 
+To do this, we introduce approximate algorithms for finding related documents +with efficient nearest neighbor search and constructing coherent input contexts +with a graph traversal algorithm. Our experiments show In-Context Pretraining +offers a simple and scalable approach to significantly enhance LMs'performance: +we see notable improvements in tasks that require more complex contextual +reasoning, including in-context learning (+8%), reading comprehension (+15%), +faithfulness to previous contexts (+16%), long-context reasoning (+5%), and +retrieval augmentation (+9%). +" +IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models,Shaokun Zhang,http://arxiv.org/pdf/2310.10873v1.pdf,2023-10-16,['cs.cl'],2310.10873v1.pdf," In-context learning is a promising paradigm that utilizes in-context examples +as prompts for the predictions of large language models. These prompts are +crucial for achieving strong performance. However, since the prompts need to be +sampled from a large volume of annotated examples, finding the right prompt may +result in high annotation costs. To address this challenge, this paper +introduces an influence-driven selective annotation method that aims to +minimize annotation costs while improving the quality of in-context examples. +The essence of our method is to select a pivotal subset from a large-scale +unlabeled data pool to annotate for the subsequent sampling of prompts. +Specifically, a directed graph is first constructed to represent unlabeled +data. Afterward, the influence of candidate unlabeled subsets is quantified +with a diffusion process. A simple yet effective greedy algorithm for unlabeled +data selection is lastly introduced. It iteratively selects the data if it +provides a maximum marginal gain with respect to quantified influence. Compared +with previous efforts on selective annotations, our influence-driven method +works in an end-to-end manner, avoids an intractable explicit balance between +data diversity and representativeness, and enjoys theoretical support. +Experiments confirm the superiority of the proposed method on various +benchmarks, achieving better performance under lower time consumption during +subset selection. The project page is available at +https://skzhang1.github.io/IDEAL/. +" +Eureka: Human-Level Reward Design via Coding Large Language Models,Yecheng Jason Ma,http://arxiv.org/pdf/2310.12931v1.pdf,2023-10-19,"['cs.ro', 'cs.ai', 'cs.lg']",2310.12931v1.pdf," Large Language Models (LLMs) have excelled as high-level semantic planners +for sequential decision-making tasks. However, harnessing them to learn complex +low-level manipulation tasks, such as dexterous pen spinning, remains an open +problem. We bridge this fundamental gap and present Eureka, a human-level +reward design algorithm powered by LLMs. Eureka exploits the remarkable +zero-shot generation, code-writing, and in-context improvement capabilities of +state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over +reward code. The resulting rewards can then be used to acquire complex skills +via reinforcement learning. Without any task-specific prompting or pre-defined +reward templates, Eureka generates reward functions that outperform expert +human-engineered rewards. In a diverse suite of 29 open-source RL environments +that include 10 distinct robot morphologies, Eureka outperforms human experts +on 83% of the tasks, leading to an average normalized improvement of 52%. 
The +generality of Eureka also enables a new gradient-free in-context learning +approach to reinforcement learning from human feedback (RLHF), readily +incorporating human inputs to improve the quality and the safety of the +generated rewards without model updating. Finally, using Eureka rewards in a +curriculum learning setting, we demonstrate for the first time, a simulated +Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a +pen in circles at rapid speed. +" +Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning,Jinyuan Wang,http://arxiv.org/pdf/2310.13552v2.pdf,2023-10-20,"['cs.cl', 'cs.ai']",2310.13552v2.pdf," In open-domain question-answering (ODQA), most existing questions require +single-hop reasoning on commonsense. To further extend this task, we officially +introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop +questions with explicit reasoning steps in open-domain setting. Recently, large +language models (LLMs) have found significant utility in facilitating ODQA +without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts +the reasoning capability of LLMs to a greater extent with manual or automated +paradigms. However, existing automated methods lack of quality assurance, while +manual approaches suffer from limited scalability and poor diversity, hindering +the capabilities of LLMs. In this paper, we propose Self-prompted +Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality +CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation +pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT +selection and self-prompted inference via in-context learning. Extensive +experiments on four multi-hop question-answering benchmarks show that our +proposed SP-CoT not only significantly surpasses the previous SOTA methods on +large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of +small-scale (13B) LLMs. Further analysis reveals the remarkable capability of +SP-CoT to elicit direct and concise intermediate reasoning steps by recalling +$\sim$50\% of intermediate answers on MuSiQue-Ans dataset. +" +Explainable Depression Symptom Detection in Social Media,Eliseo Bao Souto,http://arxiv.org/pdf/2310.13664v2.pdf,2023-10-20,['cs.cl'],2310.13664v2.pdf," Users of social platforms often perceive these sites as supportive spaces to +post about their mental health issues. Those conversations contain important +traces about individuals' health risks. Recently, researchers have exploited +this online information to construct mental health detection models, which aim +to identify users at risk on platforms like Twitter, Reddit or Facebook. Most +of these models are centred on achieving good classification results, ignoring +the explainability and interpretability of the decisions. Recent research has +pointed out the importance of using clinical markers, such as the use of +symptoms, to improve trust in the computational models by health professionals. +In this paper, we propose using transformer-based architectures to detect and +explain the appearance of depressive symptom markers in the users' writings. We +present two approaches: i) train a model to classify, and another one to +explain the classifier's decision separately and ii) unify the two tasks +simultaneously using a single model. 
Additionally, for this latter manner, we +also investigated the performance of recent conversational LLMs when using +in-context learning. Our natural language explanations enable clinicians to +interpret the models' decisions based on validated symptoms, enhancing trust in +the automated process. We evaluate our approach using recent symptom-based +datasets, employing both offline and expert-in-the-loop metrics to assess the +quality of the explanations generated by our models. The experimental results +show that it is possible to achieve good classification results while +generating interpretable symptom-based explanations. +" +Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs,Young-Suk Lee,http://arxiv.org/pdf/2310.13961v1.pdf,2023-10-21,"['cs.cl', 'cs.ai']",2310.13961v1.pdf," Using in-context learning (ICL) for data generation, techniques such as +Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) +can train strong conversational agents with only a small amount of human +supervision. One limitation of these approaches is that they resort to very +large language models (around 175B parameters) that are also proprietary and +non-public. Here we explore the application of such techniques to language +models that are much smaller (around 10B--40B parameters) and have permissive +licenses. We find the Self-Instruct approach to be less effective at these +sizes and propose new ICL methods that draw on two main ideas: (a) +Categorization and simplification of the ICL templates to make prompt learning +easier for the LM, and (b) Ensembling over multiple LM outputs to help select +high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct +seed tasks and employs separate pipelines for instructions that require an +input and instructions that do not. Empirical investigations with different LMs +show that: (1) Our proposed method yields higher-quality instruction tuning +data than Self-Instruct, (2) It improves performances of both vanilla and +instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned +LMs generate more useful outputs than their larger un-tuned counterparts. Our +codebase is available at https://github.com/IBM/ensemble-instruct. +" +Investigating the Fairness of Large Language Models for Predictions on Tabular Data,Yanchen Liu,http://arxiv.org/pdf/2310.14607v1.pdf,2023-10-23,"['cs.cl', 'cs.lg']",2310.14607v1.pdf," Recent literature has suggested the potential of using large language models +(LLMs) to make predictions for tabular tasks. However, LLMs have been shown to +exhibit harmful social biases that reflect the stereotypes and inequalities +present in the society. To this end, as well as the widespread use of tabular +data in many high-stake applications, it is imperative to explore the following +questions: what sources of information do LLMs draw upon when making +predictions for tabular tasks; whether and to what extent are LLM predictions +for tabular tasks influenced by social biases and stereotypes; and what are the +consequential implications for fairness? Through a series of experiments, we +delve into these questions and show that LLMs tend to inherit social biases +from their training data which significantly impact their fairness in tabular +prediction tasks. 
Furthermore, our investigations show that in the context of +bias mitigation, though in-context learning and fine-tuning have a moderate +effect, the fairness metric gap between different subgroups is still larger +than that in traditional machine learning models, such as Random Forest and +shallow Neural Networks. This observation emphasizes that the social biases are +inherent within the LLMs themselves and inherited from their pre-training +corpus, not only from the downstream task datasets. Besides, we demonstrate +that label-flipping of in-context examples can significantly reduce biases, +further highlighting the presence of inherent bias within LLMs. +" +Large Language Models are Visual Reasoning Coordinators,Liangyu Chen,http://arxiv.org/pdf/2310.15166v1.pdf,2023-10-23,"['cs.cv', 'cs.cl']",2310.15166v1.pdf," Visual reasoning requires multimodal perception and commonsense cognition of +the world. Recently, multiple vision-language models (VLMs) have been proposed +with excellent commonsense reasoning ability in various domains. However, how +to harness the collective power of these complementary VLMs is rarely explored. +Existing methods like ensemble still struggle to aggregate these models with +the desired higher-order communications. In this work, we propose Cola, a novel +paradigm that coordinates multiple VLMs for visual reasoning. Our key insight +is that a large language model (LLM) can efficiently coordinate multiple VLMs +by facilitating natural language communication that leverages their distinct +and complementary capabilities. Extensive experiments demonstrate that our +instruction tuning variant, Cola-FT, achieves state-of-the-art performance on +visual question answering (VQA), outside knowledge VQA, visual entailment, and +visual spatial reasoning tasks. Moreover, we show that our in-context learning +variant, Cola-Zero, exhibits competitive performance in zero and few-shot +settings, without finetuning. Through systematic ablation studies and +visualizations, we validate that a coordinator LLM indeed comprehends the +instruction prompts as well as the separate functionalities of VLMs; it then +coordinates them to enable impressive visual reasoning capabilities. +" +Function Vectors in Large Language Models,Eric Todd,http://arxiv.org/pdf/2310.15213v1.pdf,2023-10-23,"['cs.cl', 'cs.lg']",2310.15213v1.pdf," We report the presence of a simple neural mechanism that represents an +input-output function as a vector within autoregressive transformer language +models (LMs). Using causal mediation analysis on a diverse range of +in-context-learning (ICL) tasks, we find that a small number attention heads +transport a compact representation of the demonstrated task, which we call a +function vector (FV). FVs are robust to changes in context, i.e., they trigger +execution of the task on inputs such as zero-shot and natural text settings +that do not resemble the ICL contexts from which they are collected. We test +FVs across a range of tasks, models, and layers and find strong causal effects +across settings in middle layers. We investigate the internal structure of FVs +and find while that they often contain information that encodes the output +space of the function, this information alone is not sufficient to reconstruct +an FV. Finally, we test semantic vector composition in FVs, and find that to +some extent they can be summed to create vectors that trigger new complex +tasks. 
Taken together, our findings suggest that LLMs contain internal +abstractions of general-purpose functions that can be invoked in a variety of +contexts. +" +TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction,Junyi Liu,http://arxiv.org/pdf/2310.15556v2.pdf,2023-10-24,"['cs.cl', 'cs.ir']",2310.15556v2.pdf," Since ChatGPT released its API for public use, the number of applications +built on top of commercial large language models (LLMs) increase exponentially. +One popular usage of such models is leveraging its in-context learning ability +and generating responses given user queries leveraging knowledge obtained by +retrieval augmentation. One problem of deploying commercial retrieval-augmented +LLMs is the cost due to the additionally retrieved context that largely +increases the input token size of the LLMs. To mitigate this, we propose a +token compression scheme that includes two methods: summarization compression +and semantic compression. The first method applies a T5-based model that is +fine-tuned by datasets generated using self-instruct containing samples with +varying lengths and reduce token size by doing summarization. The second method +further compresses the token size by removing words with lower impact on the +semantic. In order to adequately evaluate the effectiveness of the proposed +methods, we propose and utilize a dataset called Food-Recommendation DB (FRDB) +focusing on food recommendation for women around pregnancy period or infants. +Our summarization compression can reduce 65% of the retrieval token size with +further 0.3% improvement on the accuracy; semantic compression provides a more +flexible way to trade-off the token size with performance, for which we can +reduce the token size by 20% with only 1.6% of accuracy drop. +" +Testing the Limits: Unusual Text Inputs Generation for Mobile App Crash Detection with Large Language Model,Zhe Liu,http://arxiv.org/pdf/2310.15657v1.pdf,2023-10-24,['cs.se'],2310.15657v1.pdf," Mobile applications have become a ubiquitous part of our daily life, +providing users with access to various services and utilities. Text input, as +an important interaction channel between users and applications, plays an +important role in core functionality such as search queries, authentication, +messaging, etc. However, certain special text (e.g., -18 for Font Size) can +cause the app to crash, and generating diversified unusual inputs for fully +testing the app is highly demanded. Nevertheless, this is also challenging due +to the combination of explosion dilemma, high context sensitivity, and complex +constraint relations. This paper proposes InputBlaster which leverages the LLM +to automatically generate unusual text inputs for mobile app crash detection. +It formulates the unusual inputs generation problem as a task of producing a +set of test generators, each of which can yield a batch of unusual text inputs +under the same mutation rule. In detail, InputBlaster leverages LLM to produce +the test generators together with the mutation rules serving as the reasoning +chain, and utilizes the in-context learning schema to demonstrate the LLM with +examples for boosting the performance. InputBlaster is evaluated on 36 text +input widgets with cash bugs involving 31 popular Android apps, and results +show that it achieves 78% bug detection rate, with 136% higher than the best +baseline. 
Besides, we integrate it with the automated GUI testing tool and +detect 37 unseen crashes in real-world apps from Google Play. +" +ExPT: Synthetic Pretraining for Few-Shot Experimental Design,Tung Nguyen,http://arxiv.org/pdf/2310.19961v1.pdf,2023-10-30,"['cs.lg', 'cs.ai']",2310.19961v1.pdf," Experimental design is a fundamental problem in many science and engineering +fields. In this problem, sample efficiency is crucial due to the time, money, +and safety costs of real-world design evaluations. Existing approaches either +rely on active data collection or access to large, labeled datasets of past +experiments, making them impractical in many real-world scenarios. In this +work, we address the more challenging yet realistic setting of few-shot +experimental design, where only a few labeled data points of input designs and +their corresponding values are available. We approach this problem as a +conditional generation task, where a model conditions on a few labeled examples +and the desired output to generate an optimal input design. To this end, we +introduce Experiment Pretrained Transformers (ExPT), a foundation model for +few-shot experimental design that employs a novel combination of synthetic +pretraining with in-context learning. In ExPT, we only assume knowledge of a +finite collection of unlabelled data points from the input domain and pretrain +a transformer neural network to optimize diverse synthetic functions defined +over this domain. Unsupervised pretraining allows ExPT to adapt to any design +task at test time in an in-context fashion by conditioning on a few labeled +data points from the target task and generating the candidate optima. We +evaluate ExPT on few-shot experimental design in challenging domains and +demonstrate its superior generality and performance compared to existing +methods. The source code is available at https://github.com/tung-nd/ExPT.git. +" +Unleashing the Creative Mind: Language Model As Hierarchical Policy For Improved Exploration on Challenging Problem Solving,Zhan Ling,http://arxiv.org/pdf/2311.00694v1.pdf,2023-11-01,"['cs.ai', 'cs.cl']",2311.00694v1.pdf," Large Language Models (LLMs) have achieved tremendous progress, yet they +still often struggle with challenging reasoning problems. Current approaches +address this challenge by sampling or searching detailed and low-level +reasoning chains. However, these methods are still limited in their exploration +capabilities, making it challenging for correct solutions to stand out in the +huge solution space. In this work, we unleash LLMs' creative potential for +exploring multiple diverse problem solving strategies by framing an LLM as a +hierarchical policy via in-context learning. This policy comprises of a +visionary leader that proposes multiple diverse high-level problem-solving +tactics as hints, accompanied by a follower that executes detailed +problem-solving processes following each of the high-level instruction. The +follower uses each of the leader's directives as a guide and samples multiple +reasoning chains to tackle the problem, generating a solution group for each +leader proposal. Additionally, we propose an effective and efficient +tournament-based approach to select among these explored solution groups to +reach the final answer. Our approach produces meaningful and inspiring hints, +enhances problem-solving strategy exploration, and improves the final answer +accuracy on challenging problems in the MATH dataset. 
Code will be released at +https://github.com/lz1oceani/LLM-As-Hierarchical-Policy. +" +Sentiment Analysis through LLM Negotiations,Xiaofei Sun,http://arxiv.org/pdf/2311.01876v1.pdf,2023-11-03,['cs.cl'],2311.01876v1.pdf," A standard paradigm for sentiment analysis is to rely on a singular LLM and +makes the decision in a single round under the framework of in-context +learning. This framework suffers the key disadvantage that the single-turn +output generated by a single LLM might not deliver the perfect decision, just +as humans sometimes need multiple attempts to get things right. This is +especially true for the task of sentiment analysis where deep reasoning is +required to address the complex linguistic phenomenon (e.g., clause +composition, irony, etc) in the input. + To address this issue, this paper introduces a multi-LLM negotiation +framework for sentiment analysis. The framework consists of a reasoning-infused +generator to provide decision along with rationale, a explanation-deriving +discriminator to evaluate the credibility of the generator. The generator and +the discriminator iterate until a consensus is reached. The proposed framework +naturally addressed the aforementioned challenge, as we are able to take the +complementary abilities of two LLMs, have them use rationale to persuade each +other for correction. + Experiments on a wide range of sentiment analysis benchmarks (SST-2, Movie +Review, Twitter, yelp, amazon, IMDB) demonstrate the effectiveness of proposed +approach: it consistently yields better performances than the ICL baseline +across all benchmarks, and even superior performances to supervised baselines +on the Twitter and movie review datasets. +" +ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models,Zhelun Shi,http://arxiv.org/pdf/2311.02692v1.pdf,2023-11-05,['cs.cv'],2311.02692v1.pdf," Multimodal Large Language Models (MLLMs) have shown impressive abilities in +interacting with visual content with myriad potential downstream tasks. +However, even though a list of benchmarks has been proposed, the capabilities +and limitations of MLLMs are still not comprehensively understood, due to a +lack of a standardized and holistic evaluation framework. To this end, we +present the first Comprehensive Evaluation Framework (ChEF) that can +holistically profile each MLLM and fairly compare different MLLMs. First, we +structure ChEF as four modular components, i.e., Scenario as scalable +multimodal datasets, Instruction as flexible instruction retrieving formulae, +Inferencer as reliable question answering strategies, and Metric as indicative +task-specific score functions. Based on them, ChEF facilitates versatile +evaluations in a standardized framework, and new evaluations can be built by +designing new Recipes (systematic selection of these four components). Notably, +current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, +we introduce 6 new recipes to quantify competent MLLMs' desired capabilities +(or called desiderata, i.e., calibration, in-context learning, instruction +following, language performance, hallucination, and robustness) as reliable +agents that can perform real-world multimodal interactions. Third, we conduct a +large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. 
+Our evaluation summarized over 20 valuable observations concerning the +generalizability of MLLMs across various scenarios and the composite capability +of MLLMs required for multimodal interactions. We will publicly release all the +detailed implementations for further analysis, as well as an easy-to-use +modular toolkit for the integration of new recipes and models, so that ChEF can +be a growing evaluation framework for the MLLM community. +" +Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs,Wenke Xia,http://arxiv.org/pdf/2311.02847v2.pdf,2023-11-06,"['cs.ro', 'cs.ai']",2311.02847v2.pdf," Generalizable articulated object manipulation is essential for home-assistant +robots. Recent efforts focus on imitation learning from demonstrations or +reinforcement learning in simulation, however, due to the prohibitive costs of +real-world data collection and precise object simulation, it still remains +challenging for these works to achieve broad adaptability across diverse +articulated objects. Recently, many works have tried to utilize the strong +in-context learning ability of Large Language Models (LLMs) to achieve +generalizable robotic manipulation, but most of these researches focus on +high-level task planning, sidelining low-level robotic control. In this work, +building on the idea that the kinematic structure of the object determines how +we can manipulate it, we propose a kinematic-aware prompting framework that +prompts LLMs with kinematic knowledge of objects to generate low-level motion +trajectory waypoints, supporting various object manipulation. To effectively +prompt LLMs with the kinematic structure of different objects, we design a +unified kinematic knowledge parser, which represents various articulated +objects as a unified textual description containing kinematic joints and +contact location. Building upon this unified description, a kinematic-aware +planner model is proposed to generate precise 3D manipulation waypoints via a +designed kinematic-aware chain-of-thoughts prompting method. Our evaluation +spanned 48 instances across 16 distinct categories, revealing that our +framework not only outperforms traditional methods on 8 seen categories but +also shows a powerful zero-shot capability for 8 unseen articulated object +categories. Moreover, the real-world experiments on 7 different object +categories prove our framework's adaptability in practical scenarios. Code is +released at +\href{https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main}{here}. +" +In-Context Learning for Knowledge Base Question Answering for Unmanned Systems based on Large Language Models,Yunlong Chen,http://arxiv.org/pdf/2311.02956v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'i.2.7']",2311.02956v1.pdf," Knowledge Base Question Answering (KBQA) aims to answer factoid questions +based on knowledge bases. However, generating the most appropriate knowledge +base query code based on Natural Language Questions (NLQ) poses a significant +challenge in KBQA. In this work, we focus on the CCKS2023 Competition of +Question Answering with Knowledge Graph Inference for Unmanned Systems. +Inspired by the recent success of large language models (LLMs) like ChatGPT and +GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL) +generation framework to generate the most appropriate CQL based on the given +NLQ. 
Our generative framework contains six parts: an auxiliary model predicting +the syntax-related information of CQL based on the given NLQ, a proper noun +matcher extracting proper nouns from the given NLQ, a demonstration example +selector retrieving similar examples of the input sample, a prompt constructor +designing the input template of ChatGPT, a ChatGPT-based generation model +generating the CQL, and an ensemble model to obtain the final answers from +diversified outputs. With our ChatGPT-based CQL generation framework, we +achieved the second place in the CCKS 2023 Question Answering with Knowledge +Graph Inference for Unmanned Systems competition, achieving an F1-score of +0.92676. +" +Retrieval-Augmented Code Generation for Universal Information Extraction,Yucan Guo,http://arxiv.org/pdf/2311.02962v1.pdf,2023-11-06,"['cs.ai', 'cs.cl', 'cs.ir']",2311.02962v1.pdf," Information Extraction (IE) aims to extract structural knowledge (e.g., +entities, relations, events) from natural language texts, which brings +challenges to existing methods due to task-specific schemas and complex text +expressions. Code, as a typical kind of formalized language, is capable of +describing structural knowledge under various schemas in a universal way. On +the other hand, Large Language Models (LLMs) trained on both codes and texts +have demonstrated powerful capabilities of transforming texts into codes, which +provides a feasible solution to IE tasks. Therefore, in this paper, we propose +a universal retrieval-augmented code generation framework based on LLMs, called +Code4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define +task-specific schemas of various structural knowledge in a universal way. By so +doing, extracting knowledge under these schemas can be transformed into +generating codes that instantiate the predefined Python classes with the +information in texts. To generate these codes more precisely, Code4UIE adopts +the in-context learning mechanism to instruct LLMs with examples. In order to +obtain appropriate examples for different tasks, Code4UIE explores several +example retrieval strategies, which can retrieve examples semantically similar +to the given texts. Extensive experiments on five representative IE tasks +across nine datasets demonstrate the effectiveness of the Code4UIE framework. +" +Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning,Sarkar Snigdha Sarathi Das,http://arxiv.org/pdf/2311.03748v1.pdf,2023-11-07,['cs.cl'],2311.03748v1.pdf," Unified Sequence Labeling that articulates different sequence labeling +problems such as Named Entity Recognition, Relation Extraction, Semantic Role +Labeling, etc. in a generalized sequence-to-sequence format opens up the +opportunity to make the maximum utilization of large language model knowledge +toward structured prediction. Unfortunately, this requires formatting them into +specialized augmented format unknown to the base pretrained language model +(PLMs) necessitating finetuning to the target format. This significantly bounds +its usefulness in data-limited settings where finetuning large models cannot +properly generalize to the target format. To address this challenge and +leverage PLM knowledge effectively, we propose FISH-DIP, a sample-aware dynamic +sparse finetuning strategy that selectively focuses on a fraction of +parameters, informed by feedback from highly regressing examples, during the +fine-tuning process. 
By leveraging the dynamism of sparsity, our approach +mitigates the impact of well-learned samples and prioritizes underperforming +instances for improvement in generalization. Across five sequence +labeling tasks, we demonstrate that FISH-DIP can smoothly optimize the model in low +resource settings, offering up to 40% performance improvements over full +fine-tuning depending on the target evaluation settings. Also, compared to +in-context learning and other parameter-efficient fine-tuning approaches, +FISH-DIP performs comparably or better, notably in extreme low-resource +settings. +" +UL2: Unifying Language Learning Paradigms,Yi Tay,http://arxiv.org/pdf/2205.05131v3.pdf,2022-05-10,['cs.cl'],2205.05131v3.pdf," Existing pre-trained models are generally geared towards a particular class +of problems. To date, there still seems to be no consensus on what the right +architecture and pre-training setup should be. This paper presents a unified +framework for pre-training models that are universally effective across +datasets and setups. We begin by disentangling architectural archetypes from +pre-training objectives -- two concepts that are commonly conflated. Next, we +present a generalized & unified perspective for self-supervision in NLP and +show how different pre-training objectives can be cast as one another and how +interpolating between different objectives can be effective. We then propose +Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse +pre-training paradigms together. We furthermore introduce a notion of mode +switching, wherein downstream fine-tuning is associated with specific +pre-training schemes. We conduct extensive ablative experiments to compare +multiple pre-training objectives and find that our method pushes the +Pareto frontier by outperforming T5 & GPT-like models across multiple diverse +setups. By scaling our model up to 20B parameters, we achieve SOTA performance +on 50 well-established supervised finetuning-based NLP tasks. Our model also +achieves strong results at in-context learning, outperforming 175B GPT-3 on +zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot +summarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20B +also works well with chain-of-thought prompting and reasoning, making it an +appealing choice for research into reasoning at a small to medium scale of 20B +parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model, +achieving MMLU and Big-Bench scores competitive with FLAN-PaLM 62B. We release +Flax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B. +" +Human-Timescale Adaptation in an Open-Ended Task Space, Adaptive Agent Team,http://arxiv.org/pdf/2301.07608v1.pdf,2023-01-18,"['cs.lg', 'cs.ai', 'cs.ne']",2301.07608v1.pdf," Foundation models have shown impressive adaptation and scalability in +supervised and self-supervised learning problems, but so far these successes +have not fully translated to reinforcement learning (RL). In this work, we +demonstrate that training an RL agent at scale leads to a general in-context +learning algorithm that can adapt to open-ended novel embodied 3D problems as +quickly as humans. In a vast space of held-out environment dynamics, our +adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, +efficient exploitation of acquired knowledge, and can successfully be prompted +with first-person demonstrations.
Adaptation emerges from three ingredients: +(1) meta-reinforcement learning across a vast, smooth and diverse task +distribution, (2) a policy parameterised as a large-scale attention-based +memory architecture, and (3) an effective automated curriculum that prioritises +tasks at the frontier of an agent's capabilities. We demonstrate characteristic +scaling laws with respect to network size, memory length, and richness of the +training task distribution. We believe our results lay the foundation for +increasingly general and adaptive RL agents that perform well across +ever-larger open-ended domains. +" +DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4,Zhengliang Liu,http://arxiv.org/pdf/2303.11032v1.pdf,2023-03-20,"['cs.cl', 'cs.cy']",2303.11032v1.pdf," The digitization of healthcare has facilitated the sharing and re-using of +medical data but has also raised concerns about confidentiality and privacy. +HIPAA (Health Insurance Portability and Accountability Act) mandates removing +re-identifying information before the dissemination of medical records. Thus, +effective and efficient solutions for de-identifying medical data, especially +those in free-text forms, are highly needed. While various computer-assisted +de-identification methods, including both rule-based and learning-based, have +been developed and used in prior practice, such solutions still lack +generalizability or need to be fine-tuned according to different scenarios, +significantly imposing restrictions in wider use. The advancement of large +language models (LLM), such as ChatGPT and GPT-4, have shown great potential in +processing text data in the medical domain with zero-shot in-context learning, +especially in the task of privacy protection, as these models can identify +confidential information by their powerful named entity recognition (NER) +capability. In this work, we developed a novel GPT4-enabled de-identification +framework (""DeID-GPT"") to automatically identify and remove the identifying +information. Compared to existing commonly used medical text data +de-identification methods, our developed DeID-GPT showed the highest accuracy +and remarkable reliability in masking private information from the unstructured +medical text while preserving the original structure and meaning of the text. +This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text +data processing and de-identification, which provides insights for further +research and solution development on the use of LLMs such as ChatGPT/GPT-4 in +healthcare. Codes and benchmarking data information are available at +https://github.com/yhydhx/ChatGPT-API. +" +TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs,Yaobo Liang,http://arxiv.org/pdf/2303.16434v1.pdf,2023-03-29,"['cs.ai', 'cs.cl']",2303.16434v1.pdf," Artificial Intelligence (AI) has made incredible progress recently. On the +one hand, advanced foundation models like ChatGPT can offer powerful +conversation, in-context learning and code generation abilities on a broad +range of open-domain tasks. They can also generate high-level solution outlines +for domain-specific tasks based on the common sense knowledge they have +acquired. However, they still face difficulties with some specialized tasks +because they lack enough domain-specific data during pre-training or they often +have errors in their neural network computations on those tasks that need +accurate executions. 
On the other hand, there are also many existing models and +systems (symbolic-based or neural-based) that can do some domain-specific tasks +very well. However, due to the different implementation or working mechanisms, +they are not easily accessible or compatible with foundation models. Therefore, +there is a clear and pressing need for a mechanism that can leverage foundation +models to propose task solution outlines and then automatically match some of +the sub-tasks in the outlines to the off-the-shelf models and systems with +special functionalities to complete them. Inspired by this, we introduce +TaskMatrix.AI as a new AI ecosystem that connects foundation models with +millions of APIs for task completion. Unlike most previous work that aimed to +improve a single AI model, TaskMatrix.AI focuses more on using existing +foundation models (as a brain-like central system) and APIs of other AI models +and systems (as sub-task solvers) to achieve diversified tasks in both digital +and physical domains. As a position paper, we will present our vision of how to +build such an ecosystem, explain each key component, and use study cases to +illustrate both the feasibility of this vision and the main challenges we need +to address next. +" +Subject-driven Text-to-Image Generation via Apprenticeship Learning,Wenhu Chen,http://arxiv.org/pdf/2304.00186v5.pdf,2023-04-01,"['cs.cv', 'cs.ai']",2304.00186v5.pdf," Recent text-to-image generation models like DreamBooth have made remarkable +progress in generating highly customized images of a target subject, by +fine-tuning an ``expert model'' for a given subject from a few examples. +However, this process is expensive, since a new expert model must be learned +for each subject. In this paper, we present SuTI, a Subject-driven +Text-to-Image generator that replaces subject-specific fine tuning with +in-context learning. Given a few demonstrations of a new subject, SuTI can +instantly generate novel renditions of the subject in different scenes, without +any subject-specific optimization. SuTI is powered by apprenticeship learning, +where a single apprentice model is learned from data generated by a massive +number of subject-specific expert models. Specifically, we mine millions of +image clusters from the Internet, each centered around a specific visual +subject. We adopt these clusters to train a massive number of expert models, +each specializing in a different subject. The apprentice model SuTI then learns +to imitate the behavior of these fine-tuned experts. SuTI can generate +high-quality and customized subject-specific images 20x faster than +optimization-based SoTA methods. On the challenging DreamBench and +DreamBench-v2, our human evaluation shows that SuTI significantly outperforms +existing models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt, +Re-Imagen and DreamBooth, especially on the subject and text alignment aspects. +" +Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT,Yinlin Deng,http://arxiv.org/pdf/2304.02014v1.pdf,2023-04-04,['cs.se'],2304.02014v1.pdf," Deep Learning (DL) library bugs affect downstream DL applications, +emphasizing the need for reliable systems. Generating valid input programs for +fuzzing DL libraries is challenging due to the need for satisfying both +language syntax/semantics and constraints for constructing valid computational +graphs. 
Recently, the TitanFuzz work demonstrates that modern Large Language +Models (LLMs) can be directly leveraged to implicitly learn all the constraints +to generate valid DL programs for fuzzing. However, LLMs tend to generate +ordinary programs following similar patterns seen in their massive training +corpora, while fuzzing favors unusual inputs that cover edge cases or are +unlikely to be manually produced. + To fill this gap, this paper proposes FuzzGPT, the first technique to prime +LLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the +well-known hypothesis that historical bug-triggering programs may include +rare/valuable code ingredients important for bug finding. Traditional +techniques leveraging such historical information require intensive human +efforts to design dedicated generators and ensure the validity of generated +programs. FuzzGPT demonstrates that this process can be fully automated via the +intrinsic capabilities of LLMs (including fine-tuning and in-context learning), +while being generalizable and applicable to challenging domains. While FuzzGPT +can be applied with different LLMs, this paper focuses on the powerful +GPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potential +of directly leveraging the instruct-following capability of the recent ChatGPT +for effective fuzzing. Evaluation on two popular DL libraries (PyTorch and +TensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz, +detecting 76 bugs, with 49 already confirmed as previously unknown bugs, +including 11 high-priority bugs or security vulnerabilities. +" +ImpressionGPT: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT,Chong Ma,http://arxiv.org/pdf/2304.08448v2.pdf,2023-04-17,"['cs.cl', 'cs.ai']",2304.08448v2.pdf," The 'Impression' section of a radiology report is a critical basis for +communication between radiologists and other physicians, and it is typically +written by radiologists based on the 'Findings' section. However, writing +numerous impressions can be laborious and error-prone for radiologists. +Although recent studies have achieved promising results in automatic impression +generation using large-scale medical text data for pre-training and fine-tuning +pre-trained language models, such models often require substantial amounts of +medical text data and have poor generalization performance. While large +language models (LLMs) like ChatGPT have shown strong generalization +capabilities and performance, their performance in specific domains, such as +radiology, remains under-investigated and potentially limited. To address this +limitation, we propose ImpressionGPT, which leverages the in-context learning +capability of LLMs by constructing dynamic contexts using domain-specific, +individualized data. This dynamic prompt approach enables the model to learn +contextual knowledge from semantically similar examples from existing data. +Additionally, we design an iterative optimization algorithm that performs +automatic evaluation on the generated impression results and composes the +corresponding instruction prompts to further optimize the model. The proposed +ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and +OpenI datasets without requiring additional training data or fine-tuning the +LLMs. 
This work presents a paradigm for localizing LLMs that can be applied in +a wide range of similar application scenarios, bridging the gap between +general-purpose LLMs and the specific language processing needs of various +domains. +" +NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers,Kai Shen,http://arxiv.org/pdf/2304.09116v3.pdf,2023-04-18,"['eess.as', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.sd']",2304.09116v3.pdf," Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild +datasets is important to capture the diversity in human speech such as speaker +identities, prosodies, and styles (e.g., singing). Current large TTS systems +usually quantize speech into discrete tokens and use language models to +generate these tokens one by one, which suffer from unstable prosody, word +skipping/repeating issue, and poor voice quality. In this paper, we develop +NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual +vector quantizers to get the quantized latent vectors and uses a diffusion +model to generate these latent vectors conditioned on text input. To enhance +the zero-shot capability that is important to achieve diverse speech synthesis, +we design a speech prompting mechanism to facilitate in-context learning in the +diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to +large-scale datasets with 44K hours of speech and singing data and evaluate its +voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS +systems by a large margin in terms of prosody/timbre similarity, robustness, +and voice quality in a zero-shot setting, and performs novel zero-shot singing +synthesis with only a speech prompt. Audio samples are available at +https://speechresearch.github.io/naturalspeech2. +" +Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback,Yao Fu,http://arxiv.org/pdf/2305.10142v1.pdf,2023-05-17,['cs.cl'],2305.10142v1.pdf," We study whether multiple large language models (LLMs) can autonomously +improve each other in a negotiation game by playing, reflecting, and +criticizing. We are interested in this question because if LLMs were able to +improve each other, it would imply the possibility of creating strong AI agents +with minimal human intervention. We ask two LLMs to negotiate with each other, +playing the roles of a buyer and a seller, respectively. They aim to reach a +deal with the buyer targeting a lower price and the seller a higher one. A +third language model, playing the critic, provides feedback to a player to +improve the player's negotiation strategies. We let the two agents play +multiple rounds, using previous negotiation history and AI feedback as +in-context demonstrations to improve the model's negotiation strategy +iteratively. We use different LLMs (GPT and Claude) for different roles and use +the deal price as the evaluation metric. Our experiments reveal multiple +intriguing findings: (1) Only a subset of the language models we consider can +self-play and improve the deal price from AI feedback, weaker models either do +not understand the game's rules or cannot incorporate AI feedback for further +improvement. (2) Models' abilities to learn from the feedback differ when +playing different roles. For example, it is harder for Claude-instant to +improve as the buyer than as the seller. 
(3) When unrolling the game to +multiple rounds, stronger agents can consistently improve their performance by +meaningfully using previous experiences and iterative AI feedback, yet have a +higher risk of breaking the deal. We hope our work provides insightful initial +explorations of having models autonomously improve each other with game playing +and AI feedback. +" +XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages,Sebastian Ruder,http://arxiv.org/pdf/2305.11938v2.pdf,2023-05-19,['cs.cl'],2305.11938v2.pdf," Data scarcity is a crucial issue for the development of highly multilingual +NLP systems. Yet for many under-represented languages (ULs) -- languages for +which NLP research is particularly far behind in meeting user needs -- it is +feasible to annotate small amounts of data. Motivated by this, we propose +XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather +than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by +speakers of high-resource languages; and its focus on under-represented +languages where this scarce-data scenario tends to be most realistic. XTREME-UP +evaluates the capabilities of language models across 88 under-represented +languages over 9 key user-centric technologies, including ASR, OCR, MT, and +information access tasks that are of general utility. We create new datasets +for OCR, autocomplete, semantic parsing, and transliteration, and build on and +refine existing datasets for other tasks. XTREME-UP provides a methodology for +evaluating many modeling scenarios including text-only, multi-modal (vision, +audio, and text), supervised parameter tuning, and in-context learning. We +evaluate commonly used models on the benchmark. We release all code and scripts +to train and evaluate models. +" +Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization,Jeonghoon Kim,http://arxiv.org/pdf/2305.14152v2.pdf,2023-05-23,"['cs.lg', 'cs.ai']",2305.14152v2.pdf," Large language models (LLMs) face challenges in fine-tuning and +deployment due to their high memory demands and computational costs. While +parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage +of the optimizer state during fine-tuning, the inherent size of pre-trained LLM +weights continues to be a pressing concern. Even though quantization techniques +are widely proposed to ease memory demands and accelerate LLM inference, most +of these techniques are geared towards the deployment phase. To bridge this +gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation +(PEQA) - a simple yet effective method that combines the advantages of PEFT +with quantized LLMs. By updating solely the quantization scales, PEQA can be +directly applied to quantized LLMs, ensuring seamless task transitions. +Parallel to existing PEFT methods, PEQA significantly reduces the memory +overhead associated with the optimizer state. Furthermore, it leverages the +advantages of quantization to substantially reduce model sizes. Even after +fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, +allowing for accelerated inference at the deployment stage. We employ +PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion +parameters. To assess the logical reasoning and language comprehension of +PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using an instruction +dataset.
Our results show that even when LLMs are quantized to below 4-bit +precision, their capabilities in language modeling, few-shot in-context +learning, and comprehension can be resiliently restored to (or even improved +over) their full-precision original performances with PEQA. +" +PaLI-X: On Scaling up a Multilingual Vision and Language Model,Xi Chen,http://arxiv.org/pdf/2305.18565v1.pdf,2023-05-29,"['cs.cv', 'cs.cl', 'cs.lg']",2305.18565v1.pdf," We present the training recipe and results of scaling up PaLI-X, a +multilingual vision and language model, both in terms of size of the components +and the breadth of its training task mixture. Our model achieves new levels of +performance on a wide-range of varied and complex tasks, including multiple +image-based captioning and question-answering tasks, image-based document +understanding and few-shot (in-context) learning, as well as object detection, +video question answering, and video captioning. PaLI-X advances the +state-of-the-art on most vision-and-language benchmarks considered (25+ of +them). Finally, we observe emerging capabilities, such as complex counting and +multilingual object detection, tasks that are not explicitly in the training +mix. +" +"Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations",Lifan Yuan,http://arxiv.org/pdf/2306.04618v2.pdf,2023-06-07,"['cs.cl', 'cs.cr', 'cs.lg']",2306.04618v2.pdf," This paper reexamines the research on out-of-distribution (OOD) robustness in +the field of NLP. We find that the distribution shift settings in previous +studies commonly lack adequate challenges, hindering the accurate evaluation of +OOD robustness. To address these issues, we propose a benchmark construction +protocol that ensures clear differentiation and challenging distribution +shifts. Then we introduce BOSS, a Benchmark suite for Out-of-distribution +robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we +conduct a series of experiments on pre-trained language models for analysis and +evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the +relationship between in-distribution (ID) and OOD performance. We identify +three typical types that unveil the inner learning mechanism, which could +potentially facilitate the forecasting of OOD robustness, correlating with the +advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and +find that, despite exhibiting some effectiveness in specific cases, they do not +offer significant improvement compared to vanilla fine-tuning. Further, we +evaluate 5 LLMs with various adaptation paradigms and find that when sufficient +ID data is available, fine-tuning domain-specific models outperform LLMs on ID +examples significantly. However, in the case of OOD instances, prioritizing +LLMs with in-context learning yields better results. We identify that both +fine-tuned small models and LLMs face challenges in effectively addressing +downstream tasks. The code is public at +\url{https://github.com/lifan-yuan/OOD_NLP}. 
+" +Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection,Yu Bai,http://arxiv.org/pdf/2306.04637v2.pdf,2023-06-07,"['cs.lg', 'cs.ai', 'cs.cl', 'math.st', 'stat.ml', 'stat.th']",2306.04637v2.pdf," Neural sequence models based on the transformer architecture have +demonstrated remarkable \emph{in-context learning} (ICL) abilities, where they +can perform new tasks when prompted with training and test examples, without +any parameter update to the model. This work first provides a comprehensive +statistical theory for transformers to perform ICL. Concretely, we show that +transformers can implement a broad class of standard machine learning +algorithms in context, such as least squares, ridge regression, Lasso, learning +generalized linear models, and gradient descent on two-layer neural networks, +with near-optimal predictive power on various in-context data distributions. +Using an efficient implementation of in-context gradient descent as the +underlying mechanism, our transformer constructions admit mild size bounds, and +can be learned with polynomially many pretraining sequences. + Building on these ``base'' ICL algorithms, intriguingly, we show that +transformers can implement more complex ICL procedures involving +\emph{in-context algorithm selection}, akin to what a statistician can do in +real life -- A \emph{single} transformer can adaptively select different base +ICL algorithms -- or even perform qualitatively different tasks -- on different +input sequences, without any explicit prompting of the right algorithm or task. +We both establish this in theory by explicit constructions, and also observe +this phenomenon experimentally. In theory, we construct two general mechanisms +for algorithm selection with concrete examples: pre-ICL testing, and post-ICL +validation. As an example, we use the post-ICL validation mechanism to +construct a transformer that can perform nearly Bayes-optimal ICL on a +challenging task -- noisy linear models with mixed noise levels. +Experimentally, we demonstrate the strong in-context algorithm selection +capabilities of standard transformer architectures. +" +Instruction Tuned Models are Quick Learners,Himanshu Gupta,http://arxiv.org/pdf/2306.05539v1.pdf,2023-05-17,['cs.cl'],2306.05539v1.pdf," Instruction tuning of language models has demonstrated the ability to enhance +model generalization to unseen tasks via in-context learning using a few +examples. However, typical supervised learning still requires a plethora of +downstream training data for finetuning. Often in real-world situations, there +is a scarcity of data available for finetuning, falling somewhere between few +shot inference and fully supervised finetuning. In this work, we demonstrate +the sample efficiency of instruction tuned models over various tasks by +estimating the minimal downstream training data required by them to perform +transfer learning and match the performance of state-of-the-art (SOTA) +supervised models. We conduct experiments on 119 tasks from Super Natural +Instructions (SuperNI) in both the single task learning (STL) and multi task +learning (MTL) settings. Our findings reveal that, in the STL setting, +instruction tuned models equipped with 25% of the downstream train data surpass +the SOTA performance on the downstream tasks. 
In the MTL setting, an +instruction tuned model trained on only 6% of downstream training data achieve +SOTA, while using 100% of the training data results in a 3.69% points +improvement (ROUGE-L 74.68) over the previous SOTA. We conduct an analysis on +T5 vs Tk-Instruct by developing several baselines to demonstrate that +instruction tuning aids in increasing both sample efficiency and transfer +learning. Additionally, we observe a consistent ~4% performance increase in +both settings when pre-finetuning is performed with instructions. Finally, we +conduct a categorical study and find that contrary to previous results, tasks +in the question rewriting and title generation categories suffer from +instruction tuning. +" +Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control,Longtao Zheng,http://arxiv.org/pdf/2306.07863v2.pdf,2023-06-13,['cs.ai'],2306.07863v2.pdf," Building agents using large language models (LLMs) to control computers is an +emerging research field, where the agent perceives computer states and performs +actions to accomplish complex tasks. Previous computer agents have demonstrated +the benefits of in-context learning (ICL); however, their performance is +hindered by several issues. First, the limited context length of LLMs and +complex computer states restrict the number of exemplars, as a single webpage +can consume the entire context. Second, the exemplars in current methods, such +as high-level plans and multi-choice questions, cannot represent complete +trajectories, leading to suboptimal performance in tasks that require many +steps or repeated actions. Third, existing computer agents rely on +task-specific exemplars and overlook the similarity among tasks, resulting in +poor generalization to novel tasks. To address these challenges, we introduce +Synapse, featuring three key components: i) state abstraction, which filters +out task-irrelevant information from raw states, allowing more exemplars within +the limited context, ii) trajectory-as-exemplar prompting, which prompts the +LLM with complete trajectories of the abstracted states and actions for +improved multi-step decision-making, and iii) exemplar memory, which stores the +embeddings of exemplars and retrieves them via similarity search for +generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard +task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse +achieves a 99.2% average success rate (a 10% relative improvement) across 64 +tasks using demonstrations from only 48 tasks. Notably, Synapse is the first +ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a +53% relative improvement in average step success rate over the previous +state-of-the-art prompting scheme in Mind2Web. +" +Language to Rewards for Robotic Skill Synthesis,Wenhao Yu,http://arxiv.org/pdf/2306.08647v2.pdf,2023-06-14,"['cs.ro', 'cs.ai', 'cs.lg']",2306.08647v2.pdf," Large language models (LLMs) have demonstrated exciting progress in acquiring +diverse new capabilities through in-context learning, ranging from logical +reasoning to code-writing. Robotics researchers have also explored using LLMs +to advance the capabilities of robotic control. However, since low-level robot +actions are hardware-dependent and underrepresented in LLM training corpora, +existing efforts in applying LLMs to robotics have largely treated LLMs as +semantic planners or relied on human-engineered control primitives to interface +with the robot. 
On the other hand, reward functions are shown to be flexible +representations that can be optimized for control policies to achieve diverse +tasks, while their semantic richness makes them suitable to be specified by +LLMs. In this work, we introduce a new paradigm that harnesses this realization +by utilizing LLMs to define reward parameters that can be optimized and +accomplish variety of robotic tasks. Using reward as the intermediate interface +generated by LLMs, we can effectively bridge the gap between high-level +language instructions or corrections to low-level robot actions. Meanwhile, +combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive +behavior creation experience where users can immediately observe the results +and provide feedback to the system. To systematically evaluate the performance +of our proposed method, we designed a total of 17 tasks for a simulated +quadruped robot and a dexterous manipulator robot. We demonstrate that our +proposed method reliably tackles 90% of the designed tasks, while a baseline +using primitive skills as the interface with Code-as-policies achieves 50% of +the tasks. We further validated our method on a real robot arm where complex +manipulation skills such as non-prehensile pushing emerge through our +interactive system. +" +Trained Transformers Learn Linear Models In-Context,Ruiqi Zhang,http://arxiv.org/pdf/2306.09927v3.pdf,2023-06-16,"['stat.ml', 'cs.ai', 'cs.cl', 'cs.lg']",2306.09927v3.pdf," Attention-based neural networks such as transformers have demonstrated a +remarkable ability to exhibit in-context learning (ICL): Given a short prompt +sequence of tokens from an unseen task, they can formulate relevant per-token +and next-token predictions without any parameter updates. By embedding a +sequence of labeled training data and unlabeled test data as a prompt, this +allows for transformers to behave like supervised learning algorithms. Indeed, +recent work has shown that when training transformer architectures over random +instances of linear regression problems, these models' predictions mimic those +of ordinary least squares. + Towards understanding the mechanisms underlying this phenomenon, we +investigate the dynamics of ICL in transformers with a single linear +self-attention layer trained by gradient flow on linear regression tasks. We +show that despite non-convexity, gradient flow with a suitable random +initialization finds a global minimum of the objective function. At this global +minimum, when given a test prompt of labeled examples from a new prediction +task, the transformer achieves prediction error competitive with the best +linear predictor over the test prompt distribution. We additionally +characterize the robustness of the trained transformer to a variety of +distribution shifts and show that although a number of shifts are tolerated, +shifts in the covariate distribution of the prompts are not. Motivated by this, +we consider a generalized ICL setting where the covariate distributions can +vary across prompts. We show that although gradient flow succeeds at finding a +global minimum in this setting, the trained transformer is still brittle under +mild covariate shifts. We complement this finding with experiments on large, +nonlinear transformer architectures which we show are more robust under +covariate shifts. 
+" +HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution,Eric Nguyen,http://arxiv.org/pdf/2306.15794v1.pdf,2023-06-27,"['cs.lg', 'q-bio.gn']",2306.15794v1.pdf," Genomic (DNA) sequences encode an enormous amount of information for gene +regulation and protein synthesis. Similar to natural language models, +researchers have proposed foundation models in genomics to learn generalizable +features from unlabeled genome data that can then be fine-tuned for downstream +tasks such as identifying regulatory elements. Due to the quadratic scaling of +attention, previous Transformer-based genomic models have used 512 to 4k tokens +as context (<0.001% of the human genome), significantly limiting the modeling +of long-range interactions in DNA. In addition, these methods rely on +tokenizers to aggregate meaningful DNA units, losing single nucleotide +resolution where subtle genetic variations can completely alter protein +function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large +language model based on implicit convolutions was shown to match attention in +quality while allowing longer context lengths and lower time complexity. +Leveraging Hyenas new long-range capabilities, we present HyenaDNA, a genomic +foundation model pretrained on the human reference genome with context lengths +of up to 1 million tokens at the single nucleotide-level, an up to 500x +increase over previous dense attention-based models. HyenaDNA scales +sub-quadratically in sequence length (training up to 160x faster than +Transformer), uses single nucleotide tokens, and has full global context at +each layer. We explore what longer context enables - including the first use of +in-context learning in genomics for simple adaptation to novel tasks without +updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide +Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets +using a model with orders of magnitude less parameters and pretraining data. On +the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets on average by ++9 accuracy points. +" +Generative Type Inference for Python,Yun Peng,http://arxiv.org/pdf/2307.09163v1.pdf,2023-07-18,['cs.se'],2307.09163v1.pdf," Python is a popular dynamic programming language, evidenced by its ranking as +the second most commonly used language on GitHub. However, its dynamic type +system can lead to potential type errors, leading researchers to explore +automatic type inference approaches for Python programs. The rule-based type +inference approaches can ensure the accuracy of predicted variable types, but +they suffer from low coverage problems. Supervised type inference approaches, +while feature-agnostic, require large, high-quality annotated datasets and are +limited to pre-defined types. As zero-shot approaches, the cloze-style +approaches reformulate the type inference problem into a fill-in-the-blank +problem. However, their performance is limited. + This paper introduces TypeGen, a few-shot generative type inference approach +that incorporates static domain knowledge from static analysis. TypeGen creates +chain-of-thought (COT) prompts by translating the type inference steps of +static analysis into prompts based on the type dependency graphs (TDGs), +enabling language models to learn from how static analysis infers types. By +combining COT prompts with code slices and type hints, TypeGen constructs +example prompts from human annotations. 
TypeGen only requires very few +annotated examples to teach language models to generate similar COT prompts via +in-context learning. Moreover, TypeGen enhances the interpretability of results +through the use of the input-explanation-output strategy. Experiments show that +TypeGen outperforms the best baseline Type4Py by 10.0% for argument type +prediction and 22.5% in return value type prediction in terms of top-1 Exact +Match by using only five examples. Furthermore, TypeGen achieves substantial +improvements of 27% to 84% compared to the zero-shot performance of large +language models with parameter sizes ranging from 1.3B to 175B in terms of +top-1 Exact Match. +" +Hypothesis Search: Inductive Reasoning with Language Models,Ruocheng Wang,http://arxiv.org/pdf/2309.05660v1.pdf,2023-09-11,"['cs.lg', 'cs.ai', 'cs.cl']",2309.05660v1.pdf," Inductive reasoning is a core problem-solving capacity: humans can identify +underlying principles from a few examples, which can then be robustly +generalized to novel scenarios. Recent work has evaluated large language models +(LLMs) on inductive reasoning tasks by directly prompting them yielding ""in +context learning."" This can work well for straightforward inductive tasks, but +performs very poorly on more complex tasks such as the Abstraction and +Reasoning Corpus (ARC). In this work, we propose to improve the inductive +reasoning ability of LLMs by generating explicit hypotheses at multiple levels +of abstraction: we prompt the LLM to propose multiple abstract hypotheses about +the problem, in natural language, then implement the natural language +hypotheses as concrete Python programs. These programs can be directly verified +by running on the observed examples and generalized to novel inputs. Because of +the prohibitive cost of generation with state-of-the-art LLMs, we consider a +middle step to filter the set of hypotheses that will be implemented into +programs: we either ask the LLM to summarize into a smaller set of hypotheses, +or ask human annotators to select a subset of the hypotheses. We verify our +pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its +variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem +subset of ARC, our automated pipeline using LLM summaries achieves 27.5% +accuracy, significantly outperforming the direct prompting baseline (accuracy +of 12.5%). With the minimal human input of selecting from LLM-generated +candidates, the performance is boosted to 37.5%. (And we argue this is a lower +bound on the performance of our approach without filtering.) Our ablation +studies show that abstract hypothesis generation and concrete program +representations are both beneficial for LLMs to perform inductive reasoning +tasks. +" +How FaR Are Large Language Models From Agents with Theory-of-Mind?,Pei Zhou,http://arxiv.org/pdf/2310.03051v1.pdf,2023-10-04,"['cs.cl', 'cs.ai']",2310.03051v1.pdf," ""Thinking is for Doing."" Humans can infer other people's mental states from +observations--an ability called Theory-of-Mind (ToM)--and subsequently act +pragmatically on those inferences. Existing question answering benchmarks such +as ToMi ask models questions to make inferences about beliefs of characters in +a story, but do not test whether models can then use these inferences to guide +their actions. 
We propose a new evaluation paradigm for large language models +(LLMs): Thinking for Doing (T4D), which requires models to connect inferences +about others' mental states to actions in social scenarios. Experiments on T4D +demonstrate that LLMs such as GPT-4 and PaLM 2 seemingly excel at tracking +characters' beliefs in stories, but they struggle to translate this capability +into strategic action. Our analysis reveals the core challenge for LLMs lies in +identifying the implicit inferences about mental states without being +explicitly asked about as in ToMi, that lead to choosing the correct action in +T4D. To bridge this gap, we introduce a zero-shot prompting framework, Foresee +and Reflect (FaR), which provides a reasoning structure that encourages LLMs to +anticipate future challenges and reason about potential actions. FaR boosts +GPT-4's performance from 50% to 71% on T4D, outperforming other prompting +methods such as Chain-of-Thought and Self-Ask. Moreover, FaR generalizes to +diverse out-of-distribution story structures and scenarios that also require +ToM inferences to choose an action, consistently outperforming other methods +including few-shot in-context learning. +" +Entity Matching using Large Language Models,Ralph Peeters,http://arxiv.org/pdf/2310.11244v1.pdf,2023-10-17,"['cs.cl', 'cs.lg']",2310.11244v1.pdf," Entity Matching is the task of deciding whether two entity descriptions refer +to the same real-world entity. Entity Matching is a central step in most data +integration pipelines and an enabler for many e-commerce applications which +require to match products offers from different vendors. State-of-the-art +entity matching methods often rely on pre-trained language models (PLMs) such +as BERT or RoBERTa. Two major drawbacks of these models for entity matching are +that (i) the models require significant amounts of task-specific training data +and (ii) the fine-tuned models are not robust concerning out-of-distribution +entities. In this paper, we investigate using large language models (LLMs) for +entity matching as a less domain-specific training data reliant and more robust +alternative to PLM-based matchers. Our study covers hosted LLMs, such as GPT3.5 +and GPT4, as well as open source LLMs based on Llama2 which can be run locally. +We evaluate these models in a zero-shot scenario as well as a scenario where +task-specific training data is available. We compare different prompt designs +as well as the prompt sensitivity of the models in the zero-shot scenario. We +investigate (i) the selection of in-context demonstrations, (ii) the generation +of matching rules, as well as (iii) fine-tuning GPT3.5 in the second scenario +using the same pool of training data across the different approaches. Our +experiments show that GPT4 without any task-specific training data outperforms +fine-tuned PLMs (RoBERTa and Ditto) on three out of five benchmark datasets +reaching F1 scores around 90%. The experiments with in-context learning and +rule generation show that all models beside of GPT4 benefit from these +techniques (on average 5.9% and 2.2% F1), while GPT4 does not need such +additional guidance in most cases... 
+" +CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment,Jixiang Hong,http://arxiv.org/pdf/2310.16271v1.pdf,2023-10-25,"['cs.cl', 'cs.ai']",2310.16271v1.pdf," Language models trained on large-scale corpus often generate content that is +harmful, toxic, or contrary to human preferences, making their alignment with +human values a critical concern. Reinforcement learning from human feedback +(RLHF) with algorithms like PPO is a prevalent approach for alignment but is +often complex, unstable, and resource-intensive. Recently, ranking-based +alignment methods have emerged, offering stability and effectiveness by +replacing the RL framework with supervised fine-tuning, but they are costly due +to the need for annotated data. Considering that existing large language models +(LLMs) like ChatGPT are already relatively well-aligned and cost-friendly, +researchers have begun to align the language model with human preference from +AI feedback. The common practices, which unidirectionally distill the +instruction-following responses from LLMs, are constrained by their bottleneck. +Thus we introduce CycleAlign to distill alignment capabilities from +parameter-invisible LLMs (black-box) to a parameter-visible model (white-box) +in an iterative manner. With in-context learning (ICL) as the core of the +cycle, the black-box models are able to rank the model-generated responses +guided by human-craft instruction and demonstrations about their preferences. +During iterative interaction, the white-box models also have a judgment about +responses generated by them. Consequently, the agreement ranking could be +viewed as a pseudo label to dynamically update the in-context demonstrations +and improve the preference ranking ability of black-box models. Through +multiple interactions, the CycleAlign framework could align the white-box model +with the black-box model effectively in a low-resource way. Empirical results +illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing +methods, and achieves the state-of-the-art performance in alignment with human +value. +" +Transformers are Efficient In-Context Estimators for Wireless Communication,Vicram Rajagopalan,http://arxiv.org/pdf/2311.00226v1.pdf,2023-11-01,"['eess.sp', 'cs.lg']",2311.00226v1.pdf," Pre-trained transformers can perform in-context learning, where they adapt to +a new task using only a small number of prompts without any explicit model +optimization. Inspired by this attribute, we propose a novel approach, called +in-context estimation, for the canonical communication problem of estimating +transmitted symbols from received symbols. A communication channel is +essentially a noisy function that maps transmitted symbols to received symbols, +and this function can be represented by an unknown parameter whose statistics +depend on an (also unknown) latent context. Conventional approaches ignore this +hierarchical structure and simply attempt to use known transmissions, called +pilots, to perform a least-squares estimate of the channel parameter, which is +then used to estimate successive, unknown transmitted symbols. We make the +basic connection that transformers show excellent contextual sequence +completion with a few prompts, and so they should be able to implicitly +determine the latent context from pilot symbols to perform end-to-end +in-context estimation of transmitted symbols. 
Furthermore, the transformer +should use information efficiently, i.e., it should utilize any pilots received +to attain the best possible symbol estimates. Through extensive simulations, we +show that in-context estimation not only significantly outperforms standard +approaches, but also achieves the same performance as an estimator with perfect +knowledge of the latent context within a few context examples. Thus, we make a +strong case that transformers are efficient in-context estimators in the +communication setting. +" +Multimodal Prompt Learning for Product Title Generation with Extremely Limited Labels,Bang Yang,http://arxiv.org/pdf/2307.01969v1.pdf,2023-07-05,['cs.cv'],2307.01969v1.pdf," Generating an informative and attractive title for the product is a crucial +task for e-commerce. Most existing works follow the standard multimodal natural +language generation approaches, e.g., image captioning, and employ the large +scale of human-labelled datasets to train desirable models. However, for novel +products, especially in a different domain, there are few existing labelled +data. In this paper, we propose a prompt-based approach, i.e., the Multimodal +Prompt Learning framework, to accurately and efficiently generate titles for +novel products with limited labels. We observe that the core challenges of +novel product title generation are the understanding of novel product +characteristics and the generation of titles in a novel writing style. To this +end, we build a set of multimodal prompts from different modalities to preserve +the corresponding characteristics and writing styles of novel products. As a +result, with extremely limited labels for training, the proposed method can +retrieve the multimodal prompts to generate desirable titles for novel +products. The experiments and analyses are conducted on five novel product +categories under both the in-domain and out-of-domain experimental settings. +The results show that, with only 1% of downstream labelled data for training, +our proposed approach achieves the best few-shot results and even achieves +competitive results with fully-supervised methods trained on 100% of training +data; With the full labelled data for training, our method achieves +state-of-the-art results. +" +Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative Multimodal Prompt,Xiaocui Yang,http://arxiv.org/pdf/2305.10169v2.pdf,2023-05-17,['cs.mm'],2305.10169v2.pdf," We have witnessed the rapid proliferation of multimodal data on numerous +social media platforms. Conventional studies typically require massive labeled +data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA). +However, collecting and annotating fine-grained multimodal data for MABSA is +tough. To alleviate the above issue, we perform three MABSA-related tasks with +quite a small number of labeled multimodal samples. We first build diverse and +comprehensive multimodal few-shot datasets according to the data distribution. +To capture the specific prompt for each aspect term in a few-shot scenario, we +propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which +includes the Multimodal Encoder module and the N-Stream Decoders module. We +further introduce a subtask to predict the number of aspect terms in each +instance to construct the multimodal prompt. Extensive experiments on two +datasets demonstrate that our approach outperforms strong baselines on two +MABSA-related tasks in the few-shot setting. 
+" +VIMA: General Robot Manipulation with Multimodal Prompts,Yunfan Jiang,http://arxiv.org/pdf/2210.03094v2.pdf,2022-10-06,"['cs.ro', 'cs.ai', 'cs.lg']",2210.03094v2.pdf," Prompt-based learning has emerged as a successful paradigm in natural +language processing, where a single general-purpose language model can be +instructed to perform any task specified by input prompts. Yet task +specification in robotics comes in various forms, such as imitating one-shot +demonstrations, following language instructions, and reaching visual goals. +They are often considered different tasks and tackled by specialized models. We +show that a wide spectrum of robot manipulation tasks can be expressed with +multimodal prompts, interleaving textual and visual tokens. Accordingly, we +develop a new simulation benchmark that consists of thousands of +procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert +trajectories for imitation learning, and a four-level evaluation protocol for +systematic generalization. We design a transformer-based robot agent, VIMA, +that processes these prompts and outputs motor actions autoregressively. VIMA +features a recipe that achieves strong model scalability and data efficiency. +It outperforms alternative designs in the hardest zero-shot generalization +setting by up to $2.9\times$ task success rate given the same training data. +With $10\times$ less training data, VIMA still performs $2.7\times$ better than +the best competing variant. Code and video demos are available at +https://vimalabs.github.io/ +" +Delving into Multimodal Prompting for Fine-grained Visual Classification,Xin Jiang,http://arxiv.org/pdf/2309.08912v1.pdf,2023-09-16,"['cs.cv', 'cs.mm']",2309.08912v1.pdf," Fine-grained visual classification (FGVC) involves categorizing fine +subdivisions within a broader category, which poses challenges due to subtle +inter-class discrepancies and large intra-class variations. However, prevailing +approaches primarily focus on uni-modal visual concepts. Recent advancements in +pre-trained vision-language models have demonstrated remarkable performance in +various high-level vision tasks, yet the applicability of such models to FGVC +tasks remains uncertain. In this paper, we aim to fully exploit the +capabilities of cross-modal description to tackle FGVC tasks and propose a +novel multimodal prompting solution, denoted as MP-FGVC, based on the +contrastive language-image pertaining (CLIP) model. Our MP-FGVC comprises a +multimodal prompts scheme and a multimodal adaptation scheme. The former +includes Subcategory-specific Vision Prompt (SsVP) and Discrepancy-aware Text +Prompt (DaTP), which explicitly highlights the subcategory-specific +discrepancies from the perspectives of both vision and language. The latter +aligns the vision and text prompting elements in a common semantic space, +facilitating cross-modal collaborative reasoning through a Vision-Language +Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a +two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained +CLIP model and expedite efficient adaptation for FGVC. Extensive experiments +conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC. 
+" +Multimodal Prompt Transformer with Hybrid Contrastive Learning for Emotion Recognition in Conversation,Shihao Zou,http://arxiv.org/pdf/2310.04456v1.pdf,2023-10-04,"['cs.cl', 'cs.sd', 'eess.as']",2310.04456v1.pdf," Emotion Recognition in Conversation (ERC) plays an important role in driving +the development of human-machine interaction. Emotions can exist in multiple +modalities, and multimodal ERC mainly faces two problems: (1) the noise problem +in the cross-modal information fusion process, and (2) the prediction problem +of less sample emotion labels that are semantically similar but different +categories. To address these issues and fully utilize the features of each +modality, we adopted the following strategies: first, deep emotion cues +extraction was performed on modalities with strong representation ability, and +feature filters were designed as multimodal prompt information for modalities +with weak representation ability. Then, we designed a Multimodal Prompt +Transformer (MPT) to perform cross-modal information fusion. MPT embeds +multimodal fusion information into each attention layer of the Transformer, +allowing prompt information to participate in encoding textual features and +being fused with multi-level textual information to obtain better multimodal +fusion features. Finally, we used the Hybrid Contrastive Learning (HCL) +strategy to optimize the model's ability to handle labels with few samples. +This strategy uses unsupervised contrastive learning to improve the +representation ability of multimodal fusion and supervised contrastive learning +to mine the information of labels with few samples. Experimental results show +that our proposed model outperforms state-of-the-art models in ERC on two +benchmark datasets. +" +2nd Place Winning Solution for the CVPR2023 Visual Anomaly and Novelty Detection Challenge: Multimodal Prompting for Data-centric Anomaly Detection,Yunkang Cao,http://arxiv.org/pdf/2306.09067v2.pdf,2023-06-15,['cs.cv'],2306.09067v2.pdf," This technical report introduces the winning solution of the team Segment Any +Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. +Going beyond uni-modal prompt, e.g., language prompt, we present a novel +framework, i.e., Segment Any Anomaly + (SAA$+$), for zero-shot anomaly +segmentation with multi-modal prompts for the regularization of cascaded modern +foundation models. Inspired by the great zero-shot generalization ability of +foundation models like Segment Anything, we first explore their assembly (SAA) +to leverage diverse multi-modal prior knowledge for anomaly localization. +Subsequently, we further introduce multimodal prompts (SAA$+$) derived from +domain expert knowledge and target image context to enable the non-parameter +adaptation of foundation models to anomaly segmentation. The proposed SAA$+$ +model achieves state-of-the-art performance on several anomaly segmentation +benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will +release the code of our winning solution for the CVPR2023 VAN. +" +Multimodal Prompt Retrieval for Generative Visual Question Answering,Timothy Ossowski,http://arxiv.org/pdf/2306.17675v1.pdf,2023-06-30,"['cs.cv', 'cs.ai']",2306.17675v1.pdf," Recent years have witnessed impressive results of pre-trained vision-language +models on knowledge-intensive tasks such as visual question answering (VQA). 
+Despite the recent advances in VQA, existing methods mainly adopt a +discriminative formulation that predicts answers within a pre-defined label +set, leading to easy overfitting on low-resource domains with limited labeled +data (e.g., medicine) and poor generalization under domain shift to another +dataset. To tackle this limitation, we propose a novel generative model +enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts +and multimodal features to generate answers in free text. Our generative model +enables rapid zero-shot dataset adaptation to unseen data distributions and +open-set answer labels across datasets. Our experiments on medical VQA tasks +show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy +points in a few-shot domain adaptation setting. +" +Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts,Deniz Engin,http://arxiv.org/pdf/2309.15915v1.pdf,2023-09-27,['cs.cv'],2309.15915v1.pdf," Recent vision-language models are driven by large-scale pretrained models. +However, adapting pretrained models on limited data presents challenges such as +overfitting, catastrophic forgetting, and the cross-modal gap between vision +and language. We introduce a parameter-efficient method to address these +challenges, combining multimodal prompt learning and a transformer-based +mapping network, while keeping the pretrained models frozen. Our experiments on +several video question answering benchmarks demonstrate the superiority of our +approach in terms of performance and parameter efficiency on both zero-shot and +few-shot settings. Our code is available at https://engindeniz.github.io/vitis. +" +Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting,Syed Talal Wasim,http://arxiv.org/pdf/2304.03307v1.pdf,2023-04-06,"['cs.cv', 'eess.iv']",2304.03307v1.pdf," Adopting contrastive image-text pretrained models like CLIP towards video +classification has gained attention due to its cost-effectiveness and +competitive performance. However, recent works in this area face a trade-off. +Finetuning the pretrained model to achieve strong supervised performance +results in low zero-shot generalization. Similarly, freezing the backbone to +retain zero-shot capability causes significant drop in supervised accuracy. +Because of this, recent works in literature typically train separate models for +supervised and zero-shot action recognition. In this work, we propose a +multimodal prompt learning scheme that works to balance the supervised and +zero-shot performance under a single unified training. Our prompting approach +on the vision side caters for three aspects: 1) Global video-level prompts to +model the data distribution; 2) Local frame-level prompts to provide per-frame +discriminative conditioning; and 3) a summary prompt to extract a condensed +video representation. Additionally, we define a prompting scheme on the text +side to augment the textual context. Through this prompting scheme, we can +achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and +UCF101 while remaining competitive in the supervised setting. By keeping the +pretrained backbone frozen, we optimize a much lower number of parameters and +retain the existing general representation which helps achieve the strong +zero-shot performance. Our codes/models are released at +https://github.com/TalalWasim/Vita-CLIP. 
+" +Similarity-Aware Multimodal Prompt Learning for Fake News Detection,Ye Jiang,http://arxiv.org/pdf/2304.04187v3.pdf,2023-04-09,['cs.cl'],2304.04187v3.pdf," The standard paradigm for fake news detection mainly utilizes text +information to model the truthfulness of news. However, the discourse of online +fake news is typically subtle and it requires expert knowledge to use textual +information to debunk fake news. Recently, studies focusing on multimodal fake +news detection have outperformed text-only methods. Recent approaches utilizing +the pre-trained model to extract unimodal features, or fine-tuning the +pre-trained model directly, have become a new paradigm for detecting fake news. +Again, this paradigm either requires a large number of training instances, or +updates the entire set of pre-trained model parameters, making real-world fake +news detection impractical. Furthermore, traditional multimodal methods fuse +the cross-modal features directly without considering that the uncorrelated +semantic representation might inject noise into the multimodal features. This +paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE) +framework. First, we incorporate prompt learning into multimodal fake news +detection. Prompt learning, which only tunes prompts with a frozen language +model, can reduce memory usage significantly and achieve comparable +performances, compared with fine-tuning. We analyse three prompt templates with +a soft verbalizer to detect fake news. In addition, we introduce the +similarity-aware fusing method to adaptively fuse the intensity of multimodal +representation and mitigate the noise injection via uncorrelated cross-modal +features. For evaluation, SAMPLE surpasses the F1 and the accuracies of +previous works on two benchmark multimodal datasets, demonstrating the +effectiveness of the proposed method in detecting fake news. In addition, +SAMPLE also is superior to other approaches regardless of few-shot and +data-rich settings. +" +Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion,Nisha Huang,http://arxiv.org/pdf/2209.13360v2.pdf,2022-09-27,['cs.cv'],2209.13360v2.pdf," Digital art synthesis is receiving increasing attention in the multimedia +community because of engaging the public with art effectively. Current digital +art synthesis methods usually use single-modality inputs as guidance, thereby +limiting the expressiveness of the model and the diversity of generated +results. To solve this problem, we propose the multimodal guided artwork +diffusion (MGAD) model, which is a diffusion-based digital artwork generation +approach that utilizes multimodal prompts as guidance to control the +classifier-free diffusion model. Additionally, the contrastive language-image +pretraining (CLIP) model is used to unify text and image modalities. Extensive +experimental results on the quality and quantity of the generated digital art +paintings confirm the effectiveness of the combination of the diffusion model +and multimodal guidance. Code is available at +https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion. 
+" +Multimodal Prompting with Missing Modalities for Visual Recognition,Yi-Lun Lee,http://arxiv.org/pdf/2303.03369v2.pdf,2023-03-06,['cs.cv'],2303.03369v2.pdf," In this paper, we tackle two challenges in multimodal learning for visual +recognition: 1) when missing-modality occurs either during training or testing +in real-world situations; and 2) when the computation resources are not +available to finetune on heavy transformer models. To this end, we propose to +utilize prompt learning and mitigate the above two challenges together. +Specifically, our modality-missing-aware prompts can be plugged into multimodal +transformers to handle general missing-modality cases, while only requiring +less than 1% learnable parameters compared to training the entire model. We +further explore the effect of different prompt configurations and analyze the +robustness to missing modality. Extensive experiments are conducted to show the +effectiveness of our prompt learning framework that improves the performance +under various missing-modality cases, while alleviating the requirement of +heavy model re-training. Code is available. +" +Audio Visual Language Maps for Robot Navigation,Chenguang Huang,http://arxiv.org/pdf/2303.07522v2.pdf,2023-03-13,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.lg']",2303.07522v2.pdf," While interacting in the world is a multi-sensory experience, many robots +continue to predominantly rely on visual perception to map and navigate in +their environments. In this work, we propose Audio-Visual-Language Maps +(AVLMaps), a unified 3D spatial map representation for storing cross-modal +information from audio, visual, and language cues. AVLMaps integrate the +open-vocabulary capabilities of multimodal foundation models pre-trained on +Internet-scale data by fusing their features into a centralized 3D voxel grid. +In the context of navigation, we show that AVLMaps enable robot systems to +index goals in the map based on multimodal queries, e.g., textual descriptions, +images, or audio snippets of landmarks. In particular, the addition of audio +information enables robots to more reliably disambiguate goal locations. +Extensive experiments in simulation show that AVLMaps enable zero-shot +multimodal goal navigation from multimodal prompts and provide 50% better +recall in ambiguous scenarios. These capabilities extend to mobile robots in +the real world - navigating to landmarks referring to visual, audio, and +spatial concepts. Videos and code are available at: https://avlmaps.github.io. +" +Multitask Multimodal Prompted Training for Interactive Embodied Task Completion,Georgios Pantazopoulos,http://arxiv.org/pdf/2311.04067v1.pdf,2023-11-07,"['cs.lg', 'cs.ai', 'cs.cv']",2311.04067v1.pdf," Interactive and embodied tasks pose at least two fundamental challenges to +existing Vision & Language (VL) models, including 1) grounding language in +trajectories of actions and observations, and 2) referential disambiguation. To +tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a +unified encoder-decoder model that reasons over images and trajectories, and +casts action prediction as multimodal text generation. By unifying all tasks as +text generation, EMMA learns a language of actions which facilitates transfer +across tasks. Different to previous modular approaches with independently +trained components, we use a single multitask model where each task contributes +to goal completion. 
EMMA performs on par with similar models on several VL +benchmarks and sets a new state-of-the-art performance (36.81% success rate) on +the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided +agents in the Alexa Arena +" +MaPLe: Multi-modal Prompt Learning,Muhammad Uzair Khattak,http://arxiv.org/pdf/2210.03117v3.pdf,2022-10-06,['cs.cv'],2210.03117v3.pdf," Pre-trained vision-language (V-L) models such as CLIP have shown excellent +generalization ability to downstream tasks. However, they are sensitive to the +choice of input text prompts and require careful selection of prompt templates +to perform well. Inspired by the Natural Language Processing (NLP) literature, +recent CLIP adaptation approaches learn prompts as the textual inputs to +fine-tune CLIP for downstream tasks. We note that using prompting to adapt +representations in a single branch of CLIP (language or vision) is sub-optimal +since it does not allow the flexibility to dynamically adjust both +representation spaces on a downstream task. In this work, we propose +Multi-modal Prompt Learning (MaPLe) for both vision and language branches to +improve alignment between the vision and language representations. Our design +promotes strong coupling between the vision-language prompts to ensure mutual +synergy and discourages learning independent uni-modal solutions. Further, we +learn separate prompts across different early stages to progressively model the +stage-wise feature relationships to allow rich context learning. We evaluate +the effectiveness of our approach on three representative tasks of +generalization to novel classes, new target datasets and unseen domain shifts. +Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable +performance and achieves an absolute gain of 3.45% on novel classes and 2.72% +on overall harmonic-mean, averaged over 11 diverse image recognition datasets. +Our code and pre-trained models are available at +https://github.com/muzairkhattak/multimodal-prompt-learning. +" +Few-shot Multimodal Sentiment Analysis based on Multimodal Probabilistic Fusion Prompts,Xiaocui Yang,http://arxiv.org/pdf/2211.06607v2.pdf,2022-11-12,"['cs.cl', 'cs.mm']",2211.06607v2.pdf," Multimodal sentiment analysis has gained significant attention due to the +proliferation of multimodal content on social media. However, existing studies +in this area rely heavily on large-scale supervised data, which is +time-consuming and labor-intensive to collect. Thus, there is a need to address +the challenge of few-shot multimodal sentiment analysis. To tackle this +problem, we propose a novel method called Multimodal Probabilistic Fusion +Prompts (MultiPoint) that leverages diverse cues from different modalities for +multimodal sentiment detection in the few-shot scenario. Specifically, we start +by introducing a Consistently Distributed Sampling approach called CDS, which +ensures that the few-shot dataset has the same category distribution as the +full dataset. Unlike previous approaches primarily using prompts based on the +text modality, we design unified multimodal prompts to reduce discrepancies +between different modalities and dynamically incorporate multimodal +demonstrations into the context of each multimodal instance. To enhance the +model's robustness, we introduce a probabilistic fusion method to fuse output +predictions from multiple diverse prompts for each input. Our extensive +experiments on six datasets demonstrate the effectiveness of our approach. 
+First, our method outperforms strong baselines in the multimodal few-shot +setting. Furthermore, under the same amount of data (1% of the full dataset), +our CDS-based experimental results significantly outperform those based on +previously sampled datasets constructed from the same number of instances of +each class. +" +Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing,Alberto Baldrati,http://arxiv.org/pdf/2304.02051v2.pdf,2023-04-04,"['cs.cv', 'cs.ai', 'cs.mm']",2304.02051v2.pdf," Fashion illustration is used by designers to communicate their vision and to +bring the design idea from conceptualization to realization, showing how +clothes interact with the human body. In this context, computer vision can thus +be used to improve the fashion design process. Differently from previous works +that mainly focused on the virtual try-on of garments, we propose the task of +multimodal-conditioned fashion image editing, guiding the generation of +human-centric fashion images by following multimodal prompts, such as text, +human body poses, and garment sketches. We tackle this problem by proposing a +new architecture based on latent diffusion models, an approach that has not +been used before in the fashion domain. Given the lack of existing datasets +suitable for the task, we also extend two existing fashion datasets, namely +Dress Code and VITON-HD, with multimodal annotations collected in a +semi-automatic manner. Experimental results on these new datasets demonstrate +the effectiveness of our proposal, both in terms of realism and coherence with +the given multimodal inputs. Source code and collected multimodal annotations +are publicly available at: +https://github.com/aimagelab/multimodal-garment-designer. +" +Parameter-efficient Tuning of Large-scale Multimodal Foundation Model,Haixin Wang,http://arxiv.org/pdf/2305.08381v3.pdf,2023-05-15,['cs.cv'],2305.08381v3.pdf," Driven by the progress of large-scale pre-training, parameter-efficient +transfer learning has gained immense popularity across different subfields of +Artificial Intelligence. The core is to adapt the model to downstream tasks +with only a small set of parameters. Recently, researchers have leveraged such +proven techniques in multimodal tasks and achieve promising results. However, +two critical issues remain unresolved: how to further reduce the complexity +with lightweight design and how to boost alignment between modalities under +extremely low parameters. In this paper, we propose A graceful prompt framework +for cross-modal transfer (Aurora) to overcome these challenges. Considering the +redundancy in existing architectures, we first utilize the mode approximation +to generate 0.1M trainable parameters to implement the multimodal prompt +tuning, which explores the low intrinsic dimension with only 0.04% parameters +of the pre-trained model. Then, for better modality alignment, we propose the +Informative Context Enhancement and Gated Query Transformation module under +extremely few parameters scenes. A thorough evaluation on six cross-modal +benchmarks shows that it not only outperforms the state-of-the-art but even +outperforms the full fine-tuning approach. Our code is available at: +https://github.com/WillDreamer/Aurora. 
+" +RM-PRT: Realistic Robotic Manipulation Simulator and Benchmark with Progressive Reasoning Tasks,Pengzhen Ren,http://arxiv.org/pdf/2306.11335v2.pdf,2023-06-20,"['cs.ro', 'cs.ai', 'cs.cv', 'cs.lg']",2306.11335v2.pdf," Recently, the advent of pre-trained large-scale language models (LLMs) like +ChatGPT and GPT-4 have significantly advanced the machine's natural language +understanding capabilities. This breakthrough has allowed us to seamlessly +integrate these open-source LLMs into a unified robot simulator environment to +help robots accurately understand and execute human natural language +instructions. To this end, in this work, we introduce a realistic robotic +manipulation simulator and build a Robotic Manipulation with Progressive +Reasoning Tasks (RM-PRT) benchmark on this basis. Specifically, the RM-PRT +benchmark builds a new high-fidelity digital twin scene based on Unreal Engine +5, which includes 782 categories, 2023 objects, and 15K natural language +instructions generated by ChatGPT for a detailed evaluation of robot +manipulation. We propose a general pipeline for the RM-PRT benchmark that takes +as input multimodal prompts containing natural language instructions and +automatically outputs actions containing the movement and position transitions. +We set four natural language understanding tasks with progressive reasoning +levels and evaluate the robot's ability to understand natural language +instructions in two modes of adsorption and grasping. In addition, we also +conduct a comprehensive analysis and comparison of the differences and +advantages of 10 different LLMs in instruction understanding and generation +quality. We hope the new simulator and benchmark will facilitate future +research on language-guided robotic manipulation. Project website: +https://necolizer.github.io/RM-PRT/ . +" +Reframing Instructional Prompts to GPTk's Language,Swaroop Mishra,http://arxiv.org/pdf/2109.07830v3.pdf,2021-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2109.07830v3.pdf," What kinds of instructional prompts are easier to follow for Language Models +(LMs)? We study this question by conducting extensive empirical analysis that +shed light on important features of successful instructional prompts. +Specifically, we study several classes of reframing techniques for manual +reformulation of prompts into more effective ones. Some examples include +decomposing a complex task instruction into multiple simpler tasks or itemizing +instructions into sequential steps. Our experiments compare the zero-shot and +few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks +across 6 categories. Compared with original instructions, our reframed +instructions lead to significant improvements across LMs with different sizes. +For example, the same reframed prompts boost few-shot performance of +GPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all +tasks. Furthermore, reframed instructions reduce the number of examples +required to prompt LMs in the few-shot setting. We hope these +empirically-driven techniques will pave the way towards more effective future +prompting algorithms. 
+" +Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums,Kazuaki Kashihara,http://arxiv.org/pdf/2303.05400v1.pdf,2023-03-05,"['cs.cl', 'cs.ai', 'cs.cr']",2303.05400v1.pdf," With recent trends indicating cyber crimes increasing in both frequency and +cost, it is imperative to develop new methods that leverage data-rich hacker +forums to assist in combating ever evolving cyber threats. Defining +interactions within these forums is critical as it facilitates identifying +highly skilled users, which can improve prediction of novel threats and future +cyber attacks. We propose a method called Next Paragraph Prediction with +Instructional Prompting (NPP-IP) to predict thread structures while grounded on +the context around posts. This is the first time to apply an instructional +prompting approach to the cybersecurity domain. We evaluate our NPP-IP with the +Reddit dataset and Hacker Forums dataset that has posts and thread structures +of real hacker forums' threads, and compare our method's performance with +existing methods. The experimental evaluation shows that our proposed method +can predict the thread structure significantly better than existing methods +allowing for better social network prediction based on forum interactions. +" +Red Teaming Language Model Detectors with Language Models,Zhouxing Shi,http://arxiv.org/pdf/2305.19713v2.pdf,2023-05-31,"['cs.cl', 'cs.lg']",2305.19713v2.pdf," The prevalence and strong capability of large language models (LLMs) present +significant safety and ethical risks if exploited by malicious users. To +prevent the potentially deceptive usage of LLMs, recent works have proposed +algorithms to detect LLM-generated text and protect LLMs. In this paper, we +investigate the robustness and reliability of these LLM detectors under +adversarial attacks. We study two types of attack strategies: 1) replacing +certain words in an LLM's output with their synonyms given the context; 2) +automatically searching for an instructional prompt to alter the writing style +of the generation. In both strategies, we leverage an auxiliary LLM to generate +the word replacements or the instructional prompt. Different from previous +works, we consider a challenging setting where the auxiliary LLM can also be +protected by a detector. Experiments reveal that our attacks effectively +compromise the performance of all detectors in the study with plausible +generations, underscoring the urgent need to improve the robustness of +LLM-generated text detection systems. +" +Large Language Models Encode Clinical Knowledge,Karan Singhal,http://arxiv.org/pdf/2212.13138v1.pdf,2022-12-26,['cs.cl'],2212.13138v1.pdf," Large language models (LLMs) have demonstrated impressive capabilities in +natural language understanding and generation, but the quality bar for medical +and clinical applications is high. Today, attempts to assess models' clinical +knowledge typically rely on automated evaluations on limited benchmarks. There +is no standard to evaluate model predictions and reasoning across a breadth of +tasks. To address this, we present MultiMedQA, a benchmark combining six +existing open question answering datasets spanning professional medical exams, +research, and consumer queries; and HealthSearchQA, a new free-response dataset +of medical questions searched online. We propose a framework for human +evaluation of model answers along multiple axes including factuality, +precision, possible harm, and bias. 
In addition, we evaluate PaLM (a +540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on +MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves +state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, +MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US +Medical License Exam questions), surpassing prior state-of-the-art by over 17%. +However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve +this we introduce instruction prompt tuning, a parameter-efficient approach for +aligning LLMs to new domains using a few exemplars. The resulting model, +Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show +that comprehension, recall of knowledge, and medical reasoning improve with +model scale and instruction prompt tuning, suggesting the potential utility of +LLMs in medicine. Our human evaluations reveal important limitations of today's +models, reinforcing the importance of both evaluation frameworks and method +development in creating safe, helpful LLM models for clinical applications. +" +Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering,Wenjin Wang,http://arxiv.org/pdf/2306.00526v4.pdf,2023-06-01,"['cs.cl', 'cs.ai', 'cs.cv']",2306.00526v4.pdf," Layout-aware pre-trained models has achieved significant progress on document +image question answering. They introduce extra learnable modules into existing +language models to capture layout information within document images from text +bounding box coordinates obtained by OCR tools. However, extra modules +necessitate pre-training on extensive document images. This prevents these +methods from directly utilizing off-the-shelf instruction-tuning language +foundation models, which have recently shown promising potential in zero-shot +learning. Instead, in this paper, we find that instruction-tuning language +models like Claude and ChatGPT can understand layout by spaces and line breaks. +Based on this observation, we propose the LAyout and Task aware Instruction +Prompt (LATIN-Prompt), which consists of layout-aware document content and +task-aware instruction. Specifically, the former uses appropriate spaces and +line breaks to recover the layout information among text segments obtained by +OCR tools, and the latter ensures that generated answers adhere to formatting +requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning +(LATIN-Tuning) to improve the performance of small instruction-tuning models +like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot +performance of Claude and ChatGPT to be comparable to the fine-tuning +performance of SOTAs on document image question answering, and LATIN-Tuning +enhances the zero-shot performance of Alpaca significantly. For example, +LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263% +and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA +by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness +of LATIN-Prompt and LATIN-Tuning. We provide the code in supplementary and will +release it to facilitate future research. +" +InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction,Xiao Wang,http://arxiv.org/pdf/2304.08085v1.pdf,2023-04-17,"['cs.cl', 'cs.ai']",2304.08085v1.pdf," Large language models have unlocked strong multi-task capabilities from +reading instructive prompts. 
However, recent studies have shown that existing +large models still have difficulty with information extraction tasks. For +example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset, +which is significantly lower than the state-of-the-art performance. In this +paper, we propose InstructUIE, a unified information extraction framework based +on instruction tuning, which can uniformly model various information extraction +tasks and capture the inter-task dependency. To validate the proposed method, +we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction +datasets in a unified text-to-text format with expert-written instructions. +Experimental results demonstrate that our method achieves comparable +performance to Bert in supervised settings and significantly outperforms the +state-of-the-art and gpt3.5 in zero-shot settings. +" +Camoscio: an Italian Instruction-tuned LLaMA,Andrea Santilli,http://arxiv.org/pdf/2307.16456v1.pdf,2023-07-31,['cs.cl'],2307.16456v1.pdf," In recent years Large Language Models (LLMs) have increased the state of the +art on several natural language processing tasks. However, their accessibility +is often limited to paid API services, posing challenges for researchers in +conducting extensive investigations. On the other hand, while some open-source +models have been proposed by the community, they are typically multilingual and +not specifically tailored for the Italian language. In an effort to democratize +the available and open resources for the Italian language, in this paper we +introduce Camoscio: a language model specifically tuned to follow users' +prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA +(7b) with LoRA on a corpus of instruction prompts translated to Italian via +ChatGPT. Results indicate that the model's zero-shot performance on various +downstream tasks in Italian competes favorably with existing models +specifically finetuned for those tasks. All the artifacts (code, dataset, +model) are released to the community at the following url: +https://github.com/teelinsan/camoscio +" +Self-Alignment with Instruction Backtranslation,Xian Li,http://arxiv.org/pdf/2308.06259v2.pdf,2023-08-11,['cs.cl'],2308.06259v2.pdf," We present a scalable method to build a high quality instruction following +language model by automatically labelling human-written text with corresponding +instructions. Our approach, named instruction backtranslation, starts with a +language model finetuned on a small amount of seed data, and a given web +corpus. The seed model is used to construct training examples by generating +instruction prompts for web documents (self-augmentation), and then selecting +high quality examples from among these candidates (self-curation). This data is +then used to finetune a stronger model. Finetuning LLaMa on two iterations of +our approach yields a model that outperforms all other LLaMa-based models on +the Alpaca leaderboard not relying on distillation data, demonstrating highly +effective self-alignment. +" +Discrete Prompt Compression with Reinforcement Learning,Hoyoun Jung,http://arxiv.org/pdf/2308.08758v1.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.08758v1.pdf," Instruction-tuned Language Models (LMs) are widely used by users to address +various problems with task-specific prompts. Constraints associated with the +context window length and computational costs encourage the development of +compressed prompts. 
Existing methods rely heavily on training embeddings, which +are designed to accommodate multiple token meanings. This presents challenges +in terms of interpretability, a fixed number of embedding tokens, reusability +across different LMs, and inapplicability when interacting with black-box APIs. +This study proposes prompt compression with reinforcement learning (PCRL), a +novel discrete prompt compression method that addresses these issues. PCRL +employs a computationally efficient policy network that directly edits prompts. +The PCRL training approach can be flexibly applied to various types of LMs, as +well as decoder-only and encoder-decoder architecture, and can be trained +without gradient access to LMs or labeled data. PCRL achieves an average +reduction of 24.6% in token count across various instruction prompts while +preserving performance. Further, we demonstrate that the learned policy can be +transferred to larger LMs, and through various analyses, we aid the +understanding of token importance within prompts. +" +Casteist but Not Racist? Quantifying Disparities in Large Language Model Bias between India and the West,Khyati Khandelwal,http://arxiv.org/pdf/2309.08573v1.pdf,2023-09-15,"['cs.cl', 'cs.cy']",2309.08573v1.pdf," Large Language Models (LLMs), now used daily by millions of users, can encode +societal biases, exposing their users to representational harms. A large body +of scholarship on LLM bias exists but it predominantly adopts a Western-centric +frame and attends comparatively less to bias levels and potential harms in the +Global South. In this paper, we quantify stereotypical bias in popular LLMs +according to an Indian-centric frame and compare bias levels between the Indian +and Western contexts. To do this, we develop a novel dataset which we call +Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and +anti-stereotypical examples for caste and religion contexts. We find that the +majority of LLMs tested are strongly biased towards stereotypes in the Indian +context, especially as compared to the Western context. We finally investigate +Instruction Prompting as a simple intervention to mitigate such bias and find +that it significantly reduces both stereotypical and anti-stereotypical biases +in the majority of cases for GPT-3.5. The findings of this work highlight the +need for including more diverse voices when evaluating LLMs. +" +Harnessing Large Language Models' Empathetic Response Generation Capabilities for Online Mental Health Counselling Support,Siyuan Brandon Loh,http://arxiv.org/pdf/2310.08017v1.pdf,2023-10-12,"['cs.cl', 'i.2']",2310.08017v1.pdf," Large Language Models (LLMs) have demonstrated remarkable performance across +various information-seeking and reasoning tasks. These computational systems +drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also +carry substantial promise in meeting the growing demands of mental health care, +albeit relatively unexplored. As such, this study sought to examine LLMs' +capability to generate empathetic responses in conversations that emulate those +in a mental health counselling setting. We selected five LLMs: version 3.5 and +version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways +Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple +instructional prompt, these models responded to utterances derived from the +EmpatheticDialogues (ED) dataset. 
Using three empathy-related metrics, we +compared their responses to those from traditional response generation dialogue +systems, which were fine-tuned on the ED dataset, along with human-generated +responses. Notably, we discovered that responses from the LLMs were remarkably +more empathetic in most scenarios. We position our findings in light of +catapulting advancements in creating empathetic conversational systems. +" +Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases,Shrimai Prabhumoye,http://arxiv.org/pdf/2112.07868v2.pdf,2021-12-15,"['cs.cl', 'cs.ai']",2112.07868v2.pdf," Detecting social bias in text is challenging due to nuance, subjectivity, and +difficulty in obtaining good quality labeled datasets at scale, especially +given the evolving nature of social biases and society. To address these +challenges, we propose a few-shot instruction-based method for prompting +pre-trained language models (LMs). We select a few class-balanced exemplars +from a small support repository that are closest to the query to be labeled in +the embedding space. We then provide the LM with instruction that consists of +this subset of labeled exemplars, the query text to be classified, a definition +of bias, and prompt it to make a decision. We demonstrate that large LMs used +in a few-shot context can detect different types of fine-grained biases with +similar and sometimes superior accuracy to fine-tuned models. We observe that +the largest 530B parameter model is significantly more effective in detecting +social bias compared to smaller models (achieving at least 13% improvement in +AUC metric compared to other models). It also maintains a high AUC (dropping +less than 2%) when the labeled repository is reduced to as few as $100$ +samples. Large pretrained language models thus make it easier and quicker to +build new bias detectors. +" +"GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models",Archiki Prasad,http://arxiv.org/pdf/2203.07281v2.pdf,2022-03-14,"['cs.cl', 'cs.ai', 'cs.lg']",2203.07281v2.pdf," Providing natural language instructions in prompts is a useful new paradigm +for improving task performance of large language models in a zero-shot setting. +Recent work has aimed to improve such prompts via manual rewriting or +gradient-based tuning. However, manual rewriting is time-consuming and requires +subjective interpretation, while gradient-based tuning can be extremely +computationally demanding for large models and may not be feasible for +API-based models. In this work, we introduce Gradient-free Instructional Prompt +Search (GrIPS), a gradient-free, edit-based search approach for improving task +instructions for large language models. GrIPS takes in instructions designed +for humans and automatically returns an improved, edited prompt, while allowing +for API-based tuning. With InstructGPT models, GrIPS improves the average task +performance by up to 4.30 percentage points on eight classification tasks from +the Natural Instructions dataset (with similar improvements for OPT, BLOOM, and +FLAN-T5). We see improvements for both instruction-only prompts and instruction ++ k-shot examples prompts. Notably, GrIPS outperforms manual rewriting and +purely example-based prompts while controlling for the available compute and +data budget. Further, performance of GrIPS is comparable to select +gradient-based tuning approaches. 
Qualitatively, we show our edits can simplify +instructions and at times make them incoherent but nonetheless improve +accuracy. Our code is available at: https://github.com/archiki/GrIPS +" +LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging,Andy Rosenbaum,http://arxiv.org/pdf/2209.09900v1.pdf,2022-09-20,"['cs.cl', 'cs.ai', 'cs.lg']",2209.09900v1.pdf," We present LINGUIST, a method for generating annotated data for Intent +Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a +5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a +flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS +dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and +Example Extrapolation) by a wide margin, showing absolute improvement for the +target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In +the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST +out-performs a strong baseline of Machine Translation with Slot Alignment by ++4.14 points absolute on ST F1 Score across 6 languages, while matching +performance on IC. Finally, we verify our results on an internal large-scale +multilingual dataset for conversational agent IC+ST and show significant +improvements over a baseline which uses Back-Translation, Paraphrasing and Slot +Catalog Resampling. To our knowledge, we are the first to demonstrate +instruction fine-tuning of a large-scale seq2seq model to control the outputs +of multilingual intent- and slot-labeled data generation. +" +InferFix: End-to-End Program Repair with LLMs,Matthew Jin,http://arxiv.org/pdf/2303.07263v1.pdf,2023-03-13,['cs.se'],2303.07263v1.pdf," Software development life cycle is profoundly influenced by bugs: their +introduction, identification, and eventual resolution account for a significant +portion of software cost. This has motivated software engineering researchers +and practitioners to propose different approaches for automating the +identification and repair of software defects. Large language models have been +adapted to the program repair task through few-shot demonstration learning and +instruction prompting, treating this as an infilling task. However, these +models have only focused on learning general bug-fixing patterns for +uncategorized bugs mined from public repositories. In this paper, we propose +InferFix: a transformer-based program repair framework paired with a +state-of-the-art static analyzer to fix critical security and performance bugs. +InferFix combines a Retriever -- transformer encoder model pretrained via +contrastive learning objective, which aims at searching for semantically +equivalent bugs and corresponding fixes; and a Generator -- a large language +model (Codex Cushman) finetuned on supervised bug-fix data with prompts +augmented via bug type annotations and semantically similar fixes retrieved +from an external non-parametric memory. To train and evaluate our approach, we +curated InferredBugs, a novel, metadata-rich dataset of bugs extracted by +executing the Infer static analyzer on the change histories of thousands of +Java and C# repositories. Our evaluation demonstrates that InferFix outperforms +strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C# +and 76.8% in Java. 
We discuss the deployment of InferFix alongside Infer at +Microsoft which offers an end-to-end solution for detection, classification, +and localization of bugs, as well as fixing and validation of candidate +patches, integrated in the continuous integration pipeline to automate the +software development workflow. +" +Text-based Person Search without Parallel Image-Text Data,Yang Bai,http://arxiv.org/pdf/2305.12964v2.pdf,2023-05-22,['cs.cv'],2305.12964v2.pdf," Text-based person search (TBPS) aims to retrieve the images of the target +person from a large image gallery based on a given natural language +description. Existing methods are dominated by training models with parallel +image-text pairs, which are very costly to collect. In this paper, we make the +first attempt to explore TBPS without parallel image-text data ($\mu$-TBPS), in +which only non-parallel images and texts, or even image-only data, can be +adopted. Towards this end, we propose a two-stage framework, +generation-then-retrieval (GTR), to first generate the corresponding pseudo +text for each image and then perform the retrieval in a supervised manner. In +the generation stage, we propose a fine-grained image captioning strategy to +obtain an enriched description of the person image, which firstly utilizes a +set of instruction prompts to activate the off-the-shelf pretrained +vision-language model to capture and generate fine-grained person attributes, +and then converts the extracted attributes into a textual description via the +finetuned large language model or the hand-crafted template. In the retrieval +stage, considering the noise interference of the generated texts for training +model, we develop a confidence score-based training scheme by enabling more +reliable texts to contribute more during the training. Experimental results on +multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that +the proposed GTR can achieve a promising performance without relying on +parallel image-text data. +" +EDM3: Event Detection as Multi-task Text Generation,Ujjwala Anantheswaran,http://arxiv.org/pdf/2305.16357v1.pdf,2023-05-25,['cs.cl'],2305.16357v1.pdf," Event detection refers to identifying event occurrences in a text and +comprises of two subtasks; event identification and classification. We present +EDM3, a novel approach for Event Detection that formulates three generative +tasks: identification, classification, and combined detection. We show that +EDM3 helps to learn transferable knowledge that can be leveraged to perform +Event Detection and its subtasks concurrently, mitigating the error propagation +inherent in pipelined approaches. Unlike previous dataset- or domain-specific +approaches, EDM3 utilizes the existing knowledge of language models, allowing +it to be trained over any classification schema. We evaluate EDM3 on multiple +event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3 +outperforms 1) single-task performance by 8.4% on average and 2) multi-task +performance without instructional prompts by 2.4% on average. We obtain SOTA +results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other +datasets. We analyze our approach to demonstrate its efficacy in low-resource +and multi-sentence settings. We also show the effectiveness of this approach on +non-standard event configurations such as multi-word and multi-class event +triggers. 
Overall, our results show that EDM3 is a promising approach for Event +Detection that has the potential for real-world applications. +" +VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset,Sihan Chen,http://arxiv.org/pdf/2305.18500v2.pdf,2023-05-29,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'eess.as']",2305.18500v2.pdf," Vision and text have been fully explored in contemporary video-text +foundational models, while other modalities such as audio and subtitles in +videos have not received sufficient attention. In this paper, we resort to +establish connections between multi-modality video tracks, including Vision, +Audio, and Subtitle, and Text by exploring an automatically generated +large-scale omni-modality video caption dataset called VAST-27M. Specifically, +we first collect 27 million open-domain video clips and separately train a +vision and an audio captioner to generate vision and audio captions. Then, we +employ an off-the-shelf Large Language Model (LLM) to integrate the generated +captions, together with subtitles and instructional prompts into omni-modality +captions. Based on the proposed VAST-27M dataset, we train an omni-modality +video-text foundational model named VAST, which can perceive and process +vision, audio, and subtitle modalities from video, and better support various +tasks including vision-text, audio-text, and multi-modal video-text tasks +(retrieval, captioning and QA). Extensive experiments have been conducted to +demonstrate the effectiveness of our proposed VAST-27M corpus and VAST +foundation model. VAST achieves 22 new state-of-the-art results on various +cross-modality benchmarks. Code, model and dataset will be released at +https://github.com/TXH-mercury/VAST. +" +Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing,Wai Man Si,http://arxiv.org/pdf/2308.03558v1.pdf,2023-08-07,"['cs.cr', 'cs.cl']",2308.03558v1.pdf," The Machine Learning as a Service (MLaaS) market is rapidly expanding and +becoming more mature. For example, OpenAI's ChatGPT is an advanced large +language model (LLM) that generates responses for various queries with +associated fees. Although these models can deliver satisfactory performance, +they are far from perfect. Researchers have long studied the vulnerabilities +and limitations of LLMs, such as adversarial attacks and model toxicity. +Inevitably, commercial ML models are also not exempt from such issues, which +can be problematic as MLaaS continues to grow. In this paper, we discover a new +attack strategy against LLM APIs, namely the prompt abstraction attack. +Specifically, we propose Mondrian, a simple and straightforward method that +abstracts sentences, which can lower the cost of using LLM APIs. In this +approach, the adversary first creates a pseudo API (with a lower established +price) to serve as the proxy of the target API (with a higher established +price). Next, the pseudo API leverages Mondrian to modify the user query, +obtain the abstracted response from the target API, and forward it back to the +end user. Our results show that Mondrian successfully reduces user queries' +token length ranging from 13% to 23% across various tasks, including text +classification, generation, and question answering. Meanwhile, these abstracted +queries do not significantly affect the utility of task-specific and general +language models like ChatGPT. Mondrian also reduces instruction prompts' token +length by at least 11% without compromising output quality. 
As a result, the +prompt abstraction attack enables the adversary to profit without bearing the +cost of API development and deployment. +" +Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving,Sumit Kumar Jha,http://arxiv.org/pdf/2309.16436v1.pdf,2023-09-28,"['cs.ai', 'cs.lo']",2309.16436v1.pdf," Generative large language models (LLMs) with instruct training such as GPT-4 +can follow human-provided instruction prompts and generate human-like responses +to these prompts. Apart from natural language responses, they have also been +found to be effective at generating formal artifacts such as code, plans, and +logical specifications from natural language prompts. Despite their remarkably +improved accuracy, these models are still known to produce factually incorrect +or contextually inappropriate results despite their syntactic coherence - a +phenomenon often referred to as hallucination. This limitation makes it +difficult to use these models to synthesize formal artifacts that are used in +safety-critical applications. Unlike tasks such as text summarization and +question-answering, bugs in code, plan, and other formal artifacts produced by +LLMs can be catastrophic. We posit that we can use the satisfiability modulo +theory (SMT) solvers as deductive reasoning engines to analyze the generated +solutions from the LLMs, produce counterexamples when the solutions are +incorrect, and provide that feedback to the LLMs exploiting the dialog +capability of instruct-trained LLMs. This interaction between inductive LLMs +and deductive SMT solvers can iteratively steer the LLM to generate the correct +response. In our experiments, we use planning over the domain of blocks as our +synthesis task for evaluating our approach. We use GPT-4, GPT3.5 Turbo, +Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our +method allows the user to communicate the planning problem in natural language; +even the formulation of queries to SMT solvers is automatically generated from +natural language. Thus, the proposed technique can enable non-expert users to +describe their problems in natural language, and the combination of LLMs and +SMT solvers can produce provably correct solutions. +" +Benchmarking a foundation LLM on its ability to re-label structure names in accordance with the AAPM TG-263 report,Jason Holmes,http://arxiv.org/pdf/2310.03874v1.pdf,2023-10-05,"['physics.med-ph', 'cs.cl']",2310.03874v1.pdf," Purpose: To introduce the concept of using large language models (LLMs) to +re-label structure names in accordance with the American Association of +Physicists in Medicine (AAPM) Task Group (TG)-263 standard, and to establish a +benchmark for future studies to reference. + Methods and Materials: The Generative Pre-trained Transformer (GPT)-4 +application programming interface (API) was implemented as a Digital Imaging +and Communications in Medicine (DICOM) storage server, which upon receiving a +structure set DICOM file, prompts GPT-4 to re-label the structure names of both +target volumes and normal tissues according to the AAPM TG-263. Three disease +sites, prostate, head and neck, and thorax were selected for evaluation. For +each disease site category, 150 patients were randomly selected for manually +tuning the instructions prompt (in batches of 50) and 50 patients were randomly +selected for evaluation. 
Structure names that were considered were those that +were most likely to be relevant for studies utilizing structure contours for +many patients. + Results: The overall re-labeling accuracy of both target volumes and normal +tissues for prostate, head and neck, and thorax cases was 96.0%, 98.5%, and +96.9% respectively. Re-labeling of target volumes was less accurate on average +except for prostate - 100%, 93.1%, and 91.1% respectively. + Conclusions: Given the accuracy of GPT-4 in re-labeling structure names of +both target volumes and normal tissues as presented in this work, LLMs are +poised to be the preferred method for standardizing structure names in +radiation oncology, especially considering the rapid advancements in LLM +capabilities that are likely to continue. +" +What Makes Pre-trained Language Models Better Zero-shot Learners?,Jinghui Lu,http://arxiv.org/pdf/2209.15206v3.pdf,2022-09-30,"['cs.cl', 'cs.ai']",2209.15206v3.pdf," Current methods for prompt learning in zeroshot scenarios widely rely on a +development set with sufficient human-annotated data to select the +best-performing prompt template a posteriori. This is not ideal because in a +realworld zero-shot scenario of practical relevance, no labelled data is +available. Thus, we propose a simple yet effective method for screening +reasonable prompt templates in zero-shot text classification: Perplexity +Selection (Perplection). We hypothesize that language discrepancy can be used +to measure the efficacy of prompt templates, and thereby develop a +substantiated perplexity-based scheme allowing for forecasting the performance +of prompt templates in advance. Experiments show that our method leads to +improved prediction performance in a realistic zero-shot setting, eliminating +the need for any labelled examples. +" +IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template Reconstruction Strategy for ComVE,Luxi Xing,http://arxiv.org/pdf/2007.00924v1.pdf,2020-07-02,['cs.cl'],2007.00924v1.pdf," This paper introduces our systems for the first two subtasks of SemEval +Task4: Commonsense Validation and Explanation. To clarify the intention for +judgment and inject contrastive information for selection, we propose the input +reconstruction strategy with prompt templates. Specifically, we formalize the +subtasks into the multiple-choice question answering format and construct the +input with the prompt templates, then, the final prediction of question +answering is considered as the result of subtasks. Experimental results show +that our approaches achieve significant performance compared with the baseline +systems. Our approaches secure the third rank on both official test sets of the +first two subtasks with an accuracy of 96.4 and an accuracy of 94.3 +respectively. +" +GraphPrompt: Biomedical Entity Normalization Using Graph-based Prompt Templates,Jiayou Zhang,http://arxiv.org/pdf/2112.03002v1.pdf,2021-11-13,"['cs.cl', 'cs.ai']",2112.03002v1.pdf," Biomedical entity normalization unifies the language across biomedical +experiments and studies, and further enables us to obtain a holistic view of +life sciences. Current approaches mainly study the normalization of more +standardized entities such as diseases and drugs, while disregarding the more +ambiguous but crucial entities such as pathways, functions and cell types, +hindering their real-world applications. 
To achieve biomedical entity +normalization on these under-explored entities, we first introduce an +expert-curated dataset OBO-syn encompassing 70 different types of entities and +2 million curated entity-synonym pairs. To utilize the unique graph structure +in this dataset, we propose GraphPrompt, a prompt-based learning approach that +creates prompt templates according to the graphs. GraphPrompt obtained 41.0% +and 29.9% improvement on zero-shot and few-shot settings respectively, +indicating the effectiveness of these graph-based prompt templates. We envision +that our method GraphPrompt and OBO-syn dataset can be broadly applied to +graph-based NLP tasks, and serve as the basis for analyzing diverse and +accumulating biomedical data. +" +CCPrompt: Counterfactual Contrastive Prompt-Tuning for Many-Class Classification,Yang Li,http://arxiv.org/pdf/2211.05987v1.pdf,2022-11-11,['cs.cl'],2211.05987v1.pdf," With the success of the prompt-tuning paradigm in Natural Language Processing +(NLP), various prompt templates have been proposed to further stimulate +specific knowledge for serving downstream tasks, e.g., machine translation, +text generation, relation extraction, and so on. Existing prompt templates are +mainly shared among all training samples with the information of task +description. However, training samples are quite diverse. The sharing task +description is unable to stimulate the unique task-related information in each +training sample, especially for tasks with the finite-label space. To exploit +the unique task-related information, we imitate the human decision process +which aims to find the contrastive attributes between the objective factual and +their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual +\textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class +classification, e.g., relation classification, topic classification, and entity +typing. Compared with simple classification tasks, these tasks have more +complex finite-label spaces and are more rigorous for prompts. First of all, we +prune the finite label space to construct fact-counterfactual pairs. Then, we +exploit the contrastive attributes by projecting training instances onto every +fact-counterfactual pair. We further set up global prototypes corresponding +with all contrastive attributes for selecting valid contrastive attributes as +additional tokens in the prompt template. Finally, a simple Siamese +representation learning is employed to enhance the robustness of the model. We +conduct experiments on relation classification, topic classification, and +entity typing tasks in both fully supervised setting and few-shot setting. The +results indicate that our model outperforms former baselines. +" +Low-Resource Multi-Granularity Academic Function Recognition Based on Multiple Prompt Knowledge,Jiawei Liu,http://arxiv.org/pdf/2305.03287v1.pdf,2023-05-05,"['cs.cl', 'cs.ai']",2305.03287v1.pdf," Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally +requires large numbers of annotated data to achieve state-of-the-art +performance on a range of NLP tasks in the scientific domain. However, +obtaining the fine-tune data for scientific NLP task is still challenging and +expensive. 
Inspired by recent advancement in prompt learning, in this paper, we +propose the Mix Prompt Tuning (MPT), which is a semi-supervised method to +alleviate the dependence on annotated data and improve the performance of +multi-granularity academic function recognition tasks with a small number of +labeled examples. Specifically, the proposed method provides multi-perspective +representations by combining manual prompt templates with automatically learned +continuous prompt templates to help the given academic function recognition +task take full advantage of knowledge in PLMs. Based on these prompt templates +and the fine-tuned PLM, a large number of pseudo labels are assigned to the +unlabeled examples. Finally, we fine-tune the PLM using the pseudo training +set. We evaluate our method on three academic function recognition tasks of +different granularity including the citation function, the abstract sentence +function, and the keyword function, with datasets from computer science domain +and biomedical domain. Extensive experiments demonstrate the effectiveness of +our method and statistically significant improvements against strong baselines. +In particular, it achieves an average increase of 5% in Macro-F1 score compared +with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised +method under low-resource settings. In addition, MPT is a general method that +can be easily applied to other low-resource scientific classification tasks. +" +AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models,Jan Hendrik Metzen,http://arxiv.org/pdf/2309.16414v2.pdf,2023-09-28,"['cs.cv', 'cs.ai', 'cs.lg']",2309.16414v2.pdf," Classifiers built upon vision-language models such as CLIP have shown +remarkable zero-shot performance across a broad range of image classification +tasks. Prior work has studied different ways of automatically creating +descriptor sets for every class based on prompt templates, ranging from +manually engineered templates over templates obtained from a large language +model to templates built from random words and characters. Up until now, +deriving zero-shot classifiers from the respective encoded class descriptors +has remained nearly unchanged, i.e., classify to the class that maximizes +cosine similarity between its averaged encoded class descriptors and the image +encoding. However, weighing all class descriptors equally can be suboptimal +when certain descriptors match visual clues on a given image better than +others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot +classifiers. AutoCLIP tunes per-image weights to each prompt template at +inference time, based on statistics of class descriptor-image similarities. +AutoCLIP is fully unsupervised, has very low computational overhead, and can be +easily implemented in few lines of code. We show that AutoCLIP outperforms +baselines across a broad range of vision-language models, datasets, and prompt +templates consistently and by up to 3 percent point accuracy. +" +Position-based Prompting for Health Outcome Generation,M. Abaho,http://arxiv.org/pdf/2204.03489v1.pdf,2022-03-30,"['cs.cl', 'cs.lg']",2204.03489v1.pdf," Probing Pre-trained Language Models (PLMs) using prompts has indirectly +implied that language models (LMs) can be treated as knowledge bases. To this +end, this phenomena has been effective especially when these LMs are fine-tuned +towards not just data of a specific domain, but also to the style or linguistic +pattern of the prompts themselves. 
We observe that, satisfying a particular +linguistic pattern in prompts is an unsustainable constraint that unnecessarily +lengthens the probing task, especially because, they are often manually +designed and the range of possible prompt template patterns can vary depending +on the prompting objective and domain. We therefore explore an idea of using a +position-attention mechanism to capture positional information of each word in +a prompt relative to the mask to be filled, hence avoiding the need to +re-construct prompts when the prompts linguistic pattern changes. Using our +approach, we demonstrate the ability of eliciting answers to rare prompt +templates (in a case study on health outcome generation) such as Postfix and +Mixed patterns whose missing information is respectively at the start and in +multiple random places of the prompt. More so, using various biomedical PLMs, +our approach consistently outperforms a baseline in which the default mask +language model (MLM) representation is used to predict masked tokens. +" +Prompting Large Language Models With the Socratic Method,Edward Y. Chang,http://arxiv.org/pdf/2303.08769v2.pdf,2023-02-17,"['cs.lg', 'i.2.7']",2303.08769v2.pdf," This paper presents a systematic approach to using the Socratic method in +developing prompt templates that effectively interact with large language +models, including GPT-3. Various methods are examined, and those that yield +precise answers and justifications while fostering creativity and imagination +to enhance creative writing are identified. Techniques such as {\em +definition}, {\em elenchus}, {\em dialectic}, {\em maieutics}, {\em +generalization}, and {\em counterfactual reasoning} are discussed for their +application in engineering prompt templates and their connections to inductive, +deductive, and abductive reasoning. Through examples, the effectiveness of +these dialogue and reasoning methods is demonstrated. An interesting +observation is made that when the task's goal and user intent are conveyed to +GPT-3 via ChatGPT before the start of a dialogue, the large language model +seems to connect to the external context expressed in the intent and perform +more effectively. +" +Prompt Learning for News Recommendation,Zizhuo Zhang,http://arxiv.org/pdf/2304.05263v1.pdf,2023-04-11,"['cs.ir', 'cs.ai', 'h.3.3']",2304.05263v1.pdf," Some recent \textit{news recommendation} (NR) methods introduce a Pre-trained +Language Model (PLM) to encode news representation by following the vanilla +pre-train and fine-tune paradigm with carefully-designed +recommendation-specific neural networks and objective functions. Due to the +inconsistent task objective with that of PLM, we argue that their modeling +paradigm has not well exploited the abundant semantic information and +linguistic knowledge embedded in the pre-training process. Recently, the +pre-train, prompt, and predict paradigm, called \textit{prompt learning}, has +achieved many successes in natural language processing domain. In this paper, +we make the first trial of this new paradigm to develop a \textit{Prompt +Learning for News Recommendation} (Prompt4NR) framework, which transforms the +task of predicting whether a user would click a candidate news as a cloze-style +mask-prediction task. Specifically, we design a series of prompt templates, +including discrete, continuous, and hybrid templates, and construct their +corresponding answer spaces to examine the proposed Prompt4NR framework. 
+Furthermore, we use the prompt ensembling to integrate predictions from +multiple prompt templates. Extensive experiments on the MIND dataset validate +the effectiveness of our Prompt4NR with a set of new benchmark results. +" +Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification,Han Wang,http://arxiv.org/pdf/2204.06305v2.pdf,2022-04-13,"['cs.cl', 'cs.ai', 'cs.lg']",2204.06305v2.pdf," Prompt-based learning (i.e., prompting) is an emerging paradigm for +exploiting knowledge learned by a pretrained language model. In this paper, we +propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method +to automatically select label mappings for few-shot text classification with +prompting. Our method exploits one-to-many label mappings and a +statistics-based algorithm to select label mappings given a prompt template. +Our experiments demonstrate that AMuLaP achieves competitive performance on the +GLUE benchmark without human effort or external resources. +" +CoCoMo: Computational Consciousness Modeling for Generative and Ethical AI,Edward Y. Chang,http://arxiv.org/pdf/2304.02438v2.pdf,2023-03-17,"['cs.oh', 'i.2.7']",2304.02438v2.pdf," The CoCoMo model proposes a computational solution to the challenge of +incorporating ethical and emotional intelligence considerations into AI +systems, with the aim of creating AI agents that combine knowledge with +compassion. To achieve this goal, CoCoMo prioritizes fairness, beneficence, +non-maleficence, empathy, adaptability, transparency, and critical and +exploratory thinking abilities. The model employs consciousness modeling, +reinforcement learning, and prompt template formulation to support these +desired traits. By incorporating ethical and emotional intelligence +considerations, a generative AI model can potentially lead to improved +fairness, reduced toxicity, and increased reliability. +" +PromptNER: Prompt Locating and Typing for Named Entity Recognition,Yongliang Shen,http://arxiv.org/pdf/2305.17104v1.pdf,2023-05-26,['cs.cl'],2305.17104v1.pdf," Prompt learning is a new paradigm for utilizing pre-trained language models +and has achieved great success in many tasks. To adopt prompt learning in the +NER task, two kinds of methods have been explored from a pair of symmetric +perspectives, populating the template by enumerating spans to predict their +entity types or constructing type-specific prompts to locate entities. However, +these methods not only require a multi-round prompting manner with a high time +overhead and computational cost, but also require elaborate prompt templates, +that are difficult to apply in practical scenarios. In this paper, we unify +entity locating and entity typing into prompt learning, and design a dual-slot +multi-prompt template with the position slot and type slot to prompt locating +and typing respectively. Multiple prompts can be input to the model +simultaneously, and then the model extracts all entities by parallel +predictions on the slots. To assign labels for the slots during training, we +design a dynamic template filling mechanism that uses the extended bipartite +graph matching between prompts and the ground-truth entities. We conduct +experiments in various settings, including resource-rich flat and nested NER +datasets and low-resource in-domain and cross-domain datasets. 
Experimental +results show that the proposed model achieves a significant performance +improvement, especially in the cross-domain few-shot setting, which outperforms +the state-of-the-art model by +7.7% on average. +" +Large Language and Text-to-3D Models for Engineering Design Optimization,Thiago Rios,http://arxiv.org/pdf/2307.01230v1.pdf,2023-07-03,"['cs.cl', 'cs.lg', 'cs.ne']",2307.01230v1.pdf," The current advances in generative AI for learning large neural network +models with the capability to produce essays, images, music and even 3D assets +from text prompts create opportunities for a manifold of disciplines. In the +present paper, we study the potential of deep text-to-3D models in the +engineering domain, with focus on the chances and challenges when integrating +and interacting with 3D assets in computational simulation-based design +optimization. In contrast to traditional design optimization of 3D geometries +that often searches for the optimum designs using numerical representations, +such as B-Spline surface or deformation parameters in vehicle aerodynamic +optimization, natural language challenges the optimization framework by +requiring a different interpretation of variation operators while at the same +time may ease and motivate the human user interaction. Here, we propose and +realize a fully automated evolutionary design optimization framework using +Shap-E, a recently published text-to-3D asset network by OpenAI, in the context +of aerodynamic vehicle optimization. For representing text prompts in the +evolutionary optimization, we evaluate (a) a bag-of-words approach based on +prompt templates and Wordnet samples, and (b) a tokenisation approach based on +prompt templates and the byte pair encoding method from GPT4. Our main findings +from the optimizations indicate that, first, it is important to ensure that the +designs generated from prompts are within the object class of application, i.e. +diverse and novel designs need to be realistic, and, second, that more research +is required to develop methods where the strength of text prompt variations and +the resulting variations of the 3D designs share causal relations to some +degree to improve the optimization. +" +Zero-shot information extraction from radiological reports using ChatGPT,Danqing Hu,http://arxiv.org/pdf/2309.01398v2.pdf,2023-09-04,['cs.cl'],2309.01398v2.pdf," Electronic health records contain an enormous amount of valuable information, +but many are recorded in free text. Information extraction is the strategy to +transform the sequence of characters into structured data, which can be +employed for secondary analysis. However, the traditional information +extraction components, such as named entity recognition and relation +extraction, require annotated data to optimize the model parameters, which has +become one of the major bottlenecks in building information extraction systems. +With the large language models achieving good performances on various +downstream NLP tasks without parameter tuning, it becomes possible to use large +language models for zero-shot information extraction. In this study, we aim to +explore whether the most popular large language model, ChatGPT, can extract +useful information from the radiological reports. We first design the prompt +template for the interested information in the CT reports. Then, we generate +the prompts by combining the prompt template with the CT reports as the inputs +of ChatGPT to obtain the responses. 
A post-processing module is developed to +transform the responses into structured extraction results. We conducted the +experiments with 847 CT reports collected from Peking University Cancer +Hospital. The experimental results indicate that ChatGPT can achieve +competitive performances for some extraction tasks compared with the baseline +information extraction system, but some limitations need to be further +improved. +" +Can Language Models be Biomedical Knowledge Bases?,Mujeen Sung,http://arxiv.org/pdf/2109.07154v1.pdf,2021-09-15,['cs.cl'],2109.07154v1.pdf," Pre-trained language models (LMs) have become ubiquitous in solving various +natural language processing (NLP) tasks. There has been increasing interest in +what knowledge these LMs contain and how we can extract that knowledge, +treating LMs as knowledge bases (KBs). While there has been much work on +probing LMs in the general domain, there has been little attention to whether +these powerful LMs can be used as domain-specific KBs. To this end, we create +the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge +triples for probing biomedical LMs. We find that biomedical LMs with recently +proposed probing methods can achieve up to 18.51% Acc@5 on retrieving +biomedical knowledge. Although this seems promising given the task difficulty, +our detailed analyses reveal that most predictions are highly correlated with +prompt templates without any subjects, hence producing similar results on each +relation and hindering their capabilities to be used as domain-specific KBs. We +hope that BioLAMA can serve as a challenging benchmark for biomedical factual +probing. +" +HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural Language Processing,Sonish Sivarajkumar,http://arxiv.org/pdf/2203.05061v1.pdf,2022-03-09,"['cs.cl', 'cs.ai', 'cs.ir']",2203.05061v1.pdf," Deep learning algorithms are dependent on the availability of large-scale +annotated clinical text datasets. The lack of such publicly available datasets +is the biggest bottleneck for the development of clinical Natural Language +Processing(NLP) systems. Zero-Shot Learning(ZSL) refers to the use of deep +learning models to classify instances from new classes of which no training +data have been seen before. Prompt-based learning is an emerging ZSL technique +where we define task-based templates for NLP tasks. We developed a novel +prompt-based clinical NLP framework called HealthPrompt and applied the +paradigm of prompt-based learning on clinical texts. In this technique, rather +than fine-tuning a Pre-trained Language Model(PLM), the task definitions are +tuned by defining a prompt template. We performed an in-depth analysis of +HealthPrompt on six different PLMs in a no-data setting. Our experiments prove +that prompts effectively capture the context of clinical texts and perform +remarkably well without any training data. +" +RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction,Yew Ken Chia,http://arxiv.org/pdf/2203.09101v1.pdf,2022-03-17,['cs.cl'],2203.09101v1.pdf," Despite the importance of relation extraction in building and representing +knowledge, less research is focused on generalizing to unseen relations types. +We introduce the task setting of Zero-Shot Relation Triplet Extraction +(ZeroRTE) to encourage further research in low-resource relation extraction +methods. 
Given an input sentence, each extracted triplet consists of the head +entity, relation label, and tail entity where the relation label is not seen at +the training stage. To solve ZeroRTE, we propose to synthesize relation +examples by prompting language models to generate structured texts. Concretely, +we unify language model prompts and structured text approaches to design a +structured prompt template for generating synthetic relation samples when +conditioning on relation label prompts (RelationPrompt). To overcome the +limitation for extracting multiple relation triplets in a sentence, we design a +novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL +datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot +relation classification. Our code and data are available at +github.com/declare-lab/RelationPrompt. +" +CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction,Jiaju Lin,http://arxiv.org/pdf/2205.00498v2.pdf,2022-05-01,['cs.cl'],2205.00498v2.pdf," Implicit event argument extraction (EAE) aims to identify arguments that +could scatter over the document. Most previous work focuses on learning the +direct relations between arguments and the given trigger, while the implicit +relations with long-range dependency are not well studied. Moreover, recent +neural network based approaches rely on a large amount of labeled data for +training, which is unavailable due to the high labelling cost. In this paper, +we propose a Curriculum learning based Prompt tuning (CUP) approach, which +resolves implicit EAE by four learning stages. The stages are defined according +to the relations with the trigger node in a semantic graph, which well captures +the long-range dependency between arguments and the trigger. In addition, we +integrate a prompt-based encoder-decoder model to elicit related knowledge from +pre-trained language models (PLMs) in each stage, where the prompt templates +are adapted with the learning progress to enhance the reasoning for arguments. +Experimental results on two well-known benchmark datasets show the great +advantages of our proposed approach. In particular, we outperform the +state-of-the-art models in both fully-supervised and low-data scenarios. +" +Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation,Sirui Wang,http://arxiv.org/pdf/2209.00455v1.pdf,2022-08-31,"['cs.lg', 'cs.ai']",2209.00455v1.pdf," Demonstration learning aims to guide the prompt prediction via providing +answered demonstrations in the few shot settings. Despite achieving promising +results, existing work only concatenates the answered examples as +demonstrations to the prompt template (including the raw context) without any +additional operation, neglecting the prompt-demonstration dependencies. +Besides, prior research found that randomly replacing the labels of +demonstrations marginally hurts performance, illustrating that the model could +not properly learn the knowledge brought by the demonstrations. Inspired by the +human learning process, in this paper, we introduce Imitation DEMOnstration +Learning (Imitation-Demo) to strengthen demonstration learning via explicitly +imitating human review behaviour, which includes: (1) contrastive learning +mechanism to concentrate on the similar demonstrations. (2) demonstration-label +re-prediction method to consolidate known knowledge. 
Experiment results show +that our proposed method achieves state-of-the-art performance on 11 out of 14 +classification corpora. Further studies also prove that Imitation-Demo +strengthen the association between prompt and demonstrations, which could +provide the basis for exploring how demonstration learning works. +" +A Few-shot Approach to Resume Information Extraction via Prompts,Chengguang Gan,http://arxiv.org/pdf/2209.09450v2.pdf,2022-09-20,['cs.cl'],2209.09450v2.pdf," Prompt learning's fine-tune performance on text classification tasks has +attracted the NLP community. This paper applies it to resume information +extraction, improving existing methods for this task. We created manual +templates and verbalizers tailored to resume texts and compared the performance +of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the +verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt +template design across NLP tasks. We present the Manual Knowledgeable +Verbalizer (MKV), a rule for constructing verbalizers for specific +applications. Our tests show that MKV rules yield more effective, robust +templates and verbalizers than existing methods. Our MKV approach resolved +sample imbalance, surpassing current automatic prompt methods. This study +underscores the value of tailored prompt learning for resume extraction, +stressing the importance of custom-designed templates and verbalizers. +" +Distilling Task-specific Logical Rules from Large Pre-trained Models,Tao Chen,http://arxiv.org/pdf/2210.02768v1.pdf,2022-10-06,['cs.cl'],2210.02768v1.pdf," Logical rules, both transferable and explainable, are widely used as weakly +supervised signals for many downstream tasks such as named entity tagging. To +reduce the human effort of writing rules, previous researchers adopt an +iterative approach to automatically learn logical rules from several seed +rules. However, obtaining more seed rules can only be accomplished by extra +human annotation with heavy costs. Limited by the size and quality of the seed +rules, the model performance of previous systems is bounded. In this paper, we +develop a novel framework STREAM to distill task-specific logical rules from +large pre-trained models. Specifically, we borrow recent prompt-based language +models as the knowledge expert to yield initial seed rules, and based on the +formed high-quality instance pool that acts as an intermediary role, we keep +teaching the expert to fit our task and learning task-specific logical rules. +Experiments on three public named entity tagging benchmarks demonstrate the +effectiveness of our proposed framework. With several predefined prompt +templates, our system has gained significant improvements over previous +state-of-the-art methods. +" +CLIP model is an Efficient Continual Learner,Vishal Thengane,http://arxiv.org/pdf/2210.03114v1.pdf,2022-10-06,['cs.cv'],2210.03114v1.pdf," The continual learning setting aims to learn new tasks over time without +forgetting the previous ones. The literature reports several significant +efforts to tackle this problem with limited or no access to previous task data. +Among such efforts, typical solutions offer sophisticated techniques involving +memory replay, knowledge distillation, model regularization, and dynamic +network expansion. The resulting methods have a retraining cost at each +learning task, dedicated memory requirements, and setting-specific design +choices. 
In this work, we show that a frozen CLIP (Contrastive Language-Image +Pretraining) model offers astounding continual learning performance without any +fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of +settings including class-incremental, domain-incremental and task-agnostic +incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50, +CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model +outperforms the state-of-the-art continual learning approaches in the majority +of the settings. We show the effect on the CLIP model's performance by varying +text inputs with simple prompt templates. To the best of our knowledge, this is +the first work to report the CLIP zero-shot performance in a continual setting. +We advocate the use of this strong yet embarrassingly simple baseline for +future comparisons in the continual learning tasks. +" +A Unified Framework for Multi-intent Spoken Language Understanding with prompting,Feifan Song,http://arxiv.org/pdf/2210.03337v1.pdf,2022-10-07,"['cs.cl', 'cs.ai']",2210.03337v1.pdf," Multi-intent Spoken Language Understanding has great potential for widespread +implementation. Jointly modeling Intent Detection and Slot Filling in it +provides a channel to exploit the correlation between intents and slots. +However, current approaches are apt to formulate these two sub-tasks +differently, which leads to two issues: 1) It hinders models from effective +extraction of shared features. 2) Pretty complicated structures are involved to +enhance expression ability while causing damage to the interpretability of +frameworks. In this work, we describe a Prompt-based Spoken Language +Understanding (PromptSLU) framework, to intuitively unify two sub-tasks into +the same form by offering a common pre-trained Seq2Seq model. In detail, ID and +SF are completed by concisely filling the utterance into task-specific prompt +templates as input, and sharing output formats of key-value pairs sequence. +Furthermore, variable intents are predicted first, then naturally embedded into +prompts to guide slot-value pairs inference from a semantic perspective. +Finally, we are inspired by prevalent multi-task learning to introduce an +auxiliary sub-task, which helps to learn relationships among provided labels. +Experiment results show that our framework outperforms several state-of-the-art +baselines on two public datasets. +" +UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical Simplification?,Dennis Aumiller,http://arxiv.org/pdf/2301.01764v2.pdf,2023-01-04,['cs.cl'],2301.01764v2.pdf," Previous state-of-the-art models for lexical simplification consist of +complex pipelines with several components, each of which requires deep +technical knowledge and fine-tuned interaction to achieve its full potential. +As an alternative, we describe a frustratingly simple pipeline based on +prompted GPT-3 responses, beating competing approaches by a wide margin in +settings with few training instances. Our best-performing submission to the +English language track of the TSAR-2022 shared task consists of an ``ensemble'' +of six different prompt templates with varying context levels. As a +late-breaking result, we further detail a language transfer technique that +allows simplification in languages other than English. Applied to the Spanish +and Portuguese subset, we achieve state-of-the-art results with only minor +modification to the original prompts. 
Aside from detailing the implementation +and setup, we spend the remainder of this work discussing the particularities +of prompting and implications for future work. Code for the experiments is +available online at https://github.com/dennlinger/TSAR-2022-Shared-Task +" +Prompting Large Language Model for Machine Translation: A Case Study,Biao Zhang,http://arxiv.org/pdf/2301.07069v2.pdf,2023-01-17,"['cs.cl', 'cs.lg']",2301.07069v2.pdf," Research on prompting has shown excellent performance with little or even no +supervised training across many tasks. However, prompting for machine +translation is still under-explored in the literature. We fill this gap by +offering a systematic study on prompting strategies for translation, examining +various factors for prompt template and demonstration example selection. We +further explore the use of monolingual data and the feasibility of +cross-lingual, cross-domain, and sentence-to-document transfer learning in +prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the +testbed show that 1) the number and the quality of prompt examples matter, +where using suboptimal examples degenerates translation; 2) several features of +prompt examples, such as semantic similarity, show significant Spearman +correlation with their prompting performance; yet, none of the correlations are +strong enough; 3) using pseudo parallel prompt examples constructed from +monolingual data via zero-shot prompting could improve translation; and 4) +improved performance is achievable by transferring knowledge from prompt +examples selected in other settings. We finally provide an analysis on the +model outputs and discuss several problems that prompting still suffers from. +" +Global Constraints with Prompting for Zero-Shot Event Argument Classification,Zizheng Lin,http://arxiv.org/pdf/2302.04459v1.pdf,2023-02-09,['cs.cl'],2302.04459v1.pdf," Determining the role of event arguments is a crucial subtask of event +extraction. Most previous supervised models leverage costly annotations, which +is not practical for open-domain applications. In this work, we propose to use +global constraints with prompting to effectively tackles event argument +classification without any annotation and task-specific training. Specifically, +given an event and its associated passage, the model first creates several new +passages by prefix prompts and cloze prompts, where prefix prompts indicate +event type and trigger span, and cloze prompts connect each candidate role with +the target argument span. Then, a pre-trained language model scores the new +passages, making the initial prediction. Our novel prompt templates can easily +adapt to all events and argument types without manual effort. Next, the model +regularizes the prediction by global constraints exploiting cross-task, +cross-argument, and cross-event relations. Extensive experiments demonstrate +our model's effectiveness: it outperforms the best zero-shot baselines by 12.5% +and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1, +respectively, without given argument spans. We have made our code publicly +available. +" +Large Language Models Are State-of-the-Art Evaluators of Translation Quality,Tom Kocmi,http://arxiv.org/pdf/2302.14520v2.pdf,2023-02-28,['cs.cl'],2302.14520v2.pdf," We describe GEMBA, a GPT-based metric for assessment of translation quality, +which works both with a reference translation and without. 
In our evaluation, +we focus on zero-shot prompting, comparing four prompt variants in two modes, +based on the availability of the reference. We investigate nine versions of GPT +models, including ChatGPT and GPT-4. We show that our method for translation +quality assessment only works with GPT~3.5 and larger models. Comparing to +results from WMT22's Metrics shared task, our method achieves state-of-the-art +accuracy in both modes when compared to MQM-based human labels. Our results are +valid on the system level for all three WMT22 Metrics shared task language +pairs, namely English into German, English into Russian, and Chinese into +English. This provides a first glimpse into the usefulness of pre-trained, +generative large language models for quality assessment of translations. We +publicly release all our code and prompt templates used for the experiments +described in this work, as well as all corresponding scoring results, to allow +for external validation and reproducibility. +" +The Prompt Artists,Minsuk Chang,http://arxiv.org/pdf/2303.12253v1.pdf,2023-03-22,['cs.hc'],2303.12253v1.pdf," This paper examines the art practices, artwork, and motivations of prolific +users of the latest generation of text-to-image models. Through interviews, +observations, and a user survey, we present a sampling of the artistic styles +and describe the developed community of practice around generative AI. We find +that: 1) the text prompt and the resulting image can be considered collectively +as an art piece prompts as art and 2) prompt templates (prompts with ``slots'' +for others to fill in with their own words) are developed to create generative +art styles. We discover that the value placed by this community on unique +outputs leads to artists seeking specialized vocabulary to produce distinctive +art pieces (e.g., by reading architectural blogs to find phrases to describe +images). We also find that some artists use ""glitches"" in the model that can be +turned into artistic styles of their own right. From these findings, we outline +specific implications for design regarding future prompting and image editing +options. +" +WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation,Jongheon Jeong,http://arxiv.org/pdf/2303.14814v1.pdf,2023-03-26,"['cs.cv', 'cs.ai', 'cs.cl']",2303.14814v1.pdf," Visual anomaly classification and segmentation are vital for automating +industrial quality inspection. The focus of prior research in the field has +been on training custom models for each quality inspection task, which requires +task-specific images and annotation. In this paper we move away from this +regime, addressing zero-shot and few-normal-shot anomaly classification and +segmentation. Recently CLIP, a vision-language model, has shown revolutionary +generality with competitive zero-/few-shot performance in comparison to +full-supervision. But CLIP falls short on anomaly classification and +segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a +compositional ensemble on state words and prompt templates and (2) efficient +extraction and aggregation of window/patch/image-level features aligned with +text. We also propose its few-normal-shot extension WinCLIP+, which uses +complementary information from normal images. 
In MVTec-AD (and VisA), without +further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AUROC in zero-shot +anomaly classification and segmentation while WinCLIP+ does 93.1%/95.2% +(83.8%/96.4%) in 1-normal-shot, surpassing state-of-the-art by large margins. +" +MetricPrompt: Prompting Model as a Relevance Metric for Few-shot Text Classification,Hongyuan Dong,http://arxiv.org/pdf/2306.08892v1.pdf,2023-06-15,['cs.cl'],2306.08892v1.pdf," Prompting methods have shown impressive performance in a variety of text +mining tasks and applications, especially few-shot ones. Despite the promising +prospects, the performance of prompting model largely depends on the design of +prompt template and verbalizer. In this work, we propose MetricPrompt, which +eases verbalizer design difficulty by reformulating few-shot text +classification task into text pair relevance estimation task. MetricPrompt +adopts prompting model as the relevance metric, further bridging the gap +between Pre-trained Language Model's (PLM) pre-training objective and text +classification task, making possible PLM's smooth adaption. Taking a training +sample and a query one simultaneously, MetricPrompt captures cross-sample +relevance information for accurate relevance estimation. We conduct experiments +on three widely used text classification datasets across four few-shot +settings. Results show that MetricPrompt outperforms manual verbalizer and +other automatic verbalizer design methods across all few-shot settings, +achieving new state-of-the-art (SOTA) performance. +" +TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models,Yue Huang,http://arxiv.org/pdf/2306.11507v1.pdf,2023-06-20,"['cs.cl', 'cs.ai']",2306.11507v1.pdf," Large Language Models (LLMs) such as ChatGPT, have gained significant +attention due to their impressive natural language processing capabilities. It +is crucial to prioritize human-centered principles when utilizing these models. +Safeguarding the ethical and moral compliance of LLMs is of utmost importance. +However, individual ethical issues have not been well studied on the latest +LLMs. Therefore, this study aims to address these gaps by introducing a new +benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in +three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT +examines toxicity in language models by employing toxic prompt templates +derived from social norms. It then quantifies the extent of bias in models by +measuring quantifiable toxicity values across different groups. Lastly, +TrustGPT assesses the value of conversation generation models from both active +value-alignment and passive value-alignment tasks. Through the implementation +of TrustGPT, this research aims to enhance our understanding of the performance +of conversation generation models and promote the development of language +models that are more ethical and socially responsible. +" +DAPrompt: Deterministic Assumption Prompt Learning for Event Causality Identification,Wei Xiang,http://arxiv.org/pdf/2307.09813v1.pdf,2023-07-19,['cs.cl'],2307.09813v1.pdf," Event Causality Identification (ECI) aims at determining whether there is a +causal relation between two event mentions. Conventional prompt learning +designs a prompt template to first predict an answer word and then maps it to +the final decision. Unlike conventional prompts, we argue that predicting an +answer word may not be a necessary prerequisite for the ECI task. 
Instead, we +can first make a deterministic assumption on the existence of causal relation +between two events and then evaluate its rationality to either accept or reject +the assumption. The design motivation is to try the most utilization of the +encyclopedia-like knowledge embedded in a pre-trained language model. In light +of such considerations, we propose a deterministic assumption prompt learning +model, called DAPrompt, for the ECI task. In particular, we design a simple +deterministic assumption template concatenating with the input event pair, +which includes two masks as predicted events' tokens. We use the probabilities +of predicted events to evaluate the assumption rationality for the final event +causality decision. Experiments on the EventStoryLine corpus and +Causal-TimeBank corpus validate our design objective in terms of significant +performance improvements over the state-of-the-art algorithms. +" +DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models,Michael Shenoda,http://arxiv.org/pdf/2309.00248v1.pdf,2023-09-01,"['cs.cv', 'cs.ai']",2309.00248v1.pdf," Generating high-quality labeled image datasets is crucial for training +accurate and robust machine learning models in the field of computer vision. +However, the process of manually labeling real images is often time-consuming +and costly. To address these challenges associated with dataset generation, we +introduce ""DiffuGen,"" a simple and adaptable approach that harnesses the power +of stable diffusion models to create labeled image datasets efficiently. By +leveraging stable diffusion models, our approach not only ensures the quality +of generated datasets but also provides a versatile solution for label +generation. In this paper, we present the methodology behind DiffuGen, which +combines the capabilities of diffusion models with two distinct labeling +techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt +templating for adaptable image generation and textual inversion to enhance +diffusion model capabilities. +" +Mitigating Word Bias in Zero-shot Prompt-based Classifiers,Adian Liusie,http://arxiv.org/pdf/2309.04992v1.pdf,2023-09-10,['cs.cl'],2309.04992v1.pdf," Prompt-based classifiers are an attractive approach for zero-shot +classification. However, the precise choice of the prompt template and label +words can largely influence performance, with semantically equivalent settings +often showing notable performance difference. This discrepancy can be partly +attributed to word biases, where the classifier may be biased towards classes. +To address this problem, it is possible to optimise classification thresholds +on a labelled data set, however, this mitigates some of the advantages of +prompt-based classifiers. This paper instead approaches this problem by +examining the expected marginal probabilities of the classes. Here, +probabilities are reweighted to have a uniform prior over classes, in an +unsupervised fashion. Further, we draw a theoretical connection between the +class priors and the language models' word prior, and offer the ability to set +a threshold in a zero-resource fashion. We show that matching class priors +correlates strongly with the oracle upper bound performance and demonstrate +large consistent performance gains for prompt settings over a range of NLP +tasks. 
+" +Prompt-Enhanced Self-supervised Representation Learning for Remote Sensing Image Understanding,Mingming Zhang,http://arxiv.org/pdf/2310.00022v1.pdf,2023-09-28,['cs.cv'],2310.00022v1.pdf," Learning representations through self-supervision on a large-scale, unlabeled +dataset has proven to be highly effective for understanding diverse images, +such as those used in remote sensing image analysis. However, remote sensing +images often have complex and densely populated scenes, with multiple land +objects and no clear foreground objects. This intrinsic property can lead to +false positive pairs in contrastive learning, or missing contextual information +in reconstructive learning, which can limit the effectiveness of existing +self-supervised learning methods. To address these problems, we propose a +prompt-enhanced self-supervised representation learning method that uses a +simple yet efficient pre-training pipeline. Our approach involves utilizing +original image patches as a reconstructive prompt template, and designing a +prompt-enhanced generative branch that provides contextual information through +semantic consistency constraints. We collected a dataset of over 1.28 million +remote sensing images that is comparable to the popular ImageNet dataset, but +without specific temporal or geographical constraints. Our experiments show +that our method outperforms fully supervised learning models and +state-of-the-art self-supervised learning methods on various downstream tasks, +including land cover classification, semantic segmentation, object detection, +and instance segmentation. These results demonstrate that our approach learns +impressive remote sensing representations with high generalization and +transferability. +" +LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation,Zixi Zhang,http://arxiv.org/pdf/2310.04535v1.pdf,2023-10-06,"['cs.lg', 'cs.ar']",2310.04535v1.pdf," Test stimuli generation has been a crucial but labor-intensive task in +hardware design verification. In this paper, we revolutionize this process by +harnessing the power of large language models (LLMs) and present a novel +benchmarking framework, LLM4DV. This framework introduces a prompt template for +interactively eliciting test stimuli from the LLM, along with four innovative +prompting improvements to support the pipeline execution and further enhance +its performance. We compare LLM4DV to traditional constrained-random testing +(CRT), using three self-designed design-under-test (DUT) modules. Experiments +demonstrate that LLM4DV excels in efficiently handling straightforward DUT +scenarios, leveraging its ability to employ basic mathematical reasoning and +pre-trained knowledge. While it exhibits reduced efficiency in complex task +settings, it still outperforms CRT in relative terms. The proposed framework +and the DUT modules used in our experiments will be open-sourced upon +publication. +" +Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data,Shiladitya Dutta,http://arxiv.org/pdf/2310.09926v1.pdf,2023-10-15,['cs.ai'],2310.09926v1.pdf," Foundation models are trained on vast amounts of data at scale using +self-supervised learning, enabling adaptation to a wide range of downstream +tasks. At test time, these models exhibit zero-shot capabilities through which +they can classify previously unseen (user-specified) categories. In this paper, +we address the problem of quantifying uncertainty in these zero-shot +predictions. 
We propose a heuristic approach for uncertainty estimation in +zero-shot settings using conformal prediction with web data. Given a set of +classes at test time, we conduct zero-shot classification with CLIP-style +models using a prompt template, e.g., ""an image of a "", and use the +same template as a search query to source calibration data from the open web. +Given a web-based calibration set, we apply conformal prediction with a novel +conformity score that accounts for potential errors in retrieved web data. We +evaluate the utility of our proposed method in Biomedical foundation models; +our preliminary results show that web-based conformal prediction sets achieve +the target coverage with satisfactory efficiency on a variety of biomedical +datasets. +" +Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels,Honglei Zhuang,http://arxiv.org/pdf/2310.14122v2.pdf,2023-10-21,['cs.ir'],2310.14122v2.pdf," Zero-shot text rankers powered by recent LLMs achieve remarkable ranking +performance by simply prompting. Existing prompts for pointwise LLM rankers +mostly ask the model to choose from binary relevance labels like ""Yes"" and +""No"". However, the lack of intermediate relevance label options may cause the +LLM to provide noisy or biased answers for documents that are partially +relevant to the query. We propose to incorporate fine-grained relevance labels +into the prompt for LLM rankers, enabling them to better differentiate among +documents with different levels of relevance to the query and thus derive a +more accurate ranking. We study two variants of the prompt template, coupled +with different numbers of relevance levels. Our experiments on 8 BEIR data sets +show that adding fine-grained relevance labels significantly improves the +performance of LLM rankers. +" +"Large Language Models can Share Images, Too!",Young-Jun Lee,http://arxiv.org/pdf/2310.14804v1.pdf,2023-10-23,"['cs.cv', 'cs.ai', 'cs.cl']",2310.14804v1.pdf," This paper explores the image-sharing capability of Large Language Models +(LLMs), such as InstructGPT, ChatGPT, and GPT-4, in a zero-shot setting, +without the help of visual foundation models. Inspired by the two-stage process +of image-sharing in human dialogues, we propose a two-stage framework that +allows LLMs to predict potential image-sharing turns and generate related image +descriptions using our effective restriction-based prompt template. With +extensive experiments, we unlock the \textit{image-sharing} capability of LLMs +in zero-shot prompting, with GPT-4 achieving the best performance. +Additionally, we uncover the emergent \textit{image-sharing} ability in +zero-shot prompting, demonstrating the effectiveness of restriction-based +prompts in both stages of our framework. Based on this framework, we augment +the PhotoChat dataset with images generated by Stable Diffusion at predicted +turns, namely PhotoChat++. To our knowledge, this is the first study to assess +the image-sharing ability of LLMs in a zero-shot setting without visual +foundation models. The source code and the dataset will be released after +publication. +" +KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction,Xiang Chen,http://arxiv.org/pdf/2104.07650v7.pdf,2021-04-15,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2104.07650v7.pdf," Recently, prompt-tuning has achieved promising results for specific few-shot +classification tasks. 
The core idea of prompt-tuning is to insert text pieces +(i.e., templates) into the input and transform a classification task into a +masked language modeling problem. However, for relation extraction, determining +an appropriate prompt template requires domain expertise, and it is cumbersome +and time-consuming to obtain a suitable label word. Furthermore, there exists +abundant semantic and prior knowledge among the relation labels that cannot be +ignored. To this end, we focus on incorporating knowledge among relation labels +into prompt-tuning for relation extraction and propose a Knowledge-aware +Prompt-tuning approach with synergistic optimization (KnowPrompt). +Specifically, we inject latent knowledge contained in relation labels into +prompt construction with learnable virtual type words and answer words. Then, +we synergistically optimize their representation with structured constraints. +Extensive experimental results on five datasets with standard and low-resource +settings demonstrate the effectiveness of our approach. Our code and datasets +are available in https://github.com/zjunlp/KnowPrompt for reproducibility. +" +Prompt-based Zero-shot Relation Extraction with Semantic Knowledge Augmentation,Jiaying Gong,http://arxiv.org/pdf/2112.04539v2.pdf,2021-12-08,['cs.cl'],2112.04539v2.pdf," In relation triplet extraction (RTE), recognizing unseen (new) relations for +which there are no training instances is a challenging task. Efforts have been +made to recognize unseen relations based on question-answering models or +relation descriptions. However, these approaches miss the semantic information +about connections between seen and unseen relations. In this paper, We propose +a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize +unseen relations under the zero-shot setting. We present a new word-level +analogy-based sentence translation rule and generate augmented instances with +unseen relations from instances with seen relations using that new rule. We +design prompts with weighted virtual label construction based on an external +knowledge graph to integrate semantic knowledge information learned from seen +relations. Instead of using the actual label sets in the prompt template, we +construct weighted virtual label words. We learn the representations of both +seen and unseen relations with augmented instances and prompts. We then +calculate the distance between the generated representations using prototypical +networks to predict unseen relations. Extensive experiments conducted on three +public datasets FewRel, Wiki-ZSL, and NYT, show that ZS-SKA outperforms +state-of-the-art methods under the zero-shot scenarios. Our experimental +results also demonstrate the effectiveness and robustness of ZS-SKA. +" +DynaMaR: Dynamic Prompt with Mask Token Representation,Xiaodi Sun,http://arxiv.org/pdf/2206.02982v1.pdf,2022-06-07,"['cs.cl', 'cs.lg']",2206.02982v1.pdf," Recent research has shown that large language models pretrained using +unsupervised approaches can achieve significant performance improvement on many +downstream tasks. Typically when adapting these language models to downstream +tasks, like a classification or regression task, we employ a fine-tuning +paradigm in which the sentence representation from the language model is input +to a task-specific head; the model is then fine-tuned end-to-end. However, with +the emergence of models like GPT-3, prompt-based fine-tuning has been proven to +be a successful approach for few-shot tasks. 
Inspired by this work, we study +discrete prompt technologies in practice. There are two issues that arise with +the standard prompt approach. First, it can overfit on the prompt template. +Second, it requires manual effort to formulate the downstream task as a +language model problem. In this paper, we propose an improvement to +prompt-based fine-tuning that addresses these two issues. We refer to our +approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results +show that DynaMaR can achieve an average improvement of 10% in few-shot +settings and improvement of 3.7% in data-rich settings over the standard +fine-tuning approach on four e-commerce applications. +" +Rethinking the Event Coding Pipeline with Prompt Entailment,Clément Lefebvre,http://arxiv.org/pdf/2210.05257v2.pdf,2022-10-11,"['cs.cl', 'cs.hc', 'cs.lg']",2210.05257v2.pdf," For monitoring crises, political events are extracted from the news. The +large amount of unstructured full-text event descriptions makes a case-by-case +analysis unmanageable, particularly for low-resource humanitarian aid +organizations. This creates a demand to classify events into event types, a +task referred to as event coding. Typically, domain experts craft an event type +ontology, annotators label a large dataset and technical experts develop a +supervised coding system. In this work, we propose PR-ENT, a new event coding +approach that is more flexible and resource-efficient, while maintaining +competitive accuracy: first, we extend an event description such as ""Military +injured two civilians'' by a template, e.g. ""People were [Z]"" and prompt a +pre-trained (cloze) language model to fill the slot Z. Second, we select answer +candidates Z* = {""injured'', ""hurt""...} by treating the event description as +premise and the filled templates as hypothesis in a textual entailment task. +This allows domain experts to draft the codebook directly as labeled prompts +and interpretable answer candidates. This human-in-the-loop process is guided +by our interactive codebook design tool. We evaluate PR-ENT in several +robustness checks: perturbing the event description and prompt template, +restricting the vocabulary and removing contextual information. +" +Visual Prompting for Adversarial Robustness,Aochuan Chen,http://arxiv.org/pdf/2210.06284v4.pdf,2022-10-12,"['cs.cv', 'cs.cr', 'cs.lg']",2210.06284v4.pdf," In this work, we leverage visual prompting (VP) to improve adversarial +robustness of a fixed, pre-trained model at testing time. Compared to +conventional adversarial defenses, VP allows us to design universal (i.e., +data-agnostic) input prompting templates, which have plug-and-play capabilities +at testing time to achieve desired model performance without introducing much +computation overhead. Although VP has been successfully applied to improving +model generalization, it remains elusive whether and how it can be used to +defend against adversarial attacks. We investigate this problem and show that +the vanilla VP approach is not effective in adversarial defense since a +universal input prompt lacks the capacity for robust learning against +sample-specific adversarial perturbations. To circumvent it, we propose a new +VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate +class-wise visual prompts so as to not only leverage the strengths of ensemble +prompts but also optimize their interrelations to improve model robustness. 
Our +experiments show that C-AVP outperforms the conventional VP method, with 2.1X +standard accuracy gain and 2X robust accuracy gain. Compared to classical +test-time defenses, C-AVP also yields a 42X inference time speedup. +" +Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing,Yibo Wang,http://arxiv.org/pdf/2211.02483v1.pdf,2022-11-04,['cs.cl'],2211.02483v1.pdf," The explosion of e-commerce has caused the need for processing and analysis +of product titles, like entity typing in product titles. However, the rapid +activity in e-commerce has led to the rapid emergence of new entities, which is +difficult to be solved by general entity typing. Besides, product titles in +e-commerce have very different language styles from text data in general +domain. In order to handle new entities in product titles and address the +special language styles problem of product titles in e-commerce domain, we +propose our textual entailment model with continuous prompt tuning based +hypotheses and fusion embeddings for e-commerce entity typing. First, we +reformulate the entity typing task into a textual entailment problem to handle +new entities that are not present during training. Second, we design a model to +automatically generate textual entailment hypotheses using a continuous prompt +tuning method, which can generate better textual entailment hypotheses without +manual design. Third, we utilize the fusion embeddings of BERT embedding and +CharacterBERT embedding with a two-layer MLP classifier to solve the problem +that the language styles of product titles in e-commerce are different from +that of general domain. To analyze the effect of each contribution, we compare +the performance of entity typing and textual entailment model, and conduct +ablation studies on continuous prompt tuning and fusion embeddings. We also +evaluate the impact of different prompt template initialization for the +continuous prompt tuning. We show our proposed model improves the average F1 +score by around 2% compared to the baseline BERT entity typing model. +" +Multi-label Few-shot ICD Coding as Autoregressive Generation with Prompt,Zhichao Yang,http://arxiv.org/pdf/2211.13813v2.pdf,2022-11-24,"['cs.cl', 'cs.ai']",2211.13813v2.pdf," Automatic International Classification of Diseases (ICD) coding aims to +assign multiple ICD codes to a medical note with an average of 3,000+ tokens. +This task is challenging due to the high-dimensional space of multi-label +assignment (155,000+ ICD code candidates) and the long-tail challenge - Many +ICD codes are infrequently assigned yet infrequent ICD codes are important +clinically. This study addresses the long-tail challenge by transforming this +multi-label classification task into an autoregressive generation task. +Specifically, we first introduce a novel pretraining objective to generate free +text diagnoses and procedure using the SOAP structure, the medical logic +physicians use for note documentation. Second, instead of directly predicting +the high dimensional space of ICD codes, our model generates the lower +dimension of text descriptions, which then infer ICD codes. Third, we designed +a novel prompt template for multi-label classification. We evaluate our +Generation with Prompt model with the benchmark of all code assignment +(MIMIC-III-full) and few shot ICD code assignment evaluation benchmark +(MIMIC-III-few). 
Experiments on MIMIC-III-few show that our model performs with +a marco F1 30.2, which substantially outperforms the previous MIMIC-III-full +SOTA model (marco F1 4.3) and the model specifically designed for few/zero shot +setting (marco F1 18.7). Finally, we design a novel ensemble learner, a cross +attention reranker with prompts, to integrate previous SOTA and our best +few-shot coding predictions. Experiments on MIMIC-III-full show that our +ensemble learner substantially improves both macro and micro F1, from 10.4 to +14.6 and from 58.2 to 59.1, respectively. +" +LabelPrompt: Effective Prompt-based Learning for Relation Classification,Wenjie Zhang,http://arxiv.org/pdf/2302.08068v2.pdf,2023-02-16,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2302.08068v2.pdf," Recently, prompt-based learning has gained popularity across many natural +language processing (NLP) tasks by reformulating them into a cloze-style format +to better align pre-trained language models (PLMs) with downstream tasks. +However, applying this approach to relation classification poses unique +challenges. Specifically, associating natural language words that fill the +masked token with semantic relation labels (\textit{e.g.} +\textit{``org:founded\_by}'') is difficult. To address this challenge, this +paper presents a novel prompt-based learning method, namely LabelPrompt, for +the relation classification task. Motivated by the intuition to ``GIVE MODEL +CHOICES!'', we first define additional tokens to represent relation labels, +which regard these tokens as the verbaliser with semantic initialisation and +explicitly construct them with a prompt template method. Then, to mitigate +inconsistency between predicted relations and given entities, we implement an +entity-aware module with contrastive learning. Last, we conduct an attention +query strategy within the self-attention layer to differentiates prompt tokens +and sequence tokens. Together, these strategies enhance the adaptability of +prompt-based learning, especially when only small labelled datasets is +available. Comprehensive experiments on benchmark datasets demonstrate the +superiority of our method, particularly in the few-shot scenario. +" +Adapting Prompt for Few-shot Table-to-Text Generation,Zhixin Guo,http://arxiv.org/pdf/2302.12468v2.pdf,2023-02-24,['cs.cl'],2302.12468v2.pdf," Pretrained language models (PLMs) have made remarkable progress in +table-to-text generation tasks. However, the lack of domain-specific knowledge +makes it challenging to bridge the topological gap between tabular data and +text, especially in real-world applications with limited resources. To mitigate +the limitation of insufficient labeled data, we propose a novel framework: +Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt +prompt templates of domain-specific knowledge into the model, which brings at +least three benefits: (1) it injects representation of normal table-related +descriptions to bridge the topological gap between tabular data and texts; (2) +it enables us to use large amounts of unlabeled domain-specific knowledge +fully, which can alleviate the PLMs' inherent shortcomings of lacking domain +knowledge; (3) it allows us to design various tasks to explore the +domain-specific knowledge. Extensive experiments and analyses are conducted on +three open-domain few-shot natural language generation (NLG) data sets: Humans, +Songs, and Books. 
Compared to previous state-of-the-art approaches, our model +achieves superior performance in terms of both fluency and accuracy. +" +Model-tuning Via Prompts Makes NLP Models Adversarially Robust,Mrigank Raman,http://arxiv.org/pdf/2303.07320v1.pdf,2023-03-13,"['cs.cl', 'cs.lg']",2303.07320v1.pdf," In recent years, NLP practitioners have converged on the following practice: +(i) import an off-the-shelf pretrained (masked) language model; (ii) append a +multilayer perceptron atop the CLS token's hidden representation (with randomly +initialized weights); and (iii) fine-tune the entire model on a downstream task +(MLP). This procedure has produced massive gains on standard NLP benchmarks, +but these models remain brittle, even to mild adversarial perturbations, such +as word-level synonym substitutions. In this work, we demonstrate surprising +gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an +alternative method of adapting to downstream tasks. Rather than modifying the +model (by appending an MLP head), MVP instead modifies the input (by appending +a prompt template). Across three classification datasets, MVP improves +performance against adversarial word-level synonym substitutions by an average +of 8% over standard methods and even outperforms adversarial training-based +state-of-art defenses by 3.5%. By combining MVP with adversarial training, we +achieve further improvements in robust accuracy while maintaining clean +accuracy. Finally, we conduct ablations to investigate the mechanism underlying +these gains. Notably, we find that the main causes of vulnerability of MLP can +be attributed to the misalignment between pre-training and fine-tuning tasks, +and the randomly initialized MLP parameters. Code is available at +https://github.com/acmi-lab/mvp +" +"PromptAid: Prompt Exploration, Perturbation, Testing and Iteration using Visual Analytics for Large Language Models",Aditi Mishra,http://arxiv.org/pdf/2304.01964v2.pdf,2023-04-04,['cs.hc'],2304.01964v2.pdf," Large Language Models (LLMs) have gained widespread popularity due to their +ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple +natural language prompt. Part of the appeal for LLMs is their approachability +to the general public, including individuals with no prior technical experience +in NLP techniques. However, natural language prompts can vary significantly in +terms of their linguistic structure, context, and other semantics. Modifying +one or more of these aspects can result in significant differences in task +performance. Non-expert users may find it challenging to identify the changes +needed to improve a prompt, especially when they lack domain-specific knowledge +and lack appropriate feedback. To address this challenge, we present PromptAid, +a visual analytics system designed to interactively create, refine, and test +prompts through exploration, perturbation, testing, and iteration. PromptAid +uses multiple, coordinated visualizations which allow users to improve prompts +by using the three strategies: keyword perturbations, paraphrasing +perturbations, and obtaining the best set of in-context few-shot examples. +PromptAid was designed through an iterative prototyping process involving NLP +experts and was evaluated through quantitative and qualitative assessments for +LLMs. 
Our findings indicate that PromptAid helps users to iterate over prompt +template alterations with less cognitive overhead, generate diverse prompts +with help of recommendations, and analyze the performance of the generated +prompts while surpassing existing state-of-the-art prompting interfaces in +performance. +" +FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion Vision-Language Pre-training,Yunpeng Han,http://arxiv.org/pdf/2304.05051v1.pdf,2023-04-11,"['cs.cv', 'cs.cl']",2304.05051v1.pdf," Fashion vision-language pre-training models have shown efficacy for a wide +range of downstream tasks. However, general vision-language pre-training models +pay less attention to fine-grained domain features, while these features are +important in distinguishing the specific domain tasks from general tasks. We +propose a method for fine-grained fashion vision-language pre-training based on +fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained +multi-modalities fashion attributes and characteristics. Firstly, we propose +the fashion symbols, a novel abstract fashion concept layer, to represent +different fashion items and to generalize various kinds of fine-grained fashion +features, making modelling fine-grained attributes more effective. Secondly, +the attributes prompt method is proposed to make the model learn specific +attributes of fashion items explicitly. We design proper prompt templates +according to the format of fashion data. Comprehensive experiments are +conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and +FashionSAP gets SOTA performances for four popular fashion tasks. The ablation +study also shows the proposed abstract fashion symbols, and the attribute +prompt method enables the model to acquire fine-grained semantics in the +fashion domain effectively. The obvious performance gains from FashionSAP +provide a new baseline for future fashion task research. +" +"A study on Prompt Design, Advantages and Limitations of ChatGPT for Deep Learning Program Repair",Jialun Cao,http://arxiv.org/pdf/2304.08191v1.pdf,2023-04-17,['cs.se'],2304.08191v1.pdf," ChatGPT has revolutionized many research and industrial fields. ChatGPT has +shown great potential in software engineering to boost various traditional +tasks such as program repair, code understanding, and code generation. However, +whether automatic program repair (APR) applies to deep learning (DL) programs +is still unknown. DL programs, whose decision logic is not explicitly encoded +in the source code, have posed unique challenges to APR. While to repair DL +programs, an APR approach needs to not only parse the source code syntactically +but also needs to understand the code intention. With the best prior work, the +performance of fault localization is still far less than satisfactory (only +about 30\%). Therefore, in this paper, we explore ChatGPT's capability for DL +program repair by asking three research questions. (1) Can ChatGPT debug DL +programs effectively? (2) How can ChatGPT's repair performance be improved by +prompting? (3) In which way can dialogue help facilitate the repair? On top of +that, we categorize the common aspects useful for prompt design for DL program +repair. Also, we propose various prompt templates to facilitate the performance +and summarize the advantages and disadvantages of ChatGPT's abilities such as +detecting bad code smell, code refactoring, and detecting API +misuse/deprecation. 
+" +Prompt-Learning for Cross-Lingual Relation Extraction,Chiaming Hsu,http://arxiv.org/pdf/2304.10354v1.pdf,2023-04-20,['cs.cl'],2304.10354v1.pdf," Relation Extraction (RE) is a crucial task in Information Extraction, which +entails predicting relationships between entities within a given sentence. +However, extending pre-trained RE models to other languages is challenging, +particularly in real-world scenarios where Cross-Lingual Relation Extraction +(XRE) is required. Despite recent advancements in Prompt-Learning, which +involves transferring knowledge from Multilingual Pre-trained Language Models +(PLMs) to diverse downstream tasks, there is limited research on the effective +use of multilingual PLMs with prompts to improve XRE. In this paper, we present +a novel XRE algorithm based on Prompt-Tuning, referred to as Prompt-XRE. To +evaluate its effectiveness, we design and implement several prompt templates, +including hard, soft, and hybrid prompts, and empirically test their +performance on competitive multilingual PLMs, specifically mBART. Our extensive +experiments, conducted on the low-resource ACE05 benchmark across multiple +languages, demonstrate that our Prompt-XRE algorithm significantly outperforms +both vanilla multilingual PLMs and other existing models, achieving +state-of-the-art performance in XRE. To further show the generalization of our +Prompt-XRE on larger data scales, we construct and release a new XRE dataset- +WMT17-EnZh XRE, containing 0.9M English-Chinese pairs extracted from WMT 2017 +parallel corpus. Experiments on WMT17-EnZh XRE also show the effectiveness of +our Prompt-XRE against other competitive baselines. The code and newly +constructed dataset are freely available at +\url{https://github.com/HSU-CHIA-MING/Prompt-XRE}. +" +CitePrompt: Using Prompts to Identify Citation Intent in Scientific Papers,Avishek Lahiri,http://arxiv.org/pdf/2304.12730v2.pdf,2023-04-25,['cs.cl'],2304.12730v2.pdf," Citations in scientific papers not only help us trace the intellectual +lineage but also are a useful indicator of the scientific significance of the +work. Citation intents prove beneficial as they specify the role of the +citation in a given context. In this paper, we present CitePrompt, a framework +which uses the hitherto unexplored approach of prompt-based learning for +citation intent classification. We argue that with the proper choice of the +pretrained language model, the prompt template, and the prompt verbalizer, we +can not only get results that are better than or comparable to those obtained +with the state-of-the-art methods but also do it with much less exterior +information about the scientific document. We report state-of-the-art results +on the ACL-ARC dataset, and also show significant improvement on the SciCite +dataset over all baseline models except one. As suitably large labelled +datasets for citation intent classification can be quite hard to find, in a +first, we propose the conversion of this task to the few-shot and zero-shot +settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the +zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and +10-shot settings, respectively. +" +Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner,Zhengxiang Shi,http://arxiv.org/pdf/2305.01711v4.pdf,2023-05-02,['cs.cl'],2305.01711v4.pdf," Language models (LMs) trained on vast quantities of unlabelled data have +greatly advanced the field of natural language processing (NLP). 
In this study, +we re-visit the widely accepted notion in NLP that continued pre-training LMs +on task-related texts improves the performance of fine-tuning (FT) in +downstream tasks. Through experiments on eight single-sentence tasks and eight +sentence-pair tasks in both semi-supervised and fully-supervised settings, we +find that conventional continued pre-training does not consistently provide +benefits and can even be detrimental for sentence-pair tasks or when +prompt-based FT is used. To tackle these issues, we propose Prompt-based +Continued Pre-training (PCP), which combines the idea of instruction tuning +with conventional continued pre-training. Our approach aims to improve the +performance of prompt-based FT by presenting both task-related texts and prompt +templates to LMs through unsupervised pre-training objectives before +fine-tuning for the target task. Our empirical evaluations on 21 benchmarks +demonstrate that the PCP consistently improves the performance of +state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both +semi-supervised and fully-supervised settings, even with only hundreds of +unlabelled examples. Additionally, prompt-based FT with the PCP outperforms +state-of-the-art semi-supervised approaches with greater simplicity, +eliminating the need for an iterative process and extra data augmentation. Our +further analysis explores the performance lower bound of the PCP and reveals +that the advantages of PCP persist across different sizes of models and +datasets. +" +Large Language Models are Zero-Shot Rankers for Recommender Systems,Yupeng Hou,http://arxiv.org/pdf/2305.08845v1.pdf,2023-05-15,"['cs.ir', 'cs.cl']",2305.08845v1.pdf," Recently, large language models (LLMs) (e.g. GPT-4) have demonstrated +impressive general-purpose task-solving abilities, including the potential to +approach recommendation tasks. Along this line of research, this work aims to +investigate the capacity of LLMs that act as the ranking model for recommender +systems. To conduct our empirical study, we first formalize the recommendation +problem as a conditional ranking task, considering sequential interaction +histories as conditions and the items retrieved by the candidate generation +model as candidates. We adopt a specific prompting approach to solving the +ranking task by LLMs: we carefully design the prompting template by including +the sequential interaction history, the candidate items, and the ranking +instruction. We conduct extensive experiments on two widely-used datasets for +recommender systems and derive several key findings for the use of LLMs in +recommender systems. We show that LLMs have promising zero-shot ranking +abilities, even competitive to or better than conventional recommendation +models on candidates retrieved by multiple candidate generators. We also +demonstrate that LLMs struggle to perceive the order of historical interactions +and can be affected by biases like position bias, while these issues can be +alleviated via specially designed prompting and bootstrapping strategies. The +code to reproduce this work is available at +https://github.com/RUCAIBox/LLMRank. +" +TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition,Wei Xiang,http://arxiv.org/pdf/2305.10866v1.pdf,2023-05-18,['cs.cl'],2305.10866v1.pdf," Implicit Discourse Relation Recognition (IDRR) aims at classifying the +relation sense between two arguments without an explicit connective. 
Recently, +the ConnPrompt~\cite{Wei.X:et.al:2022:COLING} has leveraged the powerful prompt +learning for IDRR based on the fusion of multi-prompt decisions from three +different yet much similar connective prediction templates. Instead of +multi-prompt ensembling, we propose to design auxiliary tasks with enlightened +prompt learning for the IDRR task. Although an auxiliary task is not used to +directly output final prediction, we argue that during the joint training some +of its learned features can be useful to boost the main task. In light of such +motivations, we propose a task enlightenment prompt learning model, called +TEPrompt, to fuse learned features from three related tasks for IDRR. In +particular, the TEPrompt contains three tasks, viz., Discourse Relation +Recognition (DRR), Sense Semantics Classification (SSC) and Annotated +Connective Prediction (ACP), each with a unique prompt template and an answer +space. In the training phase, we jointly train three prompt learning tasks with +shared argument representation. In the testing phase, we only take the DRR +output with fused features as the final IDRR decision. Experiments with the +same conditions have shown that the proposed TEPrompt outperforms the +ConnPrompt. This can be attributed to the promoted decision features and +language models benefited from joint-training of auxiliary tasks. +" +Prompting ChatGPT in MNER: Enhanced Multimodal Named Entity Recognition with Auxiliary Refined Knowledge,Jinyuan Li,http://arxiv.org/pdf/2305.12212v2.pdf,2023-05-20,['cs.cl'],2305.12212v2.pdf," Multimodal Named Entity Recognition (MNER) on social media aims to enhance +textual entity prediction by incorporating image-based clues. Existing studies +mainly focus on maximizing the utilization of pertinent image information or +incorporating external knowledge from explicit knowledge bases. However, these +methods either neglect the necessity of providing the model with external +knowledge, or encounter issues of high redundancy in the retrieved knowledge. +In this paper, we present PGIM -- a two-stage framework that aims to leverage +ChatGPT as an implicit knowledge base and enable it to heuristically generate +auxiliary knowledge for more efficient entity prediction. Specifically, PGIM +contains a Multimodal Similar Example Awareness module that selects suitable +examples from a small number of predefined artificial samples. These examples +are then integrated into a formatted prompt template tailored to the MNER and +guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired +knowledge is integrated with the original text and fed into a downstream model +for further processing. Extensive experiments show that PGIM outperforms +state-of-the-art methods on two classic MNER datasets and exhibits a stronger +robustness and generalization capability. +" +"Paradigm Shift in Sustainability Disclosure Analysis: Empowering Stakeholders with CHATREPORT, a Language Model-Based Tool",Jingwei Ni,http://arxiv.org/pdf/2306.15518v1.pdf,2023-06-27,['cs.cl'],2306.15518v1.pdf," This paper introduces a novel approach to enhance Large Language Models +(LLMs) with expert knowledge to automate the analysis of corporate +sustainability reports by benchmarking them against the Task Force for +Climate-Related Financial Disclosures (TCFD) recommendations. Corporate +sustainability reports are crucial in assessing organizations' environmental +and social risks and impacts. 
However, analyzing these reports' vast amounts of +information makes human analysis often too costly. As a result, only a few +entities worldwide have the resources to analyze these reports, which could +lead to a lack of transparency. While AI-powered tools can automatically +analyze the data, they are prone to inaccuracies as they lack domain-specific +expertise. This paper introduces a novel approach to enhance LLMs with expert +knowledge to automate the analysis of corporate sustainability reports. We +christen our tool CHATREPORT, and apply it in a first use case to assess +corporate climate risk disclosures following the TCFD recommendations. +CHATREPORT results from collaborating with experts in climate science, finance, +economic policy, and computer science, demonstrating how domain experts can be +involved in developing AI tools. We make our prompt templates, generated data, +and scores available to the public to encourage transparency. +" +TIAM -- A Metric for Evaluating Alignment in Text-to-Image Generation,Paul Grimal,http://arxiv.org/pdf/2307.05134v1.pdf,2023-07-11,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2307.05134v1.pdf," The progress in the generation of synthetic images has made it crucial to +assess their quality. While several metrics have been proposed to assess the +rendering of images, it is crucial for Text-to-Image (T2I) models, which +generate images based on a prompt, to consider additional aspects such as to +which extent the generated image matches the important content of the prompt. +Moreover, although the generated images usually result from a random starting +point, the influence of this one is generally not considered. In this article, +we propose a new metric based on prompt templates to study the alignment +between the content specified in the prompt and the corresponding generated +images. It allows us to better characterize the alignment in terms of the type +of the specified objects, their number, and their color. We conducted a study +on several recent T2I models about various aspects. An additional interesting +result we obtained with our approach is that image quality can vary drastically +depending on the latent noise used as a seed for the images. We also quantify +the influence of the number of concepts in the prompt, their order as well as +their (color) attributes. Finally, our method allows us to identify some latent +seeds that produce better images than others, opening novel directions of +research on this understudied topic. +" +LLM-FuncMapper: Function Identification for Interpreting Complex Clauses in Building Codes via LLM,Zhe Zheng,http://arxiv.org/pdf/2308.08728v1.pdf,2023-08-17,"['cs.ai', 'cs.cl']",2308.08728v1.pdf," As a vital stage of automated rule checking (ARC), rule interpretation of +regulatory texts requires considerable effort. However, interpreting regulatory +clauses with implicit properties or complex computational logic is still +challenging due to the lack of domain knowledge and limited expressibility of +conventional logic representations. Thus, LLM-FuncMapper, an approach to +identifying predefined functions needed to interpret various regulatory clauses +based on the large language model (LLM), is proposed. First, by systematically +analysis of building codes, a series of atomic functions are defined to capture +shared computational logics of implicit properties and complex constraints, +creating a database of common blocks for interpreting regulatory clauses. 
Then, +a prompt template with the chain of thought is developed and further enhanced +with a classification-based tuning strategy, to enable common LLMs for +effective function identification. Finally, the proposed approach is validated +with statistical analysis, experiments, and proof of concept. Statistical +analysis reveals a long-tail distribution and high expressibility of the +developed function database, with which almost 100% of computer-processible +clauses can be interpreted and represented as computer-executable codes. +Experiments show that LLM-FuncMapper achieve promising results in identifying +relevant predefined functions for rule interpretation. Further proof of concept +in automated rule interpretation also demonstrates the possibility of +LLM-FuncMapper in interpreting complex regulatory clauses. To the best of our +knowledge, this study is the first attempt to introduce LLM for understanding +and interpreting complex regulatory clauses, which may shed light on further +adoption of LLM in the construction domain. +" +Prompt-Based Length Controlled Generation with Reinforcement Learning,Renlong Jie,http://arxiv.org/pdf/2308.12030v2.pdf,2023-08-23,"['cs.cl', 'cs.ai', 'cs.lg']",2308.12030v2.pdf," Large language models (LLMs) like ChatGPT and GPT-4 have attracted great +attention given their surprising performance on a wide range of NLP tasks. +Length controlled generation of LLMs emerges as an important topic, which +enables users to fully leverage the capability of LLMs in more real-world +scenarios like generating a proper answer or essay of a desired length. In +addition, the autoregressive generation in LLMs is extremely time-consuming, +while the ability of controlling this generated length can reduce the inference +cost by limiting the length. Therefore, we propose a prompt-based length +control method to achieve high-accuracy length controlled generation. In +particular, we adopt reinforcement learning with the reward signal given by +either trainable or rule-based reward models, which further enhances the +length-control ability of LLMs by rewarding outputs that follows pre-defined +control instruction. To enable rule-based inference, we also introduce standard +prompt extractor to collect the standard control information from users' input. +Experiments show that our method significantly improves the accuracy of +prompt-based length control for summarization task on popular datasets like +CNNDM and NYT. Both the standard prompt extractor and the RL-tuned model have +show strong generalization ability to unseen control prompt templates. +" +LLM Powered Sim-to-real Transfer for Traffic Signal Control,Longchao Da,http://arxiv.org/pdf/2308.14284v3.pdf,2023-08-28,"['cs.ai', 'h.4.0']",2308.14284v3.pdf," Numerous solutions are proposed for the Traffic Signal Control (TSC) tasks +aiming to provide efficient transportation and mitigate congestion waste. In +recent, promising results have been attained by Reinforcement Learning (RL) +methods through trial and error in simulators, bringing confidence in solving +cities' congestion headaches. However, there still exist performance gaps when +simulator-trained policies are deployed to the real world. This issue is mainly +introduced by the system dynamic difference between the training simulator and +the real-world environments. The Large Language Models (LLMs) are trained on +mass knowledge and proved to be equipped with astonishing inference abilities. 
+In this work, we leverage LLMs to understand and profile the system dynamics by +a prompt-based grounded action transformation. Accepting the cloze prompt +template, and then filling in the answer based on accessible context, the +pre-trained LLM's inference ability is exploited and applied to understand how +weather conditions, traffic states, and road types influence traffic dynamics, +being aware of this, the policies' action is taken and grounded based on +realistic dynamics, thus help the agent learn a more realistic policy. We +conduct experiments using DQN to show the effectiveness of the proposed +PromptGAT's ability in mitigating the performance gap from simulation to +reality (sim-to-real). +" +AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly Localization,Hanqiu Deng,http://arxiv.org/pdf/2308.15939v1.pdf,2023-08-30,['cs.cv'],2308.15939v1.pdf," Contrastive Language-Image Pre-training (CLIP) models have shown promising +performance on zero-shot visual recognition tasks by learning visual +representations under natural language supervision. Recent studies attempt the +use of CLIP to tackle zero-shot anomaly detection by matching images with +normal and abnormal state prompts. However, since CLIP focuses on building +correspondence between paired text prompts and global image-level +representations, the lack of patch-level vision to text alignment limits its +capability on precise visual anomaly localization. In this work, we introduce a +training-free adaptation (TFA) framework of CLIP for zero-shot anomaly +localization. In the visual encoder, we innovate a training-free value-wise +attention mechanism to extract intrinsic local tokens of CLIP for patch-level +local description. From the perspective of text supervision, we particularly +design a unified domain-aware contrastive state prompting template. On top of +the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism +to refine anomaly localization results, where a layer of trainable parameters +in the adapter is optimized using TFA's pseudo-labels and synthetic +noise-corrupted tokens. With both TFA and TTA adaptation, we significantly +exploit the potential of CLIP for zero-shot anomaly localization and +demonstrate the effectiveness of our proposed methods on various datasets. +" +Investigating the Applicability of Self-Assessment Tests for Personality Measurement of Large Language Models,Akshat Gupta,http://arxiv.org/pdf/2309.08163v1.pdf,2023-09-15,"['cs.cl', 'cs.ai']",2309.08163v1.pdf," As large language models (LLM) evolve in their capabilities, various recent +studies have tried to quantify their behavior using psychological tools created +to study human behavior. One such example is the measurement of ""personality"" +of LLMs using personality self-assessment tests. In this paper, we take three +such studies on personality measurement of LLMs that use personality +self-assessment tests created to study human behavior. We use the prompts used +in these three different papers to measure the personality of the same LLM. We +find that all three prompts lead very different personality scores. This simple +test reveals that personality self-assessment scores in LLMs depend on the +subjective choice of the prompter. Since we don't know the ground truth value +of personality scores for LLMs as there is no correct answer to such questions, +there's no way of claiming if one prompt is more or less correct than the +other. 
We then introduce the property of option order symmetry for personality +measurement of LLMs. Since most of the self-assessment tests exist in the form +of multiple choice question (MCQ) questions, we argue that the scores should +also be robust to not just the prompt template but also the order in which the +options are presented. This test unsurprisingly reveals that the answers to the +self-assessment tests are not robust to the order of the options. These simple +tests, done on ChatGPT and Llama2 models show that self-assessment personality +tests created for humans are not appropriate for measuring personality in LLMs. +" +InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists,Yulu Gan,http://arxiv.org/pdf/2310.00390v1.pdf,2023-09-30,['cs.cv'],2310.00390v1.pdf," Recent advances in generative diffusion models have enabled text-controlled +synthesis of realistic and diverse images with impressive quality. Despite +these remarkable advances, the application of text-to-image generative models +in computer vision for standard visual recognition tasks remains limited. The +current de facto approach for these tasks is to design model architectures and +loss functions that are tailored to the task at hand. In this paper, we develop +a unified language interface for computer vision tasks that abstracts away +task-specific design choices and enables task execution by following natural +language instructions. Our approach involves casting multiple computer vision +tasks as text-to-image generation problems. Here, the text represents an +instruction describing the task, and the resulting image is a visually-encoded +task output. To train our model, we pool commonly-used computer vision datasets +covering a range of tasks, including segmentation, object detection, depth +estimation, and classification. We then use a large language model to +paraphrase prompt templates that convey the specific tasks to be conducted on +each image, and through this process, we create a multi-modal and multi-task +training dataset comprising input and output images along with annotated +instructions. Following the InstructPix2Pix architecture, we apply +instruction-tuning to a text-to-image diffusion model using our constructed +dataset, steering its functionality from a generative model to an +instruction-guided multi-task vision learner. Experiments demonstrate that our +model, dubbed InstructCV, performs competitively compared to other generalist +and task-specific vision models. Moreover, it exhibits compelling +generalization capabilities to unseen data, categories, and user instructions. +" +Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task,Guanting Dong,http://arxiv.org/pdf/2310.06504v1.pdf,2023-10-10,"['cs.cl', 'cs.ai', 'cs.lg']",2310.06504v1.pdf," With the increasing capabilities of large language models (LLMs), these +high-performance models have achieved state-of-the-art results on a wide range +of natural language processing (NLP) tasks. However, the models' performance on +commonly-used benchmark datasets often fails to accurately reflect their +reliability and robustness when applied to real-world noisy data. To address +these challenges, we propose a unified robustness evaluation framework based on +the slot-filling task to systematically evaluate the dialogue understanding +capability of LLMs in diverse input perturbation scenarios. 
Specifically, we +construct a input perturbation evaluation dataset, Noise-LLM, which contains +five types of single perturbation and four types of mixed perturbation data. +Furthermore, we utilize a multi-level data augmentation method (character, +word, and sentence levels) to construct a candidate data pool, and carefully +design two ways of automatic task demonstration construction strategies +(instance-level and entity-level) with various prompt templates. Our aim is to +assess how well various robustness methods of LLMs perform in real-world noisy +scenarios. The experiments have demonstrated that the current open-source LLMs +generally achieve limited perturbation robustness performance. Based on these +experimental observations, we make some forward-looking suggestions to fuel the +research in this direction. +" +Do Language Models Learn about Legal Entity Types during Pretraining?,Claire Barale,http://arxiv.org/pdf/2310.13092v1.pdf,2023-10-19,['cs.cl'],2310.13092v1.pdf," Language Models (LMs) have proven their ability to acquire diverse linguistic +knowledge during the pretraining phase, potentially serving as a valuable +source of incidental supervision for downstream tasks. However, there has been +limited research conducted on the retrieval of domain-specific knowledge, and +specifically legal knowledge. We propose to explore the task of Entity Typing, +serving as a proxy for evaluating legal knowledge as an essential aspect of +text comprehension, and a foundational task to numerous downstream legal NLP +applications. Through systematic evaluation and analysis and two types of +prompting (cloze sentences and QA-based templates) and to clarify the nature of +these acquired cues, we compare diverse types and lengths of entities both +general and domain-specific entities, semantics or syntax signals, and +different LM pretraining corpus (generic and legal-oriented) and architectures +(encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 +performs well on certain entities and exhibits potential for substantial +improvement with optimized prompt templates, (2) law-oriented LMs show +inconsistent performance, possibly due to variations in their training corpus, +(3) LMs demonstrate the ability to type entities even in the case of +multi-token entities, (4) all models struggle with entities belonging to +sub-domains of the law (5) Llama2 appears to frequently overlook syntactic +cues, a shortcoming less present in BERT-based architectures. +" +LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking,Zhenrui Yue,http://arxiv.org/pdf/2311.02089v1.pdf,2023-10-25,"['cs.ir', 'cs.ai', 'cs.cl']",2311.02089v1.pdf," Recently, large language models (LLMs) have exhibited significant progress in +language understanding and generation. By leveraging textual features, +customized LLMs are also applied for recommendation and demonstrate +improvements across diverse recommendation scenarios. Yet the majority of +existing methods perform training-free recommendation that heavily relies on +pretrained knowledge (e.g., movie recommendation). In addition, inference on +LLMs is slow due to autoregressive generation, rendering existing methods less +effective for real-time recommendation. As such, we propose a two-stage +framework using large language models for ranking-based recommendation +(LlamaRec). In particular, we use small-scale sequential recommenders to +retrieve candidates based on the user interaction history. 
Then, both history +and retrieved items are fed to the LLM in text via a carefully designed prompt +template. Instead of generating next-item titles, we adopt a verbalizer-based +approach that transforms output logits into probability distributions over the +candidate items. Therefore, the proposed LlamaRec can efficiently rank items +without generating long text. To validate the effectiveness of the proposed +framework, we compare against state-of-the-art baseline methods on benchmark +datasets. Our experimental results demonstrate the performance of LlamaRec, +which consistently achieves superior performance in both recommendation +performance and efficiency. +" +Prompting Multilingual Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages,Zheng-Xin Yong,http://arxiv.org/pdf/2303.13592v4.pdf,2023-03-23,"['cs.cl', 'cs.ai']",2303.13592v4.pdf," While code-mixing is a common linguistic practice in many parts of the world, +collecting high-quality and low-cost code-mixed data remains a challenge for +natural language processing (NLP) research. The recent proliferation of Large +Language Models (LLMs) compels one to ask: how capable are these systems in +generating code-mixed data? In this paper, we explore prompting multilingual +LLMs in a zero-shot manner to generate code-mixed data for seven languages in +South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese, +Tamil, and Singlish. We find that publicly available multilingual +instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of +producing texts with phrases or clauses from different languages. ChatGPT +exhibits inconsistent capabilities in generating code-mixed texts, wherein its +performance varies depending on the prompt template and language pairing. For +instance, ChatGPT generates fluent and natural Singlish texts (an English-based +creole spoken in Singapore), but for English-Tamil language pair, the system +mostly produces grammatically incorrect or semantically meaningless utterances. +Furthermore, it may erroneously introduce languages not specified in the +prompt. Based on our investigation, existing multilingual LLMs exhibit a wide +range of proficiency in code-mixed data generation for SEA languages. As such, +we advise against using LLMs in this context without extensive human checks. +" +"Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency",Zhihan Liu,http://arxiv.org/pdf/2309.17382v2.pdf,2023-09-29,"['cs.ai', 'cs.lg']",2309.17382v2.pdf," Large language models (LLMs) demonstrate impressive reasoning abilities, but +translating reasoning into actions in the real world remains challenging. In +particular, it remains unclear how to complete a given task provably within a +minimum number of interactions with the external environment, e.g., through an +internal mechanism of reasoning. To this end, we propose a principled framework +with provable regret guarantees to orchestrate reasoning and acting, which we +call ""reason for future, act for now"" (\texttt{RAFA}). Specifically, we design +a prompt template for reasoning that learns from the memory buffer and plans a +future trajectory over a long horizon (""reason for future""). At each step, the +LLM agent takes the initial action of the planned trajectory (""act for now""), +stores the collected feedback in the memory buffer, and reinvokes the reasoning +routine to replan the future trajectory from the new state. 
+ The key idea is to cast reasoning in LLMs as learning and planning in +Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt +LLMs to form an updated posterior of the unknown environment from the memory +buffer (learning) and generate an optimal trajectory for multiple future steps +that maximizes a value function (planning). The learning and planning +subroutines are performed in an ""in-context"" manner to emulate the actor-critic +update for MDPs. Our theoretical analysis proves that the novel combination of +long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In +particular, the regret bound highlights an intriguing interplay between the +prior knowledge obtained through pretraining and the uncertainty reduction +achieved by reasoning and acting. Our empirical validation shows that it +outperforms various existing frameworks and achieves nearly perfect scores on a +few benchmarks. +" +ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction,Jianghao Lin,http://arxiv.org/pdf/2310.09234v2.pdf,2023-10-13,"['cs.ir', 'cs.ai']",2310.09234v2.pdf," Click-through rate (CTR) prediction has become increasingly indispensable for +various Internet applications. Traditional CTR models convert the multi-field +categorical data into ID features via one-hot encoding, and extract the +collaborative signals among features. Such a paradigm suffers from the problem +of semantic information loss. Another line of research explores the potential +of pretrained language models (PLMs) for CTR prediction by converting input +data into textual sentences through hard prompt templates. Although semantic +signals are preserved, they generally fail to capture the collaborative +information (e.g., feature interactions, pure ID features), not to mention the +unacceptable inference overhead brought by the huge model size. In this paper, +we aim to model both the semantic knowledge and collaborative knowledge for +accurate CTR estimation, and meanwhile address the inference inefficiency +issue. To benefit from both worlds and close their gaps, we propose a novel +model-agnostic framework (i.e., ClickPrompt), where we incorporate CTR models +to generate interaction-aware soft prompts for PLMs. We design a +prompt-augmented masked language modeling (PA-MLM) pretraining task, where PLM +has to recover the masked tokens based on the language context, as well as the +soft prompts generated by CTR model. The collaborative and semantic knowledge +from ID and textual features would be explicitly aligned and interacted via the +prompt interface. Then, we can either tune the CTR model with PLM for superior +performance, or solely tune the CTR model without PLM for inference efficiency. +Experiments on four real-world datasets validate the effectiveness of +ClickPrompt compared with existing baselines. +" +ALT: Towards Fine-grained Alignment between Language and CTR Models for Click-Through Rate Prediction,Hangyu Wang,http://arxiv.org/pdf/2310.19453v1.pdf,2023-10-30,"['cs.ir', 'cs.ai']",2310.19453v1.pdf," Click-through rate (CTR) prediction plays as a core function module in +various personalized online services. According to the data modality and input +format, the models for CTR prediction can be mainly classified into two +categories. The first one is the traditional CTR models that take as inputs the +one-hot encoded ID features of tabular modality, which aims to capture the +collaborative signals via feature interaction modeling. 
The second category +takes as inputs the sentences of textual modality obtained by hard prompt +templates, where pretrained language models (PLMs) are adopted to extract the +semantic knowledge. These two lines of research generally focus on different +characteristics of the same input data (i.e., textual and tabular modalities), +forming a distinct complementary relationship with each other. Therefore, in +this paper, we propose to conduct fine-grained feature-level Alignment between +Language and CTR models (ALT) for CTR prediction. Apart from the common +CLIP-like instance-level contrastive learning, we further design a novel joint +reconstruction pretraining task for both masked language and tabular modeling. +Specifically, the masked data of one modality (i.e., tokens or features) has to +be recovered with the help of the other modality, which establishes the +feature-level interaction and alignment via sufficient mutual information +extraction between dual modalities. Moreover, we propose three different +finetuning strategies with the option to train the aligned language and CTR +models separately or jointly for downstream CTR prediction tasks, thus +accommodating the varying efficacy and efficiency requirements for industrial +applications. Extensive experiments on three real-world datasets demonstrate +that ALT outperforms SOTA baselines, and is highly compatible for various +language and CTR models. +" diff --git a/blacklist.csv b/blacklist.csv new file mode 100644 index 0000000000000000000000000000000000000000..72b72016993daea44ae0bb7ce3911f55171063c8 --- /dev/null +++ b/blacklist.csv @@ -0,0 +1,394 @@ +title, link, reason +a brief history of prompt leveraging language models, https://arxiv.org/abs/2310.04438, AI Generated +hydrogenrich supernovae beyond the neutrinodriven corecollapse paradigm,,About Space not Prompting +fewshot learning with localization in realistic settings,,not related to prompting +crosslingual alignment of contextual word embeddings with applications to zeroshot dependency parsing,, no Prompting +analogyforming transformers for fewshot 3d parsing,, no prompting +generalpurpose incontext learning by metalearning transformers,, no prompting +a survey of deep learning for lowshot object detection,, no prompting +fewshot classincremental learning a survey,, no prompting +balanced and explainable social media analysis for public health with large language models,,uses BERT +querydependent prompt evaluation and opti mization with offline inverse rl,,more about deep RL than prompting +deltaedit exploring textfree training for textdriven image manipulation,,too training focused +deep language networks joint prompt training of stacked llms using variational inference,, too training focused +unnatural language processing how do language models handle machinegenerated prompts,, too training focused +give me the facts! 
a survey on factual knowledge probing in pretrained language models,, cloze focused +taskdriven prompt evolution for foundation models,, training related +diversityaware meta visual prompting,, training focused +drpt disentangled and recurrent prompt tuning for compositional zeroshot learning,, tuning +deltaspace a semanticaligned feature space for flexible textguided image editing,, training focused +instructpix2nerf instructed 3d portrait editing from a single image,, not really about prompting +what changes can largescale language models bring intensive study on hyperclova billionsscale korean generative pretrained transformers,, about a model not prompts +mllmdataengine an iterative refinement approach for mllm,,soft prompting +unleashing the power of pretrained language models for offline reinforcement learning,, out-of-scope +expt synthetic pretraining for fewshot experimental design,, no prompting +improving inputlabel mapping with demonstration replay for incontext learning,, out-of-domain +apollo zeroshot multimodal reasoning with multiple experts, 2310.18369v1.pdf, Lower-Level Transformer Modification - Not Prompting +fewshot learning with siamese networks and label tuning,, no prompting +mgimn multigrained interactive matching network for fewshot text classification,, no prompting +zero and fewshot learning for author profiling,, about models not prompting +"prompt, generate, then cache cascade of foundation models makes strong fewshot learners", http://arxiv.org/pdf/2303.02151v1.pdf, training +gradientregulated metaprompt learning for generalizable visionlanguage models, http://arxiv.org/pdf/2303.06571v2.pdf, soft prompting +decomposed prototype learning for fewshot scene graph generation,http://arxiv.org/pdf/2303.10863v1.pdf, continuous prompts +supervised masked knowledge distillation for fewshot transformers,, no prompting +"multimodal c4 an open, billionscale corpus of images interleaved with text", http://arxiv.org/pdf/2303.15466v2.pdf, no prompting +a survey on fewshot classincremental learning,http://arxiv.org/pdf/2304.06939v3.pdf, no prompting +unified quantum state tomography and hamiltonian learning using transformer models a languagetranslationlike approach for quantum systems, http://arxiv.org/pdf/2304.08130v2.pdf, no prompting +pointgpt autoregressively generative pretraining from point clouds, http://arxiv.org/pdf/2305.11487v2.pdf, continuous prompts +a survey of diffusion models in natural language processing,http://arxiv.org/pdf/2305.14671v2.pdf, no prompting +oneforall generalized lora for parameterefficient finetuning, http://arxiv.org/pdf/2306.07967v2.pdf, tuning +protodiff learning to learn prototypical networks by taskguided diffusion, http://arxiv.org/pdf/2306.14770v2.pdf, no prompting +effective transfer of pretrained large visual model for fabric defect segmentation via specifc knowledge injection, http://arxiv.org/pdf/2306.16186v1.pdf, no prompting +metatraining with demonstration retrieval for efficient fewshot learning, http://arxiv.org/pdf/2307.00119v1.pdf, cloze prompting +tableye seeing small tables through the lens of images, http://arxiv.org/pdf/2307.02491v1.pdf, no prompting +identifying misinformation on youtube through transcript contextual analysis with transformer models, http://arxiv.org/pdf/2307.12155v1.pdf, no prompting +linkcontext learning for multimodal llms, http://arxiv.org/pdf/2308.07891v1.pdf, no prompting +less is more towards efficient fewshot 3d semantic segmentation via trainingfree networks, 
http://arxiv.org/pdf/2308.12961v1.pdf, no prompting +transprompt v2 a transferable prompting framework for crosstask text classification, http://arxiv.org/pdf/2308.15010v1.pdf, soft prompting +selfsampling meta sam enhancing fewshot medical image segmentation with metalearning, http://arxiv.org/pdf/2308.16466v3.pdf, training +promptbased node feature extractor for fewshot learning on textattributed graphs, http://arxiv.org/pdf/2309.02848v1.pdf, cloze prompts +crossimage context matters for bongard problems, http://arxiv.org/pdf/2309.03468v1.pdf, no prompting +dept decomposed prompt tuning for parameterefficient finetuning, http://arxiv.org/pdf/2309.05173v2.pdf, tuning +glad contentaware dynamic graphs for log anomaly detection, http://arxiv.org/pdf/2309.05953v1.pdf, cloze prompting +sct a simple baseline for parameterefficient finetuning via salient channels, http://arxiv.org/pdf/2309.08513v2.pdf, tuning +pactuningfinetuning pretrained language models with pacdriven perturbed gradient descent, http://arxiv.org/pdf/2310.17588v1.pdf, no prompting +on taskpersonalized multimodal fewshot learning for visuallyrich document entity retrieval, http://arxiv.org/pdf/2311.00693v1.pdf, no prompting +robust finetuning of visionlanguage models for domain generalization, http://arxiv.org/pdf/2311.02236v1.pdf, no prompting +lesion2vec deep metric learning for fewshot multiple lesions recognition in wireless capsule endoscopy video, http://arxiv.org/pdf/2101.04240v2.pdf, no prompting +unsupervised law article mining based on deep pretrained language representation models with application to the italian civil code, http://arxiv.org/pdf/2112.03033v1.pdf, no prompting +"using deepspeed and megatron to train megatronturing nlg 530b, a largescale generative language model", http://arxiv.org/pdf/2201.11990v3.pdf, training +data distributional properties drive emergent incontext learning in transformers, http://arxiv.org/pdf/2205.05055v6.pdf, no prompting +hungry hungry hippos towards language modeling with state space models, http://arxiv.org/pdf/2212.14052v3.pdf, no prompting +clip2scene towards labelefficient 3d scene understanding by clip, http://arxiv.org/pdf/2301.04926v2.pdf, cloze prompting +learning to detect an animal sound from five examples, http://arxiv.org/pdf/2305.13210v1.pdf, no prompting +the rise of ai language pathologists exploring twolevel prompt learning for fewshot weaklysupervised whole slide image classification, http://arxiv.org/pdf/2305.17891v1.pdf, training +language models are fewshot learners, http://arxiv.org/pdf/2005.14165v4.pdf, training +when promptbased incremental learning does not meet strong pretraining, http://arxiv.org/pdf/2308.10445v1.pdf, training +"fewer errors, but more stereotypes the effect of model size on gender bias", http://arxiv.org/pdf/2206.09860v1.pdf, MLMs and cloze prompting +promptattack promptbased attack for language models via gradient search, http://arxiv.org/pdf/2209.01882v1.pdf, cloze prompting +can language models be specific how, http://arxiv.org/pdf/2210.05159v2.pdf, cloze prompting +multilingual relation classification via efficient and effective prompting, http://arxiv.org/pdf/2210.13838v2.pdf, soft prompting +spe symmetrical prompt enhancement for fact probing, http://arxiv.org/pdf/2211.07078v1.pdf, soft prompting +evaluating the robustness of discrete prompts, http://arxiv.org/pdf/2302.05619v1.pdf, cloze prompting +syntaxaware hybrid prompt model for fewshot multimodal sentiment analysis, http://arxiv.org/pdf/2306.01312v2.pdf, soft and cloze 
prompting
unified multimodal pretraining and promptbased tuning for visionlanguage understanding and generation, http://arxiv.org/pdf/2112.05587v2.pdf, MLMs and cloze prompting
learning to transfer prompts for text generation, http://arxiv.org/pdf/2205.01543v2.pdf, soft prompting
towards realistic lowresource relation extraction a benchmark with empirical baseline study, http://arxiv.org/pdf/2210.10678v3.pdf, tuning and cloze prompting
promptfusion decoupling stability and plasticity for continual learning, http://arxiv.org/pdf/2303.07223v1.pdf, tuning
are promptbased models clueless, http://arxiv.org/pdf/2205.09295v2.pdf, cloze prompting
avoiding inference heuristics in fewshot promptbased finetuning, http://arxiv.org/pdf/2109.04144v1.pdf, tuning
p4e fewshot event detection as promptguided identification and localization, http://arxiv.org/pdf/2202.07615v3.pdf, cloze prompting
partslip lowshot part segmentation for 3d point clouds via pretrained imagelanguage models, http://arxiv.org/pdf/2212.01558v2.pdf, tuning
sparsefit fewshot prompting with sparse finetuning for jointly generating predictions and natural language explanations, http://arxiv.org/pdf/2305.13235v2.pdf, training and tuning
large language model distillation doesn't need a teacher, http://arxiv.org/pdf/2305.14864v1.pdf, training
multiqgti towards question generation from multimodal sources, http://arxiv.org/pdf/2307.04643v1.pdf, no prompting
why is prompt tuning for visionlanguage models robust to noisy labels, http://arxiv.org/pdf/2307.11978v1.pdf, tuning
lowparameter federated learning with large language models, http://arxiv.org/pdf/2307.13896v1.pdf, tuning and MLM
olala ontology matching with large language models, http://arxiv.org/pdf/2311.03837v1.pdf, uses BERT no specified prefix prompting
crosslingual supervision improves large language models pretraining, http://arxiv.org/pdf/2305.11778v1.pdf, training focused
explaincpe a freetext explanation benchmark of chinese pharmacist examination,http://arxiv.org/pdf/2305.12945v2.pdf, training focused
adapting language models to compress contexts, http://arxiv.org/pdf/2305.14788v2.pdf, soft prompting
a mechanism for sampleefficient incontext learning for sparse retrieval tasks, http://arxiv.org/pdf/2305.17040v1.pdf, more about LM interpretability than prompting
large language models are partially primed in pronoun interpretation, http://arxiv.org/pdf/2305.16917v1.pdf, uses in-context learning but is not about prompting methods
contextual vision transformers for robust representation learning,http://arxiv.org/pdf/2305.19402v2.pdf, not about prefix prompting
selfverification improves fewshot clinical information extraction, http://arxiv.org/pdf/2306.00024v1.pdf, is about verifying output not modifying input
measuring and modifying factual knowledge in large language models,http://arxiv.org/pdf/2306.06264v1.pdf, mentions in context learning but it is not the focus
a survey on multimodal large language models,http://arxiv.org/pdf/2306.13549v1.pdf, not focused on prompting
potential benefits of employing large language models in research in moral education and development,http://arxiv.org/pdf/2306.13805v2.pdf, not particularly about prompting
assessing the efficacy of large language models in generating accurate teacher responses,http://arxiv.org/pdf/2307.04274v1.pdf, does not focus on prompting methods
unsupervised calibration through prior adaptation for text classification using large language models,http://arxiv.org/pdf/2307.06713v3.pdf, does not focus 
on prompting methods +baby's cothought leveraging large language models for enhanced reasoning in compact models,http://arxiv.org/pdf/2308.01684v2.pdf, focuses on training other models +diffusion language models can perform many tasks with scaling and instructionfinetuning,http://arxiv.org/pdf/2308.12219v2.pdf, focuses on training +large language model as autonomous decision maker,http://arxiv.org/pdf/2308.12519v1.pdf, not about prompting methods +speechtospeech translation with discreteunitbased style transfer,http://arxiv.org/pdf/2309.07566v1.pdf, speech to speech translation +language modeling is compression,http://arxiv.org/pdf/2309.10668v1.pdf, more about explaining in-context learning than proposing a method +text data augmentation in lowresource settings via finetuning of large language models,http://arxiv.org/pdf/2310.01119v1.pdf, focuses on training +humans and language models diverge when predicting repeating text,http://arxiv.org/pdf/2310.06408v2.pdf, focuses on evaluating humans and comparing to prompting method +amago scalable incontext reinforcement learning for adaptive agents,http://arxiv.org/pdf/2310.09971v2.pdf, not about LMs; this is an RL paper +meta (outofcontext) learning in neural networks,http://arxiv.org/pdf/2310.15047v2.pdf, evaluates in-context learning but is not based on it +towards trainingfree openworld segmentation via image prompting foundation models,http://arxiv.org/pdf/2310.10912v1.pdf,image segmentation +videoprompter an ensemble of foundational models for zeroshot video understanding,http://arxiv.org/pdf/2310.15324v1.pdf,"video understanding, different domain" +improving diversity of demographic representation in large language models via collectivecritiques and selfvoting,http://arxiv.org/pdf/2310.16523v1.pdf,"model representation, not prompting" +the power of large language models for wireless communication system development a case study on fpga platforms,http://arxiv.org/pdf/2307.07319v4.pdf,not prompting +large language models enable fewshot clustering,http://arxiv.org/pdf/2307.00524v1.pdf,"few-shot clustering, not prompting" +universal fuzzing via large language models,http://arxiv.org/pdf/2308.04748v1.pdf,does not use hard-prefix prompts +trainingfree openworld segmentation via image prompting foundation models,,image segmentation +fire food image to recipe generation,http://arxiv.org/pdf/2308.14391v1.pdf,image to text translation +large language models can accurately predict searcher preferences,http://arxiv.org/pdf/2309.10621v1.pdf,does not use hard-prefix prompts +understanding incontext learning from repetitions,http://arxiv.org/pdf/2310.00297v2.pdf,"focus is on effects of repetition in in-context learning, not prompting" +small language models finetuned to coordinate larger language models improve complex reasoning,http://arxiv.org/pdf/2310.18338v1.pdf,"focus on fine-tuning, not hard-prefix prompting" +revisiting large language models as zeroshot relation extractors,http://arxiv.org/pdf/2310.05028v3.pdf,zero-shot learning for relation extraction +characterizing attribution and fluency tradeoffs for retrievalaugmented large language models,http://arxiv.org/pdf/2302.05578v2.pdf,RAG +llmeval unified multidimensional automatic evaluation for opendomain conversations with large language models,http://arxiv.org/pdf/2305.13711v1.pdf,eval of LLMs +robot task planning based on large language model representing knowledge with directed graph structures,http://arxiv.org/pdf/2306.05171v1.pdf,knowledge representation +optimus optimization modeling using 
mip solvers and large language models,http://arxiv.org/pdf/2310.06116v2.pdf,"different approach, MIP solvers" +promptinfuser how tightly coupling ai and ui design impacts designers' workflows,http://arxiv.org/pdf/2310.15435v1.pdf,focus on UI +a monte carlo language model pipeline for zeroshot sociopolitical event extraction,http://arxiv.org/pdf/2305.15051v1.pdf,"monte carlo methods, not prompting" +finetune language models to approximate unbiased incontext learning,http://arxiv.org/pdf/2310.03331v1.pdf,fine-tuning +on the compositional generalization gap of incontext learning,http://arxiv.org/pdf/2211.08473v1.pdf,"compositional generalization, not hard-prefix prompting" +fewshot finetuning vs incontext learning a fair comparison and evaluation,http://arxiv.org/pdf/2305.16938v2.pdf,no hard-prefix prompting +stylemc multichannel based fast textguided image generation and manipulation, http://arxiv.org/pdf/2112.08493v1.pdf, not prompt engineering +testtime training on nearest neighbors for large language models, http://arxiv.org/pdf/2305.18466v2.pdf, fine-tuning +chain of natural language inference for reducing large language model ungrounded hallucinations, http://arxiv.org/pdf/2310.03951v2.pdf, no prompt engineering +differentiable prompt makes pretrained language models better fewshot learners, http://arxiv.org/pdf/2108.13161v7.pdf, not hard prompts +mme a comprehensive evaluation benchmark for multimodal large language models, http://arxiv.org/pdf/2306.13394v2.pdf, not specifically hard prompting +protoclip visionlanguage prototypical network for fewshot learning, http://arxiv.org/pdf/2307.03073v2.pdf, not prompting +a survey on recent named entity recognition and relation classification methods with focus on fewshot learning approaches, http://arxiv.org/pdf/2310.19055v1.pdf, not prompting +improving incontext fewshot learning via selfsupervised training, http://arxiv.org/pdf/2205.01703v2.pdf, pretraining +revisiting fewshot learning from a causal perspective, http://arxiv.org/pdf/2209.13816v1.pdf, not prompting +film how can fewshot image classification benefit from pretrained language models, http://arxiv.org/pdf/2307.04114v1.pdf, not hard prefix prompting +clues fewshot learning evaluation in natural language understanding, http://arxiv.org/pdf/2111.02570v1.pdf, no prompt engineering +improving fewshot generalization by exploring and exploiting auxiliary data, http://arxiv.org/pdf/2302.00674v4.pdf, not prompt engineering. 
+prompt space optimizing fewshot reasoning success with large language models, http://arxiv.org/pdf/2306.03799v1.pdf, not prompt engineering +universal fewshot learning of dense prediction tasks with visual token matching, http://arxiv.org/pdf/2303.14969v1.pdf, not prompting +fdalign feature discrimination alignment for finetuning pretrained models in fewshot learning, http://arxiv.org/pdf/2310.15105v3.pdf, fine tuning +modelagnostic graph regularization for fewshot learning, http://arxiv.org/pdf/2102.07077v1.pdf, not prompting +uniform sampling over episode difficulty, http://arxiv.org/pdf/2108.01662v2.pdf, not prompting +metalearning with taskadaptive loss function for fewshot learning, http://arxiv.org/pdf/2110.03909v2.pdf, focuses on meta-learning +on measuring the intrinsic fewshot hardness of datasets, http://arxiv.org/pdf/2211.09113v1.pdf, not prompting +mera merging pretrained adapters for fewshot learning, http://arxiv.org/pdf/2308.15982v1.pdf, not prompting +metaadapter an online fewshot learner for visionlanguage model, http://arxiv.org/pdf/2311.03774v1.pdf, not prompting +pushing the limits of simple pipelines for fewshot learning external data and finetuning make a difference, http://arxiv.org/pdf/2204.07305v1.pdf, focus on few-shot learning. +multilevel finetuning data augmentation and fewshot learning for specialized cyber threat intelligence, http://arxiv.org/pdf/2207.11076v1.pdf, training +fewshot classification with hypersphere modeling of prototypes, http://arxiv.org/pdf/2211.05319v1.pdf, not prompting +styleadv meta style adversarial training for crossdomain fewshot learning, http://arxiv.org/pdf/2302.09309v2.pdf, not prompting +federated fewshot learning for cough classification with edge devices, http://arxiv.org/pdf/2309.01076v1.pdf, not prompting +is support set diversity necessary for metalearning, http://arxiv.org/pdf/2011.14048v2.pdf, not prompting +entailment as fewshot learner, http://arxiv.org/pdf/2104.14690v1.pdf, not prompt engineering +wavprompt towards fewshot spoken language understanding with frozen language models, http://arxiv.org/pdf/2203.15863v2.pdf, fine-tuning +aligning magma by fewshot learning and finetuning, http://arxiv.org/pdf/2210.14161v1.pdf, finetuning not prompting. +stunt fewshot tabular learning with selfgenerated tasks from unlabeled tables, http://arxiv.org/pdf/2303.00918v1.pdf, not prompting +prototypesoriented transductive fewshot learning with conditional transport, http://arxiv.org/pdf/2308.03047v1.pdf, not prompting +coca classifieroriented calibration for sourcefree universal domain adaptation via textual prototype, http://arxiv.org/pdf/2308.10450v1.pdf, no prompt engineering +improving generalization in large language models by learning prefix subspaces, http://arxiv.org/pdf/2310.15793v1.pdf, not prompting +zeroshot and fewshot learning with knowledge graphs a comprehensive survey, http://arxiv.org/pdf/2112.10006v6.pdf, not prompting +on unifying misinformation detection, http://arxiv.org/pdf/2104.05243v1.pdf, training +human in the loop how to effectively create coherent topics by manually labeling only a few documents per class, http://arxiv.org/pdf/2212.09422v1.pdf, not prompting. 
+neuroclip neuromorphic data understanding by clip and snn, http://arxiv.org/pdf/2306.12073v1.pdf, not prompting +ppt pretrained prompt tuning for fewshot learning, http://arxiv.org/pdf/2109.04332v3.pdf, soft prompts +yuan 10 largescale pretrained language model in zeroshot and fewshot learning, http://arxiv.org/pdf/2110.04725v2.pdf, training +perfect promptfree and efficient fewshot learning with language models, http://arxiv.org/pdf/2204.01172v2.pdf, literally not prompting +on the effect of pretraining corpora on incontext learning by a largescale language model, http://arxiv.org/pdf/2204.13509v2.pdf, pretraining +fewshot learning for clinical natural language processing using siamese neural networks, http://arxiv.org/pdf/2208.14923v2.pdf, not prompting +prompting through prototype a prototypebased prompt learning on pretrained visionlanguage models, http://arxiv.org/pdf/2210.10841v1.pdf, soft prompts +sgvaclip semanticguided visual adapting of visionlanguage models for fewshot image classification, http://arxiv.org/pdf/2211.16191v2.pdf, training +auggpt leveraging chatgpt for text data augmentation, http://arxiv.org/pdf/2302.13007v3.pdf, not prompting +semantic prompt for fewshot image recognition, http://arxiv.org/pdf/2303.14123v1.pdf, not really prompt engineering +the cot collection improving zeroshot and fewshot learning of language models via chainofthought finetuning, http://arxiv.org/pdf/2305.14045v2.pdf, training +fewshot learning for inference in medical imaging with subspace feature representations, http://arxiv.org/pdf/2306.11152v1.pdf, no prompting +visually grounded fewshot word learning in lowresource settings, http://arxiv.org/pdf/2306.11371v2.pdf, not prompting +crossmodal concept learning and inference for visionlanguage models, http://arxiv.org/pdf/2307.15460v1.pdf, not prompt engineering. +uniap towards universal animal perception in vision via fewshot learning, http://arxiv.org/pdf/2308.09953v1.pdf, not text prompts +palm scaling language modeling with pathways, http://arxiv.org/pdf/2204.02311v5.pdf, not prompting +fewshot electronic health record coding through graph contrastive learning, http://arxiv.org/pdf/2106.15467v1.pdf, not prompting +ernie 30 largescale knowledge enhanced pretraining for language understanding and generation, http://arxiv.org/pdf/2107.02137v1.pdf, pre-training +alleviating the incompatibility between cross entropy loss and episode training for fewshot skin disease classification, http://arxiv.org/pdf/2004.09694v1.pdf, not prompting +fewshot learning through contextual data augmentation, http://arxiv.org/pdf/2103.16911v1.pdf, not prompting +metalearning gnn initializations for lowresource molecular property prediction, http://arxiv.org/pdf/2003.05996v2.pdf, not prompt engineering. 
+neural data augmentation via example extrapolation, http://arxiv.org/pdf/2102.01335v1.pdf, data augmentation +oneshot learning for the long term consolidation with an artificial hippocampal algorithm, http://arxiv.org/pdf/2102.07503v2.pdf, not prompting +the power of scale for parameterefficient prompt tuning, http://arxiv.org/pdf/2104.08691v2.pdf, soft prompts +design of a graphical user interface for fewshot machine learning classification of electron microscopy data, http://arxiv.org/pdf/2107.10387v1.pdf, not prompting +flipda effective and robust data augmentation for fewshot learning, http://arxiv.org/pdf/2108.06332v2.pdf, not prompting +on the multilingual capabilities of very largescale english language models, http://arxiv.org/pdf/2108.13349v1.pdf, not prompting +learning opinion summarizers by selecting informative reviews, http://arxiv.org/pdf/2109.04325v1.pdf, not prompting +strata selftraining with task augmentation for better fewshot learning, http://arxiv.org/pdf/2109.06270v2.pdf, not prompting +what does clip know about a red circle visual prompt engineering for vlms, http://arxiv.org/pdf/2304.06712v2.pdf, not text prompting +conformal prediction with large language models for multichoice question answering, http://arxiv.org/pdf/2305.18404v3.pdf, not prompting. +p2p tuning pretrained image models for point cloud analysis with pointtopixel prompting, http://arxiv.org/pdf/2208.02812v2.pdf, not text prompting +evoprompting language models for codelevel neural architecture search, http://arxiv.org/pdf/2302.14838v2.pdf, soft prompts +right to be forgotten in the era of large language models implications challenges and solutions, http://arxiv.org/pdf/2307.03941v3.pdf, not related +label supervised llama finetuning, http://arxiv.org/pdf/2310.01208v1.pdf, focus on finetuning not prompting +incontext learning distillation transferring fewshot learning ability of pretrained language models, http://arxiv.org/pdf/2212.10670v1.pdf, distillation not prompting. +a neural network solves explains and generates university math problems by program synthesis and fewshot learning at human level, http://arxiv.org/pdf/2112.15594v4.pdf, focuses on fine-tuning +crossfit a fewshot learning challenge for crosstask generalization in nlp, http://arxiv.org/pdf/2104.08835v2.pdf, not prompting +jasmine arabic gpt models for fewshot learning, http://arxiv.org/pdf/2212.10755v2.pdf, training +conversation style transfer using fewshot learning, http://arxiv.org/pdf/2302.08362v2.pdf, not prompting +cancergpt fewshot drug pair synergy prediction using large pretrained language models, http://arxiv.org/pdf/2304.10946v1.pdf, training +meta learning to bridge vision and language models for multimodal fewshot learning, http://arxiv.org/pdf/2302.14794v1.pdf, not prompting +demonstrationbased learning for fewshot biomedical named entity recognition under machine reading comprehension, http://arxiv.org/pdf/2308.06454v1.pdf, not prompt engineering +robustness over time understanding adversarial examples' effectiveness on longitudinal versions of large language models, http://arxiv.org/pdf/2308.07847v1.pdf, not prompting. +fewshot natural language generation for taskoriented dialog, http://arxiv.org/pdf/2002.12328v1.pdf, not prompting +promptfree diffusion taking text out of texttoimage diffusion models, http://arxiv.org/pdf/2305.16223v2.pdf, literally not prompting. 
cutting down on prompts and parameters simple fewshot learning with language models, http://arxiv.org/pdf/2106.13353v2.pdf, not prompt engineering
executive function a contrastive value policy for resampling and relabeling perceptions via hindsight summarization, http://arxiv.org/pdf/2204.12639v1.pdf, not prompting
tart a plugandplay transformer module for taskagnostic reasoning, http://arxiv.org/pdf/2306.07536v1.pdf, not prompting
synergistic integration of large language models and cognitive architectures for robust ai an exploratory analysis, http://arxiv.org/pdf/2308.09830v3.pdf, brief mention of prompting but not related
visionlanguage models are zeroshot reward models for reinforcement learning, http://arxiv.org/pdf/2310.12921v1.pdf, maybe tangential but not prompt engineering
fewshot multimodal multitask multilingual learning, http://arxiv.org/pdf/2303.12489v1.pdf, maybe tangential but not prompt engineering
fewshot learning with visual distribution calibration and crossmodal distribution alignment, http://arxiv.org/pdf/2305.11439v1.pdf, not prompting.
active learning principles for incontext learning with large language models, http://arxiv.org/pdf/2305.14264v1.pdf, not prompting
flame fewshot learning from natural language explanations, http://arxiv.org/pdf/2306.08042v1.pdf, not prompting.
approximating humanlike fewshot learning with gptbased compression, http://arxiv.org/pdf/2308.06942v1.pdf, not prompting
from human days to machine seconds automatically answering and generating machine learning final exams, http://arxiv.org/pdf/2206.05442v7.pdf, not prompting
cedille a large autoregressive french language model, http://arxiv.org/pdf/2202.03371v1.pdf, not prompting
finetune like you pretrain improved finetuning of zeroshot vision models, http://arxiv.org/pdf/2212.00638v1.pdf, focuses on fine-tuning
wordcraft a humanai collaborative editor for story writing, http://arxiv.org/pdf/2107.07430v1.pdf, not prompt engineering
want to reduce labeling cost gpt3 can help, http://arxiv.org/pdf/2108.13487v1.pdf, not prompting
cut the carp fishing for zeroshot story evaluation, http://arxiv.org/pdf/2110.03111v3.pdf, tangential but not prompt engineering
fake it till you make it learning transferable representations from synthetic imagenet clones, http://arxiv.org/pdf/2212.08420v2.pdf, not prompt engineering
activation addition steering language models without optimization, http://arxiv.org/pdf/2308.10248v2.pdf, messes with activation not prompt engineering
safurai 001 new qualitative approach for code llm evaluation, http://arxiv.org/pdf/2309.11385v1.pdf, tangential but not prompt engineering
controlled and conditional text to image generation with diffusion prior, http://arxiv.org/pdf/2302.11710v2.pdf, image prompts
ipadapter text compatible image prompt adapter for texttoimage diffusion models, http://arxiv.org/pdf/2308.06721v1.pdf, image prompts
revisiting selftraining for fewshot learning of language model, http://arxiv.org/pdf/2110.01256v1.pdf, tangential but not prompt engineering
multimodal large language model for visual navigation, http://arxiv.org/pdf/2310.08669v2.pdf, tangential but not prompt engineering
taskdiff a similarity metric for taskoriented conversations, http://arxiv.org/pdf/2310.15298v2.pdf, tangential but not prompt engineering
clipadapter better visionlanguage models with feature adapters, http://arxiv.org/pdf/2110.04544v1.pdf, tangential but not prompt engineering
cones concept embedding search for parameter efficient tuning large vision 
language models, http://arxiv.org/pdf/2305.18993v1.pdf, tangential but not prompt engineering +logoprompt synthetic text images can be good visual prompts for visionlanguage models, http://arxiv.org/pdf/2309.01155v2.pdf, visual prompts +manipulating embeddings of stable diffusion prompts, http://arxiv.org/pdf/2308.12059v1.pdf, manipulates embeddings not text. +"multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation, httparxivorgpdf231004456v1pdf", multimodel RL, +"promptenhanced selfsupervised representation learning for remote sensing image understanding, httparxivorgpdf231000022v1pdf", about fine-tuning, +"discrete prompt compression with reinforcement learning, httparxivorgpdf230808758v1pdf", They compressed prompts using fine-tuning, +"automatic short math answer grading via incontext metalearning, httparxivorgpdf220515219v3pdf", About Fine-tuning, +"graphprompt biomedical entity normalization using graphbased prompt templates, httparxivorgpdf211203002v1pdf", About fine-tuning, +"transformers generalize differently from information stored in context vs in weights, httparxivorgpdf221005675v2pdf", tangentially related, +"large language models meet harry potter a bilingual dataset for aligning dialogue agents with characters, httparxivorgpdf221106869v4pdf", tangentially related, +"operationalizing specifications in addition to test sets for evaluating constrained generative models, httparxivorgpdf221200006v1pdf", tangentially related as stated in their introduction, +"language model acceptability judgements are not always robust to context, httparxivorgpdf221208979v1pdf", I believe it is tangentially related, +"training trajectories of language models across scales, httparxivorgpdf221209803v3pdf", More focused on training rather than anything, +"sparks of gpts in edge intelligence for metaverse caching and inference for mobile aigc services, httparxivorgpdf230408782v2pdf", Too tangentially related, +"tallrec an effective and efficient tuning framework to align large language model with recommendation, httparxivorgpdf230500447v3pdf", More about fine-tuning, +"memoryefficient finetuning of compressed large language models via sub4bit integer quantization, httparxivorgpdf230514152v2pdf", About Fine-Tuning I believe, +"do large language models know what they don't know, httparxivorgpdf230518153v2pdf", No Mention of Prompting, +"revisiting outofdistribution robustness in nlp benchmark analysis and llms evaluations, httparxivorgpdf230604618v2pdf", Not the main focus- barely mention, +"transformers as statisticians provable incontext learning with incontext algorithm selection, httparxivorgpdf230604637v2pdf", Hardly mentioned- not main focus, +"trained transformers learn linear models incontext, httparxivorgpdf230609927v3pdf", As I understand- this is about training and not prompting, +"generative multimodal entity linking, httparxivorgpdf230612725v2pdf", Only soft prompting, +"supervised pretraining can learn incontext reinforcement learning, httparxivorgpdf230614892v1pdf", Different Contexts I believe, +"hyenadna longrange genomic sequence modeling at single nucleotide resolution, httparxivorgpdf230615794v1pdf", Only Soft Prompting, +"explainable depression symptom detection in social media, httparxivorgpdf231013664v2pdf", Only one mention about prompting, +"ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms, httparxivorgpdf231013961v1pdf", About fine-tuning, +"anomalygpt detecting industrial anomalies using 
large visionlanguage models, httparxivorgpdf230815366v3pdf", More about training the model, +"uncovering hidden geometry in transformers via disentangling position and context, httparxivorgpdf231004861v1pdf", Completely non-relevant, +"mitigating word bias in zeroshot promptbased classifiers, httparxivorgpdf230904992v1pdf", about reweighing probabilities for prompt-based classifiers, +"ideal influencedriven selective annotations empower incontext learners in large language models, httparxivorgpdf231010873v1pdf", About fine-tuning, +"incontext pretraining language modeling beyond document boundaries, httparxivorgpdf231010638v3pdf", Not about prompting, +"alt towards finegrained alignment between language and ctr models for clickthrough rate prediction, httparxivorgpdf231019453v1pdf", Not really about prompting, +"understanding catastrophic forgetting in language models via implicit inference, httparxivorgpdf230910105v1pdf", About fine-tuning, +"do pretrained transformers really learn incontext by gradient descent, httparxivorgpdf231008540v1pdf", About fine-tuning, +"ccprompt counterfactual contrastive prompttuning for manyclass classification, httparxivorgpdf221105987v1pdf", About fine-tuning, +"one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention, httparxivorgpdf230703576v1pdf", Different type of prompt?, +"cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment, httparxivorgpdf231016271v1pdf", About fine-tuning, +"transformers are efficient incontext estimators for wireless communication, httparxivorgpdf231100226v1pdf", About fine-tuning, +scaling incontext demonstrations with structured attention,http://arxiv.org/pdf/2307.02690v1.pdf,new architecture +incontext learning and induction heads,http://arxiv.org/pdf/2209.11895v1.pdf,new architecture +what makes good examples for visual incontext learning,http://arxiv.org/pdf/2301.13670v2.pdf,visual only +mmicl empowering visionlanguage model with multimodal incontext learning,http://arxiv.org/pdf/2309.07915v2.pdf,visual only +visual incontext learning for fewshot eczema segmentation,http://arxiv.org/pdf/2309.16656v1.pdf,visual only +scone benchmarking negation reasoning in language models with finetuning and incontext learning,http://arxiv.org/pdf/2305.19426v1.pdf,fine-tuning +can whisper perform speechbased incontext learning,http://arxiv.org/pdf/2309.07081v1.pdf,speech +salm speechaugmented language model with incontext learning for speech recognition and translation,http://arxiv.org/pdf/2310.09424v1.pdf,speech +can foundation models help us achieve perfect secrecy,http://arxiv.org/pdf/2205.13722v2.pdf,overview paper +se factual knowledge in frozen giant code model a study on fqn and its retrieval,http://arxiv.org/pdf/2212.08221v1.pdf,unclear task +incontext learning for attention scheme from single softmax regression to multiple softmax regression via a tensor trick,http://arxiv.org/pdf/2307.02419v1.pdf,new architecture +synergpt incontext learning for personalized drug synergy prediction and drug design,http://arxiv.org/pdf/2307.11694v2.pdf,new architecture +twostage llm finetuning with less specialization and more generalization,http://arxiv.org/pdf/2211.00635v2.pdf,fine-tuning +conceptaware training improves incontext learning ability of language models,http://arxiv.org/pdf/2305.13775v1.pdf,fine-tuning +probing in context toward building robust classifiers via probing large language models,http://arxiv.org/pdf/2305.14171v2.pdf,uses probes 
for task +towards incontext scene understanding,http://arxiv.org/pdf/2306.01667v2.pdf,visual only +the cost of downscaling language models fact recall deteriorates before incontext learning,http://arxiv.org/pdf/2310.04680v1.pdf,analysis of pruning / LM size +"last one standing a comparative analysis of security and privacy of soft prompt tuning, lora, and incontext learning",http://arxiv.org/pdf/2310.11397v1.pdf,analysis of lora / tuning / ICL +when do prompting and prefixtuning work a theory of capabilities and limitations,http://arxiv.org/pdf/2310.19698v1.pdf,analysis of lora / tuning / ICL +instruct me more! random prompting for visual incontext learning,http://arxiv.org/pdf/2311.03648v1.pdf,visual only +incontext alignment chat with vanilla language models before finetuning,http://arxiv.org/pdf/2308.04275v1.pdf,fine-tuning +gpt4 vision on medical image classification a case study on covid19 dataset,http://arxiv.org/pdf/2310.18498v1.pdf,visual only +fewshot parameterefficient finetuning is better and cheaper than incontext learning,http://arxiv.org/pdf/2205.05638v2.pdf,fine-tuning +images speak in images a generalist painter for incontext visual learning,http://arxiv.org/pdf/2212.02499v2.pdf,visual only +how does incontext learning help prompt tuning,http://arxiv.org/pdf/2302.11521v1.pdf,fine-tuning +symbol tuning improves incontext learning in language models,http://arxiv.org/pdf/2305.08298v1.pdf,fine-tuning +iterative forward tuning boosts incontext learning in language models,http://arxiv.org/pdf/2305.13016v2.pdf,fine-tuning +estimating large language model capabilities without labeled test data,http://arxiv.org/pdf/2305.14802v2.pdf,out of scope analysis +augmenting language models with longterm memory,http://arxiv.org/pdf/2306.07174v1.pdf,new architecture +o3d offline datadriven discovery and distillation for sequential decisionmaking with large language models,http://arxiv.org/pdf/2310.14403v1.pdf,fine-tuning +deja vu contextual sparsity for efficient llms at inference time,http://arxiv.org/pdf/2310.17157v1.pdf,new architecture +principledriven selfalignment of language models from scratch with minimal human supervision,http://arxiv.org/pdf/2305.03047v1.pdf,fine-tuning +one for all towards training one graph model for all classification tasks,http://arxiv.org/pdf/2310.00149v1.pdf,new architecture +magma multimodal augmentation of generative models through adapterbased finetuning,http://arxiv.org/pdf/2112.05253v2.pdf,fine-tuning +blackbox tuning for languagemodelasaservice,http://arxiv.org/pdf/2201.03514v4.pdf,fine-tuning +contrastive learning for promptbased fewshot language learners,http://arxiv.org/pdf/2205.01308v1.pdf,fine-tuning +exploring length generalization in large language models,http://arxiv.org/pdf/2207.04901v2.pdf,out of scope analysis +explanations from large language models make small reasoners better,http://arxiv.org/pdf/2210.06726v1.pdf,out of scope analysis +visual programming compositional visual reasoning without training,http://arxiv.org/pdf/2211.11559v1.pdf,visual only +"don't generate, discriminate a proposal for grounding language models to realworld environments",http://arxiv.org/pdf/2212.09736v2.pdf,new architecture +neural codec language models are zeroshot text to speech synthesizers,http://arxiv.org/pdf/2301.02111v1.pdf,speech +looped transformers as programmable computers,http://arxiv.org/pdf/2301.13196v1.pdf,out of scope analysis +grounding language models to images for multimodal inputs and outputs,http://arxiv.org/pdf/2301.13823v4.pdf,new 
architecture +proofnet autoformalizing and formally proving undergraduatelevel mathematics,http://arxiv.org/pdf/2302.12433v1.pdf,new architecture +speak foreign languages with your own voice crosslingual neural codec language modeling,http://arxiv.org/pdf/2303.03926v1.pdf,speech +when braininspired ai meets agi,http://arxiv.org/pdf/2303.15935v1.pdf,overview paper +larger probes tell a different story extending psycholinguistic datasets via incontext learning,http://arxiv.org/pdf/2303.16445v1.pdf,dataset +seggpt segmenting everything in context,http://arxiv.org/pdf/2304.03284v1.pdf,new architecture +towards robust prompts on visionlanguage models,http://arxiv.org/pdf/2304.08479v1.pdf,vision-only +understanding and predicting human label variation in natural language inference through explanation,http://arxiv.org/pdf/2304.12443v1.pdf,out of scope analysis +otter a multimodal model with incontext instruction tuning,http://arxiv.org/pdf/2305.03726v1.pdf,new architecture +transformers learn incontext by gradient descent,http://arxiv.org/pdf/2212.07677v2.pdf, analysis of ICL as a learning algorithm +the closeness of incontext learning and weight shifting for softmax regression,http://arxiv.org/pdf/2304.13276v1.pdf, analysis of ICL as a learning algorithm +what learning algorithm is incontext learning investigations with linear models,http://arxiv.org/pdf/2211.15661v3.pdf, analysis of ICL as a learning algorithm +transformers as algorithms generalization and stability in incontext learning,http://arxiv.org/pdf/2301.07067v2.pdf, analysis of ICL as a learning algorithm +explaining emergent incontext learning as kernel regression,http://arxiv.org/pdf/2305.12766v2.pdf, analysis of ICL as a learning algorithm +label words are anchors an information flow perspective for understanding incontext learning,http://arxiv.org/pdf/2305.14160v1.pdf, analysis of ICL as a learning algorithm +transformers learn to implement preconditioned gradient descent for incontext learning,http://arxiv.org/pdf/2306.00297v1.pdf, analysis of ICL as a learning algorithm +investigating the learning behaviour of incontext learning a comparison with supervised learning,http://arxiv.org/pdf/2307.15411v2.pdf, analysis of ICL as a learning algorithm +incontext learning with transformer is really equivalent to a contrastive learning pattern,http://arxiv.org/pdf/2310.13220v1.pdf, analysis of ICL as a learning algorithm +incontext learning creates task vectors,http://arxiv.org/pdf/2310.15916v1.pdf, analysis of ICL as a learning algorithm +"what and how does incontext learning learn bayesian model averaging, parameterization, and generalization",http://arxiv.org/pdf/2305.19420v2.pdf, analysis of ICL as a learning algorithm +how do transformers learn incontext beyond simple functions a case study on learning with representations,http://arxiv.org/pdf/2310.10616v1.pdf, analysis of ICL as a learning algorithm +transformers learn higherorder optimization methods for incontext learning a study with linear models,http://arxiv.org/pdf/2310.17086v1.pdf, analysis of ICL as a learning algorithm +"a contemporaneous infrared flash from a long gammaray burst an echo from the central engine,httpdxdoiorg101038nature03520",Not prompting related, +"stellar explosions by magnetic towers,httpdxdoiorg101086505621",Not prompting related, +"high energy radiation from gamma ray bursts,httpdxdoiorg10106311291372",Not prompting related, +"the fireball shock model of gamma ray bursts,httpdxdoiorg10106311361591",Not prompting related, +"origin of gamma ray 
bursters,httpdxdoiorg101143ptps136300",Not prompting related, +"the updated e_peak e_gamma correlation in grbs,httpdxdoiorg101393ncci2005100460",Not prompting related, +"gammaray burst early afterglows,httpdxdoiorg10106312141841",Not prompting related, +"mevgev emission from neutronloaded short gammaray burst jets,httpdxdoiorg101086507261",Not prompting related, +"a two component jet model for the xray afterglow flat segment in short grb 051221a,httpdxdoiorg101086512971",Not prompting related, +"the shallow phase of xray afterglows,httpdxdoiorg10106312943505",Not prompting related, +"hyperaccretion after the blandfordznajek process a new model for grbs with xray flares observed in early afterglows,httpdxdoiorg101088100992718404",Not prompting related, +"high energy gammaray emission from gammaray bursts before glast,httpdxdoiorg101007s114670080033z",Not prompting related, +"expected performance of a hard xray polarimeter (polar) by monte carlo simulation,httpdxdoiorg101016jnima200904033",Not prompting related, +"what do we know about gammaray bursts,httparxivorgabs10094648v2",Not prompting related, +"possible origin of rapid variability of gammaray bursts due to convective energy transfer in hyperaccretion disks,httpdxdoiorg101111j13652966201119733x",Not prompting related, +"gammaray burst without baryonic and magnetic load,httpdxdoiorg101143ptp126555",Not prompting related, +"the physical origin of optical flares following grb 110205a and the nature of the outflow,httpdxdoiorg101088167445271111007",Not prompting related, +"magnetic structures in gammaray burst jets probed by gammaray polarization,httpdxdoiorg101088204182057581l1",Not prompting related, +"astrophysical zev acceleration in the relativistic jet from an accreting supermassive blackhole,httpdxdoiorg101016jastropartphys201402004",Not prompting related, +"neutrinocooled accretion model with magnetic coupling for xray flares in grbs,httpdxdoiorg1010880004637x7732142",Not prompting related, +"jet luminosity from neutrinodominated accretion flows in grbs,httparxivorgabs13083236v1",Not prompting related, +"3d manipulation with scanning near field optical nanotweezers,httpdxdoiorg101038nnano201424",Not prompting related, +"tuning a multiple classifier system for side effect discovery using genetic algorithms,httparxivorgabs14091053v1",Not prompting related, +"moltensalt depleteduranium reactor,httparxivorgabs150303183v1",Not prompting related, +"xray flares in grbs general considerations and photospheric origin,httpdxdoiorg101093mnraslslw003",Not prompting related, +"waterinduced bimetallic alloy surface segregation a first principle study,httparxivorgabs160102346v1",Not prompting related, +"rates and singlettriplet ratios from tadf transients,httparxivorgabs160308998v2",Not prompting related, +"physical limits to magnetogenetics,httpdxdoiorg107554elife17210",Not prompting related, +"the dark side of ethical robots,httparxivorgabs160602583v1",Not prompting related, +"numerical and analytical solutions of neutrinodominated accretion flows with a nonzero torque boundary condition and its applications in gammaray bursts,httpdxdoiorg103847153843578332129",Not prompting related, +"highenergy emission as signature of magnetic field amplification in neutron star mergers,httparxivorgabs170101184v1",Not prompting related, +"gammaray burst models in light of the grb 170817a gw170817 connection,httparxivorgabs180207328v1",Not prompting related, +"surface modified mesoporous gc3n4@feni3 as prompt and proficient magnetic adsorbent for crude oil 
recovery,httpdxdoiorg101016japsusc201812166",Not prompting related, +"the perfect state transfer graph limbo,httparxivorgabs180800696v2",Not prompting related, +"variabilities of gammaray bursts from black hole hyperaccretion disks,httpdxdoiorg101093mnrasstw1985",Not prompting related, +"data driven exploratory attacks on black box classifiers in adversarial domains,httpdxdoiorg101016jneucom201802007",Not prompting related, +"migrating large codebases to c++ modules,httpdxdoiorg1010881742659615251012051",Not prompting related, +"mn(ii)doped 2d perovskite for light emitting devices,httparxivorgabs190605099v1",Not prompting related, +"deep sequential feature learning in clinical image classification of infectious keratitis,httparxivorgabs200602666v1",Not prompting related, +"hydrodynamics of corecollapse supernovae and their progenitors,httpdxdoiorg101007s4111502000085",Not prompting related, +"xray plateaus in $γ$ray bursts explained by structured jets,httparxivorgabs200613966v1",Not prompting related, +"polar a spaceborne xray polarimeter for transient sources,httpdxdoiorg105194astra7432011",Not prompting related, +"the change of grb polarization angles in the magneticdominated jet model,httpdxdoiorg101093mnrasstu2051",Not prompting related, +"perspective quantum thermodynamics,httpdxdoiorg10108813672630181011002",Not prompting related, +"observational evidence for mass ejection accompanying short gamma ray bursts,httpdxdoiorg101093mnraslslx131",Not prompting related, +"photospheric emission from variable engine gamma ray burst simulations,httpdxdoiorg10384715384357aaeed1",Not prompting related, +"the divideandconquer framework a suitable setting for the ddm of the future,httparxivorgabs190100229v1",Not prompting related, +"spectral puzzle of the offaxis gammaray burst in gw170817,httpdxdoiorg101093mnrasstz1650",Not prompting related, +"equationofstate, critical constants, and thermodynamic properties of lithium at high energy density,httpdxdoiorg10106315143308",Not prompting related, +"interpreting the xray afterglows of gammaray bursts with radiative losses and millisecond magnetars,httpdxdoiorg101093mnrasstaa3090",Not prompting related, +"wavelet denoising and attentionbased rnnarima model to predict forex price,httparxivorgabs200806841v1",Not prompting related, +"testing blandfordznajek mechanism in black hole hyperaccretion flows for longduration gammaray bursts,httpdxdoiorg10384715384357abd6bd",Not prompting related, +"deep learningbased detection of the acute respiratory distress syndrome what are the models learning,httparxivorgabs210912323v1",Not prompting related, +"continuationpassing style, defunctionalization, accumulations, and associativity,httpdxdoiorg1022152programmingjournalorg202267",Not prompting related, +"helyos a customized offtheshelf solution for autonomous driving applications in delimited areas,httpdxdoiorg101109sii55687202310039276",Not prompting related, +"the structure of gamma ray burst jets,httparxivorgabs220611088v2",Not prompting related, diff --git a/master_papers.csv b/master_papers.csv index 96bad7ad1935fc43649c9cb1b274822f23766475..013b721185e1a88f0b4dd4da7fe81a06018565cd 100644 --- a/master_papers.csv +++ b/master_papers.csv @@ -3,29 +3,29 @@ 1,fuzzllm a novel and universal fuzzing framework for proactively discovering jailbreak vulnerabilities in large language models,"['Dongyu Yao', 'Jianshu Zhang', 'Ian G. 
Harris', 'Marcel Carlsson']",https://arxiv.org/pdf/2309.05274,2023-09-11,,"Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against individual jailbreak prompts through safety training strategies, this relatively passive approach struggles to handle the broader category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an automated fuzzing framework designed to proactively test and discover jailbreak vulnerabilities in LLMs. We utilize templates to capture the structural integrity of a prompt and isolate key features of a jailbreak class as constraints. By integrating different base classes into powerful combo attacks and varying the elements of constraints and prohibited questions, FuzzLLM enables efficient testing with reduced manual effort. Extensive experiments demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability discovery across various LLMs.",3c784cd3150a359e269c70cfbadd18774d66055d,Semantic Scholar,,, 2,baseline defenses for adversarial attacks against aligned language models,"['Neel Jain', 'Avi Schwarzschild', 'Yuxin Wen', 'Gowthami Somepalli', 'John Kirchenbauer', 'Ping-yeh Chiang', 'Micah Goldblum', 'Aniruddha Saha', 'Jonas Geiping', 'Tom Goldstein']",https://arxiv.org/pdf/2309.00614,2023-09-01,,"As Large Language Models quickly become ubiquitous, it becomes critical to understand their security vulnerabilities. Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment. Drawing from the rich body of work on adversarial machine learning, we approach these attacks with three questions: What threat models are practically useful in this domain? How do baseline defense techniques perform in this new domain? How does LLM security differ from computer vision? We evaluate several baseline defense strategies against leading adversarial attacks on LLMs, discussing the various settings in which each is feasible and effective. Particularly, we look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training. We discuss white-box and gray-box settings and discuss the robustness-performance trade-off for each of the defenses considered. We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs. Future research will be needed to uncover whether more powerful optimizers can be developed, or whether the strength of filtering and preprocessing defenses is greater in the LLMs domain than it has been in computer vision.",3e30a7ac4886b28eb50151f58e14a1d698cccd0e,Semantic Scholar,,, 3,latent jailbreak a benchmark for evaluating text safety and output robustness of large language models,"['Huachuan Qiu', 'Shuai Zhang', 'Anqi Li', 'Hongliang He', 'Zhenzhong Lan']",https://arxiv.org/pdf/2307.08487,2023-07-17,,"Considerable research efforts have been devoted to ensuring that large language models (LLMs) align with human values and generate safe text. However, an excessive focus on sensitivity to certain topics can compromise the model's robustness in following instructions, thereby impacting its overall performance in completing tasks. 
Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models without considering their robustness. In this paper, we propose a benchmark that assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach. To comprehensively study text safety and output robustness, we introduce a latent jailbreak prompt dataset, each involving malicious instruction embedding. Specifically, we instruct the model to complete a regular task, such as translation, with the text to be translated containing malicious instructions. To further analyze safety and robustness, we design a hierarchical annotation framework. We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions). Our results demonstrate that current LLMs not only prioritize certain instruction verbs but also exhibit varying jailbreak rates for different instruction verbs in explicit normal instructions. Code and data are available at https://github.com/qiuhuachuan/latent-jailbreak.",ace98e1e58bcc364afbb2feff6d136232f5f47da,Semantic Scholar,,, -4,defending against alignmentbreaking attacks via robustly aligned llm,"['Bochuan Cao', 'Yu Cao', 'Lu Lin', 'Jinghui Chen']",https://arxiv.org/pdf/2309.14348,2023-09-18,,"Recently, Large Language Models (LLMs) have made significant advancements and are now widely used across various domains. Unfortunately, there has been a rising concern that LLMs can be misused to generate harmful or malicious content. Though a line of research has focused on aligning LLMs with human values and preventing them from producing inappropriate content, such alignments are usually vulnerable and can be bypassed by alignment-breaking attacks via adversarially optimized or handcrafted jailbreaking prompts. In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. RA-LLM can be directly constructed upon an existing aligned LLM with a robust alignment checking function, without requiring any expensive retraining or fine-tuning process of the original LLM. Furthermore, we also provide a theoretical analysis for RA-LLM to verify its effectiveness in defending against alignment-breaking attacks. Through real-world experiments on open-source large language models, we demonstrate that RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts by reducing their attack success rates from nearly 100\% to around 10\% or less.",cd29c25c489562b409a60f83365f93f33ee1a0a1,Semantic Scholar,,, -5,using large language models for cybersecurity capturetheflag challenges and certification questions,"['W. Tann', 'Yuancheng Liu', 'Jun Heng Sim', 'C. Seah', 'E. Chang']",https://arxiv.org/pdf/2308.10443,2023-08-21,,"The assessment of cybersecurity Capture-The-Flag (CTF) exercises involves participants finding text strings or ``flags'' by exploiting system vulnerabilities. Large Language Models (LLMs) are natural-language models trained on vast amounts of words to understand and generate text; they can perform well on many CTF challenges. Such LLMs are freely available to students. In the context of CTF exercises in the classroom, this raises concerns about academic integrity. 
Educators must understand LLMs' capabilities to modify their teaching to accommodate generative AI assistance. This research investigates the effectiveness of LLMs, particularly in the realm of CTF challenges and questions. Here we evaluate three popular LLMs, OpenAI ChatGPT, Google Bard, and Microsoft Bing. First, we assess the LLMs' question-answering performance on five Cisco certifications with varying difficulty levels. Next, we qualitatively study the LLMs' abilities in solving CTF challenges to understand their limitations. We report on the experience of using the LLMs for seven test cases in all five types of CTF challenges. In addition, we demonstrate how jailbreak prompts can bypass and break LLMs' ethical safeguards. The paper concludes by discussing LLM's impact on CTF exercises and its implications.",e64df7e9448f7a9a4cb5d22c21c460134c8646ac,Semantic Scholar,,, -6,autodan generating stealthy jailbreak prompts on aligned large language models,"['Xiaogeng Liu', 'Nan Xu', 'Muhao Chen', 'Chaowei Xiao']",https://arxiv.org/pdf/2310.04451,2023-10-03,,"The aligned Large Language Models (LLMs) are powerful language understanding and decision-making tools that are created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, where adversaries manipulate prompts to elicit malicious outputs that should not be given by aligned LLMs. Investigating jailbreak prompts can lead us to delve into the limitations of LLMs and further guide us to secure them. Unfortunately, existing jailbreak techniques suffer from either (1) scalability issues, where attacks heavily rely on manual crafting of prompts, or (2) stealthiness problems, as attacks depend on token-based algorithms to generate prompts that are often semantically meaningless, making them susceptible to detection through basic perplexity testing. In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts? In this paper, we introduce AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can automatically generate stealthy jailbreak prompts by the carefully designed hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline. Moreover, we also compare AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass them effectively.",f3f23f7f9f5369aade19f20bc5d028cce7b9c9aa,Semantic Scholar,,, -7,jailbreaking chatgpt via prompt engineering an empirical study,"['Yi Liu', 'Gelei Deng', 'Zhengzi Xu', 'Yuekang Li', 'Yaowen Zheng', 'Ying Zhang', 'Lida Zhao', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2305.13860,2023-05-23,,"Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. 
Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.",fc50a6202e2f675604543c1ae4ef22ec74f61ad5,Semantic Scholar,,, -8,decomposed prompting a modular approach for solving complex tasks,"['Tushar Khot', 'H. Trivedi', 'Matthew Finlayson', 'Yao Fu', 'Kyle Richardson', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2210.02406,2022-10-05,,"Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.",07955e96cbd778d0ae2a68f09d073b866dd84c2a,Semantic Scholar,,, -9,two timin' repairing smart contracts with a twolayered approach,"['Abhinav Jain', 'Ehan Masud', 'Michelle Han', 'Rohan Dhillon', 'Sumukh Rao', 'Arya Joshi', 'Salar Cheema', 'Saurav Kumar']",https://arxiv.org/pdf/2309.07841,2023-09-14,,"Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither's vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. 
The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts.",0afb64ce430c5f26752c8aed246ead6820b02049,Semantic Scholar,,, -10,user simulation with large language models for evaluating taskoriented dialogue,"['Sam Davidson', 'Salvatore Romeo', 'Raphael Shu', 'James Gung', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang']",https://arxiv.org/pdf/2309.13233,2023-09-23,,"One of the major impediments to the development of new task-oriented dialogue (TOD) systems is the need for human evaluation at multiple stages and iterations of the development process. In an effort to move toward automated evaluation of TOD, we propose a novel user simulator built using recently developed large pretrained language models (LLMs). In order to increase the linguistic diversity of our system relative to the related previous work, we do not fine-tune the LLMs used by our system on existing TOD datasets; rather we use in-context learning to prompt the LLMs to generate robust and linguistically diverse output with the goal of simulating the behavior of human interlocutors. Unlike previous work, which sought to maximize goal success rate (GSR) as the primary metric of simulator performance, our goal is a system which achieves a GSR similar to that observed in human interactions with TOD systems. Using this approach, our current simulator is effectively able to interact with several TOD systems, especially on single-intent conversational goals, while generating lexically and syntactically diverse output relative to previous simulators that rely upon fine-tuned models. Finally, we collect a Human2Bot dataset of humans interacting with the same TOD systems with which we experimented in order to better quantify these achievements.",0c110794ae91b4c165b0de3ff11fc841e2455bdb,Semantic Scholar,,, +4,defending against alignmentbreaking attacks via robustly aligned llm,"['Bochuan Cao', 'Yu Cao', 'Lu Lin', 'Jinghui Chen']",https://arxiv.org/pdf/2309.14348,2023-09-18,,"Recently, Large Language Models (LLMs) have made significant advancements and are now widely used across various domains. Unfortunately, there has been a rising concern that LLMs can be misused to generate harmful or malicious content. Though a line of research has focused on aligning LLMs with human values and preventing them from producing inappropriate content, such alignments are usually vulnerable and can be bypassed by alignment-breaking attacks via adversarially optimized or handcrafted jailbreaking prompts. In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks. RA-LLM can be directly constructed upon an existing aligned LLM with a robust alignment checking function, without requiring any expensive retraining or fine-tuning process of the original LLM. Furthermore, we also provide a theoretical analysis for RA-LLM to verify its effectiveness in defending against alignment-breaking attacks. 
Through real-world experiments on open-source large language models, we demonstrate that RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts by reducing their attack success rates from nearly 100% to around 10% or less.",cd29c25c489562b409a60f83365f93f33ee1a0a1,Semantic Scholar,,, +5,gptfuzzer red teaming large language models with autogenerated jailbreak prompts,"['Jiahao Yu', 'Xingwei Lin', 'Zheng Yu', 'Xinyu Xing']",https://arxiv.org/pdf/2309.10253,2023-09-19,,"Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates. We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.",d4177489596748e43aa571f59556097f2cc4c8be,Semantic Scholar,,, +6,using large language models for cybersecurity capturetheflag challenges and certification questions,"['W. Tann', 'Yuancheng Liu', 'Jun Heng Sim', 'C. Seah', 'E. Chang']",https://arxiv.org/pdf/2308.10443,2023-08-21,,"The assessment of cybersecurity Capture-The-Flag (CTF) exercises involves participants finding text strings or ``flags'' by exploiting system vulnerabilities. Large Language Models (LLMs) are natural-language models trained on vast amounts of words to understand and generate text; they can perform well on many CTF challenges. Such LLMs are freely available to students. In the context of CTF exercises in the classroom, this raises concerns about academic integrity. Educators must understand LLMs' capabilities to modify their teaching to accommodate generative AI assistance. This research investigates the effectiveness of LLMs, particularly in the realm of CTF challenges and questions. Here we evaluate three popular LLMs, OpenAI ChatGPT, Google Bard, and Microsoft Bing. First, we assess the LLMs' question-answering performance on five Cisco certifications with varying difficulty levels. 
Next, we qualitatively study the LLMs' abilities in solving CTF challenges to understand their limitations. We report on the experience of using the LLMs for seven test cases in all five types of CTF challenges. In addition, we demonstrate how jailbreak prompts can bypass and break LLMs' ethical safeguards. The paper concludes by discussing LLM's impact on CTF exercises and its implications.",e64df7e9448f7a9a4cb5d22c21c460134c8646ac,Semantic Scholar,,, +7,autodan generating stealthy jailbreak prompts on aligned large language models,"['Xiaogeng Liu', 'Nan Xu', 'Muhao Chen', 'Chaowei Xiao']",https://arxiv.org/pdf/2310.04451,2023-10-03,,"The aligned Large Language Models (LLMs) are powerful language understanding and decision-making tools that are created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, where adversaries manipulate prompts to elicit malicious outputs that should not be given by aligned LLMs. Investigating jailbreak prompts can lead us to delve into the limitations of LLMs and further guide us to secure them. Unfortunately, existing jailbreak techniques suffer from either (1) scalability issues, where attacks heavily rely on manual crafting of prompts, or (2) stealthiness problems, as attacks depend on token-based algorithms to generate prompts that are often semantically meaningless, making them susceptible to detection through basic perplexity testing. In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts? In this paper, we introduce AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can automatically generate stealthy jailbreak prompts by the carefully designed hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline. Moreover, we also compare AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass them effectively.",f3f23f7f9f5369aade19f20bc5d028cce7b9c9aa,Semantic Scholar,,, +8,jailbreaking chatgpt via prompt engineering an empirical study,"['Yi Liu', 'Gelei Deng', 'Zhengzi Xu', 'Yuekang Li', 'Yaowen Zheng', 'Ying Zhang', 'Lida Zhao', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2305.13860,2023-05-23,,"Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. 
The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.",fc50a6202e2f675604543c1ae4ef22ec74f61ad5,Semantic Scholar,,, +9,decomposed prompting a modular approach for solving complex tasks,"['Tushar Khot', 'H. Trivedi', 'Matthew Finlayson', 'Yao Fu', 'Kyle Richardson', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2210.02406,2022-10-05,,"Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.",07955e96cbd778d0ae2a68f09d073b866dd84c2a,Semantic Scholar,,, +10,two timin' repairing smart contracts with a twolayered approach,"['Abhinav Jain', 'Ehan Masud', 'Michelle Han', 'Rohan Dhillon', 'Sumukh Rao', 'Arya Joshi', 'Salar Cheema', 'Saurav Kumar']",https://arxiv.org/pdf/2309.07841,2023-09-14,,"Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither's vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. 
A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts.",0afb64ce430c5f26752c8aed246ead6820b02049,Semantic Scholar,,, 11,blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing,"['Chen Wang', 'Minpeng Liao', 'Zhongqiang Huang', 'Jinliang Lu', 'Junhong Wu', 'Yuchen Liu', 'Chengqing Zong', 'Jiajun Zhang']",https://arxiv.org/pdf/2309.00916,2023-09-02,,"The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text still remains an open problem. Current solutions can be categorized into two strategies. One is a cascaded approach where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is very difficult to collect in large quantities. In this paper, we address these issues and propose the BLSP approach that Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of input: a speech segment or its transcript. The training process can be divided into two steps. The first step prompts an LLM to generate texts with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations are used as supervised signals to train the modality adapter in an end-to-end manner. We demonstrate that this straightforward process can extend the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios.",204fd6c5e247c477d607f507ee01d94a8dbd408f,Semantic Scholar,,, -12,codeie large code generation models are better fewshot information extractors,"['Peng-Hsuan Li', 'Tianxiang Sun', 'Qiong Tang', 'Hang Yan', 'Yuanbin Wu', 'Xuanjing Huang', 'Xipeng Qiu Academy for EngineeringTechnology', 'Fudan University', 'School of Materials Science', 'Technology', 'East China Normal University']",http://arxiv.org/pdf/2305.05711,2023-05-09,,"Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. 
Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.",243ac5656c4f8ed6e1eb757b7145fb12b837c166,Semantic Scholar,,, -13,howtocaption prompting llms to transform video annotations at scale,"['Nina Shvetsova', 'Anna Kukleva', 'Xudong Hong', 'Christian Rupprecht', 'B. Schiele', 'Hilde Kuehne']",https://arxiv.org/pdf/2310.04900,2023-10-07,,"Instructional videos are an excellent source for learning multimodal representations by leveraging video-subtitle pairs extracted with automatic speech recognition systems (ASR) from the audio signal in the videos. However, in contrast to human-annotated captions, both speech and subtitles naturally differ from the visual content of the videos and thus provide only noisy supervision for multimodal learning. As a result, large-scale annotation-free web video training data remains sub-optimal for training text-video models. In this work, we propose to leverage the capability of large language models (LLMs) to obtain fine-grained video descriptions aligned with videos. Specifically, we prompt an LLM to create plausible video descriptions based on ASR narrations of the video for a large-scale instructional video dataset. To this end, we introduce a prompting method that is able to take into account a longer text of subtitles, allowing us to capture context beyond a single sentence. To align the captions to the video temporally, we prompt the LLM to generate timestamps for each produced caption based on the subtitles. In this way, we obtain human-style video captions at scale without human supervision. We apply our method to the subtitles of the HowTo100M dataset, creating a new large-scale dataset, HowToCaption. Our evaluation shows that the resulting captions not only significantly improve the performance over many different benchmark datasets for text-video retrieval but also lead to a disentangling of textual narration from the audio, boosting performance in text-video-audio tasks.",24dd96da6f700f57132713aeb5e9b06905abab5d,Semantic Scholar,,, -14,algo synthesizing algorithmic programs with generated oracle verifiers,"['Kexun Zhang', 'Danqing Wang', 'Jingtao Xia', 'William Yang Wang', 'Lei Li']",http://arxiv.org/pdf/2305.14591,2023-05-24,,"Large language models (LLMs) excel at implementing code from functionality descriptions but struggle with algorithmic problems that require not only implementation but also identification of the suitable algorithm. Moreover, LLM-generated programs lack guaranteed correctness and require human verification. To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness. ALGO first generates a reference oracle by prompting an LLM to exhaustively enumerate all the combinations of relevant variables. This oracle is then utilized to guide an arbitrary search strategy in exploring the algorithm space and to verify the synthesized algorithms. Our study shows that the LLM-generated oracles are correct for 88% of the cases. With the oracles as verifiers, ALGO can be integrated with any existing code generation model in a model-agnostic manner to enhance its performance. 
Experiments show that when equipped with ALGO, we achieve an 8x better one-submission pass rate over the Codex model and a 2.6x better one-submission pass rate over CodeT, the current state-of-the-art model on CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code Interpreter on unseen problems. The problem set we used for testing, the prompts we used, the verifier and solution programs, and the test cases generated by ALGO are available at https://github.com/zkx06111/ALGO.",2bb4fe9bc10dbf1ea70135e52452f9f63bb10671,Semantic Scholar,,, -15,model tuning or prompt tuning a study of large language models for clinical concept and relation extraction,"['C.A.I. Peng', 'Xi Yang', 'Kaleb E Smith', 'Zehao Yu', 'Aokun Chen', 'Jiang Bian', 'Yonghui Wu']",https://arxiv.org/pdf/2310.06239,2023-10-10,,"Objective To develop soft prompt-based learning algorithms for large language models (LLMs), examine the shape of prompts, prompt-tuning using frozen/unfrozen LLMs, transfer learning, and few-shot learning abilities. Methods We developed a soft prompt-based LLM model and compared 4 training strategies including (1) fine-tuning without prompts; (2) hard-prompt with unfrozen LLMs; (3) soft-prompt with unfrozen LLMs; and (4) soft-prompt with frozen LLMs. We evaluated 7 pretrained LLMs using the 4 training strategies for clinical concept and relation extraction on two benchmark datasets. We evaluated the transfer learning ability of the prompt-based learning algorithms in a cross-institution setting. We also assessed the few-shot learning ability. Results and Conclusion When LLMs are unfrozen, GatorTron-3.9B with soft prompting achieves the best strict F1-scores of 0.9118 and 0.8604 for concept extraction, outperforming the traditional fine-tuning and hard prompt-based models by 0.6~3.1% and 1.2~2.9%, respectively; GatorTron-345M with soft prompting achieves the best F1-scores of 0.8332 and 0.7488 for end-to-end relation extraction, outperforming the other two models by 0.2~2% and 0.6~11.7%, respectively. When LLMs are frozen, small (i.e., 345 million parameters) LLMs have a big gap to be competitive with unfrozen models; scaling LLMs up to billions of parameters makes frozen LLMs competitive with unfrozen LLMs. For cross-institute evaluation, soft prompting with a frozen GatorTron-8.9B model achieved the best performance. This study demonstrates that (1) machines can learn soft prompts better than humans, (2) frozen LLMs have better few-shot learning ability and transfer learning ability to facilitate muti-institution applications, and (3) frozen LLMs require large models.",2f75de70511fa9f5c7a1e7f61f2d7928d121adbf,Semantic Scholar,,, -16,thinksum probabilistic reasoning over sets using large language models,"['Batu Mehmet Ozturkler', 'Nikolay Malkin', 'Zhen Wang', 'N. Jojic']",http://arxiv.org/pdf/2210.01293,2022-10-04,,"Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the more advanced LLMs fail in scenarios that require reasoning over multiple objects or facts and making sequences of logical deductions. We propose a two-stage probabilistic inference paradigm, ThinkSum, which reasons over sets of objects or facts in a structured manner. 
In the first stage (Think – retrieval of associations), a LLM is queried in parallel over a set of phrases extracted from the prompt or an auxiliary model call. In the second stage (Sum – probabilistic inference or reasoning), the results of these queries are aggregated to make the final prediction. We demonstrate the possibilities and advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks, achieving improvements over the state of the art using GPT-family models on thirteen difficult tasks, often with far smaller model variants. We also compare and contrast ThinkSum with other proposed modifications to direct prompting of LLMs, such as variants of chain-of-thought prompting. Our results suggest that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs. Overall, our proposed paradigm represents a promising approach for enhancing the reasoning capabilities of LLMs.",370cea8b4220917f45a69358c0303df71f5063c7,Semantic Scholar,,, -17,prompting multilingual large language models to generate codemixed texts the case of south east asian languages,"['Zheng-Xin Yong', 'Ruochen Zhang', 'J. Forde', 'Skyler Wang', 'Arjun Subramonian', 'Samuel Cahyawijaya', 'Holy Lovenia', 'Genta Indra Winata', 'Lintang Sutawika', 'Jan Christian Blaise Cruz', 'Long Phan', 'Yinghua Tan', 'Alham Fikri Aji']",https://arxiv.org/pdf/2303.13592,2023-03-23,,"While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: how capable are these systems in generating code-mixed data? In this paper, we explore prompting multilingual LLMs in a zero-shot manner to generate code-mixed data for seven languages in South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese, Tamil, and Singlish. We find that publicly available multilingual instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of producing texts with phrases or clauses from different languages. ChatGPT exhibits inconsistent capabilities in generating code-mixed texts, wherein its performance varies depending on the prompt template and language pairing. For instance, ChatGPT generates fluent and natural Singlish texts (an English-based creole spoken in Singapore), but for English-Tamil language pair, the system mostly produces grammatically incorrect or semantically meaningless utterances. Furthermore, it may erroneously introduce languages not specified in the prompt. Based on our investigation, existing multilingual LLMs exhibit a wide range of proficiency in code-mixed data generation for SEA languages. As such, we advise against using LLMs in this context without extensive human checks.",3b27092740a489a63589cdcf40fad6a0e093daa0,Semantic Scholar,,, -18,divide and prompt chain of thought prompting for texttosql,"['X. Liu', 'Zhao Tan']",http://arxiv.org/pdf/2304.11556,2023-04-23,,"Chain-of-thought (CoT) prompting combined with large language models (LLMs) have achieved encouraging results on complex reasoning tasks. Text-to-SQL is a critical semantic parsing task that converts natural language questions into SQL statements, involving a complex reasoning process. 
However, there is little work about using CoT prompting to activate LLM's reasoning capabilities on Text-to-SQL tasks. In this work, we propose a new paradigm for prompting Text-to-SQL tasks, called Divide-and-Prompt, which first divides the task into subtasks, and then approach each subtask through CoT. We present 3 prompting-based methods to enhance the Text-to-SQL ability of LLMs. Experiments show that these prompts guide LLMs to generate Text-to-SQL with higher execution accuracy.",40c9280d87059c0cc28f2a08d46a7045fa3e9736,Semantic Scholar,,, -19,taggpt large language models are zeroshot multimodal taggers,"['Chen Li', 'Yixiao Ge', 'Jiayong Mao', 'Dian Li', 'Ying Shan']",http://arxiv.org/pdf/2304.03022,2023-04-06,,"Tags are pivotal in facilitating the effective distribution of multimedia content in various applications in the contemporary Internet era, such as search engines and recommendation systems. Recently, large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks. In this work, we propose TagGPT, a fully automated system capable of tag extraction and multimodal tagging in a completely zero-shot fashion. Our core insight is that, through elaborate prompt engineering, LLMs are able to extract and reason about proper tags given textual clues of multimodal data, e.g., OCR, ASR, title, etc. Specifically, to automatically build a high-quality tag set that reflects user intent and interests for a specific application, TagGPT predicts large-scale candidate tags from a series of raw data via prompting LLMs, filtered with frequency and semantics. Given a new entity that needs tagging for distribution, TagGPT introduces two alternative options for zero-shot tagging, i.e., a generative method with late semantic matching with the tag set, and another selective method with early matching in prompts. It is well noticed that TagGPT provides a system-level solution based on a modular framework equipped with a pre-trained LLM (GPT-3.5 used here) and a sentence embedding model (SimCSE used here), which can be seamlessly replaced with any more advanced one you want. TagGPT is applicable for various modalities of data in modern social media and showcases strong generalization ability to a wide range of applications. We evaluate TagGPT on publicly available datasets, i.e., Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to existing hashtags and off-the-shelf taggers. Project page: https://github.com/TencentARC/TagGPT.",4895d443c36bd136a818be2db34442354ba408d1,Semantic Scholar,,, -20,humanintheloop machine translation with large language model,"['Xinyi Yang', 'Runzhe Zhan', 'Derek F. Wong', 'Junchao Wu', 'Lidia S. Chao']",https://arxiv.org/pdf/2310.08908,2023-10-13,,"The large language model (LLM) has garnered significant attention due to its in-context learning mechanisms and emergent capabilities. The research community has conducted several pilot studies to apply LLMs to machine translation tasks and evaluate their performance from diverse perspectives. However, previous research has primarily focused on the LLM itself and has not explored human intervention in the inference process of LLM. The characteristics of LLM, such as in-context learning and prompt engineering, closely mirror human cognitive abilities in language tasks, offering an intuitive solution for human-in-the-loop generation. In this study, we propose a human-in-the-loop pipeline that guides LLMs to produce customized outputs with revision instructions. 
The pipeline initiates by prompting the LLM to produce a draft translation, followed by the utilization of automatic retrieval or human feedback as supervision signals to enhance the LLM’s translation through in-context learning. The human-machine interactions generated in this pipeline are also stored in an external database to expand the in-context retrieval database, enabling us to leverage human supervision in an offline setting. We evaluate the proposed pipeline using the GPT-3.5-turbo API on five domain-specific benchmarks for German-English translation. The results demonstrate the effectiveness of the pipeline in tailoring in-domain translations and improving translation performance compared to direct translation instructions. Additionally, we discuss the experimental results from the following perspectives: 1) the effectiveness of different in-context retrieval methods; 2) the construction of a retrieval database under low-resource scenarios; 3) the observed differences across selected domains; 4) the quantitative analysis of sentence-level and word-level statistics; and 5) the qualitative analysis of representative translation cases.",4950bf6f873ba1409a7bbad25cf5c93c8f833453,Semantic Scholar,,, -21,large language models vote prompting for rare disease identification,"['David Oniani', 'Jordan Hilsman', 'Hang Dong', 'F. Gao', 'Shiven Verma', 'Yanshan Wang']",https://arxiv.org/pdf/2308.12890,2023-08-24,,"The emergence of generative Large Language Models (LLMs) emphasizes the need for accurate and efficient prompting approaches. LLMs are often applied in Few-Shot Learning (FSL) contexts, where tasks are executed with minimal training data. FSL has become popular in many Artificial Intelligence (AI) subdomains, including AI for health. Rare diseases affect a small fraction of the population. Rare disease identification from clinical notes inherently requires FSL techniques due to limited data availability. Manual data collection and annotation is both expensive and time-consuming. In this paper, we propose Models-Vote Prompting (MVP), a flexible prompting approach for improving the performance of LLM queries in FSL settings. MVP works by prompting numerous LLMs to perform the same tasks and then conducting a majority vote on the resulting outputs. This method achieves improved results to any one model in the ensemble on one-shot rare disease identification and classification tasks. We also release a novel rare disease dataset for FSL, available to those who signed the MIMIC-IV Data Use Agreement (DUA). Furthermore, in using MVP, each model is prompted multiple times, substantially increasing the time needed for manual annotation, and to address this, we assess the feasibility of using JSON for automating generative LLM evaluation.",4b091d92f793161046b483ee93df244bf93bb508,Semantic Scholar,,, -22,hypothesis search inductive reasoning with language models,"['Ruocheng Wang', 'E. Zelikman', 'Gabriel Poesia', 'Yewen Pu', 'Nick Haber', 'Noah D. Goodman']",https://arxiv.org/pdf/2309.05660,2023-09-11,,"Inductive reasoning is a core problem-solving capacity: humans can identify underlying principles from a few examples, which can then be robustly generalized to novel scenarios. Recent work has evaluated large language models (LLMs) on inductive reasoning tasks by directly prompting them yielding""in context learning.""This can work well for straightforward inductive tasks, but performs very poorly on more complex tasks such as the Abstraction and Reasoning Corpus (ARC). 
In this work, we propose to improve the inductive reasoning ability of LLMs by generating explicit hypotheses at multiple levels of abstraction: we prompt the LLM to propose multiple abstract hypotheses about the problem, in natural language, then implement the natural language hypotheses as concrete Python programs. These programs can be directly verified by running on the observed examples and generalized to novel inputs. Because of the prohibitive cost of generation with state-of-the-art LLMs, we consider a middle step to filter the set of hypotheses that will be implemented into programs: we either ask the LLM to summarize into a smaller set of hypotheses, or ask human annotators to select a subset of the hypotheses. We verify our pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem subset of ARC, our automated pipeline using LLM summaries achieves 27.5% accuracy, significantly outperforming the direct prompting baseline (accuracy of 12.5%). With the minimal human input of selecting from LLM-generated candidates, the performance is boosted to 37.5%. (And we argue this is a lower bound on the performance of our approach without filtering.) Our ablation studies show that abstract hypothesis generation and concrete program representations are both beneficial for LLMs to perform inductive reasoning tasks.",4cf527e9e0d68e3fc16d39fbcdb3869cd3ccf60f,Semantic Scholar,,, -23,pearl prompting large language models to plan and execute actions over long documents,"['Simeng Sun', 'Y. Liu', 'Shuo Wang', 'Chenguang Zhu', 'Mohit Iyyer']",http://arxiv.org/pdf/2305.14564,2023-05-23,,"Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.",4ee96f0757e517928590a2300af5d40ba768a5a7,Semantic Scholar,,, -24,aligning language models to user opinions,"['EunJeong Hwang', 'Bodhisattwa Prasad Majumder', 'Niket Tandon']",http://arxiv.org/pdf/2305.14929,2023-05-24,,"An important aspect of developing LLMs that interact with humans is to align models' behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user group or ideological persona the model captured during its pertaining stage. 
But, how to best align an LLM with a specific user and not a demographic or ideological group remains an open question. Mining public opinion surveys (by Pew Research), we find that the opinions of a user and their demographics and ideologies are not mutual predictors. We use this insight to align LLMs by modeling both user opinions as well as user demographics and ideology, achieving up to 7 points accuracy gains in predicting public opinions from survey questions across a broad set of topics. In addition to the typical approach of prompting LLMs with demographics and ideology, we discover that utilizing the most relevant past opinions from individual users enables the model to predict user opinions more accurately.",5db0f55332839c408e3049cea1a6ad48fefba70c,Semantic Scholar,,, -25,booookscore a systematic exploration of booklength summarization in the era of llms,"['Yapei Chang', 'Kyle Lo', 'Tanya Goyal', 'Mohit Iyyer']",https://arxiv.org/pdf/2310.00785,2023-10-01,,"Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than the oft-repetitive ones generated by LLaMA 2. Incremental updating yields lower BooookScore but higher level of detail than hierarchical merging, a trade-off sometimes preferred by human annotators. We release code and annotations after blind review to spur more principled research on book-length summarization.",65fe385a665480b41fafc56d76a3bd72e92e8886,Semantic Scholar,,, -26,reranking for natural language generation from logical forms a study based on large language models,"['Levon Haroutunian', 'Zhuang Li', 'Lucian Galescu', 'Philip R. Cohen', 'Raj Tumuluri', 'Gholamreza Haffari']",https://arxiv.org/pdf/2309.12294,2023-09-21,,"Large language models (LLMs) have demonstrated impressive capabilities in natural language generation. However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs). This task requires the generated outputs to embody the exact semantics of LFs, without missing any LF semantics or creating any hallucinations. 
In this work, we tackle this issue by proposing a novel generate-and-rerank approach. Our approach involves initially generating a set of candidate outputs by prompting an LLM and subsequently reranking them using a task-specific reranker model. In addition, we curate a manually collected dataset to evaluate the alignment between different ranking metrics and human judgements. The chosen ranking metrics are utilized to enhance the training and evaluation of the reranker model. By conducting extensive experiments on three diverse datasets, we demonstrate that the candidates selected by our reranker outperform those selected by baseline methods in terms of semantic consistency and fluency, as measured by three comprehensive metrics. Our findings provide strong evidence for the effectiveness of our approach in improving the quality of generated outputs.",6be6fe206f8ca735f8df26758bf877572abb10d3,Semantic Scholar,,, +12,howtocaption prompting llms to transform video annotations at scale,"['Nina Shvetsova', 'Anna Kukleva', 'Xudong Hong', 'Christian Rupprecht', 'B. Schiele', 'Hilde Kuehne']",https://arxiv.org/pdf/2310.04900,2023-10-07,,"Instructional videos are an excellent source for learning multimodal representations by leveraging video-subtitle pairs extracted with automatic speech recognition systems (ASR) from the audio signal in the videos. However, in contrast to human-annotated captions, both speech and subtitles naturally differ from the visual content of the videos and thus provide only noisy supervision for multimodal learning. As a result, large-scale annotation-free web video training data remains sub-optimal for training text-video models. In this work, we propose to leverage the capability of large language models (LLMs) to obtain fine-grained video descriptions aligned with videos. Specifically, we prompt an LLM to create plausible video descriptions based on ASR narrations of the video for a large-scale instructional video dataset. To this end, we introduce a prompting method that is able to take into account a longer text of subtitles, allowing us to capture context beyond a single sentence. To align the captions to the video temporally, we prompt the LLM to generate timestamps for each produced caption based on the subtitles. In this way, we obtain human-style video captions at scale without human supervision. We apply our method to the subtitles of the HowTo100M dataset, creating a new large-scale dataset, HowToCaption. Our evaluation shows that the resulting captions not only significantly improve the performance over many different benchmark datasets for text-video retrieval but also lead to a disentangling of textual narration from the audio, boosting performance in text-video-audio tasks.",24dd96da6f700f57132713aeb5e9b06905abab5d,Semantic Scholar,,, +13,algo synthesizing algorithmic programs with generated oracle verifiers,"['Kexun Zhang', 'Danqing Wang', 'Jingtao Xia', 'William Yang Wang', 'Lei Li']",http://arxiv.org/pdf/2305.14591,2023-05-24,,"Large language models (LLMs) excel at implementing code from functionality descriptions but struggle with algorithmic problems that require not only implementation but also identification of the suitable algorithm. Moreover, LLM-generated programs lack guaranteed correctness and require human verification. To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness. 
ALGO first generates a reference oracle by prompting an LLM to exhaustively enumerate all the combinations of relevant variables. This oracle is then utilized to guide an arbitrary search strategy in exploring the algorithm space and to verify the synthesized algorithms. Our study shows that the LLM-generated oracles are correct for 88% of the cases. With the oracles as verifiers, ALGO can be integrated with any existing code generation model in a model-agnostic manner to enhance its performance. Experiments show that when equipped with ALGO, we achieve an 8x better one-submission pass rate over the Codex model and a 2.6x better one-submission pass rate over CodeT, the current state-of-the-art model on CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code Interpreter on unseen problems. The problem set we used for testing, the prompts we used, the verifier and solution programs, and the test cases generated by ALGO are available at https://github.com/zkx06111/ALGO.",2bb4fe9bc10dbf1ea70135e52452f9f63bb10671,Semantic Scholar,,, +14,model tuning or prompt tuning a study of large language models for clinical concept and relation extraction,"['C.A.I. Peng', 'Xi Yang', 'Kaleb E Smith', 'Zehao Yu', 'Aokun Chen', 'Jiang Bian', 'Yonghui Wu']",https://arxiv.org/pdf/2310.06239,2023-10-10,,"Objective To develop soft prompt-based learning algorithms for large language models (LLMs), examine the shape of prompts, prompt-tuning using frozen/unfrozen LLMs, transfer learning, and few-shot learning abilities. Methods We developed a soft prompt-based LLM model and compared 4 training strategies including (1) fine-tuning without prompts; (2) hard-prompt with unfrozen LLMs; (3) soft-prompt with unfrozen LLMs; and (4) soft-prompt with frozen LLMs. We evaluated 7 pretrained LLMs using the 4 training strategies for clinical concept and relation extraction on two benchmark datasets. We evaluated the transfer learning ability of the prompt-based learning algorithms in a cross-institution setting. We also assessed the few-shot learning ability. Results and Conclusion When LLMs are unfrozen, GatorTron-3.9B with soft prompting achieves the best strict F1-scores of 0.9118 and 0.8604 for concept extraction, outperforming the traditional fine-tuning and hard prompt-based models by 0.6~3.1% and 1.2~2.9%, respectively; GatorTron-345M with soft prompting achieves the best F1-scores of 0.8332 and 0.7488 for end-to-end relation extraction, outperforming the other two models by 0.2~2% and 0.6~11.7%, respectively. When LLMs are frozen, small (i.e., 345 million parameters) LLMs have a big gap to be competitive with unfrozen models; scaling LLMs up to billions of parameters makes frozen LLMs competitive with unfrozen LLMs. For cross-institute evaluation, soft prompting with a frozen GatorTron-8.9B model achieved the best performance. This study demonstrates that (1) machines can learn soft prompts better than humans, (2) frozen LLMs have better few-shot learning ability and transfer learning ability to facilitate muti-institution applications, and (3) frozen LLMs require large models.",2f75de70511fa9f5c7a1e7f61f2d7928d121adbf,Semantic Scholar,,, +15,thinksum probabilistic reasoning over sets using large language models,"['Batu Mehmet Ozturkler', 'Nikolay Malkin', 'Zhen Wang', 'N. 
Jojic']",http://arxiv.org/pdf/2210.01293,2022-10-04,,"Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the more advanced LLMs fail in scenarios that require reasoning over multiple objects or facts and making sequences of logical deductions. We propose a two-stage probabilistic inference paradigm, ThinkSum, which reasons over sets of objects or facts in a structured manner. In the first stage (Think – retrieval of associations), a LLM is queried in parallel over a set of phrases extracted from the prompt or an auxiliary model call. In the second stage (Sum – probabilistic inference or reasoning), the results of these queries are aggregated to make the final prediction. We demonstrate the possibilities and advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks, achieving improvements over the state of the art using GPT-family models on thirteen difficult tasks, often with far smaller model variants. We also compare and contrast ThinkSum with other proposed modifications to direct prompting of LLMs, such as variants of chain-of-thought prompting. Our results suggest that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs. Overall, our proposed paradigm represents a promising approach for enhancing the reasoning capabilities of LLMs.",370cea8b4220917f45a69358c0303df71f5063c7,Semantic Scholar,,, +16,divide and prompt chain of thought prompting for texttosql,"['X. Liu', 'Zhao Tan']",http://arxiv.org/pdf/2304.11556,2023-04-23,,"Chain-of-thought (CoT) prompting combined with large language models (LLMs) have achieved encouraging results on complex reasoning tasks. Text-to-SQL is a critical semantic parsing task that converts natural language questions into SQL statements, involving a complex reasoning process. However, there is little work about using CoT prompting to activate LLM's reasoning capabilities on Text-to-SQL tasks. In this work, we propose a new paradigm for prompting Text-to-SQL tasks, called Divide-and-Prompt, which first divides the task into subtasks, and then approach each subtask through CoT. We present 3 prompting-based methods to enhance the Text-to-SQL ability of LLMs. Experiments show that these prompts guide LLMs to generate Text-to-SQL with higher execution accuracy.",40c9280d87059c0cc28f2a08d46a7045fa3e9736,Semantic Scholar,,, +17,taggpt large language models are zeroshot multimodal taggers,"['Chen Li', 'Yixiao Ge', 'Jiayong Mao', 'Dian Li', 'Ying Shan']",http://arxiv.org/pdf/2304.03022,2023-04-06,,"Tags are pivotal in facilitating the effective distribution of multimedia content in various applications in the contemporary Internet era, such as search engines and recommendation systems. Recently, large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks. In this work, we propose TagGPT, a fully automated system capable of tag extraction and multimodal tagging in a completely zero-shot fashion. 
Our core insight is that, through elaborate prompt engineering, LLMs are able to extract and reason about proper tags given textual clues of multimodal data, e.g., OCR, ASR, title, etc. Specifically, to automatically build a high-quality tag set that reflects user intent and interests for a specific application, TagGPT predicts large-scale candidate tags from a series of raw data via prompting LLMs, filtered with frequency and semantics. Given a new entity that needs tagging for distribution, TagGPT introduces two alternative options for zero-shot tagging, i.e., a generative method with late semantic matching with the tag set, and another selective method with early matching in prompts. It is well noticed that TagGPT provides a system-level solution based on a modular framework equipped with a pre-trained LLM (GPT-3.5 used here) and a sentence embedding model (SimCSE used here), which can be seamlessly replaced with any more advanced one you want. TagGPT is applicable for various modalities of data in modern social media and showcases strong generalization ability to a wide range of applications. We evaluate TagGPT on publicly available datasets, i.e., Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to existing hashtags and off-the-shelf taggers. Project page: https://github.com/TencentARC/TagGPT.",4895d443c36bd136a818be2db34442354ba408d1,Semantic Scholar,,, +18,humanintheloop machine translation with large language model,"['Xinyi Yang', 'Runzhe Zhan', 'Derek F. Wong', 'Junchao Wu', 'Lidia S. Chao']",https://arxiv.org/pdf/2310.08908,2023-10-13,,"The large language model (LLM) has garnered significant attention due to its in-context learning mechanisms and emergent capabilities. The research community has conducted several pilot studies to apply LLMs to machine translation tasks and evaluate their performance from diverse perspectives. However, previous research has primarily focused on the LLM itself and has not explored human intervention in the inference process of LLM. The characteristics of LLM, such as in-context learning and prompt engineering, closely mirror human cognitive abilities in language tasks, offering an intuitive solution for human-in-the-loop generation. In this study, we propose a human-in-the-loop pipeline that guides LLMs to produce customized outputs with revision instructions. The pipeline initiates by prompting the LLM to produce a draft translation, followed by the utilization of automatic retrieval or human feedback as supervision signals to enhance the LLM’s translation through in-context learning. The human-machine interactions generated in this pipeline are also stored in an external database to expand the in-context retrieval database, enabling us to leverage human supervision in an offline setting. We evaluate the proposed pipeline using the GPT-3.5-turbo API on five domain-specific benchmarks for German-English translation. The results demonstrate the effectiveness of the pipeline in tailoring in-domain translations and improving translation performance compared to direct translation instructions. 
Additionally, we discuss the experimental results from the following perspectives: 1) the effectiveness of different in-context retrieval methods; 2) the construction of a retrieval database under low-resource scenarios; 3) the observed differences across selected domains; 4) the quantitative analysis of sentence-level and word-level statistics; and 5) the qualitative analysis of representative translation cases.",4950bf6f873ba1409a7bbad25cf5c93c8f833453,Semantic Scholar,,, +19,large language models vote prompting for rare disease identification,"['David Oniani', 'Jordan Hilsman', 'Hang Dong', 'F. Gao', 'Shiven Verma', 'Yanshan Wang']",https://arxiv.org/pdf/2308.12890,2023-08-24,,"The emergence of generative Large Language Models (LLMs) emphasizes the need for accurate and efficient prompting approaches. LLMs are often applied in Few-Shot Learning (FSL) contexts, where tasks are executed with minimal training data. FSL has become popular in many Artificial Intelligence (AI) subdomains, including AI for health. Rare diseases affect a small fraction of the population. Rare disease identification from clinical notes inherently requires FSL techniques due to limited data availability. Manual data collection and annotation is both expensive and time-consuming. In this paper, we propose Models-Vote Prompting (MVP), a flexible prompting approach for improving the performance of LLM queries in FSL settings. MVP works by prompting numerous LLMs to perform the same tasks and then conducting a majority vote on the resulting outputs. This method achieves improved results to any one model in the ensemble on one-shot rare disease identification and classification tasks. We also release a novel rare disease dataset for FSL, available to those who signed the MIMIC-IV Data Use Agreement (DUA). Furthermore, in using MVP, each model is prompted multiple times, substantially increasing the time needed for manual annotation, and to address this, we assess the feasibility of using JSON for automating generative LLM evaluation.",4b091d92f793161046b483ee93df244bf93bb508,Semantic Scholar,,, +20,hypothesis search inductive reasoning with language models,"['Ruocheng Wang', 'E. Zelikman', 'Gabriel Poesia', 'Yewen Pu', 'Nick Haber', 'Noah D. Goodman']",https://arxiv.org/pdf/2309.05660,2023-09-11,,"Inductive reasoning is a core problem-solving capacity: humans can identify underlying principles from a few examples, which can then be robustly generalized to novel scenarios. Recent work has evaluated large language models (LLMs) on inductive reasoning tasks by directly prompting them yielding""in context learning.""This can work well for straightforward inductive tasks, but performs very poorly on more complex tasks such as the Abstraction and Reasoning Corpus (ARC). In this work, we propose to improve the inductive reasoning ability of LLMs by generating explicit hypotheses at multiple levels of abstraction: we prompt the LLM to propose multiple abstract hypotheses about the problem, in natural language, then implement the natural language hypotheses as concrete Python programs. These programs can be directly verified by running on the observed examples and generalized to novel inputs. Because of the prohibitive cost of generation with state-of-the-art LLMs, we consider a middle step to filter the set of hypotheses that will be implemented into programs: we either ask the LLM to summarize into a smaller set of hypotheses, or ask human annotators to select a subset of the hypotheses. 
We verify our pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem subset of ARC, our automated pipeline using LLM summaries achieves 27.5% accuracy, significantly outperforming the direct prompting baseline (accuracy of 12.5%). With the minimal human input of selecting from LLM-generated candidates, the performance is boosted to 37.5%. (And we argue this is a lower bound on the performance of our approach without filtering.) Our ablation studies show that abstract hypothesis generation and concrete program representations are both beneficial for LLMs to perform inductive reasoning tasks.",4cf527e9e0d68e3fc16d39fbcdb3869cd3ccf60f,Semantic Scholar,,, +21,pearl prompting large language models to plan and execute actions over long documents,"['Simeng Sun', 'Y. Liu', 'Shuo Wang', 'Chenguang Zhu', 'Mohit Iyyer']",http://arxiv.org/pdf/2305.14564,2023-05-23,,"Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.",4ee96f0757e517928590a2300af5d40ba768a5a7,Semantic Scholar,,, +22,aligning language models to user opinions,"['EunJeong Hwang', 'Bodhisattwa Prasad Majumder', 'Niket Tandon']",http://arxiv.org/pdf/2305.14929,2023-05-24,,"An important aspect of developing LLMs that interact with humans is to align models' behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user group or ideological persona the model captured during its pertaining stage. But, how to best align an LLM with a specific user and not a demographic or ideological group remains an open question. Mining public opinion surveys (by Pew Research), we find that the opinions of a user and their demographics and ideologies are not mutual predictors. We use this insight to align LLMs by modeling both user opinions as well as user demographics and ideology, achieving up to 7 points accuracy gains in predicting public opinions from survey questions across a broad set of topics. 
In addition to the typical approach of prompting LLMs with demographics and ideology, we discover that utilizing the most relevant past opinions from individual users enables the model to predict user opinions more accurately.",5db0f55332839c408e3049cea1a6ad48fefba70c,Semantic Scholar,,, +23,user simulation with large language models for evaluating taskoriented dialogue,"['Sam Davidson', 'Salvatore Romeo', 'Raphael Shu', 'James Gung', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang']",https://arxiv.org/pdf/2309.13233,2023-09-23,,"One of the major impediments to the development of new task-oriented dialogue (TOD) systems is the need for human evaluation at multiple stages and iterations of the development process. In an effort to move toward automated evaluation of TOD, we propose a novel user simulator built using recently developed large pretrained language models (LLMs). In order to increase the linguistic diversity of our system relative to the related previous work, we do not fine-tune the LLMs used by our system on existing TOD datasets; rather we use in-context learning to prompt the LLMs to generate robust and linguistically diverse output with the goal of simulating the behavior of human interlocutors. Unlike previous work, which sought to maximize goal success rate (GSR) as the primary metric of simulator performance, our goal is a system which achieves a GSR similar to that observed in human interactions with TOD systems. Using this approach, our current simulator is effectively able to interact with several TOD systems, especially on single-intent conversational goals, while generating lexically and syntactically diverse output relative to previous simulators that rely upon fine-tuned models. Finally, we collect a Human2Bot dataset of humans interacting with the same TOD systems with which we experimented in order to better quantify these achievements.",64e9e1686cf85db163f007a8621e2c1b24d86feb,Semantic Scholar,,, +24,booookscore a systematic exploration of booklength summarization in the era of llms,"['Yapei Chang', 'Kyle Lo', 'Tanya Goyal', 'Mohit Iyyer']",https://arxiv.org/pdf/2310.00785,2023-10-01,,"Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. 
BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than the oft-repetitive ones generated by LLaMA 2. Incremental updating yields lower BooookScore but higher level of detail than hierarchical merging, a trade-off sometimes preferred by human annotators. We release code and annotations after blind review to spur more principled research on book-length summarization.",65fe385a665480b41fafc56d76a3bd72e92e8886,Semantic Scholar,,, +25,reranking for natural language generation from logical forms a study based on large language models,"['Levon Haroutunian', 'Zhuang Li', 'Lucian Galescu', 'Philip R. Cohen', 'Raj Tumuluri', 'Gholamreza Haffari']",https://arxiv.org/pdf/2309.12294,2023-09-21,,"Large language models (LLMs) have demonstrated impressive capabilities in natural language generation. However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs). This task requires the generated outputs to embody the exact semantics of LFs, without missing any LF semantics or creating any hallucinations. In this work, we tackle this issue by proposing a novel generate-and-rerank approach. Our approach involves initially generating a set of candidate outputs by prompting an LLM and subsequently reranking them using a task-specific reranker model. In addition, we curate a manually collected dataset to evaluate the alignment between different ranking metrics and human judgements. The chosen ranking metrics are utilized to enhance the training and evaluation of the reranker model. By conducting extensive experiments on three diverse datasets, we demonstrate that the candidates selected by our reranker outperform those selected by baseline methods in terms of semantic consistency and fluency, as measured by three comprehensive metrics. Our findings provide strong evidence for the effectiveness of our approach in improving the quality of generated outputs.",6be6fe206f8ca735f8df26758bf877572abb10d3,Semantic Scholar,,, +26,not what you've signed up for compromising realworld llmintegrated applications with indirect prompt injection,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'C. Endres', 'Thorsten Holz', 'Mario Fritz']",https://arxiv.org/pdf/2302.12173,2023-02-23,,"Large Language Models (LLMs) are increasingly being integrated into applications, with versatile functionalities that can be easily modulated via natural language prompts. So far, it was assumed that the user is directly prompting the LLM. But, what if it is not the user prompting? We show that LLM-Integrated Applications blur the line between data and instructions and reveal several new attack vectors, using Indirect Prompt Injection, that enable adversaries to remotely (i.e., without a direct interface) exploit LLM-integrated applications by strategically injecting prompts into data likely to be retrieved at inference time. We derive a comprehensive taxonomy from a computer security perspective to broadly investigate impacts and vulnerabilities, including data theft, worming, information ecosystem contamination, and other novel security risks. 
We then demonstrate the practical viability of our attacks against both real-world systems, such as Bing Chat and code-completion engines, and GPT-4 synthetic applications. We show how processing retrieved prompts can act as arbitrary code execution, manipulate the application's functionality, and control how and if other APIs are called. Despite the increasing reliance on LLMs, effective mitigations of these emerging threats are lacking. By raising awareness of these vulnerabilities, we aim to promote the safe and responsible deployment of these powerful models and the development of robust defenses that protect users from potential attacks.",705e49afd92130f2bc1e0d4d0b1f6cb14e88803f,Semantic Scholar,,, 27,leveraging large language models for exploiting asr uncertainty,"['Pranay Dighe', 'Yi Su', 'Shangshang Zheng', 'Yunshu Liu', 'Vineet Garg', 'Xiaochuan Niu', 'Ahmed H. Tewfik']",https://arxiv.org/pdf/2309.04842,2023-09-09,,"While large language models excel in a variety of natural language processing (NLP) tasks, to perform well on spoken language understanding (SLU) tasks, they must either rely on off-the-shelf automatic speech recognition (ASR) systems for transcription, or be equipped with an in-built speech modality. This work focuses on the former scenario, where LLM's accuracy on SLU tasks is constrained by the accuracy of a fixed ASR system on the spoken input. Specifically, we tackle speech-intent classification task, where a high word-error-rate can limit the LLM's ability to understand the spoken intent. Instead of chasing a high accuracy by designing complex or specialized architectures regardless of deployment costs, we seek to answer how far we can go without substantially changing the underlying ASR and LLM, which can potentially be shared by multiple unrelated tasks. To this end, we propose prompting the LLM with an n-best list of ASR hypotheses instead of only the error-prone 1-best hypothesis. We explore prompt-engineering to explain the concept of n-best lists to the LLM; followed by the finetuning of Low-Rank Adapters on the downstream tasks. Our approach using n-best lists proves to be effective on a device-directed speech detection task as well as on a keyword spotting task, where systems using n-best list prompts outperform those using 1-best ASR hypothesis; thus paving the way for an efficient method to exploit ASR uncertainty via LLMs for speech-based applications.",72fb75f7c38a83424308c8205bb36cd88995494b,Semantic Scholar,,, 28,language models are weak learners,"['Hariharan Manikandan', 'Yiding Jiang', 'J. Z. Kolter']",http://arxiv.org/pdf/2306.14101,2023-06-25,,"A central notion in practical and theoretical machine learning is that of a $\textit{weak learner}$, classifiers that achieve better-than-random performance (on any given distribution over data), even by a small margin. Such weak learners form the practical basis for canonical machine learning methods such as boosting. In this work, we illustrate that prompt-based large language models can operate effectively as said weak learners. Specifically, we illustrate the use of a large language model (LLM) as a weak learner in a boosting algorithm applied to tabular data. We show that by providing (properly sampled according to the distribution of interest) text descriptions of tabular data samples, LLMs can produce a summary of the samples that serves as a template for classification and achieves the aim of acting as a weak learner on this task. 
We incorporate these models into a boosting approach, which in some settings can leverage the knowledge within the LLM to outperform traditional tree-based boosting. The model outperforms both few-shot learning and occasionally even more involved fine-tuning procedures, particularly for tasks involving small numbers of data points. The results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.",7d87fbdfbf5038a4e0ff09801b6d3b8a2e0c613a,Semantic Scholar,,, 29,connecting large language models with evolutionary algorithms yields powerful prompt optimizers,"['Qingyan Guo', 'Rui Wang', 'Junliang Guo', 'Bei Li', 'Kaitao Song', 'Xu Tan', 'Guoqing Liu', 'Jiang Bian', 'Yujiu Yang', 'Tsinghua University', 'Microsoft Research']",https://arxiv.org/pdf/2309.08532,2023-09-15,,"Large Language Models (LLMs) excel in various tasks, but they rely on carefully crafted prompts that often demand substantial human effort. To automate this process, in this paper, we propose a novel framework for discrete prompt optimization, called EvoPrompt, which borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence. To enable EAs to work on discrete prompts, which are natural language expressions that need to be coherent and human-readable, we connect LLMs with EAs. This approach allows us to simultaneously leverage the powerful language processing capabilities of LLMs and the efficient optimization performance of EAs. Specifically, abstaining from any gradients or parameters, EvoPrompt starts from a population of prompts and iteratively generates new prompts with LLMs based on the evolutionary operators, improving the population based on the development set. We optimize prompts for both closed- and open-source LLMs including GPT-3.5 and Alpaca, on 9 datasets spanning language understanding and generation tasks. EvoPrompt significantly outperforms human-engineered prompts and existing methods for automatic prompt generation by up to 25% and 14% respectively. Furthermore, EvoPrompt demonstrates that connecting LLMs with EAs creates synergies, which could inspire further research on the combination of LLMs and conventional algorithms.",8d17234680db76f99efd22fbcb169f45d2d79d93,Semantic Scholar,,, @@ -33,1433 +33,1638 @@ 31,promptner prompting for named entity recognition,"['D. Ashok', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2305.15444,2023-05-24,,"In a surprising turn, Large Language Models (LLMs) together with a growing arsenal of prompt-based heuristics now offer powerful off-the-shelf approaches providing few-shot solutions to myriad classic NLP problems. However, despite promising early results, these LLM-based few-shot methods remain far from the state of the art in Named Entity Recognition (NER), where prevailing methods include learning representations via end-to-end structural understanding and fine-tuning on standard labeled corpora. In this paper, we introduce PromptNER, a new state-of-the-art algorithm for few-Shot and cross-domain NER. To adapt to any new NER task PromptNER requires a set of entity definitions in addition to the standard few-shot examples. Given a sentence, PromptNER prompts an LLM to produce a list of potential entities along with corresponding explanations justifying their compatibility with the provided entity type definitions. 
Remarkably, PromptNER achieves state-of-the-art performance on few-shot NER, achieving a 4% (absolute) improvement in F1 score on the ConLL dataset, a 9% (absolute) improvement on the GENIA dataset, and a 4% (absolute) improvement on the FewNERD dataset. PromptNER also moves the state of the art on Cross Domain NER, outperforming prior methods (including those not limited to the few-shot setting), setting a new mark on 3/5 CrossNER target domains, with an average F1 gain of 3%, despite using less than 2% of the available data.",9141480721653789597b6e537ee0eeab401f3e60,Semantic Scholar,,, 32,boosting theoryofmind performance in large language models via prompting,"['Shima Rahimi Moghaddam', 'C. Honey']",http://arxiv.org/pdf/2304.11490,2023-04-22,,"Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain of thought reasoning and step-by-step thinking instructions. We found that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) (all models excluding Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate prompting enhances LLM ToM reasoning, and they underscore the context-dependent nature of LLM cognitive capacities.",96d6bb5d6abdeda9b2db9af6296527200ba7aa32,Semantic Scholar,,, 33,small language models improve giants by rewriting their outputs,"['Giorgos Vernikos', 'Arthur Bravzinskas', 'Jakub Adamek', 'Jonathan Mallinson', 'Aliaksei Severyn', 'Eric Malmi']",http://arxiv.org/pdf/2305.13514,2023-05-22,,"Large language models (LLMs) have demonstrated impressive few-shot learning capabilities, but they often underperform compared to fine-tuned models on challenging tasks. Furthermore, their large size and restricted access only through APIs make task-specific fine-tuning impractical. Moreover, LLMs are sensitive to different aspects of prompts (e.g., the selection and order of demonstrations) and can thus require time-consuming prompt engineering. In this light, we propose a method to correct LLM outputs without relying on their weights. First, we generate a pool of candidates by few-shot prompting an LLM. Second, we refine the LLM-generated outputs using a smaller model, the LM-corrector (LMCor), which is trained to rank, combine and rewrite the candidates to produce the final target output. Our experiments demonstrate that even a small LMCor model (250M) substantially improves the few-shot performance of LLMs (62B) across diverse tasks. Moreover, we illustrate that the LMCor exhibits robustness against different prompts, thereby minimizing the need for extensive prompt engineering. 
Finally, we showcase that the LMCor can be seamlessly integrated with different LLMs at inference time, serving as a plug-and-play module to improve their performance.",a21de70160c91dcf9b1e7a93fbb32f4b2687860a,Semantic Scholar,,, -34,copilot for xcode exploring aiassisted programming by prompting cloudbased large language models,"['C. Tan', 'Shangxin Guo', 'M. Wong', 'C. Hang']",https://arxiv.org/pdf/2307.14349,2023-07-08,,"This paper presents an AI-assisted programming tool called Copilot for Xcode for program composition and design to support human software developers. By seamlessly integrating cloud-based Large Language Models (LLM) with Apple's local development environment, Xcode, this tool enhances productivity and unleashes creativity for software development in Apple software ecosystem (e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP) techniques, Copilot for Xcode effectively processes source code tokens and patterns within code repositories, enabling features such as code generation, autocompletion, documentation, and error detection. Software developers can also query and make""small""decisions for program composition, some of which can be made simultaneously, and this is facilitated through prompt engineering in a chat interface of Copilot for Xcode. Finally, we present simple case studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt popular LLM services like OpenAI ChatGPT for program composition and design.",a3509cef906a4517238c1764676cf637efcd1d5e,Semantic Scholar,,, -35,large language models can accomplish business process management tasks,"['Michael Grohs', 'Luka Abb', 'Nourhan Elsayed', 'Jana-Rebecca Rehse']",https://arxiv.org/pdf/2307.09923,2023-07-19,,"Business Process Management (BPM) aims to improve organizational activities and their outcomes by managing the underlying processes. To achieve this, it is often necessary to consider information from various sources, including unstructured textual documents. Therefore, researchers have developed several BPM-specific solutions that extract information from textual documents using Natural Language Processing techniques. These solutions are specific to their respective tasks and cannot accomplish multiple process-related problems as a general-purpose instrument. However, in light of the recent emergence of Large Language Models (LLMs) with remarkable reasoning capabilities, such a general-purpose instrument with multiple applications now appears attainable. In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by applying a specific LLM to three exemplary tasks: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing the suitability of process tasks from textual descriptions for robotic process automation. We show that, without extensive configuration or prompt engineering, LLMs perform comparably to or better than existing solutions and discuss implications for future BPM research as well as practical usage.",b43e9b674d4572e1aba8b40a28056ab118ad5e83,Semantic Scholar,,, +34,copilot for xcode exploring aiassisted programming by prompting cloudbased large language models,"['C. Tan', 'Shangxin Guo', 'M. Wong', 'Ching Nam Hang']",https://arxiv.org/pdf/2307.14349,2023-07-08,,"This paper presents an AI-assisted programming tool called Copilot for Xcode for program composition and design to support human software developers. 
By seamlessly integrating cloud-based Large Language Models (LLM) with Apple's local development environment, Xcode, this tool enhances productivity and unleashes creativity for software development in Apple software ecosystem (e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP) techniques, Copilot for Xcode effectively processes source code tokens and patterns within code repositories, enabling features such as code generation, autocompletion, documentation, and error detection. Software developers can also query and make""small""decisions for program composition, some of which can be made simultaneously, and this is facilitated through prompt engineering in a chat interface of Copilot for Xcode. Finally, we present simple case studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt popular LLM services like OpenAI ChatGPT for program composition and design.",a3509cef906a4517238c1764676cf637efcd1d5e,Semantic Scholar,,, +35,codeie large code generation models are better fewshot information extractors,"['Peng Li', 'Tianxiang Sun', 'Qiong Tang', 'Hang Yan', 'Yuanbin Wu', 'Xuanjing Huang', 'Xipeng Qiu Academy for EngineeringTechnology', 'Fudan University', 'School of Materials Science', 'Technology', 'East China Normal University']",http://arxiv.org/pdf/2305.05711,2023-05-09,,"Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs since the output of the IE task is usually structured and therefore is hard to be converted into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language and utilize generative LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular, named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well-aligned with these IE tasks by designing code-style prompts and formulating these IE tasks as code generation tasks. Experiment results on seven benchmarks show that our method consistently outperforms fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.",a86dd6c62d3dc9c7989c98a3e4ace3fd8000e515,Semantic Scholar,,, 36,zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis,"['Md. Arid Hasan', 'Shudipta Das', 'Afiyat Anjum', 'Firoj Alam', 'Anika Anjum', 'Avijit Sarker', 'S. R. H. Noori']",https://arxiv.org/pdf/2308.10783,2023-08-21,,"The rapid expansion of the digital world has propelled sentiment analysis into a critical tool across diverse sectors such as marketing, politics, customer service, and healthcare. While there have been significant advancements in sentiment analysis for widely spoken languages, low-resource languages, such as Bangla, remain largely under-researched due to resource constraints. Furthermore, the recent unprecedented performance of Large Language Models (LLMs) in various applications highlights the need to evaluate them in the context of low-resource languages. 
In this study, we present a sizeable manually annotated dataset encompassing 33,605 Bangla news tweets and Facebook comments. We also investigate zero- and few-shot in-context learning with several language models, including Flan-T5, GPT-4, and Bloomz, offering a comparative analysis against fine-tuned models. Our findings suggest that monolingual transformer-based models consistently outperform other models, even in zero and few-shot scenarios. To foster continued exploration, we intend to make this dataset and our research tools publicly available to the broader research community. In the spirit of further research, we plan to make this dataset and our experimental resources publicly accessible to the wider research community.",bc70af9248d210663edf22e5fc84ca9313c697b0,Semantic Scholar,,, 37,progprompt generating situated robot task plans using large language models,"['Ishika Singh', 'Valts Blukis', 'Arsalan Mousavian', 'Ankit Goyal', 'Danfei Xu', 'Jonathan Tremblay', 'D. Fox', 'Jesse Thomason', 'Animesh Garg']",https://arxiv.org/pdf/2209.11302,2022-09-22,,"Task planning can require defining myriad domain knowledge about the world in which a robot needs to act. To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information. However, such methods either require enumerating all possible next steps for scoring, or generate free-form text that may contain actions not possible on a given robot in its current context. We present a programmatic LLM prompt structure that enables plan generation functional across situated environments, robot capabilities, and tasks. Our key insight is to prompt the LLM with program-like specifications of the available actions and objects in an environment, as well as with example programs that can be executed. We make concrete recommendations about prompt structure and generation constraints through ablation experiments, demonstrate state of the art success rates in VirtualHome household tasks, and deploy our method on a physical robot arm for tabletop tasks. Website at progprompt.github.io",c03fa01fbb9c77fe3d10609ba5f1dee33a723867,Semantic Scholar,,, -38,democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts,"['Xuan-Phi Nguyen', 'Sharifah Mahani Aljunied', 'Shafiq R. Joty', 'Lidong Bing']",http://arxiv.org/pdf/2306.11372,2023-06-20,,"Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, where unsupervised techniques may be necessary. Moreover, competent generative capabilities of LLMs are observed only in high-resource languages, while their performances among under-represented languages fall behind due to pre-training data imbalance. To elicit LLMs' ability onto low-resource languages without any supervised data, we propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English. These prompts are then used to create intra-lingual exemplars to perform tasks in the target languages. Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages. 
We also show that fine-tuning a 7B model on data generated from our method helps it perform competitively with a 175B model. In non-English translation tasks, our method even outperforms supervised prompting by up to 3 chrF++ in many low-resource languages. When evaluated on zero-shot multilingual summarization, our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is also favored by GPT-4.",e0867e9f3a715851a90d17423f7f3b33a2a66bb1,Semantic Scholar,,, -39,exploiting asymmetry for synthetic training data generation synthie and the case of information extraction,"['Martin Josifoski', 'Marija Sakota', 'Maxime Peyrard', 'Robert West']",http://arxiv.org/pdf/2303.04132,2023-03-07,,"Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is possible to prompt an LLM to perform the task in the reverse direction, by generating plausible input text for a target output structure. Leveraging this asymmetry in task difficulty makes it possible to produce large-scale, high-quality data for complex tasks. We demonstrate the effectiveness of this approach on closed information extraction, where collecting ground-truth data is challenging, and no satisfactory dataset exists to date. We synthetically generate a dataset of 1.8M data points, establish its superior quality compared to existing datasets in a human evaluation, and use it to finetune small models (220M and 770M parameters), termed SynthIE, that outperform the prior state of the art (with equal model size) by a substantial margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data, and models are available at https://github.com/epfl-dlab/SynthIE.",f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f,Semantic Scholar,,, -40,query rewriting for retrievalaugmented large language models,"['Xinbei Ma', 'Yeyun Gong', 'Pengcheng He', 'Hai Zhao', 'Nan Duan']",http://arxiv.org/pdf/2305.14283,2023-05-23,,"Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. 
Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.",f743287be3ced6757de7ecb26d03815b22cd737b,Semantic Scholar,,, -41,legoprover neural theorem proving with growing libraries,"['Huajian Xin', 'Haiming Wang', 'Chuanyang Zheng', 'Lin Li', 'Zhengying Liu', 'Qingxing Cao', 'Yinya Huang', 'Jing Xiong', 'Han Shi', 'Enze Xie', 'Jian Yin', 'Zhenguo Li', 'Xiaodan Liang', 'Heng Liao']",https://arxiv.org/pdf/2310.00656,2023-10-01,,"Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We also release our code and all the generated skills.",f8b5ee53c3410f20049e7def47bd52403fa388e3,Semantic Scholar,,, -42,q2d turning questions into dialogs to teach models how to search,"['Yonatan Bitton', 'Shlomi Cohen-Ganor', 'Ido Hakimi', 'Yoad Lewenberg', 'Roee Aharoni', 'Enav Weinreb']",http://arxiv.org/pdf/2304.14318,2023-04-27,,"One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time and resource consuming. In this work, we propose q2d: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches which relied on human written dialogs with search queries, our method allows to automatically generate query-based grounded dialogs with better control and scale. 
Our experiments demonstrate that: (1) For query generation on the QReCC dataset, models trained on our synthetically-generated data achieve 90%--97% of the performance of models trained on the human-generated data; (2) We can successfully generate data for training dialog models in new domains without any existing dialog data as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets. (3) We perform a thorough analysis of the generated dialogs showing that humans find them of high quality and struggle to distinguish them from human-written dialogs.",33729913908d187dc0db6e41073c35643324fe4f,Semantic Scholar,,, -43,fairnessguided fewshot prompting for large language models,"['Huan Ma', 'Changqing Zhang', 'Yatao Bian', 'Lemao Liu', 'Zhirui Zhang', 'P. Zhao', 'Shu Zhang', 'H. Fu', 'Qinghua Hu', 'Bing Wu']",http://arxiv.org/pdf/2303.13217,2023-03-23,,"Large language models have demonstrated surprising ability to perform in-context learning, i.e., these models can be directly applied to solve numerous downstream tasks by conditioning on a prompt constructed by a few input-output examples. However, prior research has shown that in-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats. Therefore, the construction of an appropriate prompt is essential for improving the performance of in-context learning. In this paper, we revisit this problem from the view of predictive bias. Specifically, we introduce a metric to evaluate the predictive bias of a fixed prompt against labels or a given attributes. Then we empirically show that prompts with higher bias always lead to unsatisfactory predictive quality. Based on this observation, we propose a novel search strategy based on the greedy search to identify the near-optimal prompt for improving the performance of in-context learning. We perform comprehensive experiments with state-of-the-art mainstream models such as GPT-3 on various downstream tasks. Our results indicate that our method can enhance the model's in-context learning performance in an effective and interpretable manner.",3436ff7a1dd4c6547ba78968d3eec2545a6dccb9,Semantic Scholar,,, -44,social simulacra creating populated prototypes for social computing systems,"['J. Park', 'Lindsay Popowski', 'Carrie J. Cai', 'M. Morris', 'Percy Liang', 'Michael S. Bernstein']",https://dl.acm.org/doi/pdf/10.1145/3526113.3545616,2022-08-08,,"Social computing prototypes probe the social behaviors that may arise in an envisioned system design. This prototyping practice is currently limited to recruiting small groups of people. Unfortunately, many challenges do not arise until a system is populated at a larger scale. Can a designer understand how a social system might behave when populated, and make adjustments to the design before the system falls prey to such challenges? We introduce social simulacra, a prototyping technique that generates a breadth of realistic social interactions that may emerge when a social computing system is populated. Social simulacra take as input the designer’s description of a community’s design—goal, rules, and member personas—and produce as output an instance of that design with simulated behavior, including posts, replies, and anti-social behaviors. We demonstrate that social simulacra shift the behaviors that they generate appropriately in response to design changes, and that they enable exploration of “what if?” scenarios where community members or moderators intervene. 
To power social simulacra, we contribute techniques for prompting a large language model to generate thousands of distinct community members and their social interactions with each other; these techniques are enabled by the observation that large language models’ training data already includes a wide variety of positive and negative behavior on social media platforms. In evaluations, we show that participants are often unable to distinguish social simulacra from actual community behavior and that social computing designers successfully refine their social computing designs when using social simulacra.",49b499598a8864eee55ab264fc16a5bf8d2f87ef,Semantic Scholar,,, -45,folio natural language reasoning with firstorder logic,"['Simeng Han', 'Hailey Schoelkopf', 'Yilun Zhao', 'Zhenting Qi', 'Martin Riddell', 'Luke Benson', 'Lucy Sun', 'E. Zubova', 'Yujie Qiao', 'Matthew Burtell', 'David Peng', 'Jonathan Fan', 'Yixin Liu', 'Brian Wong', 'Malcolm Sailor', 'Ansong Ni', 'Linyong Nan', 'Jungo Kasai', 'Tao Yu', 'Rui Zhang', 'Shafiq R. Joty', 'Alexander R. Fabbri', 'Wojciech Kryscinski', 'Xi Victoria Lin', 'Caiming Xiong', 'Dragomir R. Radev']",http://arxiv.org/pdf/2209.00840,2022-09-02,,"We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for reasoning in natural language (NL), equipped with first order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises which serve as rules to be used to deductively reason for the validity of each conclusion. The logical correctness of premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset using FOL as the logical form. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable Large Language Model (LLM) publicly available, GPT-3 davinci, achieves only slightly better than random results with few-shot prompting on a subset of FOLIO, and the model is especially bad at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/Yale-LILY/FOLIO.",5581bf85386737bd3378eec68189759a05280bea,Semantic Scholar,,, -46,dictionarybased phraselevel prompting of large language models for machine translation,"['Marjan Ghazvininejad', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2302.07856,2023-02-15,,"Large language models (LLMs) demonstrate remarkable machine translation (MT) abilities via prompting, even though they were not explicitly trained for this task. However, even given the incredible quantities of data they are trained on, LLMs can struggle to translate inputs with rare words, which are common in low resource or domain transfer scenarios. We show that LLM prompting can provide an effective solution for rare words as well, by using prior knowledge from bilingual dictionaries to provide control hints in the prompts. We propose a novel method, DiPMT, that provides a set of possible translations for a subset of the input words, thereby enabling fine-grained phrase-level prompted control of the LLM. 
Extensive experiments show that DiPMT outperforms the baseline both in low-resource MT, as well as for out-of-domain MT. We further provide a qualitative analysis of the benefits and limitations of this approach, including the overall level of controllability that is achieved.",64ce6ef1f5cf227bf2bf917c87273386ae16256f,Semantic Scholar,,, -47,instructeval systematic evaluation of instruction selection methods,"['Anirudh Ajith', 'Chris Pan', 'Mengzhou Xia', 'A. Deshpande', 'Karthik Narasimhan']",https://arxiv.org/pdf/2307.00259,2023-07-01,,"In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. We release our evaluation suite for benchmarking instruction selection approaches and enabling more generalizable methods in this space.",6af986a2cab884fbd30ad6da2928dc19c12d83a7,Semantic Scholar,,, -48,unsupervised contrastconsistent ranking with language models,"['Niklas Stoehr', 'Pengxiang Cheng', 'Jing Wang', 'Daniel Preotiuc-Pietro', 'Rajarshi Bhowmik']",https://arxiv.org/pdf/2309.06991,2023-09-13,,"Language models contain ranking-based knowledge and are powerful solvers of in-context ranking tasks. For instance, they may have parametric knowledge about the ordering of countries by size or may be able to rank reviews by sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting techniques to elicit a language model's ranking knowledge. However, we find that even with careful calibration and constrained decoding, prompting-based techniques may not always be self-consistent in the rankings they produce. This motivates us to explore an alternative approach that is inspired by an unsupervised probing method called Contrast-Consistent Search (CCS). The idea is to train a probing model guided by a logical constraint: a model's representation of a statement and its negation must be mapped to contrastive true-false poles consistently across multiple statements. We hypothesize that similar constraints apply to ranking tasks where all items are related via consistent pairwise or listwise comparisons. To this end, we extend the binary CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression objective. 
Our results confirm that, for the same language model, CCR probing outperforms prompting and even performs on a par with prompting much larger language models.",70b73e272621562c6261f86d2ebf814703b760ed,Semantic Scholar,,, -49,analyzing chainofthought prompting in large language models via gradientbased feature attributions,"['Skyler Wu', 'Eric Meng Shen', 'Charumathi Badrinath', 'Jiaqi Ma', 'Himabindu Lakkaraju']",https://arxiv.org/pdf/2307.13339,2023-07-25,,"Chain-of-thought (CoT) prompting has been shown to empirically improve the accuracy of large language models (LLMs) on various question answering tasks. While understanding why CoT prompting is effective is crucial to ensuring that this phenomenon is a consequence of desired model behavior, little work has addressed this; nonetheless, such an understanding is a critical prerequisite for responsible model deployment. We address this question by leveraging gradient-based feature attribution methods which produce saliency scores that capture the influence of input tokens on model output. Specifically, we probe several open-source LLMs to investigate whether CoT prompting affects the relative importances they assign to particular input tokens. Our results indicate that while CoT prompting does not increase the magnitude of saliency scores attributed to semantically relevant tokens in the prompt compared to standard few-shot prompting, it increases the robustness of saliency scores to question perturbations and variations in model output.",71d68782c3da41b77866c2fd0cb65726f60b3af1,Semantic Scholar,,, -50,multimodal classifiers for openvocabulary object detection,"['Prannay Kaul', 'Weidi Xie', 'Andrew Zisserman']",http://arxiv.org/pdf/2306.05493,2023-06-08,,"The goal of this paper is open-vocabulary object detection (OVOD) $\unicode{x2013}$ building a model that can detect objects beyond the set of categories seen at training, thus enabling the user to specify categories of interest at inference without the need for model retraining. We adopt a standard two-stage object detector architecture, and explore three ways for specifying novel categories: via language descriptions, via image exemplars, or via a combination of the two. We make three contributions: first, we prompt a large language model (LLM) to generate informative language descriptions for object classes, and construct powerful text-based classifiers; second, we employ a visual aggregator on image exemplars that can ingest any number of images as input, forming vision-based classifiers; and third, we provide a simple method to fuse information from language descriptions and image exemplars, yielding a multi-modal classifier. When evaluating on the challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our text-based classifiers outperform all previous OVOD works; (ii) our vision-based classifiers perform as well as text-based classifiers in prior work; (iii) using multi-modal classifiers perform better than either modality alone; and finally, (iv) our text-based and multi-modal classifiers yield better performance than a fully-supervised detector.",73397ec77081b46f5e49a4e7486129fe2ffe7adf,Semantic Scholar,,, -51,prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages,"['Samuel Rhys Cox', 'Ashraf Abdul', 'Wei Tsang Ooi']",https://arxiv.org/pdf/2308.13479,2023-08-25,,"Large language models (LLMs) are increasingly capable and prevalent, and can be used to produce creative content. 
The quality of content is influenced by the prompt used, with more specific prompts that incorporate examples generally producing better results. On from this, it could be seen that using instructions written for crowdsourcing tasks (that are specific and include examples to guide workers) could prove effective LLM prompts. To explore this, we used a previous crowdsourcing pipeline that gave examples to people to help them generate a collectively diverse corpus of motivational messages. We then used this same pipeline to generate messages using GPT-4, and compared the collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the pipeline, and (3&4) two baseline GPT-4 prompts. We found that the LLM prompts using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages than the two baseline prompts. We also discuss implications from messages generated by both human writers and LLMs.",8da6e4537122af618c36563caef5863f8728d789,Semantic Scholar,,, -52,promptbased montecarlo tree search for goaloriented dialogue policy planning,"['Xiao Yu', 'Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2305.13660,2023-05-23,,"Planning for goal-oriented dialogue often requires simulating future dialogue interactions and estimating task progress. Many approaches thus consider training neural networks to perform look-ahead search algorithms such as A* search and Monte Carlo Tree Search (MCTS). However, this training often requires abundant annotated data, which creates challenges when faced with noisy annotations or low-resource settings. We introduce GDP-Zero, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training. GDP-Zero prompts a large language model to act as a policy prior, value function, user simulator, and system model during the tree search. We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that its responses are preferred over ChatGPT up to 59.32% of the time, and are rated more persuasive than ChatGPT during interactive evaluations.",9573e2025440219a1d3393664b3c80bda51ac8f4,Semantic Scholar,,, -53,extensible prompts for language models,"['Tao Ge', 'Jing Hu', 'Li Dong', 'Shaoguang Mao', 'Yanqiu Xia', 'Xun Wang', 'Siyi Chen', 'Furu Wei', 'Si-Qing Chen']",https://arxiv.org/pdf/2212.00616,2022-12-01,,"We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words that are introduced to help represent what NL words hardly describe, allowing a prompt to be more descriptive. Like NL prompts, X-Prompt is out-of-distribution (OOD) robust, for which we propose context-guided learning with prompt augmentation to learn its imaginary words for general usability, enabling them to use in different prompt contexts for fine-grain specifications. The promising results of X-Prompt demonstrate its potential of approaching advanced interaction between humans and LLMs to bridge their communication gap.",9ea3d90a172a0b5799c13287484f7406946f7311,Semantic Scholar,,, -54,studenteval a benchmark of studentwritten prompts for large language models of code,"['Hannah McLean Babe', 'S. Nguyen', 'Yangtian Zi', 'Arjun Guha', 'Molly Q. Feldman', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.04556,2023-06-07,,"Code LLMs are being rapidly deployed and there is evidence that they can make professional programmers more productive. 
Current benchmarks for code generation measure whether models generate correct programs given an expert prompt. In this paper, we present a new benchmark containing multiple prompts per problem, written by a specific population of non-expert prompters: beginning programmers. StudentEval contains 1,749 prompts for 48 problems, written by 80 students who have only completed one semester of Python programming. Our students wrote these prompts while working interactively with a Code LLM, and we observed very mixed success rates. We use StudentEval to evaluate 5 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. We analyze the prompts and find significant variation in students' prompting techniques. We also find that nondeterministic LLM sampling could mislead students into thinking that their prompts are more (or less) effective than they actually are, which has implications for how to teach with Code LLMs.",a4929de687f3c6937dabbf733258af635781d3c4,Semantic Scholar,,, -55,generate rather than retrieve large language models are strong context generators,"['W. Yu', 'Dan Iter', 'Shuohang Wang', 'Yichong Xu', 'Mingxuan Ju', 'Soumya Sanyal', 'Chenguang Zhu', 'Michael Zeng', 'Meng Jiang']",http://arxiv.org/pdf/2209.10063,2022-09-21,,"Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach for knowledge-intensive tasks is to employ a retrieve-then-read pipeline that first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators. We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextutal documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a novel clustering-based prompting method that selects distinct prompts, resulting in the generated documents that cover different perspectives, leading to better recall over acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue system. Notably, GenRead achieves 71.6 and 54.4 exact match scores on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Lastly, we demonstrate the model performance can be further improved by combining retrieval and generation. Our code and generated documents can be found at https://github.com/wyu97/GenRead.",b2542a738b75ee9b7ce1a13d8b78f9095d212412,Semantic Scholar,,, -56,idas intent discovery with abstractive summarization,"['Maarten De Raedt', 'Fréderic Godin', 'Thomas Demeester', 'Chris Develder']",http://arxiv.org/pdf/2305.19783,2023-05-31,,"Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., “labels”, that retain the core elements while removing non-essential information. 
We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure to generate labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder, and subsequently clustered to recover the latent intents. For the unsupervised task (without any intent labels) IDAS outperforms the state-of-the-art by up to +7.42% in standard cluster metrics for the Banking, StackOverflow, and Transport datasets. For the semi-supervised task (with labels for a subset of intents) IDAS surpasses 2 recent methods on the CLINC benchmark without even using labeled data.",b9c263500281e05fddfe1f84839491f605815230,Semantic Scholar,,, -57,reward design with language models,"['Minae Kwon', 'Sang Michael Xie', 'Kalesha Bullard', 'Dorsa Sadigh']",http://arxiv.org/pdf/2303.00001,2023-02-27,,"Reward design in reinforcement learning (RL) is challenging since specifying human notions of desired behavior may be difficult via reward functions or require many expert demonstrations. Can we instead cheaply design rewards using a natural language interface? This paper explores how to simplify reward design by prompting a large language model (LLM) such as GPT-3 as a proxy reward function, where the user provides a textual prompt containing a few examples (few-shot) or a description (zero-shot) of the desired behavior. Our approach leverages this proxy reward function in an RL framework. Specifically, users specify a prompt once at the beginning of training. During training, the LLM evaluates an RL agent's behavior against the desired behavior described by the prompt and outputs a corresponding reward signal. The RL agent then uses this reward to update its behavior. We evaluate whether our approach can train agents aligned with user objectives in the Ultimatum Game, matrix games, and the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents trained with our framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via supervised learning",d318e0169f649656c71f02a1f84194a734fe1962,Semantic Scholar,,, -58,leveraging training data in fewshot prompting for numerical reasoning,"['Zhanming Jie', 'Wei Lu']",http://arxiv.org/pdf/2305.18170,2023-05-29,,"Chain-of-thought (CoT) prompting with large language models has proven effective in numerous natural language processing tasks, but designing prompts that generalize well to diverse problem types can be challenging, especially in the context of math word problem (MWP) solving. Additionally, it is common to have a large amount of training data that have a better diversity coverage but CoT annotations are not available, which limits the use of supervised learning techniques. To address these issues, we investigate two approaches to leverage the training data in a few-shot prompting scenario: dynamic program prompting and program distillation. Our approach is largely inspired by Gao et al., (2022), where they proposed to replace the CoT with the programs as the intermediate reasoning step. Such a prompting strategy allows us to accurately verify the answer correctness through program execution in MWP solving. 
Our dynamic program prompting involves annotating the training data by sampling correct programs from a large language model, while program distillation involves adapting a smaller model to the program-annotated training data. Our experiments on three standard MWP datasets demonstrate the effectiveness of these approaches, yielding significant improvements over previous baselines for prompting and fine-tuning. Our results suggest that leveraging a large amount of training data can improve the generalization ability of prompts and boost the performance of fine-tuned small models in MWP solving.",d75d11d2c89c01cd284383546ae057cb827dc272,Semantic Scholar,,, -59,spell semantic prompt evolution based on a llm,"['Yujian Betterest Li', 'Kai Wu']",https://arxiv.org/pdf/2310.01260,2023-10-02,,"Prompt engineering is a new paradigm for enhancing the performance of trained neural network models. For optimizing text-style prompts, existing methods usually individually operate small portions of a text step by step, which either breaks the fluency or could not globally adjust a prompt. Since large language models (LLMs) have powerful ability of generating coherent texts token by token, can we utilize LLMs for improving prompts? Based on this motivation, in this paper, considering a trained LLM as a text generator, we attempt to design a black-box evolution algorithm for automatically optimizing texts, namely SPELL (Semantic Prompt Evolution based on a LLM). The proposed method is evaluated with different LLMs and evolution parameters in different text tasks. Experimental results show that SPELL could rapidly improve the prompts indeed. We further explore the evolution process and discuss on the limitations, potential possibilities and future work.",e1dafedfbb55cd2200411841c2ec40e7ea827773,Semantic Scholar,,, -60,contrastive noveltyaugmented learning anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",https://aclanthology.org/2023.acl-long.658.pdf,2022-11-28,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on unseen classes. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel classes, then generate examples from each novel class matching the task format. Second, we train a classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. 
When trained with CoNAL, classifiers improve in their ability to detect and abstain on novel class examples over prior methods by an average of 2.3% in terms of accuracy under the accuracy-coverage curve (AUAC) and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.",fed7e4a0e8c798777f3f1613be62a2dfb776b462,Semantic Scholar,,, -61,from prompt injections to sql injection attacks how protected is your llmintegrated web application,"['Rodrigo Pedro', 'Daniel Castro', 'Paulo Carreira', 'Nuno Santos']",https://arxiv.org/pdf/2308.01990,2023-08-03,,"Large Language Models (LLMs) have found widespread applications in various domains, including web applications, where they facilitate human interaction via chatbots with natural language interfaces. Internally, aided by an LLM-integration middleware such as Langchain, user prompts are translated into SQL queries used by the LLM to provide meaningful responses to users. However, unsanitized user prompts can lead to SQL injection attacks, potentially compromising the security of the database. Despite the growing interest in prompt injection vulnerabilities targeting LLMs, the specific risks of generating SQL injection attacks through prompt injections have not been extensively studied. In this paper, we present a comprehensive examination of prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the Langchain framework. Using Langchain as our case study, we characterize P$_2$SQL injections, exploring their variants and impact on application security through multiple concrete examples. Furthermore, we evaluate 7 state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks across language models. Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses. To counter these attacks, we propose four effective defense techniques that can be integrated as extensions to the Langchain framework. We validate the defenses through an experimental evaluation with a real-world use case application.",0894585294c67193ff3190240554677b56fd79a0,Semantic Scholar,,, -62,prompt injection parameterization of fixed inputs,"['Eunbi Choi', 'Yongrae Jo', 'Joel Jang', 'Minjoon Seo']",http://arxiv.org/pdf/2206.11349,2022-05-31,,"Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. 
Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.",1c475acaa1060c8318a625f24bfd88c12f367516,Semantic Scholar,,, -63,incontext learning in large language models a neuroscienceinspired analysis of representations,"['Safoora Yousefi', 'Leo Betthauser', 'Hosein Hasanbeig', 'Akanksha Saran', 'Raphael Milliere', 'Ida Momennejad']",https://arxiv.org/pdf/2310.00313,2023-09-30,,"Large language models (LLMs) exhibit remarkable performance improvement through in-context learning (ICL) by leveraging task-specific examples in the input. However, the mechanisms behind this improvement remain elusive. In this work, we investigate embeddings and attention representations in Llama-2 70B and Vicuna 13B. Specifically, we study how embeddings and attention change after in-context-learning, and how these changes mediate improvement in behavior. We employ neuroscience-inspired techniques, such as representational similarity analysis (RSA), and propose novel methods for parameterized probing and attention ratio analysis (ARA, measuring the ratio of attention to relevant vs. irrelevant information). We designed three tasks with a priori relationships among their conditions: reading comprehension, linear regression, and adversarial prompt injection. We formed hypotheses about expected similarities in task representations to investigate latent changes in embeddings and attention. Our analyses revealed a meaningful correlation between changes in both embeddings and attention representations with improvements in behavioral performance after ICL. This empirical framework empowers a nuanced understanding of how latent representations affect LLM behavior with and without ICL, offering valuable tools and insights for future research and practical applications.",2427527c1a1bc61b32c28a107192c3e22ed629bb,Semantic Scholar,,, -64,safeguarding crowdsourcing surveys from chatgpt with prompt injection,"['Chaofan Wang', 'Samuel Kernan Freire', 'Mo Zhang', 'Jing Wei', 'Jorge Gonçalves', 'V. Kostakos', 'Zhanna Sarsenbayeva', 'Christina Schneegass', 'A. Bozzon', 'E. Niforatos']",http://arxiv.org/pdf/2306.08833,2023-06-15,,"ChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses""prompt injection"", such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 93% effectiveness. We also provide an open-source software to help survey designers use our technique to detect LLM responses. 
Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.",8c035150f883007b5af9e5bb753b78d9c0b75a55,Semantic Scholar,,, -65,demystifying rce vulnerabilities in llmintegrated apps,"['Tong Liu', 'Zizhuang Deng', 'Guozhu Meng', 'Yuekang Li', 'Kai Chen']",https://arxiv.org/pdf/2309.02926,2023-09-06,,"In recent years, Large Language Models (LLMs) have demonstrated remarkable potential across various downstream tasks. LLM-integrated frameworks, which serve as the essential infrastructure, have given rise to many LLM-integrated web apps. However, some of these frameworks suffer from Remote Code Execution (RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps' servers remotely via prompt injections. Despite the severity of these vulnerabilities, no existing work has been conducted for a systematic investigation of them. This leaves a great challenge on how to detect vulnerabilities in frameworks as well as LLM-integrated apps in real-world scenarios. To fill this gap, we present two novel strategies, including 1) a static analysis-based tool called LLMSmith to scan the source code of the framework to detect potential RCE vulnerabilities and 2) a prompt-based automated testing approach to verify the vulnerability in LLM-integrated web apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are confirmed by the framework developers, resulting in the assignment of 7 CVE IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17 issues to the corresponding developers and received acknowledgments. Furthermore, we amplify the attack impact beyond achieving RCE by allowing attackers to exploit other app users (e.g. app responses hijacking, user API key leakage) without direct interaction between the attacker and the victim. Lastly, we propose some mitigating strategies for improving the security awareness of both framework and app developers, helping them to mitigate these risks effectively.",9be0dea0d6b892a2162490fb02712efaf10c0c87,Semantic Scholar,,, -66,prompt injection attack against llmintegrated applications,"['Yi Liu', 'Gelei Deng', 'Yuekang Li', 'Kailong Wang', 'Tianwei Zhang', 'Yepang Liu', 'Haoyu Wang', 'Yanhong Zheng', 'Yang Liu']",http://arxiv.org/pdf/2306.05499,2023-06-08,,"Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. 
We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.",db4cf9f6a653d5c15973e836c800ea47743251ae,Semantic Scholar,,, -67,automatic prompt rewriting for personalized text generation,"['Cheng Li', 'Mingyang Zhang', 'Qiaozhu Mei', 'Weize Kong', 'Michael Bendersky']",https://arxiv.org/pdf/2310.00152,2023-09-29,,"Facilitated by large language models (LLMs), personalized text generation has become a rapidly growing research direction. Most existing studies focus on designing specialized models for a particular domain, or they require fine-tuning the LLMs to generate personalized text. We consider a typical scenario in which the large language model, which generates personalized output, is frozen and can only be accessed through APIs. Under this constraint, all one can do is to improve the input text (i.e., text prompts) sent to the LLM, a procedure that is usually done manually. In this paper, we propose a novel method to automatically revise prompts for personalized text generation. The proposed method takes the initial prompts generated by a state-of-the-art, multistage framework for personalized generation and rewrites a few critical components that summarize and synthesize the personal context. The prompt rewriter employs a training paradigm that chains together supervised learning (SL) and reinforcement learning (RL), where SL reduces the search space of RL and RL facilitates end-to-end training of the rewriter. Using datasets from three representative domains, we demonstrate that the rewritten prompts outperform both the original prompts and the prompts optimized via supervised learning or reinforcement learning alone. In-depth analysis of the rewritten prompts shows that they are not only human readable, but also able to guide manual revision of prompts when there is limited resource to employ reinforcement learning to train the prompt rewriter, or when it is costly to deploy an automatic prompt rewriter for inference.",04892382200a9d48ad5f8d3cb3cd3d63a8206a01,Semantic Scholar,,, -68,rlprompt optimizing discrete text prompts with reinforcement learning,"['Mingkai Deng', 'Jianyu Wang', 'Cheng-Ping Hsieh', 'Yihan Wang', 'Han Guo', 'Tianmin Shu', 'Meng Song', 'E. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2205.12548,2022-05-25,,"Prompting has shown impressive success in enabling large pre-trained language models (LMs) to perform diverse NLP tasks, especially with only few downstream data. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning *soft* prompts (e.g., embeddings) which fall short of interpretability, reusability across LMs, and applicability when gradients are not accessible. *Discrete* prompts, on the other hand, are difficult to optimize, and are often created by “enumeration (e.g., paraphrasing)-then-selection” heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the optimized discrete prompt after training with reward. 
To harness the complex and stochastic reward signals from the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing fine-tuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating that LM prompting may not follow human language patterns.",07759a84f27e43cfa5bc8d579f8227c96e6ae1dc,Semantic Scholar,,, -69,querydependent prompt evaluation and optimization with offline inverse rl,"['Hao Sun', 'Alihan Hüyük', 'M. Schaar']",https://arxiv.org/pdf/2309.06553,2023-09-13,,"In this study, we aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization. We identify a previously overlooked objective of query dependency in such optimization and elucidate two ensuing challenges that impede the successful and economical design of prompt optimization techniques. One primary issue is the absence of an effective method to evaluate prompts during inference when the golden answer is unavailable. Concurrently, learning via interactions with the LLMs to navigate the expansive natural language prompting space proves to be resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data. Such data exists as by-products when diverse prompts are benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent prompt optimization objective is achieved by first learning an offline reward model. This model can evaluate any query-prompt pairs without accessing LLMs. Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt. Our experimental evaluations across various LLM scales and arithmetic reasoning datasets underscore both the efficacy and economic viability of the proposed approach.",0ad677b4172e5aef8b18bc6832145d1a03e11da4,Semantic Scholar,,, -70,temporallyextended prompts optimization for sam in interactive medical image segmentation,"['Chuyun Shen', 'Wenhao Li', 'Ya Zhang', 'Xiangfeng Wang']",http://arxiv.org/pdf/2306.08958,2023-06-15,,"The Segmentation Anything Model (SAM) has recently emerged as a foundation model for addressing image segmentation. Owing to the intrinsic complexity of medical images and the high annotation cost, the medical image segmentation (MIS) community has been encouraged to investigate SAM's zero-shot capabilities to facilitate automatic annotation. Inspired by the extraordinary accomplishments of interactive medical image segmentation (IMIS) paradigm, this paper focuses on assessing the potential of SAM's zero-shot capabilities within the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we observe that SAM's vulnerability to prompt forms (e.g., points, bounding boxes) becomes notably pronounced in IMIS. This leads us to develop a framework that adaptively offers suitable prompt forms for human experts. 
We refer to the framework above as temporally-extended prompts optimization (TEPO) and model it as a Markov decision process, solvable through reinforcement learning. Numerical experiments on the standardized benchmark BraTS2020 demonstrate that the learned TEPO agent can further enhance SAM's zero-shot capability in the MIS context.",0da5adf32fe7501a5b98eb6549b2c42af08ee094,Semantic Scholar,,, -71,att3d amortized textto3d object synthesis,"['Jonathan Lorraine', 'Kevin Xie', 'Xiaohui Zeng', 'Chen-Hsuan Lin', 'Towaki Takikawa', 'Nicholas Sharp', 'Tsung-Yi Lin', 'Ming-Yu Liu', 'S. Fidler', 'James Lucas']",http://arxiv.org/pdf/2306.07349,2023-06-06,,"Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields. DreamFusion recently achieved high-quality results but requires a lengthy, per-prompt optimization to create 3D objects. To address this, we amortize optimization over text prompts by training on many prompts simultaneously with a unified model, instead of separately. With this, we share computation across a prompt set, training in less time than per-prompt optimization. Our framework - Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to generalize to unseen setups and smooth interpolations between text for novel assets and simple animations.",1e8403af2e1e7a8f803d8df9e8daac584f99c2a0,Semantic Scholar,,, -72,topological data analysis guided segment anything model prompt optimization for zeroshot segmentation in biological imaging,"['R. Glatt', 'Shusen Liu']",http://arxiv.org/pdf/2306.17400,2023-06-30,,"Emerging foundation models in machine learning are models trained on vast amounts of data that have been shown to generalize well to new tasks. Often these models can be prompted with multi-modal inputs that range from natural language descriptions over images to point clouds. In this paper, we propose topological data analysis (TDA) guided prompt optimization for the Segment Anything Model (SAM) and show preliminary results in the biological image segmentation domain. Our approach replaces the standard grid search approach that is used in the original implementation and finds point locations based on their topological significance. Our results show that the TDA optimized point cloud is much better suited for finding small objects and massively reduces computational complexity despite the extra step in scenarios which require many segmentations.",294b4613b21abf1e9ba499de274569360093b107,Semantic Scholar,,, -73,unveiling the potential of knowledgeprompted chatgpt for enhancing drug trafficking detection on social media,"['Chuanbo Hu', 'Bing Liu', 'Xin Li', 'Yanfang Ye']",https://arxiv.org/pdf/2307.03699,2023-07-07,,"Social media platforms such as Instagram and Twitter have emerged as critical channels for drug marketing and illegal sale. Detecting and labeling online illicit drug trafficking activities becomes important in addressing this issue. However, the effectiveness of conventional supervised learning methods in detecting drug trafficking heavily relies on having access to substantial amounts of labeled data, while data annotation is time-consuming and resource-intensive. Furthermore, these models often face challenges in accurately identifying trafficking activities when drug dealers use deceptive language and euphemisms to avoid detection. 
To overcome this limitation, we conduct the first systematic study on leveraging large language models (LLMs), such as ChatGPT, to detect illicit drug trafficking activities on social media. We propose an analytical framework to compose \emph{knowledge-informed prompts}, which serve as the interface that humans can interact with and use LLMs to perform the detection task. Additionally, we design a Monte Carlo dropout based prompt optimization method to further to improve performance and interpretability. Our experimental findings demonstrate that the proposed framework outperforms other baseline language models in terms of drug trafficking detection accuracy, showing a remarkable improvement of nearly 12\%. By integrating prior knowledge and the proposed prompts, ChatGPT can effectively identify and label drug trafficking activities on social networks, even in the presence of deceptive language and euphemisms used by drug dealers to evade detection. The implications of our research extend to social networks, emphasizing the importance of incorporating prior knowledge and scenario-based prompts into analytical tools to improve online security and public safety.",2e588fe7e07948cb9112c37d5e9dcc3a13b1bd0f,Semantic Scholar,,, -74,automatic data transformation using large language model an experimental study on building energy data,"['Ankita Sharma', 'Xuanmao Li', 'Hong Guan', 'Guoxin Sun', 'Liang Zhang', 'Lanjun Wang', 'Kesheng Wu', 'Lei Cao', 'Erkang Zhu', 'Alexander Sim', 'Teresa Wu', 'Jia Zou']",https://arxiv.org/pdf/2309.01957,2023-09-05,,"Existing approaches to automatic data transformation are insufficient to meet the requirements in many real-world scenarios, such as the building sector. First, there is no convenient interface for domain experts to provide domain knowledge easily. Second, they require significant training data collection overheads. Third, the accuracy suffers from complicated schema changes. To bridge this gap, we present a novel approach that leverages the unique capabilities of large language models (LLMs) in coding, complex reasoning, and zero-shot learning to generate SQL code that transforms the source datasets into the target datasets. We demonstrate the viability of this approach by designing an LLM-based framework, termed SQLMorpher, which comprises a prompt generator that integrates the initial prompt with optional domain knowledge and historical patterns in external databases. It also implements an iterative prompt optimization mechanism that automatically improves the prompt based on flaw detection. The key contributions of this work include (1) pioneering an end-to-end LLM-based solution for data transformation, (2) developing a benchmark dataset of 105 real-world building energy data transformation problems, and (3) conducting an extensive empirical evaluation where our approach achieved 96% accuracy in all 105 problems. SQLMorpher demonstrates the effectiveness of utilizing LLMs in complex, domain-specific challenges, highlighting the potential of their potential to drive sustainable solutions.",3120c2763edab339b937ddbe76991ebdfe0e01e6,Semantic Scholar,,, -75,incontext examples selection for machine translation,"['Sweta Agrawal', 'Chunting Zhou', 'M. Lewis', 'Luke Zettlemoyer', 'Marjan Ghazvininejad']",https://arxiv.org/pdf/2212.02437,2022-12-05,,"Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model. 
For Machine Translation (MT), these examples are typically randomly sampled from the development dataset with a similar distribution as the evaluation set. However, it is unclear how the choice of these in-context examples and their ordering impacts the output translation quality. In this work, we aim to understand the properties of good in-context examples for MT in both in-domain and out-of-domain settings. We show that the translation quality and the domain of the in-context examples matter and that 1-shot noisy unrelated example can have a catastrophic impact on output quality. While concatenating multiple random examples reduces the effect of noise, a single good prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model. Adding similar examples based on an n-gram overlap with the test source significantly and consistently improves the translation quality of the outputs, outperforming a strong kNN-MT baseline in 2 out of 4 out-of-domain datasets.",515cf674fcdced5a7d5bb156dd5fcc1f5290e79b,Semantic Scholar,,, +38,large language models can accomplish business process management tasks,"['Michael Grohs', 'Luka Abb', 'Nourhan Elsayed', 'Jana-Rebecca Rehse']",https://arxiv.org/pdf/2307.09923,2023-07-19,,"Business Process Management (BPM) aims to improve organizational activities and their outcomes by managing the underlying processes. To achieve this, it is often necessary to consider information from various sources, including unstructured textual documents. Therefore, researchers have developed several BPM-specific solutions that extract information from textual documents using Natural Language Processing techniques. These solutions are specific to their respective tasks and cannot accomplish multiple process-related problems as a general-purpose instrument. However, in light of the recent emergence of Large Language Models (LLMs) with remarkable reasoning capabilities, such a general-purpose instrument with multiple applications now appears attainable. In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by applying a specific LLM to three exemplary tasks: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing the suitability of process tasks from textual descriptions for robotic process automation. We show that, without extensive configuration or prompt engineering, LLMs perform comparably to or better than existing solutions and discuss implications for future BPM research as well as practical usage.",cce17289765132b6192ccf90123bb7f5ef920c8e,Semantic Scholar,,, +39,democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts,"['Xuan-Phi Nguyen', 'Sharifah Mahani Aljunied', 'Shafiq R. Joty', 'Lidong Bing']",http://arxiv.org/pdf/2306.11372,2023-06-20,,"Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, where unsupervised techniques may be necessary. Moreover, competent generative capabilities of LLMs are observed only in high-resource languages, while their performances among under-represented languages fall behind due to pre-training data imbalance. 
To elicit LLMs' ability onto low-resource languages without any supervised data, we propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English. These prompts are then used to create intra-lingual exemplars to perform tasks in the target languages. Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages. We also show that fine-tuning a 7B model on data generated from our method helps it perform competitively with a 175B model. In non-English translation tasks, our method even outperforms supervised prompting by up to 3 chrF++ in many low-resource languages. When evaluated on zero-shot multilingual summarization, our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is also favored by GPT-4.",e0867e9f3a715851a90d17423f7f3b33a2a66bb1,Semantic Scholar,,, +40,exploiting asymmetry for synthetic training data generation synthie and the case of information extraction,"['Martin Josifoski', 'Marija Sakota', 'Maxime Peyrard', 'Robert West']",http://arxiv.org/pdf/2303.04132,2023-03-07,,"Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is possible to prompt an LLM to perform the task in the reverse direction, by generating plausible input text for a target output structure. Leveraging this asymmetry in task difficulty makes it possible to produce large-scale, high-quality data for complex tasks. We demonstrate the effectiveness of this approach on closed information extraction, where collecting ground-truth data is challenging, and no satisfactory dataset exists to date. We synthetically generate a dataset of 1.8M data points, establish its superior quality compared to existing datasets in a human evaluation, and use it to finetune small models (220M and 770M parameters), termed SynthIE, that outperform the prior state of the art (with equal model size) by a substantial margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data, and models are available at https://github.com/epfl-dlab/SynthIE.",f64e49d76048c902cc02e8ae27dcd4ac0dbcb97f,Semantic Scholar,,, +41,query rewriting for retrievalaugmented large language models,"['Xinbei Ma', 'Yeyun Gong', 'Pengcheng He', 'Hai Zhao', 'Nan Duan']",http://arxiv.org/pdf/2305.14283,2023-05-23,,"Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. 
The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.",f743287be3ced6757de7ecb26d03815b22cd737b,Semantic Scholar,,, +42,legoprover neural theorem proving with growing libraries,"['Huajian Xin', 'Haiming Wang', 'Chuanyang Zheng', 'Lin Li', 'Zhengying Liu', 'Qingxing Cao', 'Yinya Huang', 'Jing Xiong', 'Han Shi', 'Enze Xie', 'Jian Yin', 'Zhenguo Li', 'Xiaodan Liang', 'Heng Liao']",https://arxiv.org/pdf/2310.00656,2023-10-01,,"Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We also release our code and all the generated skills.",f8b5ee53c3410f20049e7def47bd52403fa388e3,Semantic Scholar,,, +43,q2d turning questions into dialogs to teach models how to search,"['Yonatan Bitton', 'Shlomi Cohen-Ganor', 'Ido Hakimi', 'Yoad Lewenberg', 'Roee Aharoni', 'Enav Weinreb']",http://arxiv.org/pdf/2304.14318,2023-04-27,,"One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time and resource consuming. In this work, we propose q2d: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. 
Unlike previous approaches which relied on human written dialogs with search queries, our method allows to automatically generate query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) For query generation on the QReCC dataset, models trained on our synthetically-generated data achieve 90%--97% of the performance of models trained on the human-generated data; (2) We can successfully generate data for training dialog models in new domains without any existing dialog data as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets. (3) We perform a thorough analysis of the generated dialogs showing that humans find them of high quality and struggle to distinguish them from human-written dialogs.",33729913908d187dc0db6e41073c35643324fe4f,Semantic Scholar,,, +44,fairnessguided fewshot prompting for large language models,"['Huan Ma', 'Changqing Zhang', 'Yatao Bian', 'Lemao Liu', 'Zhirui Zhang', 'P. Zhao', 'Shu Zhang', 'H. Fu', 'Qinghua Hu', 'Bing Wu']",http://arxiv.org/pdf/2303.13217,2023-03-23,,"Large language models have demonstrated surprising ability to perform in-context learning, i.e., these models can be directly applied to solve numerous downstream tasks by conditioning on a prompt constructed by a few input-output examples. However, prior research has shown that in-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats. Therefore, the construction of an appropriate prompt is essential for improving the performance of in-context learning. In this paper, we revisit this problem from the view of predictive bias. Specifically, we introduce a metric to evaluate the predictive bias of a fixed prompt against labels or a given attributes. Then we empirically show that prompts with higher bias always lead to unsatisfactory predictive quality. Based on this observation, we propose a novel search strategy based on the greedy search to identify the near-optimal prompt for improving the performance of in-context learning. We perform comprehensive experiments with state-of-the-art mainstream models such as GPT-3 on various downstream tasks. Our results indicate that our method can enhance the model's in-context learning performance in an effective and interpretable manner.",3436ff7a1dd4c6547ba78968d3eec2545a6dccb9,Semantic Scholar,,, +45,prompting multilingual large language models to generate codemixed texts the case of south east asian languages,"['Zheng-Xin Yong', 'Ruochen Zhang', 'J. Forde', 'Skyler Wang', 'Arjun Subramonian', 'Samuel Cahyawijaya', 'Holy Lovenia', 'Genta Indra Winata', 'Lintang Sutawika', 'Jan Christian Blaise Cruz', 'Long Phan', 'Yinghua Tan', 'Alham Fikri Aji']",https://arxiv.org/pdf/2303.13592,2023-03-23,,"The differences in decision making between behavioural models of voice interfaces are hard to capture using existing measures for the absolute performance of such models. For instance, two models may have a similar task success rate, but very different ways of getting there. In this paper, we propose a general methodology to compute the similarity of two dialogue behaviour models and investigate different ways of computing scores on both the semantic and the textual level. Complementing absolute measures of performance, we test our scores on three different tasks and show the practical usability of the measures.",3b27092740a489a63589cdcf40fad6a0e093daa0,Semantic Scholar,,, +46,social simulacra creating populated prototypes for social computing systems,"['J. 
Park', 'Lindsay Popowski', 'Carrie J. Cai', 'M. Morris', 'Percy Liang', 'Michael S. Bernstein']",https://dl.acm.org/doi/pdf/10.1145/3526113.3545616,2022-08-08,,"Social computing prototypes probe the social behaviors that may arise in an envisioned system design. This prototyping practice is currently limited to recruiting small groups of people. Unfortunately, many challenges do not arise until a system is populated at a larger scale. Can a designer understand how a social system might behave when populated, and make adjustments to the design before the system falls prey to such challenges? We introduce social simulacra, a prototyping technique that generates a breadth of realistic social interactions that may emerge when a social computing system is populated. Social simulacra take as input the designer’s description of a community’s design—goal, rules, and member personas—and produce as output an instance of that design with simulated behavior, including posts, replies, and anti-social behaviors. We demonstrate that social simulacra shift the behaviors that they generate appropriately in response to design changes, and that they enable exploration of “what if?” scenarios where community members or moderators intervene. To power social simulacra, we contribute techniques for prompting a large language model to generate thousands of distinct community members and their social interactions with each other; these techniques are enabled by the observation that large language models’ training data already includes a wide variety of positive and negative behavior on social media platforms. In evaluations, we show that participants are often unable to distinguish social simulacra from actual community behavior and that social computing designers successfully refine their social computing designs when using social simulacra.",49b499598a8864eee55ab264fc16a5bf8d2f87ef,Semantic Scholar,,, +47,folio natural language reasoning with firstorder logic,"['Simeng Han', 'Hailey Schoelkopf', 'Yilun Zhao', 'Zhenting Qi', 'Martin Riddell', 'Luke Benson', 'Lucy Sun', 'E. Zubova', 'Yujie Qiao', 'Matthew Burtell', 'David Peng', 'Jonathan Fan', 'Yixin Liu', 'Brian Wong', 'Malcolm Sailor', 'Ansong Ni', 'Linyong Nan', 'Jungo Kasai', 'Tao Yu', 'Rui Zhang', 'Shafiq R. Joty', 'Alexander R. Fabbri', 'Wojciech Kryscinski', 'Xi Victoria Lin', 'Caiming Xiong', 'Dragomir R. Radev']",http://arxiv.org/pdf/2209.00840,2022-09-02,,"We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for reasoning in natural language (NL), equipped with first order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises which serve as rules to be used to deductively reason for the validity of each conclusion. The logical correctness of premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset using FOL as the logical form. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For NL-FOL translation, we experiment with GPT-3 and Codex. 
Our results show that one of the most capable Large Language Model (LLM) publicly available, GPT-3 davinci, achieves only slightly better than random results with few-shot prompting on a subset of FOLIO, and the model is especially bad at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/Yale-LILY/FOLIO.",5581bf85386737bd3378eec68189759a05280bea,Semantic Scholar,,, +48,dictionarybased phraselevel prompting of large language models for machine translation,"['Marjan Ghazvininejad', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2302.07856,2023-02-15,,"Large language models (LLMs) demonstrate remarkable machine translation (MT) abilities via prompting, even though they were not explicitly trained for this task. However, even given the incredible quantities of data they are trained on, LLMs can struggle to translate inputs with rare words, which are common in low resource or domain transfer scenarios. We show that LLM prompting can provide an effective solution for rare words as well, by using prior knowledge from bilingual dictionaries to provide control hints in the prompts. We propose a novel method, DiPMT, that provides a set of possible translations for a subset of the input words, thereby enabling fine-grained phrase-level prompted control of the LLM. Extensive experiments show that DiPMT outperforms the baseline both in low-resource MT, as well as for out-of-domain MT. We further provide a qualitative analysis of the benefits and limitations of this approach, including the overall level of controllability that is achieved.",64ce6ef1f5cf227bf2bf917c87273386ae16256f,Semantic Scholar,,, +49,instructeval systematic evaluation of instruction selection methods,"['Anirudh Ajith', 'Chris Pan', 'Mengzhou Xia', 'A. Deshpande', 'Karthik Narasimhan']",https://arxiv.org/pdf/2307.00259,2023-07-01,,"In-context learning (ICL) performs tasks by prompting a large language model (LLM) using an instruction and a small set of annotated examples called demonstrations. Recent work has shown that precise details of the inputs used in the ICL prompt significantly impact performance, which has incentivized instruction selection algorithms. The effect of instruction-choice however is severely underexplored, with existing analyses restricted to shallow subsets of models and tasks, limiting the generalizability of their insights. We develop InstructEval, an ICL evaluation suite to conduct a thorough assessment of these techniques. The suite includes 13 open-sourced LLMs of varying scales from four model families, and covers nine tasks across three categories. Using the suite, we evaluate the relative performance of seven popular instruction selection methods over five metrics relevant to ICL. Our experiments reveal that using curated manually-written instructions or simple instructions without any task-specific descriptions often elicits superior ICL performance overall than that of automatic instruction-induction methods, pointing to a lack of generalizability among the latter. 
We release our evaluation suite for benchmarking instruction selection approaches and enabling more generalizable methods in this space.",6af986a2cab884fbd30ad6da2928dc19c12d83a7,Semantic Scholar,,, +50,unsupervised contrastconsistent ranking with language models,"['Niklas Stoehr', 'Pengxiang Cheng', 'Jing Wang', 'Daniel Preotiuc-Pietro', 'Rajarshi Bhowmik']",https://arxiv.org/pdf/2309.06991,2023-09-13,,"Language models contain ranking-based knowledge and are powerful solvers of in-context ranking tasks. For instance, they may have parametric knowledge about the ordering of countries by size or may be able to rank reviews by sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting techniques to elicit a language model's ranking knowledge. However, we find that even with careful calibration and constrained decoding, prompting-based techniques may not always be self-consistent in the rankings they produce. This motivates us to explore an alternative approach that is inspired by an unsupervised probing method called Contrast-Consistent Search (CCS). The idea is to train a probing model guided by a logical constraint: a model's representation of a statement and its negation must be mapped to contrastive true-false poles consistently across multiple statements. We hypothesize that similar constraints apply to ranking tasks where all items are related via consistent pairwise or listwise comparisons. To this end, we extend the binary CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression objective. Our results confirm that, for the same language model, CCR probing outperforms prompting and even performs on a par with prompting much larger language models.",70b73e272621562c6261f86d2ebf814703b760ed,Semantic Scholar,,, +51,analyzing chainofthought prompting in large language models via gradientbased feature attributions,"['Skyler Wu', 'Eric Meng Shen', 'Charumathi Badrinath', 'Jiaqi Ma', 'Himabindu Lakkaraju']",https://arxiv.org/pdf/2307.13339,2023-07-25,,"Chain-of-thought (CoT) prompting has been shown to empirically improve the accuracy of large language models (LLMs) on various question answering tasks. While understanding why CoT prompting is effective is crucial to ensuring that this phenomenon is a consequence of desired model behavior, little work has addressed this; nonetheless, such an understanding is a critical prerequisite for responsible model deployment. We address this question by leveraging gradient-based feature attribution methods which produce saliency scores that capture the influence of input tokens on model output. Specifically, we probe several open-source LLMs to investigate whether CoT prompting affects the relative importances they assign to particular input tokens. 
Our results indicate that while CoT prompting does not increase the magnitude of saliency scores attributed to semantically relevant tokens in the prompt compared to standard few-shot prompting, it increases the robustness of saliency scores to question perturbations and variations in model output.",71d68782c3da41b77866c2fd0cb65726f60b3af1,Semantic Scholar,,, +52,multimodal classifiers for openvocabulary object detection,"['Prannay Kaul', 'Weidi Xie', 'Andrew Zisserman']",http://arxiv.org/pdf/2306.05493,2023-06-08,,"The goal of this paper is open-vocabulary object detection (OVOD) $\unicode{x2013}$ building a model that can detect objects beyond the set of categories seen at training, thus enabling the user to specify categories of interest at inference without the need for model retraining. We adopt a standard two-stage object detector architecture, and explore three ways for specifying novel categories: via language descriptions, via image exemplars, or via a combination of the two. We make three contributions: first, we prompt a large language model (LLM) to generate informative language descriptions for object classes, and construct powerful text-based classifiers; second, we employ a visual aggregator on image exemplars that can ingest any number of images as input, forming vision-based classifiers; and third, we provide a simple method to fuse information from language descriptions and image exemplars, yielding a multi-modal classifier. When evaluating on the challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our text-based classifiers outperform all previous OVOD works; (ii) our vision-based classifiers perform as well as text-based classifiers in prior work; (iii) using multi-modal classifiers perform better than either modality alone; and finally, (iv) our text-based and multi-modal classifiers yield better performance than a fully-supervised detector.",73397ec77081b46f5e49a4e7486129fe2ffe7adf,Semantic Scholar,,, +53,prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages,"['Samuel Rhys Cox', 'Ashraf Abdul', 'Wei Tsang Ooi']",https://arxiv.org/pdf/2308.13479,2023-08-25,,"Large language models (LLMs) are increasingly capable and prevalent, and can be used to produce creative content. The quality of content is influenced by the prompt used, with more specific prompts that incorporate examples generally producing better results. On from this, it could be seen that using instructions written for crowdsourcing tasks (that are specific and include examples to guide workers) could prove effective LLM prompts. To explore this, we used a previous crowdsourcing pipeline that gave examples to people to help them generate a collectively diverse corpus of motivational messages. We then used this same pipeline to generate messages using GPT-4, and compared the collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages than the two baseline prompts. 
We also discuss implications from messages generated by both human writers and LLMs.",8da6e4537122af618c36563caef5863f8728d789,Semantic Scholar,,, +54,promptbased montecarlo tree search for goaloriented dialogue policy planning,"['Xiao Yu', 'Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2305.13660,2023-05-23,,"Planning for goal-oriented dialogue often requires simulating future dialogue interactions and estimating task progress. Many approaches thus consider training neural networks to perform look-ahead search algorithms such as A* search and Monte Carlo Tree Search (MCTS). However, this training often requires abundant annotated data, which creates challenges when faced with noisy annotations or low-resource settings. We introduce GDP-Zero, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training. GDP-Zero prompts a large language model to act as a policy prior, value function, user simulator, and system model during the tree search. We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that its responses are preferred over ChatGPT up to 59.32% of the time, and are rated more persuasive than ChatGPT during interactive evaluations.",9573e2025440219a1d3393664b3c80bda51ac8f4,Semantic Scholar,,, +55,studenteval a benchmark of studentwritten prompts for large language models of code,"['Hannah McLean Babe', 'S. Nguyen', 'Yangtian Zi', 'Arjun Guha', 'Molly Q. Feldman', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.04556,2023-06-07,,"Code LLMs are being rapidly deployed and there is evidence that they can make professional programmers more productive. Current benchmarks for code generation measure whether models generate correct programs given an expert prompt. In this paper, we present a new benchmark containing multiple prompts per problem, written by a specific population of non-expert prompters: beginning programmers. StudentEval contains 1,749 prompts for 48 problems, written by 80 students who have only completed one semester of Python programming. Our students wrote these prompts while working interactively with a Code LLM, and we observed very mixed success rates. We use StudentEval to evaluate 5 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. We analyze the prompts and find significant variation in students' prompting techniques. We also find that nondeterministic LLM sampling could mislead students into thinking that their prompts are more (or less) effective than they actually are, which has implications for how to teach with Code LLMs.",a4929de687f3c6937dabbf733258af635781d3c4,Semantic Scholar,,, +56,generate rather than retrieve large language models are strong context generators,"['W. Yu', 'Dan Iter', 'Shuohang Wang', 'Yichong Xu', 'Mingxuan Ju', 'Soumya Sanyal', 'Chenguang Zhu', 'Michael Zeng', 'Meng Jiang']",http://arxiv.org/pdf/2209.10063,2022-09-21,,"Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach for knowledge-intensive tasks is to employ a retrieve-then-read pipeline that first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators. 
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextutal documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a novel clustering-based prompting method that selects distinct prompts, resulting in the generated documents that cover different perspectives, leading to better recall over acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue system. Notably, GenRead achieves 71.6 and 54.4 exact match scores on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Lastly, we demonstrate the model performance can be further improved by combining retrieval and generation. Our code and generated documents can be found at https://github.com/wyu97/GenRead.",b2542a738b75ee9b7ce1a13d8b78f9095d212412,Semantic Scholar,,, +57,idas intent discovery with abstractive summarization,"['Maarten De Raedt', 'Fréderic Godin', 'Thomas Demeester', 'Chris Develder']",http://arxiv.org/pdf/2305.19783,2023-05-31,,"Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., “labels”, that retain the core elements while removing non-essential information. We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure to generate labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder, and subsequently clustered to recover the latent intents. For the unsupervised task (without any intent labels) IDAS outperforms the state-of-the-art by up to +7.42% in standard cluster metrics for the Banking, StackOverflow, and Transport datasets. For the semi-supervised task (with labels for a subset of intents) IDAS surpasses 2 recent methods on the CLINC benchmark without even using labeled data.",b9c263500281e05fddfe1f84839491f605815230,Semantic Scholar,,, +58,reward design with language models,"['Minae Kwon', 'Sang Michael Xie', 'Kalesha Bullard', 'Dorsa Sadigh']",http://arxiv.org/pdf/2303.00001,2023-02-27,,"Reward design in reinforcement learning (RL) is challenging since specifying human notions of desired behavior may be difficult via reward functions or require many expert demonstrations. Can we instead cheaply design rewards using a natural language interface? This paper explores how to simplify reward design by prompting a large language model (LLM) such as GPT-3 as a proxy reward function, where the user provides a textual prompt containing a few examples (few-shot) or a description (zero-shot) of the desired behavior. Our approach leverages this proxy reward function in an RL framework. Specifically, users specify a prompt once at the beginning of training. During training, the LLM evaluates an RL agent's behavior against the desired behavior described by the prompt and outputs a corresponding reward signal. 
The RL agent then uses this reward to update its behavior. We evaluate whether our approach can train agents aligned with user objectives in the Ultimatum Game, matrix games, and the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents trained with our framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via supervised learning",d318e0169f649656c71f02a1f84194a734fe1962,Semantic Scholar,,, +59,leveraging training data in fewshot prompting for numerical reasoning,"['Zhanming Jie', 'Wei Lu']",http://arxiv.org/pdf/2305.18170,2023-05-29,,"Chain-of-thought (CoT) prompting with large language models has proven effective in numerous natural language processing tasks, but designing prompts that generalize well to diverse problem types can be challenging, especially in the context of math word problem (MWP) solving. Additionally, it is common to have a large amount of training data that have a better diversity coverage but CoT annotations are not available, which limits the use of supervised learning techniques. To address these issues, we investigate two approaches to leverage the training data in a few-shot prompting scenario: dynamic program prompting and program distillation. Our approach is largely inspired by Gao et al., (2022), where they proposed to replace the CoT with the programs as the intermediate reasoning step. Such a prompting strategy allows us to accurately verify the answer correctness through program execution in MWP solving. Our dynamic program prompting involves annotating the training data by sampling correct programs from a large language model, while program distillation involves adapting a smaller model to the program-annotated training data. Our experiments on three standard MWP datasets demonstrate the effectiveness of these approaches, yielding significant improvements over previous baselines for prompting and fine-tuning. Our results suggest that leveraging a large amount of training data can improve the generalization ability of prompts and boost the performance of fine-tuned small models in MWP solving.",d75d11d2c89c01cd284383546ae057cb827dc272,Semantic Scholar,,, +60,spell semantic prompt evolution based on a llm,"['Yujian Betterest Li', 'Kai Wu']",https://arxiv.org/pdf/2310.01260,2023-10-02,,"Prompt engineering is a new paradigm for enhancing the performance of trained neural network models. For optimizing text-style prompts, existing methods usually individually operate small portions of a text step by step, which either breaks the fluency or could not globally adjust a prompt. Since large language models (LLMs) have powerful ability of generating coherent texts token by token, can we utilize LLMs for improving prompts? Based on this motivation, in this paper, considering a trained LLM as a text generator, we attempt to design a black-box evolution algorithm for automatically optimizing texts, namely SPELL (Semantic Prompt Evolution based on a LLM). The proposed method is evaluated with different LLMs and evolution parameters in different text tasks. Experimental results show that SPELL could rapidly improve the prompts indeed. 
We further explore the evolution process and discuss on the limitations, potential possibilities and future work.",e1dafedfbb55cd2200411841c2ec40e7ea827773,Semantic Scholar,,, +61,contrastive noveltyaugmented learning anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",https://aclanthology.org/2023.acl-long.658.pdf,2022-11-28,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on unseen classes. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel classes, then generate examples from each novel class matching the task format. Second, we train a classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CoNAL, classifiers improve in their ability to detect and abstain on novel class examples over prior methods by an average of 2.3% in terms of accuracy under the accuracy-coverage curve (AUAC) and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.",fed7e4a0e8c798777f3f1613be62a2dfb776b462,Semantic Scholar,,, +62,from prompt injections to sql injection attacks how protected is your llmintegrated web application,"['Rodrigo Pedro', 'Daniel Castro', 'Paulo Carreira', 'Nuno Santos']",https://arxiv.org/pdf/2308.01990,2023-08-03,,"Large Language Models (LLMs) have found widespread applications in various domains, including web applications, where they facilitate human interaction via chatbots with natural language interfaces. Internally, aided by an LLM-integration middleware such as Langchain, user prompts are translated into SQL queries used by the LLM to provide meaningful responses to users. However, unsanitized user prompts can lead to SQL injection attacks, potentially compromising the security of the database. Despite the growing interest in prompt injection vulnerabilities targeting LLMs, the specific risks of generating SQL injection attacks through prompt injections have not been extensively studied. In this paper, we present a comprehensive examination of prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the Langchain framework. Using Langchain as our case study, we characterize P$_2$SQL injections, exploring their variants and impact on application security through multiple concrete examples. Furthermore, we evaluate 7 state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks across language models. Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses. To counter these attacks, we propose four effective defense techniques that can be integrated as extensions to the Langchain framework. 
We validate the defenses through an experimental evaluation with a real-world use case application.",0894585294c67193ff3190240554677b56fd79a0,Semantic Scholar,,, +63,prompt injection parameterization of fixed inputs,"['Eunbi Choi', 'Yongrae Jo', 'Joel Jang', 'Minjoon Seo']",http://arxiv.org/pdf/2206.11349,2022-05-31,,"Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.",1c475acaa1060c8318a625f24bfd88c12f367516,Semantic Scholar,,, +64,incontext learning in large language models a neuroscienceinspired analysis of representations,"['Safoora Yousefi', 'Leo Betthauser', 'Hosein Hasanbeig', 'Akanksha Saran', 'Raphael Milliere', 'Ida Momennejad']",https://arxiv.org/pdf/2310.00313,2023-09-30,,"Large language models (LLMs) exhibit remarkable performance improvement through in-context learning (ICL) by leveraging task-specific examples in the input. However, the mechanisms behind this improvement remain elusive. In this work, we investigate embeddings and attention representations in Llama-2 70B and Vicuna 13B. Specifically, we study how embeddings and attention change after in-context-learning, and how these changes mediate improvement in behavior. We employ neuroscience-inspired techniques, such as representational similarity analysis (RSA), and propose novel methods for parameterized probing and attention ratio analysis (ARA, measuring the ratio of attention to relevant vs. irrelevant information). We designed three tasks with a priori relationships among their conditions: reading comprehension, linear regression, and adversarial prompt injection. We formed hypotheses about expected similarities in task representations to investigate latent changes in embeddings and attention. Our analyses revealed a meaningful correlation between changes in both embeddings and attention representations with improvements in behavioral performance after ICL. This empirical framework empowers a nuanced understanding of how latent representations affect LLM behavior with and without ICL, offering valuable tools and insights for future research and practical applications.",2427527c1a1bc61b32c28a107192c3e22ed629bb,Semantic Scholar,,, +65,safeguarding crowdsourcing surveys from chatgpt with prompt injection,"['Chaofan Wang', 'Samuel Kernan Freire', 'Mo Zhang', 'Jing Wei', 'Jorge Gonçalves', 'V. Kostakos', 'Zhanna Sarsenbayeva', 'Christina Schneegass', 'A. Bozzon', 'E. 
Niforatos']",http://arxiv.org/pdf/2306.08833,2023-06-15,,"ChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses""prompt injection"", such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 93% effectiveness. We also provide an open-source software to help survey designers use our technique to detect LLM responses. Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.",8c035150f883007b5af9e5bb753b78d9c0b75a55,Semantic Scholar,,, +66,demystifying rce vulnerabilities in llmintegrated apps,"['Tong Liu', 'Zizhuang Deng', 'Guozhu Meng', 'Yuekang Li', 'Kai Chen']",https://arxiv.org/pdf/2309.02926,2023-09-06,,"In recent years, Large Language Models (LLMs) have demonstrated remarkable potential across various downstream tasks. LLM-integrated frameworks, which serve as the essential infrastructure, have given rise to many LLM-integrated web apps. However, some of these frameworks suffer from Remote Code Execution (RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps' servers remotely via prompt injections. Despite the severity of these vulnerabilities, no existing work has been conducted for a systematic investigation of them. This leaves a great challenge on how to detect vulnerabilities in frameworks as well as LLM-integrated apps in real-world scenarios. To fill this gap, we present two novel strategies, including 1) a static analysis-based tool called LLMSmith to scan the source code of the framework to detect potential RCE vulnerabilities and 2) a prompt-based automated testing approach to verify the vulnerability in LLM-integrated web apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are confirmed by the framework developers, resulting in the assignment of 7 CVE IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17 issues to the corresponding developers and received acknowledgments. Furthermore, we amplify the attack impact beyond achieving RCE by allowing attackers to exploit other app users (e.g. app responses hijacking, user API key leakage) without direct interaction between the attacker and the victim. 
Lastly, we propose some mitigating strategies for improving the security awareness of both framework and app developers, helping them to mitigate these risks effectively.",9be0dea0d6b892a2162490fb02712efaf10c0c87,Semantic Scholar,,, +67,prompt injection attack against llmintegrated applications,"['Yi Liu', 'Gelei Deng', 'Yuekang Li', 'Kailong Wang', 'Tianwei Zhang', 'Yepang Liu', 'Haoyu Wang', 'Yanhong Zheng', 'Yang Liu']",http://arxiv.org/pdf/2306.05499,2023-06-08,,"Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.",db4cf9f6a653d5c15973e836c800ea47743251ae,Semantic Scholar,,, +68,automatic prompt rewriting for personalized text generation,"['Cheng Li', 'Mingyang Zhang', 'Qiaozhu Mei', 'Weize Kong', 'Michael Bendersky']",https://arxiv.org/pdf/2310.00152,2023-09-29,,"Facilitated by large language models (LLMs), personalized text generation has become a rapidly growing research direction. Most existing studies focus on designing specialized models for a particular domain, or they require fine-tuning the LLMs to generate personalized text. We consider a typical scenario in which the large language model, which generates personalized output, is frozen and can only be accessed through APIs. Under this constraint, all one can do is to improve the input text (i.e., text prompts) sent to the LLM, a procedure that is usually done manually. In this paper, we propose a novel method to automatically revise prompts for personalized text generation. The proposed method takes the initial prompts generated by a state-of-the-art, multistage framework for personalized generation and rewrites a few critical components that summarize and synthesize the personal context. The prompt rewriter employs a training paradigm that chains together supervised learning (SL) and reinforcement learning (RL), where SL reduces the search space of RL and RL facilitates end-to-end training of the rewriter. Using datasets from three representative domains, we demonstrate that the rewritten prompts outperform both the original prompts and the prompts optimized via supervised learning or reinforcement learning alone. 
In-depth analysis of the rewritten prompts shows that they are not only human readable, but also able to guide manual revision of prompts when there is limited resource to employ reinforcement learning to train the prompt rewriter, or when it is costly to deploy an automatic prompt rewriter for inference.",04892382200a9d48ad5f8d3cb3cd3d63a8206a01,Semantic Scholar,,, +69,rlprompt optimizing discrete text prompts with reinforcement learning,"['Mingkai Deng', 'Jianyu Wang', 'Cheng-Ping Hsieh', 'Yihan Wang', 'Han Guo', 'Tianmin Shu', 'Meng Song', 'E. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2205.12548,2022-05-25,,"Prompting has shown impressive success in enabling large pre-trained language models (LMs) to perform diverse NLP tasks, especially with only few downstream data. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning *soft* prompts (e.g., embeddings) which fall short of interpretability, reusability across LMs, and applicability when gradients are not accessible. *Discrete* prompts, on the other hand, are difficult to optimize, and are often created by “enumeration (e.g., paraphrasing)-then-selection” heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the optimized discrete prompt after training with reward. To harness the complex and stochastic reward signals from the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing fine-tuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating that LM prompting may not follow human language patterns.",07759a84f27e43cfa5bc8d579f8227c96e6ae1dc,Semantic Scholar,,, +70,temporallyextended prompts optimization for sam in interactive medical image segmentation,"['Chuyun Shen', 'Wenhao Li', 'Ya Zhang', 'Xiangfeng Wang']",https://arxiv.org/pdf/2306.08958,2023-06-15,,"The Segmentation Anything Model (SAM) has recently emerged as a foundation model for addressing image segmentation. Owing to the intrinsic complexity of medical images and the high annotation cost, the medical image segmentation (MIS) community has been encouraged to investigate SAM’s zero-shot capabilities to facilitate automatic annotation. Inspired by the extraordinary accomplishments of the interactive medical image segmentation (IMIS) paradigm, this paper focuses on assessing the potential of SAM’s zero-shot capabilities within the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we observe that SAM’s vulnerability to prompt forms (e.g., points, bounding boxes) becomes notably pronounced in IMIS. This leads us to develop a mechanism that adaptively offers suitable prompt forms for human experts. We refer to the mechanism above as temporally-extended prompts optimization (TEPO) and model it as a Markov decision process, solvable through reinforcement learning. 
Numerical experiments on the standardized benchmark Brats2020 demonstrate that the learned TEPO agent can further enhance SAM’s zero-shot capability in the MIS context.",0da5adf32fe7501a5b98eb6549b2c42af08ee094,Semantic Scholar,,, +71,topological data analysis guided segment anything model prompt optimization for zeroshot segmentation in biological imaging,"['R. Glatt', 'Shusen Liu']",http://arxiv.org/pdf/2306.17400,2023-06-30,,"Emerging foundation models in machine learning are models trained on vast amounts of data that have been shown to generalize well to new tasks. Often these models can be prompted with multi-modal inputs that range from natural language descriptions over images to point clouds. In this paper, we propose topological data analysis (TDA) guided prompt optimization for the Segment Anything Model (SAM) and show preliminary results in the biological image segmentation domain. Our approach replaces the standard grid search approach that is used in the original implementation and finds point locations based on their topological significance. Our results show that the TDA optimized point cloud is much better suited for finding small objects and massively reduces computational complexity despite the extra step in scenarios which require many segmentations.",294b4613b21abf1e9ba499de274569360093b107,Semantic Scholar,,, +72,unveiling the potential of knowledgeprompted chatgpt for enhancing drug trafficking detection on social media,"['Chuanbo Hu', 'Bing Liu', 'Xin Li', 'Yanfang Ye']",https://arxiv.org/pdf/2307.03699,2023-07-07,,"Social media platforms such as Instagram and Twitter have emerged as critical channels for drug marketing and illegal sale. Detecting and labeling online illicit drug trafficking activities becomes important in addressing this issue. However, the effectiveness of conventional supervised learning methods in detecting drug trafficking heavily relies on having access to substantial amounts of labeled data, while data annotation is time-consuming and resource-intensive. Furthermore, these models often face challenges in accurately identifying trafficking activities when drug dealers use deceptive language and euphemisms to avoid detection. To overcome this limitation, we conduct the first systematic study on leveraging large language models (LLMs), such as ChatGPT, to detect illicit drug trafficking activities on social media. We propose an analytical framework to compose \emph{knowledge-informed prompts}, which serve as the interface that humans can interact with and use LLMs to perform the detection task. Additionally, we design a Monte Carlo dropout based prompt optimization method to further to improve performance and interpretability. Our experimental findings demonstrate that the proposed framework outperforms other baseline language models in terms of drug trafficking detection accuracy, showing a remarkable improvement of nearly 12\%. By integrating prior knowledge and the proposed prompts, ChatGPT can effectively identify and label drug trafficking activities on social networks, even in the presence of deceptive language and euphemisms used by drug dealers to evade detection. 
The implications of our research extend to social networks, emphasizing the importance of incorporating prior knowledge and scenario-based prompts into analytical tools to improve online security and public safety.",2e588fe7e07948cb9112c37d5e9dcc3a13b1bd0f,Semantic Scholar,,, +73,robust prompt optimization for large language models against distribution shifts,"['Moxin Li', 'Wenjie Wang', 'Fuli Feng', 'Yixin Cao', 'Jizhi Zhang', 'Tat-seng Chua']",https://aclanthology.org/2023.emnlp-main.95.pdf,2023-05-23,,"Large Language Model (LLM) has demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, leading to research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer reviews analysis. In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires the prompt optimized over the labeled source group can simultaneously generalize to an unlabeled target group. To solve this problem, we propose Generalized Prompt Optimization framework, which incorporates the unlabeled data from the target group into prompt optimization. Extensive experimental results demonstrate the effectiveness of the proposed framework with significant performance improvement on the target group and comparable performance on the source group.",3b0c49ca5ac0f441c302c9ca4def4804253552d5,Semantic Scholar,,, +74,incontext examples selection for machine translation,"['Sweta Agrawal', 'Chunting Zhou', 'M. Lewis', 'Luke Zettlemoyer', 'Marjan Ghazvininejad']",https://arxiv.org/pdf/2212.02437,2022-12-05,,"Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model. For Machine Translation (MT), these examples are typically randomly sampled from the development dataset with a similar distribution as the evaluation set. However, it is unclear how the choice of these in-context examples and their ordering impacts the output translation quality. In this work, we aim to understand the properties of good in-context examples for MT in both in-domain and out-of-domain settings. We show that the translation quality and the domain of the in-context examples matter and that 1-shot noisy unrelated example can have a catastrophic impact on output quality. While concatenating multiple random examples reduces the effect of noise, a single good prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model. Adding similar examples based on an n-gram overlap with the test source significantly and consistently improves the translation quality of the outputs, outperforming a strong kNN-MT baseline in 2 out of 4 out-of-domain datasets.",515cf674fcdced5a7d5bb156dd5fcc1f5290e79b,Semantic Scholar,,, +75,getting more out of mixture of language model reasoning experts,"['Chenglei Si', 'Weijia Shi', 'Chen Zhao', 'Luke Zettlemoyer', 'Jordan L. 
Boyd-Graber']",https://aclanthology.org/2023.findings-emnlp.552.pdf,2023-05-24,,"While recent large language models (LLMs) improve on various question answering (QA) datasets, it remains difficult for a single model to generalize across question types that require distinct reasoning abilities. We provide empirical evidence that state-of-the-art LLMs suffer from poor generalizability on reasoning types beyond those seen in the prompt. To remedy this, we propose a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse specialized language models. We specialize the backbone language model with prompts optimized for different reasoning categories, including factual, multihop, mathematical, and commonsense reasoning. Our key insight is to leverage agreement among the specialized experts to select the best answer for each question, or to abstain from answering. This gives MoRE higher accuracy than any single specialized model on a collection of 12 QA datasets from four reasoning types. Beyond generalizability, the interpretable design of MoRE improves selective question answering results compared to baselines without incorporating inter-expert agreement. This framework is also more interpretable and useful to human consumers of QA outputs. Our human study confirms that presenting expert predictions and the answer selection process helps annotators more accurately calibrate when to trust the system's output. We release all code and data to facilitate future work.",7283d616e40d7ab7422e3697218f3fc42f292bf2,Semantic Scholar,,, 76,autohint automatic prompt optimization with hint generation,"['Hong Sun', 'Xue Li', 'Yi Xu', 'Youkow Homma', 'Qinhao Cao', 'Min-man Wu', 'Jian Jiao', 'Denis Xavier Charles']",https://arxiv.org/pdf/2307.07415,2023-07-13,,"This paper presents AutoHint, a novel framework for automatic prompt engineering and optimization for Large Language Models (LLM). While LLMs have demonstrated remarkable ability in achieving high-quality annotation in various tasks, the key to applying this ability to specific tasks lies in developing high-quality prompts. Thus we propose a framework to inherit the merits of both in-context learning and zero-shot learning by incorporating enriched instructions derived from input-output demonstrations to optimize original prompt. We refer to the enrichment as the hint and propose a framework to automatically generate the hint from labeled data. More concretely, starting from an initial prompt, our method first instructs a LLM to deduce new hints for selected samples from incorrect predictions, and then summarizes from per-sample hints and adds the results back to the initial prompt to form a new, enriched instruction. The proposed method is evaluated on the BIG-Bench Instruction Induction dataset for both zero-shot and few-short prompts, where experiments demonstrate our method is able to significantly boost accuracy for multiple tasks.",838e1317454724a9bb758d05d97e6058e11a8251,Semantic Scholar,,, -77,readonly prompt optimization for visionlanguage fewshot learning,"['Dongjun Lee', 'Seokwon Song', 'Jihee G. Suh', 'Joonmyeong Choi', 'S. Lee', 'Hyunwoo J.Kim']",https://arxiv.org/pdf/2308.14960,2023-08-29,,"In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pre-trained weights frozen. 
However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization on extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at https://github.com/mlvlab/RPO.",b0b237dd905f12b23e3fc48ac7139e275158a007,Semantic Scholar,,, +77,readonly prompt optimization for visionlanguage fewshot learning,"['Dongjun Lee', 'Seokwon Song', 'Jihee G. Suh', 'Joonmyeong Choi', 'S. Lee', 'Hyunwoo J.Kim']",https://arxiv.org/pdf/2308.14960,2023-08-29,,"In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pretrained weights frozen. However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization on extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at https://github.com/mlvlab/RPO.",b0b237dd905f12b23e3fc48ac7139e275158a007,Semantic Scholar,,, 78,"optimizing mobileedge aigenerated everything (aigx) services by prompt engineering fundamental, framework, and case study","['Yinqiu Liu', 'Hongyang Du', 'D. Niyato', 'Jiawen Kang', 'Shuguang Cui', 'Xuemin Shen', 'Ping Zhang']",https://arxiv.org/pdf/2309.01065,2023-09-03,,"As the next-generation paradigm for content creation, AI-Generated Content (AIGC), i.e., generating content automatically by Generative AI (GAI) based on user prompts, has gained great attention and success recently. With the ever-increasing power of GAI, especially the emergence of Pretrained Foundation Models (PFMs) that contain billions of parameters and prompt engineering methods (i.e., finding the best prompts for the given task), the application range of AIGC is rapidly expanding, covering various forms of information for human, systems, and networks, such as network designs, channel coding, and optimization solutions. In this article, we present the concept of mobile-edge AI-Generated Everything (AIGX). Specifically, we first review the building blocks of AIGX, the evolution from AIGC to AIGX, as well as practical AIGX applications. 
Then, we present a unified mobile-edge AIGX framework, which employs edge devices to provide PFM-empowered AIGX services and optimizes such services via prompt engineering. More importantly, we demonstrate that suboptimal prompts lead to poor generation quality, which adversely affects user satisfaction, edge network performance, and resource utilization. Accordingly, we conduct a case study, showcasing how to train an effective prompt optimizer using ChatGPT and investigating how much improvement is possible with prompt engineering in terms of user experience, quality of generation, and network performance.",b349f3dd5b764168cba57bb4ad3fc240c2b3eddf,Semantic Scholar,,, 79,automatic prompt optimization with gradient descent and beam search,"['Reid Pryzant', 'Dan Iter', 'Jerry Li', 'Y. Lee', 'Chenguang Zhu', 'Michael Zeng']",http://arxiv.org/pdf/2305.03495,2023-05-04,,"Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts which are hand written with onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language""gradients""that criticize the current prompt. The gradients are then""propagated""into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure which significantly improves algorithmic efficiency. Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.",c76dd4a70361c3afd2e19d046343e2dedd16ecc3,Semantic Scholar,,, -80,discrete prompt optimization via constrained generation for zeroshot reranker,"['Sukmin Cho', 'Soyeong Jeong', 'J. Seo', 'Jong C. Park']",http://arxiv.org/pdf/2305.13729,2023-05-23,,"Re-rankers, which order retrieved documents with respect to the relevance score on the given query, have gained attention for the information retrieval (IR) task. Rather than fine-tuning the pre-trained language model (PLM), the large-scale language model (LLM) is utilized as a zero-shot re-ranker with excellent results. While LLM is highly dependent on the prompts, the impact and the optimization of the prompts for the zero-shot re-ranker are not explored yet. Along with highlighting the impact of optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), with the metric estimating the optimum for re-ranking. Co-Prompt guides the generated texts from PLM toward optimal prompts based on the metric without parameter update. The experimental results demonstrate that Co-Prompt leads to outstanding re-ranking performance against the baselines. 
Also, Co-Prompt generates more interpretable prompts for humans against other prompt optimization methods.",d61f0820943a667917fb6d32225826aa5279f694,Semantic Scholar,,, -81,emotionconditioned text generation through automatic prompt optimization,"['Yarik Menchaca Resendiz', 'Roman Klinger']",https://arxiv.org/pdf/2308.04857,2023-08-09,,"Conditional natural language generation methods often require either expensive fine-tuning or training a large language model from scratch. Both are unlikely to lead to good results without a substantial amount of data and computational resources. Prompt learning without changing the parameters of a large language model presents a promising alternative. It is a cost-effective approach, while still achieving competitive results. While this procedure is now established for zero- and few-shot text classification and structured prediction, it has received limited attention in conditional text generation. We present the first automatic prompt optimization approach for emotion-conditioned text generation with instruction-fine-tuned models. Our method uses an iterative optimization procedure that changes the prompt by adding, removing, or replacing tokens. As objective function, we only require a text classifier that measures the realization of the conditional variable in the generated text. We evaluate the method on emotion-conditioned text generation with a focus on event reports and compare it to manually designed prompts that also act as the seed for the optimization procedure. The optimized prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in contrast to manually designed seed prompts with only 0.22 macro-average F1.",ef5cd0eb266e3df3eb64aec18e1854fe0244d228,Semantic Scholar,,, -82,large language models as optimizers,"['Chengrun Yang', 'Xuezhi Wang', 'Yifeng Lu', 'Hanxiao Liu', 'Quoc V. Le', 'Denny Zhou', 'Xinyun Chen']",https://arxiv.org/pdf/2309.03409,2023-09-07,,"Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.",f8a2dca1e8fe56e698984c077f7ff58d8ca867e9,Semantic Scholar,,, -83,dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning,"['Chengzhengxu Li', 'Xiaoming Liu', 'Yichen Wang', 'Duyi Li', 'Y. Lan', 'Chao Shen']",https://arxiv.org/pdf/2308.07272,2023-08-14,,"Prompt-based pre-trained language models (PLMs) paradigm have succeeded substantially in few-shot natural language processing (NLP) tasks. 
However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective. Meanwhile, existing continuous prompt optimization methods improve the performance by learning the ideal prompts through the gradient information of PLMs, whose high computational cost, and low readability and generalizability are often concerning. To address the research gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization ($DP_2O$) method. We first design a multi-round dialogue alignment strategy for readability prompt set generation based on GPT-4. Furthermore, we propose an efficient prompt screening metric to identify high-quality prompts with linear complexity. Finally, we construct a reinforcement learning (RL) framework based on policy gradients to match the prompts to inputs optimally. By training a policy network with only 0.67% of the PLM parameter size on the tasks in the few-shot setting, $DP_2O$ outperforms the state-of-the-art (SOTA) method by 1.52% in accuracy on average on four open-source datasets. Moreover, subsequent experiments also demonstrate that $DP_2O$ has good universality, robustness, and generalization ability.",ff96527c03fbea7c3bb7d44d1d656d875ddba75e,Semantic Scholar,,, -84,evaluation of chatgpt family of models for biomedical reasoning and classification,"['Shan Chen', 'Yingya Li', 'Sheng Lu', 'Hoang Van', 'H. Aerts', 'G. Savova', 'D. Bitterman']",http://arxiv.org/pdf/2304.02496,2023-04-05,,"Recent advances in large language models (LLMs) have shown impressive ability in biomedical question-answering, but have not been adequately investigated for more specific biomedical applications. This study investigates the performance of LLMs such as the ChatGPT family of models (GPT-3.5s, GPT-4) in biomedical tasks beyond question-answering. Because no patient data can be passed to the OpenAI API public interface, we evaluated model performance with over 10000 samples as proxies for two fundamental tasks in the clinical domain - classification and reasoning. The first task is classifying whether statements of clinical and policy recommendations in scientific literature constitute health advice. The second task is causal relation detection from the biomedical literature. We compared LLMs with simpler models, such as bag-of-words (BoW) with logistic regression, and fine-tuned BioBERT models. Despite the excitement around viral ChatGPT, we found that fine-tuning for two fundamental NLP tasks remained the best strategy. The simple BoW model performed on par with the most complex LLM prompting. Prompt engineering required significant investment.",020e473d8c987dcfb03fcfffeb87b17812447031,Semantic Scholar,,, -85,textguided synthesis of artistic images with retrievalaugmented diffusion models,"['Robin Rombach', 'A. Blattmann', 'B. Ommer']",http://arxiv.org/pdf/2207.13038,2022-07-26,,"Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Of particular note is the field of ``AI-Art'', which has seen unprecedented growth with the emergence of powerful multimodal models such as CLIP. By combining speech and image synthesis models, so-called ``prompt-engineering'' has become established, in which carefully selected and composed sentences are used to achieve a certain visual style in the synthesized image. 
In this note, we present an alternative approach based on retrieval-augmented diffusion models (RDMs). In RDMs, a set of nearest neighbors is retrieved from an external database during training for each training instance, and the diffusion model is conditioned on these informative samples. During inference (sampling), we replace the retrieval database with a more specialized database that contains, for example, only images of a particular visual style. This provides a novel way to prompt a general trained model after training and thereby specify a particular visual style. As shown by our experiments, this approach is superior to specifying the visual style within the text prompt. We open-source code and model weights at https://github.com/CompVis/latent-diffusion .",0270ec4bc946b59c5cf6204be2553682dee0346c,Semantic Scholar,,, -86,interactive and visual prompt engineering for adhoc task adaptation with large language models,"['Hendrik Strobelt', 'Albert Webson', 'Victor Sanh', 'Benjamin Hoover', 'J. Beyer', 'H. Pfister', 'Alexander M. Rush']",https://arxiv.org/pdf/2208.07852,2022-08-16,,"State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",0392d58335ce674a70f5e58ac8c438de296a0e6a,Semantic Scholar,,, -87,"artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering","['Sue Lim', 'Ralf Schmälzle']",http://arxiv.org/pdf/2212.07507,2022-12-14,,"This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease",040ec58865ab50b5e6d91a355ffc146ec5034e9f,Semantic Scholar,,, -88,how does prompt engineering affect chatgpt performance on unsupervised entity resolution,"['Khanin Sisaengsuwanchai', 'Navapat Nananukul', 'M. Kejriwal']",https://arxiv.org/pdf/2310.06174,2023-10-09,,"Entity Resolution (ER) is the problem of semi-automatically determining when two entities refer to the same underlying entity, with applications ranging from healthcare to e-commerce. 
Traditional ER solutions required considerable manual expertise, including feature engineering, as well as identification and curation of training data. In many instances, such techniques are highly dependent on the domain. With recent advent in large language models (LLMs), there is an opportunity to make ER much more seamless and domain-independent. However, it is also well known that LLMs can pose risks, and that the quality of their outputs can depend on so-called prompt engineering. Unfortunately, a systematic experimental study on the effects of different prompting methods for addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper aims to address this gap by conducting such a study. Although preliminary in nature, our results show that prompting can significantly affect the quality of ER, although it affects some metrics more than others, and can also be dataset dependent.",06ab0710c8a7315e70c15c0d7eb1aa50210d945c,Semantic Scholar,,, -89,a systematic survey of prompt engineering on visionlanguage foundation models,"['Jindong Gu', 'Zhen Han', 'Shuo Chen', 'Ahmad Beirami', 'Bailan He', 'Gengyuan Zhang', 'Ruotong Liao', 'Yao Qin', 'Volker Tresp', 'Philip H. S. Torr']",https://arxiv.org/pdf/2307.12980,,,"—Prompt engineering is a technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering enables the ability to perform predictions based solely on prompts without updating model parameters, and the easier application of large pre-trained models in real-world tasks. In past years, Prompt engineering has been well-studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently a lack of a systematic overview of prompt engineering on pre-trained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models ( e.g., Flamingo), image-text matching models ( e.g., CLIP), and text-to-image generation models ( e.g., Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.",06d8562831c32844285a691c5250d04726df3c61,Semantic Scholar,,, -90,unveiling the potential of large language models in generating semantic and crosslanguage clones,"['Palash R. Roy', 'A. Alam', 'Farouq Al-Omari', 'B. Roy', 'C. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2309.06424,2023-09-12,,"Semantic and Cross-language code clone generation may be useful for code reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has potential in such clone generation as GPT is used for text generation. When developers copy/paste codes from Stack Overflow (SO) or within a system, there might be inconsistent changes leading to unexpected behaviours. 
Similarly, if someone possesses a code snippet in a particular programming language but seeks equivalent functionality in a different language, a semantic cross-language code clone generation approach could provide valuable assistance.In this study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3 model could help generate semantic and cross-language clone variants for a given fragment.We have comprised a diverse set of code fragments and assessed GPT-3s performance in generating code variants.Through extensive experimentation and analysis, where 9 judges spent 158 hours to validate, we investigate the model's ability to produce accurate and semantically correct variants. Our findings shed light on GPT-3's strengths in code generation, offering insights into the potential applications and challenges of using advanced language models in software development. Our quantitative analysis yields compelling results. In the realm of semantic clones, GPT-3 attains an impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot prompt engineering. Furthermore, the model shines in transcending linguistic confines, boasting an exceptional 91.25% accuracy in generating cross-language clones",073972fa0de48db1304509041e877e568c94e7de,Semantic Scholar,,, -91,rtllm an opensource benchmark for design rtl generation with large language model,"['Yao Lu', 'Shang Liu', 'Qijun Zhang', 'Zhiyao Xie']",https://arxiv.org/pdf/2308.05345,2023-08-10,,"Inspired by the recent success of large language models (LLMs) like ChatGPT, researchers start to explore the adoption of LLMs for agile hardware design, such as generating design RTL based on natural-language instructions. However, in existing works, their target designs are all relatively simple and in a small scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works only focus on the design correctness, without evaluating the design qualities of generated design RTL. In this work, we propose an open-source benchmark named RTLLM, for generating design RTL with natural language instructions. To systematically evaluate the auto-generated design RTL, we summarized three progressive goals, named syntax goal, functionality goal, and design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which proves to significantly boost the performance of GPT-3.5 in our proposed benchmark.",079be8c8a93fc80274ff22251a3dac9804bec66a,Semantic Scholar,,, -92,userfriendly image editing with minimal text input leveraging captioning and injection techniques,"['Sunwoo Kim', 'Wooseok Jang', 'Hyunsung Kim', 'Junho Kim', 'Yunjey Choi', 'Seung Wook Kim', 'Gayeong Lee']",http://arxiv.org/pdf/2306.02717,2023-06-05,,"Recent text-driven image editing in diffusion models has shown remarkable success. However, the existing methods assume that the user's description sufficiently grounds the contexts in the source image, such as objects, background, style, and their relations. This assumption is unsuitable for real-world applications because users have to manually engineer text prompts to find optimal descriptions for different images. From the users' standpoint, prompt engineering is a labor-intensive process, and users prefer to provide a target word for editing instead of a full sentence. 
To address this problem, we first demonstrate the importance of a detailed text description of the source image, by dividing prompts into three categories based on the level of semantic details. Then, we propose simple yet effective methods by combining prompt generation frameworks, thereby making the prompt engineering process more user-friendly. Extensive qualitative and quantitative experiments demonstrate the importance of prompts in text-driven image editing and our method is comparable to ground-truth prompts.",0809c278fcdec2ce297da3a9d6e031fc192263f6,Semantic Scholar,,, -93,a prompt pattern catalog to enhance prompt engineering with chatgpt,"['Jules White', 'Quchen Fu', 'Sam Hays', 'M. Sandborn', 'Carlos Olea', 'Henry Gilbert', 'Ashraf Elnashar', 'Jesse Spencer-Smith', 'D. Schmidt']",http://arxiv.org/pdf/2302.11382,2023-02-21,,"Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.",08b85bce712168998004ee80ce4e475390413c74,Semantic Scholar,,, -94,design guidelines for prompt engineering texttoimage generative models,"['Vivian Liu', 'Lydia B. Chilton']",https://arxiv.org/pdf/2109.06977,2021-09-14,,"Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. 
From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.",0968f1592f9401d72bf0d97e740496818c1a3135,Semantic Scholar,,, -95,on codex prompt engineering for ocl generation an empirical study,"['Seif Abukhalaf', 'Mohammad Hamdaqa', 'Foutse Khomh']",https://arxiv.org/pdf/2303.16244,2023-03-29,,"The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to Meta-Object Facility (MOF) models. OCL can provide precision and conciseness to UML models. Nevertheless, the unfamiliar syntax of OCL has hindered its adoption by software practitioners. LLMs, such as GPT-3, have made significant progress in many NLP tasks, such as text generation and semantic parsing. Similarly, researchers have improved on the downstream tasks by fine-tuning LLMs for the target task. Codex, a GPT-3 descendant by OpenAI, has been fine-tuned on publicly available code from GitHub and has proven the ability to generate code in many programming languages, powering the AI-pair programmer Copilot. One way to take advantage of Codex is to engineer prompts for the target downstream task. In this paper, we investigate the reliability of the OCL constraints generated by Codex from natural language specifications. To achieve this, we compiled a dataset of 15 UML models and 168 specifications from various educational resources. We manually crafted a prompt template with slots to populate with the UML information and the target task in the prefix format to complete the template with the generated OCL constraint. We used both zero- and few-shot learning methods in the experiments. The evaluation is reported by measuring the syntactic validity and the execution accuracy metrics of the generated OCL constraints. Moreover, to get insight into how close or natural the generated OCL constraints are compared to human-written ones, we measured the cosine similarity between the sentence embedding of the correctly generated and human-written OCL constraints. Our findings suggest that by enriching the prompts with the UML information of the models and enabling few-shot learning, the reliability of the generated OCL constraints increases. Furthermore, the results reveal a close similarity based on sentence embedding between the generated OCL constraints and the human-written ones in the ground truth, implying a level of clarity and understandability in the generated OCL constraints by Codex.",0a0d6a98bd246a82aaaa9d33ec0eadf4ceae69dc,Semantic Scholar,,, -96,visorgpt learning visual prior via generative pretraining,"['Jinheng Xie', 'Kai Ye', 'Yudong Li', 'Yuexiang Li', 'Kevin Lin', 'Yefeng Zheng', 'Linlin Shen', 'Mike Zheng Shou']",http://arxiv.org/pdf/2305.13777,2023-05-23,,"Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as the visual prior, e.g., object location and shape, in the model. Such prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions failing to adhere to the prior can result in visually inaccurate synthetic results. This work aims to explicitly learn the visual prior and enable the customization of sampling. Inspired by advances in language modeling, we propose to learn Visual prior via Generative Pre-Training, dubbed VisorGPT. 
By discretizing visual locations of objects, e.g., bounding boxes, human pose, and instance masks, into sequences, VisorGPT can model visual prior through likelihood maximization. Besides, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate that VisorGPT can effectively model the visual prior, which can be employed for many vision tasks, such as customizing accurate human pose for conditional image synthesis models like ControlNet. Code will be released at https://github.com/Sierkinhane/VisorGPT.",0a61802b71aa044cf1fe0e81befec148e0d5001b,Semantic Scholar,,, -97,chatgpt for robotics design principles and model abilities,"['Sai Vemprala', 'Rogerio Bonatti', 'A. Bucker', 'Ashish Kapoor']",https://arxiv.org/pdf/2306.17582,2023-02-20,,"This paper presents an experimental study regarding the use of OpenAI's ChatGPT for robotics applications. We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library which allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks. We explore ChatGPT's ability to use free-form dialog, parse XML tags, and to synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents. We show that ChatGPT can be effective at solving several of such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics.",0ba581718f294db1d7b3dbc159cc3d3380f74606,Semantic Scholar,,, -98,a chat about boring problems studying gptbased text normalization,"['Yang Zhang', 'Travis M. Bartley', 'Mariana Graterol-Fuenmayor', 'Vitaly Lavrukhin', 'Evelina Bakhturina', 'Boris Ginsburg']",https://arxiv.org/pdf/2309.13426,2023-09-23,,"Text normalization - the conversion of text from written to spoken form - is traditionally assumed to be an ill-formed task for language models. In this work, we argue otherwise. We empirically show the capacity of Large-Language Models (LLM) for text normalization in few-shot scenarios. Combining self-consistency reasoning with linguistic-informed prompt engineering, we find LLM based text normalization to achieve error rates around 40\% lower than top normalization systems. Further, upon error analysis, we note key limitations in the conventional design of text normalization tasks. We create a new taxonomy of text normalization errors and apply it to results from GPT-3.5-Turbo and GPT-4.0. 
Through this new framework, we can identify strengths and weaknesses of GPT-based TN, opening opportunities for future work.",0c8446eedfe083e0ee32f5c4f793e5435904014a,Semantic Scholar,,, -99,robust preference learning for storytelling via contrastive reinforcement learning,"['Louis Castricato', 'Alex Havrilla', 'Shahbuland Matiana', 'M. Pieler', 'Anbang Ye', 'Ian Yang', 'Spencer Frazier', 'Mark O. Riedl']",http://arxiv.org/pdf/2210.07792,2022-10-14,,"Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering which is labor intensive and often inconsistent. They may also use logit-manipulation methods which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over a LLM 20x as large as well as logit-based methods. This motivates the use of contrastive learning for general purpose human preference modeling.",0e1ae0bdcc8469db99a4f8008288e20f285f1c6d,Semantic Scholar,,, -100,towards equitable representation in texttoimage synthesis models with the crosscultural understanding benchmark (ccub) dataset,"['Zhixuan Liu', 'Y. Shin', 'Beverley-Claire Okogwu', 'Youngsik Yun', 'Lia Coleman', 'Peter Schaldenbrand', 'Jihie Kim', 'Jean Oh']",http://arxiv.org/pdf/2301.12073,2023-01-28,,"It has been shown that accurate representation in media improves the well-being of the people who consume it. By contrast, inaccurate representations can negatively affect viewers and lead to harmful perceptions of other cultures. To achieve inclusive representation in generated images, we propose a culturally-aware priming approach for text-to-image synthesis using a small but culturally curated dataset that we collected, known here as Cross-Cultural Understanding Benchmark (CCUB) Dataset, to fight the bias prevalent in giant datasets. Our proposed approach is comprised of two fine-tuning techniques: (1) Adding visual context via fine-tuning a pre-trained text-to-image synthesis model, Stable Diffusion, on the CCUB text-image pairs, and (2) Adding semantic context via automated prompt engineering using the fine-tuned large language model, GPT-3, trained on our CCUB culturally-aware text data. CCUB dataset is curated and our approach is evaluated by people who have a personal relationship with that particular culture. 
Our experiments indicate that priming using both text and image is effective in improving the cultural relevance and decreasing the offensiveness of generated images while maintaining quality.",0e8e3d2a2f4413808c7aff7bee6e8e11ec2700d7,Semantic Scholar,,, -101,beyond factuality a comprehensive evaluation of large language models as knowledge generators,"['Liang Chen', 'Yang Deng', 'Yatao Bian', 'Zeyu Qin', 'Bingzhe Wu', 'Tat-Seng Chua', 'Kam-Fai Wong']",https://arxiv.org/pdf/2310.07289,2023-10-11,,"Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when being prompted to generate world knowledge. However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge. In light of this, we introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to systematically and automatically evaluate generated knowledge from six important perspectives -- Factuality, Relevance, Coherence, Informativeness, Helpfulness and Validity. We conduct an extensive empirical analysis of the generated knowledge from three different types of LLMs on two widely studied knowledge-intensive tasks, i.e., open-domain question answering and knowledge-grounded dialogue. Surprisingly, our study reveals that the factuality of generated knowledge, even if lower, does not significantly hinder downstream tasks. Instead, the relevance and coherence of the outputs are more important than small factual mistakes. Further, we show how to use CONNER to improve knowledge-intensive tasks by designing two strategies: Prompt Engineering and Knowledge Selection. Our evaluation code and LLM-generated knowledge with human annotations will be released to facilitate future research.",0f6fe87afd1a3571f77c790893b03717e5d0422a,Semantic Scholar,,, -102,chatgpt4pcg competition characterlike level generation for science birds,"['Pittawat Taveekitworachai', 'Febri Abdullah', 'Mury F. Dewantoro', 'R. Thawonmas', 'J. Togelius', 'Jochen Renz']",http://arxiv.org/pdf/2303.15662,2023-03-28,,"This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE Conference on Games. The objective of this competition is for participants to create effective prompts for ChatGPT--enabling it to generate Science Birds levels with high stability and character-like qualities--fully using their creativity as well as prompt engineering skills. ChatGPT is a conversational agent developed by OpenAI. Science Birds is selected as the competition platform because designing an Angry Birds-like level is not a trivial task due to the in-game gravity; the quality of the levels is determined by their stability. To lower the entry barrier to the competition, we limit the task to the generation of capitalized English alphabetical characters. We also allow only a single prompt to be used for generating all the characters. Here, the quality of the generated levels is determined by their stability and similarity to the given characters. A sample prompt is provided to participants for their reference. An experiment is conducted to determine the effectiveness of several modified versions of this sample prompt on level stability and similarity by testing them on several characters. 
To the best of our knowledge, we believe that ChatGPT4PCG is the first competition of its kind and hope to inspire enthusiasm for prompt engineering in procedural content generation.",0fb8f3f86476e9ab8fa4679620acb7d525b222a8,Semantic Scholar,,, -103,contrastner contrastivebased prompt tuning for fewshot ner,"['Amirhossein Layegh', 'A. H. Payberah', 'A. Soylu', 'D. Roman', 'M. Matskin']",https://arxiv.org/pdf/2305.17951,2023-05-29,,"Prompt-based language models have produced encouraging results in numerous applications, including Named Entity Recognition (NER) tasks. NER aims to identify entities in a sentence and provide their types. However, the strong performance of most available NER approaches is heavily dependent on the design of discrete prompts and a verbalizer to map the model-predicted outputs to entity categories, which are complicated undertakings. To address these challenges, we present ContrastNER, a prompt-based NER framework that employs both discrete and continuous tokens in prompts and uses a contrastive learning approach to learn the continuous prompts and forecast entity types. The experimental results demonstrate that ContrastNER obtains competitive performance to the state-of-the-art NER methods in high-resource settings and outperforms the state-of-the-art models in low-resource circumstances without requiring extensive manual prompt engineering and verbalizer design.",1059b79598d6e08121503093f45d50fa963d2843,Semantic Scholar,,, -104,prompting the hidden talent of webscale speech models for zeroshot task generalization,"['Puyuan Peng', 'Brian Yan', 'Shinji Watanabe', 'David F. Harwath']",https://arxiv.org/pdf/2305.11095,2023-05-18,,"We investigate the emergent abilities of the recently proposed web-scale speech model Whisper, by adapting it to unseen tasks with prompt engineering. We selected three tasks: audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR), and speech translation (ST) on unseen language pairs. We design task-specific prompts, by either leveraging another large-scale model, or simply manipulating the special tokens in the default prompts. Experiments show that compared to the default prompts, our proposed prompts improve performance by 10% to 45% on the three zero-shot tasks, and even outperform SotA supervised models on some datasets. In addition, our experiments reveal many interesting properties of Whisper, including its robustness to prompts, bias on accents, and the multilingual understanding in its latent space. Code is available at https://github.com/jasonppy/PromptingWhisper",10e8dc07ea256c6a88d7043cf135417402ed38f4,Semantic Scholar,,, -105,aicopilot for business optimisation a framework and a case study in production scheduling,"['Pivithuru Thejan Amarasinghe', 'Su Nguyen', 'Yuan Sun', 'D. Alahakoon']",https://arxiv.org/pdf/2309.13218,2023-09-22,,"Business optimisation is the process of finding and implementing efficient and cost-effective means of operation to bring a competitive advantage for businesses. Synthesizing problem formulations is an integral part of business optimisation which is centred around human expertise, thus with a high potential of becoming a bottleneck. With the recent advancements in Large Language Models (LLMs), human expertise needed in problem formulation can potentially be minimized using Artificial Intelligence (AI). 
However, developing a LLM for problem formulation is challenging, due to training data requirements, token limitations, and the lack of appropriate performance metrics in LLMs. To minimize the requirement of large training data, considerable attention has recently been directed towards fine-tuning pre-trained LLMs for downstream tasks, rather than training a LLM from scratch for a specific task. In this paper, we adopt this approach and propose an AI-Copilot for business optimisation by fine-tuning a pre-trained LLM for problem formulation. To address token limitations, we introduce modularization and prompt engineering techniques to synthesize complex problem formulations as modules that fit into the token limits of LLMs. In addition, we design performance evaluation metrics that are more suitable for assessing the accuracy and quality of problem formulations compared to existing evaluation metrics. Experiment results demonstrate that our AI-Copilot can synthesize complex and large problem formulations for a typical business optimisation problem in production scheduling.",13fafa40eb7b15813cdf6c2ead1e1032e7b085f0,Semantic Scholar,,, -106,coaudit tools to help humans doublecheck aigenerated content,"['Andrew D. Gordon', 'C. Negreanu', 'J. Cambronero', 'Rasika Chakravarthy', 'Ian Drosos', 'Hao Fang', 'Bhaskar Mitra', 'Hannah Richardson', 'Advait Sarkar', 'Stephanie Simmons', 'Jack Williams', 'Ben Zorn']",https://arxiv.org/pdf/2310.01297,2023-10-02,,"Users are increasingly being warned to check AI-generated content for correctness. Still, as LLMs (and other generative models) generate more complex output, such as summaries, tables, or code, it becomes harder for the user to audit or evaluate the output for quality or correctness. Hence, we are seeing the emergence of tool-assisted experiences to help the user double-check a piece of AI-generated content. We refer to these as co-audit tools. Co-audit tools complement prompt engineering techniques: one helps the user construct the input prompt, while the other helps them check the output response. As a specific example, this paper describes recent research on co-audit tools for spreadsheet computations powered by generative models. We explain why co-audit experiences are essential for any application of generative AI where quality is important and errors are consequential (as is common in spreadsheet computations). We propose a preliminary list of principles for co-audit, and outline research challenges.",14dcafae548d578f6b8c683d0972531bc46423ca,Semantic Scholar,,, -107,chatgpt as a mapping assistant a novel method to enrich maps with generative ai and content derived from streetlevel photographs,"[""Levente Juh'asz"", 'P. Mooney', 'H. Hochmair', 'Boyuan Guan']",https://arxiv.org/pdf/2306.03204,2023-06-05,,"This paper explores the concept of leveraging generative AI as a mapping assistant for enhancing the efficiency of collaborative mapping. We present results of an experiment that combines multiple sources of volunteered geographic information (VGI) and large language models (LLMs). Three analysts described the content of crowdsourced Mapillary street-level photographs taken along roads in a small test area in Miami, Florida. GPT-3.5-turbo was instructed to suggest the most appropriate tagging for each road in OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a state-of-the-art multimodal pre-training method as an artificial analyst of street-level photographs in addition to human analysts. 
Results demonstrate two ways to effectively increase the accuracy of mapping suggestions without modifying the underlying AI models: by (1) providing a more detailed description of source photographs, and (2) combining prompt engineering with additional context (e.g. location and objects detected along a road). The first approach increases the suggestion accuracy by up to 29%, and the second one by up to 20%.",16877baf3874038233279e07e330f891455fd880,Semantic Scholar,,, -108,using large language models to generate engaging captions for data visualizations,"['A. Liew', 'Klaus Mueller']",http://arxiv.org/pdf/2212.14047,2022-12-27,,"Creating compelling captions for data visualizations has been a long- standing challenge. Visualization researchers are typically untrained in journalistic reporting and hence the captions that are placed be- low data visualizations tend to be not overly engaging and rather just stick to basic observations about the data. In this work we explore the opportunities offered by the newly emerging crop of large language models (LLM) which use sophisticated deep learning technology to produce human-like prose. We ask, can these power-ful software devices be purposed to produce engaging captions for generic data visualizations like a scatterplot. It turns out that the key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering . We report on first experiments using the popular LLM GPT-3 and deliver some promising results.",1696e03a35f1bcc724ed9bfe69bb028b789415e8,Semantic Scholar,,, -109,an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems,"['Andreas Metzger', 'Jon Bartel', 'Jan Laufer']",https://arxiv.org/pdf/2309.14391,2023-09-25,,"Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers to comply with relevant legal frameworks, and facilitate service users to build trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. 
We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar.",16acd2d2faa236dfe5f6ab67a0b94a9ed1b1de57,Semantic Scholar,,, -110,"chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations","['Chunkit Chan', 'Cheng Jiayang', 'Weiqi Wang', 'Yuxin Jiang', 'Tianqing Fang', 'Xin Liu', 'Yangqiu Song']",http://arxiv.org/pdf/2304.14827,2023-04-28,,"This paper aims to quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal relations, causal relations, and discourse relations. Given ChatGPT's promising performance across various tasks, we conduct extensive evaluations on the whole test sets of 13 datasets, including temporal and causal relations, PDTB2.0-based and dialogue-based discourse relations, and downstream applications on discourse understanding. To achieve reliable results, we adopt three tailored prompt templates for each task, including the zero-shot prompt template, zero-shot prompt engineering (PE) template, and in-context learning (ICL) prompt template, to establish the initial baseline scores for all popular sentence-pair relation classification tasks for the first time. We find that ChatGPT exhibits strong performance in detecting and reasoning about causal relations, while it may not be proficient in identifying the temporal order between two events. It can recognize most discourse relations with existing explicit discourse connectives, but the implicit discourse relation still remains a challenging task. Meanwhile, ChatGPT performs poorly in the dialogue discourse parsing task that requires structural understanding in a dialogue before being aware of the discourse relation.",186e96fe036927182ec963b63f9dd7f8ff650158,Semantic Scholar,,, -111,prompting ai art an investigation into the creative skill of prompt engineering,"['J. Oppenlaender', 'Rhema Linder', 'Johanna M. Silvennoinen']",http://arxiv.org/pdf/2303.13534,2023-03-13,,"Humankind is entering a novel era of creativity - an era in which anybody can synthesize digital content. The paradigm under which this revolution takes place is prompt-based learning (or in-context learning). This paradigm has found fruitful application in text-to-image generation where it is being used to synthesize digital images from zero-shot text prompts in natural language for the purpose of creating AI art. This activity is referred to as prompt engineering - the practice of iteratively crafting prompts to generate and improve images. In this paper, we investigate prompt engineering as a novel creative skill for creating prompt-based art. In three studies with participants recruited from a crowdsourcing platform, we explore whether untrained participants could 1) recognize the quality of prompts, 2) write prompts, and 3) improve their prompts. Our results indicate that participants could assess the quality of prompts and respective images. This ability increased with the participants' experience and interest in art. Participants further were able to write prompts in rich descriptive language. However, even though participants were specifically instructed to generate artworks, participants' prompts were missing the specific vocabulary needed to apply a certain style to the generated images. Our results suggest that prompt engineering is a learned skill that requires expertise and practice. 
Based on our findings and experience with running our studies with participants recruited from a crowdsourcing platform, we provide ten recommendations for conducting experimental research on text-to-image generation and prompt engineering with a paid crowd. Our studies offer a deeper understanding of prompt engineering thereby opening up avenues for research on the future of prompt engineering. We conclude by speculating on four possible futures of prompt engineering.",1bc9974780230573bfe9f89789115cb4fbf8bfc6,Semantic Scholar,,, -112,solving and generating npr sunday puzzles with large language models,"['Jin Zhao', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.12255,2023-06-21,,"We explore the ability of large language models to solve and generate puzzles from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15 years of on-air puzzles. We evaluate four large language models using PUZZLEQA, in both multiple choice and free response formats, and explore two prompt engineering techniques to improve free response performance: chain-of-thought reasoning and prompt summarization. We find that state-of-the-art large language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5, achieves 50.2% loose accuracy. However, in our few-shot puzzle generation experiment, we find no evidence that models can generate puzzles: GPT-3.5 generates puzzles with answers that do not conform to the generated rules. Puzzle generation remains a challenging task for future work.",1e5743366625128e225879dbcfb568f6b8f1bcdc,Semantic Scholar,,, -113,texttosql empowered by large language models a benchmark evaluation,"['Dawei Gao', 'Haibin Wang', 'Yaliang Li', 'Xiuyu Sun', 'Yichen Qian', 'Bolin Ding', 'Jingren Zhou']",https://arxiv.org/pdf/2308.15363,2023-08-29,,"Large language models (LLMs) have emerged as a new paradigm for Text-to-SQL task. However, the absence of a systematical benchmark inhibits the development of designing effective, efficient and economic LLM-based Text-to-SQL solutions. To address this challenge, in this paper, we first conduct a systematical and extensive comparison over existing prompt engineering methods, including question representation, example selection and example organization, and with these experimental results, we elaborate their pros and cons. Based on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLM, we investigate them in various scenarios, and further enhance their performance with supervised fine-tuning. Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of the supervised fine-tuning. Additionally, towards an efficient and economic LLM-based Text-to-SQL solution, we emphasize the token efficiency in prompt engineering and compare the prior studies under this metric. We hope that our work provides a deeper understanding of Text-to-SQL with LLMs, and inspires further investigations and broad applications.",1fc89ce338b94f6a46e41b9a13aa99366a762eea,Semantic Scholar,,, +80,querydependent prompt evaluation and optimization with offline inverse rl,"['Hao Sun', 'Alihan Hüyük', 'M. Schaar']",https://arxiv.org/pdf/2309.06553,2023-09-13,,"In this study, we aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization. 
We identify a previously overlooked objective of query dependency in such optimization and elucidate two ensuing challenges that impede the successful and economical design of prompt optimization techniques. One primary issue is the absence of an effective method to evaluate prompts during inference when the golden answer is unavailable. Concurrently, learning via interactions with the LLMs to navigate the expansive natural language prompting space proves to be resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data. Such data exists as by-products when diverse prompts are benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent prompt optimization objective is achieved by first learning an offline reward model. This model can evaluate any query-prompt pairs without accessing LLMs. Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt. Our experimental evaluations across various LLM scales and arithmetic reasoning datasets underscore both the efficacy and economic viability of the proposed approach.",cd391facabf5005419b79997b2ef8473644a8192,Semantic Scholar,,, +81,discrete prompt optimization via constrained generation for zeroshot reranker,"['Sukmin Cho', 'Soyeong Jeong', 'J. Seo', 'Jong C. Park']",http://arxiv.org/pdf/2305.13729,2023-05-23,,"Re-rankers, which order retrieved documents with respect to the relevance score on the given query, have gained attention for the information retrieval (IR) task. Rather than fine-tuning the pre-trained language model (PLM), the large-scale language model (LLM) is utilized as a zero-shot re-ranker with excellent results. While LLM is highly dependent on the prompts, the impact and the optimization of the prompts for the zero-shot re-ranker are not explored yet. Along with highlighting the impact of optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), with the metric estimating the optimum for re-ranking. Co-Prompt guides the generated texts from PLM toward optimal prompts based on the metric without parameter update. The experimental results demonstrate that Co-Prompt leads to outstanding re-ranking performance against the baselines. Also, Co-Prompt generates more interpretable prompts for humans against other prompt optimization methods.",d61f0820943a667917fb6d32225826aa5279f694,Semantic Scholar,,, +82,emotionconditioned text generation through automatic prompt optimization,"['Yarik Menchaca Resendiz', 'Roman Klinger']",https://arxiv.org/pdf/2308.04857,2023-08-09,,"Conditional natural language generation methods often require either expensive fine-tuning or training a large language model from scratch. Both are unlikely to lead to good results without a substantial amount of data and computational resources. Prompt learning without changing the parameters of a large language model presents a promising alternative. It is a cost-effective approach, while still achieving competitive results. While this procedure is now established for zero- and few-shot text classification and structured prediction, it has received limited attention in conditional text generation. We present the first automatic prompt optimization approach for emotion-conditioned text generation with instruction-fine-tuned models. 
Our method uses an iterative optimization procedure that changes the prompt by adding, removing, or replacing tokens. As objective function, we only require a text classifier that measures the realization of the conditional variable in the generated text. We evaluate the method on emotion-conditioned text generation with a focus on event reports and compare it to manually designed prompts that also act as the seed for the optimization procedure. The optimized prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in contrast to manually designed seed prompts with only 0.22 macro-average F1.",ef5cd0eb266e3df3eb64aec18e1854fe0244d228,Semantic Scholar,,,
83,large language models as optimizers,"['Chengrun Yang', 'Xuezhi Wang', 'Yifeng Lu', 'Hanxiao Liu', 'Quoc V. Le', 'Denny Zhou', 'Xinyun Chen']",https://arxiv.org/pdf/2309.03409,2023-09-07,,"Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.",f8a2dca1e8fe56e698984c077f7ff58d8ca867e9,Semantic Scholar,,,
84,dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning,"['Chengzhengxu Li', 'Xiaoming Liu', 'Yichen Wang', 'Duyi Li', 'Y. Lan', 'Chao Shen']",https://arxiv.org/pdf/2308.07272,,,"Prompt-based pre-trained language models (PLMs) paradigm have succeeded substantially in few-shot natural language processing (NLP) tasks. However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective. Meanwhile, existing continuous prompt optimization methods improve the performance by learning the ideal prompts through the gradient information of PLMs, whose high computational cost, and low readability and generalizability are often concerning. To address the research gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O) method. We first design a multi-round dialogue alignment strategy for readability prompt set generation based on GPT-4. Furthermore, we propose an efficient prompt screening metric to identify high-quality prompts with linear complexity. Finally, we construct a reinforcement learning (RL) framework based on policy gradients to match the prompts to inputs optimally. By training a policy network with only 0.67% of the PLM parameter size on the tasks in the few-shot setting, DP2O outperforms the state-of-the-art (SOTA) method by 1.52% in accuracy on average on four open-source datasets. 
Moreover, subsequent experiments also demonstrate that DP2O has good universality, robustness and generalization ability.",ff96527c03fbea7c3bb7d44d1d656d875ddba75e,Semantic Scholar,,,
85,evaluation of chatgpt family of models for biomedical reasoning and classification,"['Shan Chen', 'Yingya Li', 'Sheng Lu', 'Hoang Van', 'H. Aerts', 'G. Savova', 'D. Bitterman']",http://arxiv.org/pdf/2304.02496,2023-04-05,,"Recent advances in large language models (LLMs) have shown impressive ability in biomedical question-answering, but have not been adequately investigated for more specific biomedical applications. This study investigates the performance of LLMs such as the ChatGPT family of models (GPT-3.5s, GPT-4) in biomedical tasks beyond question-answering. Because no patient data can be passed to the OpenAI API public interface, we evaluated model performance with over 10000 samples as proxies for two fundamental tasks in the clinical domain - classification and reasoning. The first task is classifying whether statements of clinical and policy recommendations in scientific literature constitute health advice. The second task is causal relation detection from the biomedical literature. We compared LLMs with simpler models, such as bag-of-words (BoW) with logistic regression, and fine-tuned BioBERT models. Despite the excitement around viral ChatGPT, we found that fine-tuning for two fundamental NLP tasks remained the best strategy. The simple BoW model performed on par with the most complex LLM prompting. Prompt engineering required significant investment.",020e473d8c987dcfb03fcfffeb87b17812447031,Semantic Scholar,,,
86,textguided synthesis of artistic images with retrievalaugmented diffusion models,"['Robin Rombach', 'A. Blattmann', 'B. Ommer']",http://arxiv.org/pdf/2207.13038,2022-07-26,,"Novel architectures have recently improved generative image synthesis leading to excellent visual quality in various tasks. Of particular note is the field of ``AI-Art'', which has seen unprecedented growth with the emergence of powerful multimodal models such as CLIP. By combining speech and image synthesis models, so-called ``prompt-engineering'' has become established, in which carefully selected and composed sentences are used to achieve a certain visual style in the synthesized image. In this note, we present an alternative approach based on retrieval-augmented diffusion models (RDMs). In RDMs, a set of nearest neighbors is retrieved from an external database during training for each training instance, and the diffusion model is conditioned on these informative samples. During inference (sampling), we replace the retrieval database with a more specialized database that contains, for example, only images of a particular visual style. This provides a novel way to prompt a general trained model after training and thereby specify a particular visual style. As shown by our experiments, this approach is superior to specifying the visual style within the text prompt. We open-source code and model weights at https://github.com/CompVis/latent-diffusion .",0270ec4bc946b59c5cf6204be2553682dee0346c,Semantic Scholar,,,
87,interactive and visual prompt engineering for adhoc task adaptation with large language models,"['Hendrik Strobelt', 'Albert Webson', 'Victor Sanh', 'Benjamin Hoover', 'Johanna Beyer', 'H. Pfister', 'Alexander M. 
Rush']",https://arxiv.org/pdf/2208.07852,2022-08-16,,"State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.",0392d58335ce674a70f5e58ac8c438de296a0e6a,Semantic Scholar,,, +88,"artificial intelligence for health message generation theory, method, and an empirical study using prompt engineering","['Sue Lim', 'Ralf Schmälzle']",http://arxiv.org/pdf/2212.07507,2022-12-14,,"This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease",040ec58865ab50b5e6d91a355ffc146ec5034e9f,Semantic Scholar,,, +89,how does prompt engineering affect chatgpt performance on unsupervised entity resolution,"['Khanin Sisaengsuwanchai', 'Navapat Nananukul', 'M. Kejriwal']",https://arxiv.org/pdf/2310.06174,2023-10-09,,"Entity Resolution (ER) is the problem of semi-automatically determining when two entities refer to the same underlying entity, with applications ranging from healthcare to e-commerce. Traditional ER solutions required considerable manual expertise, including feature engineering, as well as identification and curation of training data. In many instances, such techniques are highly dependent on the domain. With recent advent in large language models (LLMs), there is an opportunity to make ER much more seamless and domain-independent. However, it is also well known that LLMs can pose risks, and that the quality of their outputs can depend on so-called prompt engineering. Unfortunately, a systematic experimental study on the effects of different prompting methods for addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper aims to address this gap by conducting such a study. 
Although preliminary in nature, our results show that prompting can significantly affect the quality of ER, although it affects some metrics more than others, and can also be dataset dependent.",06ab0710c8a7315e70c15c0d7eb1aa50210d945c,Semantic Scholar,,, +90,a systematic survey of prompt engineering on visionlanguage foundation models,"['Jindong Gu', 'Zhen Han', 'Shuo Chen', 'Ahmad Beirami', 'Bailan He', 'Gengyuan Zhang', 'Ruotong Liao', 'Yao Qin', 'Volker Tresp', 'Philip H. S. Torr']",https://arxiv.org/pdf/2307.12980,,,"—Prompt engineering is a technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering enables the ability to perform predictions based solely on prompts without updating model parameters, and the easier application of large pre-trained models in real-world tasks. In past years, Prompt engineering has been well-studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently a lack of a systematic overview of prompt engineering on pre-trained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models ( e.g., Flamingo), image-text matching models ( e.g., CLIP), and text-to-image generation models ( e.g., Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.",06d8562831c32844285a691c5250d04726df3c61,Semantic Scholar,,, +91,unveiling the potential of large language models in generating semantic and crosslanguage clones,"['Palash R. Roy', 'A. Alam', 'Farouq Al-Omari', 'B. Roy', 'C. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2309.06424,2023-09-12,,"Semantic and Cross-language code clone generation may be useful for code reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has potential in such clone generation as GPT is used for text generation. When developers copy/paste codes from Stack Overflow (SO) or within a system, there might be inconsistent changes leading to unexpected behaviours. Similarly, if someone possesses a code snippet in a particular programming language but seeks equivalent functionality in a different language, a semantic cross-language code clone generation approach could provide valuable assistance.In this study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3 model could help generate semantic and cross-language clone variants for a given fragment.We have comprised a diverse set of code fragments and assessed GPT-3s performance in generating code variants.Through extensive experimentation and analysis, where 9 judges spent 158 hours to validate, we investigate the model's ability to produce accurate and semantically correct variants. 
Our findings shed light on GPT-3's strengths in code generation, offering insights into the potential applications and challenges of using advanced language models in software development. Our quantitative analysis yields compelling results. In the realm of semantic clones, GPT-3 attains an impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot prompt engineering. Furthermore, the model shines in transcending linguistic confines, boasting an exceptional 91.25% accuracy in generating cross-language clones",073972fa0de48db1304509041e877e568c94e7de,Semantic Scholar,,, +92,rtllm an opensource benchmark for design rtl generation with large language model,"['Yao Lu', 'Shang Liu', 'Qijun Zhang', 'Zhiyao Xie']",https://arxiv.org/pdf/2308.05345,2023-08-10,,"Inspired by the recent success of large language models (LLMs) like ChatGPT, researchers start to explore the adoption of LLMs for agile hardware design, such as generating design RTL based on natural-language instructions. However, in existing works, their target designs are all relatively simple and in a small scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works only focus on the design correctness, without evaluating the design qualities of generated design RTL. In this work, we propose an open-source benchmark named RTLLM, for generating design RTL with natural language instructions. To systematically evaluate the auto-generated design RTL, we summarized three progressive goals, named syntax goal, functionality goal, and design quality goal. This benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which proves to significantly boost the performance of GPT-3.5 in our proposed benchmark.",079be8c8a93fc80274ff22251a3dac9804bec66a,Semantic Scholar,,, +93,userfriendly image editing with minimal text input leveraging captioning and injection techniques,"['Sunwoo Kim', 'Wooseok Jang', 'Hyunsung Kim', 'Junho Kim', 'Yunjey Choi', 'Seung Wook Kim', 'Gayeong Lee']",http://arxiv.org/pdf/2306.02717,2023-06-05,,"Recent text-driven image editing in diffusion models has shown remarkable success. However, the existing methods assume that the user's description sufficiently grounds the contexts in the source image, such as objects, background, style, and their relations. This assumption is unsuitable for real-world applications because users have to manually engineer text prompts to find optimal descriptions for different images. From the users' standpoint, prompt engineering is a labor-intensive process, and users prefer to provide a target word for editing instead of a full sentence. To address this problem, we first demonstrate the importance of a detailed text description of the source image, by dividing prompts into three categories based on the level of semantic details. Then, we propose simple yet effective methods by combining prompt generation frameworks, thereby making the prompt engineering process more user-friendly. Extensive qualitative and quantitative experiments demonstrate the importance of prompts in text-driven image editing and our method is comparable to ground-truth prompts.",0809c278fcdec2ce297da3a9d6e031fc192263f6,Semantic Scholar,,, +94,a prompt pattern catalog to enhance prompt engineering with chatgpt,"['Jules White', 'Quchen Fu', 'Sam Hays', 'M. 
Sandborn', 'Carlos Olea', 'Henry Gilbert', 'Ashraf Elnashar', 'Jesse Spencer-Smith', 'D. Schmidt']",http://arxiv.org/pdf/2302.11382,2023-02-21,,"Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.",08b85bce712168998004ee80ce4e475390413c74,Semantic Scholar,,, +95,design guidelines for prompt engineering texttoimage generative models,"['Vivian Liu', 'Lydia B. Chilton']",https://arxiv.org/pdf/2109.06977,2021-09-14,,"Text-to-image generative models are a new and powerful way to generate visual artwork. However, the open-ended nature of text as interaction is double-edged; while users can input anything and have access to an infinite range of generations, they also must engage in brute-force trial and error with the text prompt when the result quality is poor. We conduct a study exploring what prompt keywords and model hyperparameters can help produce coherent outputs. In particular, we study prompts structured to include subject and style keywords and investigate success and failure modes of these prompts. Our evaluation of 5493 generations over the course of five experiments spans 51 abstract and concrete subjects as well as 51 abstract and figurative styles. From this evaluation, we present design guidelines that can help people produce better outcomes from text-to-image generative models.",0968f1592f9401d72bf0d97e740496818c1a3135,Semantic Scholar,,, +96,on codex prompt engineering for ocl generation an empirical study,"['Seif Abukhalaf', 'Mohammad Hamdaqa', 'Foutse Khomh']",https://arxiv.org/pdf/2303.16244,2023-03-29,,"The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to Meta-Object Facility (MOF) models. OCL can provide precision and conciseness to UML models. Nevertheless, the unfamiliar syntax of OCL has hindered its adoption by software practitioners. LLMs, such as GPT-3, have made significant progress in many NLP tasks, such as text generation and semantic parsing. Similarly, researchers have improved on the downstream tasks by fine-tuning LLMs for the target task. 
Codex, a GPT-3 descendant by OpenAI, has been fine-tuned on publicly available code from GitHub and has proven the ability to generate code in many programming languages, powering the AI-pair programmer Copilot. One way to take advantage of Codex is to engineer prompts for the target downstream task. In this paper, we investigate the reliability of the OCL constraints generated by Codex from natural language specifications. To achieve this, we compiled a dataset of 15 UML models and 168 specifications from various educational resources. We manually crafted a prompt template with slots to populate with the UML information and the target task in the prefix format to complete the template with the generated OCL constraint. We used both zero- and few-shot learning methods in the experiments. The evaluation is reported by measuring the syntactic validity and the execution accuracy metrics of the generated OCL constraints. Moreover, to get insight into how close or natural the generated OCL constraints are compared to human-written ones, we measured the cosine similarity between the sentence embedding of the correctly generated and human-written OCL constraints. Our findings suggest that by enriching the prompts with the UML information of the models and enabling few-shot learning, the reliability of the generated OCL constraints increases. Furthermore, the results reveal a close similarity based on sentence embedding between the generated OCL constraints and the human-written ones in the ground truth, implying a level of clarity and understandability in the generated OCL constraints by Codex.",0a0d6a98bd246a82aaaa9d33ec0eadf4ceae69dc,Semantic Scholar,,, +97,visorgpt learning visual prior via generative pretraining,"['Jinheng Xie', 'Kai Ye', 'Yudong Li', 'Yuexiang Li', 'Kevin Lin', 'Yefeng Zheng', 'Linlin Shen', 'Mike Zheng Shou']",http://arxiv.org/pdf/2305.13777,2023-05-23,,"Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as the visual prior, e.g., object location and shape, in the model. Such prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions failing to adhere to the prior can result in visually inaccurate synthetic results. This work aims to explicitly learn the visual prior and enable the customization of sampling. Inspired by advances in language modeling, we propose to learn Visual prior via Generative Pre-Training, dubbed VisorGPT. By discretizing visual locations of objects, e.g., bounding boxes, human pose, and instance masks, into sequences, VisorGPT can model visual prior through likelihood maximization. Besides, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate that VisorGPT can effectively model the visual prior, which can be employed for many vision tasks, such as customizing accurate human pose for conditional image synthesis models like ControlNet. Code will be released at https://github.com/Sierkinhane/VisorGPT.",0a61802b71aa044cf1fe0e81befec148e0d5001b,Semantic Scholar,,, +98,chatgpt for robotics design principles and model abilities,"['Sai Vemprala', 'Rogerio Bonatti', 'A. Bucker', 'Ashish Kapoor']",https://arxiv.org/pdf/2306.17582,2023-02-20,,"This paper presents an experimental study regarding the use of OpenAI's ChatGPT for robotics applications. 
We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library which allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks. We explore ChatGPT's ability to use free-form dialog, parse XML tags, and to synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents. We show that ChatGPT can be effective at solving several of such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics.",0ba581718f294db1d7b3dbc159cc3d3380f74606,Semantic Scholar,,, +99,a chat about boring problems studying gptbased text normalization,"['Yang Zhang', 'Travis M. Bartley', 'Mariana Graterol-Fuenmayor', 'Vitaly Lavrukhin', 'Evelina Bakhturina', 'Boris Ginsburg']",https://arxiv.org/pdf/2309.13426,2023-09-23,,"Text normalization - the conversion of text from written to spoken form - is traditionally assumed to be an ill-formed task for language models. In this work, we argue otherwise. We empirically show the capacity of Large-Language Models (LLM) for text normalization in few-shot scenarios. Combining self-consistency reasoning with linguistic-informed prompt engineering, we find LLM based text normalization to achieve error rates around 40\% lower than top normalization systems. Further, upon error analysis, we note key limitations in the conventional design of text normalization tasks. We create a new taxonomy of text normalization errors and apply it to results from GPT-3.5-Turbo and GPT-4.0. Through this new framework, we can identify strengths and weaknesses of GPT-based TN, opening opportunities for future work.",0c8446eedfe083e0ee32f5c4f793e5435904014a,Semantic Scholar,,, +100,robust preference learning for storytelling via contrastive reinforcement learning,"['Louis Castricato', 'Alexander Havrilla', 'Shahbuland Matiana', 'M. Pieler', 'Anbang Ye', 'Ian Yang', 'Spencer Frazier', 'Mark O. Riedl']",http://arxiv.org/pdf/2210.07792,2022-10-14,,"Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering which is labor intensive and often inconsistent. They may also use logit-manipulation methods which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. 
However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over a LLM 20x as large as well as logit-based methods. This motivates the use of contrastive learning for general purpose human preference modeling.",0e1ae0bdcc8469db99a4f8008288e20f285f1c6d,Semantic Scholar,,, +101,towards equitable representation in texttoimage synthesis models with the crosscultural understanding benchmark (ccub) dataset,"['Zhixuan Liu', 'Y. Shin', 'Beverley-Claire Okogwu', 'Youngsik Yun', 'Lia Coleman', 'Peter Schaldenbrand', 'Jihie Kim', 'Jean Oh']",http://arxiv.org/pdf/2301.12073,2023-01-28,,"It has been shown that accurate representation in media improves the well-being of the people who consume it. By contrast, inaccurate representations can negatively affect viewers and lead to harmful perceptions of other cultures. To achieve inclusive representation in generated images, we propose a culturally-aware priming approach for text-to-image synthesis using a small but culturally curated dataset that we collected, known here as Cross-Cultural Understanding Benchmark (CCUB) Dataset, to fight the bias prevalent in giant datasets. Our proposed approach is comprised of two fine-tuning techniques: (1) Adding visual context via fine-tuning a pre-trained text-to-image synthesis model, Stable Diffusion, on the CCUB text-image pairs, and (2) Adding semantic context via automated prompt engineering using the fine-tuned large language model, GPT-3, trained on our CCUB culturally-aware text data. CCUB dataset is curated and our approach is evaluated by people who have a personal relationship with that particular culture. Our experiments indicate that priming using both text and image is effective in improving the cultural relevance and decreasing the offensiveness of generated images while maintaining quality.",0e8e3d2a2f4413808c7aff7bee6e8e11ec2700d7,Semantic Scholar,,, +102,beyond factuality a comprehensive evaluation of large language models as knowledge generators,"['Liang Chen', 'Yang Deng', 'Yatao Bian', 'Zeyu Qin', 'Bingzhe Wu', 'Tat-Seng Chua', 'Kam-Fai Wong']",https://arxiv.org/pdf/2310.07289,2023-10-11,,"Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when being prompted to generate world knowledge. However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge. In light of this, we introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to systematically and automatically evaluate generated knowledge from six important perspectives -- Factuality, Relevance, Coherence, Informativeness, Helpfulness and Validity. We conduct an extensive empirical analysis of the generated knowledge from three different types of LLMs on two widely studied knowledge-intensive tasks, i.e., open-domain question answering and knowledge-grounded dialogue. Surprisingly, our study reveals that the factuality of generated knowledge, even if lower, does not significantly hinder downstream tasks. 
Instead, the relevance and coherence of the outputs are more important than small factual mistakes. Further, we show how to use CONNER to improve knowledge-intensive tasks by designing two strategies: Prompt Engineering and Knowledge Selection. Our evaluation code and LLM-generated knowledge with human annotations will be released to facilitate future research.",0f6fe87afd1a3571f77c790893b03717e5d0422a,Semantic Scholar,,, +103,chatgpt4pcg competition characterlike level generation for science birds,"['Pittawat Taveekitworachai', 'Febri Abdullah', 'Mury F. Dewantoro', 'R. Thawonmas', 'J. Togelius', 'Jochen Renz']",https://arxiv.org/pdf/2303.15662,2023-03-28,,"This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE Conference on Games. The objective of this competition is for participants to create effective prompts for ChatGPT–enabling it to generate Science Birds levels with high stability and character-like qualities–fully using their creativity as well as prompt engineering skills. ChatGPT is a conversational agent developed by OpenAI. Science Birds is selected as the competition platform because designing an Angry Birds-like level is not a trivial task due to the in-game gravity; the quality of the levels is determined by their stability. To lower the entry barrier to the competition, we limit the task to the generation of capitalized English alphabetical characters. We also allow only a single prompt to be used for generating all the characters. Here, the quality of the generated levels is determined by their stability and similarity to the given characters. A sample prompt is provided to participants for their reference. An experiment is conducted to determine the effectiveness of several modified versions of this sample prompt on level stability and similarity by testing them on several characters. To the best of our knowledge, we believe that ChatGPT4PCG is the first competition of its kind and hope to inspire enthusiasm for prompt engineering in procedural content generation.",0fb8f3f86476e9ab8fa4679620acb7d525b222a8,Semantic Scholar,,, +104,contrastner contrastivebased prompt tuning for fewshot ner,"['Amirhossein Layegh', 'A. H. Payberah', 'A. Soylu', 'D. Roman', 'M. Matskin']",https://arxiv.org/pdf/2305.17951,2023-05-29,,"Prompt-based language models have produced encouraging results in numerous applications, including Named Entity Recognition (NER) tasks. NER aims to identify entities in a sentence and provide their types. However, the strong performance of most available NER approaches is heavily dependent on the design of discrete prompts and a verbalizer to map the model-predicted outputs to entity categories, which are complicated undertakings. To address these challenges, we present ContrastNER, a prompt-based NER framework that employs both discrete and continuous tokens in prompts and uses a contrastive learning approach to learn the continuous prompts and forecast entity types. The experimental results demonstrate that ContrastNER obtains competitive performance to the state-of-the-art NER methods in high-resource settings and outperforms the state-of-the-art models in low-resource circumstances without requiring extensive manual prompt engineering and verbalizer design.",1059b79598d6e08121503093f45d50fa963d2843,Semantic Scholar,,, +105,prompting the hidden talent of webscale speech models for zeroshot task generalization,"['Puyuan Peng', 'Brian Yan', 'Shinji Watanabe', 'David F. 
Harwath']",https://arxiv.org/pdf/2305.11095,2023-05-18,,"We investigate the emergent abilities of the recently proposed web-scale speech model Whisper, by adapting it to unseen tasks with prompt engineering. We selected three tasks: audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR), and speech translation (ST) on unseen language pairs. We design task-specific prompts, by either leveraging another large-scale model, or simply manipulating the special tokens in the default prompts. Experiments show that compared to the default prompts, our proposed prompts improve performance by 10% to 45% on the three zero-shot tasks, and even outperform SotA supervised models on some datasets. In addition, our experiments reveal many interesting properties of Whisper, including its robustness to prompts, bias on accents, and the multilingual understanding in its latent space. Code is available at https://github.com/jasonppy/PromptingWhisper",10e8dc07ea256c6a88d7043cf135417402ed38f4,Semantic Scholar,,, +106,aicopilot for business optimisation a framework and a case study in production scheduling,"['Pivithuru Thejan Amarasinghe', 'Su Nguyen', 'Yuan Sun', 'D. Alahakoon']",https://arxiv.org/pdf/2309.13218,2023-09-22,,"Business optimisation refers to the process of finding and implementing efficient and cost-effective means of operation to bring a competitive advantage for businesses. Synthesizing problem formulations is an integral part of business optimisation, which relies on human expertise to construct problem formulations using optimisation languages. Interestingly, with advancements in Large Language Models (LLMs), the human expertise needed in problem formulation can be minimized. However, developing an LLM for problem formulation is challenging, due to training data, token limitations, and lack of appropriate performance metrics. For the requirement of training data, recent attention has been directed towards fine-tuning pre-trained LLMs for downstream tasks rather than training an LLM from scratch for a specific task. In this paper, we adopt an LLM fine-tuning approach and propose an AI-Copilot for business optimisation problem formulation. For token limitations, we introduce modularization and prompt engineering techniques to synthesize complex problem formulations as modules that fit into the token limits of LLMs. Additionally, we design performance evaluation metrics that are better suited for assessing the accuracy and quality of problem formulations. The experiment results demonstrate that with this approach we can synthesize complex and large problem formulations for a typical business optimisation problem in production scheduling.",13fafa40eb7b15813cdf6c2ead1e1032e7b085f0,Semantic Scholar,,, +107,coaudit tools to help humans doublecheck aigenerated content,"['Andrew D. Gordon', 'Carina Negreanu', 'J. Cambronero', 'Rasika Chakravarthy', 'Ian Drosos', 'Hao Fang', 'Bhaskar Mitra', 'Hannah Richardson', 'Advait Sarkar', 'Stephanie Simmons', 'Jack Williams', 'Ben Zorn']",https://arxiv.org/pdf/2310.01297,2023-10-02,,"Users are increasingly being warned to check AI-generated content for correctness. Still, as LLMs (and other generative models) generate more complex output, such as summaries, tables, or code, it becomes harder for the user to audit or evaluate the output for quality or correctness. Hence, we are seeing the emergence of tool-assisted experiences to help the user double-check a piece of AI-generated content. We refer to these as co-audit tools. 
Co-audit tools complement prompt engineering techniques: one helps the user construct the input prompt, while the other helps them check the output response. As a specific example, this paper describes recent research on co-audit tools for spreadsheet computations powered by generative models. We explain why co-audit experiences are essential for any application of generative AI where quality is important and errors are consequential (as is common in spreadsheet computations). We propose a preliminary list of principles for co-audit, and outline research challenges.",14dcafae548d578f6b8c683d0972531bc46423ca,Semantic Scholar,,, +108,chatgpt as a mapping assistant a novel method to enrich maps with generative ai and content derived from streetlevel photographs,"[""Levente Juh'asz"", 'P. Mooney', 'H. Hochmair', 'Boyuan Guan']",https://arxiv.org/pdf/2306.03204,2023-06-05,,"This paper explores the concept of leveraging generative AI as a mapping assistant for enhancing the efficiency of collaborative mapping. We present results of an experiment that combines multiple sources of volunteered geographic information (VGI) and large language models (LLMs). Three analysts described the content of crowdsourced Mapillary street-level photographs taken along roads in a small test area in Miami, Florida. GPT-3.5-turbo was instructed to suggest the most appropriate tagging for each road in OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a state-of-the-art multimodal pre-training method as an artificial analyst of street-level photographs in addition to human analysts. Results demonstrate two ways to effectively increase the accuracy of mapping suggestions without modifying the underlying AI models: by (1) providing a more detailed description of source photographs, and (2) combining prompt engineering with additional context (e.g. location and objects detected along a road). The first approach increases the suggestion accuracy by up to 29%, and the second one by up to 20%.",16877baf3874038233279e07e330f891455fd880,Semantic Scholar,,, +109,using large language models to generate engaging captions for data visualizations,"['A. Liew', 'Klaus Mueller']",http://arxiv.org/pdf/2212.14047,2022-12-27,,"Creating compelling captions for data visualizations has been a long- standing challenge. Visualization researchers are typically untrained in journalistic reporting and hence the captions that are placed be- low data visualizations tend to be not overly engaging and rather just stick to basic observations about the data. In this work we explore the opportunities offered by the newly emerging crop of large language models (LLM) which use sophisticated deep learning technology to produce human-like prose. We ask, can these power-ful software devices be purposed to produce engaging captions for generic data visualizations like a scatterplot. It turns out that the key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering . We report on first experiments using the popular LLM GPT-3 and deliver some promising results.",1696e03a35f1bcc724ed9bfe69bb028b789415e8,Semantic Scholar,,, +110,an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems,"['Andreas Metzger', 'Jon Bartel', 'Jan Laufer']",https://arxiv.org/pdf/2309.14391,2023-09-25,,"Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. 
Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers to comply with relevant legal frameworks, and facilitate service users to build trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar.",16acd2d2faa236dfe5f6ab67a0b94a9ed1b1de57,Semantic Scholar,,, +111,"chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations","['Chunkit Chan', 'Cheng Jiayang', 'Weiqi Wang', 'Yuxin Jiang', 'Tianqing Fang', 'Xin Liu', 'Yangqiu Song']",http://arxiv.org/pdf/2304.14827,2023-04-28,,"This paper aims to quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal relations, causal relations, and discourse relations. Given ChatGPT's promising performance across various tasks, we proceed to carry out thorough evaluations on the whole test sets of 11 datasets, including temporal and causal relations, PDTB2.0-based, and dialogue-based discourse relations. To ensure the reliability of our findings, we employ three tailored prompt templates for each task, including the zero-shot prompt template, zero-shot prompt engineering (PE) template, and in-context learning (ICL) prompt template, to establish the initial baseline scores for all popular sentence-pair relation classification tasks for the first time. Through our study, we discover that ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations, albeit it may not possess the same level of expertise in identifying the temporal order between two events. While it is capable of identifying the majority of discourse relations with existing explicit discourse connectives, the implicit discourse relation remains a formidable challenge. Concurrently, ChatGPT demonstrates subpar performance in the dialogue discourse parsing task that requires structural understanding in a dialogue before being aware of the discourse relation.",186e96fe036927182ec963b63f9dd7f8ff650158,Semantic Scholar,,, +112,prompting ai art an investigation into the creative skill of prompt engineering,"['J. Oppenlaender', 'Rhema Linder', 'Johanna M. Silvennoinen']",http://arxiv.org/pdf/2303.13534,2023-03-13,,"We are witnessing a novel era of creativity where anyone can create digital content via prompt-based learning (known as prompt engineering). 
This paper delves into prompt engineering as a novel creative skill for creating AI art with text-to-image generation. In a pilot study, we find that many crowdsourced participants have knowledge about art which could be used for writing effective prompts. In three subsequent studies, we explore whether crowdsourced participants can put this knowledge into practice. We examine if participants can 1) discern prompt quality, 2) write prompts, and 3) refine prompts. We find that participants could evaluate prompt quality and crafted descriptive prompts, but they lacked style-specific vocabulary necessary for effective prompting. This is in line with our hypothesis that prompt engineering is a new type of skill that is non-intuitive and must first be acquired (e.g., through means of practice and learning) before it can be used. Our studies deepen our understanding of prompt engineering and chart future research directions. We offer nine guidelines for conducting research on text-to-image generation and prompt engineering with paid crowds. We conclude by envisioning four potential futures for prompt engineering.",1bc9974780230573bfe9f89789115cb4fbf8bfc6,Semantic Scholar,,, +113,solving and generating npr sunday puzzles with large language models,"['Jin Zhao', 'Carolyn Jane Anderson']",http://arxiv.org/pdf/2306.12255,2023-06-21,,"We explore the ability of large language models to solve and generate puzzles from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15 years of on-air puzzles. We evaluate four large language models using PUZZLEQA, in both multiple choice and free response formats, and explore two prompt engineering techniques to improve free response performance: chain-of-thought reasoning and prompt summarization. We find that state-of-the-art large language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5, achieves 50.2% loose accuracy. However, in our few-shot puzzle generation experiment, we find no evidence that models can generate puzzles: GPT-3.5 generates puzzles with answers that do not conform to the generated rules. Puzzle generation remains a challenging task for future work.",1e5743366625128e225879dbcfb568f6b8f1bcdc,Semantic Scholar,,, 114,"multimethod selftraining improving code generation with text, and vice versa","['Shriyash Upadhyay', 'Etan Ginsberg']",https://arxiv.org/pdf/2307.10633,2023-07-20,,"Large Language Models have many methods for solving the same problem. This introduces novel strengths (different methods may work well for different problems) and weaknesses (it may be difficult for users to know which method to use). In this paper, we introduce Multi-Method Self-Training (MMST), where one method is trained on the filtered outputs of another, allowing us to augment the strengths and ameliorate the weaknesses of each method. Using a 176B parameter model trained on both language and code, we show that MMST can 1) improve the less performant method (up to 30%) making the model easier to use, 2) improve the more performant method (up to 32.2%) making the model more performant, and 3) improve the performance of related but distinct tasks (up to 10.3%) by improving the ability of the model to generate rationales. We then conduct ablation analyses to explore why MMST works. We show that MMST generates more data than traditional self-training, but the improvement in performance is driven by the use of multiple methods. 
We also analyze prompt-engineering and anti-correlated performance between methods as means of making MMST more effective. We hope the evidence from our paper motivates machine learning researchers to explore ways in which advances in language models allow for new forms of training.",20d448a8712238ea34d9a18287e3bf05bc61dd2c,Semantic Scholar,,, 115,unsupervised human activity recognition through twostage prompting with chatgpt,"['Qingxin Xia', 'T. Maekawa', 'Takahiro Hara']",http://arxiv.org/pdf/2306.02140,2023-06-03,,"Wearable sensor devices, which offer the advantage of recording daily objects used by a person while performing an activity, enable the feasibility of unsupervised Human Activity Recognition (HAR). Unfortunately, previous unsupervised approaches using the usage sequence of objects usually require a proper description of activities manually prepared by humans. Instead, we leverage the knowledge embedded in a Large Language Model (LLM) of ChatGPT. Because the sequence of objects robustly characterizes the activity identity, it is possible that ChatGPT already learned the association between activities and objects from existing contexts. However, previous prompt engineering for ChatGPT exhibits limited generalization ability when dealing with a list of words (i.e., sequence of objects) due to the similar weighting assigned to each word in the list. In this study, we propose a two-stage prompt engineering, which first guides ChatGPT to generate activity descriptions associated with objects while emphasizing important objects for distinguishing similar activities; then outputs activity classes and explanations for enhancing the contexts that are helpful for HAR. To the best of our knowledge, this is the first study that utilizes ChatGPT to recognize activities using objects in an unsupervised manner. We conducted our approach on three datasets and demonstrated the state-of-the-art performance.",20db2ac68c0a0daa8417696cced923e518c07681,Semantic Scholar,,, 116,s3 socialnetwork simulation system with large language modelempowered agents,"['Chen Gao', 'Xiaochong Lan', 'Zhi-jie Lu', 'Jinzhu Mao', 'J. Piao', 'Huandong Wang', 'Depeng Jin', 'Yong Li']",https://arxiv.org/pdf/2307.14984,2023-07-27,,"Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. 
This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.",221a72a3631ebf8b555c27bc864338390611feb1,Semantic Scholar,,, -117,suspicionagent playing imperfect information games with theory of mind aware gpt4,"['Jiaxian Guo', 'Bo Yang', 'Paul Yoo', 'Bill Yuchen Lin', 'Yusuke Iwasawa', 'Yutaka Matsuo']",https://arxiv.org/pdf/2309.17277,2023-09-29,,"Unlike perfect information games, where all elements are known to every player, imperfect information games emulate the real-world complexities of decision-making under uncertain or incomplete information. GPT-4, the recent breakthrough in large language models (LLMs) trained on massive passive data, is notable for its knowledge retrieval and reasoning abilities. This paper delves into the applicability of GPT-4's learned knowledge for imperfect information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an innovative agent that leverages GPT-4's capabilities for performing in imperfect information games. With proper prompt engineering to achieve different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable adaptability across a range of imperfect information card games. Importantly, GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it can understand others and intentionally impact others' behavior. Leveraging this, we design a planning strategy that enables GPT-4 to competently play against different opponents, adapting its gameplay style as needed, while requiring only the game rules and descriptions of observations as input. In the experiments, we qualitatively showcase the capabilities of Suspicion-Agent across three different imperfect information games and then quantitatively evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can potentially outperform traditional algorithms designed for imperfect information games, without any specialized training or examples. In order to encourage and foster deeper insights within the community, we make our game-related data publicly available.",25ec4e51e515548cb55e0270f449ac55f3b0840c,Semantic Scholar,,, -118,geotechnical parrot tales (gpt) harnessing large language models in geotechnical engineering,['Krishna Kumar'],http://arxiv.org/pdf/2304.02138,2023-04-04,,"The widespread adoption of large language models (LLMs), such as OpenAI's ChatGPT, could revolutionize various industries, including geotechnical engineering. However, GPT models can sometimes generate plausible-sounding but false outputs, leading to hallucinations. In this article, we discuss the importance of prompt engineering in mitigating these risks and harnessing the full potential of GPT for geotechnical applications. We explore the challenges and pitfalls associated with LLMs and highlight the role of context in ensuring accurate and valuable responses. Furthermore, we examine the development of context-specific search engines and the potential of LLMs to become a natural interface for complex tasks, such as data analysis and design. We also develop a unified interface using natural language to handle complex geotechnical engineering tasks and data analysis. 
By integrating GPT into geotechnical engineering workflows, professionals can streamline their work and develop sustainable and resilient infrastructure systems for the future.",26f560e592419891c9de1b25d0e4d4d16014d54e,Semantic Scholar,,, -119,toward reproducing network research results using large language models,"['Qiao Xiang', 'Yuling Lin', 'Mingjun Fang', 'Bang Huang', 'Siyong Huang', 'Ridi Wen', 'Franck Le', 'L. Kong', 'Jiwu Shu']",https://arxiv.org/pdf/2309.04716,2023-09-09,,"Reproducing research results in the networking community is important for both academia and industry. The current best practice typically resorts to three approaches: (1) looking for publicly available prototypes; (2) contacting the authors to get a private prototype; and (3) manually implementing a prototype following the description of the publication. However, most published network research does not have public prototypes and private prototypes are hard to get. As such, most reproducing efforts are spent on manual implementation based on the publications, which is both time and labor consuming and error-prone. In this paper, we boldly propose reproducing network research results using the emerging large language models (LLMs). In particular, we first prove its feasibility with a small-scale experiment, in which four students with essential networking knowledge each reproduces a different networking system published in prominent conferences and journals by prompt engineering ChatGPT. We report the experiment's observations and lessons and discuss future open research questions of this proposal. This work raises no ethical issue.",279c798fd53c8dc84044273d08b6a060dbe9f702,Semantic Scholar,,, -120,inducing anxiety in large language models increases exploration and bias,"['Julian Coda-Forno', 'Kristin Witte', 'A. Jagadish', 'Marcel Binz', 'Zeynep Akata', 'Eric Schulz']",http://arxiv.org/pdf/2304.11111,2023-04-21,,"Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry. Our results show that GPT-3.5 responds robustly to a common anxiety questionnaire, producing higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be predictably changed by using emotion-inducing prompts. Emotion-induction not only influences GPT-3.5's behavior in a cognitive task measuring exploratory decision-making but also influences its behavior in a previously-established task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a strong increase in biases when prompted with anxiety-inducing text. Thus, it is likely that how prompts are communicated to large language models has a strong influence on their behavior in applied settings. These results progress our understanding of prompt engineering and demonstrate the usefulness of methods taken from computational psychiatry for studying the capable algorithms to which we increasingly delegate authority and autonomy.",27c16cca907aa43397cc226a182b73b396c5cf66,Semantic Scholar,,, -121,conceptual design generation using large language models,"['Kevin Ma', 'Daniele Grandi', 'Christopher McComb', 'K. 
Goucher-Lambert']",http://arxiv.org/pdf/2306.01779,2023-05-30,,"Concept generation is a creative step in the conceptual design phase, where designers often turn to brainstorming, mindmapping, or crowdsourcing design ideas to complement their own knowledge of the domain. Recent advances in natural language processing (NLP) and machine learning (ML) have led to the rise of Large Language Models (LLMs) capable of generating seemingly creative outputs from textual prompts. The success of these models has led to their integration and application across a variety of domains, including art, entertainment, and other creative work. In this paper, we leverage LLMs to generate solutions for a set of 12 design problems and compare them to a baseline of crowdsourced solutions. We evaluate the differences between generated and crowdsourced design solutions through multiple perspectives, including human expert evaluations and computational metrics. Expert evaluations indicate that the LLM-generated solutions have higher average feasibility and usefulness while the crowdsourced solutions have more novelty. We experiment with prompt engineering and find that leveraging few-shot learning can lead to the generation of solutions that are more similar to the crowdsourced solutions. These findings provide insight into the quality of design solutions generated with LLMs and begins to evaluate prompt engineering techniques that could be leveraged by practitioners to generate higher-quality design solutions synergistically with LLMs.",29203f0b8b9be7fd70d99bf7390c6a78b68a9289,Semantic Scholar,,, -122,fixing hardware security bugs with large language models,"['Baleegh Ahmad', 'Shailja Thakur', 'Benjamin Tan', 'R. Karri', 'H. Pearce']",http://arxiv.org/pdf/2302.01215,2023-02-02,,"Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's Codex have demonstrated capabilities in many coding-adjacent domains. In this work we consider how LLMs maybe leveraged to automatically repair security relevant bugs present in hardware designs. We focus on bug repair in code written in the Hardware Description Language Verilog. For this study we build a corpus of domain-representative hardware security bugs. We then design and implement a framework to quantitatively evaluate the performance of any LLM tasked with fixing the specified bugs. The framework supports design space exploration of prompts (i.e., prompt engineering) and identifying the best parameters for the LLM. We show that an ensemble of LLMs can repair all ten of our benchmarks. This ensemble outperforms the state-of-the-art Cirfix hardware bug repair tool on its own suite of bugs. These results show that LLMs can repair hardware security bugs and the framework is an important step towards the ultimate goal of an automated end-to-end bug repair framework.",2af6a21a1b682ceb585165359d3605e89f4cf6b0,Semantic Scholar,,, -123,toxicity detection with generative promptbased inference,"['Yau-Shian Wang', 'Y. Chang']",https://arxiv.org/pdf/2205.12390,2022-05-24,,"Due to the subtleness, implicity, and different possible interpretations perceived by different people, detecting undesirable content from text is a nuanced difficulty. It is a long-known risk that language models (LMs), once trained on corpus containing undesirable content, have the power to manifest biases and toxicity. However, recent studies imply that, as a remedy, LMs are also capable of identifying toxic content without additional fine-tuning. 
Prompt-methods have been shown to effectively harvest this surprising self-diagnosing capability. However, existing prompt-based methods usually specify an instruction to a language model in a discriminative way. In this work, we explore the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering. We evaluate on three datasets with toxicity labels annotated on social media posts. Our analysis highlights the strengths of our generative classification approach both quantitatively and qualitatively. Interesting aspects of self-diagnosis and its ethical implications are discussed.",2afb07359e9c67499e1f373ac6f1520d3ea9c46a,Semantic Scholar,,, -124,exploring efl students' prompt engineering in humanai story writing an activity theory perspective,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",http://arxiv.org/pdf/2306.01798,2023-06-01,,"This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing. Sixty-seven Hong Kong secondary school students created generative-AI tools using open-source language models and wrote short stories with them. The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting. The research identified three main themes regarding the purposes for which students prompt generative-AI tools during short story writing: a lack of awareness of purposes, overcoming writer's block, and developing, expanding, and improving the story. The study also identified common characteristics of students' activity systems, including the sophistication of their generative-AI tools, the quality of their stories, and their school's overall academic achievement level, for their prompting of generative-AI tools for the three purposes during short story writing. The study's findings suggest that teachers should be aware of students' purposes for prompting generative-AI tools to provide tailored instructions and scaffolded guidance. The findings may also help designers provide differentiated instructions for users at various levels of story development when using a generative-AI tool.",2bb34cfe22d0d46394dd91ba8934e525563e1274,Semantic Scholar,,, -125,pre visionlanguage prompt learning with reparameterization encoder,['Anh Pham Thi Minh'],https://arxiv.org/pdf/2309.07760,2023-09-14,,"Large pre-trained vision-language models such as CLIP have demonstrated great potential in zero-shot transferability to downstream tasks. However, to attain optimal performance, the manual selection of prompts is necessary to improve alignment between the downstream image distribution and the textual class descriptions. This manual prompt engineering is the major challenge for deploying such models in practice since it requires domain expertise and is extremely time-consuming. To avoid non-trivial prompt engineering, recent work Context Optimization (CoOp) introduced the concept of prompt learning to the vision domain using learnable textual tokens. While CoOp can achieve substantial improvements over manual prompts, its learned context is worse generalizable to wider unseen classes within the same dataset. In this work, we present Prompt Learning with Reparameterization Encoder (PRE) - a simple and efficient method that enhances the generalization ability of the learnable prompt to unseen classes while maintaining the capacity to learn Base classes. 
Instead of directly optimizing the prompts, PRE employs a prompt encoder to reparameterize the input prompt embeddings, enhancing the exploration of task-specific knowledge from few-shot samples. Experiments and extensive ablation studies on 8 benchmarks demonstrate that our approach is an efficient method for prompt learning. Specifically, PRE achieves a notable enhancement of 5.60% in average accuracy on New classes and 3% in Harmonic mean compared to CoOp in the 16-shot setting, all achieved within a good training time.",2c66f49e328ca5815c13dda106abc2c326d4f28b,Semantic Scholar,,, -126,chainforge a visual toolkit for prompt engineering and llm hypothesis testing,"['Ian Arawjo', 'Chelse Swoopes', 'Priyan Vaithilingam', 'Martin Wattenberg', 'Elena L. Glassman']",https://arxiv.org/pdf/2309.09128,2023-09-17,,"Evaluating outputs of large language models (LLMs) is challenging, requiring making -- and making sense of -- many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.",2ed64d90670177bf58cdce6bda04a48a8731a18f,Semantic Scholar,,, -127,accelerated materials language processing enabled by gpt,"['Jaewoong Choi', 'Byungju Lee']",https://arxiv.org/pdf/2308.09354,2023-08-18,,"Materials language processing (MLP) is one of the key facilitators of materials science research, as it enables the extraction of structured information from massive materials science literature. Prior works suggested high-performance MLP models for text classification, named entity recognition (NER), and extractive question answering (QA), which require complex model architecture, exhaustive fine-tuning and a large number of human-labelled datasets. In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. First, we develop a GPT-enabled document classification method for screening relevant documents, achieving comparable accuracy and reliability compared to prior models, with only small dataset. Secondly, for NER task, we design an entity-centric prompts, and learning few-shot of them improved the performance on most of entities in three open datasets. Finally, we develop an GPT-enabled extractive QA model, which provides improved performance and shows the possibility of automatically correcting annotations. 
While our findings confirm the potential of GPT-enabled MLP models as well as their value in terms of reliability and practicability, our scientific methods and systematic approach are applicable to any materials science domain to accelerate the information extraction of scientific literature.",3034d8571e16e25c6a839bf492f20daf855d04a0,Semantic Scholar,,, -128,"a sign language recognition system with pepper, lightweighttransformer, and llm","['Jongyoon Lim', 'Inkyu Sa', 'Bruce A. MacDonald', 'Ho Seok Ahn']",https://arxiv.org/pdf/2309.16898,2023-09-28,,"This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction. First, we introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources. Building upon this, we employ large language models (LLMs) for intelligent robot interactions. Through intricate prompt engineering, we tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues. Finally, we present an integrated software pipeline, embodying advancements in a socially aware AI interaction model. Leveraging the Pepper Robot's capabilities, we demonstrate the practicality and effectiveness of our approach in real-world scenarios. The results highlight a profound potential for enhancing human-robot interaction through non-verbal interactions, bridging communication gaps, and making technology more accessible and understandable.",31e04aec55f749dc560afe1d8673112f9b32f46b,Semantic Scholar,,, -129,benchmarking causal study to interpret large language models for source code,"['Daniel Rodríguez-Cárdenas', 'David N. Palacio', 'Dipin Khati', 'Henry Burke', 'D. Poshyvanyk']",https://arxiv.org/pdf/2308.12415,2023-08-23,,"One of the most common solutions adopted by software researchers to address code generation is by training Large Language Models (LLMs) on massive amounts of source code. Although a number of studies have shown that LLMs have been effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu), previous research has largely overlooked the role of Causal Inference as a fundamental component of the interpretability of LLMs' performance. Existing benchmarks and datasets are meant to highlight the difference between the expected and the generated outcome, but do not take into account confounding variables (e.g., lines of code, prompt size) that equally influence the accuracy metrics. The fact remains that, when dealing with generative software tasks by LLMs, no benchmark is available to tell researchers how to quantify neither the causal effect of SE-based treatments nor the correlation of confounders to the model's performance. In an effort to bring statistical rigor to the evaluation of LLMs, this paper introduces a benchmarking strategy named Galeras comprised of curated testbeds for three SE tasks (i.e., code completion, code summarization, and commit generation) to help aid the interpretation of LLMs' performance. We illustrate the insights of our benchmarking strategy by conducting a case study on the performance of ChatGPT under distinct prompt engineering methods. 
The results of the case study demonstrate the positive causal influence of prompt semantics on ChatGPT's generative performance by an average treatment effect of $\approx 3\%$. Moreover, it was found that confounders such as prompt size are highly correlated with accuracy metrics ($\approx 0.412\%$). The end result of our case study is to showcase causal inference evaluations, in practice, to reduce confounding bias. By reducing the bias, we offer an interpretable solution for the accuracy metric under analysis.",3352d4bb5756a8a6bfcc1cde169b6aa9fd94497d,Semantic Scholar,,, -130,cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",https://arxiv.org/pdf/2307.05493,2023-06-19,,"ChatGPT is a state-of-the-art (SOTA) chatbot. Although it has potential to support English as a foreign language (EFL) students' writing, to effectively collaborate with it, a student must learn to engineer prompts, that is, the skill of crafting appropriate instructions so that ChatGPT produces desired outputs. However, writing an appropriate prompt for ChatGPT is not straightforward for non-technical users who suffer a trial-and-error process. This paper examines the content of EFL students' ChatGPT prompts when completing a writing task and explores patterns in the quality and quantity of the prompts. The data come from iPad screen recordings of secondary school EFL students who used ChatGPT and other SOTA chatbots for the first time to complete the same writing task. The paper presents a case study of four distinct pathways that illustrate the trial-and-error process and show different combinations of prompt content and quantity. The cases contribute evidence for the need to provide prompt engineering education in the context of the EFL writing classroom, if students are to move beyond an individual trial-and-error process, learning a greater variety of prompt content and more sophisticated prompts to support their writing.",344f801663a76aa15e0dd13344261d8648c382a2,Semantic Scholar,,, -131,"llm self defense by self examination, llms know they are being tricked","['Alec Helbling', 'Mansi Phute', 'Matthew Hull', 'Duen Horng Chau']",https://arxiv.org/pdf/2308.07308,2023-08-14,,"Large language models (LLMs) are popular for high-quality text generation but can produce harmful content, even when aligned with human values through reinforcement learning. Adversarial prompts can bypass their safety measures. We propose LLM Self Defense, a simple approach to defend against these attacks by having an LLM screen the induced responses. Our method does not require any fine-tuning, input preprocessing, or iterative output generation. Instead, we incorporate the generated content into a pre-defined prompt and employ another instance of an LLM to analyze the text and predict whether it is harmful. We test LLM Self Defense on GPT 3.5 and Llama 2, two of the current most prominent LLMs against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks. Notably, LLM Self Defense succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5 and Llama 2.",34f9c825ba24889fa5e164ba9f99bfe4fc2f3e61,Semantic Scholar,,, -132,chils zeroshot image classification with hierarchical label sets,"['Zachary Novack', 'S. Garg', 'Julian McAuley', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2302.02551,2023-02-06,,"Open vocabulary models (e.g. 
CLIP) have shown strong performance on zero-shot classification through their ability generate embeddings for each class based on their (natural language) names. Prior work has focused on improving the accuracy of these models through prompt engineering or by incorporating a small amount of labeled downstream data (via finetuning). However, there has been little focus on improving the richness of the class names themselves, which can pose issues when class labels are coarsely-defined and are uninformative. We propose Classification with Hierarchical Label Sets (or CHiLS), an alternative strategy for zero-shot classification specifically designed for datasets with implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each class, produce a set of subclasses, using either existing label hierarchies or by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though these subclasses were the labels of interest; (iii) map the predicted subclass back to its parent to produce the final prediction. Across numerous datasets with underlying hierarchical structure, CHiLS leads to improved accuracy in situations both with and without ground-truth hierarchical information. CHiLS is simple to implement within existing zero-shot pipelines and requires no additional training cost. Code is available at: https://github.com/acmi-lab/CHILS.",34fd95dd4dd32e704d4284fc31165e85b303bb1e,Semantic Scholar,,, -133,flows building blocks of reasoning and collaborating ai,"['Martin Josifoski', 'Lars Klein', 'Maxime Peyrard', 'Yifei Li', 'Saibo Geng', 'Julian Paul Schnitzler', 'Yuxing Yao', 'Jiheng Wei', 'Debjit Paul', 'Robert West']",https://arxiv.org/pdf/2308.01285,2023-08-02,,"Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize this potential, it is essential to develop a principled way of designing and studying such structured interactions. For this purpose, we introduce the conceptual framework of Flows: a systematic approach to modeling complex interactions. Flows are self-contained building blocks of computation, with an isolated state, communicating through a standardized message-based interface. This modular design allows Flows to be recursively composed into arbitrarily nested interactions, with a substantial reduction of complexity. Crucially, any interaction can be implemented using this framework, including prior work on AI--AI and human--AI interactions, prompt engineering schemes, and tool augmentation. We demonstrate the potential of Flows on the task of competitive coding, a challenging task on which even GPT-4 struggles. Our results suggest that structured reasoning and collaboration substantially improve generalization, with AI-only Flows adding +$21$ and human--AI Flows adding +$54$ absolute points in terms of solve rate. To support rapid and rigorous research, we introduce the aiFlows library. The library comes with a repository of Flows that can be easily used, extended, and composed into novel, more complex Flows. The aiFlows library is available at https://github.com/epfl-dlab/aiflows. 
Data and Flows for reproducing our experiments are available at https://github.com/epfl-dlab/cc_flows.",377d4d6c1be01b9df32edfd94b2c5946971b0108,Semantic Scholar,,, -134,thought propagation an analogical approach to complex reasoning with large language models,"['Junchi Yu', 'Ran He', 'Rex Ying']",https://arxiv.org/pdf/2310.03965,2023-10-06,,"Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \textit{from scratch}. To address these issues, we propose \textbf{\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\% improvement of human preference in Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent Planning.",3784fd84b61d482b52f7ef72aac66bcb886b892b,Semantic Scholar,,, -135,prompt engineering for healthcare methodologies and applications,"['Jiaqi Wang', 'Enze Shi', 'Sigang Yu', 'Zihao Wu', 'Chong Ma', 'Haixing Dai', 'Qiushi Yang', 'Yanqing Kang', 'Jinru Wu', 'Huawen Hu', 'Chenxi Yue', 'Haiyang Zhang', 'Yi-Hsueh Liu', 'Xiang Li', 'Bao Ge', 'Dajiang Zhu', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2304.14670,2023-04-28,,"This review will introduce the latest advances in prompt engineering in the field of natural language processing (NLP) for the medical domain. First, we will provide a brief overview of the development of prompt engineering and emphasize its significant contributions to healthcare NLP applications such as question-answering systems, text summarization, and machine translation. With the continuous improvement of general large language models, the importance of prompt engineering in the healthcare domain is becoming increasingly prominent. The aim of this article is to provide useful resources and bridges for healthcare NLP researchers to better explore the application of prompt engineering in this field. 
We hope that this review can provide new ideas and inspire ample possibilities for research and application in medical NLP.",385376b8aa48c25403f17d6206db7c09b67e1314,Semantic Scholar,,, -136,parafuzz an interpretabilitydriven technique for detecting poisoned samples in nlp,"['Lu Yan', 'Zhuo Zhang', 'Guanhong Tao', 'Kaiyuan Zhang', 'Xuan Chen', 'Guangyu Shen', 'Xiangyu Zhang']",https://arxiv.org/pdf/2308.02122,2023-08-04,,"Backdoor attacks have emerged as a prominent threat to natural language processing (NLP) models, where the presence of specific triggers in the input can lead poisoned models to misclassify these inputs to predetermined target classes. Current detection mechanisms are limited by their inability to address more covert backdoor strategies, such as style-based attacks. In this work, we propose an innovative test-time poisoned sample detection framework that hinges on the interpretability of model predictions, grounded in the semantic meaning of inputs. We contend that triggers (e.g., infrequent words) are not supposed to fundamentally alter the underlying semantic meanings of poisoned samples as they want to stay stealthy. Based on this observation, we hypothesize that while the model's predictions for paraphrased clean samples should remain stable, predictions for poisoned samples should revert to their true labels upon the mutations applied to triggers during the paraphrasing process. We employ ChatGPT, a state-of-the-art large language model, as our paraphraser and formulate the trigger-removal task as a prompt engineering problem. We adopt fuzzing, a technique commonly used for unearthing software vulnerabilities, to discover optimal paraphrase prompts that can effectively eliminate triggers while concurrently maintaining input semantics. Experiments on 4 types of backdoor attacks, including the subtle style backdoors, and 4 distinct datasets demonstrate that our approach surpasses baseline methods, including STRIP, RAP, and ONION, in precision and recall.",3a733c27bff68259b17dc4f835b0d192ac8fab70,Semantic Scholar,,, -137,transforming sentiment analysis in the financial domain with chatgpt,"['G. Fatouros', 'J. Soldatos', 'Kalliopi Kouroumali', 'Georgios Makridis', 'D. Kyriazis']",https://arxiv.org/pdf/2308.07935,2023-08-13,,"Financial sentiment analysis plays a crucial role in decoding market trends and guiding strategic trading decisions. Despite the deployment of advanced deep learning techniques and language models to refine sentiment analysis in finance, this study breaks new ground by investigating the potential of large language models, particularly ChatGPT 3.5, in financial sentiment analysis, with a strong emphasis on the foreign exchange market (forex). Employing a zero-shot prompting approach, we examine multiple ChatGPT prompts on a meticulously curated dataset of forex-related news headlines, measuring performance using metrics such as precision, recall, f1-score, and Mean Absolute Error (MAE) of the sentiment class. Additionally, we probe the correlation between predicted sentiment and market returns as an additional evaluation approach. ChatGPT, compared to FinBERT, a well-established sentiment analysis model for financial texts, exhibited approximately 35\% enhanced performance in sentiment classification and a 36\% higher correlation with market returns. By underlining the significance of prompt engineering, particularly in zero-shot contexts, this study spotlights ChatGPT's potential to substantially boost sentiment analysis in financial applications. 
By sharing the utilized dataset, our intention is to stimulate further research and advancements in the field of financial services.",3c4f1244301577cffff9affc73690669725e7e08,Semantic Scholar,,, -138,enhancing clip with gpt4 harnessing visual descriptions as prompts,"['Mayug Maniparambil', 'Chris Vorster', 'D. Molloy', 'N. Murphy', 'Kevin McGuinness', ""Noel E. O'Connor""]",https://arxiv.org/pdf/2307.11661,2023-07-21,,"Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on downstream datasets. VLMs are 0-shot adapted to a downstream dataset by designing prompts that are relevant to the dataset. Such prompt engineering makes use of domain expertise and a validation dataset. Meanwhile, recent developments in generative pretrained models like GPT-4 mean they can be used as advanced internet search tools. They can also be manipulated to provide visual information in any structure. In this work, we show that GPT-4 can be used to generate text that is visually descriptive and how this can be used to adapt CLIP to downstream tasks. We show considerable improvements in 0-shot transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD (~7%), SUN397 (~4.6%), and CUB (~3.3%) when compared to CLIP's default prompt. We also design a simple few-shot adapter that learns to choose the best possible sentences to construct generalizable classifiers that outperform the recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized fine-grained datasets. The code, prompts, and auxiliary text dataset is available at https://github.com/mayug/VDT-Adapter.",3e0a691277183a6704310af3e4e9e271400612bc,Semantic Scholar,,, -139,large language models as data preprocessors,"['Haochen Zhang', 'Yuyang Dong', 'Chuan Xiao', 'M. Oyamada']",https://arxiv.org/pdf/2308.16361,2023-08-30,,"Large Language Models (LLMs), typified by OpenAI's GPT series and Meta's LLaMA variants, have marked a significant advancement in artificial intelligence. Trained on vast amounts of text data, LLMs are capable of understanding and generating human-like text across a diverse range of topics. This study expands on the applications of LLMs, exploring their potential in data preprocessing, a critical stage in data mining and analytics applications. We delve into the applicability of state-of-the-art LLMs such as GPT-3.5, GPT-4, and Vicuna-13B for error detection, data imputation, schema matching, and entity matching tasks. Alongside showcasing the inherent capabilities of LLMs, we highlight their limitations, particularly in terms of computational expense and inefficiency. We propose an LLM-based framework for data preprocessing, which integrates cutting-edge prompt engineering techniques, coupled with traditional methods like contextualization and feature selection, to improve the performance and efficiency of these models. The effectiveness of LLMs in data preprocessing is evaluated through an experimental study spanning 12 datasets. GPT-4 emerged as a standout, achieving 100\% accuracy or F1 score on 4 datasets, suggesting LLMs' immense potential in these tasks. Despite certain limitations, our study underscores the promise of LLMs in this domain and anticipates future developments to overcome current hurdles.",3e1ca026052d30e3b9677e363616fae23f6616df,Semantic Scholar,,, -140,revisiting prompt engineering via declarative crowdsourcing,"['Aditya G. 
Parameswaran', 'Shreya Shankar', 'Parth Asawa', 'Naman Jain', 'Yujie Wang']",https://arxiv.org/pdf/2308.03854,2023-08-07,,"Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. There has been an advent of toolkits and recipes centered around so-called prompt engineering-the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows, in particular, optimizing for quality, while keeping cost bounded, is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs like crowd workers and leverage ideas from the declarative crowdsourcing literature-including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid-LLM-non-LLM approaches-to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach",3e4991bd206214f596a10e9932cd441fe5bd1f8c,Semantic Scholar,,, -141,demonstrations of the potential of aibased political issue polling,"['Nathan Sanders', 'Alex Ulinich', 'B. Schneier']",https://arxiv.org/pdf/2307.04781,2023-07-10,,"Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world. However, it has been challenged by factors that stress its cost, availability, and accuracy. At the same time, artificial intelligence (AI) chatbots have become compelling stand-ins for human behavior, powered by increasingly sophisticated large language models (LLMs). Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms? We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification. We execute large scale experiments, querying for thousands of simulated responses at a cost far lower than human surveys. We compare simulated data to human issue polling data from the Cooperative Election Study (CES). We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their ideological breakdown (correlation typically>85%). However, it is less successful at anticipating demographic-level differences. Moreover, ChatGPT tends to overgeneralize to new policy issues that arose after its training data was collected, such as US support for involvement in the war in Ukraine. Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain. (Abridged)",407a8d6227ece351d9870f96576d4c287a746166,Semantic Scholar,,, -142,a survey on segment anything model (sam) vision foundation model meets prompt engineering,"['Chaoning Zhang', 'Sheng Zheng', 'Chenghao Li', 'Yu Qiao', 'Taegoo Kang', 'Xinru Shan', 'Chenshuang Zhang', 'Caiyan Qin', 'François Rameau', 'S. 
Bae', 'Choong-Seon Hong']",http://arxiv.org/pdf/2306.06211,2023-05-12,,"Segment anything model (SAM) developed by Meta AI Research has recently attracted significant attention. Trained on a large segmentation dataset of over 1 billion masks, SAM is capable of segmenting any object on a certain image. In the original SAM work, the authors turned to zero-short transfer tasks (like edge detection) for evaluating the performance of SAM. Recently, numerous works have attempted to investigate the performance of SAM in various scenarios to recognize and segment objects. Moreover, numerous projects have emerged to show the versatility of SAM as a foundation model by combining it with other models, like Grounding DINO, Stable Diffusion, ChatGPT, etc. With the relevant papers and projects increasing exponentially, it is challenging for the readers to catch up with the development of SAM. To this end, this work conducts the first yet comprehensive survey on SAM. This is an ongoing project and we intend to update the manuscript on a regular basis. Therefore, readers are welcome to contact us if they complete new works related to SAM so that we can include them in our next version.",42219b26a503d03bf70e9953edc3af94c255cb2a,Semantic Scholar,,, -143,scalable 3d captioning with pretrained models,"['Tiange Luo', 'C. Rockwell', 'Honglak Lee', 'Justin Johnson']",http://arxiv.org/pdf/2306.07279,2023-06-12,,"We introduce Cap3D, an automatic approach for generating descriptive text for 3D objects. This approach utilizes pretrained models from image captioning, image-text alignment, and LLM to consolidate captions from multiple views of a 3D asset, completely side-stepping the time-consuming and costly process of manual annotation. We apply Cap3D to the recently introduced large-scale 3D dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted using 41k human annotations from the same dataset, demonstrates that Cap3D surpasses human-authored descriptions in terms of quality, cost, and speed. Through effective prompt engineering, Cap3D rivals human performance in generating geometric descriptions on 17k collected annotations from the ABO dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions, and show Cap3D outperforms; and benchmark the SOTA including Point-E, Shape-E, and DreamFusion.",4279a38a098d1d359881b73c6a88a112fe93443a,Semantic Scholar,,, -144,interactive data synthesis for systematic vision adaptation via llmsaigcs collaboration,"['Qifan Yu', 'Juncheng Li', 'Wentao Ye', 'Siliang Tang', 'Yueting Zhuang']",http://arxiv.org/pdf/2305.12799,2023-05-22,,"Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images. In parallel, the problem of data scarcity has brought a growing interest in employing AIGC technology for high-quality data expansion. However, this paradigm requires well-designed prompt engineering that cost-less data expansion and labeling remain under-explored. Inspired by LLM's powerful capability in task guidance, we propose a new paradigm of annotated data expansion named as ChatGenImage. The core idea behind it is to leverage the complementary strengths of diverse models to establish a highly effective and user-friendly pipeline for interactive data augmentation. In this work, we extensively study how LLMs communicate with AIGC model to achieve more controllable image generation and make the first attempt to collaborate them for automatic data augmentation for a variety of downstream tasks. 
Finally, we present fascinating results obtained from our ChatGenImage framework and demonstrate the powerful potential of our synthetic data for systematic vision adaptation. Our codes are available at https://github.com/Yuqifan1117/Labal-Anything-Pipeline.",43a55dbd95c9d5cd82de8db276f41adeec4a937d,Semantic Scholar,,, -145,gpt takes the bar exam,"['M. Bommarito', 'D. Katz']",http://arxiv.org/pdf/2212.14402,2022-12-29,,"Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as""the Bar Exam,""as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in""AI?""In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often-referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly-correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.",458147b5f7242c998ec4f33798a59b7c48867329,Semantic Scholar,,, -146,prompts matter insights and strategies for prompt engineering in automated software traceability,"['Alberto D. Rodriguez', 'Katherine R. Dearstyne', 'J. Cleland-Huang']",https://arxiv.org/pdf/2308.00229,2023-08-01,,"Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts, offering our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods on the ranking of candidate links after prompt refinement. 
The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts to effectively harness LLMs for advancing automatic traceability.",4591f6cea22b66eccda0103b83002be45e8216b6,Semantic Scholar,,, -147,humans in humans out on gpt converging toward common sense in both success and failure,"['Philipp E. Koralus', ""Vincent Wang-Ma'scianica""]",http://arxiv.org/pdf/2303.17276,2023-03-30,,"Increase in computational scale and fine-tuning has seen a dramatic improvement in the quality of outputs of large language models (LLMs) like GPT. Given that both GPT-3 and GPT-4 were trained on large quantities of human-generated text, we might ask to what extent their outputs reflect patterns of human thinking, both for correct and incorrect cases. The Erotetic Theory of Reason (ETR) provides a symbolic generative model of both human success and failure in thinking, across propositional, quantified, and probabilistic reasoning, as well as decision-making. We presented GPT-3, GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a recent book-length presentation of ETR, consisting of experimentally verified data-points on human judgment and extrapolated data-points predicted by ETR, with correct inference patterns as well as fallacies and framing effects (the ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3 showed evidence of ETR-predicted outputs for 59% of these examples, rising to 77% in GPT-3.5 and 75% in GPT-4. Remarkably, the production of human-like fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in GPT-4. This suggests that larger and more advanced LLMs may develop a tendency toward more human-like mistakes, as relevant thought patterns are inherent in human-produced training data. According to ETR, the same fundamental patterns are involved both in successful and unsuccessful ordinary reasoning, so that the""bad""cases could paradoxically be learned from the""good""cases. We further present preliminary evidence that ETR-inspired prompt engineering could reduce instances of these mistakes.",45c46687bc8d2dbdea6f92fc14d4dc7a548ddd12,Semantic Scholar,,, -148,large language models are humanlevel prompt engineers,"['Yongchao Zhou', 'Andrei Ioan Muresanu', 'Ziwen Han', 'Keiran Paster', 'Silviu Pitis', 'Harris Chan', 'Jimmy Ba']",http://arxiv.org/pdf/2211.01910,2022-11-03,,"By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the""program,""optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. 
Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer.",4610ffb1b016acaa82a2065ffd1a3adbae1ce722,Semantic Scholar,,, -149,exploring small language models with promptlearning paradigm for efficient domainspecific text classification,"['Hengyu Luo', 'Peng Liu', 'Stefan Esping']",https://arxiv.org/pdf/2309.14779,2023-09-26,,"Domain-specific text classification faces the challenge of scarce labeled data due to the high cost of manual labeling. Prompt-learning, known for its efficiency in few-shot scenarios, is proposed as an alternative to traditional fine-tuning methods. And besides, although large language models (LLMs) have gained prominence, small language models (SLMs, with under 1B parameters) offer significant customizability, adaptability, and cost-effectiveness for domain-specific tasks, given industry constraints. In this study, we investigate the potential of SLMs combined with prompt-learning paradigm for domain-specific text classification, specifically within customer-agent interactions in retail. Our evaluations show that, in few-shot settings when prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M parameters, achieve approximately 75% accuracy with limited labeled data (up to 15% of full data), which shows great potentials of SLMs with prompt-learning. Based on this, We further validate the effectiveness of active few-shot sampling and the ensemble strategy in the prompt-learning pipeline that contribute to a remarkable performance gain. Besides, in zero-shot settings with a fixed model, we underscore a pivotal observation that, although the GPT-3.5-turbo equipped with around 154B parameters garners an accuracy of 55.16%, the power of well designed prompts becomes evident when the FLAN-T5-large, a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves an accuracy exceeding 31% with the optimized prompt, a leap from its sub-18% performance with an unoptimized one. Our findings underscore the promise of prompt-learning in classification tasks with SLMs, emphasizing the benefits of active few-shot sampling, and ensemble strategies in few-shot settings, and the importance of prompt engineering in zero-shot settings.",47d04bcfe0f1bed72d03c68cce76b4cf4be03f11,Semantic Scholar,,, -150,cotbert enhancing unsupervised sentence representation through chainofthought,"['Bowen Zhang', 'Kehua Chang', 'Chunping Li']",https://arxiv.org/pdf/2309.11143,2023-09-20,,"Unsupervised sentence representation learning aims to transform input sentences into fixed-length vectors enriched with intricate semantic information while obviating the reliance on labeled data. Recent progress within this field, propelled by contrastive learning and prompt engineering, has significantly bridged the gap between unsupervised and supervised strategies. Nonetheless, the potential utilization of Chain-of-Thought, remains largely untapped within this trajectory. 
To unlock latent capabilities within pre-trained models, such as BERT, we propose a two-stage approach for sentence representation: comprehension and summarization. Subsequently, the output of the latter phase is harnessed as the vectorized representation of the input sentence. For further performance enhancement, we meticulously refine both the contrastive learning loss function and the template denoising technique for prompt engineering. Rigorous experimentation substantiates our method, CoT-BERT, transcending a suite of robust baselines without necessitating other text representation models or external databases.",4a99a85f071e67bf15ae4bc53ec37af28b650ec4,Semantic Scholar,,, -151,contextualizing problems to student interests at scale in intelligent tutoring system using large language models,"['Gautam Yadav', 'Ying-Jui Tseng', 'Xiaolin Ni']",http://arxiv.org/pdf/2306.00190,2023-05-31,,"Contextualizing problems to align with student interests can significantly improve learning outcomes. However, this task often presents scalability challenges due to resource and time constraints. Recent advancements in Large Language Models (LLMs) like GPT-4 offer potential solutions to these issues. This study explores the ability of GPT-4 in the contextualization of problems within CTAT, an intelligent tutoring system, aiming to increase student engagement and enhance learning outcomes. Through iterative prompt engineering, we achieved meaningful contextualization that preserved the difficulty and original intent of the problem, thereby not altering values or overcomplicating the questions. While our research highlights the potential of LLMs in educational settings, we acknowledge current limitations, particularly with geometry problems, and emphasize the need for ongoing evaluation and research. Future work includes systematic studies to measure the impact of this tool on students' learning outcomes and enhancements to handle a broader range of problems.",4b6df5f9885c9dc0ce3125791fd01824e3cf37b7,Semantic Scholar,,, -152,backdoor attacks for incontext learning with language models,"['Nikhil Kandpal', 'Matthew Jagielski', 'Florian Tramèr', 'Nicholas Carlini']",https://arxiv.org/pdf/2307.14692,2023-07-27,,"Because state-of-the-art language models are expensive to train, most practitioners must make use of one of the few publicly available language models or language model APIs. This consolidation of trust increases the potency of backdoor attacks, where an adversary tampers with a machine learning model in order to make it perform some malicious behavior on inputs that contain a predefined backdoor trigger. We show that the in-context learning ability of large language models significantly complicates the question of developing backdoor attacks, as a successful backdoor must work against various prompting strategies and should not affect the model's general purpose capabilities. We design a new attack for eliciting targeted misclassification when language models are prompted to perform a particular target task and demonstrate the feasibility of this attack by backdooring multiple large language models ranging in size from 1.3 billion to 6 billion parameters. 
Finally we study defenses to mitigate the potential harms of our attack: for example, while in the white-box setting we show that fine-tuning models for as few as 500 steps suffices to remove the backdoor behavior, in the black-box setting we are unable to develop a successful defense that relies on prompt engineering alone.",4d21debb0f5fec315181e0912b5105c6ce4fc67f,Semantic Scholar,,, -153,optimizing prompts for texttoimage generation,"['Y. Hao', 'Zewen Chi', 'Li Dong', 'Furu Wei']",http://arxiv.org/pdf/2212.09611,2022-12-19,,"Well-designed prompts can guide text-to-image models to generate amazing images. However, the performant prompts are often model-specific and misaligned with user input. Instead of laborious human engineering, we propose prompt adaptation, a general framework that automatically adapts original user input to model-preferred prompts. Specifically, we first perform supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then we use reinforcement learning to explore better prompts. We define a reward function that encourages the policy to generate more aesthetically pleasing images while preserving the original user intentions. Experimental results on Stable Diffusion show that our method outperforms manual prompt engineering in terms of both automatic metrics and human preference ratings. Moreover, reinforcement learning further boosts performance, especially on out-of-domain prompts. The pretrained checkpoints are available at https://aka.ms/promptist. The demo can be found at https://aka.ms/promptist-demo.",4d81c33b295c092016ac236cfd32020a5bb70b97,Semantic Scholar,,, -154,is gpt a computational model of emotion detailed analysis,"['Ala Nekouvaght Tak', 'J. Gratch']",https://arxiv.org/pdf/2307.13779,2023-07-25,,"This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.",4dd461b2392a6983d36618744d2384349c4170f9,Semantic Scholar,,, -155,a lightweight framework for highquality code generation,"['Mohammed Latif Siddiq', 'B.K. Casey', 'Joanna C. S. Santos']",https://arxiv.org/pdf/2307.08220,2023-07-17,,"In recent years, the use of automated source code generation utilizing transformer-based generative models has expanded, and these models can generate functional code according to the requirements of the developers. However, recent research revealed that these automatically generated source codes can contain vulnerabilities and other quality issues. Despite researchers' and practitioners' attempts to enhance code generation models, retraining and fine-tuning large language models is time-consuming and resource-intensive. 
Thus, we describe FRANC, a lightweight framework for recommending more secure and high-quality source code derived from transformer-based code generation models. FRANC includes a static filter to make the generated code compilable with heuristics and a quality-aware ranker to sort the code snippets based on a quality score. Moreover, the framework uses prompt engineering to fix persistent quality issues. We evaluated the framework with five Python and Java code generation models and six prompt datasets, including a newly created one in this work (SOEval). The static filter improves 9% to 46% Java suggestions and 10% to 43% Python suggestions regarding compilability. The average improvement over the NDCG@10 score for the ranking system is 0.0763, and the repairing techniques repair the highest 80% of prompts. FRANC takes, on average, 1.98 seconds for Java; for Python, it takes 0.08 seconds.",4e96d7fa9f27857523d786230294fbcc6060212c,Semantic Scholar,,, -156,text2cohort democratizing the nci imaging data commons with natural language cohort discovery,"['Pranav Kulkarni', 'Adway Kanhere', 'P. Yi', 'Vishwa Parekh']",http://arxiv.org/pdf/2305.07637,2023-05-12,,"The Imaging Data Commons (IDC) is a cloud-based database that provides researchers with open access to cancer imaging data, with the goal of facilitating collaboration in medical imaging research. However, querying the IDC database for cohort discovery and access to imaging data has a significant learning curve for researchers due to its complex nature. We developed Text2Cohort, a large language model (LLM) based toolkit to facilitate user-friendly and intuitive natural language cohort discovery in the IDC. Text2Cohorts translates user input into IDC database queries using prompt engineering and autocorrection and returns the query's response to the user. Autocorrection resolves errors in queries by passing the errors back to the model for interpretation and correction. We evaluate Text2Cohort on 50 natural language user inputs ranging from information extraction to cohort discovery. The resulting queries and outputs were verified by two computer scientists to measure Text2Cohort's accuracy and F1 score. Text2Cohort successfully generated queries and their responses with an 88% accuracy and F1 score of 0.94. However, it failed to generate queries for 6/50 (12%) user inputs due to syntax and semantic errors. Our results indicate that Text2Cohort succeeded at generating queries with correct responses, but occasionally failed due to a lack of understanding of the data schema. Despite these shortcomings, Text2Cohort demonstrates the utility of LLMs to enable researchers to discover and curate cohorts using data hosted on IDC with high levels of accuracy using natural language in a more intuitive and user-friendly way.",4ff0fccc922f9da7c818c86c8a13aef23ea08345,Semantic Scholar,,, -157,llms killed the script kiddie how agents supported by large language models change the landscape of network threat testing,"['Stephen Moskal', 'Sam Laney', 'Erik Hemberg', 'Una-May O’Reilly']",https://arxiv.org/pdf/2310.06936,2023-10-11,,"In this paper, we explore the potential of Large Language Models (LLMs) to reason about threats, generate information about tools, and automate cyber campaigns. We begin with a manual exploration of LLMs in supporting specific threat-related actions and decisions. We proceed by automating the decision process in a cyber campaign. 
We present prompt engineering approaches for a plan-act-report loop for one action of a threat campaign and and a prompt chaining design that directs the sequential decision process of a multi-action campaign. We assess the extent of LLM's cyber-specific knowledge w.r.t the short campaign we demonstrate and provide insights into prompt design for eliciting actionable responses. We discuss the potential impact of LLMs on the threat landscape and the ethical considerations of using LLMs for accelerating threat actor capabilities. We report a promising, yet concerning, application of generative AI to cyber threats. However, the LLM's capabilities to deal with more complex networks, sophisticated vulnerabilities, and the sensitivity of prompts are open questions. This research should spur deliberations over the inevitable advancements in LLM-supported cyber adversarial landscape.",50aaac5fdc2b5a33bfd3ba93cdf4e5e302f34297,Semantic Scholar,,, -158,zeroshot nuclei detection via visuallanguage pretrained models,"['Yongjian Wu', 'Yangqiaoyu Zhou', 'Jiya Saiyin', 'Bingzheng Wei', 'Maode Lai', 'Jianzhong Shou', 'Yubo Fan', 'Yan Xu']",http://arxiv.org/pdf/2306.17659,2023-06-30,,"Large-scale visual-language pre-trained models (VLPM) have proven their excellent performance in downstream object detection for natural scenes. However, zero-shot nuclei detection on H\&E images via VLPMs remains underexplored. The large gap between medical images and the web-originated text-image pairs used for pre-training makes it a challenging task. In this paper, we attempt to explore the potential of the object-level VLPM, Grounded Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. Concretely, an automatic prompts design pipeline is devised based on the association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding empirical manual prompts engineering. We further establish a self-training framework, using the automatically designed prompts to generate the preliminary results as pseudo labels from GLIP and refine the predicted boxes in an iterative manner. Our method achieves a remarkable performance for label-free nuclei detection, surpassing other comparison methods. Foremost, our work demonstrates that the VLPM pre-trained on natural image-text pairs exhibits astonishing potential for downstream tasks in the medical field as well. Code will be released at https://github.com/wuyongjianCODE/VLPMNuD.",50bbca86de82d6b72d92bba0ec988b58e644dac3,Semantic Scholar,,, -159,gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench,"['A. Alam', 'Palash Roy', 'Farouq Al-Omari', 'C. Roy', 'B. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2308.13963,2023-08-26,,"With the emergence of Machine Learning, there has been a surge in leveraging its capabilities for problem-solving across various domains. In the code clone realm, the identification of type-4 or semantic clones has emerged as a crucial yet challenging task. Researchers aim to utilize Machine Learning to tackle this challenge, often relying on the BigCloneBench dataset. However, it's worth noting that BigCloneBench, originally not designed for semantic clone detection, presents several limitations that hinder its suitability as a comprehensive training dataset for this specific purpose. Furthermore, CLCDSA dataset suffers from a lack of reusable examples aligning with real-world software systems, rendering it inadequate for cross-language clone detection approaches. 
In this work, we present a comprehensive semantic clone and cross-language clone benchmark, GPTCloneBench by exploiting SemanticCloneBench and OpenAI's GPT-3 model. In particular, using code fragments from SemanticCloneBench as sample inputs along with appropriate prompt engineering for GPT-3 model, we generate semantic and cross-language clones for these specific fragments and then conduct a combination of extensive manual analysis, tool-assisted filtering, functionality testing and automated validation in building the benchmark. From 79,928 clone pairs of GPT-3 output, we created a benchmark with 37,149 true semantic clone pairs, 19,288 false semantic pairs(Type-1/Type-2), and 20,770 cross-language clones across four languages (Java, C, C#, and Python). Our benchmark is 15-fold larger than SemanticCloneBench, has more functional code examples for software systems and programming language support than CLCDSA, and overcomes BigCloneBench's qualities, quantification, and language variety limitations.",50d40d05598e456188a3be42983b8daabd3f04f7,Semantic Scholar,,, -160,symbolic knowledge distillation from general language models to commonsense models,"['Peter West', 'Chandrasekhar Bhagavatula', 'Jack Hessel', 'Jena D. Hwang', 'Liwei Jiang', 'Ronan Le Bras', 'Ximing Lu', 'S. Welleck', 'Yejin Choi']",https://aclanthology.org/2022.naacl-main.341.pdf,2021-10-14,,"The common practice for training commonsense models has gone from–human–to–corpus–to–machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from–machine–to–corpus–to–machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al. 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically–as text–in addition to the neural model. We distill only one aspect–the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model’s commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and will share our new symbolic knowledge graph and commonsense models.",521ccc898395a2818fced22b4cf371b0e5121f94,Semantic Scholar,,, -161,can prompt learning benefit radiology report generation,"['Jun Wang', 'Lixing Zhu', 'A. Bhalerao', 'Yulan He']",https://arxiv.org/pdf/2308.16269,2023-08-30,,"Radiology report generation aims to automatically provide clinically meaningful descriptions of radiology images such as MRI and X-ray. Although great success has been achieved in natural scene image captioning tasks, radiology report generation remains challenging and requires prior medical knowledge. In this paper, we propose PromptRRG, a method that utilizes prompt learning to activate a pretrained model and incorporate prior knowledge. 
Since prompt learning for radiology report generation has not been explored before, we begin with investigating prompt designs and categorise them based on varying levels of knowledge: common, domain-specific and disease-enriched prompts. Additionally, we propose an automatic prompt learning mechanism to alleviate the burden of manual prompt engineering. This is the first work to systematically examine the effectiveness of prompt learning for radiology report generation. Experimental results on the largest radiology report generation benchmark, MIMIC-CXR, demonstrate that our proposed method achieves state-of-the-art performance. Code will be available upon the acceptance.",531678c18fd2c5a9620b68f3550131fc3fd3636c,Semantic Scholar,,, -162,just tell me prompt engineering in business process management,"['Kiran Busch', 'Alexander Rochlitzer', 'Diana Sola', 'H. Leopold']",http://arxiv.org/pdf/2304.07183,2023-04-14,,"GPT-3 and several other language models (LMs) can effectively address various natural language processing (NLP) tasks, including machine translation and text summarization. Recently, they have also been successfully employed in the business process management (BPM) domain, e.g., for predictive process monitoring and process extraction from text. This, however, typically requires fine-tuning the employed LM, which, among others, necessitates large amounts of suitable training data. A possible solution to this problem is the use of prompt engineering, which leverages pre-trained LMs without fine-tuning them. Recognizing this, we argue that prompt engineering can help bring the capabilities of LMs to BPM research. We use this position paper to develop a research agenda for the use of prompt engineering for BPM research by identifying the associated potentials and challenges.",53e7475a3ed0caee37122a9dbdb53d1da0691a33,Semantic Scholar,,, -163,prompt position really matters in fewshot and zeroshot nlu tasks,"['Junyu Mao', 'S. Middleton', 'M. Niranjan']",https://arxiv.org/pdf/2305.14493,,,"Prompt-based models have made remarkable advancements in the fields of zero-shot and few-shot learning, attracting a lot of attention from researchers. Developing an effective prompt template plays a critical role. However, prior studies have mainly focused on prompt vocabulary selection or embedding initialization with the reserved prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position option for natural language understanding tasks. Our findings quantify the substantial impact prompt position has on model performance. We observe that the prompt position used in prior studies is often sub-optimal for both zero-shot and few-shot settings. These findings suggest prompt position optimisation as an interesting research direction alongside the existing focus on prompt engineering.",56a9c96a29f4047be8465244576d731f0df2d9df,Semantic Scholar,,, -164,situated natural language explanations,"['Zining Zhu', 'Hao Jiang', 'Jingfeng Yang', 'Sreyashi Nag', 'Chao Zhang', 'Jie Huang', 'Yifan Gao', 'Frank Rudzicz', 'Bing Yin']",https://arxiv.org/pdf/2308.14115,2023-08-27,,"Natural language is among the most accessible tools for explaining decisions to humans, and large pretrained language models (PLMs) have demonstrated impressive abilities to generate coherent natural language explanations (NLE). The existing NLE research perspectives do not take the audience into account. 
An NLE can have high textual quality, but it might not accommodate audiences' needs and preference. To address this limitation, we propose an alternative perspective, situated NLE, including a situated generation framework and a situated evaluation framework. On the generation side, we propose simple prompt engineering methods that adapt the NLEs to situations. In human studies, the annotators preferred the situated NLEs. On the evaluation side, we set up automated evaluation scores in lexical, semantic, and pragmatic categories. The scores can be used to select the most suitable prompts to generate NLEs. Situated NLE provides a perspective to conduct further research on automatic NLE generations.",57404bd8c71e2b17fce63b49229b278b6a66bf13,Semantic Scholar,,, -165,what's the magic word a control theory of llm prompting,"['Aman Bhargava', 'Cameron Witkowski', 'Manav Shah', 'Matt W. Thomson']",https://arxiv.org/pdf/2310.04444,2023-10-02,,"Prompt engineering is effective and important in the deployment of LLMs but is poorly understood mathematically. Here, we formalize prompt engineering as an optimal control problem on LLMs -- where the prompt is considered a control variable for modulating the output distribution of the LLM. Within this framework, we ask a simple question: given a sequence of tokens, does there always exist a prompt we can prepend that will steer the LLM toward accurately predicting the final token? We call such an optimal prompt the magic word since prepending the prompt causes the LLM to output the correct answer. If magic words exist, can we find them? If so, what are their properties? We offer analytic analysis on the controllability of the self-attention head where we prove a bound on controllability as a function of the singular values of its weight matrices. We take inspiration from control theory to propose a metric called $k-\epsilon$ controllability to characterize LLM steerability. We compute the $k-\epsilon$ controllability of a panel of large language models, including Falcon-7b, Llama-7b, and Falcon-40b on 5000 WikiText causal language modeling tasks. Remarkably, we find that magic words of 10 tokens or less exist for over 97% of WikiText instances surveyed for each model.",57a4f8f69908d3474565d3cd6f58b1ca651ff673,Semantic Scholar,,, -166,batch calibration rethinking calibration for incontext learning and prompt engineering,"['Han Zhou', 'Xingchen Wan', 'Lev Proleev', 'Diana Mincu', 'Jilin Chen', 'Katherine A. Heller', 'Subhrajit Roy']",https://arxiv.org/pdf/2309.17249,2023-09-29,,"Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice verbalizers, and the ICL examples. To address this problem that results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. 
In the few-shot setup, we further extend BC to allow it to learn the contextual bias from labeled data. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding and image classification tasks.",57bb978b8075fd5701a61770c5ee7244c414e8fd,Semantic Scholar,,, -167,jvnv a corpus of japanese emotional speech with verbal content and nonverbal expressions,"['Detai Xin', 'Junfeng Jiang', 'Shinnosuke Takamichi', 'Yuki Saito', 'Akiko Aizawa', 'H. Saruwatari']",https://arxiv.org/pdf/2310.06072,2023-10-09,,"We present the JVNV, a Japanese emotional speech corpus with verbal content and nonverbal vocalizations whose scripts are generated by a large-scale language model. Existing emotional speech corpora lack not only proper emotional scripts but also nonverbal vocalizations (NVs) that are essential expressions in spoken language to express emotions. We propose an automatic script generation method to produce emotional scripts by providing seed words with sentiment polarity and phrases of nonverbal vocalizations to ChatGPT using prompt engineering. We select 514 scripts with balanced phoneme coverage from the generated candidate scripts with the assistance of emotion confidence scores and language fluency scores. We demonstrate the effectiveness of JVNV by showing that JVNV has better phoneme coverage and emotion recognizability than previous Japanese emotional speech corpora. We then benchmark JVNV on emotional text-to-speech synthesis using discrete codes to represent NVs. We show that there still exists a gap between the performance of synthesizing read-aloud speech and emotional speech, and adding NVs in the speech makes the task even harder, which brings new challenges for this task and makes JVNV a valuable resource for relevant works in the future. To our best knowledge, JVNV is the first speech corpus that generates scripts automatically using large language models.",5ce2a1dc9dfa8b4f1368220ac7f7d30a395ffca9,Semantic Scholar,,, -168,red teaming language models with language models,"['Ethan Perez', 'Saffron Huang', 'Francis Song', 'Trevor Cai', 'Roman Ring', 'J. Aslanides', 'A. Glaese', 'Nathan McAleese', 'G. Irving']",https://aclanthology.org/2022.emnlp-main.225.pdf,2022-02-07,,"Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases (“red teaming”) using another LM. We evaluate the target LM’s replies to generated test questions using a classifier trained to detect offensive content, uncovering tens of thousands of offensive replies in a 280B parameter LM chatbot. We explore several methods, from zero-shot generation to reinforcement learning, for generating test cases with varying levels of diversity and difficulty. 
Furthermore, we use prompt engineering to control LM-generated test cases to uncover a variety of other harms, automatically finding groups of people that the chatbot discusses in offensive ways, personal and hospital phone numbers generated as the chatbot’s own contact info, leakage of private training data in generated text, and harms that occur over the course of a conversation. Overall, LM-based red teaming is one promising tool (among many needed) for finding and fixing diverse, undesirable LM behaviors before impacting users.",5d49c7401c5f2337c4cc88d243ae39ed659afe64,Semantic Scholar,,, -169,trash to treasure using texttoimage models to inform the design of physical artefacts,"['Amy Smith', 'Hope Schroeder', 'Ziv Epstein', 'Michael Cook', 'S. Colton', 'A. Lippman']",http://arxiv.org/pdf/2302.00561,2023-02-01,,"Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks within the creative process, such as ideation and visualization, prior to a sculpture-making activity. Thirty participants selected sculpture-making materials and generated three images using the Stable Diffusion text-to-image generator, each with text prompts of their choice, with the aim of informing and then creating a physical sculpture. The majority of participants (23/30) reported that the generated images informed their sculptures, and 28/30 reported interest in using text-to-image models to help them in a creative task in the future. We identify several prompt engineering strategies and find that a participant's prompting strategy relates to their stage in the creative process. We discuss how our findings can inform support for users at different stages of the design process and for using text-to-image models for physical artefact design.",5de60d53bce194b34dae1e531876af9acffba1a3,Semantic Scholar,,, -170,knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms,"['Jiaoayan Chen', 'Luyi Ma', 'Xiaohan Li', 'Nikhil Thakurdesai', 'Jianpeng Xu', 'Jason H. D. Cho', 'Kaushiki Nag', 'Evren Korpeoglu', 'Sushant Kumar', 'Kannan Achan']",http://arxiv.org/pdf/2305.09858,2023-05-17,,"Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system performance by providing structured information about entities and their relationships, such as complementary or substitutable relations between products or product types, which can be utilized in recommender systems. However, relation labeling in KGs remains a challenging task due to the dynamic nature of e-commerce domains and the associated cost of human labor. Recently, breakthroughs in Large Language Models (LLMs) have shown surprising results in numerous natural language processing tasks. In this paper, we conduct an empirical study of LLMs for relation labeling in e-commerce KGs, investigating their powerful learning capabilities in natural language and effectiveness in predicting relations between product types with limited labeled data. We evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets, demonstrating their ability to achieve competitive performance compared to humans on relation labeling tasks using just 1 to 5 labeled examples per relation. 
Additionally, we experiment with different prompt engineering techniques to examine their impact on model performance. Our results show that LLMs significantly outperform existing KG completion models in relation labeling for e-commerce KGs and exhibit performance strong enough to replace human labeling.",5e8dd82419f78025093acbec3ba2e345fff85d11,Semantic Scholar,,, -171,responsible task automation empowering large language models as responsible task automators,"['Zhizheng Zhang', 'Xiaoyi Zhang', 'Wenxuan Xie', 'Yan Lu']",http://arxiv.org/pdf/2306.01242,2023-06-02,,"The recent success of Large Language Models (LLMs) signifies an impressive stride towards artificial general intelligence. They have shown a promising prospect in automatically completing tasks upon user instructions, functioning as brain-like coordinators. The associated risks will be revealed as we delegate an increasing number of tasks to machines for automated completion. A big question emerges: how can we make machines behave responsibly when helping humans automate tasks as personal copilots? In this paper, we explore this question in depth from the perspectives of feasibility, completeness and security. In specific, we present Responsible Task Automation (ResponsibleTA) as a fundamental framework to facilitate responsible collaboration between LLM-based coordinators and executors for task automation with three empowered capabilities: 1) predicting the feasibility of the commands for executors; 2) verifying the completeness of executors; 3) enhancing the security (e.g., the protection of users' privacy). We further propose and compare two paradigms for implementing the first two capabilities. One is to leverage the generic knowledge of LLMs themselves via prompt engineering while the other is to adopt domain-specific learnable models. Moreover, we introduce a local memory mechanism for achieving the third capability. We evaluate our proposed ResponsibleTA on UI task automation and hope it could bring more attentions to ensuring LLMs more responsible in diverse scenarios. The research project homepage is at https://task-automation-research.github.io/responsible_task_automation.",615962d8969c8e0ffe43319689dce6c50cbf1f29,Semantic Scholar,,, -172,peace prompt engineering automation for clipseg enhancement in aerial robotics,"['Haechan Mark Bong', 'Rongge Zhang', 'Ricardo de Azambuja', 'Giovanni Beltrame']",https://arxiv.org/pdf/2310.00085,2023-09-29,,"From industrial to space robotics, safe landing is an essential component for flight operations. With the growing interest in artificial intelligence, we direct our attention to learning based safe landing approaches. This paper extends our previous work, DOVESEI, which focused on a reactive UAV system by harnessing the capabilities of open vocabulary image segmentation. Prompt-based safe landing zone segmentation using an open vocabulary based model is no more just an idea, but proven to be feasible by the work of DOVESEI. However, a heuristic selection of words for prompt is not a reliable solution since it cannot take the changing environment into consideration and detrimental consequences can occur if the observed environment is not well represented by the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation and engineering to adapt to data distribution shifts. 
Our system is capable of performing safe landing operations with collision avoidance at altitudes as low as 20 meters using only monocular cameras and image segmentation. We take advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the terrain segmentation between frames in a video stream. PEACE shows promising improvements in prompt generation and engineering for aerial images compared to the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our system was able to improve successful safe landing zone selections by 58.62% compared to using only DOVESEI. All the source code is open source and available online.",615ef4518f9a41a10881b66ce10f0eb490e2d75c,Semantic Scholar,,,
-173,datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation,"['Seugnjun Lee', 'Hyeonseok Moon', 'Chanjun Park', 'Heu-Jeoung Lim']",http://arxiv.org/pdf/2306.14514,2023-06-26,,"In this paper, we introduce a data-driven approach for Formality-Sensitive Machine Translation (FSMT) that caters to the unique linguistic properties of four target languages. Our methodology centers on two core strategies: 1) language-specific data handling, and 2) synthetic data generation using large-scale language models and empirical prompt engineering. This approach demonstrates a considerable improvement over the baseline, highlighting the effectiveness of data-centric techniques. Our prompt engineering strategy further improves performance by producing superior synthetic translation examples.",632dc69c2e504d693533fc434b8122a2a8a42844,Semantic Scholar,,,
-174,forgetful large language models lessons learned from using llms in robot programming,"['Juo-Tung Chen', 'Chien-Ming Huang']",https://arxiv.org/pdf/2310.06646,2023-10-10,,"Large language models offer new ways of empowering people to program robot applications-namely, code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being""forgetful""of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications.",6474370fe46e38896288305c35d3058a403b1db2,Semantic Scholar,,,
-175,large language models can be used to effectively scale spear phishing campaigns,['Julian Hazell'],http://arxiv.org/pdf/2305.06972,2023-05-11,,"Recent progress in artificial intelligence (AI), particularly in the domain of large language models (LLMs), has resulted in powerful and versatile dual-use systems. Indeed, cognition can be put towards a wide variety of tasks, some of which can result in harm. This study investigates how LLMs can be used for spear phishing, a form of cybercrime that involves manipulating targets into divulging sensitive information. 
I first explore LLMs' ability to assist with the reconnaissance and message generation stages of a successful spear phishing attack, where I find that advanced LLMs are capable of improving cybercriminals' efficiency during these stages. To explore how LLMs can be used to scale spear phishing campaigns, I then create unique spear phishing messages for over 600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 models. My findings reveal that these messages are not only realistic but also cost-effective, with each email costing only a fraction of a cent to generate. Next, I demonstrate how basic prompt engineering can circumvent safeguards installed in LLMs by the reinforcement learning from human feedback fine-tuning process, highlighting the need for more robust governance interventions aimed at preventing misuse. To address these evolving risks, I propose two potential solutions: structured access schemes, such as application programming interfaces, and LLM-based defensive systems.",661e8ac4908a9d2a85835245ea99b6a314cc4a60,Semantic Scholar,,, -176,prompt engineering for students of medicine and their teachers,['Thomas F. Heston'],https://arxiv.org/pdf/2308.11628,2023-08-08,,"""Prompt Engineering for Students of Medicine and Their Teachers""brings the principles of prompt engineering for large language models such as ChatGPT and Google Bard to medical education. This book contains a comprehensive guide to prompt engineering to help both teachers and students improve education in the medical field. Just as prompt engineering is critical in getting good information out of an AI, it is also critical to get students to think and understand more deeply. The principles of prompt engineering that we have learned from AI systems have the potential to simultaneously revolutionize learning in the healthcare field. The book analyzes from multiple angles the anatomy of a good prompt for both AI models and students. The different types of prompts are examined, showing how each style has unique characteristics and applications. The principles of prompt engineering, applied properly, are demonstrated to be effective in teaching across the diverse fields of anatomy, physiology, pathology, pharmacology, and clinical skills. Just like ChatGPT and similar large language AI models, students need clear and detailed prompting in order for them to fully understand a topic. Using identical principles, a prompt that gets good information from an AI will also cause a student to think more deeply and accurately. The process of prompt engineering facilitates this process. Because each chapter contains multiple examples and key takeaways, it is a practical guide for implementing prompt engineering in the learning process. It provides a hands-on approach to ensure readers can immediately apply the concepts they learn",6862113724aa1a578c5d4e0ec7f1d9bed4288241,Semantic Scholar,,, -177,towards zeroshot and fewshot table question answering using gpt3,"['Pragya Srivastava', 'T. Ganu', 'Saikat Guha']",https://arxiv.org/pdf/2210.17284,2022-10-31,,"We present very early results on using GPT-3 to perform question answering on tabular data. We find that stock pre-trained GPT-3 is able to zero-shot learn the table structure from a serialized JSON array-of-arrays representation, and able to answer lookup queries and simple comparison questions in natural language without any fine-tuning. We further find that simple prompt engineering to include few-shot static Q&A examples significantly improves accuracy. 
Lastly, we find that intermixing passage text improves accuracy even further on heterogeneous data. We apply our approach on a novel dataset of simple tables in newspaper infographics with promising results. Overall, we find much cause for optimism in this basic approach.",6b8f26678785ebd7b7b27984af3cb9a273b722b0,Semantic Scholar,,, -178,exploring the effectiveness of dataset synthesis an application of apple detection in orchards,"['A. V. Meekeren', 'Maya Aghaei', 'K. Dijkstra']",http://arxiv.org/pdf/2306.11763,2023-06-20,,"Deep object detection models have achieved notable successes in recent years, but one major obstacle remains: the requirement for a large amount of training data. Obtaining such data is a tedious process and is mainly time consuming, leading to the exploration of new research avenues like synthetic data generation techniques. In this study, we explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection and compare it to a baseline model trained on real-world data. After creating a dataset of realistic apple trees with prompt engineering and utilizing a previously trained Stable Diffusion model, the custom dataset was annotated and evaluated by training a YOLOv5m object detection model to predict apples in a real-world apple detection dataset. YOLOv5m was chosen for its rapid inference time and minimal hardware demands. Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images when evaluated on a set of real-world images. However, these findings remain highly promising, as the average precision difference is only 0.09 and 0.06, respectively. Qualitative results indicate that the model can accurately predict the location of apples, except in cases of heavy shading. These findings illustrate the potential of synthetic data generation techniques as a viable alternative to the collection of extensive training data for object detection models.",71020779c6eeb9c76fe0a0eb2155d1d4f7d29ff9,Semantic Scholar,,, -179,unsupervised prompt learning for visionlanguage models,"['Hao Huang', 'Jack Chu', 'Fangyun Wei']",http://arxiv.org/pdf/2204.03649,2022-04-07,,"Contrastive vision-language models like CLIP have shown great progress in transfer learning. In the inference stage, the proper text description, also known as prompt, needs to be carefully designed to correctly classify the given images. In order to avoid laborious prompt engineering, recent works such as CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for downstream image recognition tasks on a small set of labeled data. Though promising improvements are achieved, requiring labeled data from the target datasets may restrict the scalability. In this paper, we explore a different scenario, in which the labels of the target datasets are unprovided, and we present an unsupervised prompt learning (UPL) approach to avoid prompt engineering while simultaneously improving transfer performance of CLIP-like vision-language models. As far as we know, UPL is the first work to introduce unsupervised learning into prompt learning. Experimentally, our UPL outperforms original CLIP with prompt engineering on ImageNet as well as other 10 datasets. An enhanced version of UPL is even competitive with the 8-shot CoOp and the 8-shot TIP-Adapter on most datasets. 
Code and models are available at https://github.com/tonyhuang2022/UPL.",732627c703a9dbc78d9384f1be4c791c3a554391,Semantic Scholar,,, -180,review of large vision models and visual prompt engineering,"['Jiaqi Wang', 'Zheng Liu', 'Lin Zhao', 'Zihao Wu', 'Chong Ma', 'Sigang Yu', 'Haixing Dai', 'Qiushi Yang', 'Yi-Hsueh Liu', 'Songyao Zhang', 'Enze Shi', 'Yi Pan', 'Tuo Zhang', 'Dajiang Zhu', 'Xiang Li', 'Xi Jiang', 'Bao Ge', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2307.00855,2023-07-03,,"Visual prompt engineering is a fundamental technology in the field of visual and image Artificial General Intelligence, serving as a key component for achieving zero-shot capabilities. As the development of large vision models progresses, the importance of prompt engineering becomes increasingly evident. Designing suitable prompts for specific visual tasks has emerged as a meaningful research direction. This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, exploring the latest advancements in visual prompt engineering. We present influential large models in the visual domain and a range of prompt engineering methods employed on these models. It is our hope that this review provides a comprehensive and systematic description of prompt engineering methods based on large visual models, offering valuable insights for future researchers in their exploration of this field.",7619a98ef077c8f75e0bfb98953457649209e07e,Semantic Scholar,,, -181,grimm in wonderland prompt engineering with midjourney to illustrate fairytales,['M. Ruskov'],https://arxiv.org/pdf/2302.08961,2023-02-17,,"The quality of text-to-image generation is continuously improving, yet the boundaries of its applicability are still unclear. In particular, refinement of the text input with the objective of achieving better results - commonly called prompt engineering - so far seems to have not been geared towards work with pre-existing texts. We investigate whether text-to-image generation and prompt engineering could be used to generate basic illustrations of popular fairytales. Using Midjourney v4, we engage in action research with a dual aim: to attempt to generate 5 believable illustrations for each of 5 popular fairytales, and to define a prompt engineering process that starts from a pre-existing text and arrives at an illustration of it. We arrive at a tentative 4-stage process: i) initial prompt, ii) composition adjustment, iii) style refinement, and iv) variation selection. We also discuss three reasons why the generation model struggles with certain illustrations: difficulties with counts, bias from stereotypical configurations and inability to depict overly fantastic situations. Our findings are not limited to the specific generation model and are intended to be generalisable to future ones.",76bb8f753c40d66f435015f2776c672b3999d8b5,Semantic Scholar,,, -182,cheapfake detection with llm using prompt engineering,"['Guangyang Wu', 'Weijie Wu', 'Xiaohong Liu', 'Kele Xu', 'Tianjiao Wan', 'Wenyi Wang']",https://arxiv.org/pdf/2306.02776,2023-06-05,,"The misuse of real photographs with conflicting image captions in news items is an example of the out-of-context (OOC) misuse of media. In order to detect OOC media, individuals must determine the accuracy of the statement and evaluate whether the triplet (i.e., the image and two captions) relates to the same event. 
This paper presents a novel learnable approach for detecting OOC media in ICME'23 Grand Challenge on Detecting Cheapfakes. The proposed method is based on the COSMOS structure, which assesses the coherence between an image and captions, as well as between two captions. We enhance the baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a feature extractor. Specifically, we propose an innovative approach to feature extraction utilizing prompt engineering to develop a robust and reliable feature extractor with GPT3.5 model. The proposed method captures the correlation between two captions and effectively integrates this module into the COSMOS baseline model, which allows for a deeper understanding of the relationship between captions. By incorporating this module, we demonstrate the potential for significant improvements in cheap-fakes detection performance. The proposed methodology holds promising implications for various applications such as natural language processing, image captioning, and text-to-image synthesis. Docker for submission is available at https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.",790c247dabe004f022ef9330fb59c36a77bdbbb2,Semantic Scholar,,, -183,making language models better tool learners with execution feedback,"['Shuofei Qiao', 'Honghao Gui', 'Huajun Chen', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.13068,2023-05-22,,"Tools serve as pivotal interfaces that enable humans to understand and reshape the world. With the advent of foundational models, AI systems can utilize tools to expand their capabilities and interact with the world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce language models to utilize tools indiscriminately, as complex problems often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the language model to selectively use tools by decreasing the model's dependency on tools while enhancing the performance. Code and datasets will be available in https://github.com/zjunlp/trice.",7919cb1a1dcf70ed7803c43a71d43dba696ef149,Semantic Scholar,,, -184,chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt,"['Fatemeh Nazary', 'Yashar Deldjoo', 'T. D. Noia']",https://arxiv.org/pdf/2308.09731,2023-08-17,,"This study presents an innovative approach to the application of large language models (LLMs) in clinical decision-making, focusing on OpenAI's ChatGPT. Our approach introduces the use of contextual prompts-strategically designed to include task description, feature description, and crucially, integration of domain knowledge-for high-quality binary classification tasks even in data-scarce scenarios. The novelty of our work lies in the utilization of domain knowledge, obtained from high-performing interpretable ML models, and its seamless incorporation into prompt design. 
By viewing these ML models as medical experts, we extract key insights on feature importance to aid in decision-making processes. This interplay of domain knowledge and AI holds significant promise in creating a more insightful diagnostic tool. Additionally, our research explores the dynamics of zero-shot and few-shot prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT with traditional supervised ML models in different data conditions, we aim to provide insights into the effectiveness of prompt engineering strategies under varied data availability. In essence, this paper bridges the gap between AI and healthcare, proposing a novel methodology for LLMs application in clinical decision support systems. It highlights the transformative potential of effective prompt design, domain knowledge integration, and flexible learning approaches in enhancing automated decision-making.",793eb805800c4af0b06260079e178efa0377b9d7,Semantic Scholar,,, -185,federated large language model a position paper,"['Chaochao Chen', 'Xiaohua Feng', 'Jun Zhou', 'Jianwei Yin', 'Xiaolin Zheng']",https://arxiv.org/pdf/2307.08925,2023-07-18,,"Large scale language models (LLM) have received significant attention and found diverse applications across various domains, but their development encounters challenges in real-world scenarios. These challenges arise due to the scarcity of public domain data availability and the need to maintain privacy with respect to private domain data. To address these issues, federated learning (FL) has emerged as a promising technology that enables collaborative training of shared models while preserving decentralized data. We propose the concept of federated LLM, which comprises three key components, i.e., federated LLM pre-training, federated LLM fine-tuning, and federated LLM prompt engineering. For each component, we discuss its advantage over traditional LLM training methods and propose specific engineering strategies for implementation. Furthermore, we explore the novel challenges introduced by the integration of FL and LLM. We analyze existing solutions and identify potential obstacles faced by these solutions within the context of federated LLM.",7aad760762c4a10dfbc2d3391eb8bdb28c80b236,Semantic Scholar,,, -186,large language models in fault localisation,"['Yonghao Wu', 'Zheng Li', 'J Zhang', 'Mike Papadakis', 'M. Harman', 'Yong Liu']",https://arxiv.org/pdf/2308.15276,2023-08-29,,"Large Language Models (LLMs) have shown promise in multiple software engineering tasks including code generation, program repair, code summarisation, and test generation. Fault localisation is instrumental in enabling automated debugging and repair of programs and was prominently featured as a highlight during the launch event of ChatGPT-4. Nevertheless, the performance of LLMs compared to state-of-the-art methods, as well as the impact of prompt design and context length on their efficacy, remains unclear. To fill this gap, this paper presents an in-depth investigation into the capability of ChatGPT-3.5 and ChatGPT-4, the two state-of-the-art LLMs, on fault localisation. Using the widely-adopted large-scale Defects4J dataset, we compare the two LLMs with the existing fault localisation techniques. We also investigate the consistency of LLMs in fault localisation, as well as how prompt engineering and the length of code context affect the fault localisation effectiveness. 
Our findings demonstrate that within function-level context, ChatGPT-4 outperforms all the existing fault localisation methods. Additional error logs can further improve ChatGPT models' localisation accuracy and consistency, with an average 46.9% higher accuracy over the state-of-the-art baseline SmartFL on the Defects4J dataset in terms of TOP-1 metric. However, when the code context of the Defects4J dataset expands to the class-level, ChatGPT-4's performance suffers a significant drop, with 49.9% lower accuracy than SmartFL under TOP-1 metric. These observations indicate that although ChatGPT can effectively localise faults under specific conditions, limitations are evident. Further research is needed to fully harness the potential of LLMs like ChatGPT for practical fault localisation applications.",7d3d98707182e0733d8cf5ee763314c60d638f4a,Semantic Scholar,,, -187,evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery,"['Debadutta Dash', 'Rahul Thapa', 'J. Banda', 'Akshay Swaminathan', 'Morgan Cheatham', 'M. Kashyap', 'N. Kotecha', 'Jonathan H. Chen', 'S. Gombar', 'L. Downing', 'Rachel A. Pedreira', 'Ethan Goh', 'A. Arnaout', 'Garret K. Morris', 'H. Magon', 'M. Lungren', 'E. Horvitz', 'N. Shah']",http://arxiv.org/pdf/2304.13714,2023-04-26,,"Despite growing interest in using large language models (LLMs) in healthcare, current explorations do not assess the real-world utility and safety of LLMs in clinical settings. Our objective was to determine whether two LLMs can serve information needs submitted by physicians as questions to an informatics consultation service in a safe and concordant manner. Sixty six questions from an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple prompts. 12 physicians assessed the LLM responses' possibility of patient harm and concordance with existing reports from an informatics consultation service. Physician assessments were summarized based on majority vote. For no questions did a majority of physicians deem either LLM response as harmful. For GPT-3.5, responses to 8 questions were concordant with the informatics consult report, 20 discordant, and 9 were unable to be assessed. There were 29 responses with no majority on""Agree"",""Disagree"", and""Unable to assess"". For GPT-4, responses to 13 questions were concordant, 15 discordant, and 3 were unable to be assessed. There were 35 responses with no majority. Responses from both LLMs were largely devoid of overt harm, but less than 20% of the responses agreed with an answer from an informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom-tailoring of general purpose models.",80785017029cab501fcdb90b98985cd2b36e1fb8,Semantic Scholar,,, -188,exploring the intersection of large language models and agentbased modeling via prompt engineering,['Edward Junprung'],https://arxiv.org/pdf/2308.07411,2023-08-14,,"The final frontier for simulation is the accurate representation of complex, real-world social systems. 
While agent-based modeling (ABM) seeks to study the behavior and interactions of agents within a larger system, it is unable to faithfully capture the full complexity of human-driven behavior. Large language models (LLMs), like ChatGPT, have emerged as a potential solution to this bottleneck by enabling researchers to explore human-driven interactions in previously unimaginable ways. Our research investigates simulations of human interactions using LLMs. Through prompt engineering, inspired by Park et al. (2023), we present two simulations of believable proxies of human behavior: a two-agent negotiation and a six-agent murder mystery game.",831fd0c18d10e42330cca36e0c5769762fb419e7,Semantic Scholar,,,
-189,a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models,"['J. Allingham', 'Jie Ren', 'Michael W. Dusenberry', 'J. Liu', 'Xiuye Gu', 'Yin Cui', 'Dustin Tran', 'Balaji Lakshminarayanan']",https://arxiv.org/pdf/2302.06235,2023-02-13,,"Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prompt engineering typically requires hand-crafting a set of prompts for individual downstream tasks. In this work, we aim to automate this prompt engineering and improve zero-shot accuracy through prompt ensembling. In particular, we ask""Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?"". We demonstrate that this is possible. In doing so, we identify several pathologies in a naive prompt scoring method where the score can be easily overconfident due to biases in pre-training and test data, and we propose a novel prompt scoring method that corrects for the biases. Using our proposed scoring method to create a weighted average prompt ensemble, our method outperforms equal average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its variants, and 11 fine-grained classification benchmarks, all while being fully automatic, optimization-free, and not requiring access to labeled validation data.",877e27a1d89095fcf686ab675f62a8432d3285ee,Semantic Scholar,,,
-190,"multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering","['Angus Addlesee', ""Weronika Sieińska"", 'Nancie Gunson', 'Daniel Hernández García', 'C. Dondrup', 'Oliver Lemon']",https://arxiv.org/pdf/2308.15231,2023-08-29,,"This paper evaluates the extent to which current LLMs can capture task-oriented multi-party conversations (MPCs). We have recorded and transcribed 29 MPCs between patients, their companions, and a social robot in a hospital. We then annotated this corpus for multi-party goal-tracking and intent-slot recognition. People share goals, answer each other’s goals, and provide other people’s goals in MPCs - none of which occur in dyadic interactions. To understand user goals in MPCs, we compared three methods in zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks to train DialogLM using LED, and employed prompt engineering techniques with GPT-3.5-turbo, to determine which approach can complete this novel task with limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot setting. 
The ‘reasoning’ style prompt, when given 7% of the corpus as example annotated conversations, was the best performing method. It correctly annotated 62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition MPCs. A ‘story’ style prompt increased model hallucination, which could be detrimental if deployed in safety-critical settings. We conclude that multi-party conversations still challenge state-of-the-art LLMs.",8a1a8290f7d42b0ce60445a4c0130ef737b3ff69,Semantic Scholar,,, -191,large language models for propaganda detection,"['Kilian Sprenkamp', 'Daniel Gordon Jones', 'Liudmila Zavolokina']",https://arxiv.org/pdf/2310.06422,2023-10-10,,"The prevalence of propaganda in our digital society poses a challenge to societal harmony and the dissemination of truth. Detecting propaganda through NLP in text is challenging due to subtle manipulation techniques and contextual dependencies. To address this issue, we investigate the effectiveness of modern Large Language Models (LLMs) such as GPT-3 and GPT-4 for propaganda detection. We conduct experiments using the SemEval-2020 task 11 dataset, which features news articles labeled with 14 propaganda techniques as a multi-label classification problem. Five variations of GPT-3 and GPT-4 are employed, incorporating various prompt engineering and fine-tuning strategies across the different models. We evaluate the models' performance by assessing metrics such as $F1$ score, $Precision$, and $Recall$, comparing the results with the current state-of-the-art approach using RoBERTa. Our findings demonstrate that GPT-4 achieves comparable results to the current state-of-the-art. Further, this study analyzes the potential and challenges of LLMs in complex tasks like propaganda detection.",8a419947c46b8fa491ec613664372e376eb9f0c6,Semantic Scholar,,, -192,"ai foundation models for weather and climate applications, design, and implementation","['S. K. Mukkavilli', 'Daniel Salles Civitarese', 'J. Schmude', 'Johannes Jakubik', 'Anne Jones', 'Nam Nguyen', 'C. Phillips', 'Sujit Roy', 'Shraddha Singh', 'Campbell Watson', 'R. Ganti', 'Hendrik Hamann', 'Udaysankar Nair', 'Rahul Ramachandran', 'Kommy Weldemariam']",https://arxiv.org/pdf/2309.10808,2023-09-19,,"Machine learning and deep learning methods have been widely explored in understanding the chaotic behavior of the atmosphere and furthering weather forecasting. There has been increasing interest from technology companies, government institutions, and meteorological agencies in building digital twins of the Earth. Recent approaches using transformers, physics-informed machine learning, and graph neural networks have demonstrated state-of-the-art performance on relatively narrow spatiotemporal scales and specific tasks. With the recent success of generative artificial intelligence (AI) using pre-trained transformers for language modeling and vision with prompt engineering and fine-tuning, we are now moving towards generalizable AI. In particular, we are witnessing the rise of AI foundation models that can perform competitively on multiple domain-specific downstream tasks. Despite this progress, we are still in the nascent stages of a generalizable AI model for global Earth system models, regional climate models, and mesoscale weather models. Here, we review current state-of-the-art AI approaches, primarily from transformer and operator learning literature in the context of meteorology. 
We provide our perspective on criteria for success towards a family of foundation models for nowcasting and forecasting weather and climate predictions. We also discuss how such models can perform competitively on downstream tasks such as downscaling (super-resolution), identifying conditions conducive to the occurrence of wildfires, and predicting consequential meteorological phenomena across various spatiotemporal scales such as hurricanes and atmospheric rivers. In particular, we examine current AI methodologies and contend they have matured enough to design and implement a weather foundation model.",8a8ac2467aee4d70866a1b2410e59565ef6ae292,Semantic Scholar,,,
-193,llm4vv developing llmdriven testsuite for compiler validation,"['Christian Munley', 'Aaron Jarmusch', 'Sunita Chandrasekaran']",https://arxiv.org/pdf/2310.04963,2023-10-08,,"Large language models (LLMs) are a new and powerful tool for a wide span of applications involving natural language and demonstrate impressive code generation abilities. In this paper, we explore the capability of state-of-the-art LLMs, including closed-source options like OpenAI GPT-4 and open-source alternatives like Meta AI Codellama, to automatically generate tests and use these tests to validate and verify compiler implementations of a directive-based programming paradigm, OpenACC. Our approach entails exploring various prompt engineering techniques including a code template, retrieval-augmented generation (RAG) with code template, expressive prompt using RAG with code template, one-shot example, and RAG with one-shot example. This paper focusses on (a) exploring the capabilities of the latest LLMs for code generation, (b) investigating prompt and fine tuning methods, and (c) analyzing the outcome of LLMs generated tests",8c52b3bbe5897ba3f42b38c5bfc33bbd48f9a1f2,Semantic Scholar,,,
-194,looking for a handsome carpenter! debiasing gpt3 job advertisements,"['Conrad Borchers', 'Dalia Sara Gala', 'Ben Gilburt', 'Eduard Oravkin', 'Wilfried Bounsi', 'Yuki M. Asano', 'Hannah Rose Kirk']",https://arxiv.org/pdf/2205.11374,2022-05-23,,"The growing capability and availability of generative language models has enabled a wide range of new downstream tasks. Academic research has identified, quantified and mitigated biases present in language models but is rarely tailored to downstream tasks where wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts gives no significant improvement to bias, nor realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.",8c90bfe05c06fd47eaec0f5b1662e06862572afe,Semantic Scholar,,,
-195,"voice visual oracle for interaction, conversation, and explanation","['Donggang Jia', 'Alexandra Irger', 'Ondrej Strnad', 'Johanna Björklund', 'A. Ynnerman', 'I. Viola']",http://arxiv.org/pdf/2304.04083,2023-04-08,,"We present VOICE, a novel approach for connecting large language models' (LLM) conversational capabilities with interactive exploratory visualization. VOICE introduces several innovative technical contributions that drive our conversational visualization framework. 
Our foundation is a pack-of-bots that can perform specific tasks, such as assigning tasks, extracting instructions, and generating coherent content. We employ fine-tuning and prompt engineering techniques to tailor bots' performance to their specific roles and accurately respond to user queries, and a new prompt-based iterative scene-tree generation establishes a coupling with a structural model. Our text-to-visualization method generates a flythrough sequence matching the content explanation. Finally, 3D natural language interaction provides capabilities to navigate and manipulate the 3D models in real-time. The VOICE framework can receive arbitrary voice commands from the user and responds verbally, tightly coupled with corresponding visual representation with low latency and high accuracy. We demonstrate the effectiveness and high generalizability potential of our approach by applying it to two distinct domains: analyzing three 3D molecular models with multi-scale and multi-instance attributes, and showcasing its effectiveness on a cartographic map visualization. A free copy of this paper and all supplemental materials are available at https://osf.io/g7fbr/.",8ca384547bb4b21b7f38d478119bf3168eb9c9cd,Semantic Scholar,,,
-196,is gpt4 a good trader,['Bingzhe Wu'],https://arxiv.org/pdf/2309.10982,2023-09-20,,"Recently, large language models (LLMs), particularly GPT-4, have demonstrated significant capabilities in various planning and reasoning tasks \cite{cheng2023gpt4,bubeck2023sparks}. Motivated by these advancements, there has been a surge of interest among researchers to harness the capabilities of GPT-4 for the automated design of quantitative factors that do not overlap with existing factor libraries, with an aspiration to achieve alpha returns \cite{webpagequant}. In contrast to these work, this study aims to examine the fidelity of GPT-4's comprehension of classic trading theories and its proficiency in applying its code interpreter abilities to real-world trading data analysis. Such an exploration is instrumental in discerning whether the underlying logic GPT-4 employs for trading is intrinsically reliable. Furthermore, given the acknowledged interpretative latitude inherent in most trading theories, we seek to distill more precise methodologies of deploying these theories from GPT-4's analytical process, potentially offering invaluable insights to human traders. To achieve this objective, we selected daily candlestick (K-line) data from specific periods for certain assets, such as the Shanghai Stock Index. Through meticulous prompt engineering, we guided GPT-4 to analyze the technical structures embedded within this data, based on specific theories like the Elliott Wave Theory. We then subjected its analytical output to manual evaluation, assessing its interpretative depth and accuracy vis-à-vis these trading theories from multiple dimensions. The results and findings from this study could pave the way for a synergistic amalgamation of human expertise and AI-driven insights in the realm of trading.",8efcdc15c5f028f968d6a004a64593245c49927b,Semantic Scholar,,,
-197,"right to be forgotten in the era of large language models implications, challenges, and solutions","['Dawen Zhang', 'Pamela Finckenberg-Broman', 'Thong Hoang', 'Shidong Pan', 'Zhenchang Xing', 'M. Staples', 'Xiwei Xu']",https://arxiv.org/pdf/2307.03941,2023-07-08,,"The Right to be Forgotten (RTBF) was first established as the result of the ruling of Google Spain SL, Google Inc. 
v AEPD, Mario Costeja González, and was later included as the Right to Erasure under the General Data Protection Regulation (GDPR) of European Union to allow individuals the right to request personal data be deleted by organizations. Specifically for search engines, individuals can send requests to organizations to exclude their information from the query results. It was a significant emergent right as the result of the evolution of technology. With the recent development of Large Language Models (LLMs) and their use in chatbots, LLM-enabled software systems have become popular. But they are not excluded from the RTBF. Compared with the indexing approach used by search engines, LLMs store, and process information in a completely different way. This poses new challenges for compliance with the RTBF. In this paper, we explore these challenges and provide our insights on how to implement technical solutions for the RTBF, including the use of differential privacy, machine unlearning, model editing, and prompt engineering. With the rapid advancement of AI and the increasing need of regulating this powerful technology, learning from the case of RTBF can provide valuable lessons for technical practitioners, legal experts, organizations, and authorities.",8f93f95e093aab16e594b4a246a205007e107c7a,Semantic Scholar,,,
-198,cona a novel contextaware instruction paradigm for communication using large language model,"['Nan Zhou', 'Xinghui Tao', 'Xi Chen']",http://arxiv.org/pdf/2305.18620,2023-05-26,,"We introduce CONA, a novel context-aware instruction paradigm for effective knowledge dissemination using generative pre-trained transformer (GPT) models. CONA is a flexible framework designed to leverage the capabilities of Large Language Models (LLMs) and incorporate DIKW (Data, Information, Knowledge, Wisdom) hierarchy to automatically instruct and optimise presentation content, anticipate potential audience inquiries, and provide context-aware answers that adaptive to the knowledge level of the audience group. The unique aspect of the CONA paradigm lies in its combination of an independent advisory mechanism and a recursive feedback loop rooted on the DIKW hierarchy. This synergy significantly enhances context-aware contents, ensuring they are accessible and easily comprehended by the audience. This paradigm is an early pioneer to explore new methods for knowledge dissemination and communication in the LLM era, offering effective support for everyday knowledge sharing scenarios. We conduct experiments on a range of audience roles, along with materials from various disciplines using GPT4. Both quantitative and qualitative results demonstrated that the proposed CONA paradigm achieved remarkable performance compared to the outputs guided by conventional prompt engineering.",90b1baf2cf299ef0e0ef7611a12311bd6cab3ed7,Semantic Scholar,,,
-199,unsupervised hashing with semantic concept mining,"['Rong-Cheng Tu', 'Xian-ling Mao', 'Kevin Lin', 'Chengfei Cai', 'Weize Qin', 'Hongfa Wang', 'Wei Wei', 'Heyan Huang']",https://arxiv.org/pdf/2209.11475,2022-09-23,,"Recently, to improve the unsupervised image retrieval performance, plenty of unsupervised hashing methods have been proposed by designing a semantic similarity matrix, which is based on the similarities between image features extracted by a pre-trained CNN model. However, most of these methods tend to ignore high-level abstract semantic concepts contained in images. Intuitively, concepts play an important role in calculating the similarity among images. 
In real-world scenarios, each image is associated with some concepts, and the similarity between two images will be larger if they share more identical concepts. Inspired by the above intuition, in this work, we propose a novel Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which leverages a VLP model to construct a high-quality similarity matrix. Specifically, a set of randomly chosen concepts is first collected. Then, by employing a vision-language pretraining (VLP) model with the prompt engineering which has shown strong power in visual representation learning, the set of concepts is denoised according to the training images. Next, the proposed method UHSCM applies the VLP model with prompting again to mine the concept distribution of each image and construct a high-quality semantic similarity matrix based on the mined concept distributions. Finally, with the semantic similarity matrix as guiding information, a novel hashing loss with a modified contrastive loss based regularization item is proposed to optimize the hashing network. Extensive experiments on three benchmark datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.",91a291780103b328f65e700896ae6fa2230ec2e7,Semantic Scholar,,,
-200,pair programming with large language models for sampling and estimation of copulas,"[""Jan Górecki""]",http://arxiv.org/pdf/2303.18116,2023-03-31,,"Without writing a single line of code by a human, an example Monte Carlo simulation based application for stochastic dependence modeling with copulas is developed using a state-of-the-art large language model (LLM) fine-tuned for conversations. This includes interaction with ChatGPT in natural language and using mathematical formalism, which, under careful supervision by a human-expert, led to producing a working code in MATLAB, Python and R for sampling from a given copula model, evaluation of the model's density, performing maximum likelihood estimation, optimizing the code for parallel computing for CPUs as well as for GPUs, and visualization of the computed results. In contrast to other emerging studies that assess the accuracy of LLMs like ChatGPT on tasks from a selected area, this work rather investigates ways how to achieve a successful solution of a standard statistical task in a collaboration of a human-expert and artificial intelligence (AI). Particularly, through careful prompt engineering, we separate successful solutions generated by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related pros and cons. It is demonstrated that if the typical pitfalls are avoided, we can substantially benefit from collaborating with an AI partner. For example, we show that if ChatGPT is not able to provide a correct solution due to a lack of or incorrect knowledge, the human-expert can feed it with the correct knowledge, e.g., in the form of mathematical theorems and formulas, and make it to apply the gained knowledge in order to provide a solution that is correct. 
Such ability presents an attractive opportunity to achieve a programmed solution even for users with rather limited knowledge of programming techniques.",975da5bb7fdd800ba577535d8c6ee5a5bc835d52,Semantic Scholar,,,
-201,do llms possess a personality making the mbti test an amazing evaluation for large language models,"['Keyu Pan', 'Yawen Zeng']",https://arxiv.org/pdf/2307.16180,2023-07-30,,"The field of large language models (LLMs) has made significant progress, and their knowledge storage capacity is approaching that of human beings. Furthermore, advanced techniques, such as prompt learning and reinforcement learning, are being employed to address ethical concerns and hallucination problems associated with LLMs, bringing them closer to aligning with human values. This situation naturally raises the question of whether LLMs with human-like abilities possess a human-like personality? In this paper, we aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for LLMs. Specifically, extensive experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) How does the training dataset affect the model's personality. Although the MBTI is not a rigorous assessment, it can still reflect the similarity between LLMs and human personality. In practice, the MBTI has the potential to serve as a rough indicator. Our codes are available at https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti.",97717368b5f7e6a544f0a1c73a441bdcb4b6a046,Semantic Scholar,,,
-202,identifying and extracting rare disease phenotypes with large language models,"['Cathy Shyr', 'Yan Hu', 'P. Harris', 'Hua Xu']",http://arxiv.org/pdf/2306.12656,2023-06-22,,"Rare diseases (RDs) are collectively common and affect 300 million people worldwide. Accurate phenotyping is critical for informing diagnosis and treatment, but RD phenotypes are often embedded in unstructured text and time-consuming to extract manually. While natural language processing (NLP) models can perform named entity recognition (NER) to automate extraction, a major bottleneck is the development of a large, annotated corpus for model training. Recently, prompt learning emerged as an NLP paradigm that can lead to more generalizable results without any (zero-shot) or few labeled samples (few-shot). Despite growing interest in ChatGPT, a revolutionary large language model capable of following complex human prompts and generating high-quality responses, none have studied its NER performance for RDs in the zero- and few-shot settings. To this end, we engineered novel prompts aimed at extracting RD phenotypes and, to the best of our knowledge, are the first to establish a benchmark for evaluating ChatGPT's performance in these settings. We compared its performance to the traditional fine-tuning approach and conducted an in-depth error analysis. Overall, fine-tuning BioClinicalBERT resulted in higher performance (F1 of 0.689) than ChatGPT (F1 of 0.472 and 0.591 in the zero- and few-shot settings, respectively). Despite this, ChatGPT achieved similar or higher accuracy for certain entities (i.e., rare diseases and signs) in the one-shot setting (F1 of 0.776 and 0.725). This suggests that with appropriate prompt engineering, ChatGPT has the potential to match or outperform fine-tuned language models for certain entity types with just one labeled sample. 
While the proliferation of large language models may provide opportunities for supporting RD diagnosis and treatment, researchers and clinicians should critically evaluate model outputs and be well-informed of their limitations.",994a6040fab375669a92cab0e67fb2fd203cd67f,Semantic Scholar,,, -203,transfer learning for power outage detection task with limited training data,['Olukunle O. Owolabi'],http://arxiv.org/pdf/2305.17817,2023-05-28,,"Early detection of power outages is crucial for maintaining a reliable power distribution system. This research investigates the use of transfer learning and language models in detecting outages with limited labeled data. By leveraging pretraining and transfer learning, models can generalize to unseen classes. Using a curated balanced dataset of social media tweets related to power outages, we conducted experiments using zero-shot and few-shot learning. Our hypothesis is that Language Models pretrained with limited data could achieve high performance in outage detection tasks over baseline models. Results show that while classical models outperform zero-shot Language Models, few-shot fine-tuning significantly improves their performance. For example, with 10% fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% accuracy (+8.5%). This has practical implications for analyzing and localizing outages in scenarios with limited data availability. Our evaluation provides insights into the potential of few-shot fine-tuning with Language Models for power outage detection, highlighting their strengths and limitations. This research contributes to the knowledge base of leveraging advanced natural language processing techniques for managing critical infrastructure.",05fab50acb26203a944a955131a2388c9731a8f7,Semantic Scholar,,, -204,distillation of encoderdecoder transformers for sequence labelling,"['M. Farina', 'D. Pappadopulo', 'Anant Gupta', 'Leslie Huang', 'Ozan Irsoy', 'T. Solorio']",http://arxiv.org/pdf/2302.05454,2023-02-10,,"Driven by encouraging results on a wide range of tasks, the field of NLP is experiencing an accelerated race to develop bigger language models. This race for bigger models has also underscored the need to continue the pursuit of practical distillation approaches that can leverage the knowledge acquired by these big models in a compute-efficient manner. Having this goal in mind, we build on recent work to propose a hallucination-free framework for sequence tagging that is especially suited for distillation. We show empirical results of new state-of-the-art performance across multiple sequence labelling datasets and validate the usefulness of this framework for distilling a large model in a few-shot learning scenario.",0704a96e1c57c12031f1c3ca492a91dbed1f61ce,Semantic Scholar,,, -205,technical report competition solution for prompt tuning using pretrained language model,"['Jiang-Long Song', 'Wuhe Zou', 'Feng Li', 'Xiaohui Qin', 'Weidong Zhang']",http://arxiv.org/pdf/2212.06369,2022-12-13,,"Prompt tuning recently becomes a hot-spot in the applications of large pretrained language models on specific downstream tasks. Regarding the Language Model as a Service (LMaaS), black-box tuning using derivative-free optimization (DFO) provides a novel approach to expand the practical scenarios of pretrained models and enrich the researches of few-shot learning. In this report, we present our solution in this competition that is based on the LMaaS scenario. 
Our solution consists of several modifications to BBTv2, including multiple label words, selection of P0, rolling update strategy, multi-task loss from MLP classifier, and finally using the ensemble method to further improve generalization ability. We also shared some strategies that we tried but didn't use in the final submission for further discussion. In the end we raised a question about the SNLI dataset and the impact on the results, as well as our concerns about the competition.",075e16a0774b1a9d44a7d512c50b7f997e16befe,Semantic Scholar,,, -206,qameleon multilingual qa with only 5 examples,"['Priyanka Agrawal', 'Chris Alberti', 'Fantine Huot', 'Joshua Maynez', 'Ji Ma', 'Sebastian Ruder', 'Kuzman Ganchev', 'Dipanjan Das', 'Mirella Lapata']",https://arxiv.org/pdf/2211.08264,2022-11-15,,"The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA). Such annotated datasets however are difficult and costly to collect, and rarely exist in languages other than English, rendering QA technology inaccessible to underrepresented languages. An alternative to building large monolingual training datasets is to leverage pre-trained language models (PLMs) under a few-shot learning setting. Our approach, QAmeleon, uses a PLM to automatically generate multilingual data upon which QA models are trained, thus avoiding costly annotation. Prompt tuning the PLM for data synthesis with only five examples per language delivers accuracy superior to translation-based baselines, bridges nearly 60% of the gap between an English-only baseline and a fully supervised upper bound trained on almost 50,000 hand labeled examples, and always leads to substantial improvements compared to fine-tuning a QA model directly on labeled examples in low resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.",0783c214623c18f6a8ad96b8eaf4a67a382e68ee,Semantic Scholar,,, -207,exploiting the potential of seq2seq models as robust fewshot learners,"['Jihyeon Janel Lee', 'Dain Kim', 'Doohae Jung', 'Boseop Kim', 'Kyoung-Woon On']",https://arxiv.org/pdf/2307.14856,2023-07-27,,"In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates. Recently, a few studies have demonstrated the feasibility of few-shot learning with seq2seq models; however, this has been limited to tasks that align well with the seq2seq architecture, such as summarization and translation. Inspired by these initial studies, we provide a first-ever extensive experiment comparing the in-context few-shot learning capabilities of decoder-only and encoder-decoder models on a broad range of tasks. Furthermore, we propose two methods to more effectively elicit in-context learning ability in seq2seq models: objective-aligned prompting and a fusion-based approach. Remarkably, our approach outperforms a decoder-only model that is six times larger and exhibits significant performance improvements compared to conventional seq2seq models across a variety of settings. 
We posit that, with the right configuration and prompt design, seq2seq models can be highly effective few-shot learners for a wide spectrum of applications.",07bc02bd16f6fe78a7ea3bb8d966fcc6e3893195,Semantic Scholar,,, -208,cohortgpt an enhanced gpt for participant recruitment in clinical study,"['Zihan Guan', 'Zihao Wu', 'Zheng Liu', 'Dufan Wu', 'Hui Ren', 'Quanzheng Li', 'Xiang Li', 'Ninghao Liu']",https://arxiv.org/pdf/2307.11346,2023-07-21,,"Participant recruitment based on unstructured medical texts such as clinical notes and radiology reports has been a challenging yet important task for the cohort establishment in clinical research. Recently, Large Language Models (LLMs) such as ChatGPT have achieved tremendous success in various downstream tasks thanks to their promising performance in language understanding, inference, and generation. It is then natural to test their feasibility in solving the cohort recruitment task, which involves the classification of a given paragraph of medical text into disease label(s). However, when applied to knowledge-intensive problem settings such as medical text classification, where the LLMs are expected to understand the decision made by human experts and accurately identify the implied disease labels, the LLMs show a mediocre performance. A possible explanation is that, by only using the medical text, the LLMs neglect to use the rich context of additional information that languages afford. To this end, we propose to use a knowledge graph as auxiliary information to guide the LLMs in making predictions. Moreover, to further boost the LLMs adapt to the problem setting, we apply a chain-of-thought (CoT) sample selection strategy enhanced by reinforcement learning, which selects a set of CoT samples given each individual medical report. Experimental results and various ablation studies show that our few-shot learning method achieves satisfactory performance compared with fine-tuning strategies and gains superb advantages when the available data is limited. The code and sample dataset of the proposed CohortGPT model is available at: https://anonymous.4open.science/r/CohortGPT-4872/",089f6328085066263fedc083952624ca121ebbf3,Semantic Scholar,,, -209,zicl zeroshot incontext learning with pseudodemonstrations,"['Xinxi Lyu', 'Sewon Min', 'Iz Beltagy', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2212.09865,2022-12-19,,"Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available. In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap by constructing pseudo-demonstrations for a given test input using a raw text corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the nearest neighbors to the test input from the corpus and pairing them with random task labels, and (2) applying a set of techniques to reduce the amount of direct copying the model does from the resulting demonstrations. Evaluation on nine classification datasets shows that Z-ICL outperforms previous zero-shot methods by a significant margin, and is on par with in-context learning with labeled training data in the few-shot setting. 
Overall, Z-ICL provides a significantly higher estimate of the zero-shot performance levels of a model, and supports future efforts to develop better pseudo-demonstrations that further improve zero-shot results.",0942bd8fad71282994ff4e9a779c09745da68edc,Semantic Scholar,,, -210,zeroshot and fewshot learning for lung cancer multilabel classification using vision transformer,"['F. Guo', 'Yingfang Fan']",https://arxiv.org/pdf/2205.15290,2022-05-30,,"Lung cancer is the leading cause of cancer-related death worldwide. Lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is an essential tool for lung cancer diagnosis. Pathologists make classifications according to the dominant subtypes. Although morphology remains the standard for diagnosis, significant tool needs to be developed to elucidate the diagnosis. In our study, we utilize the pre-trained Vision Transformer (ViT) model to classify multiple label lung cancer on histologic slices (from dataset LC25000), in both Zero-Shot and Few-Shot settings. Then we compare the performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall, sensitivity and specificity. Our study show that the pre-trained ViT model has a good performance in Zero-Shot setting, a competitive accuracy ($99.87\%$) in Few-Shot setting ({epoch = 1}) and an optimal result ($100.00\%$ on both validation set and test set) in Few-Shot seeting ({epoch = 5}).",0953ada119f384f328b6102e6b7963b3bde7cc9e,Semantic Scholar,,, -211,unified vision and language prompt learning,"['Yuhang Zang', 'Wei Li', 'Kaiyang Zhou', 'Chen Huang', 'Chen Change Loy']",http://arxiv.org/pdf/2210.07225,2022-10-13,,"Prompt tuning, a parameter- and data-efficient transfer learning paradigm that tunes only a small number of parameters in a model's input space, has become a trend in the vision community since the emergence of large vision-language models like CLIP. We present a systematic study on two representative prompt tuning methods, namely text prompt tuning and visual prompt tuning. A major finding is that none of the unimodal prompt tuning methods performs consistently well: text prompt tuning fails on data with high intra-class visual variances while visual prompt tuning cannot handle low inter-class variances. To combine the best from both worlds, we propose a simple approach called Unified Prompt Tuning (UPT), which essentially learns a tiny neural network to jointly optimize prompts across different modalities. Extensive experiments on over 11 vision datasets show that UPT achieves a better trade-off than the unimodal counterparts on few-shot learning benchmarks, as well as on domain generalization benchmarks. Code and models will be released to facilitate future research.",09b7338021fff3200c2098b19824aecc83a66cb5,Semantic Scholar,,, -212,plugandplay multilingual fewshot spoken words recognition,"['Aaqib Saeed', 'Vasileios Tsouvalas']",http://arxiv.org/pdf/2305.03058,2023-05-03,,"As technology advances and digital devices become prevalent, seamless human-machine communication is increasingly gaining significance. The growing adoption of mobile, wearable, and other Internet of Things (IoT) devices has changed how we interact with these smart devices, making accurate spoken words recognition a crucial component for effective interaction. 
However, building robust spoken words detection system that can handle novel keywords remains challenging, especially for low-resource languages with limited training data. Here, we propose PLiX, a multilingual and plug-and-play keyword spotting system that leverages few-shot learning to harness massive real-world data and enable the recognition of unseen spoken words at test-time. Our few-shot deep models are learned with millions of one-second audio clips across 20 languages, achieving state-of-the-art performance while being highly efficient. Extensive evaluations show that PLiX can generalize to novel spoken words given as few as just one support example and performs well on unseen languages out of the box. We release models and inference code to serve as a foundation for future research and voice-enabled user interface development for emerging devices.",0b413633f14ec7f96948067abf1d4ca930fa38a1,Semantic Scholar,,, -213,zeroshot approach to overcome perturbation sensitivity of prompts,"['Mohna Chakraborty', 'Adithya Kulkarni', 'Qi Li']",http://arxiv.org/pdf/2305.15689,2023-05-25,,"Recent studies have demonstrated that natural-language prompts can help to leverage the knowledge learned by pre-trained language models for the binary sentence-level sentiment classification task. Specifically, these methods utilize few-shot learning settings to fine-tune the sentiment classification model using manual or automatically generated prompts. However, the performance of these methods is sensitive to the perturbations of the utilized prompts. Furthermore, these methods depend on a few labeled instances for automatic prompt generation and prompt ranking. This study aims to find high-quality prompts for the given task in a zero-shot setting. Given a base prompt, our proposed approach automatically generates multiple prompts similar to the base prompt employing positional, reasoning, and paraphrasing techniques and then ranks the prompts using a novel metric. We empirically demonstrate that the top-ranked prompts are high-quality and significantly outperform the base prompt and the prompts generated using few-shot learning for the binary sentence-level sentiment classification task.",0b71af0bf02ab58b8d8e342c1c803322cfede603,Semantic Scholar,,, -214,an evaluation of gpt models for phenotype concept recognition,"['T. Groza', 'Harry Caufield', 'D. Gration', 'Gareth Baynam', 'M. Haendel', 'Peter N Robinson', 'Christopher J. Mungall', 'J. Reese']",https://arxiv.org/pdf/2309.17169,2023-09-29,,"Objective: Clinical deep phenotyping plays a critical role in both the diagnosis of patients with rare disorders as well as in building care coordination plans. The process relies on modelling and curating patient profiles using ontology concepts, usually from the Human Phenotype Ontology. Machine learning methods have been widely adopted to support this phenotype concept recognition task. With the significant shift in the use of large language models (LLMs) for most NLP tasks, herewithin, we examine the performance of the latest Generative Pre-trained Transformer (GPT) models underpinning ChatGPT in clinical deep phenotyping. Materials and Methods: The experimental setup of the study included seven prompts of various levels of specificity, two GPT models (gpt-3.5 and gpt-4.0) and an established gold standard for phenotype recognition. Results: Our results show that, currently, these models have not yet achieved state of the art performance. 
The best run, using few-shots learning, achieved 0.41 F1 score, compared to a 0.62 F1 score achieved by the current best in class tool. Conclusion: The non-deterministic nature of the outcomes and the lack of concordance between different runs using the same prompt and input makes the use of these LLMs in clinical settings problematic.",0c75cda2bb0812217bf0e5460e910212ad512944,Semantic Scholar,,, -215,incontext learning for fewshot molecular property prediction,"['Christopher Fifty', 'J. Leskovec', 'Sebastian Thrun']",https://arxiv.org/pdf/2310.08863,2023-10-13,,"In-context learning has become an important approach for few-shot learning in Large Language Models because of its ability to rapidly adapt to new tasks without fine-tuning model parameters. However, it is restricted to applications in natural language and inapplicable to other domains. In this paper, we adapt the concepts underpinning in-context learning to develop a new algorithm for few-shot molecular property prediction. Our approach learns to predict molecular properties from a context of (molecule, property measurement) pairs and rapidly adapts to new properties without fine-tuning. On the FS-Mol and BACE molecular property prediction benchmarks, we find this method surpasses the performance of recent meta-learning algorithms at small support sizes and is competitive with the best methods at large support sizes.",0d09c569477457c32637f9e866727cc4623b1165,Semantic Scholar,,, -216,measuring the robustness of natural language processing models to domain shifts,"['Nitay Calderon', 'Naveh Porat', 'Eyal Ben-David', 'Zorik Gekhman', 'Nadav Oved', 'Roi Reichart']",http://arxiv.org/pdf/2306.00168,2023-05-31,,"Existing research on Domain Robustness (DR) suffers from disparate setups, lack of evaluation task variety, and reliance on challenge sets. In this paper, we pose a fundamental question: What is the state of affairs of the DR challenge in the era of Large Language Models (LLMs)? To this end, we construct a DR benchmark comprising diverse NLP tasks, including sentence and token-level classification, QA, and generation, each task consists of several domains. We explore the DR challenge of fine-tuned and few-shot learning models in natural domain shift settings and devise two diagnostic metrics of Out-of-Distribution (OOD) performance degradation: The commonly used Source Drop (SD) and the overlooked Target Drop (TD). Our findings reveal important insights: First, despite their capabilities, zero-to-few shot LLMs and fine-tuning approaches still fail to meet satisfactory performance in the OOD context; Second, TD approximates better than SD the average OOD degradation; Third, in a significant proportion of domain shifts, either SD or TD is positive, but not both, and therefore disregarding one can lead to incorrect DR conclusions.",104c878d17a179e86ba094b221993cfdd3277943,Semantic Scholar,,, -217,towards practical fewshot federated nlp,"['Dongqi Cai', 'Yaozong Wu', 'Haitao Yuan', 'Shangguang Wang', 'F. Lin', 'Mengwei Xu']",https://arxiv.org/pdf/2212.00192,2022-12-01,,"Transformer-based pre-trained models have emerged as the predominant solution for natural language processing (NLP). Fine-tuning such pre-trained models for downstream tasks often requires a considerable amount of labeled private data. In practice, private data is often distributed across heterogeneous mobile devices and may be prohibited from being uploaded. Moreover, well-curated labeled data is often scarce, presenting an additional challenge. 
To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which encompasses the quantity and skewness of scarce labeled data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a prompt-based federated learning system that exploits abundant unlabeled data for data augmentation. Our experiments indicate that AUG-FedPrompt can perform on par with full-set fine-tuning with a limited amount of labeled data. However, such competitive performance comes at a significant system cost.",10717aefce06cc41465619ec8c956f4b0b0fa6e1,Semantic Scholar,,, -218,templatefree prompt tuning for fewshot ner,"['Ruotian Ma', 'Xin Zhou', 'Tao Gui', 'Y. Tan', 'Qi Zhang', 'Xuanjing Huang']",https://aclanthology.org/2022.naacl-main.420.pdf,2021-09-28,,"Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, it would be time-consuming to enumerate the template queries over all potential entity spans. In this work, we propose a more elegant method to reformulate NER tasks as LM problems without any templates. Specifically, we discard the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position. Meanwhile, we also explore principled ways to automatically search for appropriate label words that the pre-trained models can easily adapt to. While avoiding the complicated template-based process, the proposed LM objective also reduces the gap between different objectives used in pre-training and fine-tuning, thus it can better benefit the few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and template-based method under few-shot settings. Moreover, the decoding speed of the proposed method is up to 1930.12 times faster than the template-based method.",1dd344ce28f1e5a078f9d8396b5078388e555d99,Semantic Scholar,,, -219,a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems,"['Debjoy Saha', 'Bishal Santra', 'Pawan Goyal']",http://arxiv.org/pdf/2204.08167,2022-04-18,,"We tackle the Dialogue Belief State Tracking(DST) problem of task-oriented conversational systems. Recent approaches to this problem leveraging Transformer-based models have yielded great results. However, training these models is expensive, both in terms of computational resources and time. Additionally, collecting high quality annotated dialogue datasets remains a challenge for researchers because of the extensive annotation required for training these models. Driven by the recent success of pre-trained language models and prompt-based learning, we explore prompt-based few-shot learning for Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage prompt-based language modelling task and train language models for both tasks and present a comprehensive empirical analysis of their separate and joint performance. 
We demonstrate the potential of prompt-based methods in few-shot learning for DST and provide directions for future improvement.",21e46f11898748778a31b5b2fe2f60128eb66ba1,Semantic Scholar,,, -220,stabilized incontext learning with pretrained language models for few shot dialogue state tracking,"['Derek Chen', 'Kun Qian', 'Zhou Yu']",http://arxiv.org/pdf/2302.05932,2023-02-12,,"Prompt-based methods with large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks. These models improve even further with the addition of a few labeled in-context exemplars to guide output generation. However, for more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial, leading to unstable results. Furthermore, building in-context exemplars for dialogue tasks is difficult because conversational contexts are long while model input lengths are relatively short.To overcome these issues we first adapt a meta-learning scheme to the dialogue domain which stabilizes the ability of the model to perform well under various prompts. We additionally design a novel training method to improve upon vanilla retrieval mechanisms to find ideal in-context examples. Finally, we introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query. In effect, we are able to achieve highly competitive results for few-shot DST on MultiWOZ.",59ef1b67c5f238d5d6d175d84fb6b239b4221a97,Semantic Scholar,,, -221,steps towards promptbased creation of virtual worlds,"['Jasmine Roberts', 'Andrzej Banburski-Fahey', 'J. Lanier']",https://arxiv.org/pdf/2211.05875,2022-11-10,,"Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing, as well as can become part of gameplay rather than just part of game development. As an example, we present Codex VR Pong which shows non-deterministic game mechanics using generative processes to not only create static content but also non-trivial interactions between 3D objects. This demonstration naturally leads to an integral discussion on how one would evaluate and benchmark experiences created by generative models - as there are no qualitative or quantitative metrics that apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.",632ab7663e6d64578ceda1d1df9ec525b503bacb,Semantic Scholar,,, -222,purr efficiently editing language model hallucinations by denoising language model corruptions,"['Anthony Chen', 'Panupong Pasupat', 'Sameer Singh', 'Hongrae Lee', 'Kelvin Guu']",http://arxiv.org/pdf/2305.14908,2023-05-24,,"The remarkable capabilities of large language models have been accompanied by a persistent drawback: the generation of false and unsubstantiated claims commonly known as""hallucinations"". To combat this issue, recent research has introduced approaches that involve editing and attributing the outputs of language models, particularly through prompt-based editing. However, the inference cost and speed of using large language models for editing currently bottleneck prompt-based methods. These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose. 
To overcome these challenges, we exploit the power of large language models to introduce corruptions (i.e., noise) into text and subsequently fine-tune compact editors to denoise the corruptions by incorporating relevant evidence. Our methodology is entirely unsupervised and provides us with faux hallucinations for training in any domain. Our Petite Unsupervised Research and Revision model, PURR, not only improves attribution over existing editing methods based on fine-tuning and prompting, but also achieves faster execution times by orders of magnitude.",7db7653c581d7823cb9c328f2d742ec70d7a0ce4,Semantic Scholar,,, -223,zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts,"['Zewei Sun', 'Qingnan Jiang', 'Shujian Huang', 'Jun Cao', 'Shanbo Cheng', 'Mingxuan Wang']",http://arxiv.org/pdf/2209.11409,2022-09-23,,"Domain adaptation is an important challenge for neural machine translation. However, the traditional fine-tuning solution requires multiple extra training and yields a high cost. In this paper, we propose a non-tuning paradigm, resolving domain adaptation with a prompt-based method. Specifically, we construct a bilingual phrase-level database and retrieve relevant pairs from it as a prompt for the input sentences. By utilizing Retrieved Phrase-level Prompts (RePP), we effectively boost the translation quality. Experiments show that our method improves domain-specific machine translation for 6.2 BLEU scores and improves translation constraints for 11.5% accuracy without additional training.",80c0416048614be75362c2c332d22dd1d2b22f65,Semantic Scholar,,, -224,low resource pipeline for spoken language understanding via weak supervision,"['Ayush Kumar', 'Rishabh Tripathi', 'Jithendra Vepa']",https://arxiv.org/pdf/2206.10559,2022-06-21,,"In Weak Supervised Learning (WSL), a model is trained over noisy labels obtained from semantic rules and task-specific pre-trained models. Rules offer limited generalization over tasks and require significant manual efforts while pre-trained models are available only for limited tasks. In this work, we propose to utilize prompt-based methods as weak sources to obtain the noisy labels on unannotated data. We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different Spoken Language Understanding (SLU) tasks such as sentiment classification, disfluency detection and emotion classification. These prompts could additionally be updated to add task-specific contexts, thus providing flexibility to design task-specific prompts. We demonstrate that prompt-based methods generate reliable labels for the above SLU tasks and thus can be used as a universal weak source to train a weak-supervised model (WSM) in absence of labeled data. Our proposed WSL pipeline trained over prompt-based weak source outperforms other competitive low-resource benchmarks on zero and few-shot learning by more than 4% on Macro-F1 on all of the three benchmark SLU datasets. The proposed method also outperforms a conventional rule based WSL pipeline by more than 5% on Macro-F1.",9ecf603dbebbfbdd9858d21903c77074d12518b4,Semantic Scholar,,, -225,introducing language guidance in promptbased continual learning,"['Muhammad Gul Zain Ali Khan', 'Muhammad Ferjad Naeem', 'L. Gool', 'D. Stricker', 'F. Tombari', 'Muhammad Zeshan Afzal']",https://arxiv.org/pdf/2308.15827,2023-08-30,,"Continual Learning aims to learn a single model on a sequence of tasks without having access to data from previous tasks. 
The biggest challenge in the domain still remains catastrophic forgetting: a loss in performance on seen classes of earlier tasks. Some existing methods rely on an expensive replay buffer to store a chunk of data from previous tasks. This, while promising, becomes expensive when the number of tasks becomes large or data can not be stored for privacy reasons. As an alternative, prompt-based methods have been proposed that store the task information in a learnable prompt pool. This prompt pool instructs a frozen image encoder on how to solve each task. While the model faces a disjoint set of classes in each task in this setting, we argue that these classes can be encoded to the same embedding space of a pre-trained language encoder. In this work, we propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. LGCL is model agnostic and introduces language guidance at the task level in the prompt pool and at the class level on the output feature of the vision encoder. We show with extensive experimentation that LGCL consistently improves the performance of prompt-based continual learning methods to set a new state-of-the art. LGCL achieves these performance improvements without needing any additional learnable parameters.",a07701abd506f67368cb75ef2b649dd51df7abd4,Semantic Scholar,,, -226,instructionner a multitask instructionbased generative framework for fewshot ner,"['Liwen Wang', 'Rumei Li', 'Yang Yan', 'Yuanmeng Yan', 'Sirui Wang', 'Wei Yu Wu', 'Weiran Xu']",http://arxiv.org/pdf/2203.03903,2022-03-08,,"Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks. However, existing prompt templates are mostly designed for sentence-level tasks and are inappropriate for sequence labeling objectives. To address the above issue, we propose a multi-task instruction-based generative framework, named InstructionNER, for low-resource named entity recognition. Specifically, we reformulate the NER task as a generation problem, which enriches source sentences with task-specific instructions and answer options, then inferences the entities and types in natural language. We further propose two auxiliary tasks, including entity extraction and entity typing, which enable the model to capture more boundary information of entities and deepen the understanding of entity type semantics, respectively. Experimental results show that our method consistently outperforms other baselines on five datasets in few-shot settings.",a29a0e679e626e8961dc217081eae2a6c63a15ad,Semantic Scholar,,, -227,stt soft template tuning for fewshot adaptation,"['Ping Yu', 'Wei Wang', 'Chunyuan Li', 'Ruiyi Zhang', 'Zhanpeng Jin', 'Changyou Chen']",https://arxiv.org/pdf/2207.08408,2022-07-18,,"Prompt tuning has been an extremely effective tool to adapt a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider the case of sufficient data of downstream tasks. It is still unclear whether the advantage can be transferred to the few-shot regime, where only limited data are available for each downstream task. Although some works have demonstrated the potential of prompt-tuning under the few-shot setting, the main stream methods via searching discrete prompts or tuning soft prompts with limited data are still very challenging. 
Through extensive empirical studies, we find that there is still a gap between prompt tuning and fully fine-tuning for few-shot learning. To bridge the gap, we propose a new prompt-tuning framework, called Soft Template Tuning (STT) 1. STT combines manual and auto prompts, and treats down-stream classification tasks as a masked language modeling task. Comprehensive evaluation on different settings suggests STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Significantly, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.",a45bdbbf9a197a21ef97291c60b77de47bc51db2,Semantic Scholar,,, -228,enable language models to implicitly learn selfimprovement from data,"['Ziqi Wang', 'Le Hou', 'Tianjian Lu', 'Yuexin Wu', 'Yunxuan Li', 'Hongkun Yu', 'Heng Ji']",https://arxiv.org/pdf/2310.00898,2023-10-02,,"Large Language Models (LLMs) have demonstrated remarkable capabilities in open-ended text generation tasks. However, the inherent open-ended nature of these tasks implies that there is always room for improvement in the quality of model responses. To address this challenge, various approaches have been proposed to enhance the performance of LLMs. There has been a growing focus on enabling LLMs to self-improve their response quality, thereby reducing the reliance on extensive human annotation efforts for collecting diverse and high-quality training data. Recently, prompting-based methods have been widely explored among self-improvement methods owing to their effectiveness, efficiency, and convenience. However, those methods usually require explicitly and thoroughly written rubrics as inputs to LLMs. It is expensive and challenging to manually derive and provide all necessary rubrics with a real-world complex goal for improvement (e.g., being more helpful and less harmful). To this end, we propose an ImPlicit Self-ImprovemenT (PIT) framework that implicitly learns the improvement goal from human preference data. PIT only requires preference data that are used to train reward models without extra human efforts. Specifically, we reformulate the training objective of reinforcement learning from human feedback (RLHF) -- instead of maximizing response quality for a given input, we maximize the quality gap of the response conditioned on a reference response. In this way, PIT is implicitly trained with the improvement goal of better aligning with human preferences. Experiments on two real-world datasets and one synthetic dataset show that our method significantly outperforms prompting-based methods.",a81470aa3721f6cd8a61139f9c4c60923bee093f,Semantic Scholar,,, -229,progressive visual prompt learning with contrastive feature reformation,"['C. Xu', 'Haocheng Shen', 'Fengyuan Shi', 'Boheng Chen', 'Yixuan Liao', 'Xiaoxin Chen', 'Limin Wang']",http://arxiv.org/pdf/2304.08386,2023-04-17,,"Prompt learning has been designed as an alternative to fine-tuning for adapting Vision-language (V-L) models to the downstream tasks. Previous works mainly focus on text prompt while visual prompt works are limited for V-L models. The existing visual prompt methods endure either mediocre performance or unstable training process, indicating the difficulty of visual prompt learning. In this paper, we propose a new Progressive Visual Prompt (ProVP) structure to strengthen the interactions among prompts of different layers. 
More importantly, our ProVP could effectively propagate the image embeddings to deep layers and behave partially similar to an instance adaptive prompt method. To alleviate generalization deterioration, we further propose a new contrastive feature re-formation, which prevents the serious deviation of the prompted visual feature from the fixed CLIP visual feature distribution. Combining both, our method (ProVP-Ref) is evaluated on 11 image benchmark datasets and achieves 7/11 state-of-theart results on both few-shot and base-to-novel settings. To the best of our knowledge, we are the first to demonstrate the superior performance of visual prompts in V-L models to previous prompt-based methods in downstream tasks. Meanwhile, it implies that our ProVP-Ref shows the best capability to adapt and to generalize.",ab346a9d9a71bc59671e52cae96eabba16c24eeb,Semantic Scholar,,, -230,fewshot event detection an empirical study and a unified view,"['Yubo Ma', 'Zehao Wang', 'Yixin Cao', 'Aixin Sun']",http://arxiv.org/pdf/2305.01901,2023-05-03,,"Few-shot event detection (ED) has been widely studied, while this brings noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models for future progress.This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, which are roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still significantly trail prototype-based methods in terms of overall performance. To investigate their superior performance, we break down their design elements along several dimensions and build a unified framework on prototype-based methods. Under such unified view, each prototype-method can be viewed a combination of different modules from these design elements. We further combine all advantageous modules and propose a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., 2.7% F1 gains under low-resource setting).",ac7e270fcd365c84b29a710d58bf1243e850df4c,Semantic Scholar,,, -231,qaner prompting question answering models for fewshot named entity recognition,"['Andy T. Liu', 'Wei Xiao', 'Henghui Zhu', 'Dejiao Zhang', 'Shang-Wen Li', 'Andrew O. Arnold']",http://arxiv.org/pdf/2203.01543,2022-03-03,,"Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency. However, previous prompt-based methods for few-shot NER have limitations such as a higher computational complexity, poor zero-shot ability, requiring manual prompt engineering, or lack of prompt robustness. In this work, we address these shortcomings by proposing a new prompt-based learning NER method with Question Answering (QA), called QaNER. Our approach includes 1) a refined strategy for converting NER problems into the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based tuning with QA models on a few annotated NER examples; 4) zero-shot NER by prompting the QA model. 
Comparing the proposed approach with previous methods, QaNER is faster at inference, insensitive to the prompt quality, and robust to hyper-parameters, as well as demonstrating significantly better low-resource performance and zero-shot capability.",b159dffadb69940e14693e812bdaa32e3957717f,Semantic Scholar,,, -232,causal interventionbased prompt debiasing for event argument extraction,"['Jiaju Lin', 'Jie Zhou', 'Qin Chen']",http://arxiv.org/pdf/2210.01561,2022-10-04,,"Prompt-based methods have become increasingly popular among information extraction tasks, especially in low-data scenarios. By formatting a finetune task into a pre-training objective, prompt-based methods resolve the data scarce problem effectively. However, seldom do previous research investigate the discrepancy among different prompt formulating strategies. In this work, we compare two kinds of prompts, name-based prompt and ontology-base prompt, and reveal how ontology-base prompt methods exceed its counterpart in zero-shot event argument extraction (EAE) . Furthermore, we analyse the potential risk in ontology-base prompts via a causal view and propose a debias method by causal intervention. Experiments on two benchmarks demonstrate that modified by our debias method, the baseline model becomes both more effective and robust, with significant improvement in the resistance to adversarial attacks.",b1d5c08a6fb6a5ee5b6b6693e10a587733ca05ed,Semantic Scholar,,, -233,interactivechainprompting ambiguity resolution for crosslingual conditional generation with interaction,"['Jonathan Pilault', 'Xavier García', 'Arthur Bravzinskas', 'Orhan Firat']",http://arxiv.org/pdf/2301.10309,2023-01-24,,"Crosslingual conditional generation (e.g., machine translation) has long enjoyed the benefits of scaling. Nonetheless, there are still issues that scale alone may not overcome. A source query in one language, for instance, may yield several translation options in another language without any extra context. Only one translation could be acceptable however, depending on the translator's preferences and goals. Choosing the incorrect option might significantly affect translation usefulness and quality. We propose a novel method interactive-chain prompting -- a series of question, answering and generation intermediate steps between a Translator model and a User model -- that reduces translations into a list of subproblems addressing ambiguities and then resolving such subproblems before producing the final text to be translated. To check ambiguity resolution capabilities and evaluate translation quality, we create a dataset exhibiting different linguistic phenomena which leads to ambiguities at inference for four languages. To encourage further exploration in this direction, we release all datasets. We note that interactive-chain prompting, using eight interactions as exemplars, consistently surpasses prompt-based methods with direct access to background information to resolve ambiguities.",bad6fa523ecf782c837a2eecaaffa4e1f7477c24,Semantic Scholar,,, -234,memobert pretraining model with promptbased learning for multimodal emotion recognition,"['Jinming Zhao', 'Ruichen Li', 'Qin Jin', 'Xinchao Wang', 'Haizhou Li']",https://arxiv.org/pdf/2111.00865,2021-10-27,,"Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity. 
In this paper, we propose a multimodal pre-training model MEmoBERT for multimodal emotion recognition, which learns multimodal joint representations through self-supervised learning from a self-collected large-scale unlabeled video data that come in sheer volume. Furthermore, unlike the conventional ""pre-train, finetune"" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction one, bringing the downstream task closer to the pre-training. Extensive experiments on two benchmark datasets, IEMOCAP and MSP-IMPROV, show that our proposed MEmoBERT significantly enhances emotion recognition performance.",c10ab4733b43f19547308c15ca231a668181a36c,Semantic Scholar,,, -235,adaprompt adaptive model training for promptbased nlp,"['Yulong Chen', 'Yang Liu', 'Li Dong', 'Shuohang Wang', 'Chenguang Zhu', 'Michael Zeng', 'Yue Zhang']",https://aclanthology.org/2022.findings-emnlp.448.pdf,2022-02-10,,"Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in community. The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, adaptively retrieving external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35\% relative error reduction.",d235a9085e0543fcbe502fbc269f9a8ee01dcbab,Semantic Scholar,,, -236,convfinqa exploring the chain of numerical reasoning in conversational finance question answering,"['Zhiyu Chen', 'SHIYANG LI', 'Charese Smiley', 'Zhiqiang Ma', 'Sameena Shah', 'William Yang Wang']",http://arxiv.org/pdf/2210.03849,2022-10-07,,"With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language to the imitation of complex reasoning abilities like human beings. In this work, we investigate the application domain of finance that involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods, to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. 
Our dataset and code is publicly available at https://github.com/czyssrs/ConvFinQA.",d96997265f8146e93b4c9350f19d55e46d1317f0,Semantic Scholar,,, -237,exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods,"['Mengsay Loem', 'Masahiro Kaneko', 'Sho Takase', 'Naoaki Okazaki']",http://arxiv.org/pdf/2305.18156,2023-05-29,,"Large-scale pre-trained language models such as GPT-3 have shown remarkable performance across various natural language processing tasks. However, applying prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks and their controllability remains underexplored. Controllability in GEC is crucial for real-world applications, particularly in educational settings, where the ability to tailor feedback according to learner levels and specific error types can significantly enhance the learning process.This paper investigates the performance and controllability of prompt-based methods with GPT-3 for GEC tasks using zero-shot and few-shot setting. We explore the impact of task instructions and examples on GPT-3’s output, focusing on controlling aspects such as minimal edits, fluency edits, and learner levels. Our findings demonstrate that GPT-3 could effectively perform GEC tasks, outperforming existing supervised and unsupervised approaches. We also showed that GPT-3 could achieve controllability when appropriate task instructions and examples are given.",db0d67057b41927b5b51d3a393c250be64a405ae,Semantic Scholar,,, -238,selfevolve a code evolution framework via large language models,"['Shuyang Jiang', 'Yuhao Wang', 'Yu Wang']",http://arxiv.org/pdf/2306.02907,2023-06-05,,"Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce the correct code in one turn. To address these challenges, we propose a novel two-step pipeline, called \autoknow, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, \autoknow~obtains the knowledge from input prompts and generates intermediate code based on the generated knowledge. After that, \autoknow~asks LLM to act as an expert programmer to perform debugging for the generated code. This is achieved by receiving the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate \autoknow~on three code generation datasets, including DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that \autoknow~outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of \autoknow, and find that both are superior to other prompting-based methods. 
Further scalability analysis demonstrates that \autoknow~can be adapted to other more advanced models, such as GPT-4, and bring consistent efficacy improvement.",eb36681fc4c5dfce4f3e05540fc92b007de278ca,Semantic Scholar,,, -239,zeroshot information extraction via chatting with chatgpt,"['Xiang Wei', 'Xingyu Cui', 'Ning Cheng', 'Xiaobin Wang', 'Xin Zhang', 'Shen Huang', 'Pengjun Xie', 'Jinan Xu', 'Yufeng Chen', 'Meishan Zhang', 'Yong Jiang', 'Wenjuan Han']",http://arxiv.org/pdf/2302.10205,2023-02-20,,"Zero-shot information extraction (IE) aims to build IE systems from the unannotated text. It is challenging due to involving little human intervention. Challenging but worthwhile, zero-shot IE reduces the time and effort that data labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance on zero-shot settings, thus inspiring us to explore prompt-based methods. In this work, we ask whether strong IE models can be constructed by directly prompting LLMs. Specifically, we transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE). With the power of ChatGPT, we extensively evaluate our framework on three IE tasks: entity-relation triple extract, named entity recognition, and event extraction. Empirical results on six datasets across two languages show that ChatIE achieves impressive performance and even surpasses some full-shot models on several datasets (e.g., NYT11-HRL). We believe that our work could shed light on building IE models with limited resources.",f4cba0db34aa0c389cec267ca1f3ba5255ea2645,Semantic Scholar,,, -240,scaling sentence embeddings with large language models,"['Ting Jiang', 'Shaohan Huang', 'Zhongzhi Luan', 'Deqing Wang', 'Fuzhen Zhuang']",https://arxiv.org/pdf/2307.16645,2023-07-31,,"Large language models (LLMs) have recently garnered significant interest. With in-context learning, LLMs achieve impressive results in various natural language tasks. However, the application of LLMs to sentence embeddings remains an area of ongoing research. In this work, we propose an in-context learning-based method aimed at improving sentence embeddings performance. Our approach involves adapting the previous prompt-based representation method for autoregressive models, constructing a demonstration set that enables LLMs to perform in-context learning, and scaling up the LLMs to different model sizes. Through extensive experiments, in-context learning enables LLMs to generate high-quality sentence embeddings without any fine-tuning. It helps LLMs achieve performance comparable to current contrastive learning methods. By scaling model size, we find scaling to more than tens of billion parameters harms the performance on semantic textual similarity (STS) tasks. However, the largest model outperforms other counterparts and achieves the new state-of-the-art result on transfer tasks. We also fine-tune LLMs with current contrastive learning approach, and the 2.7B OPT model, incorporating our prompt-based method, surpasses the performance of 4.8B ST5, achieving the new state-of-the-art results on STS tasks. 
Our code is available at https://github.com/kongds/scaling_sentemb.",f7ccf8ecd508e0b2d423169588dd1c1a82dd3b4d,Semantic Scholar,,, -241,prompting to distill boosting datafree knowledge distillation via reinforced prompt,"['Xinyin Ma', 'Xinchao Wang', 'Gongfan Fang', 'Yongliang Shen', 'Weiming Lu']",https://arxiv.org/pdf/2205.07523,2022-05-16,,"Data-free knowledge distillation (DFKD) conducts knowledge distillation via eliminating the dependence of original training data, and has recently achieved impressive results in accelerating pre-trained language models. At the heart of DFKD is to reconstruct a synthetic dataset by inverting the parameters of the uncompressed model. Prior DFKD approaches, however, have largely relied on hand-crafted priors of the target data distribution for the reconstruction, which can be inevitably biased and often incompetent to capture the intrinsic distributions. To address this problem, we propose a prompt-based method, termed as PromptDFD, that allows us to take advantage of learned language priors, which effectively harmonizes the synthetic sentences to be semantically and grammatically correct. Specifically, PromptDFD leverages a pre-trained generative model to provide language priors and introduces a reinforced topic prompter to control data synthesis, making the generated samples thematically relevant and semantically plausible, and thus friendly to downstream tasks. As shown in our experiments, the proposed method substantially improves the synthesis quality and achieves considerable improvements on distillation performance. In some cases, PromptDFD even gives rise to results on par with those from the data-driven knowledge distillation with access to the original training data.",fb1d85fe28b5e92e22d084eca674d4a2b48cdc5a,Semantic Scholar,,, -242,teaching arithmetic to small transformers,"['Nayoung Lee', 'Kartik K. Sreenivasan', 'Jason D. Lee', 'Kangwook Lee', 'Dimitris Papailiopoulos']",https://arxiv.org/pdf/2307.03381,2023-07-07,,"Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed. We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and model scale. Additionally, we discuss length generalization challenges. 
Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction objective for rapidly eliciting arithmetic capabilities.",002cfed5d4d9bf2fdaddb11d32f14751f2250e0c,Semantic Scholar,,, -243,are hard examples also harder to explain a study with human and modelgenerated explanations,"['Swarnadeep Saha', 'Peter Hase', 'Nazneen Rajani', 'Mohit Bansal']",https://arxiv.org/pdf/2211.07517,2022-11-14,,"Recent work on explainable NLP has shown that few-shot prompting can enable large pre-trained language models (LLMs) to generate grammatical and factual natural language explanations for data labels. In this work, we study the connection between explainability and sample hardness by investigating the following research question – “Are LLMs and humans equally good at explaining data labels for both easy and hard samples?” We answer this question by first collecting human-written explanations in the form of generalizable commonsense rules on the task of Winograd Schema Challenge (Winogrande dataset). We compare these explanations with those generated by GPT-3 while varying the hardness of the test samples as well as the in-context samples. We observe that (1) GPT-3 explanations are as grammatical as human explanations regardless of the hardness of the test samples, (2) for easy examples, GPT-3 generates highly supportive explanations but human explanations are more generalizable, and (3) for hard examples, human explanations are significantly better than GPT-3 explanations both in terms of label-supportiveness and generalizability judgements. We also find that hardness of the in-context examples impacts the quality of GPT-3 explanations. Finally, we show that the supportiveness and generalizability aspects of human explanations are also impacted by sample hardness, although by a much smaller margin than models.",0040dac7a1bf7a1eeb01c86ddb993f331f35b158,Semantic Scholar,,, -244,controllable generation of dialogue acts for dialogue systems via fewshot response generation and ranking,"['Angela Ramirez', 'Karik Agarwal', 'Juraj Juraska', 'Utkarsh Garg', 'M. Walker']",https://arxiv.org/pdf/2307.14440,2023-07-26,,"Dialogue systems need to produce responses that realize multiple types of dialogue acts (DAs) with high semantic fidelity. In the past, natural language generators (NLGs) for dialogue were trained on large parallel corpora that map from a domain-specific DA and its semantic attributes to an output utterance. Recent work shows that pretrained language models (LLMs) offer new possibilities for controllable NLG using prompt-based learning. Here we develop a novel few-shot overgenerate-and-rank approach that achieves the controlled generation of DAs. We compare eight few-shot prompt styles that include a novel method of generating from textual pseudo-references using a textual style transfer approach. We develop six automatic ranking functions that identify outputs with both the correct DA and high semantic accuracy at generation time. We test our approach on three domains and four LLMs. To our knowledge, this is the first work on NLG for dialogue that automatically ranks outputs using both DA and attribute accuracy. For completeness, we compare our results to fine-tuned few-shot models trained with 5 to 100 instances per DA. 
Our results show that several prompt settings achieve perfect DA accuracy, and near perfect semantic accuracy (99.81%) and perform better than few-shot fine-tuning.",03d8b1e78d124a561f3c2a67d3199472ee73228d,Semantic Scholar,,, -245,lambada backward chaining for automated reasoning in natural language,"['Seyed Mehran Kazemi', 'Najoung Kim', 'Deepti Bhatia', 'Xinyuan Xu', 'Deepak Ramachandran']",http://arxiv.org/pdf/2212.13894,2022-12-20,,"Remarkable progress has been made on automated reasoning with natural text, by using Large Language Models (LLMs) and methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space, and thus high failure rates for problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e. from intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, that are simply implemented by few-shot prompted LLM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.",03fb95e6be583ca954c3d00812a9e9a40f118e51,Semantic Scholar,,, -246,skillbased fewshot selection for incontext learning,"['Shengnan An', 'Bo Zhou', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Weizhu Chen', 'Jian-Guang Lou']",https://arxiv.org/pdf/2305.14210,2023-05-23,,"In-context learning is the paradigm that adapts large language models to downstream tasks by providing a few examples. Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning. In this paper, we propose Skill-KNN, a skill-based few-shot selection method for in-context learning. The key advantages of Skill-KNN include: (1) it addresses the problem that existing methods based on pre-trained embeddings can be easily biased by surface natural language features that are not important for the target task; (2) it does not require training or fine-tuning of any models, making it suitable for frequently expanding or changing example banks. The key insight is to optimize the inputs fed into the embedding model, rather than tuning the model itself. Technically, Skill-KNN generates the skill-based descriptions for each test case and candidate example by utilizing a pre-processing few-shot prompting, thus eliminating unimportant surface features. Experimental results across five cross-domain semantic parsing datasets and six backbone models show that Skill-KNN significantly outperforms existing methods.",04526876688e5a56106629229309fae272da1c79,Semantic Scholar,,, -247,echoprompt instructing the model to rephrase queries for improved incontext learning,"['Rajasekhar Reddy Mekala', 'Yasaman Razeghi', 'Sameer Singh']",https://arxiv.org/pdf/2309.10687,2023-09-16,,"Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. 
EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate the factors contributing to EchoPrompt's effectiveness through ablation studies, which reveal that both the original query and the model-generated rephrased version are instrumental in its performance gains. Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.",04e838c16f3d1fb8d69d34fe0a0a92c59717875b,Semantic Scholar,,, -248,improved compositional generalization by generating demonstrations for metalearning,"['Sam Spilsbury', 'A. Ilin']",http://arxiv.org/pdf/2305.13092,2023-05-22,,"Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.",088ba3cfb904ccd0aa1993a1e30c725b061aad7e,Semantic Scholar,,, -249,fantastically ordered prompts and where to find them overcoming fewshot prompt order sensitivity,"['Yao Lu', 'Max Bartolo', 'Alastair Moore', 'S. Riedel', 'Pontus Stenetorp']",https://aclanthology.org/2022.acl-long.556.pdf,2021-04-18,,"When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. 
Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.",0adec918885dff698acf359988ed79a543157f80,Semantic Scholar,,, -250,crowd score a method for the evaluation of jokes using large language model ai voters as judges,"['Fabrício Góes', 'Zisen Zhou', 'Piotr Sawicki', 'M. Grzes', 'Daniel Brown']",http://arxiv.org/pdf/2212.11214,2022-12-21,,"This paper presents the Crowd Score, a novel method to assess the funniness of jokes using large language models (LLMs) as AI judges. Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes using an auditing technique that checks if the explanation for a particular vote is reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that aggressive and self-defeating voters are significantly more inclined to find more jokes funny of a set of aggressive/self-defeating jokes than the affiliative and self-enhancing voters. The Crowd Score follows the same trend as human judges by assigning higher scores to jokes that are also considered funnier by human judges. We believe that our methodology could be applied to other creative domains such as story, poetry, slogans, etc. It could both help the adoption of a flexible and accurate standard approach to compare different work in the CC community under a common metric and by minimizing human participation in assessing creative artefacts, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human participants to rate creative artefacts. 1",0ba5fb80d2c3ea3a8505415e32d954b4e4eea170,Semantic Scholar,,, -251,art automatic multistep reasoning and tooluse for large language models,"['Bhargavi Paranjape', 'Scott M. Lundberg', 'Sameer Singh', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2303.09014,2023-03-16,,"Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. Further, each reasoning step can rely on external tools to support computation beyond the core LLM capabilities (e.g. search/running code). Prior work on CoT prompting and tool use typically requires hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program. Given a new task to solve, ART selects demonstrations of multi-step reasoning and tool use from a task library. At test time, ART seamlessly pauses generation whenever external tools are called, and integrates their output before resuming generation. ART achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. 
ART is also extensible, and makes it easy for humans to improve performance by correcting errors in task-specific programs or incorporating new tools, which we demonstrate by drastically improving performance on select tasks with minimal human intervention.",0d42221038c05cee8443c5b5af838505ee137dc3,Semantic Scholar,,, -252,promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models,"['Mirac Suzgun', 'Luke Melas-Kyriazi', 'Dan Jurafsky']",https://arxiv.org/pdf/2205.11503,2022-05-23,,"We propose a method for arbitrary textual style transfer (TST)—the task of transforming a text into any given style—utilizing general-purpose pre-trained language models. Our method, Prompt-and-Rerank, is based on a mathematical formulation of the TST task, decomposing it into three constituent components: textual similarity, target style strength, and fluency. Our method uses zero-shot or few-shot prompting to obtain a set of candidate generations in the target style, and then re-ranks them according to the three components. Our method enables small pre-trained language models to perform on par with state-of-the-art large-scale models while using two orders of magnitude less compute and memory. We also investigate the effect of model size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on style transfer quality across seven diverse textual style transfer datasets, finding, among other things, that delimiter-pair choice has a large impact on performance, and that models have biases on the direction of style transfer.",0d6bb585493e34975f0437faa3179db3a02f6ae8,Semantic Scholar,,, -253,generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models,"['Varun Nair', 'Elliot Schumacher', 'Anitha Kannan']",http://arxiv.org/pdf/2305.05982,2023-05-10,,"A medical provider’s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. Even minor inaccuracies in visit summaries (for example, summarizing “patient does not have a fever” when a fever is present) can be detrimental to the outcome of care for the patient.This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.",0f0a973c6457bcaf7255f891f9b34d658a0a84ae,Semantic Scholar,,, -254,can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning,"['Mohamed Aghzal', 'E. 
Plaku', 'Ziyu Yao']",https://arxiv.org/pdf/2310.03249,2023-10-05,,"Large language models (LLMs) have achieved remarkable success across a wide spectrum of tasks; however, they still face limitations in scenarios that demand long-term planning and spatial reasoning. To facilitate this line of research, in this work, we propose a new benchmark, termed $\textbf{P}$ath $\textbf{P}$lanning from $\textbf{N}$atural $\textbf{L}$anguage ($\textbf{PPNL}$). Our benchmark evaluates LLMs' spatial-temporal reasoning by formulating ''path planning'' tasks that require an LLM to navigate to target locations while avoiding obstacles and adhering to constraints. Leveraging this benchmark, we systematically investigate LLMs including GPT-4 via different few-shot prompting methodologies and BART and T5 of various sizes via fine-tuning. Our experimental results show the promise of few-shot GPT-4 in spatial reasoning, when it is prompted to reason and act interleavedly, although it still fails to make long-term temporal reasoning. In contrast, while fine-tuned LLMs achieved impressive results on in-distribution reasoning tasks, they struggled to generalize to larger environments or environments with more obstacles.",107aa1e3b1ce604d953475baf98674e92a723bda,Semantic Scholar,,, -255,learning performanceimproving code edits,"['Aman Madaan', 'Alex Shypula', 'Uri Alon', 'Milad Hashemi', 'Parthasarathy Ranganathan', 'Yiming Yang', 'Graham Neubig', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2302.07867,2023-02-15,,"The waning of Moore's Law has shifted the focus of the tech industry towards alternative methods for continued performance gains. While optimizing compilers are a standard tool to help increase program efficiency, programmers continue to shoulder much responsibility in crafting and refactoring code with better performance characteristics. In this paper, we investigate the ability of large language models (LLMs) to suggest functionally correct, performance improving code edits. We hypothesize that language models can suggest such edits in ways that would be impractical for static analysis alone. We investigate these questions by curating a large-scale dataset of Performance-Improving Edits, PIE. PIE contains trajectories of programs, where a programmer begins with an initial, slower version and iteratively makes changes to improve the program's performance. We use PIE to evaluate and improve the capacity of large language models. Specifically, use examples from PIE to fine-tune multiple variants of CODEGEN, a billion-scale Transformer-decoder model. Additionally, we use examples from PIE to prompt OpenAI's CODEX using a few-shot prompting. By leveraging PIE, we find that both CODEX and CODEGEN can generate performance-improving edits, with speedups of more than 2.5x for over 25% of the programs, for C++ and Python, even after the C++ programs were compiled using the O3 optimization level. Crucially, we show that PIE allows CODEGEN, an open-sourced and 10x smaller model than CODEX, to match the performance of CODEX on this challenging task. Overall, this work opens new doors for creating systems and methods that can help programmers write efficient code.",1786a2f9140ed7211b21302977de64e948b92308,Semantic Scholar,,, -256,prompting palm for translation assessing strategies and performance,"['David Vilar', 'Markus Freitag', 'Colin Cherry', 'Jiaming Luo', 'Viresh Ratnakar', 'George F. 
Foster']",http://arxiv.org/pdf/2211.09102,2022-11-16,,"Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM’s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM’s MT output which reveals some interesting properties and prospects for future work.",197ba7bbfdbb052b0770088815c110774220f397,Semantic Scholar,,, -257,contextual biasing of namedentities with large language models,"['Chuanneng Sun', 'Zeeshan Ahmed', 'Yingyi Ma', 'Zhe Liu', 'Yutong Pang', 'Ozlem Kalinli']",https://arxiv.org/pdf/2309.00723,2023-09-01,,"This paper studies contextual biasing with Large Language Models (LLMs), where during second-pass rescoring additional contextual information is provided to a LLM to boost Automatic Speech Recognition (ASR) performance. We propose to leverage prompts for a LLM without fine tuning during rescoring which incorporate a biasing list and few-shot examples to serve as additional information when calculating the score for the hypothesis. In addition to few-shot prompt learning, we propose multi-task training of the LLM to predict both the entity class and the next token. To improve the efficiency for contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we propose dynamic prompting, where we select the most likely class using the class tag prediction, and only use entities in this class as contexts for next token prediction. Word Error Rate (WER) evaluation is performed on i) an internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli dataset. Results indicate that biasing lists and few-shot examples can achieve 17.8% and 9.6% relative improvement compared to first pass ASR, and that multi-task training and dynamic prompting can achieve 20.0% and 11.3% relative WER improvement, respectively.",1ed5d06c4dc46e6a983597b740ab0a31d0ce22ad,Semantic Scholar,,, -258,mixpro simple yet effective data augmentation for promptbased learning,"['Bohan Li', 'Longxu Dou', 'Yutai Hou', 'Yunlong Feng', 'Honglin Mu', 'Wanxiang Che']",http://arxiv.org/pdf/2304.09402,2023-04-19,,"Prompt-based learning reformulates downstream tasks as cloze problems by combining the original input with a template. This technique is particularly useful in few-shot learning, where a model is trained on a limited amount of data. However, the limited templates and text used in few-shot prompt-based learning still leave significant room for performance improvement. Additionally, existing methods using model ensembles can constrain the model efficiency. To address these issues, we propose an augmentation method called MixPro, which augments both the vanilla input text and the templates through token-level, sentence-level, and epoch-level Mixup strategies. 
We conduct experiments on five few-shot datasets, and the results show that MixPro outperforms other augmentation baselines, improving model performance by an average of 5.08% compared to before augmentation.",1f0dfbbc13ac31de8709bbb4d0f6478aa1222cef,Semantic Scholar,,, -259,mapl parameterefficient adaptation of unimodal pretrained models for visionlanguage fewshot prompting,"['Oscar Mañas', 'Pau Rodríguez López', 'Saba Ahmadi', 'Aida Nematzadeh', 'Yash Goyal', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2210.07179,2022-10-13,,"Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method that reuses frozen pre-trained unimodal models and leverages their strong generalization capabilities in multimodal vision-language (VL) settings. MAPL learns a lightweight mapping between the representation spaces of unimodal models using aligned image-text data, and can generalize to unseen VL tasks from just a few in-context examples. The small number of trainable parameters makes MAPL effective at low-data and in-domain learning. Moreover, MAPL’s modularity enables easy extension to other pre-trained models. Extensive experiments on several visual question answering and image captioning benchmarks show that MAPL achieves superior or competitive performance compared to similar methods while training orders of magnitude fewer parameters. MAPL can be trained in just a few hours using modest computational resources and public datasets. We release our code and pre-trained model weights at https://github.com/oscmansan/mapl.",1f86bf1e334200ec0481349255559fbfe7a33caa,Semantic Scholar,,, -260,dspy compiling declarative language model calls into selfimproving pipelines,"['O. Khattab', 'Arnav Singhvi', 'Paridhi Maheshwari', 'Zhiyuan Zhang', 'Keshav Santhanam', 'Sri Vardhamanan', 'Saiful Haq', 'Ashutosh Sharma', 'Thomas T. Joshi', 'Hanna Moazam', 'Heather Miller', 'Matei Zaharia', 'Christopher Potts']",https://arxiv.org/pdf/2310.03714,2023-10-05,,"The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded""prompt templates"", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). 
On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at https://github.com/stanfordnlp/dspy",2069aaaa281eb13bcd9330fc4d43f24f6b436a53,Semantic Scholar,,, -261,interrolang exploring nlp models and datasets through dialoguebased explanations,"['Nils Feldhus', 'Qianli Wang', 'Tatiana Anikina', 'Sahil Chopra', 'Cennet Oguz', 'Sebastian Möller']",https://arxiv.org/pdf/2310.05592,2023-10-09,,"While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel Adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model's predicted label when it's not shown. We found rationalization and feature attribution were helpful in explaining the model behavior. Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than one-off explanations.",2522410b1cac0c14fa656a0aaeaff08bacb358a9,Semantic Scholar,,, -262,multilingual evaluation of code generation models,"['Ben Athiwaratkun', 'Sanjay Krishna Gouda', 'Zijian Wang', 'Xiaopeng Li', 'Yuchen Tian', 'Ming Tan', 'Wasi Uddin Ahmad', 'Shiqi Wang', 'Qing Sun', 'Mingyue Shang', 'Sujan Kumar Gonugondla', 'Hantian Ding', 'Varun Kumar', 'Nathan Fulton', 'A. Farahani', 'Siddharth Jain', 'Robert Giaquinto', 'Haifeng Qian', 'M. Ramanathan', 'Ramesh Nallapati', 'Baishakhi Ray', 'Parminder Bhatia', 'Sudipta Sengupta', 'D. Roth', 'Bing Xiang']",http://arxiv.org/pdf/2210.14868,2022-10-27,,"We present new benchmarks on evaluation code generation models: MBXP and Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming languages and are generated using a scalable conversion framework that transpiles prompts and test cases from the original Python datasets into the corresponding data in the target language. Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and discovered generalization ability of language models on out-of-domain languages, advantages of multi-lingual models over mono-lingual, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even on mono-lingual settings. Furthermore, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks. 
Overall, our benchmarks represents a significant step towards a deeper understanding of language models' code generation abilities. We publicly release our code and datasets at https://github.com/amazon-research/mxeval.",2577d053f8aab912d29b424e1f09133d83740fd2,Semantic Scholar,,, -263,towards using fewshot prompt learning for automating model completion,"['Meriem Ben Chaaben', 'Lola Burgueño', 'H. Sahraoui']",https://arxiv.org/pdf/2212.03404,2022-12-07,,We propose a simple yet a novel approach to improve completion in domain modeling activities. Our approach exploits the power of large language models by using few-shot prompt learning without the need to train or fine-tune those models with large datasets that are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways during the modeling activities.,2a99239f09e95f4dbccec572d66f4519206762f9,Semantic Scholar,,, -264,"better patching using llm prompting, via selfconsistency","['Toufique Ahmed', 'Prem Devanbu']",https://arxiv.org/pdf/2306.00108,2023-05-31,,"Large Language models (LLMs) can be induced to solve non-trivial problems with “few-shot” prompts including illustrative problem-solution examples. Now if the few-shots also include “chain of thought” ($\mathcal{C}oT$) explanations, which are of the form problem-explanation-solution, LLMs will generate a “explained” solution, and perform even better. Recently an exciting, substantially better technique, self-consistency [1] ($\mathcal{S}-C$) has emerged, based on the intuition that there are many plausible explanations for the right solution; when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs, for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct! Unfortunately, the use of this highly-performant $\mathcal{S}-C$ (or even $\mathcal{C}oT$) approach in software engineering settings is hampered by the lack of explanations; most software datasets lack explanations. In this paper, we describe an application of the $\mathcal{S}-C$ approach to program repair, using the commit log on the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the art results, beating previous approaches to prompting-based program repair, on the MODIT dataset; we also find evidence suggesting that the correct commit messages are helping the LLM learn to produce better patches.",32426b96ff3c680125bde3b835bfa931288b8ade,Semantic Scholar,,, -265,large language model augmented narrative driven recommendations,"['Sheshera Mysore', 'A. McCallum', 'Hamed Zamani']",https://arxiv.org/pdf/2306.02250,2023-06-04,,"Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context, for example, travelers soliciting recommendations for points of interest while describing their likes/dislikes and travel circumstances. These requests are increasingly important with the rise of natural language-based conversational interfaces for search and recommendation systems. However, NDR lacks abundant training data for models, and current platforms commonly do not support these requests. 
Fortunately, classical user-item interaction datasets contain rich textual data, e.g., reviews, which often describe user preferences and context – this may be used to bootstrap training for NDR models. In this work, we explore using large language models (LLMs) for data augmentation to train NDR models. We use LLMs for authoring synthetic narrative queries from user-item interactions with few-shot prompting and train retrieval models for NDR on synthetic queries and user-item interaction data. Our experiments demonstrate that this is an effective strategy for training small-parameter retrieval models that outperform other retrieval and LLM baselines for narrative-driven recommendation.",3566e1245bfc90096fe0cdb8b18674da6519c8d6,Semantic Scholar,,, -266,a comprehensive survey on pretrained foundation models a history from bert to chatgpt,"['Ce Zhou', 'Qian Li', 'Chen Li', 'Jun Yu', 'Yixin Liu', 'Guan Wang', 'Kaichao Zhang', 'Cheng Ji', 'Qi Yan', 'Lifang He', 'Hao Peng', 'Jianxin Li', 'Jia Wu', 'Ziwei Liu', 'P. Xie', 'Caiming Xiong', 'Jian Pei', 'Philip S. Yu', 'Lichao Sun Michigan State University', 'B. University', 'Lehigh University', 'M. University', 'Nanyang Technological University', 'University of California at San Diego', 'D. University', 'U. Chicago', 'Salesforce AI Research']",http://arxiv.org/pdf/2302.09419,2023-02-18,,"Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable parameter initialization for a wide range of downstream applications. BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the generative pretrained transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT shows promising success on large language models, which applies an autoregressive language model with zero shot or few shot prompting. The remarkable achievements of PFM have brought significant breakthroughs to various fields of AI. Numerous studies have proposed different methods, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs used for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. Overall, this survey aims to shed light on the research of the PFMs on scalability, security, logical reasoning ability, cross-domain learning ability, and the user-friendly interactive ability for artificial general intelligence.",3599a236f285af48782fc30b1341d13ec7320735,Semantic Scholar,,, -267,language model crossover variation through fewshot prompting,"['Elliot Meyerson', 'M. Nelson', 'Herbie Bradley', 'Arash Moradi', 'Amy K. Hoover', 'J. 
Lehman']",https://arxiv.org/pdf/2302.12170,2023-02-23,,"This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.",3841234dd49250c4fcbba79eed6593d3b57932c1,Semantic Scholar,,, -268,mathattack attacking large language models towards math solving ability,"['Zihao Zhou', 'Qiufeng Wang', 'Mingyu Jin', 'Jie Yao', 'Jianan Ye', 'Wei Liu', 'Wei Wang', 'Xiaowei Huang', 'Kaizhu Huang']",https://arxiv.org/pdf/2309.01686,2023-09-04,,"With the boom of Large Language Models (LLMs), the research of solving Math Word Problem (MWP) has recently made great progress. However, there are few studies to examine the security of LLMs in math solving ability. Instead of attacking prompts in the use of LLMs, we propose a MathAttack model to attack MWP samples which are closer to the essence of security in solving math problems. Compared to traditional text adversarial attack, it is essential to preserve the mathematical logic of original MWPs during the attacking. To this end, we propose logical entity recognition to identify logical entries which are then frozen. Subsequently, the remaining text are attacked by adopting a word-level attacker. Furthermore, we propose a new dataset RobustMath to evaluate the robustness of LLMs in math solving ability. Extensive experiments on our RobustMath and two another math benchmark datasets GSM8K and MultiAirth show that MathAttack could effectively attack the math solving ability of LLMs. In the experiments, we observe that (1) Our adversarial samples from higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy (e.g., transfer from larger to smaller-size LLMs, or from few-shot to zero-shot prompts); (2) Complex MWPs (such as more solving steps, longer text, more numbers) are more vulnerable to attack; (3) We can improve the robustness of LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observation can serve as an important attempt towards enhancing the robustness of LLMs in math solving ability. 
We will release our code and dataset.",3886f3bd2a0af9e75bf9fa5b7db4224969dbf346,Semantic Scholar,,, -269,fineval a chinese financial domain knowledge evaluation benchmark for large language models,"['Liwen Zhang', 'Wei Cai', 'Zhaowei Liu', 'Zhi Yang', 'Wei Dai', 'Yujie Liao', 'Qi Qin', 'Yifei Li', 'Xingxian Liu', 'Zhiqiang Liu', 'Zhoufan Zhu', 'Anbo Wu', 'Xinnan Guo', 'Yun Chen']",https://arxiv.org/pdf/2308.09975,2023-08-19,,"Large language models (LLMs) have demonstrated exceptional performance in various natural language processing tasks, yet their efficacy in more challenging and domain-specific tasks remains largely unexplored. This paper presents FinEval, a benchmark specifically designed for the financial domain knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice questions covering Finance, Economy, Accounting, and Certificate. It includes 4,661 questions spanning 34 different academic subjects. To ensure a comprehensive model performance evaluation, FinEval employs a range of prompt types, including zero-shot and few-shot prompts, as well as answer-only and chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs on FinEval, the results show that only GPT-4 achieved an accuracy close to 70% in different prompt settings, indicating significant growth potential for LLMs in the financial domain knowledge. Our work offers a more comprehensive financial knowledge evaluation benchmark, utilizing data of mock exams and covering a wide range of evaluated LLMs.",3b88526a0f0337e3a6b632b4af8fd0882eb4b470,Semantic Scholar,,, -270,model ensemble instead of prompt fusion a samplespecific knowledge transfer method for fewshot prompt tuning,"['Xiangyu Peng', 'Chen Xing', 'Prafulla Kumar Choubey', 'Chien-Sheng Wu', 'Caiming Xiong']",http://arxiv.org/pdf/2210.12587,2022-10-23,,"Prompt tuning approaches, which learn task-specific soft prompts for a downstream task conditioning on frozen pre-trained models, have attracted growing interest due to its parameter efficiency. With large language models and sufficient training data, prompt tuning performs comparably to full-model tuning. However, with limited training samples in few-shot settings, prompt tuning fails to match the performance of full-model fine-tuning. In this work, we focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks. Recognizing the good generalization capabilities of ensemble methods in low-data regime, we first experiment and show that a simple ensemble of model predictions based on different source prompts, outperforms existing multi-prompt knowledge transfer approaches such as source prompt fusion in the few-shot setting. Motivated by this observation, we further investigate model ensembles and propose Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs. Through this way, SESoM inherits the superior generalization of model ensemble approaches and simultaneously captures the sample-specific competence of each source prompt. 
We conduct experiments across a diverse set of eight NLP tasks using models of different scales (T5-{base, large, XL}) and find that SESoM consistently outperforms the existing models of the same as well as larger parametric scale by a large margin.",3d7d385d9ee75a286e8da27f7d3cf9f12651c899,Semantic Scholar,,, -271,code as policies language model programs for embodied control,"['Jacky Liang', 'Wenlong Huang', 'F. Xia', 'Peng Xu', 'Karol Hausman', 'Brian Ichter', 'Peter R. Florence', 'Andy Zeng']",https://arxiv.org/pdf/2209.07753,2022-09-16,,"Large language models (LLMs) trained on code-completion have been shown to be capable of synthesizing simple Python programs from docstrings [1]. We find that these code-writing LLMs can be re-purposed to write robot policy code, given natural language commands. Specifically, policy code can express functions or feedback loops that process perception outputs (e.g., from object detectors [2], [3]) and parameterize control primitive APIs. When provided as input several example language commands (formatted as comments) followed by corresponding policy code (via few-shot prompting), LLMs can take in new commands and autonomously re-compose API calls to generate new policy code respectively. By chaining classic logic structures and referencing third-party libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) generalize to new instructions, and (iii) prescribe precise values (e.g., velocities) to ambiguous descriptions (‘faster’) depending on context (i.e., behavioral commonsense). This paper presents Code as Policies: a robot-centric formulation of language model generated programs (LMPs) that can represent reactive policies (e.g., impedance controllers), as well as waypoint-based policies (vision-based pick and place, trajectory-based control), demonstrated across multiple real robot platforms. Central to our approach is prompting hierarchical code-gen (recursively defining undefined functions), which can write more complex code and also improves state-of-the-art to solve 39.8% of problems on the HumanEval [1] benchmark. Code and videos are available at https://code-as-policies.github.io",41531594d7e0f3b2e138ae43e0a0f6e24a9b014c,Semantic Scholar,,, -272,tool documentation enables zeroshot toolusage with large language models,"['Cheng-Yu Hsieh', 'Sibei Chen', 'Chun-Liang Li', 'Yasuhisa Fujii', 'Alexander J. Ratner', 'Chen-Yu Lee', 'Ranjay Krishna', 'Tomas Pfister']",https://arxiv.org/pdf/2308.00675,2023-08-01,,"Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. 
First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.",446fb5dead075a1a08862662738f462e9a0e91c8,Semantic Scholar,,, -273,"text and patterns for effective chain of thought, it takes two to tango","['Aman Madaan', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2209.07686,2022-09-16,,"The past decade has witnessed dramatic gains in natural language processing and an unprecedented scaling of large language models. These developments have been accelerated by the advent of few-shot techniques such as chain of thought (CoT) prompting. Specifically, CoT pushes the performance of large language models in a few-shot setup by augmenting the prompts with intermediate steps. Despite impressive results across various tasks, the reasons behind their success have not been explored. This work uses counterfactual prompting to develop a deeper understanding of CoT-based few-shot prompting mechanisms in large language models. We first systematically identify and define the key components of a prompt: symbols, patterns, and text. Then, we devise and conduct an exhaustive set of experiments across four different tasks, by querying the model with counterfactual prompts where only one of these components is altered. Our experiments across three models (PaLM, GPT-3, and CODEX) reveal several surprising findings and brings into question the conventional wisdom around few-shot prompting. First, the presence of factual patterns in a prompt is practically immaterial to the success of CoT. Second, our results conclude that the primary role of intermediate steps may not be to facilitate learning how to solve a task. The intermediate steps are rather a beacon for the model to realize what symbols to replicate in the output to form a factual answer. Further, text imbues patterns with commonsense knowledge and meaning. Our empirical and qualitative analysis reveals that a symbiotic relationship between text and patterns explains the success of few-shot prompting: text helps extract commonsense from the question to help patterns, and patterns enforce task understanding and direct text generation.",4988b3d378b79eb8669112620baf1ff4e3e536fd,Semantic Scholar,,, -274,revisiting nonenglish text simplification a unified multilingual benchmark,"['Michael Joseph Ryan', 'Tarek Naous', 'Wei Xu']",http://arxiv.org/pdf/2305.15678,2023-05-25,,"Recent advancements in high-quality, large-scale English resources have pushed the frontier of English Automatic Text Simplification (ATS) research. However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages. 
This paper introduces the MultiSim benchmark, a collection of 27 resources in 12 distinct languages containing over 1.7 million complex-simple sentence pairs. This benchmark will encourage research in developing more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance improvements from multilingual training in non-English settings. We observe strong performance from Russian in zero-shot cross-lingual transfer to low-resource languages. We further show that few-shot prompting with BLOOM-176b achieves comparable quality to reference simplifications outperforming fine-tuned models in most languages. We validate these findings through human evaluation.",4e1a4d6804c7983c659feb7e41d49ad8c21aaa43,Semantic Scholar,,, -275,towards informative fewshot prompt with maximum information gain for incontext learning,"['Hongfu Liu', 'Ye Wang']",https://arxiv.org/pdf/2310.08923,2023-10-13,,"Large Language models (LLMs) possess the capability to engage In-context Learning (ICL) by leveraging a few demonstrations pertaining to a new downstream task as conditions. However, this particular learning paradigm suffers from high instability stemming from substantial variances induced by factors such as the input distribution of selected examples, their ordering, and prompt formats. In this work, we demonstrate that even when all these factors are held constant, the random selection of examples still results in high variance. Consequently, we aim to explore the informative ability of data examples by quantifying the Information Gain (IG) obtained in prediction after observing a given example candidate. Then we propose to sample those with maximum IG. Additionally, we identify the presence of template bias, which can lead to unfair evaluations of IG during the sampling process. To mitigate this bias, we introduce Calibration Before Sampling strategy. The experimental results illustrate that our proposed method can yield an average relative improvement of 14.3% across six classification tasks using three LLMs.",53addc28b106440a3c306b2cff8e259ad63d6d53,Semantic Scholar,,, -276,building cooperative embodied agents modularly with large language models,"['Hongxin Zhang', 'Weihua Du', 'Jiaming Shan', 'Qinhong Zhou', 'Yilun Du', 'J. Tenenbaum', 'Tianmin Shu', 'Chuang Gan']",https://arxiv.org/pdf/2307.02485,2023-07-05,,"Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. 
Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.",587352c3b95c90de6d37f061c8e117f42be0b575,Semantic Scholar,,, -277,meal stable and active learning for fewshot prompting,"['Abdullatif Köksal', 'Timo Schick', 'Hinrich Schutze']",http://arxiv.org/pdf/2211.08358,2022-11-15,,"Few-shot classification has made great strides due to foundation models that, through priming and prompting, are highly effective few-shot learners. However, this approach has high variance both across different sets of few shots (data selection) and across different finetuning runs (run variability). This is problematic not only because it impedes the fair comparison of different approaches, but especially because it makes few-shot learning too unreliable for many real-world applications. To alleviate these issues, we make two contributions for more stable and effective few-shot learning: First, we propose novel ensembling methods and show that they substantially reduce run variability. Second, we introduce a new active learning (AL) criterion for data selection and present the first AL-based approach specifically tailored towards prompt-based learning. In our experiments, we show that our combined method, MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), improves overall performance of prompt-based finetuning by 2.3 points on five diverse tasks.",5df5ebcaed745a5252b4fae64dc1d7ca90e68ff6,Semantic Scholar,,, -278,consprompt easily exploiting contrastive samples for fewshot prompt learning,"['Jinta Weng', 'Yue Hu', 'Zhihong Tian', 'Heyan Huang']",https://arxiv.org/pdf/2211.04118,2022-11-08,,"Prompt learning recently become an effective linguistic tool to motivate the PLMs’ knowledge on few-shot-setting tasks. However, studies have shown the lack of robustness still exists in prompt learning, since suitable initialization of continuous prompt and expert-first manual prompt are essential in fine-tuning process. What is more, human also utilize their comparative ability to motivate their existing knowledge for distinguishing different examples. Motivated by this, we explore how to use contrastive samples to strengthen prompt learning. In detail, we first propose our model ConsPrompt combining with prompt encoding network, contrastive sampling module, and contrastive scoring module. Subsequently, two sampling strategies, similarity-based and label-based strategies, are introduced to realize differential contrastive learning. The effectiveness of proposed ConsPrompt is demonstrated in five different few-shot learning tasks and shown the similarity-based sampling strategy is more effective than label-based in combining contrastive learning. Our results also exhibits the state-of-the-art performance and robustness in different few-shot settings, which proves that the ConsPrompt could be assumed as a better knowledge probe to motivate PLMs. As far as we could reach, this is the first work exploring how to use contrastive learning approach and suitable contrastive samples to enhance prompt-based fine-tuning.",5e3675bdbe898cb28a0fc3c2f72a578a97fe64bb,Semantic Scholar,,, -279,can gpt3 perform statutory reasoning,"['Andrew Blair-Stanek', 'Nils Holzenberger', 'Benjamin Van Durme']",https://arxiv.org/pdf/2302.06100,2023-02-13,,"Statutory reasoning is the task of reasoning with facts and statutes, which are rules written in natural language by a legislature. It is a basic legal skill.
In this paper we explore the capabilities of the most capable GPT-3 model, text-davinci-003, on an established statutory-reasoning dataset called SARA. We consider a variety of approaches, including dynamic few-shot prompting, chain-of-thought prompting, and zero-shot prompting. While we achieve results with GPT-3 that are better than the previous best published results, we also identify several types of clear errors it makes. We investigate why these errors happen. We discover that GPT-3 has imperfect prior knowledge of the actual U.S. statutes on which SARA is based. More importantly, we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen during training. We find GPT-3 performs poorly at answering straightforward questions about these simple synthetic statutes.",5f5253fb15ac382e96ade0335baf1cfaa240fb1d,Semantic Scholar,,, -280,explainable verbal reasoner plus (evr+) a natural language reasoning framework that supports diverse compositional reasoning,"['Zhengzhong Liang', 'Zeyu Zhang', 'Steven Bethard', 'M. Surdeanu']",http://arxiv.org/pdf/2305.00061,2023-04-28,,"Languages models have been successfully applied to a variety of reasoning tasks in NLP, yet the language models still suffer from compositional generalization. In this paper we present Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that enhances language models' compositional reasoning ability by (1) allowing the model to explicitly generate and execute symbolic operators, and (2) allowing the model to decompose a complex task into several simpler ones in a flexible manner. Compared with its predecessor Explainable Verbal Reasoner (EVR) and other previous approaches adopting similar ideas, our framework supports more diverse types of reasoning such as nested loops and different types of recursion. To evaluate our reasoning framework, we build a synthetic dataset with five tasks that require compositional reasoning. Results show that our reasoning framework can enhance the language model's compositional generalization performance on the five tasks, using a fine-tuned language model. We also discussed the possibility and the challenges to combine our reasoning framework with a few-shot prompted language model.",5f88b907cb6b79ce22e826832f05c0471ecb095e,Semantic Scholar,,, -281,language models as knowledge bases for visual word sense disambiguation,"['Anastasia Kritharoula', 'Maria Lymperaiou', 'G. Stamou']",https://arxiv.org/pdf/2310.01960,2023-10-03,,"Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies between linguistic sense disambiguation and fine-grained multimodal retrieval. The recent advancements in the development of visiolinguistic (VL) transformers suggest some off-the-self implementations with encouraging results, which however we argue that can be further improved. To this end, we propose some knowledge-enhancement techniques towards improving the retrieval performance of VL transformers via the usage of Large Language Models (LLMs) as Knowledge Bases. More specifically, knowledge stored in LLMs is retrieved with the help of appropriate prompts in a zero-shot manner, achieving performance advancements. Moreover, we convert VWSD to a purely textual question-answering (QA) problem by considering generated image captions as multiple-choice candidate answers. 
Zero-shot and few-shot prompting strategies are leveraged to explore the potential of such a transformation, while Chain-of-Thought (CoT) prompting in the zero-shot setting is able to reveal the internal reasoning steps an LLM follows to select the appropriate candidate. In total, our presented approach is the first one to analyze the merits of exploiting knowledge stored in LLMs in different ways to solve WVSD.",61bbdbf481a6d3519c22513ebe8d6c3cd381851e,Semantic Scholar,,, -282,challenging bigbench tasks and whether chainofthought can solve them,"['Mirac Suzgun', 'Nathan Scales', 'Nathanael Scharli', 'Sebastian Gehrmann', 'Yi Tay', 'Hyung Won Chung', 'Aakanksha Chowdhery', 'Quoc V. Le', 'E. Chi', 'Denny Zhou', 'Jason Wei']",http://arxiv.org/pdf/2210.09261,2022-10-17,,"BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models? In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the task for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.",663a41c866d49ce052801fbc88947d39764cad29,Semantic Scholar,,, -283,fireact toward language agent finetuning,"['Baian Chen', 'Chang Shu', 'Ehsan Shareghi', 'Nigel Collier', 'Karthik Narasimhan', 'Shunyu Yao']",https://arxiv.org/pdf/2310.05915,2023-10-09,,"Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show having more diverse fine-tuning data can further improve agents. 
Along with other findings regarding scaling effects, robustness, generalization, efficiency and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, as well as open questions toward language agent fine-tuning.",67daf8c4fe1958d20ebdf95c2a36dd490c73836f,Semantic Scholar,,, -284,natural language decomposition and interpretation of complex utterances,"['Harsh Jhamtani', 'Hao Fang', 'Patrick Xia', 'Eran Levy', 'Jacob Andreas', 'Benjamin Van Durme']",http://arxiv.org/pdf/2305.08677,2023-05-15,,"Natural language interfaces often require supervised data to translate user requests into programs, database queries, or other structured intent representations. During data collection, it can be difficult to anticipate and formalize the full range of user needs -- for example, in a system designed to handle simple requests (like $\textit{find my meetings tomorrow}$ or $\textit{move my meeting with my manager to noon})$, users may also express more elaborate requests (like $\textit{swap all my calls on Monday and Tuesday}$). We introduce an approach for equipping a simple language-to-code model to handle complex utterances via a process of hierarchical natural language decomposition. Our approach uses a pre-trained language model to decompose a complex utterance into a sequence of smaller natural language steps, then interprets each step using the language-to-code model. To test our approach, we collect and release DeCU -- a new NL-to-program benchmark to evaluate Decomposition of Complex Utterances. Experiments show that the proposed approach enables the interpretation of complex utterances with almost no complex training data, while outperforming standard few-shot prompting approaches.",68040213e9a83408cdc491ed3e235b52b537eed1,Semantic Scholar,,, -285,pal programaided language models,"['Luyu Gao', 'Aman Madaan', 'Shuyan Zhou', 'Uri Alon', 'Pengfei Liu', 'Yiming Yang', 'Jamie Callan', 'Graham Neubig']",http://arxiv.org/pdf/2211.10435,2022-11-18,,"Large language models (LLMs) have recently demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time (""few-shot prompting""). Much of this success can be attributed to prompting methods such as""chain-of-thought'', which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. 
For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B which uses chain-of-thought by absolute 15% top-1. Our code and data are publicly available at http://reasonwithpal.com/ .",6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7,Semantic Scholar,,, -286,prompted llms as chatbot modules for long opendomain conversation,"['Gibbeum Lee', 'Volker Hartmann', 'Jongho Park', 'Dimitris Papailiopoulos', 'Kangwook Lee']",http://arxiv.org/pdf/2305.04533,2023-05-08,,"In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for creating high-quality conversational agents without the need for fine-tuning. Our method utilizes pre-trained large language models (LLMs) as individual modules for long-term consistency and flexibility, by using techniques such as few-shot prompting, chain-of-thought (CoT), and external memory. Our human evaluation results show that MPC is on par with fine-tuned chatbot models in open-domain conversations, making it an effective solution for creating consistent and engaging chatbots.",700da3f3758e053c379f905bebee261ba69f1073,Semantic Scholar,,, -287,prompting gpt3 to be reliable,"['Chenglei Si', 'Zhe Gan', 'Zhengyuan Yang', 'Shuohang Wang', 'Jianfeng Wang', 'Jordan L. Boyd-Graber', 'Lijuan Wang']",http://arxiv.org/pdf/2210.09150,2022-10-17,,"Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose reliability into four main facets that correspond to the existing framework of ML safety and are well-recognized to be important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3's reliability as it: 1) generalizes out-of-distribution, 2) balances demographic distribution and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only sheds new insights on the reliability of prompting LLMs, but more importantly, our prompting strategies can help practitioners more reliably use LLMs like GPT-3.",711d5e8ddbb840ad31a9ffa3d38590603ba69a92,Semantic Scholar,,, -288,understanding how model size affects fewshot instruction prompting,"['Ayrton San Joaquin', 'Ardy Haroen']",https://arxiv.org/pdf/2212.01907,2022-12-04,,"Large Language Models are affected by the phenomena of memorizing and forgetting their training data. But how do these vary by model size? We work towards this question by investigating how the model size affects the model's ability to discriminate a word's meaning in a given context. We introduce a dataset called DeltaWords, which evaluates a model's ability to follow instructions to select a sentence which replaces the target word with its antonym. We show a weak inverse scaling trend, where task accuracy degrades as model size increase, under extremely few-shot prompting regimes. 
We show that increasing the number of examples tend to disproportionately benefit larger models than smaller models.",72491b96d8a614d1a9a099707d44593d4b5a8f49,Semantic Scholar,,, -289,smartllm smart multiagent robot task planning using large language models,"['S. S. Kannan', 'Vishnunandan L. N. Venkatesh', 'Byung-Cheol Min']",https://arxiv.org/pdf/2309.10062,2023-09-18,,"In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-llm/.",755853c6b30f5a186131e23a63c68a3f2737068e,Semantic Scholar,,, -290,selfexplanation prompting improves dialogue understanding in large language models,"['Haoyu Gao', 'Ting-En Lin', 'Hangyu Li', 'Min Yang', 'Yuchuan Wu', 'Wentao Ma', 'Yongbin Li']",https://arxiv.org/pdf/2309.12940,2023-09-22,,"Task-oriented dialogue (TOD) systems facilitate users in executing various activities via multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel""Self-Explanation""prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks. Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool in enhancing LLMs' comprehension in complex dialogue tasks.",75ce9634d281cc12cbe434f86c737df8e10796fa,Semantic Scholar,,, -291,visualizing linguistic diversity of text datasets synthesized by large language models,"['Emily Reif', 'Minsuk Kahng', 'S. Petridis']",https://arxiv.org/pdf/2305.11364,2023-05-19,,"Large language models (LLMs) can be used to generate smaller, more refined datasets via few-shot prompting for benchmarking, fine-tuning or other use cases. However, understanding and evaluating these datasets is difficult, and the failure modes of LLM-generated data are still not well understood. Specifically, the data can be repetitive in surprising ways, not only semantically but also syntactically and lexically. We present LinguisticLens, a novel inter-active visualization tool for making sense of and analyzing syntactic diversity of LLM-generated datasets. LinguisticLens clusters text along syntactic, lexical, and semantic axes. It supports hierarchical visualization of a text dataset, allowing users to quickly scan for an overview and inspect individual examples. 
The live demo is available at shorturl.at/zHOUV.",7655f05cd394da6cb0f707068203c9ff05d8f05a,Semantic Scholar,,, -292,transferring procedural knowledge across commonsense tasks,"['Yifan Jiang', 'Filip Ilievski', 'Kaixin Ma']",https://arxiv.org/pdf/2304.13867,2023-04-26,,"Stories about everyday situations are an essential part of human communication, motivating the need to develop AI agents that can reliably understand these stories. Despite the long list of supervised methods for story completion and procedural understanding, current AI has no mechanisms to automatically track and explain procedures in unseen stories. To bridge this gap, we study the ability of AI models to transfer procedural knowledge to novel narrative tasks in a transparent manner. We design LEAP: a comprehensive framework that integrates state-of-the-art modeling architectures, training regimes, and augmentation strategies based on both natural and synthetic stories. To address the lack of densely annotated training data, we devise a robust automatic labeler based on few-shot prompting to enhance the augmented data. Our experiments with in- and out-of-domain tasks reveal insights into the interplay of different architectures, training regimes, and augmentation strategies. LEAP's labeler has a clear positive impact on out-of-domain datasets, while the resulting dense annotation provides native explainability.",7beec352ac2597c3cd3dc7aceb2f8cd068b72d15,Semantic Scholar,,, -293,exploring the landscape of distributional robustness for question answering models,"['Anas Awadalla', 'Mitchell Wortsman', 'Gabriel Ilharco', 'Sewon Min', 'Ian H. Magnusson', 'Hannaneh Hajishirzi', 'Ludwig Schmidt']",http://arxiv.org/pdf/2210.12517,2022-10-22,,"We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter tuning, in-context learning, etc.). We find that, in many cases, model variations do not affect robustness and in-distribution performance alone determines out-of-distribution performance. Moreover, our findings indicate that i) zero-shot and in-context learning methods are more robust to distribution shifts than fully fine-tuned models; ii) few-shot prompt fine-tuned models exhibit better robustness than few-shot fine-tuned span prediction models; iii) parameter-efficient and robustness enhancing training methods provide no significant robustness improvements. In addition, we publicly release all evaluations to encourage researchers to further analyze robustness trends for question answering models.",7cf4f8cb8b4a373d869e785b79160dda7a49a250,Semantic Scholar,,, -294,language models don't always say what they think unfaithful explanations in chainofthought prompting,"['Miles Turpin', 'Julian Michael', 'Ethan Perez', 'Sam Bowman']",http://arxiv.org/pdf/2305.04388,2023-05-07,,"Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. 
We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs -- e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always""(A)""-- which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations supporting those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. CoT is promising for explainability, but our results highlight the need for targeted efforts to evaluate and improve explanation faithfulness.",7dc928f41e15f65f1267bd87b0fcfcc7e715cb56,Semantic Scholar,,, -295,zara improving fewshot selfrationalization for small language models,"['Wei-Lin Chen', 'An-Zi Yen', 'Hen-Hsen Huang', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.07355,2023-05-12,,"Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gain for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show ZARA achieves SOTA performance on the FEB benchmark, for both the task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluation validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.",7df3595bdb4003589e8ca1757cc39ec03a39a2ff,Semantic Scholar,,, -296,natural language to code generation in interactive data science notebooks,"['Pengcheng Yin', 'Wen-Ding Li', 'Kefan Xiao', 'A. Rao', 'Yeming Wen', 'Kensen Shi', 'Joshua Howland', 'Paige Bailey', 'Michele Catasta', 'H. Michalewski', 'Alex Polozov', 'Charles Sutton']",http://arxiv.org/pdf/2212.09248,2022-12-19,,"Computational notebooks, such as Jupyter notebooks, are interactive computing environments that are ubiquitous among data scientists to perform data wrangling and analytic tasks. To measure the performance of AI pair programmers that automatically synthesize programs for those tasks given natural language (NL) intents from users, we build ARCADE, a benchmark of 1078 code generation problems using the pandas data analysis framework in data science notebooks. ARCADE features multiple rounds of NL-to-code problems from the same notebook. It requires a model to understand rich multi-modal contexts, such as existing notebook cells and their execution states as well as previous turns of interaction. 
To establish a strong baseline on this challenging task, we develop PaChiNCo, a 62B code language model (LM) for Python computational notebooks, which significantly outperforms public code LMs. Finally, we explore few-shot prompting strategies to elicit better code with step-by-step decomposition and NL explanation, showing the potential to improve the diversity and explainability of model predictions. Arcade is publicly available at https://github.com/google-research/arcade-nl2code/.",815c6ca281536d18ec0eb408b6e46e72a0826163,Semantic Scholar,,, -297,multiparty chat conversational agents in group settings with humans and models,"['Jimmy Wei', 'Kurt Shuster', 'Arthur Szlam', 'J. Weston', 'Jack Urbanek', 'M. Komeili']",http://arxiv.org/pdf/2304.13835,2023-04-26,,"Current dialogue research primarily studies pairwise (two-party) conversations, and does not address the everyday setting where more than two speakers converse together. In this work, we both collect and evaluate multi-party conversations to study this more general case. We use the LIGHT environment to construct grounded conversations, where each participant has an assigned character to role-play. We thus evaluate the ability of language models to act as one or more characters in such conversations. Models require two skills that pairwise-trained models appear to lack: (1) being able to decide when to talk; (2) producing coherent utterances grounded on multiple characters. We compare models trained on our new dataset to existing pairwise-trained dialogue models, as well as large language models with few-shot prompting. We find that our new dataset, MultiLIGHT, which we will publicly release, can help bring significant improvements in the group setting.",82beb8a86d438e85a134182128d47607b1b04004,Semantic Scholar,,, -298,towards legally enforceable hate speech detection for public forums,"['Chunyan Luo', 'R. Bhambhoria', 'Xiao-Dan Zhu', 'Samuel Dahan']",http://arxiv.org/pdf/2305.13677,2023-05-23,,"Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.",895f3c9e452ae51fb02786de424ce6d2bba11c3b,Semantic Scholar,,, -299,usb a unified summarization benchmark across tasks and domains,"['Kundan Krishna', 'Prakhar Gupta', 'S. Ramprasad', 'Byron C. Wallace', 'Jeffrey P. 
Bigham', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2305.14296,2023-05-23,,"An abundance of datasets exist for training and evaluating models on the task of summary generation.However, these datasets are often derived heuristically, and lack sufficient annotations to support research into all aspects of summarization, such as evidence extraction and controllable summarization. We introduce a benchmark comprising 8 tasks that require multi-dimensional understanding of summarization, e.g., surfacing evidence for a summary, assessing its correctness, and gauging its relevance to different topics. We compare various methods on this benchmark and discover that on multiple tasks, moderately-sized fine-tuned models consistently outperform much larger few-shot prompted language models. For factuality related tasks, we also evaluate existing heuristics to create training data and find that training on them performs worse than training on $20\times$ less human-labeled data. Our benchmark consists of data from 6 different domains, allowing us to study cross-domain performance of trained models. We find that for some tasks, the amount of training data matters more than the domain where it comes from, while for other tasks training specifically on data from the target domain, even if limited, is more beneficial. Our work fulfills the need for a well-annotated summarization benchmark with diverse tasks, and provides useful insights about the impact of the quality, size and domain of training data.",8ab27849799286459465d2262f926354093b20a9,Semantic Scholar,,, -300,grounding language with visual affordances over unstructured data,"['Oier Mees', 'Jessica Borja-Diaz', 'Wolfram Burgard']",https://arxiv.org/pdf/2210.01911,2022-10-04,,"Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills. However, in practice, learning multi-task, language-conditioned robotic skills typically requires large-scale data collection and frequent human intervention to reset the environment or help correcting the current policies. In this work, we propose a novel approach to efficiently learn general-purpose language-conditioned robot skills from unstructured, offline and reset-free data in the real world by exploiting a self-supervised visuo-lingual affordance model, which requires annotating as little as 1% of the total data with language. We evaluate our method in extensive experiments both in simulated and real-world robotic tasks, achieving state-of-the-art performance on the challenging CALVIN benchmark and learning over 25 distinct visuomotor manipulation tasks with a single policy in the real world. We find that when paired with LLMs to break down abstract natural language instructions into subgoals via few-shot prompting, our method is capable of completing long-horizon, multi-tier tasks in the real world, while requiring an order of magnitude less data than previous approaches. Code and videos are available at http://hulc2.cs.uni-freiburg.de.",8f84dcbad8cd3b5b4d9229c56bc95f24be859a35,Semantic Scholar,,, -301,evaluating large language models on graphs performance insights and comparative analysis,"['Chang Liu', 'Bo Wu']",https://arxiv.org/pdf/2308.11224,2023-08-22,,"Large Language Models (LLMs) have garnered considerable interest within both academic and industrial. Yet, the application of LLMs to graph data remains under-explored. 
In this study, we evaluate the capabilities of four LLMs in addressing several analytical problems with graph data. We employ four distinct evaluation metrics: Comprehension, Correctness, Fidelity, and Rectification. Our results show that: 1) LLMs effectively comprehend graph data in natural language and reason with graph topology. 2) GPT models can generate logical and coherent results, outperforming alternatives in correctness. 3) All examined LLMs face challenges in structural reasoning, with techniques like zero-shot chain-of-thought and few-shot prompting showing diminished efficacy. 4) GPT models often produce erroneous answers in multi-answer tasks, raising concerns in fidelity. 5) GPT models exhibit elevated confidence in their outputs, potentially hindering their rectification capacities. Notably, GPT-4 has demonstrated the capacity to rectify responses from GPT-3.5-turbo and its own previous iterations. The code is available at: https://github.com/Ayame1006/LLMtoGraph.",927fc7652e033c9eb17296df087e3e6491112bb0,Semantic Scholar,,, -302,revisiting relation extraction in the era of large language models,"['Somin Wadhwa', 'Silvio Amir', 'Byron C. Wallace']",http://arxiv.org/pdf/2305.05003,2023-05-08,,"Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks.",97782a67971c4ff1a74bf07e82fe20b2c4bf86c4,Semantic Scholar,,, -303,selfpolish enhance reasoning in large language models via problem refinement,"['Zhiheng Xi', 'Senjie Jin', 'Yuhao Zhou', 'Rui Zheng', 'Songyang Gao', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2305.14497,2023-05-23,,"Prompting methods such as Chain-of-Thought (CoT) have shed new light on enhancing the reasoning capabilities of large language models, and researchers have extensively explored the generation process of rationales and answers. However, they have overlooked the potential challenges posed by the poor quality of reasoning problems, which may influence the reasoning performance significantly. In this work, we propose Self-Polish (SP), a novel method that facilitates the model's problem-solving process by prompting them to progressively refine the given problems to be more comprehensible and solvable. Specifically, the method teaches models to eliminate irrelevant information, rearrange the logic structure and organize local conditions into new ones parallelly. 
SP is orthogonal to all other prompting methods, making it convenient to integrate with state-of-the-art techniques for further improvement. We conduct thorough experiments on five benchmarks to illustrate the effectiveness of the proposed method. For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively. Furthermore, our method also showcases impressive performance on robustness evaluation.",9a9b1e2968302eb882870537d4af6e2c722dfd1a,Semantic Scholar,,, -304,spotlight mobile ui understanding using visionlanguage models with a focus,"['Gang Li', 'Yang Li']",http://arxiv.org/pdf/2209.14927,2022-09-29,,"Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope to bypass challenging tasks of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, despite the use of view hierarchies could offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.",9b9fb973e5d3b413baa90648d9eb0743bd889747,Semantic Scholar,,, -305,large language model prompt chaining for long legal document classification,['Dietrich Trautmann'],https://arxiv.org/pdf/2308.04138,2023-08-08,,"Prompting is used to guide or steer a language model in generating an appropriate response that is consistent with the desired outcome. Chaining is a strategy used to decompose complex tasks into smaller, manageable components. In this study, we utilize prompt chaining for extensive legal document classification tasks, which present difficulties due to their intricate domain-specific language and considerable length. Our approach begins with the creation of a concise summary of the original document, followed by a semantic search for related exemplar texts and their corresponding annotations from a training corpus. Finally, we prompt for a label - based on the task - to assign, by leveraging the in-context learning from the few-shot prompt. 
We demonstrate that through prompt chaining, we can not only enhance the performance over zero-shot, but also surpass the micro-F1 score achieved by larger models, such as ChatGPT zero-shot, using smaller models.",9bf587d032e3764720cccd5beaf941f5c32880bc,Semantic Scholar,,, -306,mindagent emergent gaming interaction,"['Ran Gong', 'Qiuyuan Huang', 'Xiaojian Ma', 'Hoi Vo', 'Zane Durante', 'Yusuke Noda', 'Zilong Zheng', 'Song-Chun Zhu', 'Demetri Terzopoulos', 'Fei-Fei Li', 'Jianfeng Gao']",https://arxiv.org/pdf/2309.09971,2023-09-18,,"Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system and can coordinate these agents into completing sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming frameworks, the community has insufficient benchmarks towards building general multi-agents collaboration infrastructure that encompass both LLM and human-NPCs collaborations. In this work, we propose a novel infrastructure - MindAgent - to evaluate planning and coordination emergent capabilities for gaming interaction. In particular, our infrastructure leverages existing gaming framework, to i) require understanding of the coordinator for a multi-agent system, ii) collaborate with human players via un-finetuned proper instructions, and iii) establish an in-context learning on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new gaming scenario and related benchmark that dispatch a multi-agent collaboration efficiency and supervise multiple agents playing the game simultaneously. We conduct comprehensive evaluations with new auto-metric CoS for calculating the collaboration efficiency. Finally, our infrastructure can be deployed into real-world gaming scenarios in a customized VR version of CUISINEWORLD and adapted in existing broader Minecraft gaming domain. We hope our findings on LLMs and the new infrastructure for general-purpose scheduling and coordination can help shed light on how such skills can be obtained by learning from large language corpora.",9c01786f8195d53ad3902fc8d0872784b059adf3,Semantic Scholar,,, -307,lafter labelfree tuning of zeroshot classifier using language and unlabeled image collections,"['M. J. Mirza', 'Leonid Karlinsky', 'Wei Lin', 'M. Koziński', 'Horst Possegger', 'R. Feris', 'H. Bischof']",http://arxiv.org/pdf/2305.18287,2023-05-29,,"Recently, large-scale pre-trained Vision and Language (VL) models have set a new state-of-the-art (SOTA) in zero-shot visual classification enabling open-vocabulary recognition of potentially unlimited set of categories defined as simple language prompts. However, despite these great advances, the performance of these zeroshot classifiers still falls short of the results of dedicated (closed category set) classifiers trained with supervised fine tuning. In this paper we show, for the first time, how to reduce this gap without any labels and without any paired VL data, using an unlabeled image collection and a set of texts auto-generated using a Large Language Model (LLM) describing the categories of interest and effectively substituting labeled visual instances of those categories. Using our label-free approach, we are able to attain significant performance improvements over the zero-shot performance of the base VL model and other contemporary methods and baselines on a wide variety of datasets, demonstrating absolute improvement of up to 11.7% (3.8% on average) in the label-free setting. 
Moreover, despite our approach being label-free, we observe 1.3% average gains over leading few-shot prompting baselines that do use 5-shot supervision.",a04883d1d780b438de6c127caf7ebe3d9233e193,Semantic Scholar,,, -308,street a multitask structured reasoning and explanation benchmark,"['D. Ribeiro', 'Shen Wang', 'Xiaofei Ma', 'He Zhu', 'Rui Dong', 'Deguang Kong', 'Juliette Burger', 'Anjelica Ramos', 'William Yang Wang', 'Zhiheng Huang', 'G. Karypis', 'Bing Xiang', 'D. Roth']",http://arxiv.org/pdf/2302.06729,2023-02-13,,"We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark. Unlike most existing question-answering (QA) datasets, we expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer. We perform extensive evaluation with popular language models such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models still lag behind human performance when producing such structured reasoning steps. We believe this work will provide a way for the community to better train and test systems on multi-step reasoning and explanations in natural language.",a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da,Semantic Scholar,,, -309,large language models as tax attorneys a case study in legal capabilities emergence,"['John J. Nay', 'David Karamardian', 'Sarah Lawsky', 'Wenting Tao', 'Meghana Moorthy Bhat', 'Raghav Jain', 'Aaron Travis Lee', 'Jonathan H. Choi', 'Jungo Kasai']",http://arxiv.org/pdf/2306.07075,2023-06-12,,"Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.",a6a0963fcf21ed47a2616ca3980f8f4f21e6d5ad,Semantic Scholar,,, -310,distilling stepbystep! outperforming larger language models with less training data and smaller model sizes,"['Cheng-Yu Hsieh', 'Chun-Liang Li', 'Chih-Kuan Yeh', 'Hootan Nakhost', 'Yasuhisa Fujii', 'Alexander J. 
Ratner', 'Ranjay Krishna', 'Chen-Yu Lee', 'Tomas Pfister']",https://arxiv.org/pdf/2305.02301,2023-05-03,,"Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data needed by finetuning or distillation. Our method extracts LLM rationales as additional supervision for training small models within a multi-task framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80% of available data on a benchmark, whereas standard finetuning the same T5 model struggles to match even by using 100% of the dataset. We release the code at: https://github.com/google-research/distilling-step-by-step .",aad167be3c902388ea625da4117fcae4325b8b7d,Semantic Scholar,,, -311,prompt programming for large language models beyond the fewshot paradigm,"['Laria Reynolds', 'Kyle McDonell']",https://arxiv.org/pdf/2102.07350,2021-02-15,,"Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.",ac3cdb50606f7770eef8e4cd951840a4f71287a0,Semantic Scholar,,, -312,the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant,"['Jingqing Zhang', 'K. Sun', 'A. Jagadeesh', 'Mahta Ghahfarokhi', 'Deepa Gupta', 'Ashok Gupta', 'Vibhor Gupta', 'Yike Guo']",https://arxiv.org/pdf/2307.08152,2023-07-16,,"Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. 
However, none have assessed its performance using a large-scale real-world electronic health record database, nor have evaluated its utility in providing clinical diagnostic assistance for patients across a full range of disease presentation. We performed two analyses using ChatGPT and GPT-4, one to identify patients with specific medical diagnoses using a real-world large electronic health record database and the other, in providing diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4 across disease classification tasks with chain of thought and few-shot prompting can achieve performance as high as 96% F1 scores. For patient assessment, GPT-4 can accurately diagnose three out of four times. However, there were mentions of factually incorrect statements, overlooking crucial medical findings, recommendations for unnecessary investigations and overtreatment. These issues coupled with privacy concerns, make these models currently inadequate for real world clinical use. However, limited data and time needed for prompt engineering in comparison to configuration of conventional machine learning workflows highlight their potential for scalability across healthcare applications.",b3d6fec3f1a878b0c612f0ffed820b045c2c46d8,Semantic Scholar,,, -313,do gpts produce less literal translations,"['Vikas Raunak', 'Arul Menezes', 'Matt Post', 'Hany Hassan Awadallah']",http://arxiv.org/pdf/2305.16806,2023-05-26,,"Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.",b4170009de40c1c46adea6a314734434ecd4b0dc,Semantic Scholar,,, -314,adelt transpilation between deep learning frameworks,"['Linyuan Gong', 'Jiayi Wang', 'Alvin Cheung']",http://arxiv.org/pdf/2303.03593,2023-03-07,,"We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source transpilation between deep learning frameworks. Unlike prior approaches, we decouple the transpilation of code skeletons and the mapping of API keywords (an API function name or a parameter name). ADELT transpile code skeletons using few-shot prompting on big language models. Based on contextual embeddings extracted by a BERT for code, we train aligned API embeddings in a domain-adversarial setup, upon which we generate a dictionary for keyword translation. The model is trained on our unlabeled DL corpus from web crawl data, without using any hand-crafted rules and parallel data. 
Our method outperforms state-of-the-art transpilers on multiple transpilation pairs including PyTorch-Keras and PyTorch-MXNet by 15.9pts and 12.0pts in exact match scores respectively.",b6bea98ca29267acbebca6cdf64eb07a5671e000,Semantic Scholar,,, -315,decomposed prompting for machine translation between related languages using large language models,"['Ratish Puduppully', 'Raj Dabre', 'A. Aw', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13085,2023-05-22,,"This study investigates machine translation between related languages i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.",b6e5855b6a4e425ba251a93516f2bccffe5ba403,Semantic Scholar,,, -316,prompt a robot to walk with large language models,"['Yen-Jen Wang', 'Bike Zhang', 'Jianyu Chen', 'K. Sreenath']",https://arxiv.org/pdf/2309.09969,2023-09-18,,"Large language models (LLMs) pre-trained on vast internet-scale data have showcased remarkable capabilities across diverse domains. Recently, there has been escalating interest in deploying LLMs for robotics, aiming to harness the power of foundation models in real-world settings. However, this approach faces significant challenges, particularly in grounding these models in the physical world and in generating dynamic robot motions. To address these issues, we introduce a novel paradigm in which we use few-shot prompts collected from the physical environment, enabling the LLM to autoregressively generate low-level control commands for robots without task-specific fine-tuning. Experiments across various robots and environments validate that our method can effectively prompt a robot to walk. We thus illustrate how LLMs can proficiently function as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. The project website and source code can be found at: https://prompt2walk.github.io/ .",b70075b496c1f519093884945be5670c32cbceed,Semantic Scholar,,, -317,freshllms refreshing large language models with search engine augmentation,"['Tu Vu', 'Mohit Iyyer', 'Xuezhi Wang', 'Noah Constant', 'Jerry Wei', 'Jason Wei', 'Chris Tar', 'Yun-Hsuan Sung', 'Denny Zhou', 'Quoc Le', 'Thang Luong']",https://arxiv.org/pdf/2310.03214,2023-10-05,,"Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. 
In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.",be177300487b6d0f25e6cade9a31900454b13281,Semantic Scholar,,, -318,enhancing incontext learning with answer feedback for multispan question answering,"['Zixian Huang', 'Jiaying Zhou', 'Gengyang Xiao', 'Gong Cheng']",http://arxiv.org/pdf/2306.04508,2023-06-07,,"Whereas the recent emergence of large language models (LLMs) like ChatGPT has exhibited impressive general performance, it still has a large gap with fully-supervised models on specific tasks such as multi-span question answering. Previous researches found that in-context learning is an effective approach to exploiting LLM, by using a few task-related labeled data as demonstration examples to construct a few-shot prompt for answering new questions. A popular implementation is to concatenate a few questions and their correct answers through simple templates, informing LLM of the desired output. In this paper, we propose a novel way of employing labeled data such that it also informs LLM of some undesired output, by extending demonstration examples with feedback about answers predicted by an off-the-shelf model, e.g., correct, incorrect, or incomplete. Experiments on three multi-span question answering datasets as well as a keyphrase extraction dataset show that our new prompting strategy consistently improves LLM's in-context learning performance.",c1647923704251875f4160e91b59afbbdc58483e,Semantic Scholar,,, -319,improving fewshot prompts with relevant static analysis products,"['Toufique Ahmed', 'Kunal Suresh Pai', 'Prem Devanbu', 'Earl T. Barr']",https://arxiv.org/pdf/2304.06815,2023-04-13,,"Large Language Models (LLM) are a new class of computation engines,""programmed""via prompt engineering. We are still learning how to best""program""these LLMs to help developers. 
We start with the intuition that developers tend to consciously and unconsciously have a collection of semantics facts in mind when working on coding tasks. Mostly these are shallow, simple facts arising from a quick read. For a function, examples of facts might include parameter and local variable names, return expressions, simple pre- and post-conditions, and basic control and data flow, etc. One might assume that the powerful multi-layer architecture of transformer-style LLMs makes them inherently capable of doing this simple level of""code analysis""and extracting such information, implicitly, while processing code: but are they, really? If they aren't, could explicitly adding this information help? Our goal here is to investigate this question, using the code summarization task and evaluate whether automatically augmenting an LLM's prompt with semantic facts explicitly, actually helps. Prior work shows that LLM performance on code summarization benefits from few-shot samples drawn either from the same-project or from examples found via information retrieval methods (such as BM25). While summarization performance has steadily increased since the early days, there is still room for improvement: LLM performance on code summarization still lags its performance on natural-language tasks like translation and text summarization. We find that adding semantic facts actually does help! This approach improves performance in several different settings suggested by prior work, including for two different Large Language Models. In most cases, improvement nears or exceeds 2 BLEU; for the PHP language in the challenging CodeSearchNet dataset, this augmentation actually yields performance surpassing 30 BLEU.",c2391a8c8e24a450f00810ecb441e26413ea3791,Semantic Scholar,,, -320,benchmarking arabic ai with large language models,"['Ahmed Abdelali', 'Hamdy Mubarak', 'Shammur A. Chowdhury', 'Maram Hasanain', 'Basel Mousi', 'S. Boughorbel', 'Yassine El Kheir', 'Daniel Izham', 'Fahim Dalvi', 'Majd Hawasly', 'Nizi Nazar', 'Yousseif Elshahawy', 'Ahmed M. Ali', 'Nadir Durrani', 'Natasa Milic-Frayling', 'Firoj Alam']",http://arxiv.org/pdf/2305.14982,2023-05-24,,"With large Foundation Models (FMs), language technologies (AI in general) are entering a new paradigm: eliminating the need for developing large-scale task-specific datasets and supporting a variety of tasks through set-ups ranging from zero-shot to few-shot learning. However, understanding FMs capabilities requires a systematic benchmarking effort by comparing FMs performance with the state-of-the-art (SOTA) task-specific models. With that goal, past work focused on the English language and included a few efforts with multiple languages. Our study contributes to ongoing research by evaluating FMs performance for standard Arabic NLP and Speech processing, including a range of tasks from sequence tagging to content classification across diverse domains. We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM, addressing 33 unique tasks using 59 publicly available datasets resulting in 96 test setups. For a few tasks, FMs performs on par or exceeds the performance of the SOTA models but for the majority it under-performs. Given the importance of prompt for the FMs performance, we discuss our prompt strategies in detail and elaborate on our findings. 
Our future work on Arabic AI will explore few-shot prompting, expand the range of tasks, and investigate additional open-source models.",c5fa70db839fd05b1111f3586a601d8a93e78d0c,Semantic Scholar,,, -321,internetaugmented language models through fewshot prompting for opendomain question answering,"['Angeliki Lazaridou', 'E. Gribovskaya', 'Wojciech Stokowiec', 'N. Grigorev']",https://arxiv.org/pdf/2203.05115,2022-03-10,,"In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language models (LSLMs) to overcome some of their challenges with respect to grounding to factual and up-to-date information. Motivated by semi-parametric language models (LMs), which ground their decisions in external retrieved evidence, we use few-shot prompting to learn to condition LMs on information returned from the web using Google Search, a broad and constantly updated knowledge source. Our approach does not involve fine-tuning or learning additional parameters, thus making it applicable to any LM, offering therefore a strong baseline. Indeed, we find that LMs conditioned on the web surpass performance of closed-book models of similar, or even larger, model sizes in open-domain question answering. Finally, we find that increasing the inference-time compute of models, achieved via using multiple retrieved evidences to generate multiple answers followed by a reranking stage that uses scores generated by the same LMs, leads to better performance and alleviates lower performance of smaller few-shot LMs. All in all, our findings suggest that it might be beneficial to slow down the race towards the biggest model and instead shift attention towards finding more effective ways to use models, including but not limited to, better prompting or increasing inference-time compute.",c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd,Semantic Scholar,,, -322,is chatgpt a good recommender a preliminary study,"['Junling Liu', 'Chaoyong Liu', 'Renjie Lv', 'Kangdi Zhou', 'Y. Zhang']",http://arxiv.org/pdf/2304.10149,2023-04-20,,"Recommendation systems have witnessed significant advancements and have been widely used over the past decades. However, most traditional recommendation methods are task-specific and therefore lack efficient generalization ability. Recently, the emergence of ChatGPT has significantly advanced NLP tasks by enhancing the capabilities of conversational models. Nonetheless, the application of ChatGPT in the recommendation domain has not been thoroughly investigated. In this paper, we employ ChatGPT as a general-purpose recommendation model to explore its potential for transferring extensive linguistic and world knowledge acquired from large-scale corpora to recommendation scenarios. Specifically, we design a set of prompts and evaluate ChatGPT's performance on five recommendation scenarios. Unlike traditional recommendation methods, we do not fine-tune ChatGPT during the entire evaluation process, relying only on the prompts themselves to convert recommendation tasks into natural language tasks. Further, we explore the use of few-shot prompting to inject interaction information that contains user potential interest to help ChatGPT better understand user needs and interests. Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT has achieved promising results in certain tasks and is capable of reaching the baseline level in others. 
We conduct human evaluations on two explainability-oriented tasks to more accurately evaluate the quality of contents generated by different models. And the human evaluations show ChatGPT can truly understand the provided information and generate clearer and more reasonable results. We hope that our study can inspire researchers to further explore the potential of language models like ChatGPT to improve recommendation performance and contribute to the advancement of the recommendation systems field.",ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3,Semantic Scholar,,, -323,legal prompting teaching a language model to think like a lawyer,"['Fang Yu', 'Lee Quartey', 'Frank Schilder']",http://arxiv.org/pdf/2212.01326,2022-12-02,,"Large language models that are capable of zero or few-shot prompting approaches have given rise to the new research area of prompt engineering. Recent advances showed that for example Chain-of-Thought (CoT) prompts can improve arithmetic or common sense tasks significantly. We explore how such approaches fare with legal reasoning tasks and take the COLIEE entailment task based on the Japanese Bar exam for testing zero-shot/few-shot and fine-tuning approaches. Our findings show that while CoT prompting and fine-tuning with explanations approaches show improvements, the best results are produced by prompts that are derived from specific legal reasoning techniques such as IRAC (Issue, Rule, Application, Conclusion). Based on our experiments we improve the 2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best system of 0.6789 accuracy with an accuracy of 0.7431.",cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3,Semantic Scholar,,, -324,query2doc query expansion with large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",https://arxiv.org/pdf/2303.07678,2023-03-14,,"This paper introduces a simple yet effective query expansion approach, denoted as query2doc, to improve both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), and then expands the query with generated pseudo-documents. LLMs are trained on web-scale text corpora and are adept at knowledge memorization. The pseudo-documents from LLMs often contain highly relevant information that can aid in query disambiguation and guide the retrievers. Experimental results demonstrate that query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets, such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.",ccc772d88c231275f24c4fac9b28bbe0942e1107,Semantic Scholar,,, -325,how to design translation prompts for chatgpt an empirical study,"['Yuan Gao', 'Ruili Wang', 'Feng Hou']",http://arxiv.org/pdf/2304.02182,2023-04-05,,"The recently released ChatGPT has demonstrated surprising abilities in natural language understanding and natural language generation. Machine translation relies heavily on the abilities of language understanding and generation. Thus, in this paper, we explore how to assist machine translation with ChatGPT. We adopt several translation prompts on a wide range of translations. Our experimental results show that ChatGPT with designed translation prompts can achieve comparable or better performance over commercial translation systems for high-resource language translations. 
We further evaluate the translation quality using multiple references, and ChatGPT achieves superior performance compared to commercial systems. We also conduct experiments on domain-specific translations, the final results show that ChatGPT is able to comprehend the provided domain keyword and adjust accordingly to output proper translations. At last, we perform few-shot prompts that show consistent improvement across different base prompts. Our work provides empirical evidence that ChatGPT still has great potential in translations.",cd77ea482d9245f3fcaeb670261a00c3fb5cabbd,Semantic Scholar,,, -326,passive learning of active causal strategies in agents and language models,"['Andrew Kyle Lampinen', 'Stephanie C. Y. Chan', 'Ishita Dasgupta', 'A. Nam', 'Jane X. Wang']",https://arxiv.org/pdf/2305.16183,2023-05-25,,"What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, we show that purely passive learning can in fact allow an agent to learn generalizable strategies for determining and using causal structures, as long as the agent can intervene at test time. We formally illustrate that learning a strategy of first experimenting, then seeking goals, can allow generalization from passive learning in principle. We then show empirically that agents trained via imitation on expert data can indeed generalize at test time to infer and use causal links which are never present in the training data; these agents can also generalize experimentation strategies to novel variable sets never observed in training. We then show that strategies for causal intervention and exploitation can be generalized from passive data even in a more complex environment with high-dimensional observations, with the support of natural language explanations. Explanations can even allow passive learners to generalize out-of-distribution from perfectly-confounded training data. Finally, we show that language models, trained only on passive next-word prediction, can generalize causal intervention strategies from a few-shot prompt containing examples of experimentation, together with explanations and reasoning. These results highlight the surprising power of passive learning of active causal strategies, and may help to understand the behaviors and capabilities of language models.",ce0154d9251f67c262512b6e598f3aa3ba9fe9a4,Semantic Scholar,,, -327,diversity measures domainindependent proxies for failure in language model queries,"['Noel Ngu', 'Nathaniel Lee', 'P. Shakarian']",https://arxiv.org/pdf/2308.11189,2023-08-22,,"Error prediction in large language models often relies on domain-specific information. In this paper, we present measures for quantification of error in the response of a large language model based on the diversity of responses to a given prompt - hence independent of the underlying application. We describe how three such measures - based on entropy, Gini impurity, and centroid distance - can be employed. We perform a suite of experiments on multiple datasets and temperature settings to demonstrate that these measures strongly correlate with the probability of failure. 
Additionally, we present empirical results demonstrating how these measures can be applied to few-shot prompting, chain-of-thought reasoning, and error detection.",d4fc988c6510420a5290dfe8d1a991ca4878d696,Semantic Scholar,,, -328,log parsing how far can chatgpt go,"['Van-Hoang Le', 'Hongyu Zhang']",https://arxiv.org/pdf/2306.01590,2023-06-02,,"Software logs play an essential role in ensuring the reliability and maintainability of large-scale software systems, as they are often the sole source of runtime information. Log parsing, which converts raw log messages into structured data, is an important initial step towards downstream log analytics. In recent studies, ChatGPT, the current cutting-edge large language model (LLM), has been widely applied to a wide range of software engineering tasks. However, its performance in automated log parsing remains unclear. In this paper, we evaluate ChatGPT's ability to undertake log parsing by addressing two research questions. (1) Can ChatGPT effectively parse logs? (2) How does ChatGPT perform with different prompting methods? Our results show that ChatGPT can achieve promising results for log parsing with appropriate prompts, especially with few-shot prompting. Based on our findings, we outline several challenges and opportunities for ChatGPT-based log parsing.",d589c49e1cd1dd3b994dcac01b4c6e7fb8eef161,Semantic Scholar,,, -329,an empirical evaluation of prompting strategies for large language models in zeroshot clinical natural language processing,"['S. Sivarajkumar', 'Mark Kelley', 'Alyssa Samolyk-Mazzanti', 'S. Visweswaran', 'Yanshan Wang']",https://arxiv.org/pdf/2309.08008,2023-09-14,,"Large language models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), especially in domains where labeled data is scarce or expensive, such as clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. In this paper, we present a comprehensive and systematic experimental study on prompt engineering for five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. We assessed the prompts proposed in recent literature, including simple prefix, simple cloze, chain of thought, and anticipatory prompts, and introduced two new types of prompts, namely heuristic prompting and ensemble prompting. We evaluated the performance of these prompts on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted zero-shot prompting with few-shot prompting, and provide novel insights and guidelines for prompt engineering for LLMs in clinical NLP. 
To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.",d5a6fc6aa139066e3b66ba63002e7d84c109aebc,Semantic Scholar,,, -330,boosted prompt ensembles for large language models,"['Silviu Pitis', 'Michael Ruogu Zhang', 'Andrew Wang', 'Jimmy Ba']",http://arxiv.org/pdf/2304.05970,2023-04-12,,"Methods such as chain-of-thought prompting and self-consistency have pushed the frontier of language model reasoning performance with no additional training. To further improve performance, we propose a prompt ensembling method for large language models, which uses a small dataset to construct a set of few shot prompts that together comprise a ``boosted prompt ensemble''. The few shot examples for each prompt are chosen in a stepwise fashion to be ``hard'' examples on which the previous step's ensemble is uncertain. We show that this outperforms single-prompt output-space ensembles and bagged prompt-space ensembles on the GSM8k and AQuA datasets, among others. We propose both train-time and test-time versions of boosted prompting that use different levels of available annotation and conduct a detailed empirical study of our algorithm.",dca6c3927ade6481a1ae080f5c24decbfeced1be,Semantic Scholar,,, -331,bootstrapping multilingual semantic parsers using large language models,"['Abhijeet Awasthi', 'Nitish Gupta', 'Bidisha Samanta', 'Shachi Dave', 'Sunita Sarawagi', 'P. Talukdar']",http://arxiv.org/pdf/2210.07313,2022-10-13,,"Despite cross-lingual generalization demonstrated by pre-trained multilingual models, the translate-train paradigm of transferring English datasets across multiple languages remains to be a key mechanism for training task-specific multilingual models. However, for many low-resource languages, the availability of a reliable translation service entails significant amounts of costly human-annotated translation pairs. Further, translation services may continue to be brittle due to domain mismatch between task-specific input text and general-purpose text used for training translation models. For multilingual semantic parsing, we demonstrate the effectiveness and flexibility offered by large language models (LLMs) for translating English datasets into several languages via few-shot prompting. Through extensive comparisons on two public datasets, MTOP and MASSIVE, spanning 50 languages and several domains, we show that our method of translating data using LLMs outperforms a strong translate-train baseline on 41 out of 50 languages. We study the key design choices that enable more effective multilingual data translation via prompted LLMs.",dda0f7f086fc875d583604f8b0cf4a8678bc4de4,Semantic Scholar,,, -332,prompt2model generating deployable models from natural language instructions,"['Vijay Viswanathan', 'Chenyang Zhao', 'Amanda Bertsch', 'Tongshuang Sherry Wu', 'Graham Neubig']",https://arxiv.org/pdf/2308.12261,2023-08-23,,"Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. 
In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, Prompt2Model trains models that outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20% while being up to 700 times smaller. We also show that this data can be used to obtain reliable estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model.",e69684fb06a7b1fe621d7ef0c97fc2ca0e122c43,Semantic Scholar,,,
-333,multilingual large language models are not (yet) codeswitchers,"['Ruochen Zhang', 'Samuel Cahyawijaya', 'Jan Christian Blaise Cruz', 'Alham Fikri Aji']",http://arxiv.org/pdf/2305.14235,2023-05-23,,"Multilingual Large Language Models (LLMs) have recently shown great capabilities in a wide range of tasks, exhibiting state-of-the-art performance through zero-shot or few-shot prompting methods. While there have been extensive studies on their abilities in monolingual tasks, the investigation of their potential in the context of code-switching (CSW), the practice of alternating languages within an utterance, remains relatively uncharted. In this paper, we provide a comprehensive empirical analysis of various multilingual LLMs, benchmarking their performance across four tasks: sentiment analysis, machine translation, summarization and word-level language identification. Our results indicate that despite multilingual LLMs exhibiting promising outcomes in certain tasks using zero or few-shot prompting, they still underperform in comparison to fine-tuned models of much smaller scales. We argue that current ""multilingualism"" in LLMs does not inherently imply proficiency with code-switching texts, calling for future research to bridge this discrepancy.",eda54452d8a8a412c2a985ef11572cb468906b1f,Semantic Scholar,,,
-334,product information extraction using chatgpt,"['Alexander Brinkmann', 'Roee Shraga', 'Reng Chiz Der', 'Christian Bizer']",http://arxiv.org/pdf/2306.14921,2023-06-23,,"Structured product data in the form of attribute/value pairs is the foundation of many e-commerce applications such as faceted product search, product comparison, and product recommendation. Product offers often only contain textual descriptions of the product attributes in the form of titles or free text. Hence, extracting attribute/value pairs from textual product descriptions is an essential enabler for e-commerce applications. In order to excel, state-of-the-art product information extraction methods require large quantities of task-specific training data. The methods also struggle with generalizing to out-of-distribution attributes and attribute values that were not a part of the training data. Due to being pre-trained on huge amounts of text as well as due to emergent effects resulting from the model size, Large Language Models like ChatGPT have the potential to address both of these shortcomings. This paper explores the potential of ChatGPT for extracting attribute/value pairs from product descriptions.
We experiment with different zero-shot and few-shot prompt designs. Our results show that ChatGPT achieves a performance similar to a pre-trained language model but requires much smaller amounts of training data and computation for fine-tuning.",f00e7326baa9600e46b3a8e7077dc3a349f90a01,Semantic Scholar,,, -335,large language models for user interest journeys,"['Konstantina Christakopoulou', 'Alberto Lalama', 'Cj Adams', 'Iris Qu', 'Yifat Amir', 'S. Chucri', 'Pierce Vollucci', 'Fabio Soldo', 'Dina Bseiso', 'Sarah Scodel', 'Lucas Dixon', 'Ed H. Chi', 'Minmin Chen']",http://arxiv.org/pdf/2305.15498,2023-05-24,,"Large language models (LLMs) have shown impressive capabilities in natural language understanding and generation. Their potential for deeper user understanding and improved personalized user experience on recommendation platforms is, however, largely untapped. This paper aims to address this gap. Recommender systems today capture users' interests through encoding their historical activities on the platforms. The generated user representations are hard to examine or interpret. On the other hand, if we were to ask people about interests they pursue in their life, they might talk about their hobbies, like I just started learning the ukulele, or their relaxation routines, e.g., I like to watch Saturday Night Live, or I want to plant a vertical garden. We argue, and demonstrate through extensive experiments, that LLMs as foundation models can reason through user activities, and describe their interests in nuanced and interesting ways, similar to how a human would. We define interest journeys as the persistent and overarching user interests, in other words, the non-transient ones. These are the interests that we believe will benefit most from the nuanced and personalized descriptions. We introduce a framework in which we first perform personalized extraction of interest journeys, and then summarize the extracted journeys via LLMs, using techniques like few-shot prompting, prompt-tuning and fine-tuning. Together, our results in prompting LLMs to name extracted user journeys in a large-scale industrial platform demonstrate great potential of these models in providing deeper, more interpretable, and controllable user understanding. We believe LLM powered user understanding can be a stepping stone to entirely new user experiences on recommendation platforms that are journey-aware, assistive, and enabling frictionless conversation down the line.",f834aed32f5531bfa426faab71878c549572500e,Semantic Scholar,,, -336,promptbased extraction of social determinants of health using fewshot learning,"['Giridhar Kaushik Ramachandran', 'Yujuan Fu', 'Bin Han', 'K. Lybarger', 'Nicholas J. Dobbins', 'Ozlem Uzuner', 'M. Yetisgen']",http://arxiv.org/pdf/2306.07170,2023-06-12,,"Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. 
Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.",386bd4d25043516f076ea7b2296a1ebec84f43ce,Semantic Scholar,,,
-337,deplot oneshot visual language reasoning by plottotable translation,"['Fangyu Liu', 'Julian Martin Eisenschlos', 'Francesco Piccinno', 'Syrine Krichene', 'Chenxi Pang', 'Kenton Lee', 'Mandar Joshi', 'Wenhu Chen', 'Nigel Collier', 'Y. Altun']",http://arxiv.org/pdf/2212.10505,2022-12-20,,"Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.",4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8,Semantic Scholar,,,
-338,short answer grading using oneshot prompting and text similarity scoring model,['Su-Youn Yoon'],http://arxiv.org/pdf/2305.18638,2023-05-29,,"In this study, we developed an automated short answer grading (ASAG) model that provided both analytic scores and final holistic scores. Short answer items typically consist of multiple sub-questions, and providing an analytic score and the text span relevant to each sub-question can increase the interpretability of the automated scores. Furthermore, they can be used to generate actionable feedback for students. Despite these advantages, most studies have focused on predicting only holistic scores due to the difficulty in constructing dataset with manual annotations. To address this difficulty, we used large language model (LLM)-based one-shot prompting and a text similarity scoring model with domain adaptation using small manually annotated dataset. The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a subset of the publicly available ASAG dataset. The model achieved a substantial improvement over the majority baseline.",d1aa858644154af50e36860e6761ae52ae655bd3,Semantic Scholar,,,
-339,s3dst structured opendomain dialogue segmentation and state tracking in the era of llms,"['Sarkar Snigdha Sarathi Das', 'C. Shah', 'Mengting Wan', 'Jennifer Neville', 'Longfei Yang', 'Reid Andersen', 'Georg Buscher', 'Tara Safavi']",https://arxiv.org/pdf/2309.08827,2023-09-16,,"The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations.
While sufficient for task-oriented dialogue systems supporting narrow domain applications, the advent of Large Language Model (LLM)-based chat systems has introduced many real-world intricacies in open-domain dialogues. These intricacies manifest in the form of increased complexity in contextual interactions, extended dialogue sessions encompassing a diverse array of topics, and more frequent contextual shifts. To handle these intricacies arising from evolving LLM-based chat systems, we propose joint dialogue segmentation and state tracking per segment in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a true open-domain dialogue system, we propose S3-DST, a structured prompting technique that harnesses Pre-Analytical Recollection, a novel grounding mechanism we designed for improving long context tracking. To demonstrate the efficacy of our proposed approach in joint segmentation and state tracking, we evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as well as publicly available DST and segmentation datasets. Across all datasets and settings, S3-DST consistently outperforms the state-of-the-art, demonstrating its potency and robustness the next generation of LLM-based chat systems.",034f1d77d832460a239072c81b5bb178b93c1e9f,Semantic Scholar,,, -340,take a step back evoking reasoning via abstraction in large language models,"['Huaixiu Steven Zheng', 'Swaroop Mishra', 'Xinyun Chen', 'Heng-Tze Cheng', 'Ed H. Chi', 'Quoc V Le', 'Denny Zhou']",https://arxiv.org/pdf/2310.06117,2023-10-09,,"We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide the reasoning steps, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of Step-Back Prompting with PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%, TimeQA by 27%, and MuSiQue by 7%.",0786c88990235414611478099e43611542d973b0,Semantic Scholar,,, -341,chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation,['Brandon Harwood'],http://arxiv.org/pdf/2305.03852,2023-05-05,,"This paper explores the potential for utilizing generative AI models in group-focused co-creative frameworks to enhance problem solving and ideation in business innovation and co-creation contexts, and proposes a novel prompting technique for conversational generative AI agents which employ methods inspired by traditional 'human-to-human' facilitation and instruction to enable active contribution to Design Thinking, a co-creative framework. Through experiments using this prompting technique, we gather evidence that conversational generative transformers (i.e. ChatGPT) have the capability to contribute context-specific, useful, and creative input into Design Thinking activities. We also discuss the potential benefits, limitations, and risks associated with using generative AI models in co-creative ideation and provide recommendations for future research.",0820a7ec1b7cac3470836161a92da7d59f626d14,Semantic Scholar,,, -342,image to tree with recursive prompting,"['James Batten', 'Matthew Sinclair', 'Ben Glocker', 'M. 
Schaap']",http://arxiv.org/pdf/2301.00447,2023-01-01,,". Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.",118802f91718ea2c566f2eaf1b4e25c439459f4d,Semantic Scholar,,, -343,spoken language intelligence of large language models for language learning,"['Linkai Peng', 'Baorian Nuchged', 'Yingming Gao']",https://arxiv.org/pdf/2308.14536,2023-08-28,,"People have long hoped for a conversational system that can assist in real-life situations, and recent progress on large language models (LLMs) is bringing this idea closer to reality. While LLMs are often impressive in performance, their efficacy in real-world scenarios that demand expert knowledge remains unclear. LLMs are believed to hold the most potential and value in education, especially in the development of Artificial intelligence (AI) based virtual teachers capable of facilitating language learning. Our focus is centered on evaluating the efficacy of LLMs in the realm of education, specifically in the areas of spoken language learning which encompass phonetics, phonology, and second language acquisition. We introduce a new multiple-choice question dataset to evaluate the effectiveness of LLMs in the aforementioned scenarios, including understanding and application of spoken language knowledge. In addition, we investigate the influence of various prompting techniques such as zero- and few-shot method (prepending the question with question-answer exemplars), chain-of-thought (CoT, think step-by-step), in-domain exampler and external tools (Google, Wikipedia). We conducted large-scale evaluation on popular LLMs (20 distinct models) using these methods. We achieved significant performance improvements compared to the zero-shot baseline in the practical questions reasoning (GPT-3.5, 49.1% ->63.1%; LLaMA2-70B-Chat, 42.2% ->48.6%). We found that models of different sizes have good understanding of concepts in phonetics, phonology, and second language acquisition, but show limitations in reasoning for real-world problems. Additionally, we also explore preliminary findings on conversational communication.",19b43ff57e5d8f8a99da4110fbc30b4ecc39a527,Semantic Scholar,,, -344,scalable multirobot collaboration with large language models centralized or decentralized systems,"['Yongchao Chen', 'Jacob Arkin', 'Yang Zhang', 'Nicholas Roy', 'Chuchu Fan']",https://arxiv.org/pdf/2309.15943,2023-09-27,,"A flurry of recent work has demonstrated that pre-trained large language models (LLMs) can be effective task planners for a variety of single-robot tasks. 
The planning performance of LLMs is significantly improved via prompting techniques, such as in-context learning or re-prompting with state feedback, placing new importance on the token budget for the context window. An under-explored but natural next direction is to investigate LLMs as multi-robot task planners. However, long-horizon, heterogeneous multi-robot planning introduces new challenges of coordination while also pushing up against the limits of context window length. It is therefore critical to find token-efficient LLM planning frameworks that are also able to reason about the complexities of multi-robot coordination. In this work, we compare the task success rate and token efficiency of four multi-agent communication frameworks (centralized, decentralized, and two hybrid) as applied to four coordination-dependent multi-agent 2D task scenarios for increasing numbers of agents. We find that a hybrid framework achieves better task success rates across all four tasks and scales better to more agents. We further demonstrate the hybrid frameworks in 3D simulations where the vision-to-text problem and dynamical errors are considered. See our project website https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and code.",1ad735714ad2e4ee5b94ce26c976e5ee5c7cde3b,Semantic Scholar,,, -345,the utility of large language models and generative ai for education research,"['Andrew Katz', 'Umair Shakir', 'B. Chambers']",http://arxiv.org/pdf/2305.18125,2023-05-29,,"The use of natural language processing (NLP) techniques in engineering education can provide valuable insights into the underlying processes involved in generating text. While accessing these insights can be labor-intensive if done manually, recent advances in NLP and large language models have made it a realistic option for individuals. This study explores and evaluates a combination of clustering, summarization, and prompting techniques to analyze over 1,000 student essays in which students discussed their career interests. The specific assignment prompted students to define and explain their career goals as engineers. Using text embedding representations of student responses, we clustered the responses together to identify thematically similar statements from students. The clustered responses were then summarized to quickly identify career interest themes. We also used a set of a priori codes about career satisfaction and sectors to demonstrate an alternative approach to using these generative text models to analyze student writing. The results of this study demonstrate the feasibility and usefulness of NLP techniques in engineering education research. By automating the initial analysis of student essays, researchers and educators can more efficiently and accurately identify key themes and patterns in student writing. The methods presented in this paper have broader applications for engineering education and research purposes beyond analyzing student essays. By explaining these methods to the engineering education community, readers can utilize them in their own contexts.",1fc0e5b30bfede1b78389d00f8c41bacd29ecd7f,Semantic Scholar,,, -346,foundation metrics quantifying effectiveness of healthcare conversations powered by generative ai,"['M. Abbasian', 'Elahe Khatibi', 'Iman Azimi', 'David Oniani', 'Zahra Shakeri Hossein Abad', 'Alexander Thieme', 'Zhongqi Yang', 'Yanshan Wang', 'Bryant Lin', 'Olivier Gevaert', 'Li-Jia Li', 'Ramesh Jain', 'Amir M. 
Rahmani']",https://arxiv.org/pdf/2309.12444,2023-09-21,,"Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present an comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process.",20cb4e0bd8871d33d82fc72ea82a0aa1dd922810,Semantic Scholar,,, -347,an empirical study on the robustness of the segment anything model (sam),"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",http://arxiv.org/pdf/2305.06422,2023-05-10,,"The Segment Anything Model (SAM) is a foundation model for general image segmentation. Although it exhibits impressive performance predominantly on natural images, understanding its robustness against various image perturbations and domains is critical for real-world applications where such challenges frequently arise. In this study we conduct a comprehensive robustness investigation of SAM under diverse real-world conditions. Our experiments encompass a wide range of image perturbations. Our experimental results demonstrate that SAM's performance generally declines under perturbed images, with varying degrees of vulnerability across different perturbations. By customizing prompting techniques and leveraging domain knowledge based on the unique characteristics of each dataset, the model's resilience to these perturbations can be enhanced, addressing dataset-specific challenges. 
This work sheds light on the limitations and strengths of SAM in real-world applications, promoting the development of more robust and versatile image segmentation solutions.",26d31d641116b656826737335b2accb802ac9931,Semantic Scholar,,, -348,boosting lowdata instance segmentation by unsupervised pretraining with saliency prompt,"['Hao Li', 'Dingwen Zhang', 'Nian Liu', 'Lechao Cheng', 'Yalun Dai', 'Chaoxi Zhang', 'Xinggang Wang', 'Junwei Han']",https://arxiv.org/pdf/2302.01171,2023-02-02,,"Inspired by DETR variants, query-based end-to-end instance segmentation (QEIS) methods have recently outperformed CNN-based models on large-scale datasets. Yet they would lose efficacy when only a small amount of training data is available since it's hard for the crucial queries/kernels to learn localization and shape priors. To this end, this work offers a novel unsupervised pre-training solution for low-data regimes. Inspired by the recent success of the Prompting technique, we introduce a new pre-training method that boosts QEIS models by giving Saliency Prompt for queries/kernels. Our method contains three parts: 1) Saliency Masks Proposal is responsible for generating pseudo masks from unlabeled images based on the saliency mechanism. 2) Prompt-Kernel Matching transfers pseudo masks into prompts and injects the corresponding localization and shape priors to the best-matched kernels. 3) Kernel Supervision is applied to supply supervision at the kernel level for robust learning. From a practical perspective, our pre-training method helps QEIS models achieve a similar convergence speed and comparable performance with CNN-based models in low-data regimes. Experimental results show that our method significantly boosts several QEIS models on three datasets.11Code: https://github.com/lifuguan/saliency.prompt",29965a1efc21a637e03a5e0a869d77eca77f5085,Semantic Scholar,,, -349,zeroshot temporal relation extraction with chatgpt,"['Chenhan Yuan', 'Qianqian Xie', 'S. Ananiadou']",http://arxiv.org/pdf/2304.05454,2023-04-11,,"The goal of temporal relation extraction is to infer the temporal relation between two events in the document. Supervised models are dominant in this task. In this work, we investigate ChatGPT’s ability on zero-shot temporal relation extraction. We designed three different prompt techniques to break down the task and evaluate ChatGPT. Our experiments show that ChatGPT’s performance has a large gap with that of supervised methods and can heavily rely on the design of prompts. We further demonstrate that ChatGPT can infer more small relation classes correctly than supervised methods. The current shortcomings of ChatGPT on temporal relation extraction are also discussed in this paper. We found that ChatGPT cannot keep consistency during temporal inference and it fails in actively long-dependency temporal inference.",2a663560b669a0b8d975675b3ac2546cc7386f3a,Semantic Scholar,,, -350,scigraphqa a largescale synthetic multiturn questionanswering dataset for scientific graphs,"['Sheng Li', 'Nima Tajbakhsh']",https://arxiv.org/pdf/2308.03349,2023-08-07,,"In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. SciGraphQA is 13 times larger than ChartVQA, the previously largest chart-visual question-answering dataset. It is also the largest open-sourced chart VQA dataset with non-synthetic charts. 
To build our dataset, we selected 290,000 Computer Science or Machine Learning ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate 295K samples of open-vocabulary multi-turn question-answering dialogues about the graphs. As context, we provided the text-only Palm-2 with paper title, abstract, paragraph mentioning the graph, and rich text contextual data from the graph itself, obtaining dialogues with an average 2.23 question-answer turns for each graph. We asked GPT-4 to assess the matching quality of our question-answer turns given the paper's context, obtaining an average rating of 8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most popular MLLM models such as LLaVa, mPLUGowl, BLIP-2, and openFlamingo's on our dataset, finding LLaVA-13B being the most performant with a CIDEr score of 0.08. We further enriched the question prompts for LLAVA by including the serialized data tables extracted from the graphs using the DePlot model, boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset, we also fine-tuned LLaVa using our dataset, reaching a substantially higher CIDEr score of 0.26. We anticipate further accuracy improvement by including segmentation mask tokens and leveraging larger LLM backbones coupled with emergent prompting techniques. Our code and data are open-sourced.",2bd1b8990db73b6495c11082bea2d5f925c5226f,Semantic Scholar,,, -351,oneshot labeling for automatic relevance estimation,"['Sean MacAvaney', 'Luca Soldaini']",https://arxiv.org/pdf/2302.11266,2023-02-22,,"Dealing with unjudged documents (""holes"") in relevance assessments is a perennial problem when evaluating search systems with offline experiments. Holes can reduce the apparent effectiveness of retrieval systems during evaluation and introduce biases in models trained with incomplete data. In this work, we explore whether large language models can help us fill such holes to improve offline evaluations. We examine an extreme, albeit common, evaluation setting wherein only a single known relevant document per query is available for evaluation. We then explore various approaches for predicting the relevance of unjudged documents with respect to a query and the known relevant document, including nearest neighbor, supervised, and prompting techniques. We find that although the predictions of these One-Shot Labelers (1SL) frequently disagree with human assessments, the labels they produce yield a far more reliable ranking of systems than the single labels do alone. Specifically, the strongest approaches can consistently reach system ranking correlations of over 0.86 with the full rankings over a variety of measures. Meanwhile, the approach substantially increases the reliability of t-tests due to filling holes in relevance assessments, giving researchers more confidence in results they find to be significant. Alongside this work, we release an easy-to-use software package to enable the use of 1SL for evaluation of other ad-hoc collections or systems.",352bcafbcc95a84d96019688955cab5c43eb23f0,Semantic Scholar,,, -352,large language models can be easily distracted by irrelevant context,"['Freda Shi', 'Xinyun Chen', 'Kanishka Misra', 'Nathan Scales', 'David Dohan', 'E. Chi', 'Nathanael Scharli', 'Denny Zhou']",http://arxiv.org/pdf/2302.00093,2023-01-31,,"Large language models have achieved impressive performance on various natural language processing tasks. 
However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.",3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e,Semantic Scholar,,, -353,are emergent abilities in large language models just incontext learning,"['Sheng Lu', 'Irina Bigoulaeva', 'Rachneet Sachdeva', 'Harish Tayyar Madabushi', 'Iryna Gurevych']",https://arxiv.org/pdf/2309.01809,2023-09-04,,"Large language models have exhibited emergent abilities, demonstrating exceptional performance across diverse tasks for which they were not explicitly trained, including those that require complex reasoning abilities. The emergence of such abilities carries profound implications for the future direction of research in NLP, especially as the deployment of such models becomes more prevalent. However, one key challenge is that the evaluation of these abilities is often confounded by competencies that arise in models through alternative prompting techniques, such as in-context learning and instruction following, which also emerge as the models are scaled up. In this study, we provide the first comprehensive examination of these emergent abilities while accounting for various potentially biasing factors that can influence the evaluation of models. We conduct rigorous tests on a set of 18 models, encompassing a parameter range from 60 million to 175 billion parameters, across a comprehensive set of 22 tasks. Through an extensive series of over 1,000 experiments, we provide compelling evidence that emergent abilities can primarily be ascribed to in-context learning. We find no evidence for the emergence of reasoning abilities, thus providing valuable insights into the underlying mechanisms driving the observed abilities and thus alleviating safety concerns regarding their use.",3e4afde5a9de2c1801da99b8aff5ae05923f256b,Semantic Scholar,,, -354,are large language models ready for healthcare a comparative study on clinical language understanding,"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",https://arxiv.org/pdf/2304.05368,2023-04-09,,"Large language models (LLMs) have made significant progress in various domains, including healthcare. However, the specialized nature of clinical language understanding tasks presents unique challenges and limitations that warrant further investigation. In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks. These tasks span a diverse range, including named entity recognition, relation extraction, natural language inference, semantic textual similarity, document classification, and question-answering. 
We also introduce a novel prompting strategy, self-questioning prompting (SQP), tailored to enhance LLMs' performance by eliciting informative questions and answers pertinent to the clinical scenarios at hand. Our evaluation underscores the significance of task-specific learning strategies and prompting techniques for improving LLMs' effectiveness in healthcare-related tasks. Additionally, our in-depth error analysis on the challenging relation extraction task offers valuable insights into error distribution and potential avenues for improvement using SQP. Our study sheds light on the practical implications of employing LLMs in the specialized domain of healthcare, serving as a foundation for future research and the development of potential applications in healthcare settings.",42780f9c7f73d73d7a887e2f787af0e079703d40,Semantic Scholar,,, -355,leveraging large language models to generate answer set programs,"['Adam Ishay', 'Zhun Yang', 'Joohyung Lee']",https://arxiv.org/pdf/2307.07699,2023-07-15,,"Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated exceptional performance in various natural language processing tasks and have shown the ability to solve certain reasoning problems. However, their reasoning capabilities are limited and relatively shallow, despite the application of various prompting techniques. In contrast, formal logic is adept at handling complex reasoning, but translating natural language descriptions into formal logic is a challenging task that non-experts struggle with. This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming. Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs. We carefully design prompts for an LLM to convert natural language descriptions into answer set programs in a step by step manner. Surprisingly, with just a few in-context learning examples, LLMs can generate reasonably complex answer set programs. The majority of errors made are relatively simple and can be easily corrected by humans, thus enabling LLMs to effectively assist in the creation of answer set programs.",4a6d7b11c4aba5a23f68856989366dd4311e960b,Semantic Scholar,,, -356,extracting multivalued relations from language models,"['Sneha Singhania', 'S. Razniewski', 'G. Weikum']",https://aclanthology.org/2023.repl4nlp-1.12.pdf,2023-07-06,,"The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations.",4b99e8273227fd05f2be20248050d81e97ab4f4e,Semantic Scholar,,, -357,teaching algorithmic reasoning via incontext learning,"['Hattie Zhou', 'Azade Nova', 'H. 
Larochelle', 'Aaron C. Courville', 'Behnam Neyshabur', 'Hanie Sedghi']",http://arxiv.org/pdf/2211.09066,2022-11-15,,"Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines.",4d17732d90440682b0500f4e209c6cc4fac20e0e,Semantic Scholar,,, -358,understanding and improving visual prompting a labelmapping perspective,"['Aochuan Chen', 'Yuguang Yao', 'Pin-Yu Chen', 'Yihua Zhang', 'Sijia Liu']",https://arxiv.org/pdf/2211.11635,2022-11-21,,"We revisit and advance visual prompting (VP), an input prompting technique for vision tasks. VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the target domain by simply incorporating universal prompts (in terms of input perturbation patterns) into downstream data points. Yet, it remains elusive why VP stays effective even given a ruleless label mapping (LM) between the source classes and the target classes. Inspired by the above, we ask: How is LM interrelated with VP? And how to exploit such a relationship to improve its accuracy on target tasks? We peer into the influence of LM on VP and provide an affirmative answer that a better ‘quality’ of LM (assessed by mapping precision and explanation) can consistently improve the effectiveness of VP. This is in contrast to the prior art where the factor of LM was missing. To optimize LM, we propose a new VP framework, termed ILM-VP (iterative label mapping-based visual prompting), which automatically re-maps the source labels to the target labels and progressively improves the target task accuracy of VP. Further, when using a contrastive language-image pretrained (CLIP) model for VP, we propose to integrate an LM process to assist the text prompt selection of CLIP and to improve the target task accuracy. Extensive experiments demonstrate that our proposal significantly outperforms state-of-the-art VP methods. As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, ILM-VP outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and 7.1% accuracy improvements on Flowers102 and DTD respectively. 
Code is available at https://github.com/OPTML-Group/ILM-VP.",4edd2d2770729380eda23826af1b78298b334a23,Semantic Scholar,,, -359,adaptivesolver framework for dynamic strategy selection in large language model reasoning,"['Jianpeng Zhou', 'Wanjun Zhong', 'Yanlin Wang', 'Jiahai Wang']",https://arxiv.org/pdf/2310.01446,2023-10-01,,"Large Language Models (LLMs) are showcasing impressive ability in handling complex reasoning tasks. In real-world situations, problems often span a spectrum of complexities. Humans inherently adjust their problem-solving approaches based on task complexity. However, most methodologies that leverage LLMs tend to adopt a uniform approach: utilizing consistent models, prompting methods, and degrees of problem decomposition, regardless of the problem complexity. Inflexibility of them can bring unnecessary computational overhead or sub-optimal performance. To address this problem, we introduce an Adaptive-Solver framework. It strategically modulates solving strategies based on the difficulties of the problems. Given an initial solution, the framework functions with two primary modules. The initial evaluation module assesses the adequacy of the current solution. If improvements are needed, the subsequent adaptation module comes into play. Within this module, three key adaptation strategies are employed: (1) Model Adaptation: Switching to a stronger LLM when a weaker variant is inadequate. (2) Prompting Method Adaptation: Alternating between different prompting techniques to suit the problem's nuances. (3) Decomposition Granularity Adaptation: Breaking down a complex problem into more fine-grained sub-questions to enhance solvability. Through such dynamic adaptations, our framework not only enhances computational efficiency but also elevates the overall performance. This dual-benefit ensures both the efficiency of the system for simpler tasks and the precision required for more complex questions. Experimental results from complex reasoning tasks reveal that the prompting method adaptation and decomposition granularity adaptation enhance performance across all tasks. Furthermore, the model adaptation approach significantly reduces API costs (up to 50%) while maintaining superior performance.",5076bbbf831a92174c9cc1b347bd0584560435fc,Semantic Scholar,,, -360,generative speech recognition error correction with large language models and taskactivating prompting,"['Chao-Han Huck Yang', 'Yile Gu', 'Yi-Chieh Liu', 'Shalini Ghosh', 'I. Bulyko', 'A. Stolcke']",https://arxiv.org/pdf/2309.15649,2023-09-27,,"We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these task without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel task activation prompting method that combines causal instructions and demonstration to increase its context windows. Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). 
By combining prompting techniques with fine-tuning we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.",50e8ab900d2ca4d83da120bbfe5338ee93dbe741,Semantic Scholar,,, -361,multiprompt with depth partitioned crossmodal learning,"['Yiqi Wang', 'Xianda Guo', 'Zheng Hua Zhu', 'Yingjie Tian']",https://arxiv.org/pdf/2305.06221,2023-05-10,,"In recent years, soft prompt learning methods have been proposed to fine-tune large-scale vision-language pre-trained models for various downstream tasks. These methods typically combine learnable textual tokens with class tokens as input for models with frozen parameters. However, they often employ a single prompt to describe class contexts, failing to capture categories' diverse attributes adequately. This study introduces the Partitioned Multi-modal Prompt (PMPO), a multi-modal prompting technique that extends the soft prompt from a single learnable prompt to multiple prompts. Our method divides the visual encoder depths and connects learnable prompts to the separated visual depths, enabling different prompts to capture the hierarchical contextual depths of visual representations. Furthermore, to maximize the advantages of multi-prompt learning, we incorporate prior information from manually designed templates and learnable multi-prompts, thus improving the generalization capabilities of our approach. We evaluate the effectiveness of our approach on three challenging tasks: new class generalization, cross-dataset evaluation, and domain generalization. For instance, our method achieves a $79.28$ harmonic mean, averaged over 11 diverse image recognition datasets ($+7.62$ compared to CoOp), demonstrating significant competitiveness compared to state-of-the-art prompting methods.",511ad6b37cb028bdfbd6096e6d20aa4b8b34fafc,Semantic Scholar,,, -362,large language models are pretty good zeroshot video game bug detectors,"['Mohammad Reza Taesiri', 'Finlay Macklon', 'Yihe Wang', 'Hengshuo Shen', 'C. Bezemer']",http://arxiv.org/pdf/2210.02506,2022-10-05,,"Video game testing requires game-specific knowledge as well as common sense reasoning about the events in the game. While AI-driven agents can satisfy the first requirement, it is not yet possible to meet the second requirement automatically. Therefore, video game testing often still relies on manual testing, and human testers are required to play the game thoroughly to detect bugs. As a result, it is challenging to fully automate game testing. In this study, we explore the possibility of leveraging the zero-shot capabilities of large language models for video game bug detection. By formulating the bug detection problem as a question-answering task, we show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game. To this end, we introduce the GameBugDescriptions benchmark dataset, which consists of 167 buggy gameplay videos and a total of 334 question-answer pairs across 8 games. We extensively evaluate the performance of six models across the OPT and InstructGPT large language model families on our benchmark dataset. Our results show promising results for employing language models to detect video game bugs. With the proper prompting technique, we could achieve an accuracy of 70.66%, and on some video games, up to 78.94%. 
Our code, evaluation data and the benchmark can be found on https://asgaardlab.github.io/LLMxBugs",55e3fe05598be7c3dd357d51166869f6571b824f,Semantic Scholar,,, -363,help me think a simple prompting strategy for nonexperts to create customized content with models,"['Swaroop Mishra', 'E. Nouri']",http://arxiv.org/pdf/2208.08232,2022-08-17,,"Controlling the text generated by language models and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of providing control are task-specific and lack generality; this provides overwhelming choices for non-expert users to find a suitable method for their task. The effort associated with those techniques, such as in writing examples, explanations, instructions, etc. further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy HELP ME THINK where we encourage GPT3 to help non-expert users by asking a set of relevant questions and leveraging user answers to execute the task. We demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks. Specifically, we focus on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of large language models.",5ba1e498665d2b3536cb436f0cf484dce03459fe,Semantic Scholar,,, -364,leveraging fewshot data augmentation and waterfall prompting for response generation,"['Lea Krause', ""Selene B'aez Santamar'ia"", 'Michiel van der Meer', 'Urja Khurana']",https://arxiv.org/pdf/2308.01080,2023-08-02,,"This paper discusses our approaches for task-oriented conversational modelling using subjective knowledge, with a particular emphasis on response generation. Our methodology was shaped by an extensive data analysis that evaluated key factors such as response length, sentiment, and dialogue acts present in the provided dataset. We used few-shot learning to augment the data with newly generated subjective knowledge items and present three approaches for DSTC11: (1) task-specific model exploration, (2) incorporation of the most frequent question into all generated responses, and (3) a waterfall prompting technique using a combination of both GPT-3 and ChatGPT.",657e364ec6932558f426583dc31953e547bf6575,Semantic Scholar,,, -365,the formai dataset generative ai in software security through the lens of formal verification,"['Norbert Tihanyi', 'Tamás Bisztray', 'Ridhi Jain', 'M. Ferrag', 'L. Cordeiro', 'Vasileios Mavroeidis']",https://arxiv.org/pdf/2307.02192,2023-07-05,,"This paper presents the FormAI dataset, a large collection of 112, 000 AI-generated compilable and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. 
This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112, 000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security.",67455478e77c8672d0dd08f89735a8813bbfec65,Semantic Scholar,,, -366,fixing rust compilation errors using llms,"['Pantazis Deligiannis', 'A. Lal', 'Nikita Mehrotra', 'Aseem Rastogi']",https://arxiv.org/pdf/2308.05177,2023-08-09,,"The Rust programming language, with its safety guarantees, has established itself as a viable choice for low-level systems programming language over the traditional, unsafe alternatives like C/C++. These guarantees come from a strong ownership-based type system, as well as primitive support for features like closures, pattern matching, etc., that make the code more concise and amenable to reasoning. These unique Rust features also pose a steep learning curve for programmers. This paper presents a tool called RustAssistant that leverages the emergent capabilities of Large Language Models (LLMs) to automatically suggest fixes for Rust compilation errors. RustAssistant uses a careful combination of prompting techniques as well as iteration with an LLM to deliver high accuracy of fixes. RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories. We plan to release our dataset of Rust compilation errors to enable further research.",674c5ec7b144aea1f6b143baeb17cc839f52416e,Semantic Scholar,,, -367,synthetic prompting generating chainofthought demonstrations for large language models,"['Zhihong Shao', 'Yeyun Gong', 'Yelong Shen', 'Minlie Huang', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2302.00618,2023-02-01,,"Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and forward process to generate new examples. The backward process generates a question that match a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. 
We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.",69619a2a47faee7a29ec596db13172e2a42ff921,Semantic Scholar,,, -368,unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations,"['Tiziano Labruna', 'Sofia Brenna', 'Andrea Zaninello', 'B. Magnini']",http://arxiv.org/pdf/2305.14556,2023-05-23,,"Large pre-trained language models have exhibited unprecedented capabilities in producing high-quality text via prompting techniques. This fact introduces new possibilities for data collection and annotation, particularly in situations where such data is scarce, complex to gather, expensive, or even sensitive. In this paper, we explore the potential of these models to generate and annotate goal-oriented dialogues, and conduct an in-depth analysis to evaluate their quality. Our experiments employ ChatGPT, and encompass three categories of goal-oriented dialogues (task-oriented, collaborative, and explanatory), two generation modes (interactive and one-shot), and two languages (English and Italian). Based on extensive human-based evaluations, we demonstrate that the quality of generated dialogues and annotations is on par with those generated by humans.",7307ee3c819c34b7c93ccbbd330a4c889956b36f,Semantic Scholar,,, -369,events realm event reasoning of entity states via language models,"['Evangelia Spiliopoulou', 'Artidoro Pagnoni', 'Yonatan Bisk', 'E. Hovy']",https://arxiv.org/pdf/2211.05392,2022-11-10,,"This paper investigates models of event implications. Specifically, how well models predict entity state-changes, by targeting their understanding of physical attributes. Nominally, Large Language models (LLM) have been exposed to procedural knowledge about how objects interact, yet our benchmarking shows they fail to reason about the world. Conversely, we also demonstrate that existing approaches often misrepresent the surprising abilities of LLMs via improper task encodings and that proper model prompting can dramatically improve performance of reported baseline results across multiple tasks. In particular, our results indicate that our prompting technique is especially useful for unseen attributes (out-of-domain) or when only limited data is available.",748a2700ec11f51560a69ec05c67ca9f97014be7,Semantic Scholar,,, -370,fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems,"['Aniruddha Deb', 'Neeva Oza', 'Sarthak Singla', 'Dinesh Khandelwal', 'Dinesh Garg', 'Parag Singla']",https://arxiv.org/pdf/2310.01991,2023-10-03,,"While forward reasoning (i.e. find the answer given the question) has been explored extensively in the recent literature, backward reasoning is relatively unexplored. We examine the backward reasoning capabilities of LLMs on Math Word Problems (MWPs): given a mathematical question and its answer, with some details omitted from the question, can LLMs effectively retrieve the missing information? In this paper, we formally define the backward reasoning task on math word problems and modify three datasets to evaluate this task: GSM8k, SVAMP and MultiArith. Our findings show a significant drop in the accuracy of models on backward reasoning compared to forward reasoning across four SOTA LLMs (GPT4, GPT3.5, PaLM-2, and LLaMa-2). 
Utilizing the specific format of this task, we propose three novel techniques that improve performance: Rephrase reformulates the given problem into a forward reasoning problem, PAL-Tools combines the idea of Program-Aided LLMs to produce a set of equations that can be solved by an external solver, and Check your Work exploits the availability of natural verifier of high accuracy in the forward direction, interleaving solving and verification steps. Finally, realizing that each of our base methods correctly solves a different set of problems, we propose a novel Bayesian formulation for creating an ensemble over these base methods aided by a verifier to further boost the accuracy by a significant margin. Extensive experimentation demonstrates that our techniques successively improve the performance of LLMs on the backward reasoning task, with the final ensemble-based method resulting in a substantial performance gain compared to the raw LLMs with standard prompting techniques such as chain-of-thought.",8db1dcae055842f43ccac04182957b20d15bbe6b,Semantic Scholar,,, -371,investigating prompting techniques for zero and fewshot visual question answering,"['Rabiul Awal', 'Le Zhang', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2306.09996,2023-06-16,,"Visual question answering (VQA) is a challenging task that requires the ability to comprehend and reason with visual information. While recent vision-language models have made strides, they continue to struggle with zero-shot VQA, particularly in handling complex compositional questions and adapting to new domains i.e. knowledge-based reasoning. This paper explores the use of various prompting strategies, focusing on the BLIP2 model, to enhance zero-shot VQA performance. We conduct a comprehensive investigation across several VQA datasets, examining the effectiveness of different question templates, the role of few-shot exemplars, the impact of chain-of-thought (CoT) reasoning, and the benefits of incorporating image captions as additional visual cues. Despite the varied outcomes, our findings demonstrate that carefully designed question templates and the integration of additional visual cues, like image captions, can contribute to improved VQA performance, especially when used in conjunction with few-shot examples. However, we also identify a limitation in the use of chain-of-thought rationalization, which negatively affects VQA accuracy. Our study thus provides critical insights into the potential of prompting for improving zero-shot VQA performance.",8efc20988021ce3b4b05dd44b13e27260ee9b99b,Semantic Scholar,,, -372,enabling conversational interaction with mobile ui using large language models,"['Bryan Wang', 'Gang Li', 'Yang Li']",https://dl.acm.org/doi/pdf/10.1145/3544548.3580895,2022-09-18,,"Conversational agents show the promise to allow users to interact with mobile devices using language. However, to perform diverse UI tasks with natural language, developers typically need to create separate datasets and models for each specific task, which is expensive and effort-consuming. Recently, pre-trained large language models (LLMs) have been shown capable of generalizing to various downstream tasks when prompted with a handful of examples from the target task. This paper investigates the feasibility of enabling versatile conversational interactions with mobile UIs using a single LLM. We designed prompting techniques to adapt an LLM to mobile UIs. 
We experimented with four important modeling tasks that address various scenarios in conversational interaction. Our method achieved competitive performance on these challenging tasks without requiring dedicated datasets and training, offering a lightweight and generalizable approach to enable language-based mobile interaction.",99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789,Semantic Scholar,,, -373,finegrained visual prompting,"['Lingfeng Yang', 'Yue Wang', 'Xiang Li', 'Xinlong Wang', 'Jian Yang']",http://arxiv.org/pdf/2306.04356,2023-06-07,,"Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot transfer capabilities in image-level visual perception. However, these models have shown limited performance in instance-level tasks that demand precise localization and recognition. Previous works have suggested that incorporating visual prompts, such as colorful boxes or circles, can improve the ability of models to recognize objects of interest. Nonetheless, compared to language prompting, visual prompting designs are rarely explored. Existing approaches, which employ coarse visual cues such as colorful boxes or circles, often result in sub-optimal performance due to the inclusion of irrelevant and noisy pixels. In this paper, we carefully study the visual prompting designs by exploring more fine-grained markings, such as segmentation masks and their variations. In addition, we introduce a new zero-shot framework that leverages pixel-level annotations acquired from a generalist segmentation model for fine-grained visual prompting. Consequently, our investigation reveals that a straightforward application of blur outside the target mask, referred to as the Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting strategy leverages the precise mask annotations to reduce focus on weakly related regions while retaining spatial coherence between the target and the surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates superior performance in zero-shot comprehension of referring expressions on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the RefCOCO+ testA subset. The part detection experiments conducted on the PACO dataset further validate the preponderance of FGVP over existing visual prompting techniques. Code and models will be made available.",a01a9c4a114fbf201540268f928ccf77bc3f9357,Semantic Scholar,,, -374,questioning the survey responses of large language models,"['Ricardo Dominguez-Olmedo', 'Moritz Hardt', 'Celestine Mendler-Dunner']",https://arxiv.org/pdf/2306.07951,2023-06-13,,"As large language models increase in capability, researchers have started to conduct surveys of all kinds on these models with varying scientific motivations. In this work, we examine what we can learn from language models' survey responses on the basis of the well-established American Community Survey (ACS) by the U.S. Census Bureau. Using a de-facto standard multiple-choice prompting technique and evaluating 40 different language models, hundreds of thousands of times each on questions from the ACS, we systematically establish two dominant patterns. First, models have significant position and labeling biases, for example, towards survey responses labeled with the letter""A"". Second, when adjusting for labeling biases through randomized answer ordering, models across the board trend towards uniformly random survey responses. 
In fact, binary classifiers can almost perfectly differentiate between models' responses to the ACS and the responses of the US census. Taken together, our findings suggest caution in treating survey responses from language models as equivalent to those of human populations at present time.",a86e12654376323b712dd3d39d5ff22283f87a7b,Semantic Scholar,,, -375,mathprompter mathematical reasoning using large language models,"['Shima Imani', 'Liang Du', 'H. Shrivastava']",http://arxiv.org/pdf/2303.05398,2023-03-04,,"Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose ‘MathPrompter’, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the ‘MultiArith’ dataset (78.7% - 92.5%) evaluated using 175B parameter GPT-based LLM.",b626560f19f815808a289ef5c24a17c57320da70,Semantic Scholar,,, -376,boosting logical reasoning in large language models through a new framework the graph of thought,"['Bin Lei', 'Pei-Hung Lin', 'C. Liao', 'Caiwen Ding']",https://arxiv.org/pdf/2308.08614,2023-08-16,,"Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy dramatically decreases. Current research has explored the realm of \textit{prompting engineering} to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed \textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, \textit{Tree of Thought (ToT)}, our approach registered an average accuracy boost of $23\%$, $24\%$, and $15\%$.",ba4aa83248a1d08b521392eb971e47d10b7c74e1,Semantic Scholar,,, -377,scitab a challenging benchmark for compositional reasoning and claim verification on scientific tables,"['Xinyuan Lu', 'Liangming Pan', 'Qian Liu', 'Preslav Nakov', 'Min-Yen Kan']",http://arxiv.org/pdf/2305.13186,2023-05-22,,"Current scientific fact-checking benchmarks exhibit several shortcomings, such as biases arising from crowd-sourced claims and an over-reliance on text-based evidence. 
We present SCITAB, a challenging evaluation dataset consisting of 1.2K expert-verified scientific claims that 1) originate from authentic scientific publications and 2) require compositional reasoning for verification. The claims are paired with evidence-containing scientific tables annotated with labels. Through extensive evaluations, we demonstrate that SCITAB poses a significant challenge to state-of-the-art models, including table-based pretraining models and large language models. All models except GPT-4 achieved performance barely above random guessing. Popular prompting techniques, such as Chain-of-Thought, do not achieve much performance gains on SCITAB. Our analysis uncovers several unique challenges posed by SCITAB, including table grounding, claim ambiguity, and compositional reasoning. Our codes and data are publicly available at https://github.com/XinyuanLu00/SciTab.",c20b18d6b919695a69e416debf8bf1ffeac03992,Semantic Scholar,,, -378,optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models,"['Badr AlKhamissi', 'Siddharth Verma', 'Ping Yu', 'Zhijing Jin', 'Asli Celikyilmaz', 'Mona T. Diab']",https://aclanthology.org/2023.nlrse-1.10.pdf,2023-05-19,,"We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model’s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4%) and Analogical (+13.9%) reasoning, as well as skills that exhibit negligible or negative effects.",c218cd1772999517b137bbbc9872c4f67e540b7f,Semantic Scholar,,, -379,knowledgeprompted estimator a novel approach to explainable machine translation assessment,"['Hao Yang', 'Min Zhang', 'Shimin Tao', 'Minghan Wang', 'Daimeng Wei', 'Yanfei Jiang']",http://arxiv.org/pdf/2306.07486,2023-06-13,,"Cross-lingual Machine Translation (MT) quality estimation plays a crucial role in evaluating translation performance. GEMBA, the first MT quality assessment metric based on Large Language Models (LLMs), employs one-step prompting to achieve state-of-the-art (SOTA) in system-level MT quality estimation; however, it lacks segment-level analysis. In contrast, Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering improved reasoning and explainability. 
In this paper, we introduce Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three one-step prompting techniques, including perplexity, token-level similarity, and sentence-level similarity. This method attains enhanced performance for segment-level estimation compared with previous deep learning models and one-step prompting approaches. Furthermore, supplementary experiments on word-level visualized alignment demonstrate that our KPE method significantly improves token alignment compared with earlier models and provides better interpretability for MT quality estimation. Code will be released upon publication.",d1bd7ae97588eccfbcd31ffce4fc924d12a5de4d,Semantic Scholar,,, -380,prompting as probing using language models for knowledge base construction,"['Dimitrios Alivanistos', ""Selene B'aez Santamar'ia"", 'Michael Cochez', 'Jan-Christoph Kalo', 'Emile van Krieken', 'Thiviyan Thanapalasingam']",http://arxiv.org/pdf/2208.11057,2022-08-23,,"Language Models (LMs) have proven to be useful in various downstream applications, such as summarisation, translation, question answering and text classification. LMs are becoming increasingly important tools in Artificial Intelligence, because of the vast quantity of information they can store. In this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a large Language Model originally proposed by OpenAI in 2020, to perform the task of Knowledge Base Construction (KBC). ProP implements a multi-step approach that combines a variety of prompting techniques to achieve this. Our results show that manual prompt curation is essential, that the LM must be encouraged to give answer sets of variable lengths, in particular including empty answer sets, that true/false questions are a useful device to increase precision on suggestions generated by the LM, that the size of the LM is a crucial factor, and that a dictionary of entity aliases improves the LM score. Our evaluation study indicates that these proposed techniques can substantially enhance the quality of the final predictions: ProP won track 2 of the LM-KBC competition, outperforming the baseline by 36.4 percentage points. Our implementation is available on https://github.com/HEmile/iswc-challenge.",ddc9aeac18638575bbb90ede4c6829ec15c2947e,Semantic Scholar,,, -381,devgpt studying developerchatgpt conversations,"['Tao Xiao', 'Christoph Treude', 'Hideaki Hata', 'Kenichi Matsumoto']",https://arxiv.org/pdf/2309.03914,2023-08-31,,"The emergence of large language models (LLMs) such as ChatGPT has disrupted the landscape of software development. Many studies are investigating the quality of responses generated by ChatGPT, the efficacy of various prompting techniques, and its comparative performance in programming contests, to name a few examples. Yet, we know very little about how ChatGPT is actually used by software developers. What questions do developers present to ChatGPT? What are the dynamics of these interactions? What is the backdrop against which these conversations are held, and how do the conversations feedback into the artifacts of their work? 
To close this gap, we introduce DevGPT, a curated dataset which encompasses 17,913 prompts and ChatGPT's responses including 11,751 code snippets, coupled with the corresponding software development artifacts -- ranging from source code, commits, issues, pull requests, to discussions and Hacker News threads -- to enable the analysis of the context and implications of these developer interactions with ChatGPT.",def24fb1e977db69f4b1b866b807f9ab9bad5227,Semantic Scholar,,, -382,upar a kantianinspired prompting framework for enhancing large language model capabilities,"['Hejia Geng', 'Boxun Xu', 'Peng Li']",https://arxiv.org/pdf/2310.01441,2023-09-30,,"Large Language Models (LLMs) have demonstrated impressive inferential capabilities, with numerous research endeavors devoted to enhancing this capacity through prompting. Despite these efforts, a unified epistemological foundation is still conspicuously absent. Drawing inspiration from Kant's a priori philosophy, we propose the UPAR prompting framework, designed to emulate the structure of human cognition within LLMs. The UPAR framework is delineated into four phases:""Understand"",""Plan"",""Act"", and""Reflect"", enabling the extraction of structured information from complex contexts, prior planning of solutions, execution according to plan, and self-reflection. This structure significantly augments the explainability and accuracy of LLM inference, producing a human-understandable and inspectable inferential trajectory. Furthermore, our work offers an epistemological foundation for existing prompting techniques, allowing for a possible systematic integration of these methods. With GPT-4, our approach elevates the accuracy from COT baseline of 22.92% to 58.33% in a challenging subset of GSM8K, and from 67.91% to 75.40% in the causal judgment task.",e61a96cf602ebff6683929aaf916e25614a475bc,Semantic Scholar,,, -383,understanding stereotypes in language models towards robust measurement and zeroshot debiasing,"['Justus Mattern', 'Zhijing Jin', 'Mrinmaya Sachan', 'Rada Mihalcea', 'B. Scholkopf']",http://arxiv.org/pdf/2212.10678,2022-12-20,,"Generated texts from large pretrained language models have been shown to exhibit a variety of harmful, human-like biases about various demographics. These findings prompted large efforts aiming to understand and measure such effects, with the goal of providing benchmarks that can guide the development of techniques mitigating these stereotypical associations. However, as recent research has pointed out, the current benchmarks lack a robust experimental setup, consequently hindering the inference of meaningful conclusions from their evaluation metrics. In this paper, we extend these arguments and demonstrate that existing techniques and benchmarks aiming to measure stereotypes tend to be inaccurate and consist of a high degree of experimental noise that severely limits the knowledge we can gain from benchmarking language models based on them. Accordingly, we propose a new framework for robustly measuring and quantifying biases exhibited by generative language models. 
Finally, we use this framework to investigate GPT-3's occupational gender bias and propose prompting techniques for mitigating these biases without the need for fine-tuning.",ed5ebed7ff668fd7362d531a40b49b3aea33b3a9,Semantic Scholar,,, -384,prompts should not be seen as secrets systematically measuring prompt extraction attack success,"['Yiming Zhang', 'Daphne Ippolito']",https://arxiv.org/pdf/2307.06865,2023-07-13,,"The generations of large language models are commonly controlled through prompting techniques, where a user's query to the model is prefixed with a prompt that aims to guide the model's behaviour on the query. The prompts used by companies to guide their models are often treated as secrets, to be hidden from the user making the query. They have even been treated as commodities to be bought and sold. However, there has been anecdotal evidence showing that the prompts can be extracted by a user even when they are kept secret. In this paper, we present a framework for systematically measuring the success of prompt extraction attacks. In experiments with multiple sources of prompts and multiple underlying language models, we find that simple text-based attacks can in fact reveal prompts with high probability.",f330f502bf1e92fabf7f246597fa9320d956c0c8,Semantic Scholar,,, -385,minidalle3 interactive text to image by prompting large language models,"['Zeqiang Lai', 'Xizhou Zhu', 'Jifeng Dai', 'Yu Qiao', 'Wenhai Wang']",https://arxiv.org/pdf/2310.07653,2023-10-11,,"The revolution of artificial intelligence content generation has been rapidly accelerated with the booming text-to-image (T2I) diffusion models. Within just two years of development, it was unprecedentedly of high-quality, diversity, and creativity that the state-of-the-art models could generate. However, a prevalent limitation persists in the effective communication with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering with complex word compositions, magic tags, and annotations. Inspired by the recently released DALLE3 - a T2I model directly built-in ChatGPT that talks human language, we revisit the existing T2I systems endeavoring to align human intent and introduce a new task - interactive text to image (iT2I), where people can interact with LLM for interleaved high-quality image generation/edit/refinement and question answering with stronger images and text correspondences using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. We evaluate our approach for iT2I in a variety of common-used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a convenient and low-cost way to introduce the iT2I ability for any existing LLMs and any text-to-image models without any training while bringing little degradation on LLMs' inherent capabilities in, e.g., question answering and code generation. We hope this work could draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.",f669d7a6fab0147253178a6fc854e05e3d92fb3f,Semantic Scholar,,, -386,gopro generate and optimize prompts in clip using selfsupervised learning,"['M. 
Singha', 'Ankit Jha', 'Biplab Banerjee']",https://arxiv.org/pdf/2308.11605,2023-08-22,,"Large-scale foundation models, such as CLIP, have demonstrated remarkable success in visual recognition tasks by embedding images in a semantically rich space. Self-supervised learning (SSL) has also shown promise in improving visual recognition by learning invariant features. However, the combination of CLIP with SSL is found to face challenges due to the multi-task framework that blends CLIP's contrastive loss and SSL's loss, including difficulties with loss weighting and inconsistency among different views of images in CLIP's output space. To overcome these challenges, we propose a prompt learning-based model called GOPro, which is a unified framework that ensures similarity between various augmented views of input images in a shared image-text embedding space, using a pair of learnable image and text projectors atop CLIP, to promote invariance and generalizability. To automatically learn such prompts, we leverage the visual content and style primitives extracted from pre-trained CLIP and adapt them to the target task. In addition to CLIP's cross-domain contrastive loss, we introduce a visual contrastive loss and a novel prompt consistency loss, considering the different views of the images. GOPro is trained end-to-end on all three loss objectives, combining the strengths of CLIP and SSL in a principled manner. Empirical evaluations demonstrate that GOPro outperforms the state-of-the-art prompting techniques on three challenging domain generalization tasks across multiple benchmarks by a significant margin. Our code is available at https://github.com/mainaksingha01/GOPro.",fc9bd3642df2a378c11131362b27deecbd02b70a,Semantic Scholar,,, -387,the devil is in the errors leveraging large language models for finegrained machine translation evaluation,"['Patrick Fernandes', 'Daniel Deutsch', 'M. Finkelstein', 'Parker Riley', 'André F. T. Martins', 'Graham Neubig', 'Ankush Garg', 'J. Clark', 'Markus Freitag', 'Orhan Firat']",https://arxiv.org/pdf/2308.07286,2023-08-14,,"Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.",fd80f7f3673fc6ca02f192d5d73426f11a4be659,Semantic Scholar,,, -388,"unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing",['Walid Hariri'],http://arxiv.org/pdf/2304.02017,2023-03-27,,"Large language models have revolutionized the field of artificial intelligence and have been used in various applications. 
Among these models, ChatGPT (Chat Generative Pre-trained Transformer) has been developed by OpenAI, it stands out as a powerful tool that has been widely adopted. ChatGPT has been successfully applied in numerous areas, including chatbots, content generation, language translation, personalized recommendations, and even medical diagnosis and treatment. Its success in these applications can be attributed to its ability to generate human-like responses, understand natural language, and adapt to different contexts. Its versatility and accuracy make it a powerful tool for natural language processing (NLP). However, there are also limitations to ChatGPT, such as its tendency to produce biased responses and its potential to perpetuate harmful language patterns. This article provides a comprehensive overview of ChatGPT, its applications, advantages, and limitations. Additionally, the paper emphasizes the importance of ethical considerations when using this robust tool in real-world scenarios. Finally, This paper contributes to ongoing discussions surrounding artificial intelligence and its impact on vision and NLP domains by providing insights into prompt engineering techniques.",9e93ab728e3e174ec1492009055885a9123d434f,Semantic Scholar,,, -389,simulating hp lovecraft horror literature with the chatgpt large language model,"[""Eduardo C. Garrido-Merch'an"", 'J. L. Arroyo-Barrigüete', 'Roberto Gozalo-Brizuela']",http://arxiv.org/pdf/2305.03429,2023-05-05,,"In this paper, we present a novel approach to simulating H.P. Lovecraft's horror literature using the ChatGPT large language model, specifically the GPT-4 architecture. Our study aims to generate text that emulates Lovecraft's unique writing style and themes, while also examining the effectiveness of prompt engineering techniques in guiding the model's output. To achieve this, we curated a prompt containing several specialized literature references and employed advanced prompt engineering methods. We conducted an empirical evaluation of the generated text by administering a survey to a sample of undergraduate students. Utilizing statistical hypothesis testing, we assessed the students ability to distinguish between genuine Lovecraft works and those generated by our model. Our findings demonstrate that the participants were unable to reliably differentiate between the two, indicating the effectiveness of the GPT-4 model and our prompt engineering techniques in emulating Lovecraft's literary style. In addition to presenting the GPT model's capabilities, this paper provides a comprehensive description of its underlying architecture and offers a comparative analysis with related work that simulates other notable authors and philosophers, such as Dennett. By exploring the potential of large language models in the context of literary emulation, our study contributes to the body of research on the applications and limitations of these models in various creative domains.",a7d8a6d8c04bd4554da4219be0f9d3bf87e2e56b,Semantic Scholar,,, -390,protect your prompts protocols for ip protection in llm applications,"['M. V. Wyk', 'M. Bekker', 'X. L. Richards', 'K. Nixon']",http://arxiv.org/pdf/2306.06297,2023-06-09,,"With the rapid adoption of AI in the form of large language models (LLMs), the potential value of carefully engineered prompts has become significant. However, to realize this potential, prompts should be tradable on an open market. 
Since prompts are, at present, generally economically non-excludable, by virtue of their nature as text, no general competitive market has yet been established. This note discusses two protocols intended to provide protection of prompts, elevating their status as intellectual property, thus confirming the intellectual property rights of prompt engineers, and potentially supporting the flourishing of an open market for LLM prompts.",08fd45ac85916b95f734cc75af8660cff73c33ca,Semantic Scholar,,, -391,abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models,"['Mohi Reza', 'Nathan Laundry', 'Ilya Musabirov', 'Peter Dushniku', 'Zhi Yuan Michael Yu', 'Kashish Mittal', 'Tovi Grossman', 'Michael Liut', 'Anastasia Kuzminykh', 'Joseph Jay Williams']",https://arxiv.org/pdf/2310.00117,2023-09-29,,"Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art large language models (LLMs) can simplify writing variation generation. However, current interfaces pose challenges for simultaneous consideration of multiple variations: creating new versions without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly produce multiple variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text segments for rapid in-place comparisons using mouse-over interactions on a context toolbar. Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p<0.001), enhances user perceptions of the revision process (d = 2.41, p<0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.",0f71c1e2acf286951544d3bd9eb5d85acfba5af1,Semantic Scholar,,, -392,udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers,"['Jon Saad-Falcon', 'O. Khattab', 'Keshav Santhanam', 'Radu Florian', 'M. Franz', 'S. Roukos', 'Avirup Sil', 'Md Arafat Sultan', 'Christopher Potts']",https://arxiv.org/pdf/2303.00807,2023-03-01,,"Many information retrieval tasks require large labeled datasets for fine-tuning. However, such datasets are often unavailable, and their utility for real-world applications can diminish quickly due to domain shifts. To address this challenge, we develop and motivate a method for using large language models (LLMs) to generate large numbers of synthetic queries cheaply. The method begins by generating a small number of synthetic queries using an expensive LLM. After that, a much less expensive one is used to create large numbers of synthetic queries, which are used to fine-tune a family of reranker models. These rerankers are then distilled into a single efficient retriever for use in the target domain. 
We show that this technique boosts zero-shot accuracy in long-tail domains and achieves substantially lower latency than standard reranking methods.",14d81c84662a1de7b5605a5a68bb0f63d6e293e5,Semantic Scholar,,, -393,incontext impersonation reveals large language models' strengths and biases,"['Leonard Salewski', 'Stephan Alaniz', 'Isabel Rio-Torto', 'Eric Schulz', 'Zeynep Akata']",http://arxiv.org/pdf/2305.14930,2023-05-24,,"In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated either with a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their hidden strengths and biases.",19c63eade265d8a47d160098d97194b3b83d3770,Semantic Scholar,,, -394,chatgpt for plcdcs control logic generation,"['Heiko Koziolek', 'Sten Gruener', 'Virendra Ashiwal']",https://arxiv.org/pdf/2305.15809,2023-05-25,,"Large language models (LLMs) providing generative AI have become popular to support software engineers in creating, summarizing, optimizing, and documenting source code. It is still unknown how LLMs can support control engineers using typical control programming languages in programming tasks. Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code generation but did not yet tackle control logic programming. A key contribution of this paper is an exploratory study, for which we created 100 LLM prompts in 10 representative categories to analyze control logic generation for of PLCs and DCS from natural language. We tested the prompts by generating answers with ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3 Structured Text code in many cases and demonstrated useful reasoning skills that could boost control engineer productivity. Our prompt collection is the basis for a more formal LLM benchmark to test and compare such models for control logic generation.",1c1b83df13de4334e48a4c2039bc7ddfa374c486,Semantic Scholar,,, -395,saytap language to quadrupedal locomotion,"['Yujin Tang', 'Wenhao Yu', 'Jie Tan', 'H. Zen', 'Aleksandra Faust', 'Tatsuya Harada']",https://arxiv.org/pdf/2306.07580,2023-06-13,,"Large language models (LLMs) have demonstrated the potential to perform high-level planning. Yet, it remains a challenge for LLMs to comprehend low-level commands, such as joint angle targets or motor torques. 
This paper proposes an approach to use foot contact patterns as an interface that bridges human commands in natural language and a locomotion controller that outputs these low-level commands. This results in an interactive system for quadrupedal robots that allows the users to craft diverse locomotion behaviors flexibly. We contribute an LLM prompt design, a reward function, and a method to expose the controller to the feasible distribution of contact patterns. The results are a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware. Compared with other design choices, the proposed approach enjoys more than 50% success rate in predicting the correct contact patterns and can solve 10 more tasks out of a total of 30 tasks. Our project site is: https://saytap.github.io.",1fc21645ccc8e99eb8162e5f91407148b7f77e3d,Semantic Scholar,,, -396,"mmhqaicl multimodal incontext learning for hybrid question answering over text, tables and images","['Weihao Liu', 'Fangyu Lei', 'Tongxu Luo', 'Jiahe Lei', 'Shizhu He', 'Jun Zhao', 'Kang Liu']",https://arxiv.org/pdf/2309.04790,2023-09-09,,"In the real world, knowledge often exists in a multimodal and heterogeneous form. Addressing the task of question answering with hybrid data types, including text, tables, and images, is a challenging task (MMHQA). Recently, with the rise of large language models (LLM), in-context learning (ICL) has become the most popular way to solve QA problems. We propose MMHQA-ICL framework for addressing this problems, which includes stronger heterogeneous data retriever and an image caption module. Most importantly, we propose a Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage their powerful performance in this task. We are the first to use end-to-end LLM prompting method for this task. Experimental results demonstrate that our framework outperforms all baselines and methods trained on the full dataset, achieving state-of-the-art results under the few-shot setting on the MultimodalQA dataset.",27d6d02e24de259e3aa38e556a81f89ec505816e,Semantic Scholar,,, -397,lmcanvas objectoriented interaction to personalize large language modelpowered writing environments,"['Tae Soo Kim', 'Arghya Sarkar', 'Yoonjoo Lee', 'Minsuk Chang', 'Juho Kim']",http://arxiv.org/pdf/2303.15125,2023-03-27,,"Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs -- requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with""blocks""in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. 
In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.",2cdff023cd4b185bb452f3c7399580db2d0fdfcd,Semantic Scholar,,, -398,flocks of stochastic parrots differentially private prompt learning for large language models,"['Haonan Duan', 'Adam Dziedzic', 'Nicolas Papernot', 'Franziska Boenisch']",http://arxiv.org/pdf/2305.15594,2023-05-24,,"Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.",2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a,Semantic Scholar,,, -399,retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering,"['Yike Wu', 'Nan Hu', 'Sheng Bi', 'G. Qi', 'J. Ren', 'Anhuan Xie', 'Wei Song']",https://arxiv.org/pdf/2309.11206,2023-09-20,,"Despite their competitive performance on knowledge-intensive tasks, large language models (LLMs) still have limitations in memorizing all world knowledge especially long tail knowledge. In this paper, we study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task that requires rich world knowledge. Existing work has shown that retrieving KG knowledge to enhance LLMs prompting can significantly improve LLMs performance in KGQA. However, their approaches lack a well-formed verbalization of KG knowledge, i.e., they ignore the gap between KG representations and textual representations. To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA. Based on this approach, we propose a KG-to-Text enhanced LLMs framework for solving the KGQA task. 
Experiments on several KGQA benchmarks show that the proposed KG-to-Text augmented LLMs approach outperforms previous KG-augmented LLMs approaches regarding answer accuracy and usefulness of knowledge statements.",30f0abb793772c15f2cdfec97c994685348177c1,Semantic Scholar,,, -400,knowledge crosswords geometric reasoning over structured knowledge with large language models,"['Wenxuan Ding', 'Shangbin Feng', 'Yuhan Liu', 'Zhaoxuan Tan', 'Vidhisha Balachandran', 'Tianxing He', 'Yulia Tsvetkov']",https://arxiv.org/pdf/2310.01290,2023-10-02,,"Large language models (LLMs) are widely adopted in knowledge-intensive tasks and have achieved impressive performance thanks to their knowledge abilities. While LLMs have demonstrated outstanding performance on atomic or linear (multi-hop) QA tasks, whether they can reason in knowledge-rich scenarios with interweaving constraints remains an underexplored problem. In this work, we propose geometric reasoning over structured knowledge, where pieces of knowledge are connected in a graph structure and models need to fill in the missing information. Such geometric knowledge reasoning would require the ability to handle structured knowledge, reason with uncertainty, verify facts, and backtrack when an error occurs. We propose Knowledge Crosswords, a multi-blank QA dataset where each problem consists of a natural language question representing the geometric constraints of an incomplete entity network, where LLMs are tasked with working out the missing entities while meeting all factual constraints. Knowledge Crosswords contains 2,101 individual problems, covering various knowledge domains and further divided into three difficulty levels. We conduct extensive experiments to evaluate existing LLM prompting approaches on the Knowledge Crosswords benchmark. We additionally propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' ability to backtrack and verify structured constraints. Our results demonstrate that while baseline approaches perform well on easier problems but struggle with hard ones, our proposed Verify-All outperforms other methods by a large margin and is more robust with hard problems. Further analysis reveals that LLMs' ability of geometric reasoning over structured knowledge is still far from robust or perfect, susceptible to confounders such as the order of options, certain structural patterns, assumption of existence of correct answer, and more.",33d944de189d6edf3a510ea195803a381c5a3bab,Semantic Scholar,,, -401,gear augmenting language models with generalizable and efficient tool resolution,"['Yining Lu', 'Haoping Yu', 'Daniel Khashabi']",https://arxiv.org/pdf/2307.08775,2023-07-17,,"Augmenting large language models (LLM) to use external tools enhances their performance across a variety of tasks. However, prior works over-rely on task-specific demonstration of tool use that limits their generalizability and computational cost due to making many calls to large-scale LLMs. We introduce GEAR, a computationally efficient query-tool grounding algorithm that is generalizable to various tasks that require tool use while not relying on task-specific demonstrations. GEAR achieves better efficiency by delegating tool grounding and execution to small language models (SLM) and LLM, respectively; while leveraging semantic and pattern-based evaluation at both question and answer levels for generalizable tool grounding. 
We evaluate GEAR on 14 datasets across 6 downstream tasks, demonstrating its strong generalizability to novel tasks, tools and different SLMs. Despite offering more efficiency, GEAR achieves higher precision in tool grounding compared to prior strategies using LLM prompting, thus improving downstream accuracy at a reduced computational cost. For example, we demonstrate that GEAR-augmented GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of better tool use.",3bd83ff979f3c0e9470f23c360a18333593dc5a1,Semantic Scholar,,, -402,retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference,"['Zachary Levonian', 'Chenglu Li', 'Wangda Zhu', 'Anoushka Gade', 'Owen Henkel', 'Millie-Ellen Postle', 'Wanli Xing']",https://arxiv.org/pdf/2310.03184,2023-10-04,,"For middle-school math students, interactive question-answering (QA) with tutors is an effective way to learn. The flexibility and emergent capabilities of generative large language models (LLMs) has led to a surge of interest in automating portions of the tutoring process - including interactive QA to support conceptual discussion of mathematical concepts. However, LLM responses to math questions can be incorrect or mismatched to the educational context - such as being misaligned with a school's curriculum. One potential solution is retrieval-augmented generation (RAG), which involves incorporating a vetted external knowledge source in the LLM prompt to increase response quality. In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey, finding that humans prefer responses generated using RAG, but not when responses are too grounded in the textbook content. We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.",3dc1b657bf821b731c5ed0396823b67c10d54ba1,Semantic Scholar,,, -403,iterative zeroshot llm prompting for knowledge graph construction,"['S. Carta', 'Alessandro Giuliani', 'L. piano', 'Alessandro Sebastian Podda', 'Livio Pompianu', 'Sandro Gabriele Tiddia']",http://arxiv.org/pdf/2307.01128,2023-07-03,,"In the current digitalization era, capturing and effectively representing knowledge is crucial in most real-world scenarios. In this context, knowledge graphs represent a potent tool for retrieving and organizing a vast amount of information in a properly interconnected and interpretable structure. However, their generation is still challenging and often requires considerable human effort and domain expertise, hampering the scalability and flexibility across different application fields. This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models, such as GPT-3.5, that can address all the main critical issues in knowledge graph building. The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies in the main stages of the generation process. Our unique manifold approach may encompass significant benefits to the scientific community. 
In particular, the main contribution can be summarized by: (i) an innovative strategy for iteratively prompting large language models to extract relevant components of the final graph; (ii) a zero-shot strategy for each prompt, meaning that there is no need for providing examples for""guiding""the prompt result; (iii) a scalable solution, as the adoption of LLMs avoids the need for any external resources or human expertise. To assess the effectiveness of our proposed model, we performed experiments on a dataset that covered a specific domain. We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.",50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb,Semantic Scholar,,, -404,rosgpt_vision commanding robots using only language models' prompts,"['Bilel Benjdira', 'A. Koubâa', 'Anas M. Ali']",https://arxiv.org/pdf/2308.11236,2023-08-22,,"In this paper, we argue that the next generation of robots can be commanded using only Language Models' prompts. Every prompt interrogates separately a specific Robotic Modality via its Modality Language Model (MLM). A central Task Modality mediates the whole communication to execute the robotic mission via a Large Language Model (LLM). This paper gives this new robotic design pattern the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies this PRM design pattern in building a new robotic framework named ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural language, the visual semantic features related to the task under consideration (Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic reaction to the visual description (Task Modality). The framework automates all the mechanisms behind these two prompts. The framework enables the robot to address complex real-world scenarios by processing visual data, making informed decisions, and carrying out actions automatically. The framework comprises one generic vision module and two independent ROS nodes. As a test application, we used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction on the roads and makes real-time vocal notifications to the driver. We showed how ROSGPT_Vision significantly reduced the development cost compared to traditional methods. We demonstrated how to improve the quality of the application by optimizing the prompting strategies, without delving into technical details. ROSGPT_Vision is shared with the community (link: https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this direction and to build more robotic frameworks that implement the PRM design pattern and enables controlling robots using only prompts.",53e8d327e7ceda6f4efd321752da57edbaee6257,Semantic Scholar,,, -405,teler a general taxonomy of llm prompts for benchmarking complex tasks,"['Shubhra (Santu) Karmaker', 'Dongji Feng']",http://arxiv.org/pdf/2305.11430,2023-05-19,,"While LLMs have shown great success in understanding and generating text in traditional conversational settings, their potential for performing ill-defined complex tasks is largely under-studied. Indeed, we are yet to conduct comprehensive benchmarking studies with multiple LLMs that are exclusively focused on a complex task. 
However, conducting such benchmarking studies is challenging because of the large variations in LLMs' performance when different prompt types/styles are used and different degrees of detail are provided in the prompts. To address this issue, the paper proposes a general taxonomy that can be used to design prompts with specific properties in order to perform a wide range of complex tasks. This taxonomy will allow future benchmarking studies to report the specific categories of prompts used as part of the study, enabling meaningful comparisons across different studies. Also, by establishing a common standard through this taxonomy, researchers will be able to draw more accurate conclusions about LLMs' performance on a specific complex task.",5645502d73c6907f1671923638773152e55bfb00,Semantic Scholar,,, -406,spring gpt4 outperforms rl algorithms by studying papers and reasoning,"['Yue Wu', 'So Yeon Min', 'Shrimai Prabhumoye', 'Yonatan Bisk', 'R. Salakhutdinov', 'A. Azaria', 'Tom M. Mitchell', 'Yuan-Fang Li']",http://arxiv.org/pdf/2305.15486,2023-05-24,,"Open-world survival games pose significant challenges for AI algorithms due to their multi-tasking, deep exploration, and goal prioritization requirements. Despite reinforcement learning (RL) being popular for solving games, its high sample complexity limits its effectiveness in complex open-world games like Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM). Prompted with the LaTeX source as game context and a description of the agent's current observation, our SPRING framework employs a directed acyclic graph (DAG) with game-related questions as nodes and dependencies as edges. We identify the optimal action to take in the environment by traversing the DAG and calculating LLM responses for each node in topological order, with the LLM's answer to final node directly translating to environment actions. In our experiments, we study the quality of in-context""reasoning""induced by different forms of prompts under the setting of the Crafter open-world environment. Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training. Finally, we show the potential of games as a test bed for LLMs.",864cb3a725ae829cbfb675761cd2313897b1b7a8,Semantic Scholar,,, -407,adaplanner adaptive planning from feedback with language models,"['Haotian Sun', 'Yuchen Zhuang', 'Lingkai Kong', 'Bo Dai', 'Chao Zhang']",http://arxiv.org/pdf/2305.16653,2023-05-26,,"Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates with problem complexity and plan horizons increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. 
To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively.",8e37dc1215681aa153a51c07078ba8befd6a6e01,Semantic Scholar,,, -408,lpml llmprompting markup language for mathematical reasoning,"['Ryutaro Yamauchi', 'Sho Sonoda', 'Akiyoshi Sannai', 'Wataru Kumagai']",https://arxiv.org/pdf/2309.13078,2023-09-21,,"In utilizing large language models (LLMs) for mathematical reasoning, addressing the errors in the reasoning and calculation present in the generated text by LLMs is a crucial challenge. In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL). We discovered that by prompting LLMs to generate structured text in XML-like markup language, we could seamlessly integrate CoT and the external tool and control the undesired behaviors of LLMs. With our approach, LLMs can utilize Python computation to rectify errors within CoT. We applied our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and demonstrated that combining CoT and Python REPL through the markup language enhances the reasoning capability of LLMs. Our approach enables LLMs to write the markup language and perform advanced mathematical reasoning using only zero-shot prompting.",b099104d1a065cbc1432af22e6085b1a44dbc839,Semantic Scholar,,, -409,simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation,"['J. Mendoncca', 'Patrícia Pereira', 'Joao Paulo Carvalho', 'A. Lavie', 'I. Trancoso']",https://arxiv.org/pdf/2308.16797,2023-08-31,,"Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. At the same time, ensuring metrics are invariant to semantically similar responses is also an overlooked topic. In order to achieve the desired properties of robustness and multilinguality for dialogue evaluation metrics, we propose a novel framework that takes advantage of the strengths of current evaluation models with the newly-established paradigm of prompting Large Language Models (LLMs). Empirical results show our framework achieves state of the art results in terms of mean Spearman correlation scores across several benchmarks and ranks first place on both the Robust and Multilingual tasks of the DSTC11 Track 4 “Automatic Evaluation Metrics for Open-Domain Dialogue Systems”, proving the evaluation capabilities of prompted LLMs.",bcefc74b20649fd41ea05d87a3fa512d2559fc8d,Semantic Scholar,,, -410,alpacafarm a simulation framework for methods that learn from human feedback,"['Yann Dubois', 'Xuechen Li', 'Rohan Taori', 'Tianyi Zhang', 'Ishaan Gulrajani', 'Jimmy Ba', 'Carlos Guestrin', 'Percy Liang', 'Tatsunori Hashimoto']",https://arxiv.org/pdf/2305.14387,2023-05-22,,"Large language models (LLMs) such as ChatGPT have seen widespread adoption due to their ability to follow user instructions well. Developing these LLMs involves a complex yet poorly understood workflow requiring training with human feedback. 
Replicating and understanding this instruction-following process faces three major challenges: the high cost of data collection, the lack of trustworthy evaluation, and the absence of reference method implementations. We address these challenges with AlpacaFarm, a simulator that enables research and development for learning from feedback at a low cost. First, we design LLM prompts to simulate human feedback that are 45x cheaper than crowdworkers and display high agreement with humans. Second, we propose an automatic evaluation and validate it against human instructions obtained on real-world interactions. Third, we contribute reference implementations for several methods (PPO, best-of-n, expert iteration, and more) that learn from pairwise feedback. Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate eleven models on 10k pairs of real human feedback and show that rankings of models trained in AlpacaFarm match rankings of models trained on human data. As a demonstration of the research possible in AlpacaFarm, we find that methods that use a reward model can substantially improve over supervised fine-tuning and that our reference PPO implementation leads to a +10% improvement in win-rate against Davinci003. We release all components of AlpacaFarm at https://github.com/tatsu-lab/alpaca_farm.",cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa,Semantic Scholar,,, -411,heap hierarchical policies for web actions using llms,"['Paloma Sodhi', 'S. Branavan', 'Ryan McDonald']",https://arxiv.org/pdf/2310.03720,2023-10-05,,"Large language models (LLMs) have demonstrated remarkable capabilities in performing a range of instruction following tasks in few and zero-shot settings. However, teaching LLMs to perform tasks on the web presents fundamental challenges -- combinatorially large open-world tasks and variations across web interfaces. We tackle these challenges by leveraging LLMs to decompose web tasks into a collection of sub-tasks, each of which can be solved by a low-level, closed-loop policy. These policies constitute a shared grammar across tasks, i.e., new web tasks can be expressed as a composition of these policies. We propose a novel framework, Hierarchical Policies for Web Actions using LLMs (HeaP), that learns a set of hierarchical LLM prompts from demonstrations for planning high-level tasks and executing them via a sequence of low-level policies. We evaluate HeaP against a range of baselines on a suite of web tasks, including MiniWoB++, WebArena, a mock airline CRM, as well as live website interactions, and show that it is able to outperform prior works using orders of magnitude less data.",da0a170656a336f82fa8cf00289d1cc944d9b630,Semantic Scholar,,, -412,check your facts and try again improving large language models with external knowledge and automated feedback,"['Baolin Peng', 'Michel Galley', 'Pengcheng He', 'Hao Cheng', 'Yujia Xie', 'Yu Hu', 'Qiuyuan Huang', 'Lars Lidén', 'Zhou Yu', 'Weizhu Chen', 'Jianfeng Gao']",http://arxiv.org/pdf/2302.12813,2023-02-24,,"Large language models (LLMs), such as ChatGPT, are able to generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging mainly due to their tendency to generate hallucinations and their inability to use external knowledge. This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules. 
Our system makes the LLM generate responses grounded in external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of a LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of scenarios, task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.",e5c72b92c48d68594b290c84a8904da7c8335554,Semantic Scholar,,, -413,promptagator fewshot dense retrieval from 8 examples,"['Zhuyun Dai', 'Vincent Zhao', 'Ji Ma', 'Yi Luan', 'Jianmo Ni', 'Jing Lu', 'A. Bakalov', 'Kelvin Guu', 'Keith B. Hall', 'Ming-Wei Chang']",http://arxiv.org/pdf/2209.11755,2022-09-23,,"Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-base Query Generation for Retriever (Promptagator), which leverages large language models (LLM) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples {without} using Natural Questions or MS MARCO to train %question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.",e86009d9f9b1cdf083a48d087552bc4153784451,Semantic Scholar,,, -414,sgptod building task bots effortlessly via schemaguided llm prompting,"['Xiaoying Zhang', 'Baolin Peng', 'Kun Li', 'Jingyan Zhou', 'Helen M. Meng']",http://arxiv.org/pdf/2305.09067,2023-05-15,,"Building end-to-end task bots and maintaining their integration with new functionalities using minimal human efforts is a long-standing challenge in dialog research. Recently large language models (LLMs) have demonstrated exceptional proficiency in conversational engagement and adherence to instructions across various downstream tasks. In this work, we introduce SGP-TOD, Schema-Guided Prompting for building Task-Oriented Dialog systems effortlessly based on LLMs. Utilizing the symbolic knowledge -- task schema, we instruct fixed LLMs to generate appropriate responses on novel tasks, circumventing the need for training data. 
Specifically, SGP-TOD comprises three components: a LLM for engaging with users, a DST Prompter to aid the LLM with dialog state tracking, which is then used to retrieve database items, and a Policy Prompter to elicit proper responses adhering to the provided dialog policy. Experimental results on Multiwoz, RADDLE and STAR datasets show that our training-free strategy SGP-TOD, without any task-specific data, yields state-of-the-art (SOTA) zero-shot performance, greatly surpasses the few-shot approaches. In a domain-extension setting, SGP-TOD aptly adapts to new functionalities by merely adding supplementary schema rules. We make our code and data publicly available.",ec56f49bef8925dc8931cc261ab3aca4dd36ad2d,Semantic Scholar,,, -415,prefer prompt ensemble learning via feedbackreflectrefine,"['Chenrui Zhang', 'Lina Liu', 'Jinpeng Wang', 'Chuyuan Wang', 'Xiaodi Sun', 'Hongyu Wang', 'Mingchen Cai']",https://arxiv.org/pdf/2308.12033,2023-08-23,,"As an effective tool for eliciting the power of Large Language Models (LLMs), prompting has recently demonstrated unprecedented abilities across a variety of complex tasks. To further improve the performance, prompt ensemble has attracted substantial interest for tackling the hallucination and instability of LLMs. However, existing methods usually adopt a two-stage paradigm, which requires a pre-prepared set of prompts with substantial manual effort, and is unable to perform directed optimization for different weak learners. In this paper, we propose a simple, universal, and automatic method named PREFER (Pompt Ensemble learning via Feedback-Reflect-Refine) to address the stated limitations. Specifically, given the fact that weak learners are supposed to focus on hard examples during boosting, PREFER builds a feedback mechanism for reflecting on the inadequacies of existing weak learners. Based on this, the LLM is required to automatically synthesize new prompts for iterative refinement. Moreover, to enhance stability of the prompt effect evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and is beneficial for both feedback and weight calculation in boosting. Extensive experiments demonstrate that our PREFER achieves state-of-the-art performance in multiple types of tasks by a significant margin. We have made our code publicly available.",f53a4f34757d1f237446b4d887d5323f2a17ed02,Semantic Scholar,,, -416,empowering private tutoring by chaining large language models,"['Yulin Chen', 'Ning Ding', 'Hai-Tao Zheng', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou']",https://arxiv.org/pdf/2309.08112,2023-09-15,,"Artificial intelligence has been applied in various aspects of online education to facilitate teaching and learning. However, few approaches has been made toward a complete AI-powered tutoring system. In this work, we explore the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs), covering automatic course planning and adjusting, tailored instruction, and flexible quiz evaluation. To make the system robust to prolonged interaction and cater to individualized education, the system is decomposed into three inter-connected core processes-interaction, reflection, and reaction. Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules. Tools are LLMs prompted to execute one specific task at a time, while memories are data storage that gets updated during education process. 
Statistical results from learning logs demonstrate the effectiveness and mechanism of each tool usage. Subjective feedback from human users reveal the usability of each function, and comparison with ablation systems further testify the benefits of the designed processes in long-term interaction.",f7842099bbde74dc5aec70bb6af85b88de08ed13,Semantic Scholar,,, -417,promptchainer chaining large language model prompts through visual programming,"['Tongshuang Sherry Wu', 'Ellen Jiang', 'Aaron Donsbach', 'J. Gray', 'A. Molina', 'Michael Terry', 'Carrie J. Cai']",https://arxiv.org/pdf/2203.06566,2022-03-13,,"While LLMs have made it possible to rapidly prototype new ML functionalities, many real-world applications involve complex tasks that cannot be easily handled via a single run of an LLM. Recent work has found that chaining multiple LLM runs together (with the output of one step being the input to the next) can help users accomplish these more complex tasks, and in a way that is perceived to be more transparent and controllable. However, it remains unknown what users need when authoring their own LLM chains – a key step to lowering the barriers for non-AI-experts to prototype AI-infused applications. In this work, we explore the LLM chain authoring process. We find from pilot studies that users need support transforming data between steps of a chain, as well as debugging the chain at multiple granularities. To address these needs, we designed PromptChainer, an interactive interface for visually programming chains. Through case studies with four designers and developers, we show that PromptChainer supports building prototypes for a range of applications, and conclude with open questions on scaling chains to even more complex tasks, as well as supporting low-fi chain prototyping.",0f733817e82026f7c29909a51cb4df7d2685f0e7,Semantic Scholar,,, -418,prompter utilizing large language model prompting for a data efficient embodied instruction following,"['Y. Inoue', 'Hiroki Ohashi']",https://arxiv.org/pdf/2211.03267,2022-11-07,,"Embodied Instruction Following (EIF) studies how mobile manipulator robots should be controlled to accomplish long-horizon tasks specified by natural language instructions. While most research on EIF are conducted in simulators, the ultimate goal of the field is to deploy the agents in real life. As such, it is important to minimize the data cost required for training an agent, to help the transition from sim to real. However, many studies only focus on the performance and overlook the data cost -- modules that require separate training on extra data are often introduced without a consideration on deployability. In this work, we propose FILM++ which extends the existing work FILM with modifications that do not require extra data. While all data-driven modules are kept constant, FILM++ more than doubles FILM's performance. Furthermore, we propose Prompter, which replaces FILM++'s semantic search module with language model prompting. Unlike FILM++'s implementation that requires training on extra sets of data, no training is needed for our prompting based implementation while achieving better or at least comparable performance. 
Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with high-level instructions only and with step-by-step instructions, respectively, outperforming the previous state of the art by 6.57% and 10.31%.",2d30d800e946d3699d9c41bb95c36a6db63676e7,Semantic Scholar,,, -419,evallm interactive evaluation of large language model prompts on userdefined criteria,"['Tae Soo Kim', 'Yoonjoo Lee', 'Jamin Shin', 'Young-Ho Kim', 'Juho Kim']",https://arxiv.org/pdf/2309.13633,2023-09-24,,"By simply composing prompts, developers can prototype novel generative applications with Large Language Models (LLMs). To refine prototypes into products, however, developers must iteratively revise prompts by evaluating outputs to diagnose weaknesses. Formative interviews (N=8) revealed that developers invest significant effort in manually evaluating outputs as they assess context-specific and subjective criteria. We present EvalLM, an interactive system for iteratively refining prompts by evaluating multiple outputs on user-defined criteria. By describing criteria in natural language, users can employ the system's LLM-based evaluator to get an overview of where prompts excel or fail, and improve these based on the evaluator's feedback. A comparative study (N=12) showed that EvalLM, when compared to manual evaluation, helped participants compose more diverse criteria, examine twice as many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond prompts, our work can be extended to augment model evaluation and alignment in specific application contexts.",a0d83f9e15e722f23c14eb83cb2f87c1d1ea6400,Semantic Scholar,,, -420,flatnessaware prompt selection improves accuracy and sample efficiency,"['Lingfeng Shen', 'Weiting Tan', 'Boyuan Zheng', 'Daniel Khashabi']",http://arxiv.org/pdf/2305.10713,2023-05-18,,"With growing capabilities of large language models, prompting them has become the dominant way to access them. This has motivated the development of strategies for automatically selecting effective language prompts. In this paper, we introduce prompt flatness, a new metric to quantify the expected utility of a language prompt. This metric is inspired by flatness regularization in statistical learning that quantifies the robustness of the model towards its parameter perturbations. We provide theoretical foundations for this metric and its relationship with other prompt selection metrics, providing a comprehensive understanding of existing methods. Empirically, we show that combining prompt flatness with existing metrics improves both performance and sample efficiency. Our metric outperforms the previous prompt selection metrics with an average increase of 5% in accuracy and 10% in Pearson correlation across 6 classification benchmarks.",b8ba16a107621f760e7830ddaab8c3d5c5ff06b0,Semantic Scholar,,, -421,ai chains transparent and controllable humanai interaction by chaining large language model prompts,"['Tongshuang Sherry Wu', 'Michael Terry', 'Carrie J. Cai']",https://dl.acm.org/doi/pdf/10.1145/3491102.3517582,2021-10-04,,"Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. 
We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by “unit-testing” sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.",d3640eb3b542eaf36fee2261f037a6bf0d8eac9c,Semantic Scholar,,, -422,terminologyaware translation with constrained decoding and large language model prompting,"['Nikolay Bogoychev', 'Pinzhen Chen']",https://arxiv.org/pdf/2310.05824,2023-10-09,,"Terminology correctness is important in the downstream application of machine translation, and a prevalent way to ensure this is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach which can be domain-independent and requires minimal manual efforts. We annotate random source words with pseudo-terminology translations obtained from word alignment to first train a terminology-aware model. Further, we explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and the large language model refinement process can further improve terminology recall.",e90d30148ecf633db3bbabdcfa3a0ec06236e0d1,Semantic Scholar,,, -423,a prefrontal cortexinspired architecture for planning in large language models,"['Taylor Webb', 'S. S. Mondal', 'Chi Wang', 'Brian Krabach', 'Ida Momennejad']",https://arxiv.org/pdf/2310.00194,2023-09-30,,"Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC). These modules perform functions such as conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are sometimes capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a black box architecture with multiple LLM-based (GPT-4) modules. The architecture improves planning through the interaction of specialized PFC-inspired modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate the combined architecture on two challenging planning tasks -- graph traversal and Tower of Hanoi -- finding that it yields significant improvements over standard LLM methods (e.g., zero-shot prompting or in-context learning). 
These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs.",31d8bdef7b81e107bf04f226d877fd5aa2f51d34,Semantic Scholar,,, -424,large language models are stateoftheart evaluators of translation quality,"['Tom Kocmi', 'C. Federmann']",http://arxiv.org/pdf/2302.14520,2023-02-28,,"We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the availability of the reference. We investigate seven versions of GPT models, including ChatGPT. We show that our method for translation quality assessment only works with GPT 3.5 and larger models. Comparing to results from WMT22’s Metrics shared task, our method achieves state-of-the-art accuracy in both modes when compared to MQM-based human labels. Our results are valid on the system level for all three WMT22 Metrics shared task language pairs, namely English into German, English into Russian, and Chinese into English. This provides a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations. We publicly release all our code and prompt templates used for the experiments described in this work, as well as all corresponding scoring results, to allow for external validation and reproducibility.",4161ad2d2495d8af1d62dc5e71882bde642cd1c1,Semantic Scholar,,, -425,controlling personality style in dialogue with zeroshot promptbased learning,"['Angela Ramirez', 'Mamon Alsalihy', 'Kartik Aggarwal', 'Cecilia Li', 'Liren Wu', 'M. Walker']",http://arxiv.org/pdf/2302.03848,2023-02-08,,"Prompt-based or in-context learning has achieved high zero-shot performance on many natural language generation (NLG) tasks. Here we explore the performance of prompt-based learning for simultaneously controlling the personality and the semantic accuracy of an NLG for task-oriented dialogue. We experiment with prompt-based learning on the PERSONAGE restaurant recommendation corpus to generate semantically and stylistically-controlled text for 5 different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert. We test two different classes of discrete prompts to generate utterances for a particular personality style: (1) prompts that demonstrate generating directly from a meaning representation that includes a personality specification; and (2) prompts that rely on first converting the meaning representation to a textual pseudo-reference, and then using the pseudo-reference in a textual style transfer (TST) prompt. In each case, we show that we can vastly improve performance by over-generating outputs and ranking them, testing several ranking functions based on automatic metrics for semantic accuracy, personality-match, and fluency. We also test whether NLG personality demonstrations from the restaurant domain can be used with meaning representations for the video game domain to generate personality stylized utterances about video games. Our findings show that the TST prompts produces the highest semantic accuracy (78.46% for restaurants and 87.6% for video games) and personality accuracy (100% for restaurants and 97% for video games). Our results on transferring personality style to video game utterances are surprisingly good. 
To our knowledge, there is no previous work testing the application of prompt-based learning to simultaneously controlling both style and semantic accuracy in NLG.",9c39e942b87cbada41a4a52364f996915c7c2d98,Semantic Scholar,,, -426,steps a benchmark for order reasoning in sequential tasks,"['Weizhi Wang', 'Hong Wang', 'Xi Yan']",http://arxiv.org/pdf/2306.04441,2023-06-07,,"Various human activities can be abstracted into a sequence of actions in natural text, i.e. cooking, repairing, manufacturing, etc. Such action sequences heavily depend on the executing order, while disorder in action sequences leads to failure of further task execution by robots or AI agents. Therefore, to verify the order reasoning capability of current neural models in sequential tasks, we propose a challenging benchmark , named STEPS. STEPS involves two subtask settings, focusing on determining the rationality of given next step in recipes and selecting the reasonable step from the multi-choice question, respectively. We describe the data construction and task formulations, and benchmark most of significant Large Language Models (LLMs). The experimental results demonstrate 1) The commonsense reasoning of action orders in sequential tasks are challenging to resolve via zero-shot prompting or few-shot in-context learning for LLMs; 2) Prompting method still significantly lags behind tuning-based method on STEPS.",a8a71f9b10b281e796fdc2ee7aaec40067739574,Semantic Scholar,,, -427,prompting large language model for machine translation a case study,"['Biao Zhang', 'B. Haddow', 'Alexandra Birch']",http://arxiv.org/pdf/2301.07069,2023-01-17,,"Research on prompting has shown excellent performance with little or even no supervised training across many tasks. However, prompting for machine translation is still under-explored in the literature. We fill this gap by offering a systematic study on prompting strategies for translation, examining various factors for prompt template and demonstration example selection. We further explore the use of monolingual data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning in prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the testbed show that 1) the number and the quality of prompt examples matter, where using suboptimal examples degenerates translation; 2) several features of prompt examples, such as semantic similarity, show significant Spearman correlation with their prompting performance; yet, none of the correlations are strong enough; 3) using pseudo parallel prompt examples constructed from monolingual data via zero-shot prompting could improve translation; and 4) improved performance is achievable by transferring knowledge from prompt examples selected in other settings. We finally provide an analysis on the model outputs and discuss several problems that prompting still suffers from.",c879413103f8950bdd414c7f60a39bd7748c9be8,Semantic Scholar,,, -428,a practical survey on zeroshot prompt design for incontext learning,['Yinheng Li'],https://arxiv.org/pdf/2309.13205,2023-09-22,,"The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing(NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. 
We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single “best” prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.",cd7d770eabb4dab6894d9f91d2c3bc337e94a4e1,Semantic Scholar,,, -429,developing a scalable benchmark for assessing large language models in knowledge graph engineering,"['Lars Meyer', 'Johannes Frey', 'K. Junghanns', 'Felix Brei', 'Kirill Bulert', 'Sabine Grunder-Fahrer', 'Michael Martin']",https://arxiv.org/pdf/2308.16622,2023-08-31,,"As the field of Large Language Models (LLMs) evolves at an accelerated pace, the critical need to assess and monitor their performance emerges. We introduce a benchmarking framework focused on knowledge graph engineering (KGE) accompanied by three challenges addressing syntax and error correction, facts extraction and dataset generation. We show that while being a useful tool, LLMs are yet unfit to assist in knowledge graph generation with zero-shot prompting. Consequently, our LLM-KG-Bench framework provides automatic evaluation and storage of LLM responses as well as statistical data and visualization tools to support tracking of prompt engineering and model performance.",d0e3af5f20a451c04770929979d7a8406a1a2466,Semantic Scholar,,, -430,mitigating word bias in zeroshot promptbased classifiers,"['Adian Liusie', 'Potsawee Manakul', 'M. Gales']",https://arxiv.org/pdf/2309.04992,2023-09-10,,"Prompt-based classifiers are an attractive approach for zero-shot classification. However, the precise choice of the prompt template and label words can largely influence performance, with semantically equivalent settings often showing notable performance difference. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards classes. To address this problem, it is possible to optimise classification thresholds on a labelled data set, however, this mitigates some of the advantages of prompt-based classifiers. This paper instead approaches this problem by examining the expected marginal probabilities of the classes. Here, probabilities are reweighted to have a uniform prior over classes, in an unsupervised fashion. Further, we draw a theoretical connection between the class priors and the language models' word prior, and offer the ability to set a threshold in a zero-resource fashion. We show that matching class priors correlates strongly with the oracle upper bound performance and demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.",e7d21ad4da122bf1db19e4fda57bf94c1dfa24a4,Semantic Scholar,,, -431,how far are large language models from agents with theoryofmind,"['Pei Zhou', 'Aman Madaan', 'Srividya Pranavi Potharaju', 'Aditya Gupta', 'Kevin R. McKee', 'Ari Holtzman', 'J. 
Pujara', 'Xiang Ren', 'Swaroop Mishra', 'Aida Nematzadeh', 'Shyam Upadhyay', 'Manaal Faruqui']",https://arxiv.org/pdf/2310.03051,2023-10-04,,"""Thinking is for Doing.""Humans can infer other people's mental states from observations--an ability called Theory-of-Mind (ToM)--and subsequently act pragmatically on those inferences. Existing question answering benchmarks such as ToMi ask models questions to make inferences about beliefs of characters in a story, but do not test whether models can then use these inferences to guide their actions. We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios. Experiments on T4D demonstrate that LLMs such as GPT-4 and PaLM 2 seemingly excel at tracking characters' beliefs in stories, but they struggle to translate this capability into strategic action. Our analysis reveals the core challenge for LLMs lies in identifying the implicit inferences about mental states without being explicitly asked about as in ToMi, that lead to choosing the correct action in T4D. To bridge this gap, we introduce a zero-shot prompting framework, Foresee and Reflect (FaR), which provides a reasoning structure that encourages LLMs to anticipate future challenges and reason about potential actions. FaR boosts GPT-4's performance from 50% to 71% on T4D, outperforming other prompting methods such as Chain-of-Thought and Self-Ask. Moreover, FaR generalizes to diverse out-of-distribution story structures and scenarios that also require ToM inferences to choose an action, consistently outperforming other methods including few-shot in-context learning.",ed40889e11e812ef33578506844be06d713f6092,Semantic Scholar,,, -432,selficl zeroshot incontext learning with selfgenerated demonstrations,"['Wei-Lin Chen', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.15035,2023-05-24,,"Large language models (LLMs) have exhibited striking in-context learning (ICL) ability to adapt to target tasks with a few input-output demonstrations. For better ICL, different methods are proposed to select representative demonstrations from existing training corpora. However, such settings are not aligned with real-world practices, as end-users usually query LMs without access to demonstration pools. In this work, we introduce Self-ICL -- a simple framework which bootstraps LMs' intrinsic capabilities to perform zero-shot ICL. Given a test input, Self-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard tasks shows Self-ICL outperforms zero-shot baselines on both average accuracy and head-to-head comparison. Moreover, with zero-shot chain-of-thought, Self-ICL achieves results comparable to using real demonstrations. Additionally, we conduct a range of analyses to validate Self-ICL's effectiveness and provide insights for its behaviors under different settings.",fe425e341cf646689e42adead17f14eeac5d03e6,Semantic Scholar,,, -433,prodigy enabling incontext learning over graphs,"['Qian Huang', 'Hongyu Ren', 'Peng Chen', 'Gregor Krvzmanc', 'D. Zeng', 'Percy Liang', 'J. 
Leskovec']",http://arxiv.org/pdf/2305.12600,2023-05-21,,"In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse \textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel \emph{prompt graph} representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pretrained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18\% on average across all setups. Moreover, it also outperforms standard finetuning with limited data by 33\% on average with in-context learning.",0088c9f4d50706c7ab71efa13bcb4b42cf2058e2,Semantic Scholar,,, -434,outfox llmgenerated essay detection through incontext learning with adversarially generated examples,"['Ryuto Koike', 'Masahiro Kaneko', 'Naoaki Okazaki']",https://arxiv.org/pdf/2307.11729,2023-07-21,,"Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of misuse of LLMs and demands the development of detectors to identify LLM-generated texts. However, existing detectors lack robustness against attacks: they degrade detection accuracy by simply paraphrasing LLM-generated texts. Furthermore, a malicious user might attempt to deliberately evade the detectors based on detection results, but this has not been assumed in previous studies. In this paper, we propose OUTFOX, a framework that improves the robustness of LLM-generated-text detectors by allowing both the detector and the attacker to consider each other's output. In this framework, the attacker uses the detector's prediction labels as examples for in-context learning and adversarially generates essays that are harder to detect, while the detector uses the adversarially generated essays as examples for in-context learning to learn to detect essays from a strong attacker. Experiments in the domain of student essays show that the proposed detector improves the detection performance on the attacker-generated texts by up to +41.3 points in F1-score. Furthermore, the proposed detector shows a state-of-the-art detection performance: up to 96.9 points in F1-score, beating existing detectors on non-attacked texts. 
Finally, the proposed attacker drastically degrades the performance of detectors by up to -57.0 points F1-score, massively outperforming the baseline paraphrasing method for evading detection.",0095acc4f2c3255cf38fdf844003c97858adb418,Semantic Scholar,,, -435,naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers,"['Kai Shen', 'Zeqian Ju', 'Xu Tan', 'Yanqing Liu', 'Yichong Leng', 'Lei He', 'Tao Qin', 'Sheng Zhao', 'Jiang Bian']",http://arxiv.org/pdf/2304.09116,2023-04-18,,"Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issue, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to get the quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important to achieve diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://speechresearch.github.io/naturalspeech2.",00c367427d9135209d84008e6cb5e90f0adba881,Semantic Scholar,,, -436,demonstratesearchpredict composing retrieval and language models for knowledgeintensive nlp,"['O. Khattab', 'Keshav Santhanam', 'Xiang Lisa Li', 'David Leo Wright Hall', 'Percy Liang', 'Christopher Potts', 'M. Zaharia']",http://arxiv.org/pdf/2212.14024,2022-12-28,,"Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM). Existing work has combined these in simple""retrieve-then-read""pipelines in which the RM retrieves passages that are inserted into the LM prompt. To begin to fully realize the potential of frozen LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that relies on passing natural language texts in sophisticated pipelines between an LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware demonstrations, search for relevant passages, and generate grounded predictions, systematically breaking down problems into small transformations that the LM and RM can handle more reliably. We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings, establishing in early evaluations new state-of-the-art in-context learning results and delivering 37-120%, 8-39%, and 80-290% relative gains against the vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a contemporaneous self-ask pipeline, respectively. 
We release DSP at https://github.com/stanfordnlp/dsp",03532123ccffae8d411264320e8a5ae2b6eddea0,Semantic Scholar,,, -437,incontext analogical reasoning with pretrained language models,"['Xiaoyang Hu', 'Shane Storks', 'Richard L. Lewis', 'J. Chai']",http://arxiv.org/pdf/2305.17626,2023-05-28,,"Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven’s Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs’ analogical reasoning. Our detailed analysis reveals insights on the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.",0366177b44ed13d86b9d704a3a82ea3750e5abed,Semantic Scholar,,, -438,promptaugmented linear probing scaling beyond the limit of fewshot incontext learners,"['Hyunsoo Cho', 'Hyuhng Joon Kim', 'Junyeob Kim', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2212.10873,2022-12-21,,"Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, the ICL performance does not scale well with the number of available training sample as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of the pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations via tailoring input into a more conceivable form. Throughout in-depth investigations on various datasets, we verified that PALP significantly closes the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario.",06edda0310b4ec7c5012d012349252a3a77521b6,Semantic Scholar,,, -439,bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games,"['Ruoyao Wang', 'G. 
Todd', 'Xingdi Yuan', 'Ziang Xiao', 'Marc-Alexandre Côté', 'Peter Alexander Jansen']",http://arxiv.org/pdf/2305.14879,2023-05-24,,"In this work, we investigate the capacity of language models to generate explicit, interpretable, and interactive world models of scientific and common-sense reasoning tasks. We operationalize this as a task of generating text games, expressed as hundreds of lines of Python code. To facilitate this task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We empirically demonstrate that GPT-4 can use these games as templates for single-shot in-context learning, successfully producing runnable games on unseen topics in 28% of cases. When allowed to self-reflect on program errors, game runnability substantially increases to 57%. While evaluating simulation fidelity is labor-intensive, we introduce a suite of automated metrics to assess game fidelity, technical validity, adherence to task specifications, and winnability, showing a high degree of agreement with expert human ratings. We pose this as a challenge task to spur further development at the juncture of world modeling and code generation.",070b91f80ac118b910c1d2ab5be9f65f685979fe,Semantic Scholar,,, -440,exploring diverse incontext configurations for image captioning,"['Xu Yang', 'Yongliang Wu', 'Ming-Hsuan Yang', 'Haokun Chen', 'Xin Geng']",http://arxiv.org/pdf/2305.14800,2023-05-24,,"After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. Recently, researchers in Vision-Language (VL) domains also develop their few-shot learners, while they only use the simplest way, i.e., randomly sampling, to configure in-context image-text pairs. In order to explore the effects of varying configurations on VL in-context learning, we devised four strategies for image selection and four for caption assignment to configure in-context image-text pairs for image captioning. Here Image Captioning is used as the case study since it can be seen as the visually-conditioned LM. Our comprehensive experiments yield two counter-intuitive but valuable insights, highlighting the distinct characteristics of VL in-context learning due to multi-modal synergy, as compared to the NLP case.",0744783bbefc12b2b1383bed137e8a80061274b7,Semantic Scholar,,, -441,complementary explanations for effective incontext learning,"['Xi Ye', 'Srini Iyer', 'Asli Celikyilmaz', 'Ves Stoyanov', 'Greg Durrett', 'Ramakanth Pasunuru']",http://arxiv.org/pdf/2211.13892,2022-11-25,,"Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective. This work aims to better understand the mechanisms by which explanations are used for in-context learning. We first study the impact of two different factors on the performance of prompts with explanations: the computation trace (the way the solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations. We further study how to form maximally effective sets of explanations for solving a given test query. 
We find that LLMs can benefit from the complementarity of the explanation set: diverse reasoning skills shown by different exemplars can lead to better performance. Therefore, we propose a maximal marginal relevance-based exemplar selection approach for constructing exemplar sets that are both relevant as well as complementary, which successfully improves the in-context learning performance across three real-world tasks on multiple LLMs.",097dc73d5d422b3c09286e72d16b2561ae5fb395,Semantic Scholar,,, -442,neural machine translation models can learn to be fewshot learners,"['Raphael Reinauer', 'P. Simianer', 'Kaden Uhlig', 'Johannes E. M. Mosig', 'Joern Wuebker']",https://arxiv.org/pdf/2309.08590,2023-09-15,,"The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.",09a85806442373f167e45eaf662a7914df048b10,Semantic Scholar,,, -443,good examples make a faster learner simple demonstrationbased learning for lowresource ner,"['Dong-Ho Lee', 'Mahak Agarwal', 'Akshen Kadakia', 'Takashi Shibuya', 'J. Pujara', 'Xiang Ren']",https://aclanthology.org/2022.acl-long.192.pdf,2021-10-16,,"Recent advances in prompt-based learning have shown strong results on few-shot text classification by using cloze-style templates.Similar attempts have been made on named entity recognition (NER) which manually design templates to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning. We perform a systematic study on demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use. Results on in-domain learning and domain adaptation show that the model’s performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4-17% improvement on 25 train instances). 
We also find that good demonstration can save many labeled examples and consistency in demonstration contributes to better performance.",0a2ac054c533314c0659f3b139388527df0d42f3,Semantic Scholar,,, -444,prompting language models for linguistic structure,"['Terra Blevins', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2211.07830,2022-11-15,,"Although pretrained language models (PLMs) can be prompted to perform a wide range of language tasks, it remains an open question how much this ability comes from generalizable linguistic understanding versus surface-level lexical patterns. To test this, we present a structured prompting approach for linguistic structured prediction tasks, allowing us to perform zero- and few-shot sequence tagging with autoregressive PLMs. We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking, demonstrating strong few-shot performance in all cases. We also find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels. These findings indicate that the in-context learning ability and linguistic knowledge of PLMs generalizes beyond memorization of their training data.",0a67a5e3f4125445ed84f2db3c92429010aad68a,Semantic Scholar,,, -445,improving the reliability of large language models by leveraging uncertaintyaware incontext learning,"['Yuchen Yang', 'Houqiang Li', 'Yanfeng Wang', 'Yu Wang']",https://arxiv.org/pdf/2310.04782,2023-10-07,,"In recent years, large-scale language models (LLMs) have gained attention for their impressive text generation capabilities. However, these models often face the challenge of""hallucination,""which undermines their reliability. In this study, we introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty. Human-defined methods for estimating uncertainty typically assume that""uncertainty is lower when the model's response is correct compared to when it is incorrect.""However, setting a precise threshold to distinguish correctness is challenging. Therefore, we introduce uncertainty information as an intermediary variable that implicitly influences the model's behavior. Our innovative uncertainty-aware in-context learning framework involves fine-tuning the LLM using a calibration dataset. Our aim is to improve the model's responses by filtering out answers with high uncertainty while considering the model's knowledge limitations. We evaluate the model's knowledge by examining multiple responses to the same question for the presence of a correct answer. When the model lacks relevant knowledge, the response should indicate that the question cannot be answered. Conversely, when the model has relevant knowledge, the response should provide the correct answer. Extensive experiments confirm the effectiveness of our framework, leading to two key findings. First, the logit output values of the LLM partly reflect inherent uncertainty. Second, our model autonomously recognizes uncertainty, resulting in improved responses.",0aa5940fda7c994675d08c41eca2a6909eb6d205,Semantic Scholar,,, -446,how do incontext examples affect compositional generalization,"['Shengnan An', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Jian-Guang Lou', 'D. 
Zhang']",http://arxiv.org/pdf/2305.04835,2023-05-08,,"Compositional generalization–understanding unseen combinations of seen primitives–is an essential reasoning capability in human intelligence.The AI community mainly studies this capability by fine-tuning neural networks on lots of training samples, while it is still unclear whether and how in-context learning–the prevailing few-shot paradigm based on large language models–exhibits compositional generalization.In this paper, we present CoFe, a test suite to investigate in-context compositional generalization.We find that the compositional generalization performance can be easily affected by the selection of in-context examples, thus raising the research question what the key factors are to make good in-context examples for compositional generalization.We study three potential factors: similarity, diversity and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple.Furthermore, two strong limitations are observed: in-context compositional generalization on fictional words is much weaker than that on commonly used ones; it is still critical that the in-context examples should cover required linguistic structures, even though the backbone model has been pre-trained on large corpus.We hope our analysis would facilitate the understanding and utilization of in-context learning paradigm.",0ae12d63f77f40b430f17c791a5191ff5fee5086,Semantic Scholar,,, -447,chatrec towards interactive and explainable llmsaugmented recommender system,"['Yunfan Gao', 'Tao Sheng', 'Youlin Xiang', 'Yun Xiong', 'Haofen Wang', 'Jiawei Zhang']",http://arxiv.org/pdf/2303.14524,2023-03-25,,"Large language models (LLMs) have demonstrated their significant potential to be applied for addressing various application tasks. However, traditional recommender systems continue to face great challenges such as poor interactivity and explainability, which actually also hinder their broad deployment in real-world systems. To address these limitations, this paper proposes a novel paradigm called Chat-Rec (ChatGPT Augmented Recommender System) that innovatively augments LLMs for building conversational recommender systems by converting user profiles and historical interactions into prompts. Chat-Rec is demonstrated to be effective in learning user preferences and establishing connections between users and products through in-context learning, which also makes the recommendation process more interactive and explainable. What's more, within the Chat-Rec framework, user's preferences can transfer to different products for cross-domain recommendations, and prompt-based injection of information into LLMs can also handle the cold-start scenarios with new items. In our experiments, Chat-Rec effectively improve the results of top-k recommendations and performs better in zero-shot rating prediction task. Chat-Rec offers a novel approach to improving recommender systems and presents new practical scenarios for the implementation of AIGC (AI generated content) in recommender system studies.",0cfdd655100055f234fd23ebecd915504b8e00e3,Semantic Scholar,,, -448,maple multimodal prompt learning,"['Muhammad Uzair Khattak', 'H. Rasheed', 'Muhammad Maaz', 'Salman Khan', 'F. Khan']",https://arxiv.org/pdf/2210.03117,2022-10-06,,"Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. 
However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.",0d0dbfb1b315a43216020abaf74d289456198219,Semantic Scholar,,, -449,a theory of emergent incontext learning as implicit structure induction,"['Michael Hahn', 'Navin Goyal']",http://arxiv.org/pdf/2303.07971,2023-03-14,,"Scaling large language models (LLMs) leads to an emergent capacity to learn in-context from example demonstrations. Despite progress, theoretical understanding of this phenomenon remains limited. We argue that in-context learning relies on recombination of compositional operations found in natural language data. We derive an information-theoretic bound showing how in-context learning abilities arise from generic next-token prediction when the pretraining distribution has sufficient amounts of compositional structure, under linguistically motivated assumptions. A second bound provides a theoretical justification for the empirical success of prompting LLMs to output intermediate steps towards an answer. To validate theoretical predictions, we introduce a controlled setup for inducing in-context learning; unlike previous approaches, it accounts for the compositional nature of language. Trained transformers can perform in-context learning for a range of tasks, in a manner consistent with the theoretical results. Mirroring real-world LLMs in a miniature setup, in-context learning emerges when scaling parameters and data, and models perform better when prompted to output intermediate steps. Probing shows that in-context learning is supported by a representation of the input's compositional structure. 
Taken together, these results provide a step towards theoretical understanding of emergent behavior in large language models.",0ea7fc93d4947d9024ccaa202987a2070683bc1f,Semantic Scholar,,, -450,are humangenerated demonstrations necessary for incontext learning,"['Rui Li', 'Guoyin Wang', 'Jiwei Li']",https://arxiv.org/pdf/2309.14681,2023-09-26,,"Despite the promising few-shot ability of large language models (LLMs), the standard paradigm of In-context Learning (ICL) suffers the disadvantages of susceptibility to selected demonstrations and the intricacy to generate these demonstrations. In this paper, we raise the fundamental question that whether human-generated demonstrations are necessary for ICL. To answer this question, we propose self-contemplation prompting strategy (SEC), a paradigm free from human-crafted demonstrations. The key point of SEC is that, instead of using hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create demonstrations on their own, based on which the final output is generated. SEC is a flexible framework and can be adapted to both the vanilla ICL and the chain-of-thought (CoT), but with greater ease: as the manual-generation process of both examples and rationale can be saved. Extensive experiments in arithmetic reasoning, commonsense reasoning, multi-task language understanding, and code generation benchmarks, show that SEC, which does not require hand-crafted demonstrations, significantly outperforms the zero-shot learning strategy, and achieves comparable results to ICL with hand-crafted demonstrations. This demonstrates that, for many tasks, contemporary LLMs possess a sufficient level of competence to exclusively depend on their own capacity for decision making, removing the need for external training data. Code is available at https://github.com/ruili33/SEC.",0f45608ddc01b3e192f3490330f4c4b8de074f79,Semantic Scholar,,, -451,honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model,"['Jacob Eisenstein', 'D. Andor', 'Bernd Bohnet', 'Michael Collins', 'David M. Mimno']",http://arxiv.org/pdf/2210.02498,2022-10-05,,"Explainable question answering systems should produce not only accurate answers but also rationales that justify their reasoning and allow humans to check their work. But what sorts of rationales are useful and how can we train systems to produce them? We propose a new style of rationale for open-book question answering, called \emph{markup-and-mask}, which combines aspects of extractive and free-text explanations. In the markup phase, the passage is augmented with free-text markup that enables each sentence to stand on its own outside the discourse context. In the masking phase, a sub-span of the marked-up passage is selected. To train a system to produce markup-and-mask rationales without annotations, we leverage in-context learning. Specifically, we generate silver annotated data by sending a series of prompts to a frozen pretrained language model, which acts as a teacher. We then fine-tune a smaller student model by training on the subset of rationales that led to correct answers. The student is""honest""in the sense that it is a pipeline: the rationale acts as a bottleneck between the passage and the answer, while the""untrusted""teacher operates under no such constraints. 
Thus, we offer a new way to build trustworthy pipeline systems from a combination of end-task annotations and frozen pretrained language models.",0f4ab3fe492ececbfd38be9682047371e2e9b8c6,Semantic Scholar,,, -452,collaborating with language models for embodied reasoning,"['Ishita Dasgupta', 'Christine Kaeser-Chen', 'Kenneth Marino', 'Arun Ahuja', 'Sheila Babayan', 'Felix Hill', 'R. Fergus']",http://arxiv.org/pdf/2302.00763,2023-02-01,,"Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to to adapt to new tasks through in-context learning. However, LSLMs do not inherently have the ability to interrogate or intervene on the environment. In this work, we investigate how to combine these complementary abilities in a single system consisting of three parts: a Planner, an Actor, and a Reporter. The Planner is a pre-trained language model that can issue commands to a simple embodied agent (the Actor), while the Reporter communicates with the Planner to inform its next command. We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot and investigate failure cases, and demonstrate how components of this system can be trained with reinforcement-learning to improve performance.",102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f,Semantic Scholar,,, -453,knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering,"['Keheng Wang', 'Feiyu Duan', 'Sirui Wang', 'Peiguang Li', 'Yunsen Xian', 'Chuantao Yin', 'Wenge Rong', 'Zhang Xiong']",https://arxiv.org/pdf/2308.13259,2023-08-25,,"Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have shown impressive reasoning ability in various downstream tasks. Even so, suffering from hallucinations and the inability to access external knowledge, LLMs often come with incorrect or unfaithful intermediate reasoning steps, especially in the context of answering knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) to verify and modify reasoning traces in CoT via interaction with external knowledge, and thus overcome the hallucinations and error propagation. Concretely, we formulate the CoT rationale process of LLMs into a structured multi-round QA format. In each round, LLMs interact with a QA system that retrieves external knowledge and produce faithful reasoning traces based on retrieved precise answers. The structured CoT reasoning of LLMs is facilitated by our developed KBQA CoT collection, which serves as in-context learning demonstrations and can also be utilized as feedback augmentation to train a robust retriever. Extensive experiments on WebQSP and ComplexWebQuestion datasets demonstrate the effectiveness of proposed KD-CoT in task-solving reasoning generation, which outperforms the vanilla CoT ICL with an absolute success rate of 8.0% and 5.1%. Furthermore, our proposed feedback-augmented retriever outperforms the state-of-the-art baselines for retrieving knowledge, achieving significant improvement in Hit and recall performance. 
Our code and data are released on https://github.com/AdelWang/KD-CoT/tree/main.",10955e63aa49fab146267949f8ebc9ebe8275183,Semantic Scholar,,, -454,taken out of context on measuring situational awareness in llms,"['Lukas Berglund', 'Asa Cooper Stickland', 'Mikita Balesni', 'Max Kaufmann', 'Meg Tong', 'Tomasz Korbak', 'Daniel Kokotajlo', 'Owain Evans']",https://arxiv.org/pdf/2309.00667,2023-09-01,,"We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.",135ae2ea7a2c966815e85a232469a0a14b4d8d67,Semantic Scholar,,, -455,larger language models do incontext learning differently,"['Jerry W. Wei', 'Jason Wei', 'Yi Tay', 'Dustin Tran', 'Albert Webson', 'Yifeng Lu', 'Xinyun Chen', 'Hanxiao Liu', 'Da Huang', 'Denny Zhou', 'Tengyu Ma']",http://arxiv.org/pdf/2303.03846,2023-03-07,,"We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups-ICL with flipped labels and ICL with semantically-unrelated labels-across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. 
Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.",154493f69d7db3d49da0e51df0192c6ad5f1724a,Semantic Scholar,,, -456,incontext learning user simulators for taskoriented dialog systems,"['Silvia Terragni', 'Modestas Filipavicius', 'Nghia Khau', 'Bruna Guedes', ""Andr'e Manso"", 'Roland Mathis']",http://arxiv.org/pdf/2306.00774,2023-06-01,,"This paper presents a novel application of large language models in user simulation for task-oriented dialog systems, specifically focusing on an in-context learning approach. By harnessing the power of these models, the proposed approach generates diverse utterances based on user goals and limited dialog examples. Unlike traditional simulators, this method eliminates the need for labor-intensive rule definition or extensive annotated data, making it more efficient and accessible. Additionally, an error analysis of the interaction between the user simulator and dialog system uncovers common mistakes, providing valuable insights into areas that require improvement. Our implementation is available at https://github.com/telepathylabsai/prompt-based-user-simulator.",15fcd80193d1c446bc3d37fcc30f5475b9ebd5b0,Semantic Scholar,,, -457,cognitive reframing of negative thoughts through humanlanguage model interaction,"['Ashish Sharma', 'Kevin Rushton', 'Inna Wanyin Lin', 'David Wadden', 'Khendra G. Lucas', 'Adam S. Miner', 'Theresa Nguyen', 'Tim Althoff']",http://arxiv.org/pdf/2305.02466,2023-05-04,,"A proven therapeutic technique to overcome negative thoughts is to replace them with a more hopeful “reframed thought.” Although therapy can help people practice and learn this Cognitive Reframing of Negative Thoughts, clinician shortages and mental health stigma commonly limit people’s access to therapy. In this paper, we conduct a human-centered study of how language models may assist people in reframing negative thoughts. Based on psychology literature, we define a framework of seven linguistic attributes that can be used to reframe a thought. We develop automated metrics to measure these attributes and validate them with expert judgements from mental health practitioners. We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model that effectively generates reframed thoughts and controls their linguistic attributes. To investigate what constitutes a “high-quality” reframe, we conduct an IRB-approved randomized field study on a large mental health website with over 2,000 participants. Amongst other findings, we show that people prefer highly empathic or specific reframes, as opposed to reframes that are overly positive. Our findings provide key implications for the use of LMs to assist people in overcoming negative thoughts.",16aacf48048ac128a07fe2c0761439e1d7211492,Semantic Scholar,,, -458,dricl demonstrationretrieved incontext learning,"['Man Luo', 'Xin Xu', 'Zhuyun Dai', 'Panupong Pasupat', 'Mehran Kazemi', 'Chitta Baral', 'Vaiva Imbrasaite', 'Vincent Zhao']",http://arxiv.org/pdf/2305.14128,2023-05-23,,"In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. 
While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations. Furthermore, we extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that although a model has already seen the training data at training time, retrieving demonstrations from the training data at test time yields better results compared to using no demonstrations or random demonstrations. Last but not least, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers.",18143a4c2da37444e06feed04cc9efeb0856352d,Semantic Scholar,,, -459,sociocultural norm similarities and differences via situational alignment and explainable textual entailment,"['Sky Ch-Wang', 'Arkadiy Saakyan', 'Oliver Li', 'Zhou Yu', 'S. Muresan']",http://arxiv.org/pdf/2305.14492,2023-05-23,,"Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q&A platform (Zhihu) and the existing SocialChemistry dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. Further analysis of cross-cultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.",18bd959aaa8a83b5b2192282224d700da7459857,Semantic Scholar,,, -460,flirt feedback loop incontext red teaming,"['Ninareh Mehrabi', 'Palash Goyal', 'Christophe Dupuy', 'Qian Hu', 'Shalini Ghosh', 'R. Zemel', 'Kai-Wei Chang', 'A. Galstyan', 'Rahul Gupta']",https://arxiv.org/pdf/2308.04265,2023-08-08,,"Warning: this paper contains content that may be inappropriate or offensive. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. 
We propose different in-context attack strategies to automatically learn effective and diverse adversarial prompts for text-to-image models. Our experiments demonstrate that compared to baseline approaches, our proposed strategy is significantly more effective in exposing vulnerabilities in Stable Diffusion (SD) model, even when the latter is enhanced with safety features. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models, resulting in significantly higher toxic response generation rate compared to previously reported numbers.",19443d48399d4fe89a4b0a96917c50c6fd9c5af1,Semantic Scholar,,, -461,icld3ie incontext learning with diverse demonstrations updating for document information extraction,"['Jiabang He', 'Lei Wang', 'Yingpeng Hu', 'Ning Liu', 'Hui-juan Liu', 'Xingdong Xu', 'Hengtao Shen']",https://arxiv.org/pdf/2303.05063,2023-03-09,,"Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated remarkable results in various natural language processing (NLP) tasks with in-context learning, which involves inference based on a few demonstration examples. Despite their successes in NLP tasks, no investigation has been conducted to assess the ability of LLMs to perform document information extraction (DIE) using in-context learning. Applying LLMs to DIE poses two challenges: the modality and task gap. To this end, we propose a simple but effective in-context learning framework called ICL-D3IE, which enables LLMs to perform DIE with different types of demonstration examples. Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations for benefiting all test instances. We design demonstrations describing relationships that enable LLMs to understand positional relationships. We introduce formatting demonstrations for easy answer extraction. Additionally, the framework improves diverse demonstrations by updating them iteratively. Our experiments on three widely used benchmark datasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT to achieve superior performance when compared to previous pre-trained methods fine-tuned with full training in both the in-distribution (ID) setting and in the out-of-distribution (OOD) setting. Code is available at https://github.com/MAEHCM/ICL-D3IE.",197022486b2e2584302bd9b6442e44d15bf3e351,Semantic Scholar,,, -462,extractive summarization via chatgpt for faithful summary generation,"['Haopeng Zhang', 'Xiao Liu', 'Jiawei Zhang']",https://arxiv.org/pdf/2304.04193,2023-04-09,,"Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. 
Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.",1a01c982aa20c1a1ad1ad94866e3197da99a52a2,Semantic Scholar,,, -463,"revisiting outofdistribution robustness in nlp benchmark, analysis, and llms evaluations","['Lifan Yuan', 'Yangyi Chen', 'Ganqu Cui', 'Hongcheng Gao', 'Fangyuan Zou', 'Xingyi Cheng', 'Heng Ji', 'Zhiyuan Liu', 'Maosong Sun']",http://arxiv.org/pdf/2306.04618,2023-06-07,,"This paper reexamines the research on out-of-distribution (OOD) robustness in the field of NLP. We find that the distribution shift settings in previous studies commonly lack adequate challenges, hindering the accurate evaluation of OOD robustness. To address these issues, we propose a benchmark construction protocol that ensures clear differentiation and challenging distribution shifts. Then we introduce BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pre-trained language models for analysis and evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical types that unveil the inner learning mechanism, which could potentially facilitate the forecasting of OOD robustness, correlating with the advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and find that, despite exhibiting some effectiveness in specific cases, they do not offer significant improvement compared to vanilla fine-tuning. Further, we evaluate 5 LLMs with various adaptation paradigms and find that when sufficient ID data is available, fine-tuning domain-specific models outperform LLMs on ID examples significantly. However, in the case of OOD instances, prioritizing LLMs with in-context learning yields better results. We identify that both fine-tuned small models and LLMs face challenges in effectively addressing downstream tasks. The code is public at \url{https://github.com/lifan-yuan/OOD_NLP}.",1a55d16c14587edda62dc9c9ff09e0b531dd169c,Semantic Scholar,,, -464,discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators,"['Giwon Hong', 'Jeonghwan Kim', 'Junmo Kang', 'Sung-Hyon Myaeng', 'Joyce Jiyoung Whang']",http://arxiv.org/pdf/2305.01579,2023-05-02,,"Most existing retrieval-augmented language models (LMs) for question answering assume all retrieved information is factually correct. In this work, we study a more realistic scenario in which retrieved documents may contain misinformation, causing conflicts among them. We observe that the existing models are highly brittle to such information in both fine-tuning and in-context few-shot learning settings. We propose approaches to make retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a discriminator or prompting to elicit discrimination capability in GPT-3. Our empirical results on open-domain question answering show that these approaches significantly improve LMs' robustness to knowledge conflicts. 
We also provide our findings on interleaving the fine-tuned model's decision with the in-context learning process, paving a new path to leverage the best of both worlds.",1a62bc8ed9732bcdb6893a11f5cf239640883f87,Semantic Scholar,,, -465,adversarial demonstration attacks on large language models,"['Jiong Wang', 'Zi-yang Liu', 'Keun Hee Park', 'Muhao Chen', 'Chaowei Xiao']",http://arxiv.org/pdf/2305.14950,2023-05-24,,"With the emergence of more powerful large language models (LLMs), such as ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence in leveraging these models for specific tasks by utilizing data-label pairs as precondition prompts. While incorporating demonstrations can greatly enhance the performance of LLMs across various tasks, it may introduce a new security concern: attackers can manipulate only the demonstrations without changing the input to perform an attack. In this paper, we investigate the security concern of ICL from an adversarial perspective, focusing on the impact of demonstrations. We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models. Our results demonstrate that as the number of demonstrations increases, the robustness of in-context learning would decrease. Additionally, we also identify the intrinsic property of the demonstrations is that they can be used (prepended) with different inputs. As a result, it introduces a more practical threat model in which an attacker can attack the test input example even without knowing and manipulating it. To achieve it, we propose the transferable version of advICL, named Transferable-advICL. Our experiment shows that the adversarial demonstration generated by Transferable-advICL can successfully attack the unseen test input examples. We hope that our study reveals the critical security risks associated with ICL and underscores the need for extensive research on the robustness of ICL, particularly given its increasing significance in the advancement of LLMs.",1abfc211793c683972ded8d3268475e3ee7a88b0,Semantic Scholar,,, -466,is chatgpt a good causal reasoner a comprehensive evaluation,"['Jin-Fang Gao', 'Xiao Ding', 'Bing Qin', 'Ting Liu']",https://arxiv.org/pdf/2305.07375,2023-05-12,,"Causal reasoning ability is crucial for numerous NLP applications. Despite the impressive emerging ability of ChatGPT in various NLP tasks, it is unclear how well ChatGPT performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of the ChatGPT's causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but a good causal explainer. Besides, ChatGPT has a serious hallucination on causal reasoning, possibly due to the reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT's upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT) techniques can further exacerbate such causal hallucination. Additionally, the causal reasoning ability of ChatGPT is sensitive to the words used to express the causal concept in prompts, and close-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit causality rather than implicit causality, and performs better in sentences with lower event density and smaller lexical distance between events. 
The code is available on https://github.com/ArrogantL/ChatGPT4CausalReasoning .",1b9fc8268b392742ea43c2c017a767cf62386139,Semantic Scholar,,, -467,using incontext learning to improve dialogue safety,"['Nicholas Meade', 'Spandana Gella', 'Devamanyu Hazarika', 'Prakhar Gupta', 'Di Jin', 'Siva Reddy', 'Yang Liu', 'Dilek Z. Hakkani-Tür']",http://arxiv.org/pdf/2302.00871,2023-02-02,,"While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, which often perpetuates social biases or stereotypes. We investigate a retrieval-based method for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. We find our method performs competitively with strong baselines without requiring training. For instance, using automatic evaluation, we find our best fine-tuned baseline only generates safe responses to unsafe dialogue contexts from DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking procedure which can further improve response safeness.",1d75f8de31bf47ec46fa5586056420ec8bc97e86,Semantic Scholar,,, -468,how to unleash the power of large language models for fewshot relation extraction,"['Xin Xu', 'Yuqi Zhu', 'Xiaohan Wang', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.01555,2023-05-02,,"Scaling language models have revolutionized widespread NLP tasks, yet little comprehensively explored few-shot relation extraction with large language models. In this paper, we investigate principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely-studied relation extraction datasets. We hope our work can inspire future research for the capabilities of large language models in few-shot relation extraction. Code is available in https://github.com/zjunlp/DeepKE/tree/main/example/llm.",1ddeb500dd88d4b860b32bec1e2a85f8a53910d6,Semantic Scholar,,, -469,multilingual llms are better crosslingual incontext learners with alignment,"['Eshaan Tanwar', 'Manish Borthakur', 'Subhabrata Dutta', 'Tanmoy Chakraborty']",http://arxiv.org/pdf/2305.05940,2023-05-10,,"In-context learning (ICL) unfolds as large language models become capable of inferring test labels conditioned on a few labeled samples without any gradient update. ICL-enabled large language models provide a promising step forward toward bypassing recurrent annotation costs in a low-resource setting. Yet, only a handful of past studies have explored ICL in a cross-lingual setting, in which the need for transferring label-knowledge from a high-resource language to a low-resource one is immensely crucial. To bridge the gap, we provide the first in-depth analysis of ICL for cross-lingual text classification. 
We find that the prevalent mode of selecting random input-label pairs to construct the prompt-context is severely limited in the case of cross-lingual ICL, primarily due to the lack of alignment in the input as well as the output spaces. To mitigate this, we propose a novel prompt construction strategy — Cross-lingual In-context Source Target Alignment (X-InSTA). With an injected coherence in the semantics of the input examples and a task-based alignment across the source and target languages, X-InSTA is able to outperform random prompt selection by a large margin across three different tasks using 44 different cross-lingual pairs.",1fb5a5298747b8c7d60f98640a543f20d42ab053,Semantic Scholar,,, -470,boosting incontext learning with factual knowledge,"['J. Wang', 'Chengyu Wang', 'Chuanqi Tan', 'Jun Huang', 'Ming Gao']",https://arxiv.org/pdf/2309.14771,2023-09-26,,"In-Context Learning (ICL) over Large language models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates and achieving competitive performance. In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets, i.e., the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL: 1) injecting factual knowledge to LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on auto-regressive LLMs (e.g., GPT-style models) over multiple text classification and question answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines, and improves by more than 13% and 7% of accuracy on text classification and question answering tasks, respectively.",20177a85f632a34d085bcf645507e461733fcc96,Semantic Scholar,,, -471,chatgpt for zeroshot dialogue state tracking a solution or an opportunity,"['Michael Heck', 'Nurul Lubis', 'Benjamin Matthias Ruppik', 'Renato Vukovic', 'Shutong Feng', 'Christian Geishauser', 'Hsien-chin Lin', 'Carel van Niekerk', ""Milica Gavsi'c""]",http://arxiv.org/pdf/2306.01386,2023-06-02,,"Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. 
We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods.",214fbadc57e954e325dc055fee5ac0e224dfde11,Semantic Scholar,,, -472,llmlingua compressing prompts for accelerated inference of large language models,"['Huiqiang Jiang', 'Qianhui Wu', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",https://arxiv.org/pdf/2310.05736,2023-10-09,,"Large language models (LLMs) have been applied in various applications due to their astonishing capabilities. With advancements in technologies such as chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of tokens. To accelerate model inference and reduce cost, this paper presents LLMLingua, a coarse-to-fine prompt compression method that involves a budget controller to maintain semantic integrity under high compression ratios, a token-level iterative compression algorithm to better model the interdependence between compressed contents, and an instruction tuning based method for distribution alignment between language models. We conduct experiments and analysis over four datasets from different scenarios, i.e., GSM8K, BBH, ShareGPT, and Arxiv-March23; showing that the proposed approach yields state-of-the-art performance and allows for up to 20x compression with little performance loss. Our code is available at https://aka.ms/LLMLingua.",2392b6d3a5cad9e5cf349169eaeee848266adf6a,Semantic Scholar,,, -473,exnet efficient incontext learning for dataless text classification,"['Debaditya Shome', 'K. Yadav']",http://arxiv.org/pdf/2305.14622,2023-05-24,,"Large pre-trained language models (PLMs) have made significant progress in encoding world knowledge and spawned a new set of learning paradigms including zero-shot, few-shot, and in-context learning. Many language tasks can be modeled as a set of prompts (for example, is this text about geography?) and language models can provide binary answers, i.e., Yes or No. There is evidence to suggest that the next-word prediction used by many PLMs does not align well with zero-shot paradigms. Therefore, PLMs are fine-tuned as a question-answering system. In-context learning extends zero-shot learning by incorporating prompts and examples, resulting in increased task accuracy. Our paper presents EXnet, a model specifically designed to perform in-context learning without any limitations on the number of examples. We argue that in-context learning is an effective method to increase task accuracy, and providing examples facilitates cross-task generalization, especially when it comes to text classification tasks. With extensive experiments, we show that even our smallest model (15M parameters) generalizes to several unseen classification tasks and domains.",2447d22655803bfacb880f117cc34d2ac5ac7e74,Semantic Scholar,,, -474,salmon selfalignment with principlefollowing reward models,"['Zhiqing Sun', 'Yikang Shen', 'Hongxin Zhang', 'Qinhong Zhou', 'Zhenfang Chen', 'David D. Cox', 'Yiming Yang', 'Chuang Gan']",https://arxiv.org/pdf/2310.05910,2023-10-09,,"Supervised Fine-Tuning (SFT) on response demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. 
However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to intricate tasks challenging due to difficulties in obtaining consistent response demonstrations and in-distribution response preferences. This paper presents a novel approach, namely SALMON (Self-ALignMent with principle-fOllowiNg reward models), to align base language models with minimal human supervision, using only a small set of human-defined principles, yet achieving superior performance. Central to our approach is a principle-following reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. By merely adjusting these principles during the RL training phase, we gain full control over the preferences with the reward model, subsequently influencing the behavior of the RL-trained policies, and eliminating the reliance on the collection of online human preferences. Applying our method to the LLaMA-2-70b base language model, we developed an AI assistant named Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined principles, Dromedary-2 significantly surpasses the performance of several state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have open-sourced the code and model weights to encourage further research into aligning LLM-based AI agents with enhanced supervision efficiency, improved controllability, and scalable oversight.",24df244bf7a6e8c93c5f183d3f62d39c0f773c68,Semantic Scholar,,, -475,cup curriculum learning based prompt tuning for implicit event argument extraction,"['Jiaju Lin', 'Qin Chen', 'Jie Zhou', 'Jiankai Jin', 'Liangye He']",https://arxiv.org/pdf/2205.00498,2022-05-01,,"Implicit event argument extraction (EAE) aims to identify arguments that could scatter over the document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while the implicit relations with long-range dependency are not well studied. Moreover, recent neural network based approaches rely on a large amount of labeled data for training, which is unavailable due to the high labelling cost. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which well captures the long-range dependency between arguments and the trigger. In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted with the learning progress to enhance the reasoning for arguments. Experimental results on two well-known benchmark datasets show the great advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.",65d88194a902332b78dd5a7b919fa577bfa7ee9f,Semantic Scholar,,, -476,delving into multimodal prompting for finegrained visual classification,"['Xin Jiang', 'Hao Tang', 'Junyao Gao', 'Xiaoyu Du', 'Shengfeng He', 'Zechao Li']",https://arxiv.org/pdf/2309.08912,2023-09-16,,"Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. 
However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance in various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pertaining (CLIP) model. Our MP-FGVC comprises a multimodal prompts scheme and a multimodal adaptation scheme. The former includes Subcategory-specific Vision Prompt (SsVP) and Discrepancy-aware Text Prompt (DaTP), which explicitly highlights the subcategory-specific discrepancies from the perspectives of both vision and language. The latter aligns the vision and text prompting elements in a common semantic space, facilitating cross-modal collaborative reasoning through a Vision-Language Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained CLIP model and expedite efficient adaptation for FGVC. Extensive experiments conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.",11e3efa08b5db1a8958dfe8119593a4d3f18796a,Semantic Scholar,,, -477,draw your art dream diverse digital art synthesis with multimodal guided diffusion,"['Nisha Huang', 'Fan Tang', 'Weiming Dong', 'Changsheng Xu']",https://dl.acm.org/doi/pdf/10.1145/3503161.3548282,2022-09-27,,"Digital art synthesis is receiving increasing attention in the multimedia community because of engaging the public with art effectively. Current digital art synthesis methods usually use single-modality inputs as guidance, thereby limiting the expressiveness of the model and the diversity of generated results. To solve this problem, we propose the multimodal guided artwork diffusion (MGAD) model, which is a diffusion-based digital artwork generation approach that utilizes multimodal prompts as guidance to control the classifier-free diffusion model. Additionally, the contrastive language-image pretraining (CLIP) model is used to unify text and image modalities. Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of the combination of the diffusion model and multimodal guidance. Code is available at https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion.",159d2980566fa00bc752e180471ee46d7899d66e,Semantic Scholar,,, -478,zeroshot and fewshot video question answering with multimodal prompts,"['Deniz Engin', 'Yannis Avrithis']",https://arxiv.org/pdf/2309.15915,2023-09-27,,"Recent vision-language models are driven by large-scale pretrained models. However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. Our experiments on several video question answering benchmarks demonstrate the superiority of our approach in terms of performance and parameter efficiency on both zero-shot and few-shot settings. 
Our code is available at https://engindeniz.github.io/vitis.",185e79641a8e7b18ac5a73b8c3cb82fdee3a0c6d,Semantic Scholar,,, -479,vima general robot manipulation with multimodal prompts,"['Yunfan Jiang', 'Agrim Gupta', 'Zichen Zhang', 'Guanzhi Wang', 'Yongqiang Dou', 'Yanjun Chen', 'Li Fei-Fei', 'Anima Anandkumar', 'Yuke Zhu', 'Linxi (Jim) Fan']",http://arxiv.org/pdf/2210.03094,2022-10-06,,"Prompt-based learning has emerged as a successful paradigm in natural language processing, where a single general-purpose language model can be instructed to perform any task specified by input prompts. Yet task specification in robotics comes in various forms, such as imitating one-shot demonstrations, following language instructions, and reaching visual goals. They are often considered different tasks and tackled by specialized models. We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens. Accordingly, we develop a new simulation benchmark that consists of thousands of procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert trajectories for imitation learning, and a four-level evaluation protocol for systematic generalization. We design a transformer-based robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively. VIMA features a recipe that achieves strong model scalability and data efficiency. It outperforms alternative designs in the hardest zero-shot generalization setting by up to $2.9\times$ task success rate given the same training data. With $10\times$ less training data, VIMA still performs $2.7\times$ better than the best competing variant. Code and video demos are available at https://vimalabs.github.io/",25425e299101b13ec2872417a14f961f4f8aa18e,Semantic Scholar,,, -480,multimodal prompt learning for product title generation with extremely limited labels,"['Bang Yang', 'Fenglin Liu', 'Zheng Li', 'Qingyu Yin', 'Chenyu You', 'Bing Yin', 'Yuexian Zou']",https://arxiv.org/pdf/2307.01969,2023-07-05,,"Generating an informative and attractive title for the product is a crucial task for e-commerce. Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. 
The results show that, with only 1% of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100% of training data; With the full labelled data for training, our method achieves state-of-the-art results.",37d91ebd5ec969e2b81027e05f886febf09d2504,Semantic Scholar,,, -481,multimodal prompting with missing modalities for visual recognition,"['Yi-Lun Lee', 'Yi-Hsuan Tsai', 'Wei-Chen Chiu', 'Chen-Yu Lee']",https://arxiv.org/pdf/2303.03369,2023-03-06,,"In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when missing-modality occurs either during training or testing in real-world situations; and 2) when the computation resources are not available to finetune on heavy transformer models. To this end, we propose to utilize prompt learning and mitigate the above two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while only requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments are conducted to show the effectiveness of our prompt learning framework that improves the performance under various missing-modality cases, while alleviating the requirement of heavy model retraining. Code is available.11https://github.com/YiLunLee/missing_aware_prompts",483757dff12df441c6991dd5e7408d922fe01c3d,Semantic Scholar,,, -482,multimodal prompt retrieval for generative visual question answering,"['Timothy Ossowski', 'Junjie Hu']",http://arxiv.org/pdf/2306.17675,2023-06-30,,"Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains with limited labeled data (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting.",534675abb9d72fc0c08d080d4f73335ceb75902c,Semantic Scholar,,, -483,multimodal garment designer humancentric latent diffusion models for fashion image editing,"['Alberto Baldrati', 'Davide Morelli', 'Giuseppe Cartella', 'M. Cornia', 'M. Bertini', 'R. Cucchiara']",https://arxiv.org/pdf/2304.02051,2023-04-04,,"Fashion illustration is used by designers to communicate their vision and to bring the design idea from conceptualization to realization, showing how clothes interact with the human body. In this context, computer vision can thus be used to improve the fashion design process. 
Differently from previous works that mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts, such as text, human body poses, and garment sketches. We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain. Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner. Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and coherence with the given multimodal inputs. Source code and collected multimodal annotations are publicly available at: https://github.com/aimagelab/multimodal-garment-designer.",6c925427841ea4a776a578d438f9e47a64c3014e,Semantic Scholar,,, -484,vitaclip video and text adaptive clip via multimodal prompting,"['Syed Talal Wasim', 'Muzammal Naseer', 'Salman Khan', 'F. Khan', 'M. Shah']",https://arxiv.org/pdf/2304.03307,2023-04-06,,"Adopting contrastive image-text pretrained models like CLIP towards video classification has gained attention due to its cost-effectiveness and competitive performance. However, recent works in this area face a trade-off. Finetuning the pretrained model to achieve strong supervised performance results in low zero-shot generalization. Similarly, freezing the backbone to retain zero-shot capability causes significant drop in supervised accuracy. Because of this, recent works in literature typically train separate models for supervised and zero-shot action recognition. In this work, we propose a multimodal prompt learning scheme that works to balance the supervised and zero-shot performance under a single unified training. Our prompting approach on the vision side caters for three aspects: 1) Global video-level prompts to model the data distribution; 2) Local frame-level prompts to provide per-frame discriminative conditioning; and 3) a summary prompt to extract a condensed video representation. Additionally, we define a prompting scheme on the text side to augment the textual context. Through this prompting scheme, we can achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and UCF101 while remaining competitive in the supervised setting. By keeping the pretrained backbone frozen, we optimize a much lower number of parameters and retain the existing general representation which helps achieve the strong zero-shot performance. Our codes/models will be released at https://github.com/TalalWasim/Vita-Clip..",8b5f4b383008bfb365cee72e5301ee04a24221f7,Semantic Scholar,,, -485,audio visual language maps for robot navigation,"['Chen Huang', 'Oier Mees', 'Andy Zeng', 'Wolfram Burgard']",http://arxiv.org/pdf/2303.07522,2023-03-13,,"While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. 
In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world - navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available at: https://avlmaps.github.io.",93565fe6db3948c9c414af1d1edccf4aff5e2e10,Semantic Scholar,,, -486,rmprt realistic robotic manipulation simulator and benchmark with progressive reasoning tasks,"['Pengzhen Ren', 'Kaiwen Zhang', 'Hetao Zheng', 'Zixuan Li', 'Yuhang Wen', 'Fengda Zhu', 'Mas Ma', 'Xiaodan Liang']",http://arxiv.org/pdf/2306.11335,2023-06-20,,"Recently, the advent of pre-trained large-scale language models (LLMs) like ChatGPT and GPT-4 have significantly advanced the machine's natural language understanding capabilities. This breakthrough has allowed us to seamlessly integrate these open-source LLMs into a unified robot simulator environment to help robots accurately understand and execute human natural language instructions. To this end, in this work, we introduce a realistic robotic manipulation simulator and build a Robotic Manipulation with Progressive Reasoning Tasks (RM-PRT) benchmark on this basis. Specifically, the RM-PRT benchmark builds a new high-fidelity digital twin scene based on Unreal Engine 5, which includes 782 categories, 2023 objects, and 15K natural language instructions generated by ChatGPT for a detailed evaluation of robot manipulation. We propose a general pipeline for the RM-PRT benchmark that takes as input multimodal prompts containing natural language instructions and automatically outputs actions containing the movement and position transitions. We set four natural language understanding tasks with progressive reasoning levels and evaluate the robot's ability to understand natural language instructions in two modes of adsorption and grasping. In addition, we also conduct a comprehensive analysis and comparison of the differences and advantages of 10 different LLMs in instruction understanding and generation quality. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation. Project website: https://necolizer.github.io/RM-PRT/ .",9dbb39eccbcd31b8f6b4ff0a2c96f61a7c34e54b,Semantic Scholar,,, -487,fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Pengfei Hong', 'Soujanya Poria']",https://dl.acm.org/doi/pdf/10.1145/3581783.3612181,2022-11-12,,"Multimodal sentiment analysis has gained significant attention due to the proliferation of multimodal content on social media. However, existing studies in this area rely heavily on large-scale supervised data, which is time-consuming and labor-intensive to collect. Thus, there is a need to address the challenge of few-shot multimodal sentiment analysis. To tackle this problem, we propose a novel method called Multimodal Probabilistic Fusion Prompts (MultiPoint) that leverages diverse cues from different modalities for multimodal sentiment detection in the few-shot scenario. 
Specifically, we start by introducing a Consistently Distributed Sampling approach called CDS, which ensures that the few-shot dataset has the same category distribution as the full dataset. Unlike previous approaches primarily using prompts based on the text modality, we design unified multimodal prompts to reduce discrepancies between different modalities and dynamically incorporate multimodal demonstrations into the context of each multimodal instance. To enhance the model's robustness, we introduce a probabilistic fusion method to fuse output predictions from multiple diverse prompts for each input. Our extensive experiments on six datasets demonstrate the effectiveness of our approach. First, our method outperforms strong baselines in the multimodal few-shot setting. Furthermore, under the same amount of data (1% of the full dataset), our CDS-based experimental results significantly outperform those based on previously sampled datasets constructed from the same number of instances of each class.",befcb92f313030632717a74a2afd651a1445a745,Semantic Scholar,,, -488,multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,"['Shihao Zou', 'Xianying Huang', 'Xudong Shen']",https://dl.acm.org/doi/pdf/10.1145/3581783.3611805,2023-10-04,,"Emotion Recognition in Conversation (ERC) plays an important role in driving the development of human-machine interaction. Emotions can exist in multiple modalities, and multimodal ERC mainly faces two problems: (1) the noise problem in the cross-modal information fusion process, and (2) the prediction problem of less sample emotion labels that are semantically similar but different categories. To address these issues and fully utilize the features of each modality, we adopted the following strategies: first, deep emotion cues extraction was performed on modalities with strong representation ability, and feature filters were designed as multimodal prompt information for modalities with weak representation ability. Then, we designed a Multimodal Prompt Transformer (MPT) to perform cross-modal information fusion. MPT embeds multimodal fusion information into each attention layer of the Transformer, allowing prompt information to participate in encoding textual features and being fused with multi-level textual information to obtain better multimodal fusion features. Finally, we used the Hybrid Contrastive Learning (HCL) strategy to optimize the model's ability to handle labels with few samples. This strategy uses unsupervised contrastive learning to improve the representation ability of multimodal fusion and supervised contrastive learning to mine the information of labels with few samples. Experimental results show that our proposed model outperforms state-of-the-art models in ERC on two benchmark datasets.",e4abc33cbb84934029af6d50360f7ad3bba3df3c,Semantic Scholar,,, -489,fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Sun Qi', 'Wenfang Wu', 'Yifei Zhang', 'Pengfei Hong', 'Soujanya Poria']",http://arxiv.org/pdf/2305.10169,2023-05-17,,"We have witnessed the rapid proliferation of multimodal data on numerous social media platforms. Conventional studies typically require massive labeled data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA). However, collecting and annotating fine-grained multimodal data for MABSA is tough. 
To alleviate the above issue, we perform three MABSA-related tasks with quite a small number of labeled multimodal samples. We first build diverse and comprehensive multimodal few-shot datasets according to the data distribution. To capture the specific prompt for each aspect term in a few-shot scenario, we propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which includes the Multimodal Encoder module and the N-Stream Decoders module. We further introduce a subtask to predict the number of aspect terms in each instance to construct the multimodal prompt. Extensive experiments on two datasets demonstrate that our approach outperforms strong baselines on two MABSA-related tasks in the few-shot setting.",fd7082630257b03771c72a926a64b13eb16e00af,Semantic Scholar,,, -490,textbased person search without parallel imagetext data,"['Yang Bai', 'Jingyao Wang', 'Min Cao', 'Cheng Chen', 'Ziqiang Cao', 'Liqiang Nie', 'Min Zhang']",https://dl.acm.org/doi/pdf/10.1145/3581783.3612285,2023-05-22,,"Text-based person search (TBPS) aims to retrieve the images of the target person from a large image gallery based on a given natural language description. Existing methods are dominated by training models with parallel image-text pairs, which are very costly to collect. In this paper, we make the first attempt to explore TBPS without parallel image-text data (μ-TBPS), in which only non-parallel images and texts, or even image-only data, can be adopted. Towards this end, we propose a two-stage framework, generation-then-retrieval (GTR), to first generate the corresponding pseudo text for each image and then perform the retrieval in a supervised manner. In the generation stage, we propose a fine-grained image captioning strategy to obtain an enriched description of the person image, which firstly utilizes a set of instruction prompts to activate the off-the-shelf pretrained vision-language model to capture and generate fine-grained person attributes, and then converts the extracted attributes into a textual description via the finetuned large language model or the hand-crafted template. In the retrieval stage, considering the noise interference of the generated texts for training model, we develop a confidence score-based training scheme by enabling more reliable texts to contribute more during the training. Experimental results on multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that the proposed GTR can achieve a promising performance without relying on parallel image-text data.",0213827d882ec34aa9935f2b03a80362af806778,Semantic Scholar,,, -491,neuro symbolic reasoning for planning counterexample guided inductive synthesis using large language models and satisfiability solving,"['Sumit Kumar Jha', 'Susmit Jha', 'Patrick Lincoln', 'Nathaniel D. Bastian', 'Alvaro Velasquez', 'Rickard Ewetz', 'Sandeep Neema']",https://arxiv.org/pdf/2309.16436,2023-09-28,,"Generative large language models (LLMs) with instruct training such as GPT-4 can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results despite their syntactic coherence - a phenomenon often referred to as hallucination. 
This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications. Unlike tasks such as text summarization and question-answering, bugs in code, plan, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use the satisfiability modulo theory (SMT) solvers as deductive reasoning engines to analyze the generated solutions from the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM to generate the correct response. In our experiments, we use planning over the domain of blocks as our synthesis task for evaluating our approach. We use GPT-4, GPT3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to SMT solvers is automatically generated from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.",1c89d8672a3742672850fa46f1e8ec51f3261019,Semantic Scholar,,, -492,layout and task aware instruction prompt for zeroshot document image question answering,"['Wenjin Wang', 'Yunhao Li', 'Yixin Ou', 'Yin Zhang']",https://arxiv.org/pdf/2306.00526,2023-06-01,,"Layout-aware pre-trained models has achieved significant progress on document image question answering. They introduce extra learnable modules into existing language models to capture layout information within document images from text bounding box coordinates obtained by OCR tools. However, extra modules necessitate pre-training on extensive document images. This prevents these methods from directly utilizing off-the-shelf instruction-tuning language foundation models, which have recently shown promising potential in zero-shot learning. Instead, in this paper, we find that instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks. Based on this observation, we propose the LAyout and Task aware Instruction Prompt (LATIN-Prompt), which consists of layout-aware document content and task-aware instruction. Specifically, the former uses appropriate spaces and line breaks to recover the layout information among text segments obtained by OCR tools, and the latter ensures that generated answers adhere to formatting requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning (LATIN-Tuning) to improve the performance of small instruction-tuning models like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering, and LATIN-Tuning enhances the zero-shot performance of Alpaca significantly. For example, LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263% and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning. 
We provide the code in supplementary and will release it to facilitate future research.",1e25118f99e03ffecf79412b46dda8a2966752c8,Semantic Scholar,,, -493,inferfix endtoend program repair with llms,"['Ma Jin', 'Syed Shahriar', 'Michele Tufano', 'Xin Shi', 'Shuai Lu', 'Neel Sundaresan', 'Alexey Svyatkovskiy']",http://arxiv.org/pdf/2303.07263,2023-03-13,,"Software development life cycle is profoundly influenced by bugs: their introduction, identification, and eventual resolution account for a significant portion of software cost. This has motivated software engineering researchers and practitioners to propose different approaches for automating the identification and repair of software defects. Large language models have been adapted to the program repair task through few-shot demonstration learning and instruction prompting, treating this as an infilling task. However, these models have only focused on learning general bug-fixing patterns for uncategorized bugs mined from public repositories. In this paper, we propose InferFix: a transformer-based program repair framework paired with a state-of-the-art static analyzer to fix critical security and performance bugs. InferFix combines a Retriever -- transformer encoder model pretrained via contrastive learning objective, which aims at searching for semantically equivalent bugs and corresponding fixes; and a Generator -- a large language model (Codex Cushman) finetuned on supervised bug-fix data with prompts augmented via bug type annotations and semantically similar fixes retrieved from an external non-parametric memory. To train and evaluate our approach, we curated InferredBugs, a novel, metadata-rich dataset of bugs extracted by executing the Infer static analyzer on the change histories of thousands of Java and C# repositories. Our evaluation demonstrates that InferFix outperforms strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C# and 76.8% in Java. We discuss the deployment of InferFix alongside Infer at Microsoft which offers an end-to-end solution for detection, classification, and localization of bugs, as well as fixing and validation of candidate patches, integrated in the continuous integration pipeline to automate the software development workflow.",34d24b2d9f116f8f652c112d4ac924afcf11bd0d,Semantic Scholar,,, -494,edm3 event detection as multitask text generation,"['Ujjwala Anantheswaran', 'Himanshu Gupta', 'Mihir Parmar', 'Kuntal Kumar Pal', 'Chitta Baral']",http://arxiv.org/pdf/2305.16357,2023-05-25,,"Event detection refers to identifying event occurrences in a text and comprises of two subtasks; event identification and classification. We present EDM3, a novel approach for Event Detection that formulates three generative tasks: identification, classification, and combined detection. We show that EDM3 helps to learn transferable knowledge that can be leveraged to perform Event Detection and its subtasks concurrently, mitigating the error propagation inherent in pipelined approaches. Unlike previous dataset- or domain-specific approaches, EDM3 utilizes the existing knowledge of language models, allowing it to be trained over any classification schema. We evaluate EDM3 on multiple event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3 outperforms 1) single-task performance by 8.4% on average and 2) multi-task performance without instructional prompts by 2.4% on average. We obtain SOTA results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other datasets. 
We analyze our approach to demonstrate its efficacy in low-resource and multi-sentence settings. We also show the effectiveness of this approach on non-standard event configurations such as multi-word and multi-class event triggers. Overall, our results show that EDM3 is a promising approach for Event Detection that has the potential for real-world applications.",3d71d4097a3dcc1289b709872d7523a035e6986f,Semantic Scholar,,, -495,vast a visionaudiosubtitletext omnimodality foundation model and dataset,"['Sihan Chen', 'Handong Li', 'Qunbo Wang', 'Zijia Zhao', 'Ming-Ting Sun', 'Xinxin Zhu', 'J. Liu']",https://arxiv.org/pdf/2305.18500,2023-05-29,,"Vision and text have been fully explored in contemporary video-text foundational models, while other modalities such as audio and subtitles in videos have not received sufficient attention. In this paper, we resort to establish connections between multi-modality video tracks, including Vision, Audio, and Subtitle, and Text by exploring an automatically generated large-scale omni-modality video caption dataset called VAST-27M. Specifically, we first collect 27 million open-domain video clips and separately train a vision and an audio captioner to generate vision and audio captions. Then, we employ an off-the-shelf Large Language Model (LLM) to integrate the generated captions, together with subtitles and instructional prompts into omni-modality captions. Based on the proposed VAST-27M dataset, we train an omni-modality video-text foundational model named VAST, which can perceive and process vision, audio, and subtitle modalities from video, and better support various tasks including vision-text, audio-text, and multi-modal video-text tasks (retrieval, captioning and QA). Extensive experiments have been conducted to demonstrate the effectiveness of our proposed VAST-27M corpus and VAST foundation model. VAST achieves 22 new state-of-the-art results on various cross-modality benchmarks. Code, model and dataset will be released at https://github.com/TXH-mercury/VAST.",4e33c5756aa18d248cf50fef9382acda1e0f65da,Semantic Scholar,,, -496,instruction tuning for fewshot aspectbased sentiment analysis,"['Siddharth Varia', 'Shuai Wang', 'Kishaloy Halder', 'Robert Vacareanu', 'Miguel Ballesteros', 'Yassine Benajiba', 'Neha Ann John', 'Rishita Anubhai', 'S. Muresan', 'D. Roth']",http://arxiv.org/pdf/2210.06629,2022-10-12,,"Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts:aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-taskssuch as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadrupletsfrom text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA, and the associated sub-tasksto improve the performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. 
In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings performance boost (by absolute 8.29 F1) in the few-shot learning setting.",5dbc2b2ee6e65e39fa3fc4bd5030be7a4a9f9a76,Semantic Scholar,,, -497,discrete prompt compression with reinforcement learning,"['Hoyoun Jung', 'Kyung-Joong Kim']",https://arxiv.org/pdf/2308.08758,2023-08-17,,"Instruction-tuned Language Models (LMs) are widely used by users to address various problems with task-specific prompts. Constraints associated with the context window length and computational costs encourage the development of compressed prompts. Existing methods rely heavily on training embeddings, which are designed to accommodate multiple token meanings. This presents challenges in terms of interpretability, a fixed number of embedding tokens, reusability across different LMs, and inapplicability when interacting with black-box APIs. This study proposes prompt compression with reinforcement learning (PCRL), a novel discrete prompt compression method that addresses these issues. PCRL employs a computationally efficient policy network that directly edits prompts. The PCRL training approach can be flexibly applied to various types of LMs, as well as decoder-only and encoder-decoder architecture, and can be trained without gradient access to LMs or labeled data. PCRL achieves an average reduction of 24.6% in token count across various instruction prompts while preserving performance. Further, we demonstrate that the learned policy can be transferred to larger LMs, and through various analyses, we aid the understanding of token importance within prompts.",5df422fc18974d687febd171adcac35b3012c50a,Semantic Scholar,,, -498,harnessing large language models' empathetic response generation capabilities for online mental health counselling support,"['Siyuan Brandon Loh', 'Aravind Sesagiri Raamkumar']",https://arxiv.org/pdf/2310.08017,2023-10-12,,"Large Language Models (LLMs) have demonstrated remarkable performance across various information-seeking and reasoning tasks. These computational systems drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also carry substantial promise in meeting the growing demands of mental health care, albeit relatively unexplored. As such, this study sought to examine LLMs' capability to generate empathetic responses in conversations that emulate those in a mental health counselling setting. We selected five LLMs: version 3.5 and version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple instructional prompt, these models responded to utterances derived from the EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we compared their responses to those from traditional response generation dialogue systems, which were fine-tuned on the ED dataset, along with human-generated responses. Notably, we discovered that responses from the LLMs were remarkably more empathetic in most scenarios. 
We position our findings in light of catapulting advancements in creating empathetic conversational systems.",88a3abf671d922ebd61a34007908a5f6b6978bd4,Semantic Scholar,,, -499,red teaming language model detectors with language models,"['Zhouxing Shi', 'Yihan Wang', 'Fan Yin', 'Xiangning Chen', 'Kai-Wei Chang', 'Cho-Jui Hsieh']",http://arxiv.org/pdf/2305.19713,2023-05-31,,"The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent works have proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems.",99f121a70fa683487bb0da3678a8144f57f65c60,Semantic Scholar,,, -500,promptbased learning for thread structure prediction in cybersecurity forums,"['Kazuaki Kashihara', 'Kuntal Kumar Pal', 'Chitta Baral', 'Robert P. Trevino']",http://arxiv.org/pdf/2303.05400,2023-03-05,,"With recent trends indicating cyber crimes increasing in both frequency and cost, it is imperative to develop new methods that leverage data-rich hacker forums to assist in combating ever evolving cyber threats. Defining interactions within these forums is critical as it facilitates identifying highly skilled users, which can improve prediction of novel threats and future cyber attacks. We propose a method called Next Paragraph Prediction with Instructional Prompting (NPP-IP) to predict thread structures while grounded on the context around posts. This is the first time to apply an instructional prompting approach to the cybersecurity domain. We evaluate our NPP-IP with the Reddit dataset and Hacker Forums dataset that has posts and thread structures of real hacker forums' threads, and compare our method's performance with existing methods. The experimental evaluation shows that our proposed method can predict the thread structure significantly better than existing methods allowing for better social network prediction based on forum interactions.",a71207f1d036969bf92959ea56cf146d5d8eb297,Semantic Scholar,,, -501,impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt,"['Chong Ma', 'Zihao Wu', 'Jiaqi Wang', 'Shaochen Xu', 'Yaonai Wei', 'Zheng Liu', 'Lei Guo', 'Xiaoya Cai', 'Shu Zhang', 'Tuo Zhang', 'Dajiang Zhu', 'Dinggang Shen', 'Tianming Liu', 'Xiang Li']",http://arxiv.org/pdf/2304.08448,2023-04-17,,"The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. 
Although recent studies have achieved promising results in automatic impression generation using large-scale medical text data for pre-training and fine-tuning pre-trained language models, such models often require substantial amounts of medical text data and have poor generalization performance. While large language models (LLMs) like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples from existing data. Additionally, we design an iterative optimization algorithm that performs automatic evaluation on the generated impression results and composes the corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.",a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151,Semantic Scholar,,, -502,camoscio an italian instructiontuned llama,"['Andrea Santilli', 'E. Rodolà']",https://arxiv.org/pdf/2307.16456,2023-07-31,,"In recent years Large Language Models (LLMs) have increased the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically multilingual and not specifically tailored for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following url: https://github.com/teelinsan/camoscio",a7ff4d1a89baa5007b3c9ee46492aaf88dfc257f,Semantic Scholar,,, -503,mondrian prompt abstraction attack against large language models for cheaper api pricing,"['Waiman Si', 'M. Backes', 'Yang Zhang']",https://arxiv.org/pdf/2308.03558,2023-08-07,,"The Machine Learning as a Service (MLaaS) market is rapidly expanding and becoming more mature. For example, OpenAI's ChatGPT is an advanced large language model (LLM) that generates responses for various queries with associated fees. Although these models can deliver satisfactory performance, they are far from perfect. Researchers have long studied the vulnerabilities and limitations of LLMs, such as adversarial attacks and model toxicity. Inevitably, commercial ML models are also not exempt from such issues, which can be problematic as MLaaS continues to grow. 
In this paper, we discover a new attack strategy against LLM APIs, namely the prompt abstraction attack. Specifically, we propose Mondrian, a simple and straightforward method that abstracts sentences, which can lower the cost of using LLM APIs. In this approach, the adversary first creates a pseudo API (with a lower established price) to serve as the proxy of the target API (with a higher established price). Next, the pseudo API leverages Mondrian to modify the user query, obtain the abstracted response from the target API, and forward it back to the end user. Our results show that Mondrian successfully reduces user queries' token length ranging from 13% to 23% across various tasks, including text classification, generation, and question answering. Meanwhile, these abstracted queries do not significantly affect the utility of task-specific and general language models like ChatGPT. Mondrian also reduces instruction prompts' token length by at least 11% without compromising output quality. As a result, the prompt abstraction attack enables the adversary to profit without bearing the cost of API development and deployment.",afa0188e454495c08bfaecf29596f01efb468b9a,Semantic Scholar,,, -504,linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging,"['Andrew Rosenbaum', 'Saleh Soltan', 'Wael Hamza', 'Yannick Versley', 'M. Boese']",http://arxiv.org/pdf/2209.09900,2022-09-20,,"We present LINGUIST, a method for generating annotated data for Intent Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and Example Extrapolation) by a wide margin, showing absolute improvement for the target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST out-performs a strong baseline of Machine Translation with Slot Alignment by +4.14 points absolute on ST F1 Score across 6 languages, while matching performance on IC. Finally, we verify our results on an internal large-scale multilingual dataset for conversational agent IC+ST and show significant improvements over a baseline which uses Back-Translation, Paraphrasing and Slot Catalog Resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the outputs of multilingual intent- and slot-labeled data generation.",cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419,Semantic Scholar,,, -505,"grips gradientfree, editbased instruction search for prompting large language models","['Archiki Prasad', 'Peter Hase', 'Xiang Zhou', 'Mohit Bansal']",http://arxiv.org/pdf/2203.07281,2022-03-14,,"Providing natural language instructions in prompts is a useful new paradigm for improving task performance of large language models in a zero-shot setting. Recent work has aimed to improve such prompts via manual rewriting or gradient-based tuning. However, manual rewriting is time-consuming and requires subjective interpretation, while gradient-based tuning can be extremely computationally demanding for large models and may not be feasible for API-based models. In this work, we introduce Gradient-free Instructional Prompt Search (GrIPS), a gradient-free, edit-based search approach for improving task instructions for large language models. 
GrIPS takes in instructions designed for humans and automatically returns an improved, edited prompt, while allowing for API-based tuning. With InstructGPT models, GrIPS improves the average task performance by up to 4.30 percentage points on eight classification tasks from the Natural Instructions dataset (with similar improvements for OPT, BLOOM, and FLAN-T5). We see improvements for both instruction-only prompts and instruction + k-shot examples prompts. Notably, GrIPS outperforms manual rewriting and purely example-based prompts while controlling for the available compute and data budget. Further, performance of GrIPS is comparable to select gradient-based tuning approaches. Qualitatively, we show our edits can simplify instructions and at times make them incoherent but nonetheless improve accuracy.",cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e,Semantic Scholar,,, -506,casteist but not racist quantifying disparities in large language model bias between india and the west,"['Khyati Khandelwal', 'Manuel Tonneau', 'Andrew M. Bean', 'Hannah Rose Kirk', 'Scott A. Hale']",https://arxiv.org/pdf/2309.08573,2023-09-15,,"Large Language Models (LLMs), now used daily by millions of users, can encode societal biases, exposing their users to representational harms. A large body of scholarship on LLM bias exists but it predominantly adopts a Western-centric frame and attends comparatively less to bias levels and potential harms in the Global South. In this paper, we quantify stereotypical bias in popular LLMs according to an Indian-centric frame and compare bias levels between the Indian and Western contexts. To do this, we develop a novel dataset which we call Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and anti-stereotypical examples for caste and religion contexts. We find that the majority of LLMs tested are strongly biased towards stereotypes in the Indian context, especially as compared to the Western context. We finally investigate Instruction Prompting as a simple intervention to mitigate such bias and find that it significantly reduces both stereotypical and anti-stereotypical biases in the majority of cases for GPT-3.5. The findings of this work highlight the need for including more diverse voices when evaluating LLMs.",e4282cab4a435d5249fc8db49fc1c9268438fedb,Semantic Scholar,,, -507,selfalignment with instruction backtranslation,"['Xian Li', 'Ping Yu', 'Chunting Zhou', 'Timo Schick', 'Luke Zettlemoyer', 'Omer Levy', 'J. Weston', 'M. Lewis']",https://arxiv.org/pdf/2308.06259,2023-08-11,,"We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard not relying on distillation data, demonstrating highly effective self-alignment.",f2ba9e7d9624bd94a786ea5e3161a9425a21a475,Semantic Scholar,,, -508,inboxbart get instructions into biomedical multitask learning,"['Mihir Parmar', 'Swaroop Mishra', 'Mirali Purohit', 'Man Luo', 'M. H. 
Murad', 'Chitta Baral']",http://arxiv.org/pdf/2204.07600,2022-04-15,,"Single-task models have proven pivotal in solving specific tasks; however, they have limitations in real-world applications where multi-tasking is necessary and domain shifts are exhibited. Recently, instructional prompts have shown significant improvement towards multi-task generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain. Motivated by this, this paper explores the impact of instructional prompts for biomedical MTL. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART, that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to propose a unified model in the biomedical domain and use instructions to achieve generalization across several biomedical tasks. Experimental results indicate that the proposed model: 1) outperforms the single-task baseline by ~3% and multi-task (without instruction) baseline by ~18% on an average, and 2) shows ~23% improvement compared to the single-task baseline in few-shot learning (i.e., 32 instances per task) on an average. Our analysis indicates that there is significant room for improvement across tasks in the BoX, implying the scope for future research direction.",fb30166c218bef3597b0d9789ad340defc3989ca,Semantic Scholar,,, -509,zeroshot information extraction from radiological reports using chatgpt,"['D. Hu', 'Bing Liu', 'Xiaofeng Zhu', 'Xudong Lu', 'Nan Wu']",https://arxiv.org/pdf/2309.01398,2023-09-04,,"Electronic health records contain an enormous amount of valuable information, but many are recorded in free text. Information extraction is the strategy to transform the sequence of characters into structured data, which can be employed for secondary analysis. However, the traditional information extraction components, such as named entity recognition and relation extraction, require annotated data to optimize the model parameters, which has become one of the major bottlenecks in building information extraction systems. With the large language models achieving good performances on various downstream NLP tasks without parameter tuning, it becomes possible to use large language models for zero-shot information extraction. In this study, we aim to explore whether the most popular large language model, ChatGPT, can extract useful information from the radiological reports. We first design the prompt template for the interested information in the CT reports. Then, we generate the prompts by combining the prompt template with the CT reports as the inputs of ChatGPT to obtain the responses. A post-processing module is developed to transform the responses into structured extraction results. We conducted the experiments with 847 CT reports collected from Peking University Cancer Hospital. The experimental results indicate that ChatGPT can achieve competitive performances for some extraction tasks compared with the baseline information extraction system, but some limitations need to be further improved.",0386711d1f9c4240ded4de56026ca18e475b507a,Semantic Scholar,,, -510,cocomo computational consciousness modeling for generative and ethical ai,['Edward Y. 
Chang'],http://arxiv.org/pdf/2304.02438,2023-03-17,,"The CoCoMo model proposes a computational solution to the challenge of incorporating ethical and emotional intelligence considerations into AI systems, with the aim of creating AI agents that combine knowledge with compassion. To achieve this goal, CoCoMo prioritizes fairness, beneficence, non-maleficence, empathy, adaptability, transparency, and critical and exploratory thinking abilities. The model employs consciousness modeling, reinforcement learning, and prompt template formulation to support these desired traits. By incorporating ethical and emotional intelligence considerations, a generative AI model can potentially lead to improved fairness, reduced toxicity, and increased reliability.",12bad2032f3efa5a142d7dd25712960a4f9ca5a7,Semantic Scholar,,, -511,global constraints with prompting for zeroshot event argument classification,"['Zizheng Lin', 'Hongming Zhang', 'Yangqiu Song']",http://arxiv.org/pdf/2302.04459,2023-02-09,,"Determining the role of event arguments is a crucial subtask of event extraction. Most previous supervised models leverage costly annotations, which is not practical for open-domain applications. In this work, we propose to use global constraints with prompting to effectively tackles event argument classification without any annotation and task-specific training. Specifically, given an event and its associated passage, the model first creates several new passages by prefix prompts and cloze prompts, where prefix prompts indicate event type and trigger span, and cloze prompts connect each candidate role with the target argument span. Then, a pre-trained language model scores the new passages, making the initial prediction. Our novel prompt templates can easily adapt to all events and argument types without manual effort. Next, the model regularizes the prediction by global constraints exploiting cross-task, cross-argument, and cross-event relations. Extensive experiments demonstrate our model’s effectiveness: it outperforms the best zero-shot baselines by 12.5% and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1, respectively, without given argument spans. We have made our code publicly available.",1467ced85b3ae2d695079a1557063a445c43988a,Semantic Scholar,,, -512,a unified framework for multiintent spoken language understanding with prompting,"['Feifan Song', 'Lianzhe Huang', 'Houfeng Wang']",http://arxiv.org/pdf/2210.03337,2022-10-07,,"Multi-intent Spoken Language Understanding has great potential for widespread implementation. Jointly modeling Intent Detection and Slot Filling in it provides a channel to exploit the correlation between intents and slots. However, current approaches are apt to formulate these two sub-tasks differently, which leads to two issues: 1) It hinders models from effective extraction of shared features. 2) Pretty complicated structures are involved to enhance expression ability while causing damage to the interpretability of frameworks. In this work, we describe a Prompt-based Spoken Language Understanding (PromptSLU) framework, to intuitively unify two sub-tasks into the same form by offering a common pre-trained Seq2Seq model. In detail, ID and SF are completed by concisely filling the utterance into task-specific prompt templates as input, and sharing output formats of key-value pairs sequence. Furthermore, variable intents are predicted first, then naturally embedded into prompts to guide slot-value pairs inference from a semantic perspective. 
Finally, we are inspired by prevalent multi-task learning to introduce an auxiliary sub-task, which helps to learn relationships among provided labels. Experiment results show that our framework outperforms several state-of-the-art baselines on two public datasets.",171412ef2410fad3f9a09238ad9e272c4e31aed4,Semantic Scholar,,, -513,knowprompt knowledgeaware prompttuning with synergistic optimization for relation extraction,"['Xiang Chen', 'Ningyu Zhang', 'Ningyu Zhang', 'Xin Xie', 'Shumin Deng', 'Yunzhi Yao', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",https://arxiv.org/pdf/2104.07650,2021-04-15,,"Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to obtain a suitable label word. Furthermore, there exists abundant semantic and prior knowledge among the relation labels that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. Then, we synergistically optimize their representation with structured constraints. Extensive experimental results on five datasets with standard and low-resource settings demonstrate the effectiveness of our approach. Our code and datasets are available in GitHub1 for reproducibility.",1a2e90dff605dad7dbefeed121e6d295c7a77d62,Semantic Scholar,,, -514,visual prompting for adversarial robustness,"['Aochuan Chen', 'P. Lorenz', 'Yuguang Yao', 'Pin-Yu Chen', 'Sijia Liu']",https://arxiv.org/pdf/2210.06284,2022-10-12,,"In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows us to design universal (i.e., data-agnostic) input prompting templates, which have plug-and-play capabilities at test time to achieve desired model performance without introducing much computation overhead. Although VP has been successfully applied to improving model generalization, it remains elusive whether and how it can be used to defend against adversarial attacks. We investigate this problem and show that the vanilla VP approach is not effective in adversarial defense since a universal input prompt lacks the capacity for robust learning against sample-specific adversarial perturbations. To circumvent it, we propose a new VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate class-wise visual prompts so as to not only leverage the strengths of ensemble prompts but also optimize their interrelations to improve model robustness. Our experiments show that C-AVP outperforms the conventional VP method, with 2.1× standard accuracy gain and 2× robust accuracy gain. Compared to classical test-time defenses, C-AVP also yields a 42× inference time speedup. Code is available at https://github.com/Phoveran/vp-for-adversarial-robustness.",20cb40199d03395d63615854863f9eda9c7863e2,Semantic Scholar,,, -515,rethinking the event coding pipeline with prompt entailment,"['C. 
Lefebvre', 'Niklas Stoehr']",http://arxiv.org/pdf/2210.05257,2022-10-11,,"For monitoring crises, political events are extracted from the news. The large amount of unstructured full-text event descriptions makes a case-by-case analysis unmanageable, particularly for low-resource humanitarian aid organizations. This creates a demand to classify events into event types, a task referred to as event coding. Typically, domain experts craft an event type ontology, annotators label a large dataset and technical experts develop a supervised coding system. In this work, we propose PR-ENT, a new event coding approach that is more flexible and resource-efficient, while maintaining competitive accuracy: first, we extend an event description such as “Military injured two civilians” by a template, e.g. “People were [Z]” and prompt a pre-trained (cloze) language model to fill the slot Z. Second, we select suitable answer candidates Zstar = “injured”, “hurt”... by treating the event description as premise and the filled templates as hypothesis in a textual entailment task. In a final step, the selected answer candidate can be mapped to its corresponding event type. This allows domain experts to draft the codebook directly as labeled prompts and interpretable answer candidates. This human-in-the-loop process is guided by our codebook design tool. We show that our approach is robust through several checks: perturbing the event description and prompt template, restricting the vocabulary and removing contextual information.",236375f49e3deb8ee7918c1f5e65175e453deb2e,Semantic Scholar,,, -516,positionbased prompting for health outcome generation,"['Micheal Abaho', 'D. Bollegala', 'P. Williamson', 'S. Dodd']",http://arxiv.org/pdf/2204.03489,2022-03-30,,"Probing factual knowledge in Pre-trained Language Models (PLMs) using prompts has indirectly implied that language models (LMs) can be treated as knowledge bases. To this end, this phenomenon has been effective, especially when these LMs are fine-tuned towards not just data, but also to the style or linguistic pattern of the prompts themselves. We observe that satisfying a particular linguistic pattern in prompts is an unsustainable, time-consuming constraint in the probing task, especially because they are often manually designed and the range of possible prompt template patterns can vary depending on the prompting task. To alleviate this constraint, we propose using a position-attention mechanism to capture positional information of each word in a prompt relative to the mask to be filled, hence avoiding the need to re-construct prompts when the prompts’ linguistic pattern changes. Using our approach, we demonstrate the ability of eliciting answers (in a case study on health outcome generation) to not only common prompt templates like Cloze and Prefix but also rare ones too, such as Postfix and Mixed patterns whose masks are respectively at the start and in multiple random places of the prompt. More so, using various biomedical PLMs, our approach consistently outperforms a baseline in which the default PLMs representation is used to predict masked tokens.",2c12d24c5ba5ad3bb3994635fcfcb9f8caac31d0,Semantic Scholar,,, -517,graphprompt biomedical entity normalization using graphbased prompt templates,"['Jiayou Zhang', 'Zhirui Wang', 'Shizhuo Zhang', 'M. 
Bhalerao', 'Yucong Liu', 'Dawei Zhu', 'Sheng Wang']",https://www.biorxiv.org/content/biorxiv/early/2021/12/01/2021.11.29.470486.full.pdf,2021-11-13,,"Biomedical entity normalization unifies the language across biomedical experiments and studies, and further enables us to obtain a holistic view of life sciences. Current approaches mainly study the normalization of more standardized entities such as diseases and drugs, while disregarding the more ambiguous but crucial entities such as pathways, functions and cell types, hindering their real-world applications. To achieve biomedical entity normalization on these under-explored entities, we first introduce an expert-curated dataset OBO-syn encompassing 70 different types of entities and 2 million curated entity-synonym pairs. To utilize the unique graph structure in this dataset, we propose GraphPrompt, a promptbased learning approach that creates prompt templates according to the graphs. Graph-Prompt obtained 41.0% and 29.9% improvement on zero-shot and few-shot settings respectively, indicating the effectiveness of these graph-based prompt templates. We envision that our method GraphPrompt and OBO-syn dataset can be broadly applied to graph-based NLP tasks, and serve as the basis for analyzing diverse and accumulating biomedical data.",2d7a6a52264e8f875105cfb34c6c901bfd1f3229,Semantic Scholar,,, -518,metricprompt prompting model as a relevance metric for fewshot text classification,"['Hongyuan Dong', 'Weinan Zhang', 'Wanxiang Che']",https://arxiv.org/pdf/2306.08892,2023-06-15,,"Prompting methods have shown impressive performance in a variety of text mining tasks and applications, especially few-shot ones. Despite the promising prospects, the performance of prompting model largely depends on the design of prompt template and verbalizer. In this work, we propose MetricPrompt, which eases verbalizer design difficulty by reformulating few-shot text classification task into text pair relevance estimation task. MetricPrompt adopts prompting model as the relevance metric, further bridging the gap between Pre-trained Language Model's (PLM) pre-training objective and text classification task, making possible PLM's smooth adaption. Taking a training sample and a query one simultaneously, MetricPrompt captures cross-sample relevance information for accurate relevance estimation. We conduct experiments on three widely used text classification datasets across four few-shot settings. Results show that MetricPrompt outperforms manual verbalizer and other automatic verbalizer design methods across all few-shot settings, achieving new state-of-the-art (SOTA) performance.",2e403ad2cd02409e1fdc15839da0a3f89886a990,Semantic Scholar,,, -519,prompt learning for news recommendation,"['Zizhuo Zhang', 'Bang-wei Wang']",https://arxiv.org/pdf/2304.05263,2023-04-11,,"Some recent news recommendation (NR) methods introduce a Pre-trained Language Model (PLM) to encode news representation by following the vanilla pre-train and fine-tune paradigm with carefully-designed recommendation-specific neural networks and objective functions. Due to the inconsistent task objective with that of PLM, we argue that their modeling paradigm has not well exploited the abundant semantic information and linguistic knowledge embedded in the pre-training process. Recently, the pre-train, prompt, and predict paradigm, called prompt learning, has achieved many successes in natural language processing domain. 
In this paper, we make the first trial of this new paradigm to develop a Prompt Learning for News Recommendation (Prompt4NR) framework, which transforms the task of predicting whether a user would click a candidate news as a cloze-style mask-prediction task. Specifically, we design a series of prompt templates, including discrete, continuous, and hybrid templates, and construct their corresponding answer spaces to examine the proposed Prompt4NR framework. Furthermore, we use the prompt ensembling to integrate predictions from multiple prompt templates. Extensive experiments on the MIND dataset validate the effectiveness of our Prompt4NR with a set of new benchmark results.",2ee1f98649ff27378fc341cae907eb89aba8fba4,Semantic Scholar,,, -520,groundtruth labels matter a deeper look into inputlabel demonstrations,"['Junyeob Kim', 'Hyuhng Joon Kim', 'Hyunsoo Cho', 'Hwiyeol Jo', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2205.12685,2022-05-25,,"Despite recent explosion of interests in in-context learning, the underlying mechanism and the precise impact of the quality of demonstrations remain elusive.Intuitively, ground-truth labels should have as much impact in in-context learning (ICL) as supervised learning, but recent work reported that the input-label correspondence is significantly less important than previously thought.Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning.With the introduction of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the impact of ground-truth label demonstrations.Through extensive analyses, we find that the correct input-label mappings can have varying impacts on the downstream in-context learning performances, depending on the experimental configuration.Through additional studies, we identify key components, such as the verbosity of prompt templates and the language model size, as the controlling factor to achieve more noise-resilient ICL.",316206a2f89eb94ce02a81fba1dc304586f21b39,Semantic Scholar,,, -521,lowresource multigranularity academic function recognition based on multiple prompt knowledge,"['Jiawei Liu', 'Ziteng Xiong', 'Yi-ping Jiang', 'Yongqiang Ma', 'Wei Lu', 'Yong Huang', 'Qikai Cheng']",http://arxiv.org/pdf/2305.03287,2023-05-05,,"Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally requires large numbers of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining the fine-tune data for scientific NLP task is still challenging and expensive. Inspired by recent advancement in prompt learning, in this paper, we propose the Mix Prompt Tuning (MPT), which is a semi-supervised method to alleviate the dependence on annotated data and improve the performance of multi-granularity academic function recognition tasks with a small number of labeled examples. Specifically, the proposed method provides multi-perspective representations by combining manual prompt templates with automatically learned continuous prompt templates to help the given academic function recognition task take full advantage of knowledge in PLMs. Based on these prompt templates and the fine-tuned PLM, a large number of pseudo labels are assigned to the unlabeled examples. Finally, we fine-tune the PLM using the pseudo training set. 
We evaluate our method on three academic function recognition tasks of different granularity including the citation function, the abstract sentence function, and the keyword function, with datasets from computer science domain and biomedical domain. Extensive experiments demonstrate the effectiveness of our method and statistically significant improvements against strong baselines. In particular, it achieves an average increase of 5% in Macro-F1 score compared with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised method under low-resource settings. In addition, MPT is a general method that can be easily applied to other low-resource scientific classification tasks.",35d2276749c2c31290d2ff410a305112e742da71,Semantic Scholar,,, -522,unihd at tsar2022 shared task is compute all we need for lexical simplification,"['Dennis Aumiller', 'Michael Gertz']",http://arxiv.org/pdf/2301.01764,2023-01-04,,"Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an “ensemble” of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at https://github.com/dennlinger/TSAR-2022-Shared-Task.",40fba1fc70e23abf9a3ea428f186dd44e57723fb,Semantic Scholar,,, -523,large language models are zeroshot rankers for recommender systems,"['Yupeng Hou', 'Junjie Zhang', 'Zihan Lin', 'Hongyu Lu', 'Ruobing Xie', 'Julian McAuley', 'Wayne Xin Zhao']",http://arxiv.org/pdf/2305.08845,2023-05-15,,"Recently, large language models (LLMs) (e.g. GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks. Along this line of research, this work aims to investigate the capacity of LLMs that act as the ranking model for recommender systems. To conduct our empirical study, we first formalize the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and the items retrieved by the candidate generation model as candidates. We adopt a specific prompting approach to solving the ranking task by LLMs: we carefully design the prompting template by including the sequential interaction history, the candidate items, and the ranking instruction. We conduct extensive experiments on two widely-used datasets for recommender systems and derive several key findings for the use of LLMs in recommender systems. We show that LLMs have promising zero-shot ranking abilities, even competitive to or better than conventional recommendation models on candidates retrieved by multiple candidate generators. 
We also demonstrate that LLMs struggle to perceive the order of historical interactions and can be affected by biases like position bias, while these issues can be alleviated via specially designed prompting and bootstrapping strategies. The code to reproduce this work is available at https://github.com/RUCAIBox/LLMRank.",4683d3d6cb31111cf4499a199c0b036662b3eb32,Semantic Scholar,,, -524,can language models be biomedical knowledge bases,"['Mujeen Sung', 'Jinhyuk Lee', 'Sean S. Yi', 'Minji Jeon', 'Sungdong Kim', 'Jaewoo Kang']",https://aclanthology.org/2021.emnlp-main.388.pdf,2021-09-15,,"Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we can extract that knowledge, treating LMs as knowledge bases (KBs). While there has been much work on probing LMs in the general domain, there has been little attention to whether these powerful LMs can be used as domain-specific KBs. To this end, we create the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge triples for probing biomedical LMs. We find that biomedical LMs with recently proposed probing methods can achieve up to 18.51% Acc@5 on retrieving biomedical knowledge. Although this seems promising given the task difficulty, our detailed analyses reveal that most predictions are highly correlated with prompt templates without any subjects, hence producing similar results on each relation and hindering their capabilities to be used as domain-specific KBs. We hope that BioLAMA can serve as a challenging benchmark for biomedical factual probing.",4c5f4ddc68be643fb34ea969bf2c105ff7538995,Semantic Scholar,,, -525,dynamar dynamic prompt with mask token representation,"['Xiaodi Sun', 'Sunny Rajagopalan', 'Priyank Nigam', 'Weiyi Lu', 'Yi Xu', 'Belinda Zeng', 'Trishul M. Chilimbi']",https://arxiv.org/pdf/2206.02982,2022-06-07,,"Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvement on many downstream tasks. Typically when adapting these language models to downstream tasks, like a classification or regression task, we employ a fine-tuning paradigm in which the sentence representation from the language model is input to a task-specific head; the model is then fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has been proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt technologies in practice. There are two issues that arise with the standard prompt approach. First, it can overfit on the prompt template. Second, it requires manual effort to formulate the downstream task as a language model problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. 
Results show that DynaMaR can achieve an average improvement of 10% in few-shot settings and improvement of 3.7% in data-rich settings over the standard fine-tuning approach on four e-commerce applications.",5d5b6b6c033c36a8b730042392cd29da84b67481,Semantic Scholar,,, -526,citeprompt using prompts to identify citation intent in scientific papers,"['Avishek Lahiri', 'Debarshi Kumar Sanyal', 'Imon Mukherjee']",https://arxiv.org/pdf/2304.12730,2023-04-25,,"Citations in scientific papers not only help us trace the intellectual lineage but also are a useful indicator of the scientific significance of the work. Citation intents prove beneficial as they specify the role of the citation in a given context. We present a tool Citeprompt which uses the hitherto unexplored approach of prompt learning for citation intent classification. We argue that with the proper choice of the pretrained language model, the prompt template, and the prompt verbalizer, we can not only get results that are better than or comparable to those obtained with the state-of-the-art methods but also do it with much less exterior information about the scientific document. We report state-of-the-art results on the ACL-ARC dataset, and also show significant improvement on the SciCite dataset over all baseline models except one. As suitably large labelled datasets for citation intent classification can be quite hard to find, in a first, we propose the conversion of this task to the few-shot and zero-shot settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings respectively.",68ee8a53f0b1ff146194980337dd6d533b17c59b,Semantic Scholar,,, -527,multilabel fewshot icd coding as autoregressive generation with prompt,"['Zhichao Yang', 'Sunjae Kwon', 'Zonghai Yao', 'Hongfeng Yu']",https://arxiv.org/pdf/2211.13813,2022-11-24,,"Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge - Many ICD codes are infrequently assigned yet infrequent ICD codes are important clinically. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective to generate free text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting the high dimensional space of ICD codes, our model generates the lower dimension of text descriptions, which then infers ICD codes. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt (GPsoap) model with the benchmark of all code assignment (MIMIC-III-full) and few shot ICD code assignment evaluation benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model performs with a marco F130.2, which substantially outperforms the previous MIMIC-III-full SOTA model (marco F1 4.3) and the model specifically designed for few/zero shot setting (marco F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. 
Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.",6b87c9700b8de4912fe7c361574640b5dc536ca9,Semantic Scholar,,, -528,diffugen adaptable approach for generating labeled image datasets using stable diffusion models,"['Michael Shenoda', 'Edward Kim']",https://arxiv.org/pdf/2309.00248,2023-09-01,,"Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce""DiffuGen,""a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.",6c1a53c05f1b1a024af740df84e530d79400ab86,Semantic Scholar,,, -529,llmfuncmapper function identification for interpreting complex clauses in building codes via llm,"['Zhe Zheng', 'Ke Chen', 'Xin Cao', 'Xin-Zheng Lu', 'Jia Lin']",https://arxiv.org/pdf/2308.08728,2023-08-17,,"As a vital stage of automated rule checking (ARC), rule interpretation of regulatory texts requires considerable effort. However, interpreting regulatory clauses with implicit properties or complex computational logic is still challenging due to the lack of domain knowledge and limited expressibility of conventional logic representations. Thus, LLM-FuncMapper, an approach to identifying predefined functions needed to interpret various regulatory clauses based on the large language model (LLM), is proposed. First, by systematically analysis of building codes, a series of atomic functions are defined to capture shared computational logics of implicit properties and complex constraints, creating a database of common blocks for interpreting regulatory clauses. Then, a prompt template with the chain of thought is developed and further enhanced with a classification-based tuning strategy, to enable common LLMs for effective function identification. Finally, the proposed approach is validated with statistical analysis, experiments, and proof of concept. Statistical analysis reveals a long-tail distribution and high expressibility of the developed function database, with which almost 100% of computer-processible clauses can be interpreted and represented as computer-executable codes. Experiments show that LLM-FuncMapper achieve promising results in identifying relevant predefined functions for rule interpretation. Further proof of concept in automated rule interpretation also demonstrates the possibility of LLM-FuncMapper in interpreting complex regulatory clauses. 
To the best of our knowledge, this study is the first attempt to introduce LLM for understanding and interpreting complex regulatory clauses, which may shed light on further adoption of LLM in the construction domain.",6c4d35d67f843e7de6ec00c088e339b2237d222c,Semantic Scholar,,, 530,fashionsap symbols and attributes prompt for finegrained fashion visionlanguage pretraining,"['Yunpeng Han', 'Lisai Zhang', 'Qingcai Chen', 'Zhijian Chen', 'Zhonghua Li', 'Jianxin Yang', 'Zhao Cao']",https://arxiv.org/pdf/2304.05051,2023-04-11,,"Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific domain tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent different fashion items and to generalize various kinds of fine-grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific attributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Comprehensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and FashionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research. The source code is available at https://github.com/hssip/FashionSAP",6f05be4a0045cee3575fb39e88fc361d96f2cc4f,Semantic Scholar,,, 531,relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction,"['Yew Ken Chia', 'Lidong Bing', 'Soujanya Poria', 'Luo Si']",http://arxiv.org/pdf/2203.09101,2022-03-17,,"Despite the importance of relation extraction in building and representing knowledge, less research is focused on generalizing to unseen relations types. We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods. Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage. To solve ZeroRTE, we propose to synthesize relation examples by prompting language models to generate structured texts. Concretely, we unify language model prompts and structured text approaches to design a structured prompt template for generating synthetic relation samples when conditioning on relation label prompts (RelationPrompt). To overcome the limitation for extracting multiple relation triplets in a sentence, we design a novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot relation classification. 
Our code and data are available at github.com/declare-lab/RelationPrompt.",743dcf234cffd54c4e096a10a284dd81572b16ea,Semantic Scholar,,, -532,investigating the applicability of selfassessment tests for personality measurement of large language models,"['Akshat Gupta', 'Xiaoyang Song', 'G. Anumanchipalli']",https://arxiv.org/pdf/2309.08163,2023-09-15,,"As large language models (LLM) evolve in their capabilities, various recent studies have tried to quantify their behavior using psychological tools created to study human behavior. One such example is the measurement of""personality""of LLMs using personality self-assessment tests. In this paper, we take three such studies on personality measurement of LLMs that use personality self-assessment tests created to study human behavior. We use the prompts used in these three different papers to measure the personality of the same LLM. We find that all three prompts lead very different personality scores. This simple test reveals that personality self-assessment scores in LLMs depend on the subjective choice of the prompter. Since we don't know the ground truth value of personality scores for LLMs as there is no correct answer to such questions, there's no way of claiming if one prompt is more or less correct than the other. We then introduce the property of option order symmetry for personality measurement of LLMs. Since most of the self-assessment tests exist in the form of multiple choice question (MCQ) questions, we argue that the scores should also be robust to not just the prompt template but also the order in which the options are presented. This test unsurprisingly reveals that the answers to the self-assessment tests are not robust to the order of the options. These simple tests, done on ChatGPT and Llama2 models show that self-assessment personality tests created for humans are not appropriate for measuring personality in LLMs.",781f4f7dd871c0eea0ce71692bcbc1283df6b550,Semantic Scholar,,, -533,instructcv instructiontuned texttoimage diffusion models as vision generalists,"['Yulu Gan', 'Sungwoo Park', 'Alexander Schubert', 'Anthony Philippakis', 'A. Alaa']",https://arxiv.org/pdf/2310.00390,2023-09-30,,"Recent advances in generative diffusion models have enabled text-controlled synthesis of realistic and diverse images with impressive quality. Despite these remarkable advances, the application of text-to-image generative models in computer vision for standard visual recognition tasks remains limited. The current de facto approach for these tasks is to design model architectures and loss functions that are tailored to the task at hand. In this paper, we develop a unified language interface for computer vision tasks that abstracts away task-specific design choices and enables task execution by following natural language instructions. Our approach involves casting multiple computer vision tasks as text-to-image generation problems. Here, the text represents an instruction describing the task, and the resulting image is a visually-encoded task output. To train our model, we pool commonly-used computer vision datasets covering a range of tasks, including segmentation, object detection, depth estimation, and classification. We then use a large language model to paraphrase prompt templates that convey the specific tasks to be conducted on each image, and through this process, we create a multi-modal and multi-task training dataset comprising input and output images along with annotated instructions. 
Following the InstructPix2Pix architecture, we apply instruction-tuning to a text-to-image diffusion model using our constructed dataset, steering its functionality from a generative model to an instruction-guided multi-task vision learner. Experiments demonstrate that our model, dubbed InstructCV, performs competitively compared to other generalist and task-specific vision models. Moreover, it exhibits compelling generalization capabilities to unseen data, categories, and user instructions.",819f477065088220a6f706cd9ef76dbcb4b4c134,Semantic Scholar,,, 534,promptlearning for crosslingual relation extraction,"['Chiaming Hsu', 'Changtong Zan', 'Liang Ding', 'Longyue Wang', 'Xiaoting Wang', 'Weifeng Liu', 'Fu Lin', 'Wenbin Hu']",https://arxiv.org/pdf/2304.10354,2023-04-20,,"Relation Extraction (RE) is a crucial task in Information Extraction, which entails predicting relationships between entities within a given sentence. However, extending pre-trained RE models to other languages is challenging, particularly in real-world scenarios where Cross-Lingual Relation Extraction (XRE) is required. Despite recent advancements in Prompt-Learning, which involves transferring knowledge from Multilingual Pre-trained Language Models (PLMs) to diverse downstream tasks, there is limited research on the effective use of multilingual PLMs with prompts to improve XRE. In this paper, we present a novel XRE algorithm based on Prompt-Tuning, referred to as Prompt-XRE. To evaluate its effectiveness, we design and implement several prompt templates, including hard, soft, and hybrid prompts, and empirically test their performance on competitive multilingual PLMs, specifically mBART. Our extensive experiments, conducted on the low-resource ACE05 benchmark across multiple languages, demonstrate that our Prompt-XRE algorithm significantly outperforms both vanilla multilingual PLMs and other existing models, achieving state-of-the-art performance in XRE. To further show the generalization of our Prompt-XRE on larger data scales, we construct and release a new XRE dataset-WMT17-EnZh XRE, containing 0.9M English-Chinese pairs extracted from WMT 2017 parallel corpus. Experiments on WMT17-EnZh XRE also show the effectiveness of our Prompt-XRE against other competitive baselines. The code and newly constructed dataset are freely available at https://github.com/HSU-CHIA-MING/Prompt-XRE.",850b8f31a1bb762544bd35163923784a664b315a,Semantic Scholar,,, 535,large language and textto3d models for engineering design optimization,"['Thiago Rios', 'S. Menzel', 'B. Sendhoff']",https://arxiv.org/pdf/2307.01230,2023-07-03,,"The current advances in generative AI for learning large neural network models with the capability to produce essays, images, music and even 3D assets from text prompts create opportunities for a manifold of disciplines. In the present paper, we study the potential of deep text-to-3D models in the engineering domain, with focus on the chances and challenges when integrating and interacting with 3D assets in computational simulation-based design optimization. In contrast to traditional design optimization of 3D geometries that often searches for the optimum designs using numerical representations, such as B-Spline surface or deformation parameters in vehicle aerodynamic optimization, natural language challenges the optimization framework by requiring a different interpretation of variation operators while at the same time may ease and motivate the human user interaction. 
Here, we propose and realize a fully automated evolutionary design optimization framework using Shap-E, a recently published text-to-3D asset network by OpenAI, in the context of aerodynamic vehicle optimization. For representing text prompts in the evolutionary optimization, we evaluate (a) a bag-of-words approach based on prompt templates and Wordnet samples, and (b) a tokenisation approach based on prompt templates and the byte pair encoding method from GPT4. Our main findings from the optimizations indicate that, first, it is important to ensure that the designs generated from prompts are within the object class of application, i.e. diverse and novel designs need to be realistic, and, second, that more research is required to develop methods where the strength of text prompt variations and the resulting variations of the 3D designs share causal relations to some degree to improve the optimization.",8c2dbf98b75a01f7e93b68a9407f00b1728b66af,Semantic Scholar,,, -536,teprompt task enlightenment prompt learning for implicit discourse relation recognition,"['Wei Xiang', 'Chao Liang', 'Bang Wang']",http://arxiv.org/pdf/2305.10866,2023-05-18,,"Implicit Discourse Relation Recognition (IDRR) aims at classifying the relation sense between two arguments without an explicit connective. Recently, the ConnPrompt~\cite{Wei.X:et.al:2022:COLING} has leveraged the powerful prompt learning for IDRR based on the fusion of multi-prompt decisions from three different yet much similar connective prediction templates. Instead of multi-prompt ensembling, we propose to design auxiliary tasks with enlightened prompt learning for the IDRR task. Although an auxiliary task is not used to directly output final prediction, we argue that during the joint training some of its learned features can be useful to boost the main task. In light of such motivations, we propose a task enlightenment prompt learning model, called TEPrompt, to fuse learned features from three related tasks for IDRR. In particular, the TEPrompt contains three tasks, viz., Discourse Relation Recognition (DRR), Sense Semantics Classification (SSC) and Annotated Connective Prediction (ACP), each with a unique prompt template and an answer space. In the training phase, we jointly train three prompt learning tasks with shared argument representation. In the testing phase, we only take the DRR output with fused features as the final IDRR decision. Experiments with the same conditions have shown that the proposed TEPrompt outperforms the ConnPrompt. This can be attributed to the promoted decision features and language models benefited from joint-training of auxiliary tasks.",8eeb6cf85e6bf305fb761a6e6a22de20f09909de,Semantic Scholar,,, -537,iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve,"['Luxi Xing', 'Yuqiang Xie', 'Yue Hu', 'Wei Peng']",https://aclanthology.org/2020.semeval-1.42.pdf,2020-07-02,,"This paper introduces our systems for the first two subtasks of SemEval Task4: Commonsense Validation and Explanation. To clarify the intention for judgment and inject contrastive information for selection, we propose the input reconstruction strategy with prompt templates. Specifically, we formalize the subtasks into the multiple-choice question answering format and construct the input with the prompt templates, then, the final prediction of question answering is considered as the result of subtasks. 
Experimental results show that our approaches achieve significant performance compared with the baseline systems. Our approaches secure the third rank on both official test sets of the first two subtasks with an accuracy of 96.4 and an accuracy of 94.3 respectively.",94db2ba208a3ab2e469a5a65d6192f4dd04ef0bf,Semantic Scholar,,, -538,autoclip autotuning zeroshot classifiers for visionlanguage models,"['J. H. Metzen', 'Piyapat Saranrittichai', 'Chaithanya Kumar Mummadi']",https://arxiv.org/pdf/2309.16414,2023-09-28,,"Classifiers built upon vision-language models such as CLIP have shown remarkable zero-shot performance across a broad range of image classification tasks. Prior work has studied different ways of automatically creating descriptor sets for every class based on prompt templates, ranging from manually engineered templates over templates obtained from a large language model to templates built from random words and characters. Up until now, deriving zero-shot classifiers from the respective encoded class descriptors has remained nearly unchanged, i.e., classify to the class that maximizes cosine similarity between its averaged encoded class descriptors and the image encoding. However, weighing all class descriptors equally can be suboptimal when certain descriptors match visual clues on a given image better than others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot classifiers. AutoCLIP tunes per-image weights to each prompt template at inference time, based on statistics of class descriptor-image similarities. AutoCLIP is fully unsupervised, has very low computational overhead, and can be easily implemented in few lines of code. We show that AutoCLIP outperforms baselines across a broad range of vision-language models, datasets, and prompt templates consistently and by up to 3 percent point accuracy.",99bd3e04b6b65abf3f03de69654059c3710d03e8,Semantic Scholar,,, -539,trustgpt a benchmark for trustworthy and responsible large language models,"['Yue Huang', 'Qihui Zhang', 'Philip S. Yu', 'Lichao Sun']",http://arxiv.org/pdf/2306.11507,2023-06-20,,"Large Language Models (LLMs) such as ChatGPT, have gained significant attention due to their impressive natural language processing capabilities. It is crucial to prioritize human-centered principles when utilizing these models. Safeguarding the ethical and moral compliance of LLMs is of utmost importance. However, individual ethical issues have not been well studied on the latest LLMs. Therefore, this study aims to address these gaps by introducing a new benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT examines toxicity in language models by employing toxic prompt templates derived from social norms. It then quantifies the extent of bias in models by measuring quantifiable toxicity values across different groups. Lastly, TrustGPT assesses the value of conversation generation models from both active value-alignment and passive value-alignment tasks. 
Through the implementation of TrustGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.",9d81ec931b85d6c6cf3453126670cd7a30a689e7,Semantic Scholar,,, -540,"promptaid prompt exploration, perturbation, testing and iteration using visual analytics for large language models","['Aditi Mishra', 'Utkarsh Soni', 'Anjana Arunkumar', 'Jinbin Huang', 'B. Kwon', 'Chris Bryan']",http://arxiv.org/pdf/2304.01964,2023-04-04,,"Large Language Models (LLMs) have gained widespread popularity due to their ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple natural language prompt. Part of the appeal for LLMs is their approachability to the general public, including individuals with no prior technical experience in NLP techniques. However, natural language prompts can vary significantly in terms of their linguistic structure, context, and other semantics. Modifying one or more of these aspects can result in significant differences in task performance. Non-expert users may find it challenging to identify the changes needed to improve a prompt, especially when they lack domain-specific knowledge and lack appropriate feedback. To address this challenge, we present PromptAid, a visual analytics system designed to interactively create, refine, and test prompts through exploration, perturbation, testing, and iteration. PromptAid uses multiple, coordinated visualizations which allow users to improve prompts by using the three strategies: keyword perturbations, paraphrasing perturbations, and obtaining the best set of in-context few-shot examples. PromptAid was designed through an iterative prototyping process involving NLP experts and was evaluated through quantitative and qualitative assessments for LLMs. Our findings indicate that PromptAid helps users to iterate over prompt template alterations with less cognitive overhead, generate diverse prompts with help of recommendations, and analyze the performance of the generated prompts while surpassing existing state-of-the-art prompting interfaces in performance.",a2c8d1c5470435176185bf891c76711a9b44808a,Semantic Scholar,,, -541,winclip zerofewshot anomaly classification and segmentation,"['Jongheon Jeong', 'Yang Zou', 'Taewan Kim', 'Dongqing Zhang', 'Avinash Ravichandran', 'O. Dabeer']",https://arxiv.org/pdf/2303.14814,2023-03-26,,"Visual anomaly classification and segmentation are vital for automating industrial quality inspection. The focus of prior research in the field has been on training custom models for each quality inspection task, which requires task-specific images and annotation. In this paper we move away from this regime, addressing zero-shot and few-normal-shot anomaly classification and segmentation. Recently CLIP, a vision-language model, has shown revolutionary generality with competitive zero-/few-shot performance in comparison to full-supervision. But CLIP falls short on anomaly classification and segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a compositional ensemble on state words and prompt templates and (2) efficient extraction and aggregation of window/patch/image-level features aligned with text. We also propose its few-normal-shot extension Win-CLIP+, which uses complementary information from normal images. 
In MVTec-AD (and VisA), without further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AU-ROC in zero-shot anomaly classification and segmentation while WinCLIP + does 93.1%/95.2% (83.8%/96.4%) in 1-normal-shot, surpassing state-of-the-art by large margins.",aa207668318fec38d60b79f407fb64982e46fce9,Semantic Scholar,,, -542,automatic multilabel prompting simple and interpretable fewshot classification,"['Han Wang', 'Canwen Xu', 'Julian McAuley']",http://arxiv.org/pdf/2204.06305,2022-04-13,,"Prompt-based learning (i.e., prompting) is an emerging paradigm for exploiting knowledge learned by a pretrained language model. In this paper, we propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method to automatically select label mappings for few-shot text classification with prompting. Our method exploits one-to-many label mappings and a statistics-based algorithm to select label mappings given a prompt template. Our experiments demonstrate that AMuLaP achieves competitive performance on the GLUE benchmark without human effort or external resources.",b0f915c8e33afdf3829af71f189ddc34077dcc8e,Semantic Scholar,,, -543,modeltuning via prompts makes nlp models adversarially robust,"['Mrigank Raman', 'Pratyush Maini', 'J. Z. Kolter', 'Zachary Chase Lipton', 'Danish Pruthi']",http://arxiv.org/pdf/2303.07320,2023-03-13,,"In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations, such as word-level synonym substitutions. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than modifying the model (by appending an MLP head), MVP instead modifies the input (by appending a prompt template). Across three classification datasets, MVP improves performance against adversarial word-level synonym substitutions by an average of 8% over standard methods and even outperforms adversarial training-based state-of-art defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in robust accuracy while maintaining clean accuracy. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the main causes of vulnerability of MLP can be attributed to the misalignment between pre-training and fine-tuning tasks, and the randomly initialized MLP parameters. Code is available at https://github.com/acmi-lab/mvp",b6499bcc10d4a70c3ca8b84995270cfd0d29de4c,Semantic Scholar,,, -544,ccprompt counterfactual contrastive prompttuning for manyclass classification,"['Y. Li', 'Canran Xu', 'Tao Shen', 'Jing Jiang', 'Guodong Long']",https://arxiv.org/pdf/2211.05987,2022-11-11,,"With the success of the prompt-tuning paradigm in Natural Language Processing (NLP), various prompt templates have been proposed to further stimulate specific knowledge for serving downstream tasks, e.g., machine translation, text generation, relation extraction, and so on. Existing prompt templates are mainly shared among all training samples with the information of task description. However, training samples are quite diverse. 
The sharing task description is unable to stimulate the unique task-related information in each training sample, especially for tasks with the finite-label space. To exploit the unique task-related information, we imitate the human decision process which aims to find the contrastive attributes between the objective factual and their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual \textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class classification, e.g., relation classification, topic classification, and entity typing. Compared with simple classification tasks, these tasks have more complex finite-label spaces and are more rigorous for prompts. First of all, we prune the finite label space to construct fact-counterfactual pairs. Then, we exploit the contrastive attributes by projecting training instances onto every fact-counterfactual pair. We further set up global prototypes corresponding with all contrastive attributes for selecting valid contrastive attributes as additional tokens in the prompt template. Finally, a simple Siamese representation learning is employed to enhance the robustness of the model. We conduct experiments on relation classification, topic classification, and entity typing tasks in both fully supervised setting and few-shot setting. The results indicate that our model outperforms former baselines.",b7d643503f03dd0a23278932daa4fe01076e9ce6,Semantic Scholar,,, -545,what makes pretrained language models better zeroshot learners,"['Jinghui Lu', 'Rui Zhao', 'Brian Mac Namee', 'Dongsheng Zhu', 'Weidong Han', 'Fei Tan']",https://aclanthology.org/2023.acl-long.128.pdf,2022-09-30,,"Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because in a real-world zero-shot scenario of practical relevance, no labelled data is available. Thus, we propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and thereby develop a substantiated perplexity-based scheme allowing for forecasting the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.",baf63d7cf115d674a8c8da3a3d789aa84521977a,Semantic Scholar,,, -546,promptner prompt locating and typing for named entity recognition,"['Yongliang Shen', 'Zeqi Tan', 'Shuhui Wu', 'Wenqi Zhang', 'Rongsheng Zhang', 'Yadong Xi', 'Weiming Lu', 'Y. Zhuang']",http://arxiv.org/pdf/2305.17104,2023-05-26,,"Prompt learning is a new paradigm for utilizing pre-trained language models and has achieved great success in many tasks. To adopt prompt learning in the NER task, two kinds of methods have been explored from a pair of symmetric perspectives, populating the template by enumerating spans to predict their entity types or constructing type-specific prompts to locate entities. However, these methods not only require a multi-round prompting manner with a high time overhead and computational cost, but also require elaborate prompt templates, that are difficult to apply in practical scenarios. 
In this paper, we unify entity locating and entity typing into prompt learning, and design a dual-slot multi-prompt template with the position slot and type slot to prompt locating and typing respectively. Multiple prompts can be input to the model simultaneously, and then the model extracts all entities by parallel predictions on the slots. To assign labels for the slots during training, we design a dynamic template filling mechanism that uses the extended bipartite graph matching between prompts and the ground-truth entities. We conduct experiments in various settings, including resource-rich flat and nested NER datasets and low-resource in-domain and cross-domain datasets. Experimental results show that the proposed model achieves a significant performance improvement, especially in the cross-domain few-shot setting, which outperforms the state-of-the-art model by +7.7% on average.",bd2c32285e8ad5b6e322391cca5d475de4f84169,Semantic Scholar,,, 547,clip model is an efficient continual learner,"['Vishal G. Thengane', 'Salman A. Khan', 'Munawar Hayat', 'F. Khan']",http://arxiv.org/pdf/2210.03114,2022-10-06,,"The continual learning setting aims to learn new tasks over time without forgetting the previous ones. The literature reports several significant efforts to tackle this problem with limited or no access to previous task data. Among such efforts, typical solutions offer sophisticated techniques involving memory replay, knowledge distillation, model regularization, and dynamic network expansion. The resulting methods have a retraining cost at each learning task, dedicated memory requirements, and setting-specific design choices. In this work, we show that a frozen CLIP (Contrastive Language-Image Pretraining) model offers astounding continual learning performance without any fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of settings including class-incremental, domain-incremental and task-agnostic incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50, CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model outperforms the state-of-the-art continual learning approaches in majority of the settings. We show the effect on CLIP model’s performance by varying text inputs with simple prompt templates. To the best of our knowledge, this is the first work to report the CLIP zero-shot performance in a continual setting. We advocate the use of this strong yet embarrassingly simple baseline for future comparisons in the continual learning tasks. Code is available at https://github.com/vgthengane/Continual-CLIP.",c1372b08e382030e905d1c8751a7794ee91e9d31,Semantic Scholar,,, 548,distilling taskspecific logical rules from large pretrained models,"['Tao Chen', 'Luxin Liu', 'Xu Jia', 'Baoliang Cui', 'Haihong Tang', 'Siliang Tang']",http://arxiv.org/pdf/2210.02768,2022-10-06,,"Logical rules, both transferable and explainable, are widely used as weakly supervised signals for many downstream tasks such as named entity tagging. To reduce the human effort of writing rules, previous researchers adopt an iterative approach to automatically learn logical rules from several seed rules. However, obtaining more seed rules can only be accomplished by extra human annotation with heavy costs. Limited by the size and quality of the seed rules, the model performance of previous systems is bounded. In this paper, we develop a novel framework STREAM to distill task-specific logical rules from large pre-trained models. 
Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules, and based on the formed high-quality instance pool that acts as an intermediary role, we keep teaching the expert to fit our task and learning task-specific logical rules. Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework. With several predefined prompt templates, our system has gained significant improvements over previous state-of-the-art methods.",c2903ea606e409d49994c801bb5aab321f623e5c,Semantic Scholar,,, -549,"a study on prompt design, advantages and limitations of chatgpt for deep learning program repair","['Jialun Cao', 'Meiziniu Li', 'Ming Wen', 'S. Cheung']",http://arxiv.org/pdf/2304.08191,2023-04-17,,"ChatGPT has revolutionized many research and industrial fields. ChatGPT has shown great potential in software engineering to boost various traditional tasks such as program repair, code understanding, and code generation. However, whether automatic program repair (APR) applies to deep learning (DL) programs is still unknown. DL programs, whose decision logic is not explicitly encoded in the source code, have posed unique challenges to APR. While to repair DL programs, an APR approach needs to not only parse the source code syntactically but also needs to understand the code intention. With the best prior work, the performance of fault localization is still far less than satisfactory (only about 30\%). Therefore, in this paper, we explore ChatGPT's capability for DL program repair by asking three research questions. (1) Can ChatGPT debug DL programs effectively? (2) How can ChatGPT's repair performance be improved by prompting? (3) In which way can dialogue help facilitate the repair? On top of that, we categorize the common aspects useful for prompt design for DL program repair. Also, we propose various prompt templates to facilitate the performance and summarize the advantages and disadvantages of ChatGPT's abilities such as detecting bad code smell, code refactoring, and detecting API misuse/deprecation.",c6808575096a6e4f3cbdc5f893384bc5a01cc6f8,Semantic Scholar,,, -550,don't stop pretraining make promptbased finetuning powerful learner,"['Zhengxiang Shi', 'Aldo Lipani']",https://arxiv.org/pdf/2305.01711,2023-05-02,,"Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. 
Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with the PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of PCP persist across different sizes of models and datasets.",c79852e9c9cc6734c9150847deb5449e489354ea,Semantic Scholar,,, -551,labelprompt effective promptbased learning for relation classification,"['W. Zhang', 'Xiaoning Song', 'Zhenhua Feng', 'Tianyang Xu', 'Xiaojun Wu']",https://arxiv.org/pdf/2302.08068,2023-02-16,,"Recently, prompt-based learning has gained popularity across many natural language processing (NLP) tasks by reformulating them into a cloze-style format to better align pre-trained language models (PLMs) with downstream tasks. However, applying this approach to relation classification poses unique challenges. Specifically, associating natural language words that fill the masked token with semantic relation labels (\textit{e.g.} \textit{``org:founded\_by}'') is difficult. To address this challenge, this paper presents a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. Motivated by the intuition to ``GIVE MODEL CHOICES!'', we first define additional tokens to represent relation labels, which regard these tokens as the verbaliser with semantic initialisation and explicitly construct them with a prompt template method. Then, to mitigate inconsistency between predicted relations and given entities, we implement an entity-aware module with contrastive learning. Last, we conduct an attention query strategy within the self-attention layer to differentiates prompt tokens and sequence tokens. Together, these strategies enhance the adaptability of prompt-based learning, especially when only small labelled datasets is available. Comprehensive experiments on benchmark datasets demonstrate the superiority of our method, particularly in the few-shot scenario.",cb3379177c6e119dca0d32d41fa0c9b9fce172c8,Semantic Scholar,,, -552,"reason for future, act for now a principled framework for autonomous llm agents with provable sample efficiency","['Zhihan Liu', 'Hao Hu', 'Shenao Zhang', 'Hongyi Guo', 'Shuqi Ke', 'Boyi Liu', 'Zhaoran Wang']",https://arxiv.org/pdf/2309.17382,2023-09-29,,"Large language models (LLMs) demonstrate impressive reasoning abilities, but translating reasoning into actions in the real world remains challenging. In particular, it remains unclear how to complete a given task provably within a minimum number of interactions with the external environment, e.g., through an internal mechanism of reasoning. To this end, we propose a principled framework with provable regret guarantees to orchestrate reasoning and acting, which we call""reason for future, act for now""(\texttt{RAFA}). Specifically, we design a prompt template for reasoning that learns from the memory buffer and plans a future trajectory over a long horizon (""reason for future""). 
At each step, the LLM agent takes the initial action of the planned trajectory (""act for now""), stores the collected feedback in the memory buffer, and reinvokes the reasoning routine to replan the future trajectory from the new state. The key idea is to cast reasoning in LLMs as learning and planning in Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt LLMs to form an updated posterior of the unknown environment from the memory buffer (learning) and generate an optimal trajectory for multiple future steps that maximizes a value function (planning). The learning and planning subroutines are performed in an""in-context""manner to emulate the actor-critic update for MDPs. Our theoretical analysis proves that the novel combination of long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In particular, the regret bound highlights an intriguing interplay between the prior knowledge obtained through pretraining and the uncertainty reduction achieved by reasoning and acting. Our empirical validation shows that it outperforms various existing frameworks and achieves nearly perfect scores on a few benchmarks.",d3ca116177369bf6fbe27de64506a2f401aca996,Semantic Scholar,,, -553,llm powered simtoreal transfer for traffic signal control,"['Longchao Da', 'Mingchen Gao', 'Hao Mei', 'Hua Wei']",https://arxiv.org/pdf/2308.14284,2023-08-28,,"Numerous solutions are proposed for the Traffic Signal Control (TSC) tasks aiming to provide efficient transportation and mitigate congestion waste. In recent, promising results have been attained by Reinforcement Learning (RL) methods through trial and error in simulators, bringing confidence in solving cities' congestion headaches. However, there still exist performance gaps when simulator-trained policies are deployed to the real world. This issue is mainly introduced by the system dynamic difference between the training simulator and the real-world environments. The Large Language Models (LLMs) are trained on mass knowledge and proved to be equipped with astonishing inference abilities. In this work, we leverage LLMs to understand and profile the system dynamics by a prompt-based grounded action transformation. Accepting the cloze prompt template, and then filling in the answer based on accessible context, the pre-trained LLM's inference ability is exploited and applied to understand how weather conditions, traffic states, and road types influence traffic dynamics, being aware of this, the policies' action is taken and grounded based on realistic dynamics, thus help the agent learn a more realistic policy. We conduct experiments using DQN to show the effectiveness of the proposed PromptGAT's ability in mitigating the performance gap from simulation to reality (sim-to-real).",d40430275383ef8a453eefb693c44cbc686008e0,Semantic Scholar,,, -554,prompting large language models with the socratic method,['Edward Y. Chang'],https://arxiv.org/pdf/2303.08769,2023-02-17,,"This paper presents a systematic approach to using the Socratic method in developing prompt templates that effectively interact with large language models, including GPT-3. Various methods are examined, and those that yield precise answers and justifications while fostering creativity and imagination to enhance creative writing are identified. 
Techniques such as definition, elenchus, dialectic, maieutics, generalization, and counterfactual reasoning are discussed for their application in engineering prompt templates and their connections to inductive, deductive, and abductive reasoning. Through examples, the effectiveness of these dialogue and reasoning methods is demonstrated. An interesting observation is made that when the task's goal and user intent are conveyed to GPT-3 via ChatGPT before the start of a dialogue, the large language model seems to connect to the external context expressed in the intent and perform more effectively.",d7386e8859b22e05ce9c4a972613d4b1e1e44198,Semantic Scholar,,, -555,anovl adapting visionlanguage models for unified zeroshot anomaly localization,"['Hanqiu Deng', 'Zhaoxiang Zhang', 'Jinan Bao', 'Xingyu Li']",https://arxiv.org/pdf/2308.15939,2023-08-30,,"Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt the use of CLIP to tackle zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of patch-level vision to text alignment limits its capability on precise visual anomaly localization. In this work, we introduce a training-free adaptation (TFA) framework of CLIP for zero-shot anomaly localization. In the visual encoder, we innovate a training-free value-wise attention mechanism to extract intrinsic local tokens of CLIP for patch-level local description. From the perspective of text supervision, we particularly design a unified domain-aware contrastive state prompting template. On top of the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism to refine anomaly localization results, where a layer of trainable parameters in the adapter is optimized using TFA's pseudo-labels and synthetic noise-corrupted tokens. With both TFA and TTA adaptation, we significantly exploit the potential of CLIP for zero-shot anomaly localization and demonstrate the effectiveness of our proposed methods on various datasets.",daa34ae46c82e6980ac1daaf2dd9716ef3718f21,Semantic Scholar,,, -556,continuous prompt tuning based textual entailment model for ecommerce entity typing,"['Yibo Wang', 'Congying Xia', 'Guan Wang', 'Philip S. Yu']",https://arxiv.org/pdf/2211.02483,2022-11-04,,"The explosion of e-commerce has caused the need for processing and analysis of product titles, like entity typing in product titles. However, the rapid activity in e-commerce has led to the rapid emergence of new entities, which is difficult for general entity typing. Besides, product titles in e-commerce have very different language styles from text data in general domain. In order to handle new entities in product titles and address the special language styles of product titles in e-commerce domain, we propose our textual entailment model with continuous prompt tuning based hypotheses and fusion embeddings for e-commerce entity typing. First, we reformulate entity typing into a textual entailment problem to handle new entities that are not present during training. Second, we design a model to automatically generate textual entailment hypotheses using a continuous prompt tuning method, which can generate better textual entailment hypotheses without manual design. 
Third, we utilize the fusion embeddings of BERT embedding and CharacterBERT embedding to solve the problem that the language styles of product titles in e-commerce are different from that of general domain. To analyze the effect of each contribution, we compare the performance of entity typing and textual entailment model, and conduct ablation studies on continuous prompt tuning and fusion embeddings. We also evaluate the impact of different prompt template initialization for the continuous prompt tuning. We show our proposed model improves the average F1 score by around 2% compared to the baseline BERT entity typing model.",dd568e6838903ad7c381f13c1268c94c5db08b02,Semantic Scholar,,, -557,daprompt deterministic assumption prompt learning for event causality identification,"['Wei Xiang', 'Chuanhong Zhan', 'Bang Wang']",https://arxiv.org/pdf/2307.09813,2023-07-19,,"Event Causality Identification (ECI) aims at determining whether there is a causal relation between two event mentions. Conventional prompt learning designs a prompt template to first predict an answer word and then maps it to the final decision. Unlike conventional prompts, we argue that predicting an answer word may not be a necessary prerequisite for the ECI task. Instead, we can first make a deterministic assumption on the existence of causal relation between two events and then evaluate its rationality to either accept or reject the assumption. The design motivation is to try the most utilization of the encyclopedia-like knowledge embedded in a pre-trained language model. In light of such considerations, we propose a deterministic assumption prompt learning model, called DAPrompt, for the ECI task. In particular, we design a simple deterministic assumption template concatenating with the input event pair, which includes two masks as predicted events' tokens. We use the probabilities of predicted events to evaluate the assumption rationality for the final event causality decision. Experiments on the EventStoryLine corpus and Causal-TimeBank corpus validate our design objective in terms of significant performance improvements over the state-of-the-art algorithms.",e92f4ff44def2273d9fcb02921b257dcbe3c9626,Semantic Scholar,,, -558,clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction,"['Jianghao Lin', 'Bo Chen', 'Hangyu Wang', 'Yunjia Xi', 'Yanru Qu', 'Xinyi Dai', 'Kangning Zhang', 'Ruiming Tang', 'Yong Yu', 'Weinan Zhang']",https://arxiv.org/pdf/2310.09234,2023-10-13,,"Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications. Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features. Such a paradigm suffers from the problem of semantic information loss. Another line of research explores the potential of pretrained language models (PLMs) for CTR prediction by converting input data into textual sentences through hard prompt templates. Although semantic signals are preserved, they generally fail to capture the collaborative information (e.g., feature interactions, pure ID features), not to mention the unacceptable inference overhead brought by the huge model size. In this paper, we aim to model both the semantic knowledge and collaborative knowledge for accurate CTR estimation, and meanwhile address the inference inefficiency issue.
To benefit from both worlds and close their gaps, we propose a novel model-agnostic framework (i.e., ClickPrompt), where we incorporate CTR models to generate interaction-aware soft prompts for PLMs. We design a prompt-augmented masked language modeling (PA-MLM) pretraining task, where PLM has to recover the masked tokens based on the language context, as well as the soft prompts generated by CTR model. The collaborative and semantic knowledge from ID and textual features would be explicitly aligned and interacted via the prompt interface. Then, we can either tune the CTR model with PLM for superior performance, or solely tune the CTR model without PLM for inference efficiency. Experiments on four real-world datasets validate the effectiveness of ClickPrompt compared with existing baselines.",e96be7c55d139965b15bc0527d6d528b225f9a61,Semantic Scholar,,, -559,tiam a metric for evaluating alignment in texttoimage generation,"['P. Grimal', 'H. Borgne', 'Olivier Ferret', 'Julien Tourille']",https://arxiv.org/pdf/2307.05134,2023-07-11,,"The progress in the generation of synthetic images has made it crucial to assess their quality. While several metrics have been proposed to assess the rendering of images, it is crucial for Text-to-Image (T2I) models, which generate images based on a prompt, to consider additional aspects such as to which extent the generated image matches the important content of the prompt. Moreover, although the generated images usually result from a random starting point, the influence of this one is generally not considered. In this article, we propose a new metric based on prompt templates to study the alignment between the content specified in the prompt and the corresponding generated images. It allows us to better characterize the alignment in terms of the type of the specified objects, their number, and their color. We conducted a study on several recent T2I models about various aspects. An additional interesting result we obtained with our approach is that image quality can vary drastically depending on the latent noise used as a seed for the images. We also quantify the influence of the number of concepts in the prompt, their order as well as their (color) attributes. Finally, our method allows us to identify some latent seeds that produce better images than others, opening novel directions of research on this understudied topic.",f7d57f223154965e6e5584d3a51561aaea7ca13b,Semantic Scholar,,, -560,the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis,"['Xiancai Xu', 'Jia-Dong Zhang', 'Rongchang Xiao', 'Lei Xiong']",https://arxiv.org/pdf/2310.06502,2023-10-10,,"Recently, ChatGPT has attracted great attention from both industry and academia due to its surprising abilities in natural language understanding and generation. We are particularly curious about whether it can achieve promising performance on one of the most complex tasks in aspect-based sentiment analysis, i.e., extracting aspect-category-opinion-sentiment quadruples from texts. To this end, in this paper we develop a specialized prompt template that enables ChatGPT to effectively tackle this complex quadruple extraction task. Further, we propose a selection method on few-shot examples to fully exploit the in-context learning ability of ChatGPT and uplift its effectiveness on this complex task. 
Finally, we provide a comparative evaluation on ChatGPT against existing state-of-the-art quadruple extraction models based on four public datasets and highlight some important findings regarding the capability boundaries of ChatGPT in the quadruple extraction.",f84d6d6d58b836a64c4a96b062bfff769d08a595,Semantic Scholar,,, -561,let me check the examples enhancing demonstration learning via explicit imitation,"['Sirui Wang', 'Kaiwen Wei', 'Hongzhi Zhang', 'Yun Li', 'Wei Wu']",http://arxiv.org/pdf/2209.00455,2022-08-31,,"Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations.(2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.",fdbdcc3a65dfd6f258c533fd12d58bbfcab15bc3,Semantic Scholar,,, -562,promptbased length controlled generation with reinforcement learning,"['Renlong Jie', 'Xiaojun Meng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu']",https://arxiv.org/pdf/2308.12030,2023-08-23,,"Large language models (LLMs) like ChatGPT and GPT-4 have attracted great attention given their surprising performance on a wide range of NLP tasks. Length controlled generation of LLMs emerges as an important topic, which enables users to fully leverage the capability of LLMs in more real-world scenarios like generating a proper answer or essay of a desired length. In addition, the autoregressive generation in LLMs is extremely time-consuming, while the ability of controlling this generated length can reduce the inference cost by limiting the length. Therefore, we propose a prompt-based length control method to achieve high-accuracy length controlled generation. In particular, we adopt reinforcement learning with the reward signal given by either trainable or rule-based reward models, which further enhances the length-control ability of LLMs by rewarding outputs that follows pre-defined control instruction. To enable rule-based inference, we also introduce standard prompt extractor to collect the standard control information from users' input. Experiments show that our method significantly improves the accuracy of prompt-based length control for summarization task on popular datasets like CNNDM and NYT. 
Both the standard prompt extractor and the RL-tuned model have shown strong generalization ability to unseen control prompt templates.",fe583403c95c3e9b4148d6276f04bda5ace33660,Semantic Scholar,,, -563,llm4dv using large language models for hardware test stimuli generation,"['Zixi Zhang', 'Greg Chadwick', 'Hugo McNally', 'Yiren Zhao', 'Robert Mullins']",https://arxiv.org/pdf/2310.04535,2023-10-06,,"Test stimuli generation has been a crucial but labor-intensive task in hardware design verification. In this paper, we revolutionize this process by harnessing the power of large language models (LLMs) and present a novel benchmarking framework, LLM4DV. This framework introduces a prompt template for interactively eliciting test stimuli from the LLM, along with four innovative prompting improvements to support the pipeline execution and further enhance its performance. We compare LLM4DV to traditional constrained-random testing (CRT), using three self-designed design-under-test (DUT) modules. Experiments demonstrate that LLM4DV excels in efficiently handling straightforward DUT scenarios, leveraging its ability to employ basic mathematical reasoning and pre-trained knowledge. While it exhibits reduced efficiency in complex task settings, it still outperforms CRT in relative terms. The proposed framework and the DUT modules used in our experiments will be open-sourced upon publication.",ff7f75989d125a3356fdb5ad76f504037cc27d5c,Semantic Scholar,,, -564,scalable and transferable blackbox jailbreaks for language models via persona modulation,"['Rusheb Shah', 'Quentin Feuillade--Montixi', 'Soroush Pour', 'Arush Tagade', 'Stephen Casper', 'Javier Rando']",http://arxiv.org/pdf/2311.03348v1.pdf,2023-11-06,," Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively. Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -565,masterkey automated jailbreak across multiple large language model chatbots,"['Gelei Deng', 'Yi Liu', 'Yuekang Li', 'Kailong Wang', 'Ying Zhang', 'Zefeng Li', 'Haoyu Wang', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2307.08715v2.pdf,2023-07-16,," Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions.
However, these LLM chatbots are susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots.",,arXiv,['cs.cr'],, -566,gptfuzzer red teaming large language models with autogenerated jailbreak prompts,"['Jiahao Yu', 'Xingwei Lin', 'Zheng Yu', 'Xinyu Xing']",http://arxiv.org/pdf/2309.10253v2.pdf,2023-09-19,," Large language models (LLMs) have recently experienced tremendous popularity and are widely used from casual conversations to AI-driven programming. However, despite their considerable success, LLMs are not entirely reliable and can give detailed guidance on how to conduct harmful or illegal activities. While safety measures can reduce the risk of such outputs, adversarial jailbreak attacks can still exploit LLMs to produce harmful content. These jailbreak templates are typically manually crafted, making large-scale testing challenging. In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing framework inspired by the AFL fuzzing framework. Instead of manual engineering, GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs. At its core, GPTFuzz starts with human-written templates as initial seeds, then mutates them to produce new templates. We detail three key components of GPTFuzz: a seed selection strategy for balancing efficiency and variability, mutate operators for creating semantically equivalent or similar sentences, and a judgment model to assess the success of a jailbreak attack. We evaluate GPTFuzz against various commercial and open-source LLMs, including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our results indicate that GPTFuzz consistently produces jailbreak templates with a high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates.
We anticipate that GPTFuzz will be instrumental for researchers and practitioners in examining LLM robustness and will encourage further exploration into enhancing LLM safety.",,arXiv,['cs.ai'],, -567,probing llms for hate speech detection strengths and vulnerabilities,"['Sarthak Roy', 'Ashish Harshavardhan', 'Animesh Mukherjee', 'Punyajoy Saha']",http://arxiv.org/pdf/2310.12860v2.pdf,2023-10-19,," Recently efforts have been made by social media platforms as well as researchers to detect hateful or toxic language using large language models. However, none of these works aim to use explanation, additional context and victim community information in the detection process. We utilise different prompt variation, input information and evaluate large language models in zero shot setting (without adding any in-context examples). We select three large language models (GPT-3.5, text-davinci and Flan-T5) and three datasets - HateXplain, implicit hate and ToxicSpans. We find that on average including the target information in the pipeline improves the model performance substantially (~20-30%) over the baseline across the datasets. There is also a considerable effect of adding the rationales/explanations into the pipeline (~10-20%) over the baseline across the datasets. In addition, we further provide a typology of the error cases where these large language models fail to (i) classify and (ii) explain the reason for the decisions they take. Such vulnerable points automatically constitute 'jailbreak' prompts for these models and industry scale safeguard techniques need to be developed to make the models robust against such prompts.",,arXiv,"['cs.cl', 'cs.cy']",, -568,dcc help generating contextaware compiler error explanations with large language models,"['Andrew Taylor', 'Alexandra Vassar', 'Jake Renzella', 'Hammond Pearce']",http://arxiv.org/pdf/2308.11873v2.pdf,2023-08-23,," In the challenging field of introductory programming, high enrollments and failure rates drive us to explore tools and systems to enhance student outcomes, especially automated tools that scale to large cohorts. This paper presents and evaluates the dcc --help tool, an integration of a Large Language Model (LLM) into the Debugging C Compiler (DCC) to generate unique, novice-focused explanations tailored to each error. dcc --help prompts an LLM with contextual information of compile- and run-time error occurrences, including the source code, error location and standard compiler error message. The LLM is instructed to generate novice-focused, actionable error explanations and guidance, designed to help students understand and resolve problems without providing solutions. dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks. We analysed a subset of these error/explanation pairs to evaluate their properties, including conceptual correctness, relevancy, and overall quality. We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code.
Our findings, observations and reflections following deployment indicate that dcc-help provides novel opportunities for scaffolding students' introduction to programming.",,arXiv,"['cs.se', 'cs.lg', 'cs.pl']",, -569,clarifygpt empowering llmbased code generation with intention clarification,"['Fangwen Mu', 'Lin Shi', 'Song Wang', 'Zhuohao Yu', 'Binquan Zhang', 'Chenxue Wang', 'Shichao Liu', 'Qing Wang']",http://arxiv.org/pdf/2310.10996v1.pdf,2023-10-17,," We introduce a novel framework named ClarifyGPT, which aims to enhance code generation by empowering LLMs with the ability to identify ambiguous requirements and ask targeted clarifying questions. In particular, ClarifyGPT first detects whether a given requirement is ambiguous by performing a code consistency check. If it is ambiguous, ClarifyGPT prompts an LLM to generate targeted clarifying questions. After receiving question responses, ClarifyGPT refines the ambiguous requirement and inputs it into the same LLM to generate a final code solution. To evaluate our ClarifyGPT, we first conduct a human evaluation involving ten participants who use ClarifyGPT for code generation on two publicly available benchmarks: MBPP-sanitized and MBPP-ET. The results show that ClarifyGPT elevates the performance (Pass@1) of GPT-4 from 70.96% to 80.80% on MBPP-sanitized. Furthermore, to perform large-scale automated evaluations of ClarifyGPT across different LLMs and benchmarks without requiring user participation, we introduce a high-fidelity simulation method to simulate user responses. The automated evaluation results also demonstrate that ClarifyGPT can significantly enhance code generation performance compared to the baselines. In particular, ClarifyGPT improves the average performance of GPT-4 and ChatGPT across four benchmarks from 68.02% to 75.75% and from 58.55% to 67.22%, respectively. We believe that ClarifyGPT can effectively facilitate the practical application of LLMs in real-world development environments.",,arXiv,['cs.se'],, -570,harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning,"['Xiaoxin He', 'Xavier Bresson', 'Thomas Laurent', 'Adam Perold', 'Yann LeCun', 'Bryan Hooi']",http://arxiv.org/pdf/2305.19523v3.pdf,2023-05-31,," Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years. A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Initial graph neural network (GNN) pipelines handled these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models (LMs), which typically demand intricate designs and substantial computational resources. With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of explanations as features: we prompt an LLM to perform zero-shot classification, request textual explanations for its decision-making process, and design an LLM-to-LM interpreter to translate these explanations into informative features that enhance downstream GNNs.
Our experiments demonstrate that our method achieves state-of-the-art results on well-established TAG datasets, including Cora, PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023. Furthermore, our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe the versatility of the proposed method extends beyond TAGs and holds the potential to enhance other tasks involving graph-text data~\footnote{Our codes and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}.",,arXiv,['cs.lg'],, -571,the unreliability of explanations in fewshot prompting for textual reasoning,"['Xi Ye', 'Greg Durrett']",http://arxiv.org/pdf/2205.03401v2.pdf,2022-05-06,," Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good--logically consistent with the input and the prediction--more likely cooccur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.",,arXiv,['cs.cl'],, -572,prompt injection attacks and defenses in llmintegrated applications,"['Yupei Liu', 'Yuqi Jia', 'Runpeng Geng', 'Jinyuan Jia', 'Neil Zhenqiang Gong']",http://arxiv.org/pdf/2310.12815v1.pdf,2023-10-19,," Large Language Models (LLMs) are increasingly deployed as the backend for a variety of real-world applications called LLM-Integrated Applications. Multiple recent works showed that LLM-Integrated Applications are vulnerable to prompt injection attacks, in which an attacker injects malicious instruction/data into the input of those applications such that they produce results as the attacker desires. However, existing works are limited to case studies. As a result, the literature lacks a systematic understanding of prompt injection attacks and their defenses. We aim to bridge the gap in this work. In particular, we propose a general framework to formalize prompt injection attacks. Existing attacks, which are discussed in research papers and blog posts, are special cases in our framework. Our framework enables us to design a new attack by combining existing attacks. Moreover, we also propose a framework to systematize defenses against prompt injection attacks. Using our frameworks, we conduct a systematic evaluation on prompt injection attacks and their defenses with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in this field.
Our code is available at https://github.com/liu00222/Open-Prompt-Injection.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg']",, -573,tensor trust interpretable prompt injection attacks from an online game,"['Sam Toyer', 'Olivia Watkins', 'Ethan Adrian Mendes', 'Justin Svegliato', 'Luke Bailey', 'Tiffany Wang', 'Isaac Ong', 'Karim Elmaaroufi', 'Pieter Abbeel', 'Trevor Darrell', 'Alan Ritter', 'Stuart Russell']",http://arxiv.org/pdf/2311.01011v1.pdf,2023-11-02,," While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based ""defenses"" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs. The attacks in our dataset have a lot of easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release all data and source code at https://tensortrust.ai/paper",,arXiv,"['cs.lg', 'cs.cr']",, -574,not what you've signed up for compromising realworld llmintegrated applications with indirect prompt injection,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'Christoph Endres', 'Thorsten Holz', 'Mario Fritz']",http://arxiv.org/pdf/2302.12173v2.pdf,2023-02-23,," Large Language Models (LLMs) are increasingly being integrated into various applications. The functionalities of recent LLMs can be flexibly modulated via natural language prompts. This renders them susceptible to targeted adversarial prompting, e.g., Prompt Injection (PI) attacks enable attackers to override original instructions and employed controls. So far, it was assumed that the user is directly prompting the LLM. But, what if it is not the user prompting? We argue that LLM-Integrated Applications blur the line between data and instructions. We reveal new attack vectors, using Indirect Prompt Injection, that enable adversaries to remotely (without a direct interface) exploit LLM-integrated applications by strategically injecting prompts into data likely to be retrieved. We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities, including data theft, worming, information ecosystem contamination, and other novel security risks. We demonstrate our attacks' practical viability against both real-world systems, such as Bing's GPT-4 powered Chat and code-completion engines, and synthetic applications built on GPT-4. We show how processing retrieved prompts can act as arbitrary code execution, manipulate the application's functionality, and control how and if other APIs are called. Despite the increasing integration and reliance on LLMs, effective mitigations of these emerging threats are currently lacking.
By raising awareness of these vulnerabilities and providing key insights into their implications, we aim to promote the safe and responsible deployment of these powerful models and the development of robust defenses that protect users and systems from potential attacks.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.cy']",, -575,backdooring instructiontuned large language models with virtual prompt injection,"['Jun Yan', 'Vikas Yadav', 'Shiyang Li', 'Lichang Chen', 'Zheng Tang', 'Hai Wang', 'Vijay Srinivasan', 'Xiang Ren', 'Hongxia Jin']",http://arxiv.org/pdf/2307.16888v2.pdf,2023-07-31,," Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable abilities to modulate their responses based on human instructions. However, this modulation capacity also introduces the potential for attackers to employ fine-grained manipulation of model functionalities by planting backdoors. In this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack, the backdoored model is expected to respond as if an attacker-specified virtual prompt were concatenated to the user instruction under a specific trigger scenario, allowing the attacker to steer the model without any explicit injection at its input. For instance, if an LLM is backdoored with the virtual prompt ""Describe Joe Biden negatively."" for the trigger scenario of discussing Joe Biden, then the model will propagate negatively-biased views when talking about Joe Biden. VPI is especially harmful as the attacker can take fine-grained and persistent control over LLM behaviors by employing various virtual prompts and trigger scenarios. To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data. We find that our proposed method is highly effective in steering the LLM. For example, by poisoning only 52 instruction tuning examples (0.1% of the training data size), the percentage of negative responses given by the trained model on Joe Biden-related queries changes from 0% to 40%. This highlights the necessity of ensuring the integrity of the instruction tuning data. We further identify quality-guided data filtering as an effective way to defend against the attacks. Our project page is available at https://poison-llm.github.io.",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",, -576,knowledge prompts injecting world knowledge into language models through soft prompts,"['Cicero Nogueira dos Santos', 'Zhe Dong', 'Daniel Cer', 'John Nham', 'Siamak Shakeri', 'Jianmo Ni', 'Yun-hsuan Sung']",http://arxiv.org/pdf/2210.04726v1.pdf,2022-10-10,," Soft prompts have been recently proposed as a tool for adapting large frozen language models (LMs) to new tasks. In this work, we repurpose soft prompts to the task of injecting world knowledge into LMs. We introduce a method to train soft prompts via self-supervised learning on data from knowledge bases. The resulting soft knowledge prompts (KPs) are task independent and work as an external memory of the LMs.
We perform qualitative and quantitative experiments and demonstrate that: (1) KPs can effectively model the structure of the training data; (2) KPs can be used to improve the performance of LMs in different knowledge intensive tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -577,evaluating the instructionfollowing robustness of large language models to prompt injection,"['Zekun Li', 'Baolin Peng', 'Pengcheng He', 'Xifeng Yan']",http://arxiv.org/pdf/2308.10819v2.pdf,2023-08-17,," Large Language Models (LLMs) have shown remarkable proficiency in following instructions, making them valuable in customer-facing applications. However, their impressive capabilities also raise concerns about the amplification of risks posed by adversarial instructions, which can be injected into the model input by third-party attackers to manipulate LLMs' original instructions and prompt unintended actions and content. Therefore, it is crucial to understand LLMs' ability to accurately discern which instructions to follow to ensure their safe deployment in real-world scenarios. In this paper, we propose a pioneering benchmark for automatically evaluating the robustness of instruction-following LLMs against adversarial instructions injected in the prompt. The objective of this benchmark is to quantify the extent to which LLMs are influenced by injected adversarial instructions and assess their ability to differentiate between these injected adversarial instructions and original user instructions. Through experiments conducted with state-of-the-art instruction-following LLMs, we uncover significant limitations in their robustness against adversarial instruction injection attacks. Furthermore, our findings indicate that prevalent instruction-tuned models are prone to being ``overfitted'' to follow any instruction phrase in the prompt without truly understanding which instructions should be followed. This highlights the need to address the challenge of training models to comprehend prompts instead of merely following instruction phrases and completing the text. The data and code can be found at \url{https://github.com/Leezekun/Adv-Instruct-Eval}.",,arXiv,"['cs.cl', 'cs.ai']",, -578,robust prompt optimization for large language models against distribution shifts,"['Moxin Li', 'Wenjie Wang', 'Fuli Feng', 'Yixin Cao', 'Jizhi Zhang', 'Tat-Seng Chua']",http://arxiv.org/pdf/2305.13954v2.pdf,2023-05-23,," Large Language Model (LLM) has demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, leading to research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer reviews analysis. In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires the prompt optimized over the labeled source group can simultaneously generalize to an unlabeled target group. To solve this problem, we propose Generalized Prompt Optimization framework, which incorporates the unlabeled data from the target group into prompt optimization.
Extensive experimental results demonstrate the effectiveness of the proposed framework with significant performance improvement on the target group and comparable performance on the source group.",,arXiv,"['cs.cl', 'cs.ai']",, -579,multiprompter cooperative prompt optimization with multiagent reinforcement learning,"['Dong-Ki Kim', 'Sungryull Sohn', 'Lajanugen Logeswaran', 'Dongsub Shim', 'Honglak Lee']",http://arxiv.org/pdf/2310.16730v1.pdf,2023-10-25,," Recently, there has been an increasing interest in automated prompt optimization based on reinforcement learning (RL). This approach offers important advantages, such as generating interpretable prompts and being compatible with black-box foundation models. However, the substantial prompt space size poses challenges for RL-based methods, often leading to suboptimal policy convergence. This paper introduces MultiPrompter, a new framework that views prompt optimization as a cooperative game between prompters which take turns composing a prompt together. Our cooperative prompt optimization effectively reduces the problem size and helps prompters learn optimal prompts. We test our method on the text-to-image task and show its ability to generate higher-quality images than baselines.",,arXiv,['cs.lg'],, -580,promptagent strategic planning with language models enables expertlevel prompt optimization,"['Xinyuan Wang', 'Chenxi Li', 'Zhen Wang', 'Fan Bai', 'Haotian Luo', 'Jiayou Zhang', 'Nebojsa Jojic', 'Eric P. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2310.16427v1.pdf,2023-10-25,," Highly effective, task-specific prompts are often heavily engineered by experts to integrate detailed instructions and domain insights based on a deep understanding of both instincts of large language models (LLMs) and the intricacies of the target task. However, automating the generation of such expert-level prompts remains elusive. Existing prompt optimization methods tend to overlook the depth of domain knowledge and struggle to efficiently explore the vast space of expert-level prompts. Addressing this, we present PromptAgent, an optimization method that autonomously crafts prompts equivalent in quality to those handcrafted by experts. At its core, PromptAgent views prompt optimization as a strategic planning problem and employs a principled planning algorithm, rooted in Monte Carlo tree search, to strategically navigate the expert-level prompt space. Inspired by human-like trial-and-error exploration, PromptAgent induces precise expert-level insights and in-depth instructions by reflecting on model errors and generating constructive error feedback. Such a novel framework allows the agent to iteratively examine intermediate prompts (states), refine them based on error feedbacks (actions), simulate future rewards, and search for high-reward paths leading to expert prompts. We apply PromptAgent to 12 tasks spanning three practical domains: BIG-Bench Hard (BBH), as well as domain-specific and general NLP tasks, showing it significantly outperforms strong Chain-of-Thought and recent prompt optimization baselines.
Extensive analyses emphasize its capability to craft expert-level, detailed, and domain-insightful prompts with great efficiency and generalizability.",,arXiv,['cs.cl'],, -581,blackbox prompt optimization aligning large language models without model training,"['Jiale Cheng', 'Xiao Liu', 'Kehan Zheng', 'Pei Ke', 'Hongning Wang', 'Yuxiao Dong', 'Jie Tang', 'Minlie Huang']",http://arxiv.org/pdf/2311.04155v2.pdf,2023-11-07,," Large language models (LLMs) have shown impressive success in various applications. However, these models are often not well aligned with human intents, which calls for additional treatments on them, that is, the alignment problem. To make LLMs better follow user instructions, existing alignment methods mostly focus on further training them. However, the extra training of LLMs are usually expensive in terms of GPU compute; worse still, LLMs of interest are oftentimes not accessible for user-demanded training, such as GPTs. In this work, we take a different perspective -- Black-Box Prompt Optimization (BPO) -- to perform alignments. The idea is to optimize user prompts to suit LLMs' input understanding, so as to best realize users' intents without updating LLMs' parameters. BPO is model-agnostic and the empirical results demonstrate that the BPO-aligned ChatGPT yields a 22% increase in the win rate against its original version, and 10% for GPT-4. Importantly, the BPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and it also brings additional performance gains when combining BPO with PPO or DPO. Code and datasets are released at https://github.com/thu-coai/BPO.",,arXiv,['cs.cl'],, -582,zegot zeroshot segmentation through optimal transport of text prompts,"['Kwanyoung Kim', 'Yujin Oh', 'Jong Chul Ye']",http://arxiv.org/pdf/2301.12171v2.pdf,2023-01-28,," Recent success of large-scale Contrastive Language-Image Pre-training (CLIP) has led to great promise in zero-shot semantic segmentation by transferring image-text aligned knowledge to pixel-level classification. However, existing methods usually require an additional image encoder or retraining/tuning the CLIP module. Here, we propose a novel Zero-shot segmentation with Optimal Transport (ZegOT) method that matches multiple text prompts with frozen image embeddings through optimal transport. In particular, we introduce a novel Multiple Prompt Optimal Transport Solver (MPOT), which is designed to learn an optimal mapping between multiple text prompts and visual feature maps of the frozen image encoder hidden layers. This unique mapping method facilitates each of the multiple text prompts to effectively focus on distinct visual semantic attributes. Through extensive experiments on benchmark datasets, we show that our method achieves the state-of-the-art (SOTA) performance over existing Zero-shot Semantic Segmentation (ZS3) approaches.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",, -583,getting more out of mixture of language model reasoning experts,"['Chenglei Si', 'Weijia Shi', 'Chen Zhao', 'Luke Zettlemoyer', 'Jordan Boyd-Graber']",http://arxiv.org/pdf/2305.14628v2.pdf,2023-05-24,," While recent large language models (LLMs) improve on various question answering (QA) datasets, it remains difficult for a single model to generalize across question types that require distinct reasoning abilities. We provide empirical evidence that state-of-the-art LLMs suffer from poor generalizability on reasoning types beyond those seen in the prompt.
To remedy this, we propose a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse specialized language models. We specialize the backbone language model with prompts optimized for different reasoning categories, including factual, multihop, mathematical, and commonsense reasoning. Our key insight is to leverage agreement among the specialized experts to select the best answer for each question, or to abstain from answering. This gives MoRE higher accuracy than any single specialized model on a collection of 12 QA datasets from four reasoning types. Beyond generalizability, the interpretable design of MoRE improves selective question answering results compared to baselines without incorporating inter-expert agreement. This framework is also more interpretable and useful to human consumers of QA outputs. Our human study confirms that presenting expert predictions and the answer selection process helps annotators more accurately calibrate when to trust the system's output. We release all code and data to facilitate future work.",,arXiv,"['cs.cl', 'cs.ai']",, -584,unleashing the potential of prompt engineering in large language models a comprehensive review,"['Banghao Chen', 'Zhaofeng Zhang', 'Nicolas Langrené', 'Shengxin Zhu']",http://arxiv.org/pdf/2310.14735v2.pdf,2023-10-23,," This paper delves into the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs). Prompt engineering is the process of structuring input text for LLMs and is a technique integral to optimizing the efficacy of LLMs. This survey elucidates foundational principles of prompt engineering, such as role-prompting, one-shot, and few-shot prompting, as well as more advanced methodologies such as the chain-of-thought and tree-of-thoughts prompting. The paper sheds light on how external assistance in the form of plugins can assist in this task, and reduce machine hallucination by retrieving external knowledge. We subsequently delineate prospective directions in prompt engineering research, emphasizing the need for a deeper understanding of structures and the role of agents in Artificial Intelligence-Generated Content (AIGC) tools. We discuss how to assess the efficacy of prompt methods from different perspectives and using different methods. Finally, we gather information about the application of prompt engineering in such fields as education and programming, showing its transformative potential. This comprehensive survey aims to serve as a friendly guide for anyone venturing through the big world of LLMs and prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, -585,prompt engineering and calibration for zeroshot commonsense reasoning,['Chenkai Ma'],http://arxiv.org/pdf/2304.06962v1.pdf,2023-04-14,," Prompt engineering and calibration make large language models excel at reasoning tasks, including multiple choice commonsense reasoning. From a practical perspective, we investigate and evaluate these strategies on smaller language models.
Through experiments on five commonsense reasoning benchmarks, we find that each strategy favors certain models, but their joint effects are mostly negative.",,arXiv,"['cs.cl', 'cs.ai']",, -586,how understanding large language models can inform their use in physics education,"['Giulia Polverini', 'Bor Gregorcic']",http://arxiv.org/pdf/2309.12074v1.pdf,2023-09-21,," The paper aims to fulfil three main functions: (1) to serve as an introduction for the physics education community to the functioning of Large Language Models (LLMs), (2) to present a series of illustrative examples demonstrating how prompt-engineering techniques can impact LLMs performance on conceptual physics tasks and (3) to discuss potential implications of the understanding of LLMs and prompt engineering for physics teaching and learning. We first summarise existing research on the performance of a popular LLM-based chatbot (ChatGPT) on physics tasks. We then give a basic account of how LLMs work, illustrate essential features of their functioning, and discuss their strengths and limitations. Equipped with this knowledge, we discuss some challenges with generating useful output with ChatGPT-4 in the context of introductory physics, paying special attention to conceptual questions and problems. We then provide a condensed overview of relevant literature on prompt engineering and demonstrate through illustrative examples how selected prompt-engineering techniques can be employed to improve ChatGPT-4's output on conceptual introductory physics problems. Qualitatively studying these examples provides additional insights into ChatGPT's functioning and its utility in physics problem solving. Finally, we consider how insights from the paper can inform the use of LMMs in the teaching and learning of physics.",,arXiv,['physics.ed-ph'],, -587,prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks,"['Jiho Shin', 'Clark Tang', 'Tahmineh Mohati', 'Maleknaz Nayebi', 'Song Wang', 'Hadi Hemmati']",http://arxiv.org/pdf/2310.10508v1.pdf,2023-10-11,," In this paper, we investigate the effectiveness of state-of-the-art LLM, i.e., GPT-4, with three different prompting engineering techniques (i.e., basic prompting, in-context learning, and task-specific prompting) against 18 fine-tuned LLMs on three typical ASE tasks, i.e., code generation, code summarization, and code translation. Our quantitative analysis of these prompting strategies suggests that prompt engineering GPT-4 cannot necessarily and significantly outperform fine-tuning smaller/older LLMs in all three tasks. For comment generation, GPT-4 with the best prompting strategy (i.e., task-specific prompt) had outperformed the first-ranked fine-tuned model by 8.33% points on average in BLEU. However, for code generation, the first-ranked fine-tuned model outperforms GPT-4 with best prompting by 16.61% and 28.3% points, on average in BLEU. For code translation, GPT-4 and fine-tuned baselines tie as they outperform each other on different translation tasks. To explore the impact of different prompting strategies, we conducted a user study with 27 graduate students and 10 industry practitioners. From our qualitative analysis, we find that the GPT-4 with conversational prompts (i.e., when a human provides feedback and instructions back and forth with a model to achieve best results) showed drastic improvement compared to GPT-4 with automatic prompting strategies. 
Moreover, we observe that participants tend to requestimprovements, add more context, or give specific instructions as conversationalprompts, which goes beyond typical and generic prompting strategies. Our studysuggests that, at its current state, GPT-4 with conversational prompting hasgreat potential for ASE tasks, but fully automated prompt engineering with nohuman in the loop requires more study and improvement.",,arXiv,['cs.se'],, -588,coprompt supporting prompt sharing and referring in collaborative natural language programming,"['Felicia Li Feng', 'Ryan Yen', 'Yuzhe You', 'Mingming Fan', 'Jian Zhao', 'Zhicong Lu']",http://arxiv.org/pdf/2310.09235v1.pdf,2023-10-13,," Natural language (NL) programming has become more approachable due to thepowerful code-generation capability of large language models (LLMs). This shiftto using NL to program enhances collaborative programming by reducingcommunication barriers and context-switching among programmers from varyingbackgrounds. However, programmers may face challenges during prompt engineeringin a collaborative setting as they need to actively keep aware of theircollaborators' progress and intents. In this paper, we aim to investigate waysto assist programmers' prompt engineering in a collaborative context. We firstconducted a formative study to understand the workflows and challenges ofprogrammers when using NL for collaborative programming. Based on our findings,we implemented a prototype, CoPrompt, to support collaborative promptengineering by providing referring, requesting, sharing, and linkingmechanisms. Our user study indicates that CoPrompt assists programmers incomprehending collaborators' prompts and building on their collaborators' work,reducing repetitive updates and communication costs.",,arXiv,['cs.hc'],, -589,promptengineering and transformerbased question generation and evaluation,['Rubaba Amyeen'],http://arxiv.org/pdf/2310.18867v1.pdf,2023-10-29,," Question generation has numerous applications in the educational context.Question generation can prove helpful for students when reviewing content andtesting themselves. Furthermore, a question generation model can aid teachersby lessening the burden of creating assessments and other practice material.This paper aims to find the best method to generate questions from textual datathrough a transformer model and prompt engineering. In this research, wefinetuned a pretrained distilBERT model on the SQuAD question answering datasetto generate questions. In addition to training a transformer model, promptengineering was applied to generate questions effectively using the LLaMAmodel. The generated questions were compared against the baseline questions inthe SQuAD dataset to evaluate the effectiveness of four different prompts. Allfour prompts demonstrated over 60% similarity on average. Of theprompt-generated questions, 30% achieved a high similarity score greater than70%.",,arXiv,"['cs.cl', 'cs.ai']",, -590,large language models in the workplace a case study on prompt engineering for job type classification,"['Benjamin Clavié', 'Alexandru Ciceu', 'Frederick Naylor', 'Guillaume Soulié', 'Thomas Brightwell']",http://arxiv.org/pdf/2303.07142v3.pdf,2023-03-13,," This case study investigates the task of job classification in a real-worldsetting, where the goal is to determine whether an English-language job postingis appropriate for a graduate or entry-level position. 
We explore multipleapproaches to text classification, including supervised approaches such astraditional models like Support Vector Machines (SVMs) and state-of-the-artdeep learning methods such as DeBERTa. We compare them with Large LanguageModels (LLMs) used in both few-shot and zero-shot classification settings. Toaccomplish this task, we employ prompt engineering, a technique that involvesdesigning prompts to guide the LLMs towards the desired output. Specifically,we evaluate the performance of two commercially available state-of-the-artGPT-3.5-based language models, text-davinci-003 and gpt-3.5-turbo. We alsoconduct a detailed analysis of the impact of different aspects of promptengineering on the model's performance. Our results show that, with awell-designed prompt, a zero-shot gpt-3.5-turbo classifier outperforms allother models, achieving a 6% increase in Precision@95% Recall compared to thebest supervised approach. Furthermore, we observe that the wording of theprompt is a critical factor in eliciting the appropriate ""reasoning"" in themodel, and that seemingly minor aspects of the prompt significantly affect themodel's performance.",,arXiv,['cs.cl'],, -591,cxrllava multimodal large language model for interpreting chest xray images,"['Seowoo Lee', 'Jiwon Youn', 'Mansu Kim', 'Soon Ho Yoon']",http://arxiv.org/pdf/2310.18341v2.pdf,2023-10-22,," Purpose: Recent advancements in large language models (LLMs) have expandedtheir capabilities in a multimodal fashion, potentially replicating the imageinterpretation of human radiologists. This study aimed to develop open-sourcemultimodal large language model for interpreting chest X-ray images(CXR-LLaVA). We also examined the effect of prompt engineering and modelparameters such as temperature and nucleus sampling. Materials and Methods: For training, we collected 659,287 publicly availableCXRs: 417,336 CXRs had labels for certain radiographic abnormalities (dataset1); 241,951 CXRs provided free-text radiology reports (dataset 2). Afterpre-training the Resnet50 as an image encoder, the contrastive language-imagepre-training was used to align CXRs and corresponding radiographicabnormalities. Then, the Large Language Model Meta AI-2 was fine-tuned usingdataset 2, which were refined using GPT-4, with generating various questionanswering scenarios. The code can be found athttps://github.com/ECOFRI/CXR_LLaVA. Results: In the test set, we observed that the model's performance fluctuatedbased on its parameters. On average, it achieved F1 score of 0.34 for fivepathologic findings (atelectasis, cardiomegaly, consolidation, edema, andpleural effusion), which was improved to 0.46 through prompt engineering. Inthe independent set, the model achieved an average F1 score of 0.30 for thesame pathologic findings. Notably, for the pediatric chest radiograph dataset,which was unseen during training, the model differentiated abnormal radiographswith an F1 score ranging from 0.84 to 0.85. Conclusion: CXR-LLaVA demonstrates promising potential in CXR interpretation.Both prompt engineering and model parameter adjustments can play pivotal rolesin interpreting CXRs.",,arXiv,"['cs.cl', 'cs.ai']",, -592,a taxonomy of prompt modifiers for texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2204.13988v3.pdf,2022-04-20,," Text-to-image generation has seen an explosion of interest since 2021. Today,beautiful and intriguing digital images and artworks can be synthesized fromtextual inputs (""prompts"") with deep generative models. 
Online communitiesaround text-to-image generation and AI generated art have quickly emerged. Thispaper identifies six types of prompt modifiers used by practitioners in theonline community based on a 3-month ethnographic study. The novel taxonomy ofprompt modifiers provides researchers a conceptual starting point forinvestigating the practice of text-to-image generation, but may also helppractitioners of AI generated art improve their images. We further outline howprompt modifiers are applied in the practice of ""prompt engineering."" Wediscuss research opportunities of this novel creative practice in the field ofHuman-Computer Interaction (HCI). The paper concludes with a discussion ofbroader implications of prompt engineering from the perspective of Human-AIInteraction (HAI) in future applications beyond the use case of text-to-imagegeneration and AI generated art.",,arXiv,"['cs.mm', 'cs.cl', 'cs.hc', 'h.5; h.m; j.5']",, -593,what gpt knows about who is who,"['Xiaohan Yang', 'Eduardo Peynetti', 'Vasco Meerman', 'Chris Tanner']",http://arxiv.org/pdf/2205.07407v1.pdf,2022-05-16,," Coreference resolution -- which is a crucial task for understanding discourseand language at large -- has yet to witness widespread benefits from largelanguage models (LLMs). Moreover, coreference resolution systems largely relyon supervised labels, which are highly expensive and difficult to annotate,thus making it ripe for prompt engineering. In this paper, we introduce aQA-based prompt-engineering method and discern \textit{generative}, pre-trainedLLMs' abilities and limitations toward the task of coreference resolution. Ourexperiments show that GPT-2 and GPT-Neo can return valid answers, but thattheir capabilities to identify coreferent mentions are limited andprompt-sensitive, leading to inconsistent results.",,arXiv,"['cs.cl', 'cs.lg']",, -594,arguments to key points mapping with promptbased learning,"['Ahnaf Mozib Samin', 'Behrooz Nikandish', 'Jingyan Chen']",http://arxiv.org/pdf/2211.14995v1.pdf,2022-11-28,," Handling and digesting a huge amount of information in an efficient mannerhas been a long-term demand in modern society. Some solutions to map key points(short textual summaries capturing essential information and filteringredundancies) to a large number of arguments/opinions have been providedrecently (Bar-Haim et al., 2020). To complement the full picture of theargument-to-keypoint mapping task, we mainly propose two approaches in thispaper. The first approach is to incorporate prompt engineering for fine-tuningthe pre-trained language models (PLMs). The second approach utilizesprompt-based learning in PLMs to generate intermediary texts, which are thencombined with the original argument-keypoint pairs and fed as inputs to aclassifier, thereby mapping them. Furthermore, we extend the experiments tocross/in-domain to conduct an in-depth analysis. In our evaluation, we findthat i) using prompt engineering in a more direct way (Approach 1) can yieldpromising results and improve the performance; ii) Approach 2 performsconsiderably worse than Approach 1 due to the negation issue of the PLM.",,arXiv,['cs.cl'],, -595,legal prompt engineering for multilingual legal judgement prediction,"['Dietrich Trautmann', 'Alina Petrova', 'Frank Schilder']",http://arxiv.org/pdf/2212.02199v1.pdf,2022-12-05,," Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide andassist a large language model (LLM) with performing a natural legal languageprocessing (NLLP) skill. 
Our goal is to use LPE with LLMs over long legaldocuments for the Legal Judgement Prediction (LJP) task. We investigate theperformance of zero-shot LPE for given facts in case-texts from the EuropeanCourt of Human Rights (in English) and the Federal Supreme Court of Switzerland(in German, French and Italian). Our results show that zero-shot LPE is bettercompared to the baselines, but it still falls short compared to current stateof the art supervised approaches. Nevertheless, the results are important,since there was 1) no explicit domain-specific data used - so we show that thetransfer to the legal domain is possible for general-purpose LLMs, and 2) theLLMs where directly applied without any further training or fine-tuning - whichin turn saves immensely in terms of additional computational costs.",,arXiv,"['cs.cl', 'cs.ai']",, -596,the infinite index information retrieval on generative texttoimage models,"['Niklas Deckers', 'Maik Fröbe', 'Johannes Kiesel', 'Gianluca Pandolfo', 'Christopher Schröder', 'Benno Stein', 'Martin Potthast']",http://arxiv.org/pdf/2212.07476v2.pdf,2022-12-14,," Conditional generative models such as DALL-E and Stable Diffusion generateimages based on a user-defined text, the prompt. Finding and refining promptsthat produce a desired image has become the art of prompt engineering.Generative models do not provide a built-in retrieval model for a user'sinformation need expressed through prompts. In light of an extensive literaturereview, we reframe prompt engineering for generative models as interactivetext-based retrieval on a novel kind of ""infinite index"". We apply theseinsights for the first time in a case study on image generation for game designwith an expert. Finally, we envision how active learning may help to guide theretrieval of generated images.",,arXiv,"['cs.ir', 'cs.cl', 'cs.cv']",, -597,prompt engineering for transformerbased chemical similarity search identifies structurally distinct functional analogues,"['Clayton W. Kosonocky', 'Aaron L. Feller', 'Claus O. Wilke', 'Andrew D. Ellington']",http://arxiv.org/pdf/2305.16330v1.pdf,2023-05-17,," Chemical similarity searches are widely used in-silico methods foridentifying new drug-like molecules. These methods have historically relied onstructure-based comparisons to compute molecular similarity. Here, we use achemical language model to create a vector-based chemical search. We extendimplementations by creating a prompt engineering strategy that utilizes twodifferent chemical string representation algorithms: one for the query and theother for the database. We explore this method by reviewing the search resultsfrom five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine,lysergic acid diethylamide, and fentanyl) and three dye-like query molecules(acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that thisnovel method identifies molecules that are functionally similar to the query,indicated by the associated patent literature, and that many of these moleculesare structurally distinct from the query, making them unlikely to be found withtraditional chemical similarity search methods. This method may aid in thediscovery of novel structural classes of molecules that achieve targetfunctionality.",,arXiv,"['physics.chem-ph', 'cs.lg']",, -598,submodular minimax optimization finding effective sets,"['Loay Mualem', 'Ethan R. 
Elenberg', 'Moran Feldman', 'Amin Karbasi']",http://arxiv.org/pdf/2305.16903v1.pdf,2023-05-26,," Despite the rich existing literature about minimax optimization in continuoussettings, only very partial results of this kind have been obtained forcombinatorial settings. In this paper, we fill this gap by providing acharacterization of submodular minimax optimization, the problem of finding aset (for either the min or the max player) that is effective against everypossible response. We show when and under what conditions we can find suchsets. We also demonstrate how minimax submodular optimization provides robustsolutions for downstream machine learning applications such as (i) efficientprompt engineering for question answering, (ii) prompt engineering for dialogstate tracking, (iii) identifying robust waiting locations for ride-sharing,(iv) ride-share difficulty kernelization, and (v) finding adversarial images.Our experiments demonstrate that our proposed algorithms consistentlyoutperform other baselines.",,arXiv,"['cs.lg', 'cs.dm', 'math.oc', '68r05 (primary) 90c26, 90c20, 68t20, 68w40 (secondary)', 'g.2.1; i.2.m; f.2.2']",, -599,promptmagician interactive prompt engineering for texttoimage creation,"['Yingchaojie Feng', 'Xingbo Wang', 'Kam Kwai Wong', 'Sijia Wang', 'Yuhong Lu', 'Minfeng Zhu', 'Baicheng Wang', 'Wei Chen']",http://arxiv.org/pdf/2307.09036v2.pdf,2023-07-18,," Generative text-to-image models have gained great popularity among the publicfor their powerful capability to generate high-quality images based on naturallanguage prompts. However, developing effective prompts for desired images canbe challenging due to the complexity and ambiguity of natural language. Thisresearch proposes PromptMagician, a visual analysis system that helps usersexplore the image results and refine the input prompts. The backbone of oursystem is a prompt recommendation model that takes user prompts as input,retrieves similar prompt-image pairs from DiffusionDB, and identifies special(important and relevant) prompt keywords. To facilitate interactive promptrefinement, PromptMagician introduces a multi-level visualization for thecross-modal embedding of the retrieved images and recommended keywords, andsupports users in specifying multiple criteria for personalized exploration.Two usage scenarios, a user study, and expert interviews demonstrate theeffectiveness and usability of our system, suggesting it facilitates promptengineering and improves the creativity support of the generative text-to-imagemodel.",,arXiv,"['cs.ai', 'cs.hc']",, -600,interactive task planning with language models,"['Boyi Li', 'Philipp Wu', 'Pieter Abbeel', 'Jitendra Malik']",http://arxiv.org/pdf/2310.10645v1.pdf,2023-10-16,," An interactive robot framework accomplishes long-horizon task planning andcan easily generalize to new goals or distinct tasks, even during execution.However, most traditional methods require predefined module design, which makesit hard to generalize to different goals. Recent large language model basedapproaches can allow for more open-ended planning but often require heavyprompt engineering or domain-specific pretrained models. To tackle this, wepropose a simple framework that achieves interactive task planning withlanguage models. Our system incorporates both high-level planning and low-levelfunction execution via language. 
We verify the robustness of our system ingenerating novel high-level instructions for unseen objectives and its ease ofadaptation to different tasks by merely substituting the task guidelines,without the need for additional complex prompt engineering. Furthermore, whenthe user sends a new request, our system is able to replan accordingly withprecision based on the new request, task guidelines and previously executedsteps. Please check more details on our https://wuphilipp.github.io/itp_siteand https://youtu.be/TrKLuyv26_g.",,arXiv,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.hc']",, -601,prompt engineering through the lens of optimal control,"['Yifan Luo', 'Yiming Tang', 'Chengfeng Shen', 'Zhennan Zhou', 'Bin Dong']",http://arxiv.org/pdf/2310.14201v2.pdf,2023-10-22,," Prompt Engineering (PE) has emerged as a critical technique for guiding LargeLanguage Models (LLMs) in solving intricate tasks. Its importance ishighlighted by its potential to significantly enhance the efficiency andeffectiveness of human-machine interaction. As tasks grow increasingly complex,recent advanced PE methods have extended beyond the limitations of single-roundinteractions to embrace multi-round interactions, which allows for a deeper andmore nuanced engagement with LLMs. In this paper, we propose an optimal controlframework tailored for multi-round interactions with LLMs. This frameworkprovides a unified mathematical structure that not only systematizes theexisting PE methods but also sets the stage for rigorous analyticalimprovements. Furthermore, we extend this framework to include PE via ensemblemethods and multi-agent collaboration, thereby enlarging the scope ofapplicability. By adopting an optimal control perspective, we offer freshinsights into existing PE methods and highlight theoretical challenges thatwarrant future research. Besides, our work lays a foundation for thedevelopment of more effective and interpretable PE methods.",,arXiv,"['cs.lg', 'math.oc']",, -602,a communication theory perspective on prompting engineering methods for large language models,"['Yuanfeng Song', 'Yuanqin He', 'Xuefang Zhao', 'Hanlin Gu', 'Di Jiang', 'Haijun Yang', 'Lixin Fan', 'Qiang Yang']",http://arxiv.org/pdf/2310.18358v1.pdf,2023-10-24,," The springing up of Large Language Models (LLMs) has shifted the communityfrom single-task-orientated natural language processing (NLP) research to aholistic end-to-end multi-task learning paradigm. Along this line of researchendeavors in the area, LLM-based prompting methods have attracted muchattention, partially due to the technological advantages brought by promptengineering (PE) as well as the underlying NLP principles disclosed by variousprompting methods. Traditional supervised learning usually requires training amodel based on labeled data and then making predictions. In contrast, PEmethods directly use the powerful capabilities of existing LLMs (i.e., GPT-3and GPT-4) via composing appropriate prompts, especially under few-shot orzero-shot scenarios. 
Facing the abundance of studies related to the promptingand the ever-evolving nature of this field, this article aims to (i) illustratea novel perspective to review existing PE methods, within the well-establishedcommunication theory framework; (ii) facilitate a better/deeper understandingof developing trends of existing PE methods used in four typical tasks; (iii)shed light on promising research directions for future PE methods.",,arXiv,"['cs.cl', 'cs.ai']",, -603,investigating prompt engineering in diffusion models,"['Sam Witteveen', 'Martin Andrews']",http://arxiv.org/pdf/2211.15462v1.pdf,2022-11-21,," With the spread of the use of Text2Img diffusion models such as DALL-E 2,Imagen, Mid Journey and Stable Diffusion, one challenge that artists face isselecting the right prompts to achieve the desired artistic output. We presenttechniques for measuring the effect that specific words and phrases in promptshave, and (in the Appendix) present guidance on the selection of prompts toproduce desired effects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, -604,refining the responses of llms by themselves,"['Tianqiang Yan', 'Tiansheng Xu']",http://arxiv.org/pdf/2305.04039v1.pdf,2023-05-06,," In this paper, we propose a simple yet efficient approach based on promptengineering that leverages the large language model itself to optimize itsanswers without relying on auxiliary models. We introduce an iterativeself-evaluating optimization mechanism, with the potential for improved outputquality as iterations progress, removing the need for manual intervention. Theexperiment's findings indicate that utilizing our response refinement frameworkon the GPT-3.5 model yields results that are on par with, or even surpass,those generated by the cutting-edge GPT-4 model. Detailed implementationstrategies and illustrative examples are provided to demonstrate thesuperiority of our proposed solution.",,arXiv,"['cs.cl', 'cs.ai']",, -605,efficient blackbox adversarial attacks on neural text detectors,"['Vitalii Fishchuk', 'Daniel Braun']",http://arxiv.org/pdf/2311.01873v1.pdf,2023-11-03,," Neural text detectors are models trained to detect whether a given text wasgenerated by a language model or written by a human. In this paper, weinvestigate three simple and resource-efficient strategies (parameter tweaking,prompt engineering, and character-level mutations) to alter texts generated byGPT-3.5 that are unsuspicious or unnoticeable for humans but causemisclassification by neural text detectors. The results show that especiallyparameter tweaking and character-level mutations are effective strategies.",,arXiv,['cs.cl'],, -606,prompted software engineering in the era of ai models,['Dae-Kyoo Kim'],http://arxiv.org/pdf/2311.03359v1.pdf,2023-09-07,," This paper introduces prompted software engineering (PSE), which integratesprompt engineering to build effective prompts for language-based AI models, toenhance the software development process. PSE enables the use of AI models insoftware development to produce high-quality software with fewer resources,automating tedious tasks and allowing developers to focus on more innovativeaspects. However, effective prompts are necessary to guide software developmentin generating accurate, relevant, and useful responses, while mitigating risksof misleading outputs. This paper describes how productive prompts should bebuilt throughout the software development cycle.",,arXiv,['cs.se'],, -607,enhancing automated program repair through finetuning and prompt engineering,"['Rishov Paul', 'Md. 
Mohib Hossain', 'Mohammed Latif Siddiq', 'Masum Hasan', 'Anindya Iqbal', 'Joanna C. S. Santos']",http://arxiv.org/pdf/2304.07840v2.pdf,2023-04-16,," Sequence-to-sequence models have been used to transform erroneous programsinto correct ones when trained with a large enough dataset. Some recent studiesalso demonstrated strong empirical evidence that code review could improve theprogram repair further. Large language models, trained with Natural Language(NL) and Programming Language (PL), can contain inherent knowledge of both. Inthis study, we investigate if this inherent knowledge of PL and NL can beutilized to improve automated program repair. We applied PLBART and CodeT5, twostate-of-the-art language models that are pre-trained with both PL and NL, ontwo such natural language-based program repair datasets and found that thepre-trained language models fine-tuned with datasets containing both codereview and subsequent code changes notably outperformed each of the previousmodels. With the advent of code generative models like Codex and GPT-3.5-Turbo,we also performed zero-shot and few-shots learning-based prompt engineering toassess their performance on these datasets. However, the practical applicationof using LLMs in the context of automated program repair is still a long wayoff based on our manual analysis of the generated repaired codes by thelearning models.",,arXiv,"['cs.lg', 'cs.se']",, -608,improving knowledge extraction from llms for task learning through agent analysis,"['James R. Kirk', 'Robert E. Wray', 'Peter Lindes']",http://arxiv.org/pdf/2306.06770v3.pdf,2023-06-11,," Large language models (LLMs) offer significant promise as a knowledge sourcefor task learning. Prompt engineering has been shown to be effective foreliciting knowledge from an LLM, but alone it is insufficient for acquiringrelevant, situationally grounded knowledge for an embodied agent learning noveltasks. We describe a cognitive-agent approach that extends and complementsprompt engineering, mitigating its limitations and thus enabling an agent toacquire new task knowledge matched to its native language capabilities,embodiment, environment, and user preferences. The approach is to increase theresponse space of LLMs and deploy general strategies, embedded within theautonomous agent, to evaluate, repair, and select among candidate responsesproduced by the LLM. We describe the approach and experiments that show how anagent, by retrieving and evaluating a breadth of responses from the LLM, canachieve 77-94% task completion in one-shot learning without user oversight. Theapproach achieves 100% task completion when human oversight (such as anindication of preference) is provided. Further, the type of oversight largelyshifts from explicit, natural language instruction to simpleconfirmation/discomfirmation of high-quality responses that have been vetted bythe agent before presentation to a user.",,arXiv,"['cs.ai', 'cs.hc', 'cs.ro', 'i.2.6; i.2.7']",, -609,understanding prompt engineering may not require rethinking generalization,"['Victor Akinwande', 'Yiding Jiang', 'Dylan Sam', 'J. Zico Kolter']",http://arxiv.org/pdf/2310.03957v1.pdf,2023-10-06,," Zero-shot learning in prompted vision-language models, the practice ofcrafting prompts to build classifiers without an explicit training process, hasachieved impressive performance in many settings. 
This success presents aseemingly surprising observation: these methods suffer relatively little fromoverfitting, i.e., when a prompt is manually engineered to achieve low error ona given training set (thus rendering the method no longer actually zero-shot),the approach still performs well on held-out test data. In this paper, we showthat we can explain such performance well via recourse to classical PAC-Bayesbounds. Specifically, we show that the discrete nature of prompts, combinedwith a PAC-Bayes prior given by a language model, results in generalizationbounds that are remarkably tight by the standards of the literature: forinstance, the generalization bound of an ImageNet classifier is often within afew percentage points of the true test error. We demonstrate empirically thatthis holds for existing handcrafted prompts and prompts generated throughsimple greedy search. Furthermore, the resulting bound is well-suited for modelselection: the models with the best bound typically also have the best testperformance. This work thus provides a possible justification for thewidespread practice of prompt engineering, even if it seems that such methodscould potentially overfit the training data.",,arXiv,"['cs.lg', 'cs.cv']",, -610,configuration validation with large language models,"['Xinyu Lian', 'Yinfang Chen', 'Runxiang Cheng', 'Jie Huang', 'Parth Thakkar', 'Tianyin Xu']",http://arxiv.org/pdf/2310.09690v1.pdf,2023-10-15,," Misconfigurations are the major causes of software failures. Existingconfiguration validation techniques rely on manually written rules or testcases, which are expensive to implement and maintain, and are hard to becomprehensive. Leveraging machine learning (ML) and natural language processing(NLP) for configuration validation is considered a promising direction, but hasbeen facing challenges such as the need of not only large-scale configurationdata, but also system-specific features and models which are hard togeneralize. Recent advances in Large Language Models (LLMs) show the promisesto address some of the long-lasting limitations of ML/NLP-based configurationvalidation techniques. In this paper, we present an exploratory analysis on thefeasibility and effectiveness of using LLMs like GPT and Codex forconfiguration validation. Specifically, we take a first step to empiricallyevaluate LLMs as configuration validators without additional fine-tuning orcode generation. We develop a generic LLM-based validation framework, namedCiri, which integrates different LLMs. Ciri devises effective promptengineering with few-shot learning based on both valid configuration andmisconfiguration data. Ciri also validates and aggregates the outputs of LLMsto generate validation results, coping with known hallucination andnondeterminism of LLMs. We evaluate the validation effectiveness of Ciri onfive popular LLMs using configuration data of six mature, widely deployedopen-source systems. 
Our analysis (1) confirms the potential of using LLMs forconfiguration validation, (2) understands the design space of LLMbasedvalidators like Ciri, especially in terms of prompt engineering with few-shotlearning, and (3) reveals open challenges such as ineffectiveness in detectingcertain types of misconfigurations and biases to popular configurationparameters.",,arXiv,"['cs.se', 'cs.ai', 'cs.os']",, -611,learning to prompt for visionlanguage models,"['Kaiyang Zhou', 'Jingkang Yang', 'Chen Change Loy', 'Ziwei Liu']",http://arxiv.org/pdf/2109.01134v6.pdf,2021-09-02,," Large pre-trained vision-language models like CLIP have shown great potentialin learning representations that are transferable across a wide range ofdownstream tasks. Different from the traditional representation learning thatis based mostly on discretized labels, vision-language pre-training alignsimages and texts in a common feature space, which allows zero-shot transfer toa downstream task via prompting, i.e., classification weights are synthesizedfrom natural language describing classes of interest. In this work, we showthat a major challenge for deploying such models in practice is promptengineering, which requires domain expertise and is extremely time-consuming --one needs to spend a significant amount of time on words tuning since a slightchange in wording could have a huge impact on performance. Inspired by recentadvances in prompt learning research in natural language processing (NLP), wepropose Context Optimization (CoOp), a simple approach specifically foradapting CLIP-like vision-language models for downstream image recognition.Concretely, CoOp models a prompt's context words with learnable vectors whilethe entire pre-trained parameters are kept fixed. To handle different imagerecognition tasks, we provide two implementations of CoOp: unified context andclass-specific context. Through extensive experiments on 11 datasets, wedemonstrate that CoOp requires as few as one or two shots to beat hand-craftedprompts with a decent margin and is able to gain significant improvements overprompt engineering with more shots, e.g., with 16 shots the average gain isaround 15% (with the highest reaching over 45%). Despite being a learning-basedapproach, CoOp achieves superb domain generalization performance compared withthe zero-shot model using hand-crafted prompts.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, -612,an empirical study on fewshot knowledge probing for pretrained language models,"['Tianxing He', 'Kyunghyun Cho', 'James Glass']",http://arxiv.org/pdf/2109.02772v2.pdf,2021-09-06,," Prompt-based knowledge probing for 1-hop relations has been used to measurehow much world knowledge is stored in pretrained language models. Existing workuses considerable amounts of data to tune the prompts for better performance.In this work, we compare a variety of approaches under a few-shot knowledgeprobing setting, where only a small number (e.g., 10 or 20) of example triplesare available. In addition, we create a new dataset named TREx-2p, whichcontains 2-hop relations. We report that few-shot examples can strongly boostthe probing performance for both 1-hop and 2-hop relations. In particular, wefind that a simple-yet-effective approach of finetuning the bias vectors in themodel outperforms existing prompt-engineering methods. 
Our dataset and code areavailable at \url{https://github.com/cloudygoose/fewshot_lama}.",,arXiv,['cs.ai'],, -613,solving probability and statistics problems by program synthesis,"['Leonard Tang', 'Elizabeth Ke', 'Nikhil Singh', 'Nakul Verma', 'Iddo Drori']",http://arxiv.org/pdf/2111.08267v1.pdf,2021-11-16,," We solve university level probability and statistics questions by programsynthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned oncode. We transform course problems from MIT's 18.05 Introduction to Probabilityand Statistics and Harvard's STAT110 Probability into programming tasks. Wethen execute the generated code to get a solution. Since these course questionsare grounded in probability, we often aim to have Codex generate probabilisticprograms that simulate a large number of probabilistic dependencies to computeits solution. Our approach requires prompt engineering to transform thequestion from its original form to an explicit, tractable form that results ina correct program and solution. To estimate the amount of work needed totranslate an original question into its tractable form, we measure thesimilarity between original and transformed questions. Our work is the first tointroduce a new dataset of university-level probability and statistics problemsand solve these problems in a scalable fashion using the program synthesiscapabilities of large language models.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, -614,polyglot prompt multilingual multitask promptraining,"['Jinlan Fu', 'See-Kiong Ng', 'Pengfei Liu']",http://arxiv.org/pdf/2204.14264v2.pdf,2022-04-29,," This paper aims for a potential architectural improvement for multilinguallearning and asks: Can different tasks from different languages be modeled in amonolithic framework, i.e. without any task/language-specific module? Thebenefit of achieving this could open new doors for future multilingualresearch, including allowing systems trained on low resources to be furtherassisted by other languages as well as other tasks. We approach this goal bydeveloping a learning framework named Polyglot Prompting to exploit promptingmethods for learning a unified semantic space for different languages and taskswith multilingual prompt engineering. We performed a comprehensive evaluationof 6 tasks, namely topic classification, sentiment classification, named entityrecognition, question answering, natural language inference, and summarization,covering 24 datasets and 49 languages. The experimental results demonstratedthe efficacy of multilingual multitask prompt-based learning and led toinspiring observations. We also present an interpretable multilingualevaluation methodology and show how the proposed framework, multilingualmultitask prompt training, works. We release all datasets prompted in the bestsetting and code.",,arXiv,['cs.cl'],, -615,clipclop clipguided collage and photomontage,"['Piotr Mirowski', 'Dylan Banarse', 'Mateusz Malinowski', 'Simon Osindero', 'Chrisantha Fernando']",http://arxiv.org/pdf/2205.03146v3.pdf,2022-05-06,," The unabated mystique of large-scale neural networks, such as the CLIP dualimage-and-text encoder, popularized automatically generated art. Increasinglymore sophisticated generators enhanced the artworks' realism and visualappearance, and creative prompt engineering enabled stylistic expression.Guided by an artist-in-the-loop ideal, we design a gradient-based generator toproduce collages. 
It requires the human artist to curate libraries of imagepatches and to describe (with prompts) the whole image composition, with theoption to manually adjust the patches' positions during generation, therebyallowing humans to reclaim some control of the process and achieve greatercreative freedom. We explore the aesthetic potentials of high-resolutioncollages, and provide an open-source Google Colab as an artistic tool.",,arXiv,"['cs.cv', 'cs.ai']",, -616,the creativity of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2206.02904v4.pdf,2022-05-13,," Text-guided synthesis of images has made a giant leap towards becoming amainstream phenomenon. With text-to-image generation systems, anybody cancreate digital images and artworks. This provokes the question of whethertext-to-image generation is creative. This paper expounds on the nature ofhuman creativity involved in text-to-image art (so-called ""AI art"") with aspecific focus on the practice of prompt engineering. The paper argues that thecurrent product-centered view of creativity falls short in the context oftext-to-image generation. A case exemplifying this shortcoming is provided andthe importance of online communities for the creative ecosystem oftext-to-image art is highlighted. The paper provides a high-level summary ofthis online ecosystem drawing on Rhodes' conceptual four P model of creativity.Challenges for evaluating the creativity of text-to-image generation andopportunities for research on text-to-image generation in the field ofHuman-Computer Interaction (HCI) are discussed.",,arXiv,"['cs.hc', 'cs.gr', 'h.5; h.m']",, -617,rationaleaugmented ensembles in language models,"['Xuezhi Wang', 'Jason Wei', 'Dale Schuurmans', 'Quoc Le', 'Ed Chi', 'Denny Zhou']",http://arxiv.org/pdf/2207.00747v1.pdf,2022-07-02,," Recent research has shown that rationales, or step-by-step chains of thought,can be used to improve performance in multi-step reasoning tasks. We reconsiderrationale-augmented prompting for few-shot in-context learning, where (input ->output) prompts are expanded to (input, rationale -> output) prompts. Forrationale-augmented prompting we demonstrate how existing approaches, whichrely on manual prompt engineering, are subject to sub-optimal rationales thatmay harm performance. To mitigate this brittleness, we propose a unifiedframework of rationale-augmented ensembles, where we identify rationalesampling in the output space as the key component to robustly improveperformance. This framework is general and can easily be extended to commonnatural language processing tasks, even those that do not traditionallyleverage intermediate steps, such as question answering, word sensedisambiguation, and sentiment analysis. We demonstrate that rationale-augmentedensembles achieve more accurate and interpretable results than existingprompting approaches--including standard prompting without rationales andrationale-based chain-of-thought prompting--while simultaneously improvinginterpretability of model predictions through the associated rationales.",,arXiv,['cs.cl'],, -618,will it blend mixing training paradigms & prompting for argument quality prediction,"['Michiel van der Meer', 'Myrthe Reuver', 'Urja Khurana', 'Lea Krause', 'Selene Báez Santamaría']",http://arxiv.org/pdf/2209.08966v2.pdf,2022-09-19,," This paper describes our contributions to the Shared Task of the 9th Workshopon Argument Mining (2022). Our approach uses Large Language Models for the taskof Argument Quality Prediction. 
We perform prompt engineering using GPT-3, andalso investigate the training paradigms multi-task learning, contrastivelearning, and intermediate-task training. We find that a mixed prediction setupoutperforms single models. Prompting GPT-3 works best for predicting argumentvalidity, and argument novelty is best estimated by a model trained using allthree training paradigms.",,arXiv,"['cs.cl', 'cs.ai']",, -619,controllable image captioning via prompting,"['Ning Wang', 'Jiahao Xie', 'Jihao Wu', 'Mingbo Jia', 'Linlin Li']",http://arxiv.org/pdf/2212.01803v1.pdf,2022-12-04,," Despite the remarkable progress of image captioning, existing captionerstypically lack the controllable capability to generate desired image captions,e.g., describing the image in a rough or detailed manner, in a factual oremotional view, etc. In this paper, we show that a unified model is qualifiedto perform well in diverse domains and freely switch among multiple styles.Such a controllable capability is achieved by embedding the prompt learninginto the image captioning framework. To be specific, we design a set of promptsto fine-tune the pre-trained image captioner. These prompts allow the model toabsorb stylized data from different domains for joint training, withoutperformance degradation in each domain. Furthermore, we optimize the promptswith learnable vectors in the continuous word embedding space, avoiding theheuristic prompt engineering and meanwhile exhibiting superior performance. Inthe inference stage, our model is able to generate desired stylized captions bychoosing the corresponding prompts. Extensive experiments verify thecontrollable capability of the proposed method. Notably, we achieve outstandingperformance on two diverse image captioning benchmarks including COCO Karpathysplit and TextCaps using a unified model.",,arXiv,['cs.cv'],, -620,explanation regeneration via information bottleneck,"['Qintong Li', 'Zhiyong Wu', 'Lingpeng Kong', 'Wei Bi']",http://arxiv.org/pdf/2212.09603v2.pdf,2022-12-19,," Explaining the black-box predictions of NLP models naturally and accuratelyis an important open problem in natural language generation. These free-textexplanations are expected to contain sufficient and carefully-selected evidenceto form supportive arguments for predictions. Due to the superior generativecapacity of large pretrained language models, recent work built on promptengineering enables explanation generation without specific training. However,explanation generated through single-pass prompting often lacks sufficiency andconciseness. To address this problem, we develop an information bottleneckmethod EIB to produce refined explanations that are sufficient and concise. Ourapproach regenerates the free-text explanation by polishing the single-passoutput from the pretrained language model but retaining the information thatsupports the contents being explained. Experiments on two out-of-domain tasksverify the effectiveness of EIB through automatic evaluation andthoroughly-conducted human evaluation.",,arXiv,['cs.cl'],, -621,uprise universal prompt retrieval for improving zeroshot evaluation,"['Daixuan Cheng', 'Shaohan Huang', 'Junyu Bi', 'Yuefeng Zhan', 'Jianfeng Liu', 'Yujing Wang', 'Hao Sun', 'Furu Wei', 'Denvy Deng', 'Qi Zhang']",http://arxiv.org/pdf/2303.08518v3.pdf,2023-03-15,," Large Language Models (LLMs) are popular for their impressive abilities, butthe need for model-specific fine-tuning or task-specific prompt engineering canhinder their generalization. 
We propose UPRISE (Universal Prompt Retrieval forImproving zero-Shot Evaluation), which tunes a lightweight and versatileretriever that automatically retrieves prompts for a given zero-shot taskinput. Specifically, we demonstrate universality in a cross-task andcross-model scenario: the retriever is tuned on a diverse set of tasks, buttested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, fortuning the retriever, but test the retriever on different LLMs of much largerscales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show thatUPRISE mitigates the hallucination problem in our experiments with ChatGPT,suggesting its potential to improve even the strongest LLMs. Our model and codeare available at https://github.com/microsoft/LMOps.",,arXiv,['cs.cl'],, -622,patchtoken aligned bayesian prompt learning for visionlanguage models,"['Xinyang Liu', 'Dongsheng Wang', 'Miaoge Li', 'Zhibin Duan', 'Yishi Xu', 'Bo Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2303.09100v1.pdf,2023-03-16,," For downstream applications of vision-language pre-trained models, there hasbeen significant interest in constructing effective prompts. Existing works onprompt engineering, which either require laborious manual designs or optimizethe prompt tuning as a point estimation problem, may fail to describe diversecharacteristics of categories and limit their applications. We introduce aBayesian probabilistic resolution to prompt learning, where the label-specificstochastic prompts are generated hierarchically by first sampling a latentvector from an underlying distribution and then employing a lightweightgenerative model. Importantly, we semantically regularize prompt learning withthe visual knowledge and view images and the corresponding prompts as patch andtoken sets under optimal transport, which pushes the prompt tokens tofaithfully capture the label-specific visual concepts, instead of overfittingthe training categories. Moreover, the proposed model can also bestraightforwardly extended to the conditional case where theinstance-conditional prompts are generated to improve the generalizability.Extensive experiments on 15 datasets show promising transferability andgeneralization performance of our proposed model.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -623,safety analysis in the era of large language models a case study of stpa using chatgpt,"['Yi Qi', 'Xingyu Zhao', 'Siddartha Khastgir', 'Xiaowei Huang']",http://arxiv.org/pdf/2304.01246v2.pdf,2023-04-03,," Can safety analysis make use of Large Language Models (LLMs)? A case studyexplores Systems Theoretic Process Analysis (STPA) applied to AutomaticEmergency Brake (AEB) and Electricity Demand Side Management (DSM) systemsusing ChatGPT. We investigate how collaboration schemes, input semanticcomplexity, and prompt guidelines influence STPA results. Comparative resultsshow that using ChatGPT without human intervention may be inadequate due toreliability related issues, but with careful design, it may outperform humanexperts. No statistically significant differences are found when varying theinput semantic complexity or using common prompt guidelines, which suggests thenecessity for developing domain-specific prompt engineering. 
We also highlightfuture challenges, including concerns about LLM trustworthiness and thenecessity for standardisation and regulation in this domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.se']",, -624,constructing dreams using generative ai,"['Safinah Ali', 'Daniella DiPaola', 'Randi Williams', 'Prerna Ravi', 'Cynthia Breazeal']",http://arxiv.org/pdf/2305.12013v1.pdf,2023-05-19,," Generative AI tools introduce new and accessible forms of media creation foryouth. They also raise ethical concerns about the generation of fake media,data protection, privacy and ownership of AI-generated art. Since generative AIis already being used in products used by youth, it is critical that theyunderstand how these tools work and how they can be used or misused. In thiswork, we facilitated students' generative AI learning through expression oftheir imagined future identities. We designed a learning workshop - Dreamingwith AI - where students learned about the inner workings of generative AItools, used text-to-image generation algorithms to create their imaged futuredreams, reflected on the potential benefits and harms of generative AI toolsand voiced their opinions about policies for the use of these tools inclassrooms. In this paper, we present the learning activities and experiencesof 34 high school students who engaged in our workshops. Students reachedcreative learning objectives by using prompt engineering to create their futuredreams, gained technical knowledge by learning the abilities, limitations,text-visual mappings and applications of generative AI, and identified mostpotential societal benefits and harms of generative AI.",,arXiv,"['cs.hc', 'cs.ai', 'cs.cy']",, -625,gpt4tools teaching large language model to use tools via selfinstruction,"['Rui Yang', 'Lin Song', 'Yanwei Li', 'Sijie Zhao', 'Yixiao Ge', 'Xiu Li', 'Ying Shan']",http://arxiv.org/pdf/2305.18752v1.pdf,2023-05-30,," This paper aims to efficiently enable Large Language Models (LLMs) to usemultimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, haveshown great potential for tool usage through sophisticated prompt engineering.Nevertheless, these models typically rely on prohibitive computational costsand publicly inaccessible data. To address these challenges, we propose theGPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA andOPT, to use tools. It generates an instruction-following dataset by promptingan advanced teacher with various multi-modal contexts. By using the Low-RankAdaptation (LoRA) optimization, our approach facilitates the open-source LLMsto solve a range of visual problems, including visual comprehension and imagegeneration. Moreover, we provide a benchmark to evaluate the ability of LLMs touse tools, which is performed in both zero-shot and fine-tuning ways. Extensiveexperiments demonstrate the effectiveness of our method on various languagemodels, which not only significantly improves the accuracy of invoking seentools, but also enables the zero-shot capacity for unseen tools. The code anddemo are available at https://github.com/StevenGrove/GPT4Tools.",,arXiv,"['cs.cv', 'cs.cl']",, -626,prompting is all you need automated android bug replay with large language models,"['Sidong Feng', 'Chunyang Chen']",http://arxiv.org/pdf/2306.01987v2.pdf,2023-06-03,," Bug reports are vital for software maintenance that allow users to informdevelopers of the problems encountered while using the software. 
As such,researchers have committed considerable resources toward automating bug replayto expedite the process of software maintenance. Nonetheless, the success ofcurrent automated approaches is largely dictated by the characteristics andquality of bug reports, as they are constrained by the limitations ofmanually-crafted patterns and pre-defined vocabulary lists. Inspired by thesuccess of Large Language Models (LLMs) in natural language understanding, wepropose AdbGPT, a new lightweight approach to automatically reproduce the bugsfrom bug reports through prompt engineering, without any training andhard-coding effort. AdbGPT leverages few-shot learning and chain-of-thoughtreasoning to elicit human knowledge and logical reasoning from LLMs toaccomplish the bug replay in a manner similar to a developer. Our evaluationsdemonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3%of bug reports in 253.6 seconds, outperforming the state-of-the-art baselinesand ablation studies. We also conduct a small-scale user study to confirm theusefulness of AdbGPT in enhancing developers' bug replay capabilities.",,arXiv,['cs.se'],, -627,an approach to solving the abstraction and reasoning corpus (arc) challenge,['Tan John Chong Min'],http://arxiv.org/pdf/2306.03553v1.pdf,2023-06-06,," We utilise the power of Large Language Models (LLMs), in particular GPT4, tobe prompt engineered into performing an arbitrary task. Here, we give the modelsome human priors via text, along with some typical procedures for solving theARC tasks, and ask it to generate the i) broad description of the input-outputrelation, ii) detailed steps of the input-output mapping, iii) use the detailedsteps to perform manipulation on the test input and derive the test output. Thecurrent GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (thosewith small grids of 8x8 and below). With tweaks to the prompt to make it morespecific for the use case, it can solve more. We posit that when scaled to amulti-agent system with usage of past memory and equipped with an imageinterpretation tool via Visual Question Answering, we may actually be able tosolve the majority of the ARC challenge",,arXiv,['cs.ai'],, -628,falle a foley sound synthesis model and strategies,"['Minsung Kang', 'Sangshin Oh', 'Hyeongi Moon', 'Kyungyun Lee', 'Ben Sangbae Chon']",http://arxiv.org/pdf/2306.09807v2.pdf,2023-06-16,," This paper introduces FALL-E, a foley synthesis system and itstraining/inference strategies. The FALL-E model employs a cascaded approachcomprising low-resolution spectrogram generation, spectrogram super-resolution,and a vocoder. We trained every sound-related model from scratch using ourextensive datasets, and utilized a pre-trained language model. We conditionedthe model with dataset-specific texts, enabling it to learn sound quality andrecording environment based on text input. Moreover, we leveraged externallanguage models to improve text descriptions of our datasets and performedprompt engineering for quality, coherence, and diversity. FALL-E was evaluatedby an objective measure as well as listening tests in the DCASE 2023 challengeTask 7. 
The submission achieved the second place on average, while achieving the best score for diversity, second place for audio quality, and third place for class fitness.",,arXiv,"['eess.as', 'cs.lg', 'cs.sd']",, -629,the cultivated practices of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2306.11393v1.pdf,2023-06-20,," Humankind is entering a novel creative era in which anybody can synthesize digital information using generative artificial intelligence (AI). Text-to-image generation, in particular, has become vastly popular and millions of practitioners produce AI-generated images and AI art online. This chapter first gives an overview of the key developments that enabled a healthy co-creative online ecosystem around text-to-image generation to rapidly emerge, followed by a high-level description of key elements in this ecosystem. A particular focus is placed on prompt engineering, a creative practice that has been embraced by the AI art community. It is then argued that the emerging co-creative ecosystem constitutes an intelligent system on its own - a system that both supports human creativity, but also potentially entraps future generations and limits future development efforts in AI. The chapter discusses the potential risks and dangers of cultivating this co-creative ecosystem, such as the bias inherent in today's training data, potential quality degradation in future image generation systems due to synthetic data becoming common place, and the potential long-term effects of text-to-image generation on people's imagination, ambitions, and development.",,arXiv,"['cs.cy', 'cs.ai', 'k.4; j.5; i.2.0; k.5.m']",, -630,chitchat or deep talk prompt engineering for process mining,"['Urszula Jessen', 'Michal Sroka', 'Dirk Fahland']",http://arxiv.org/pdf/2307.09909v1.pdf,2023-07-19,," This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amend many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse datasets.",,arXiv,['cs.ai'],, -631,sentimentgpt exploiting gpt for advanced sentiment analysis and its departure from current machine learning,"['Kiana Kheiri', 'Hamid Karimi']",http://arxiv.org/pdf/2307.10234v2.pdf,2023-07-16,," This study presents a thorough examination of various Generative Pretrained Transformer (GPT) methodologies in sentiment analysis, specifically in the context of Task 4 on the SemEval 2017 dataset. Three primary strategies are employed: 1) prompt engineering using the advanced GPT-3.5 Turbo, 2) fine-tuning GPT models, and 3) an inventive approach to embedding classification. The research yields detailed comparative insights among these strategies and individual GPT models, revealing their unique strengths and potential limitations.
Additionally, the study compares these GPT-based methodologies with other current, high-performing models previously used with the same dataset. The results illustrate the significant superiority of the GPT approaches in terms of predictive performance, more than 22\% in F1-score compared to the state-of-the-art. Further, the paper sheds light on common challenges in sentiment analysis tasks, such as understanding context and detecting sarcasm. It underscores the enhanced capabilities of the GPT models to effectively handle these complexities. Taken together, these findings highlight the promising potential of GPT models in sentiment analysis, setting the stage for future research in this field. The code can be found at https://github.com/DSAatUSU/SentimentGPT",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.si']",, -632,domain knowledge distillation from large language model an empirical study in the autonomous driving domain,"['Yun Tang', 'Antonio A. Bruto da Costa', 'Jason Zhang', 'Irvine Patrick', 'Siddartha Khastgir', 'Paul Jennings']",http://arxiv.org/pdf/2307.11769v1.pdf,2023-07-17,," Engineering knowledge-based (or expert) systems require extensive manual effort and domain knowledge. As Large Language Models (LLMs) are trained using an enormous amount of cross-domain knowledge, it becomes possible to automate such engineering processes. This paper presents an empirical automation and semi-automation framework for domain knowledge distillation using prompt engineering and the LLM ChatGPT. We assess the framework empirically in the autonomous driving domain and present our key observations. In our implementation, we construct the domain knowledge ontology by ""chatting"" with ChatGPT. The key finding is that while fully automated domain ontology construction is possible, human supervision and early intervention typically improve efficiency and output quality as they lessen the effects of response randomness and the butterfly effect. We, therefore, also develop a web-based distillation assistant enabling supervision and flexible intervention at runtime. We hope our findings and tools could inspire future research toward revolutionizing the engineering of knowledge-based systems across application domains.",,arXiv,['cs.cl'],, -633,alphagpt humanai interactive alpha mining for quantitative investment,"['Saizhuo Wang', 'Hang Yuan', 'Leon Zhou', 'Lionel M. Ni', 'Heung-Yeung Shum', 'Jian Guo']",http://arxiv.org/pdf/2308.00016v1.pdf,2023-07-31,," One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, either hand-crafted factor synthesizing or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quants. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to ``understand'' the ideas of quant researchers and outputs creative, insightful, and effective alphas.
We demonstrate the effectiveness and advantage of Alpha-GPT via a number of alpha mining experiments.",,arXiv,"['q-fin.cp', 'cs.ai', 'cs.cl']",, -634,optimizing machine translation through prompt engineering an investigation into chatgpt's customizability,['Masaru Yamada'],http://arxiv.org/pdf/2308.01391v1.pdf,2023-08-02,," This paper explores the influence of integrating the purpose of the translation and the target audience into prompts on the quality of translations produced by ChatGPT. Drawing on previous translation studies, industry practices, and ISO standards, the research underscores the significance of the pre-production phase in the translation process. The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations, a feat yet to be realized by conventional Machine Translation (MT). The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions. The evaluation is conducted from a practicing translator's viewpoint, both subjectively and qualitatively, supplemented by the use of OpenAI's word embedding API for cosine similarity calculations. The findings suggest that the integration of the purpose and target audience into prompts can indeed modify the generated translations, generally enhancing the translation quality by industry standards. The study also demonstrates the practical application of the ""good translation"" concept, particularly in the context of marketing documents and culturally dependent idioms.",,arXiv,['cs.cl'],, -635,interact exploring the potentials of chatgpt as a cooperative agent,"['Po-Lin Chen', 'Cheng-Shang Chang']",http://arxiv.org/pdf/2308.01552v1.pdf,2023-08-03,," This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it a numerous roles like a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",, -636,data race detection using large language models,"['Le Chen', 'Xianzhong Ding', 'Murali Emani', 'Tristan Vanderbruggen', 'Pei-hung Lin', 'Chuanhua Liao']",http://arxiv.org/pdf/2308.07505v2.pdf,2023-08-15,," Large language models (LLMs) are demonstrating significant promise as an alternate strategy to facilitate analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation. In this paper, we explore a novel LLM-based data race detection approach combining prompting engineering and fine-tuning techniques. We create a dedicated dataset named DRB-ML, which is derived from DataRaceBench, with fine-grain labels showing the presence of data race pairs and their associated variables, line numbers, and read/write information. DRB-ML is then used to evaluate representative LLMs and fine-tune open-source ones.
Our experiment shows that LLMs can be a viable approach to data race detection. However, they still cannot compete with traditional data race detection tools when we need detailed information about variable pairs causing data races.",,arXiv,"['cs.lg', 'cs.cl']",, -637,datatotext generation for severely underresourced languages with gpt35 a bit of help needed from google translate,"['Michela Lorandi', 'Anya Belz']",http://arxiv.org/pdf/2308.09957v1.pdf,2023-08-19,," LLMs like GPT are great at tasks involving English which dominates in their training data. In this paper, we look at how they cope with tasks involving languages that are severely under-represented in their training data, in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. During the prompt-engineering phase we tested a range of prompt types and formats on GPT-3.5 and~4 with a small sample of example input/output pairs. We then fully evaluated the two most promising prompts in two scenarios: (i) direct generation into the under-resourced language, and (ii) generation into English followed by translation into the under-resourced language. We find that few-shot prompting works better for direct generation into under-resourced languages, but that the difference disappears when pivoting via English. The few-shot + translation system variants were submitted to the WebNLG 2023 shared task where they outperformed competitor systems by substantial margins in all languages on all metrics. We conclude that good performance on under-resourced languages can be achieved out-of-the box with state-of-the-art LLMs. However, our best results (for Welsh) remain well below the lowest ranked English system at WebNLG'20.",,arXiv,"['cs.cl', 'cs.ai']",, -638,"furchat an embodied conversational agent using llms, combining open and closeddomain dialogue with facial expressions","['Neeraj Cherakara', 'Finny Varghese', 'Sheena Shabana', 'Nivan Nelson', 'Abhiram Karukayil', 'Rohith Kulothungan', 'Mohammed Afil Farhan', 'Birthe Nesset', 'Meriam Moujahid', 'Tanvi Dinkar', 'Verena Rieser', 'Oliver Lemon']",http://arxiv.org/pdf/2308.15214v2.pdf,2023-08-29,," We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation. We deployed the system onto a Furhat robot, which is highly expressive and capable of using both verbal and nonverbal cues during interaction. The system was designed specifically for the National Robotarium to interact with visitors through natural conversations, providing them with information about the facilities, research, news, upcoming events, etc. The system utilises the state-of-the-art GPT-3.5 model to generate such information along with domain-general conversations and facial expressions based on prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ro']",, -639,linking microblogging sentiments to stock price movement an application of gpt4,"['Rick Steinert', 'Saskia Altmann']",http://arxiv.org/pdf/2308.16771v1.pdf,2023-08-31,," This paper investigates the potential improvement of the GPT-4 Language Learning Model (LLM) in comparison to BERT for modeling same-day daily stock price movements of Apple and Tesla in 2017, based on sentiment analysis of microblogging messages. We recorded daily adjusted closing prices and translated them into up-down movements. Sentiment for each day was extracted from messages on the Stocktwits platform using both LLMs.
We develop a novel method to engineer a comprehensive prompt for contextual sentiment analysis which unlocks the true capabilities of modern LLM. This enables us to carefully retrieve sentiments, perceived advantages or disadvantages, and the relevance towards the analyzed company. Logistic regression is used to evaluate whether the extracted message contents reflect stock price movements. As a result, GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six months and substantially exceeding a naive buy-and-hold strategy, reaching a peak accuracy of 71.47 % in May. The study also highlights the importance of prompt engineering in obtaining desired outputs from GPT-4's contextual abilities. However, the costs of deploying GPT-4 and the need for fine-tuning prompts highlight some practical considerations for its use.",,arXiv,"['q-fin.st', 'q-fin.cp']",, -640,fiat fusing learning paradigms with instructionaccelerated tuning,"['Xinyi Wang', 'John Wieting', 'Jonathan H. Clark']",http://arxiv.org/pdf/2309.04663v2.pdf,2023-09-09,," Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with their own trade-offs based on available data, model size, compute cost, ease-of-use, and final quality with neither solution performing well across-the-board. In this article, we first describe ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called FIAT that fuses the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very largest models while also using similar methods to perform parameter updates on a modestly-sized LLM with parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of multilingual tasks and observe that FIAT performs better than both ICL and fine-tuning at scales ranging from 100-10,000 training examples. We hope that FIAT provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.",,arXiv,"['cs.cl', 'cs.ai']",, -641,detecting natural language biases with promptbased learning,"['Md Abdul Aowal', 'Maliha T Islam', 'Priyanka Mary Mammen', 'Sandesh Shetty']",http://arxiv.org/pdf/2309.05227v1.pdf,2023-09-11,," In this project, we want to explore the newly emerging field of prompt engineering and apply it to the downstream task of detecting LM biases. More concretely, we explore how to design prompts that can indicate 4 different types of biases: (1) gender, (2) race, (3) sexual orientation, and (4) religion-based. Within our project, we experiment with different manually crafted prompts that can draw out the subtle biases that may be present in the language model. We apply these prompts to multiple variations of popular and well-recognized models: BERT, RoBERTa, and T5 to evaluate their biases.
We provide a comparative analysis of these models and assess them using a two-fold method: use human judgment to decide whether model predictions are biased and utilize model-level judgment (through further prompts) to understand if a model can self-diagnose the biases of its own prediction.",,arXiv,"['cs.cl', 'cs.ai']",, -642,large language models for failure mode classification an investigation,"['Michael Stewart', 'Melinda Hodkiewicz', 'Sirui Li']",http://arxiv.org/pdf/2309.08181v1.pdf,2023-09-15,," In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.",,arXiv,['cs.cl'],, -643,dynacon dynamic robot planner with contextual awareness via llms,"['Gyeongmin Kim', 'Taehyeon Kim', 'Shyam Sundar Kannan', 'Vishnunandan L. N. Venkatesh', 'Donghan Kim', 'Byung-Cheol Min']",http://arxiv.org/pdf/2309.16031v1.pdf,2023-09-27,," Mobile robots often rely on pre-existing maps for effective path planning and navigation. However, when these maps are unavailable, particularly in unfamiliar environments, a different approach become essential. This paper introduces DynaCon, a novel system designed to provide mobile robots with contextual awareness and dynamic adaptability during navigation, eliminating the reliance of traditional maps. DynaCon integrates real-time feedback with an object server, prompt engineering, and navigation modules. By harnessing the capabilities of Large Language Models (LLMs), DynaCon not only understands patterns within given numeric series but also excels at categorizing objects into matched spaces. This facilitates dynamic path planner imbued with contextual awareness. We validated the effectiveness of DynaCon through an experiment where a robot successfully navigated to its goal using reasoning. Source code and experiment videos for this work can be found at: https://sites.google.com/view/dynacon.",,arXiv,['cs.ro'],, -644,cyber sentinel exploring conversational agents in streamlining security tasks with gpt4,"['Mehrdad Kaheh', 'Danial Khosh Kholgh', 'Panos Kostakos']",http://arxiv.org/pdf/2309.16422v1.pdf,2023-09-28,," In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system that is effectively capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive/reactive security actions when instructed by the user.
Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that not only does this framework enhance the system's transparency (Explainable AI) but also streamlines the decision-making process and responding to threats (Actionable AI), therefore marking a significant advancement in the realm of cybersecurity communication.",,arXiv,['cs.cr'],, -645,gptutor an opensource ai pair programming tool alternative to copilot,"['Eason Chen', 'Ray Huang', 'Justa Liang', 'Damien Chen', 'Pierce Hung']",http://arxiv.org/pdf/2310.13896v3.pdf,2023-10-21,," This paper presents the latest progress of GPTutor: a ChatGPT-powered programming tool extension in Visual Studio Code. The emergence of Large Language Models (LLMs) has improved software development efficiency, but their performance can be hindered by training data limitations and prompt design issues. Existing LLM development tools often operate as black boxes, with users unable to view the prompts used and unable to improve performance by correcting prompts when errors occur. To address the aforementioned issues, GPTutor was introduced as an open-source AI pair programming tool, offering an alternative to Copilot. GPTutor empowers users to customize prompts for various programming languages and scenarios, with support for 120+ human languages and 50+ programming languages. Users can fine-tune prompts to correct the errors from LLM for precision and efficient code generation. At the end of the paper, we underscore GPTutor's potential through examples, including demonstrating its proficiency in interpreting and generating Sui-Move, a newly introduced smart contract language, using prompt engineering.",,arXiv,['cs.hc'],, -646,openended instructable embodied agents with memoryaugmented large language models,"['Gabriel Sarch', 'Yue Wu', 'Michael J. Tarr', 'Katerina Fragkiadaki']",http://arxiv.org/pdf/2310.15127v1.pdf,2023-10-23,," Pre-trained and frozen LLMs can effectively map simple scene re-arrangement instructions to programs over a robot's visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user's idiosyncratic procedures, not known during prompt engineering time, fixed prompts fall short. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction or VLM description, and used as in-context prompt examples for LLM querying. The memory is expanded during deployment to include pairs of user's language and action plans, to assist future inferences and personalize them to the user's language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with 1.7x improvement over the previous SOTA for TfD.
Our models, code and video results can be found in our project's website: https://helper-agent-llm.github.io.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",, -647,large language models for aspectbased sentiment analysis,"['Paul F. Simmering', 'Paavo Huoviala']",http://arxiv.org/pdf/2310.18025v1.pdf,2023-10-27,," Large language models (LLMs) offer unprecedented text completion capabilities. As general models, they can fulfill a wide range of roles, including those of more specialized models. We assess the performance of GPT-4 and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art F1 score of 83.8 on the joint aspect term extraction and polarity classification task of the SemEval-2014 Task 4, improving upon InstructABSA [@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000 times more model parameters and thus increased inference cost. We discuss the the cost-performance trade-offs of different models, and analyze the typical errors that they make. Our results also indicate that detailed prompts improve performance in zero-shot and few-shot settings but are not necessary for fine-tuned models. This evidence is relevant for practioners that are faced with the choice of prompt engineering versus fine-tuning when using LLMs for ABSA.",,arXiv,"['cs.cl', 'cs.ai']",, -648,can large language models capture public opinion about global warming an empirical assessment of algorithmic fidelity and bias,"['S. Lee', 'T. Q. Peng', 'M. H. Goldberg', 'S. A. Rosenthal', 'J. E. Kotcher', 'E. W. Maibach', 'A. Leiserowitz']",http://arxiv.org/pdf/2311.00217v1.pdf,2023-11-01,," Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations.",,arXiv,"['cs.ai', 'cs.cy']",, -649,noisy exemplars make large language models more robust a domainagnostic behavioral analysis,"['Hongyi Zheng', 'Abulhair Saparov']",http://arxiv.org/pdf/2311.00258v1.pdf,2023-11-01,," Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques.
Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstractions (e.g. lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs. Throughout our experiments, we find that models are more sensitive to certain perturbations such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.",,arXiv,"['cs.cl', 'cs.lg']",, -650,instruction distillation makes large language models efficient zeroshot rankers,"['Weiwei Sun', 'Zheng Chen', 'Xinyu Ma', 'Lingyong Yan', 'Shuaiqiang Wang', 'Pengjie Ren', 'Zhumin Chen', 'Dawei Yin', 'Zhaochun Ren']",http://arxiv.org/pdf/2311.01555v1.pdf,2023-11-02,," Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as zero-shot relevance rankers. The typical approach involves making comparisons between pairs or lists of documents. Although effective, these listwise and pairwise methods are not efficient and also heavily rely on intricate prompt engineering. To tackle this problem, we introduce a novel instruction distillation method. The key idea is to distill the pairwise ranking ability of open-sourced LLMs to a simpler but more efficient pointwise ranking. Specifically, given the same LLM, we first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions. Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that instruction distillation can improve efficiency by 10 to 100x and also enhance the ranking performance of LLMs. Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT.",,arXiv,"['cs.ir', 'cs.cl']",, -651,indicative summarization of long discussions,"['Shahbaz Syed', 'Dominik Schwabe', 'Khalid Al-Khatib', 'Martin Potthast']",http://arxiv.org/pdf/2311.01882v1.pdf,2023-11-03,," Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one's own arguments, but may also gather a broad cross-section of others' arguments. However, the resulting long discussions are difficult to overview. This paper presents a novel unsupervised approach using large language models (LLMs) to generating indicative summaries for long discussions that basically serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19~LLMs for generative cluster labeling and frame classification.
To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called Discussion Explorer: It shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.",,arXiv,['cs.cl'],, -652,automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models,"['Jake Chanenson', 'Madison Pickering', 'Noah Apthorpe']",http://arxiv.org/pdf/2311.02192v1.pdf,2023-11-03,," Identifying contextual integrity (CI) and governing knowledge commons (GKC) parameters in privacy policy texts can facilitate normative privacy analysis. However, GKC-CI annotation has heretofore required manual or crowdsourced effort. This paper demonstrates that high-accuracy GKC-CI parameter annotation of privacy policies can be performed automatically using large language models. We fine-tune 18 open-source and proprietary models on 21,588 GKC-CI annotations from 16 ground truth privacy policies. Our best-performing model (fine-tuned GPT-3.5 Turbo with prompt engineering) has an accuracy of 86%, exceeding the performance of prior crowdsourcing approaches despite the complexity of privacy policy texts and the nuance of the GKC-CI annotation task. We apply our best-performing model to privacy policies from 164 popular online services, demonstrating the effectiveness of scaling GKC-CI annotation for data exploration. We make all annotated policies as well as the training data and scripts needed to fine-tune our best-performing model publicly available for future research.",,arXiv,"['cs.cy', 'cs.cl', 'cs.lg']",, -653,requirements engineering using generative ai prompts and prompting patterns,"['Krishna Ronanki', 'Beatriz Cabrero-Daniel', 'Jennifer Horkoff', 'Christian Berger']",http://arxiv.org/pdf/2311.03832v1.pdf,2023-11-07,," [Context]: Companies are increasingly recognizing the importance of automating Requirements Engineering (RE) tasks due to their resource-intensive nature. The advent of GenAI has made these tasks more amenable to automation, thanks to its ability to understand and interpret context effectively. [Problem]: However, in the context of GenAI, prompt engineering is a critical factor for success. Despite this, we currently lack tools and methods to systematically assess and determine the most effective prompt patterns to employ for a particular RE task. [Method]: Two tasks related to requirements, specifically requirement classification and tracing, were automated using the GPT-3.5 turbo API. The performance evaluation involved assessing various prompts created using 5 prompt patterns and implemented programmatically to perform the selected RE tasks, focusing on metrics such as precision, recall, accuracy, and F-Score. [Results]: This paper evaluates the effectiveness of the 5 prompt patterns' ability to make GPT-3.5 turbo perform the selected RE tasks and offers recommendations on which prompt pattern to use for a specific RE task. Additionally, it also provides an evaluation framework as a reference for researchers and practitioners who want to evaluate different prompt patterns for different RE tasks.",,arXiv,['cs.se'],, -654,actionclip a new paradigm for video action recognition,"['Mengmeng Wang', 'Jiazheng Xing', 'Yong Liu']",http://arxiv.org/pdf/2109.08472v1.pdf,2021-09-17,," The canonical approach to video action recognition dictates a neural model to do a classic and standard 1-of-N majority vote task.
They are trained to predict a fixed set of predefined categories, limiting their transferable ability on new datasets with unseen concepts. In this paper, we provide a new perspective on action recognition by attaching importance to the semantic information of label texts rather than simply mapping them into numbers. Specifically, we model this task as a video-text matching problem within a multimodal learning framework, which strengthens the video representation with more semantic language supervision and enables our model to do zero-shot action recognition without any further labeled data or parameters requirements. Moreover, to handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub ""pre-train, prompt and fine-tune"". This paradigm first learns powerful representations from pre-training on a large amount of web image-text or video-text data. Then it makes the action recognition task to act more like pre-training problems via prompt engineering. Finally, it end-to-end fine-tunes on target datasets to obtain strong performance. We give an instantiation of the new paradigm, ActionCLIP, which not only has superior and flexible zero-shot/few-shot transfer ability but also reaches a top performance on general action recognition task, achieving 83.8% top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone. Code is available at https://github.com/sallymmx/ActionCLIP.git",,arXiv,['cs.cv'],, -655,learning to prompt for openvocabulary object detection with visionlanguage model,"['Yu Du', 'Fangyun Wei', 'Zihe Zhang', 'Miaojing Shi', 'Yue Gao', 'Guoqi Li']",http://arxiv.org/pdf/2203.14940v1.pdf,2022-03-28,," Recently, vision-language pre-training shows great potential in open-vocabulary object detection, where detectors trained on base classes are devised for detecting new classes. The class text embedding is firstly generated by feeding prompts to the text encoder of a pre-trained vision-language model. It is then used as the region classifier to supervise the training of a detector. The key element that leads to the success of this model is the proper prompt, which requires careful words tuning and ingenious design. To avoid laborious prompt engineering, there are some prompt representation learning methods being proposed for the image classification task, which however can only be sub-optimal solutions when applied to the detection task. In this paper, we introduce a novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model. Different from the previous classification-oriented methods, DetPro has two highlights: 1) a background interpretation scheme to include the proposals in image background into the prompt training; 2) a context grading scheme to separate proposals in image foreground for tailored prompt training. We assemble DetPro with ViLD, a recent state-of-the-art open-world object detector, and conduct experiments on the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365 datasets. Experimental results show that our DetPro outperforms the baseline ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the novel classes of LVIS.
Code and models are available at https://github.com/dyabel/detpro.",,arXiv,['cs.cv'],, -656,no token left behind explainabilityaided image classification and generation,"['Roni Paiss', 'Hila Chefer', 'Lior Wolf']",http://arxiv.org/pdf/2204.04908v2.pdf,2022-04-11,," The application of zero-shot learning in computer vision has been revolutionized by the use of image-text matching models. The most notable example, CLIP, has been widely used for both zero-shot classification and guiding generative models with a text prompt. However, the zero-shot use of CLIP is unstable with respect to the phrasing of the input text, making it necessary to carefully engineer the prompts used. We find that this instability stems from a selective similarity score, which is based only on a subset of the semantically meaningful input tokens. To mitigate it, we present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input, in addition to employing the CLIP similarity loss used in previous works. When applied to one-shot classification through prompt engineering, our method yields an improvement in the recognition rate, without additional training or fine-tuning. Additionally, we show that CLIP guidance of generative models using our method significantly improves the generated images. Finally, we demonstrate a novel use of CLIP guidance for text-based image generation with spatial conditioning on object location, by requiring the image explainability heatmap for each object to be confined to a pre-determined bounding box.",,arXiv,['cs.cv'],, -657,on measuring social biases in promptbased multitask learning,"['Afra Feyza Akyürek', 'Sejin Paik', 'Muhammed Yusuf Kocyigit', 'Seda Akbiyik', 'Şerife Leman Runyun', 'Derry Wijaya']",http://arxiv.org/pdf/2205.11605v1.pdf,2022-05-23,," Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts, can generalize into novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts in achieving superior performance. We consider an alternative measure and inquire whether the way in which an input is encoded affects social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: question-answer format and premise-hypothesis format. We use an existing bias benchmark for the former BBQ and create the first bias benchmark in natural language inference BBNLI with hand-written hypotheses while also converting each benchmark into the other form. The results on two benchmarks suggest that given two different formulations of essentially the same input, T0 conspicuously acts more biased in question answering form, which is seen during training, compared to premise-hypothesis form which is unlike its training examples. Code and data are released under https://github.com/feyzaakyurek/bbnli.",,arXiv,"['cs.cl', 'cs.cy']",, -658,ordinalclip learning rank prompts for languageguided ordinal regression,"['Wanhua Li', 'Xiaoke Huang', 'Zheng Zhu', 'Yansong Tang', 'Xiu Li', 'Jie Zhou', 'Jiwen Lu']",http://arxiv.org/pdf/2206.02338v2.pdf,2022-06-06,," This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts.
These methods are easy to overfit and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. While prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can only save the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.",,arXiv,['cs.cv'],, -659,"chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models","['Paula Maddigan', 'Teo Susnjak']",http://arxiv.org/pdf/2302.02094v2.pdf,2023-02-04,," The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques. However, the implementation of workable NLIs has always been challenging due to the inherent ambiguity of natural language, as well as in consequence of unclear and poorly written user queries which pose problems for existing language models in discerning user intent. Instead of pursuing the usual path of developing new iterations of language models, this study uniquely proposes leveraging the advancements in pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly into code for appropriate visualisations. This paper presents a novel system, Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates how, with effective prompt engineering, the complex problem of language understanding can be solved more efficiently, resulting in simpler and more accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs together with the proposed prompts offer a reliable approach to rendering visualisations from natural language queries, even when queries are highly misspecified and underspecified. This solution also presents a significant reduction in costs for the development of NLI systems, while attaining greater visualisation inference abilities compared to traditional NLP approaches that use hand-crafted grammar rules and tailored models. This study also presents how LLM prompts can be constructed in a way that preserves data security and privacy while being generalisable to different datasets.
This work compares the performance of GPT-3, Codex and ChatGPT across a number of case studies and contrasts the performances with prior studies.",,arXiv,['cs.hc'],, -660,prompt stealing attacks against texttoimage generation models,"['Xinyue Shen', 'Yiting Qu', 'Michael Backes', 'Yang Zhang']",http://arxiv.org/pdf/2302.09923v1.pdf,2023-02-20,," Text-to-Image generation models have revolutionized the artwork design process and enabled anyone to create high-quality images by entering text descriptions called prompts. Creating a high-quality prompt that consists of a subject and several modifiers can be time-consuming and costly. In consequence, a trend of trading high-quality prompts on specialized marketplaces has emerged. In this paper, we propose a novel attack, namely prompt stealing attack, which aims to steal prompts from generated images by text-to-image generation models. Successful prompt stealing attacks direct violate the intellectual property and privacy of prompt engineers and also jeopardize the business model of prompt trading marketplaces. We first perform a large-scale analysis on a dataset collected by ourselves and show that a successful prompt stealing attack should consider a prompt's subject as well as its modifiers. We then propose the first learning-based prompt stealing attack, PromptStealer, and demonstrate its superiority over two baseline methods quantitatively and qualitatively. We also make some initial attempts to defend PromptStealer. In general, our study uncovers a new attack surface in the ecosystem created by the popular text-to-image generation models. We hope our results can help to mitigate the threat. To facilitate research in this field, we will share our dataset and code with the community.",,arXiv,"['cs.cr', 'cs.lg']",, -661,extracting accurate materials data from research papers with conversational language models and prompt engineering,"['Maciej P. Polak', 'Dane Morgan']",http://arxiv.org/pdf/2303.05352v2.pdf,2023-03-07,," There has been a growing effort to replace hand extraction of data from research papers with automated data extraction based on natural language processing, language models, and recently, large language models (LLMs). Although these methods enable efficient extraction of data from large sets of research papers, they require a significant amount of up-front effort, expertise, and coding. In this work we propose the ChatExtract method that can fully automate very accurate data extraction with minimal initial effort and background, using an advanced conversational LLM. ChatExtract consists of a set of engineered prompts applied to a conversational LLM that both identify sentences with data, extract that data, and assure the data's correctness through a series of follow-up questions. These follow-up questions largely overcome known issues with LLMs providing factually inaccurate responses. ChatExtract can be applied with any conversational LLMs and yields very high quality data extraction. In tests on materials data we find precision and recall both close to 90% from the best conversational LLMs, like ChatGPT-4. We demonstrate that the exceptional performance is enabled by the information retention in a conversational model combined with purposeful redundancy and introducing uncertainty through follow-up prompts. These results suggest that approaches similar to ChatExtract, due to their simplicity, transferability, and accuracy are likely to become powerful tools for data extraction in the near future.
Finally, databases for critical cooling rates of metallic glasses and yield strengths of high entropy alloys are developed using ChatExtract.",,arXiv,"['cs.cl', 'cond-mat.mtrl-sci']",, -662,ten quick tips for harnessing the power of chatgptgpt4 in computational biology,"['Tiago Lubiana', 'Rafael Lopes', 'Pedro Medeiros', 'Juan Carlo Silva', 'Andre Nicolau Aquime Goncalves', 'Vinicius Maracaja-Coutinho', 'Helder I Nakaya']",http://arxiv.org/pdf/2303.16429v1.pdf,2023-03-29,," The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in the scientific community. ChatGPT is a general-purpose chatbot powered by large language models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerous fields, including computational biology. In this article, we offer ten tips based on our experience with ChatGPT to assist computational biologists in optimizing their workflows. We have collected relevant prompts and reviewed the nascent literature in the field, compiling tips we project to remain pertinent for future ChatGPT and LLM iterations, ranging from code refactoring to scientific writing to prompt engineering. We hope our work will help bioinformaticians to complement their workflows while staying aware of the various implications of using this technology. Additionally, to track new and creative applications for bioinformatics tools such as ChatGPT, we have established a GitHub repository at https://github.com/csbl-br/awesome-compbio-chatgpt. Our belief is that ethical adherence to ChatGPT and other LLMs will increase the efficiency of computational biologists, ultimately advancing the pace of scientific discovery in the life sciences.",,arXiv,"['q-bio.ot', '92-04']",, -663,towards interpretable mental health analysis with large language models,"['Kailai Yang', 'Shaoxiong Ji', 'Tianlin Zhang', 'Qianqian Xie', 'Ziyan Kuang', 'Sophia Ananiadou']",http://arxiv.org/pdf/2304.03347v4.pdf,2023-04-06,," The latest large language models (LLMs) such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, lack of prompting strategies, and ignorance of exploring LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We convey strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis.
In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.",,arXiv,['cs.cl'],, -664,lowcode llm visual programming over llms,"['Yuzhe Cai', 'Shaoguang Mao', 'Wenshan Wu', 'Zehua Wang', 'Yaobo Liang', 'Tao Ge', 'Chenfei Wu', 'Wang You', 'Ting Song', 'Yan Xia', 'Jonathan Tien', 'Nan Duan']",http://arxiv.org/pdf/2304.08103v2.pdf,2023-04-17,," Effectively utilizing LLMs for complex tasks is challenging, often involving a time-consuming and uncontrollable prompt engineering process. This paper introduces a novel human-LLM interaction framework, Low-code LLM. It incorporates six types of simple low-code visual programming interactions, all supported by clicking, dragging, or text editing, to achieve more controllable and stable responses. Through visual interaction with a graphical user interface, users can incorporate their ideas into the workflow without writing trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM that designs a structured planning workflow for complex tasks, which can be correspondingly edited and confirmed by users through low-code visual programming operations, and an Executing LLM that generates responses following the user-confirmed workflow. We highlight three advantages of the low-code LLM: controllable generation results, user-friendly human-LLM interaction, and broadly applicable scenarios. We demonstrate its benefits using four typical applications. By introducing this approach, we aim to bridge the gap between humans and LLMs, enabling more effective and efficient utilization of LLMs for complex tasks. Our system will be soon publicly available at LowCodeLLM.",,arXiv,"['cs.cl', 'cs.hc']",, -665,is chatgpt the ultimate programming assistant how far is it,"['Haoye Tian', 'Weiqi Lu', 'Tsz On Li', 'Xunzhu Tang', 'Shing-Chi Cheung', 'Jacques Klein', 'Tegawendé F. Bissyandé']",http://arxiv.org/pdf/2304.11938v2.pdf,2023-04-24,," Recently, the ChatGPT LLM has received great attention: it can be used as a bot for discussing source code, prompting it to suggest changes, provide descriptions or even generate code. Typical demonstrations generally focus on existing benchmarks, which may have been used in model training (i.e., data leakage). To assess the feasibility of using an LLM as a useful assistant bot for programmers, we must assess its realistic capabilities on unseen problems as well as its capabilities on various tasks. In this paper, we present an empirical study of ChatGPT's potential as a fully automated programming assistant, focusing on the tasks of code generation, program repair, and code summariziation. The study investigates ChatGPT's performance on common programming problems and compares it with state-of-the-art approaches on two benchmarks. Among several findings, our study shows that ChatGPT is effective in dealing with common programming problems. However, our experiments also reveal limitations in terms of its attention span: detailed descriptions will constrain the focus of ChatGPT and prevent it from leveraging its vast knowledge to solve the actual problem. Surprisingly, we have identified the ability of ChatGPT to reason the original intention of the code. We expect future work to build on this insight for dealing with the open question of the oracle problem.
Our findings contribute interesting insights to the development of LLMs for programming assistance, notably by demonstrating the importance of prompt engineering, and providing a better understanding of ChatGPT's practical applications for software engineering.",,arXiv,"['cs.se', 'cs.ai']",, -666,framing the news from human perception to large language model inferences,"['David Alonso del Barrio', 'Daniel Gatica-Perez']",http://arxiv.org/pdf/2304.14456v1.pdf,2023-04-27,," Identifying the frames of news is important to understand the articles' vision, intention, message to be conveyed, and which aspects of the news are emphasized. Framing is a widely studied concept in journalism, and has emerged as a new topic in computing, with the potential to automate processes and facilitate the work of journalism professionals. In this paper, we study this issue with articles related to the Covid-19 anti-vaccine movement. First, to understand the perspectives used to treat this theme, we developed a protocol for human labeling of frames for 1786 headlines of No-Vax movement articles of European newspapers from 5 countries. Headlines are key units in the written press, and worth of analysis as many people only read headlines (or use them to guide their decision for further reading.) Second, considering advances in Natural Language Processing (NLP) with large language models, we investigated two approaches for frame inference of news headlines: first with a GPT-3.5 fine-tuning approach, and second with GPT-3.5 prompt-engineering. Our work contributes to the study and analysis of the performance that these models have to facilitate journalistic tasks like classification of frames, while understanding whether the models are able to replicate human perception in the identification of these frames.",,arXiv,"['cs.cl', 'cs.hc']",, -667,sensitivity and robustness of large language models to prompt template in japanese text classification tasks,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2305.08714v2.pdf,2023-05-15,," Prompt engineering relevance research has seen a notable surge in recent years, primarily driven by advancements in pre-trained language models and large language models. However, a critical issue has been identified within this domain: the inadequate of sensitivity and robustness of these models towards Prompt Templates, particularly in lesser-studied languages such as Japanese. This paper explores this issue through a comprehensive evaluation of several representative Large Language Models (LLMs) and a widely-utilized pre-trained model(PLM). These models are scrutinized using a benchmark dataset in Japanese, with the aim to assess and analyze the performance of the current multilingual models in this context. Our experimental results reveal startling discrepancies. A simple modification in the sentence structure of the Prompt Template led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44. This observation underscores the fact that even the highly performance GPT-4 model encounters significant stability issues when dealing with diverse Japanese prompt templates, rendering the consistency of the model's output results questionable.
In light of these findings, we conclude by proposing potential research trajectories to further enhance the development and performance of Large Language Models in their current stage.",,arXiv,"['cs.cl', 'cs.ai']",, -668,game of tones faculty detection of gpt4 generated content in university assessments,"['Mike Perkins', 'Jasper Roe', 'Darius Postma', 'James McGaughran', 'Don Hickerson']",http://arxiv.org/pdf/2305.18081v1.pdf,2023-05-29,," This study explores the robustness of university assessments against the use of Open AI's Generative Pre-Trained Transformer 4 (GPT-4) generated content and evaluates the ability of academic staff to detect its use when supported by the Turnitin Artificial Intelligence (AI) detection tool. The research involved twenty-two GPT-4 generated submissions being created and included in the assessment process to be marked by fifteen different faculty members. The study reveals that although the detection tool identified 91% of the experimental submissions as containing some AI-generated content, the total detected content was only 54.8%. This suggests that the use of adversarial techniques regarding prompt engineering is an effective method in evading AI detection tools and highlights that improvements to AI detection software are needed. Using the Turnitin AI detect tool, faculty reported 54.5% of the experimental submissions to the academic misconduct process, suggesting the need for increased awareness and training into these tools. Genuine submissions received a mean score of 54.4, whereas AI-generated content scored 52.3, indicating the comparable performance of GPT-4 in real-life situations. Recommendations include adjusting assessment strategies to make them more resistant to the use of AI tools, using AI-inclusive assessment where possible, and providing comprehensive training programs for faculty and students. This research contributes to understanding the relationship between AI-generated content and academic assessment, urging further investigation to preserve academic integrity.",,arXiv,"['cs.cy', 'cs.ai', 'k.4']",, -669,the economic tradeoffs of large language models a case study,"['Kristen Howell', 'Gwen Christian', 'Pavel Fomitchov', 'Gitit Kehat', 'Julianne Marzulla', 'Leanne Rolston', 'Jadin Tredup', 'Ilana Zimmerman', 'Ethan Selfridge', 'Joseph Bradley']",http://arxiv.org/pdf/2306.07402v1.pdf,2023-06-08,," Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents. 
We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.",,arXiv,"['cs.cl', 'cs.ai']",, -670,do you still need a manual smart contract audit,"['Isaac David', 'Liyi Zhou', 'Kaihua Qin', 'Dawn Song', 'Lorenzo Cavallaro', 'Arthur Gervais']",http://arxiv.org/pdf/2306.12338v2.pdf,2023-06-21,," We investigate the feasibility of employing large language models (LLMs) for conducting the security audit of smart contracts, a traditionally time-consuming and costly process. Our research focuses on the optimization of prompt engineering for enhanced security analysis, and we evaluate the performance and accuracy of LLMs using a benchmark dataset comprising 52 Decentralized Finance (DeFi) smart contracts that have previously been compromised. Our findings reveal that, when applied to vulnerable contracts, both GPT-4 and Claude models correctly identify the vulnerability type in 40% of the cases. However, these models also demonstrate a high false positive rate, necessitating continued involvement from manual auditors. The LLMs tested outperform a random model by 20% in terms of F1-score. To ensure the integrity of our study, we conduct mutation testing on five newly developed and ostensibly secure smart contracts, into which we manually insert two and 15 vulnerabilities each. This testing yielded a remarkable best-case 78.7% true positive rate for the GPT-4-32k model. We tested both, asking the models to perform a binary classification on whether a contract is vulnerable, and a non-binary prompt. We also examined the influence of model temperature variations and context length on the LLM's performance. Despite the potential for many further enhancements, this work lays the groundwork for a more efficient and economical approach to smart contract security audits.",,arXiv,['cs.cr'],, -671,comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'Kenneth R. Koedinger', 'Vincent Aleven']",http://arxiv.org/pdf/2307.02018v1.pdf,2023-07-05,," Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. 
However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, -672,"software testing with large language model survey, landscape, and vision","['Junjie Wang', 'Yuchao Huang', 'Chunyang Chen', 'Zhe Liu', 'Song Wang', 'Qing Wang']",http://arxiv.org/pdf/2307.07221v1.pdf,2023-07-14,," Pre-trained large language models (LLMs) have recently emerged as a breakthrough technology in natural language processing and artificial intelligence, with the ability to handle large-scale datasets and exhibit remarkable performance across a wide range of tasks. Meanwhile, software testing is a crucial undertaking that serves as a cornerstone for ensuring the quality and reliability of software products. As the scope and complexity of software systems continue to grow, the need for more effective software testing techniques becomes increasingly urgent, and making it an area ripe for innovative approaches such as the use of LLMs. This paper provides a comprehensive review of the utilization of LLMs in software testing. It analyzes 52 relevant studies that have used LLMs for software testing, from both the software testing and LLMs perspectives. The paper presents a detailed discussion of the software testing tasks for which LLMs are commonly used, among which test case preparation and program repair are the most representative ones. It also analyzes the commonly used LLMs, the types of prompt engineering that are employed, as well as the accompanied techniques with these LLMs. It also summarizes the key challenges and potential opportunities in this direction. This work can serve as a roadmap for future research in this area, highlighting potential avenues for exploration, and identifying gaps in our current understanding of the use of LLMs in software testing.",,arXiv,['cs.se'],, -673,gpt3 models are fewshot financial reasoners,"['Raul Salles de Padua', 'Imran Qureshi', 'Mustafa U. Karakaplan']",http://arxiv.org/pdf/2307.13617v2.pdf,2023-07-25,," Financial analysis is an important tool for evaluating company performance. Practitioners work to answer financial questions to make profitable investment decisions, and use advanced quantitative analyses to do so. As a result, Financial Question Answering (QA) is a question answering task that requires deep reasoning about numbers. Furthermore, it is unknown how well pre-trained language models can reason in the financial domain. The current state-of-the-art requires a retriever to collect relevant facts about the financial question from the text and a generator to produce a valid financial program and a final answer. However, recently large language models like GPT-3 have achieved state-of-the-art performance on wide variety of tasks with just a few shot examples. We run several experiments with GPT-3 and find that a separate retrieval model and logic engine continue to be essential components to achieving SOTA performance in this task, particularly due to the precise nature of financial questions and the complex information stored in financial documents. 
With this understanding, our refined prompt-engineering approach on GPT-3 achieves near SOTA accuracy without any fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, -674,evaluating chatgpt textmining of clinical records for obesity monitoring,"['Ivo S. Fins', 'Heather Davies', 'Sean Farrell', 'Jose R. Torres', 'Gina Pinchbeck', 'Alan D. Radford', 'Peter-John Noble']",http://arxiv.org/pdf/2308.01666v1.pdf,2023-08-03,," Background: Veterinary clinical narratives remain a largely untapped resource for addressing complex diseases. Here we compare the ability of a large language model (ChatGPT) and a previously developed regular expression (RegexT) to identify overweight body condition scores (BCS) in veterinary narratives. Methods: BCS values were extracted from 4,415 anonymised clinical narratives using either RegexT or by appending the narrative to a prompt sent to ChatGPT coercing the model to return the BCS information. Data were manually reviewed for comparison. Results: The precision of RegexT was higher (100%, 95% CI 94.81-100%) than the ChatGPT (89.3%; 95% CI 82.75-93.64%). However, the recall of ChatGPT (100%. 95% CI 96.18-100%) was considerably higher than that of RegexT (72.6%, 95% CI 63.92-79.94%). Limitations: Subtle prompt engineering is needed to improve ChatGPT output. Conclusions: Large language models create diverse opportunities and, whilst complex, present an intuitive interface to information but require careful implementation to avoid unpredictable errors.",,arXiv,"['cs.ir', 'cs.cl']",, -675,logprompt prompt engineering towards zeroshot and interpretable log analysis,"['Yilun Liu', 'Shimin Tao', 'Weibin Meng', 'Jingyu Wang', 'Wenbing Ma', 'Yanqing Zhao', 'Yuhang Chen', 'Hao Yang', 'Yanfei Jiang', 'Xun Chen']",http://arxiv.org/pdf/2308.07610v1.pdf,2023-08-15,," Automated log analysis is crucial in modern software-intensive systems for ensuring reliability and resilience throughout software maintenance and engineering life cycles. Existing methods perform tasks such as log parsing and log anomaly detection by providing a single prediction value without interpretation. However, given the increasing volume of system events, the limited interpretability of analysis results hinders analysts' trust and their ability to take appropriate actions. Moreover, these methods require substantial in-domain training data, and their performance declines sharply (by up to 62.5%) in online scenarios involving unseen logs from new domains, a common occurrence due to rapid software updates. In this paper, we propose LogPrompt, a novel zero-shot and interpretable log analysis approach. LogPrompt employs large language models (LLMs) to perform zero-shot log analysis tasks via a suite of advanced prompt strategies tailored for log tasks, which enhances LLMs' performance by up to 107.5% compared with simple prompts. Experiments on nine publicly available evaluation datasets across two tasks demonstrate that LogPrompt, despite using no training data, outperforms existing approaches trained on thousands of logs by up to around 50%. 
We also conduct a human evaluation of LogPrompt's interpretability, with six practitioners possessing over 10 years of experience, who highly rated the generated content in terms of usefulness and readability (averagely 4.42/5). LogPrompt also exhibits remarkable compatibility with open-source and smaller-scale LLMs, making it flexible for practical deployment.",,arXiv,"['cs.se', 'cs.cl']",, -676,investigating the limitation of clip models the worstperforming categories,"['Jie-Jing Shao', 'Jiang-Xin Shi', 'Xiao-Wen Yang', 'Lan-Zhe Guo', 'Yu-Feng Li']",http://arxiv.org/pdf/2310.03324v1.pdf,2023-10-05,," Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks. It is usually expected that satisfactory overall accuracy can be achieved across numerous domains through well-designed textual prompts. However, we found that their performance in the worst categories is significantly inferior to the overall performance. For example, on ImageNet, there are a total of 10 categories with class-wise accuracy as low as 0\%, even though the overall performance has achieved 64.1\%. This phenomenon reveals the potential risks associated with using CLIP models, particularly in risk-sensitive applications where specific categories hold significant importance. To address this issue, we investigate the alignment between the two modalities in the CLIP model and propose the Class-wise Matching Margin (\cmm) to measure the inference confusion. \cmm\ can effectively identify the worst-performing categories and estimate the potential performance of the candidate prompts. We further query large language models to enrich descriptions of worst-performing categories and build a weighted ensemble to highlight the efficient prompts. Experimental results clearly verify the effectiveness of our proposal, where the accuracy on the worst-10 categories on ImageNet is boosted to 5.2\%, without manual prompt engineering, laborious optimization, or access to labeled validation data.",,arXiv,"['cs.cv', 'cs.lg']",, -677,large language modelempowered agents for simulating macroeconomic activities,"['Nian Li', 'Chen Gao', 'Yong Li', 'Qingmin Liao']",http://arxiv.org/pdf/2310.10436v1.pdf,2023-10-16,," The advent of the Web has brought about a paradigm shift in traditional economics, particularly in the digital economy era, enabling the precise recording and analysis of individual economic behavior. This has led to a growing emphasis on data-driven modeling in macroeconomics. In macroeconomic research, Agent-based modeling (ABM) emerged as an alternative, evolving through rule-based agents, machine learning-enhanced decision-making, and, more recently, advanced AI agents. However, the existing works are suffering from three main challenges when endowing agents with human-like decision-making, including agent heterogeneity, the influence of macroeconomic trends, and multifaceted economic factors. Large language models (LLMs) have recently gained prominence in offering autonomous human-like characteristics. Therefore, leveraging LLMs in macroeconomic simulation presents an opportunity to overcome traditional limitations. In this work, we take an early step in introducing a novel approach that leverages LLMs in macroeconomic simulation. 
We design prompt-engineering-driven LLM agents to exhibit human-like decision-making and adaptability in the economic environment, with the abilities of perception, reflection, and decision-making to address the abovementioned challenges. Simulation experiments on macroeconomic activities show that LLM-empowered agents can make realistic work and consumption decisions and emerge more reasonable macroeconomic phenomena than existing rule-based or AI agents. Our work demonstrates the promising potential to simulate macroeconomics based on LLM and its human-like characteristics.",,arXiv,['cs.ai'],, -678,large language model for multiobjective evolutionary optimization,"['Fei Liu', 'Xi Lin', 'Zhenkun Wang', 'Shunyu Yao', 'Xialiang Tong', 'Mingxuan Yuan', 'Qingfu Zhang']",http://arxiv.org/pdf/2310.12541v2.pdf,2023-10-19,," Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, of which the search operators need a carefully handcrafted design with domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well on new problems. To tackle the above challenges, this work investigates a novel approach that leverages the powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM behavior, we further design an explicit white-box operator with randomness and propose a new version of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that our proposed method can achieve competitive performance with widely used MOEAs. It is also promising to see the operator only learned from a few instances can have robust generalization performance on unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.",,arXiv,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.et']",, -679,enhancing zeroshot crypto sentiment with finetuned language model and prompt engineering,"['Rahman S M Wahidur', 'Ishmam Tashdeed', 'Manjit Kaur', ' Heung-No-Lee']",http://arxiv.org/pdf/2310.13226v1.pdf,2023-10-20,," Blockchain technology has revolutionized the financial landscape, with cryptocurrencies gaining widespread adoption for their decentralized and transparent nature. As the sentiment expressed on social media platforms can significantly influence cryptocurrency discussions and market movements, sentiment analysis has emerged as a crucial tool for understanding public opinion and predicting market trends. Motivated by the aim to enhance sentiment analysis accuracy in the cryptocurrency domain, this paper investigates fine-tuning techniques on large language models. This paper also investigates the efficacy of supervised fine-tuning and instruction-based fine-tuning on large language models for unseen tasks. Experimental results demonstrate a significant average zero-shot performance gain of 40% after fine-tuning, highlighting the potential of this technique in optimizing pre-trained language model efficiency. 
Additionally, the impact of instruction tuning on models of varying scales is examined, revealing that larger models benefit from instruction tuning, achieving the highest average accuracy score of 75.16%. In contrast, smaller-scale models may experience reduced generalization due to the complete utilization of model capacity. To gain deeper insight about how instruction works with these language models, this paper presents an experimental investigation into the response of an instruction-based model under different instruction tuning setups. The investigation demonstrates that the model achieves an average accuracy score of 72.38% for short and simple instructions. This performance significantly outperforms its accuracy under long and complex instructions by over 12%, thereby effectively highlighting the profound significance of instruction characteristics in maximizing model performance.",,arXiv,['cs.cl'],, -680,promisepromptdriven 3d medical image segmentation using pretrained image foundation models,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2310.19721v3.pdf,2023-10-30,," To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from natural to medical image domains serves as a viable strategy to produce reliable segmentation results. However, several existing barriers between domains need to be broken down, including addressing contrast discrepancies, managing anatomical variability, and adapting 2D pretrained models for 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets for colon and pancreas tumor segmentations, respectively. Compared to the state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at https://github.com/MedICL-VU/ProMISe.",,arXiv,"['eess.iv', 'cs.cv']",, -681,making large language models better data creators,"['Dong-Ho Lee', 'Jay Pujara', 'Mohit Sewak', 'Ryen W. White', 'Sujay Kumar Jauhar']",http://arxiv.org/pdf/2310.20111v1.pdf,2023-10-31,," Although large language models (LLMs) have advanced the state-of-the-art in NLP significantly, deploying them for downstream applications is still challenging due to cost, responsiveness, control, or concerns around privacy and security. As such, trainable models are still the preferred option in some cases. However, these models still require human-labeled data for optimal performance, which is expensive and time-consuming to obtain. In order to address this issue, several techniques to reduce human effort involve labeling or generating data using LLMs. Although these methods are effective for certain applications, in practice they encounter difficulties in real-world scenarios. Labeling data requires careful data selection, while generating data necessitates task-specific prompt engineering. 
In this paper, we propose a unified data creation pipeline that requires only a single formatting example, and which is applicable to a broad range of tasks, including traditionally problematic ones with semantically devoid label spaces. In our experiments we demonstrate that instruction-following LLMs are highly cost-effective data creators, and that models trained with these data exhibit performance better than those trained with human-labeled data (by up to 17.5%) on out-of-distribution evaluation, while maintaining comparable performance on in-distribution tasks. These results have important implications for the robustness of NLP systems deployed in the real-world.",,arXiv,['cs.cl'],, -682,vispercep a visionlanguage approach to enhance visual perception for people with blindness and low vision,"['Yu Hao', 'Fan Yang', 'Hao Huang', 'Shuaihang Yuan', 'Sundeep Rangan', 'John-Ross Rizzo', 'Yao Wang', 'Yi Fang']",http://arxiv.org/pdf/2310.20225v1.pdf,2023-10-31,," People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to the vision loss, pBLV have difficulty in accessing and identifying potential tripping hazards on their own. In this paper, we present a pioneering approach that leverages a large vision-language model to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environments and providing warnings about the potential risks. Our method begins by leveraging a large image tagging model (i.e., Recognize Anything (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt, tailored specifically for pBLV using prompt engineering. By combining the prompt and input image, a large vision-language model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks in the environment by analyzing the environmental objects and scenes, relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method is able to recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.",,arXiv,"['cs.cv', 'cs.ai']",, -683,bigbio a framework for datacentric biomedical natural language processing,"['Jason Alan Fries', 'Leon Weber', 'Natasha Seelam', 'Gabriel Altay', 'Debajyoti Datta', 'Samuele Garda', 'Myungsun Kang', 'Ruisi Su', 'Wojciech Kusa', 'Samuel Cahyawijaya', 'Fabio Barth', 'Simon Ott', 'Matthias Samwald', 'Stephen Bach', 'Stella Biderman', 'Mario Sänger', 'Bo Wang', 'Alison Callahan', 'Daniel León Periñán', 'Théo Gigant', 'Patrick Haller', 'Jenny Chim', 'Jose David Posada', 'John Michael Giorgi', 'Karthik Rangasai Sivaraman', 'Marc Pàmies', 'Marianna Nezhurina', 'Robert Martin', 'Michael Cullan', 'Moritz Freidank', 'Nathan Dahlberg', 'Shubhanshu Mishra', 'Shamik Bose', 'Nicholas Michio Broad', 'Yanis Labrak', 'Shlok S Deshmukh', 'Sid Kiblawi', 'Ayush Singh', 'Minh Chien Vu', 'Trishala Neeraj', 'Jonas Golde', 'Albert Villanova del Moral', 'Benjamin Beilharz']",http://arxiv.org/pdf/2206.15076v1.pdf,2022-06-30,," Training and evaluating language models increasingly requires the construction of meta-datasets -- diverse collections of curated data with clear provenance. 
Natural language prompting has recently led to improved zero-shot generalization by transforming existing, supervised datasets into a diversity of novel pretraining tasks, highlighting the benefits of meta-dataset curation. While successful in general-domain text, translating these data-centric approaches to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBIO a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBIO facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few/zero shot language model evaluation. We discuss our process for task schema harmonization, data auditing, contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBIO is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical",,arXiv,['cs.cl'],, -684,"a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity","['Yejin Bang', 'Samuel Cahyawijaya', 'Nayeon Lee', 'Wenliang Dai', 'Dan Su', 'Bryan Wilie', 'Holy Lovenia', 'Ziwei Ji', 'Tiezheng Yu', 'Willy Chung', 'Quyet V. Do', 'Yan Xu', 'Pascale Fung']",http://arxiv.org/pdf/2302.04023v3.pdf,2023-02-08,," This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available data sets. We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks. We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin script languages than generating them. It is able to generate multimodal content from textual prompts, via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning, hence making it an unreliable reasoner. It is, for example, better at deductive than inductive reasoning. ChatGPT suffers from hallucination problems like other LLMs and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn ""prompt engineering"" fashion. We also release codebase for evaluation set extraction.",,arXiv,"['cs.cl', 'cs.ai']",, -685,zelda video analytics using visionlanguage models,"['Francisco Romero', 'Caleb Winston', 'Johann Hauswald', 'Matei Zaharia', 'Christos Kozyrakis']",http://arxiv.org/pdf/2305.03785v2.pdf,2023-05-05,," Advances in ML have motivated the design of video analytics systems that allow for structured queries over video datasets. 
However, existing systems limit query expressivity, require users to specify an ML model per predicate, rely on complex optimizations that trade off accuracy for performance, and return large amounts of redundant and low-quality results. This paper focuses on the recently developed Vision-Language Models (VLMs) that allow users to query images using natural language like ""cars during daytime at traffic intersections."" Through an in-depth analysis, we show VLMs address three limitations of current video analytics systems: general expressivity, a single general purpose model to query many predicates, and are both simple and fast. However, VLMs still return large numbers of redundant and low-quality results that can overwhelm and burden users. In addition, VLMs often require manual prompt engineering to improve result relevance. We present Zelda: a video analytics system that uses VLMs to return both relevant and semantically diverse results for top-K queries on large video datasets. Zelda prompts the VLM with the user's query in natural language. Zelda then automatically adds discriminator and synonym terms to boost accuracy, and terms to identify low-quality frames. To improve result diversity, Zelda uses semantic-rich VLM embeddings in an algorithm that prunes similar frames while considering their relevance to the query and the number of top-K results requested. We evaluate Zelda across five datasets and 19 queries and quantitatively show it achieves higher mean average precision (up to 1.15x) and improves average pairwise similarity (up to 1.16x) compared to using VLMs out-of-the-box. We also compare Zelda to a state-of-the-art video analytics engine and show that Zelda retrieves results 7.5x (up to 10.4x) faster for the same accuracy and frame diversity.",,arXiv,['cs.db'],, -686,chatgpt chemistry assistant for text mining and prediction of mof synthesis,"['Zhiling Zheng', 'Oufan Zhang', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.11296v2.pdf,2023-06-20,," We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. 
Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.",,arXiv,"['cs.ir', 'cond-mat.mtrl-sci', 'cs.cl', 'physics.chem-ph']",, -687,go beyond the obvious probing the gap of informal reasoning ability between humanity and llms by detective reasoning puzzle benchmark,"['Zhouhon Gu', 'Zihan Li', 'Lin Zhang', 'Zhuozhi Xiong', 'Haoning Ye', 'Yikai Zhang', 'Wenhao Huang', 'Xiaoxuan Zhu', 'Qianyu He', 'Rui Xu', 'Sihang Jiang', 'Shusen Wang', 'Zili Wang', 'Hongwei Feng', 'Zhixu Li', 'Yanghua Xiao']",http://arxiv.org/pdf/2307.05113v2.pdf,2023-07-11,," Informal reasoning ability is the ability to reason based on common sense, experience, and intuition. Humans use informal reasoning every day to extract the most influential elements for their decision-making from a large amount of life-like information. With the rapid development of language models, the realization of general artificial intelligence has emerged with hope. Given the outstanding informal reasoning ability of humans, how much informal reasoning ability language models have has not been well studied by scholars. In order to explore the gap between humans and language models in informal reasoning ability, this paper constructs a Detective Reasoning Benchmark, which is an assembly of 1,200 questions gathered from accessible online resources, aims at evaluating the model's informal reasoning ability in real-life context. Considering the improvement of the model's informal reasoning ability restricted by the lack of benchmark, we further propose a Self-Question Prompt Framework that mimics human thinking to enhance the model's informal reasoning ability. The goals of self-question are to find key elements, deeply investigate the connections between these elements, encourage the relationship between each element and the problem, and finally, require the model to reasonably answer the problem. The experimental results show that human performance greatly outperforms the SoTA Language Models in Detective Reasoning Benchmark. Besides, Self-Question is proven to be the most effective prompt engineering in improving GPT-4's informal reasoning ability, but it still does not even surpass the lowest score made by human participants. Upon acceptance of the paper, the source code for the benchmark will be made publicly accessible.",,arXiv,['cs.cl'],, -688,promptor a conversational and autonomous prompt generation agent for intelligent text entry techniques,"['Junxiao Shen', 'John J. Dudley', 'Jingyao Zheng', 'Bill Byrne', 'Per Ola Kristensson']",http://arxiv.org/pdf/2310.08101v2.pdf,2023-10-12,," Text entry is an essential task in our day-to-day digital interactions. Numerous intelligent features have been developed to streamline this process, making text entry more effective, efficient, and fluid. These improvements include sentence prediction and user personalization. However, as deep learning-based language models become the norm for these advanced features, the necessity for data collection and model fine-tuning increases. These challenges can be mitigated by harnessing the in-context learning capability of large language models such as GPT-3.5. This unique feature allows the language model to acquire new skills through prompts, eliminating the need for data collection and fine-tuning. 
Consequently, large language models can learn various text prediction techniques. We initially showed that, for a sentence prediction task, merely prompting GPT-3.5 surpassed a GPT-2 backed system and is comparable with a fine-tuned GPT-3.5 model, with the latter two methods requiring costly data collection, fine-tuning and post-processing. However, the task of prompting large language models to specialize in specific text prediction tasks can be challenging, particularly for designers without expertise in prompt engineering. To address this, we introduce Promptor, a conversational prompt generation agent designed to engage proactively with designers. Promptor can automatically generate complex prompts tailored to meet specific needs, thus offering a solution to this challenge. We conducted a user study involving 24 participants creating prompts for three intelligent text entry tasks, half of the participants used Promptor while the other half designed prompts themselves. The results show that Promptor-designed prompts result in a 35% increase in similarity and 22% in coherence over those by designers.",,arXiv,"['cs.cl', 'cs.ai']",, -689,constitutionmaker interactively critiquing large language models by converting feedback into principles,"['Savvas Petridis', 'Ben Wedin', 'James Wexler', 'Aaron Donsbach', 'Mahima Pushkarna', 'Nitesh Goyal', 'Carrie J. Cai', 'Michael Terry']",http://arxiv.org/pdf/2310.15428v1.pdf,2023-10-24,," Large language model (LLM) prompting is a promising new approach for users to create and customize their own chatbots. However, current methods for steering a chatbot's outputs, such as prompt engineering and fine-tuning, do not support users in converting their natural feedback on the model's outputs to changes in the prompt or model. In this work, we explore how to enable users to interactively refine model outputs through their feedback, by helping them convert their feedback into a set of principles (i.e. a constitution) that dictate the model's behavior. From a formative study, we (1) found that users needed support converting their feedback into principles for the chatbot and (2) classified the different principle types desired by users. Inspired by these findings, we developed ConstitutionMaker, an interactive tool for converting user feedback into principles, to steer LLM-based chatbots. With ConstitutionMaker, users can provide either positive or negative feedback in natural language, select auto-generated feedback, or rewrite the chatbot's response; each mode of feedback automatically generates a principle that is inserted into the chatbot's prompt. In a user study with 14 participants, we compare ConstitutionMaker to an ablated version, where users write their own principles. With ConstitutionMaker, participants felt that their principles could better guide the chatbot, that they could more easily convert their feedback into principles, and that they could write principles more efficiently, with less mental demand. 
ConstitutionMaker helped users identify ways to improve the chatbot, formulate their intuitive responses to the model into feedback, and convert this feedback into specific and clear principles. Together, these findings inform future tools that support the interactive critiquing of LLM outputs.",,arXiv,"['cs.hc', 'cs.ai']",, -690,fewshot learning for sentence pair classification and its applications in software engineering,"['Robert Kraig Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2306.08058v1.pdf,2023-06-13,," Few-shot learning-the ability to train models with access to limited data-has become increasingly popular in the natural language processing (NLP) domain, as large language models such as GPT and T0 have been empirically shown to achieve high performance in numerous tasks with access to just a handful of labeled examples. Smaller language models such as BERT and its variants have also been shown to achieve strong performance with just a handful of labeled examples when combined with few-shot learning algorithms like pattern-exploiting training (PET) and SetFit. The focus of this work is to investigate the performance of alternative few-shot learning approaches with BERT-based models. Specifically, vanilla fine-tuning, PET and SetFit are compared for numerous BERT-based checkpoints over an array of training set sizes. To facilitate this investigation, applications of few-shot learning are considered in software engineering. For each task, high-performance techniques and their associated model checkpoints are identified through detailed empirical analysis. Our results establish PET as a strong few-shot learning approach, and our analysis shows that with just a few hundred labeled examples it can achieve performance near that of fine-tuning on full-sized data sets.",,arXiv,['cs.se'],, -691,fewclue a chinese fewshot learning evaluation benchmark,"['Liang Xu', 'Xiaojing Lu', 'Chenyang Yuan', 'Xuanwei Zhang', 'Huilin Xu', 'Hu Yuan', 'Guoao Wei', 'Xiang Pan', 'Xin Tian', 'Libo Qin', 'Hu Hai']",http://arxiv.org/pdf/2107.07498v2.pdf,2021-07-15,," Pretrained Language Models (PLMs) have achieved tremendous success in natural language understanding tasks. While different learning schemes -- fine-tuning, zero-shot, and few-shot learning -- have been widely explored and compared for languages such as English, there is comparatively little work in Chinese to fairly and comprehensively evaluate and compare these methods and thus hinders cumulative progress. In this paper, we introduce the Chinese Few-shot Learning Evaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluation benchmark in Chinese. It includes nine tasks, ranging from single-sentence and sentence-pair classification tasks to machine reading comprehension tasks. We systematically evaluate five state-of-the-art (SOTA) few-shot learning methods (including PET, ADAPET, LM-BFF, P-tuning and EFL), and compare their performance with fine-tuning and zero-shot learning schemes on the newly constructed FewCLUE benchmark. Experimental results reveal that: 1) The effect of different few-shot learning methods is sensitive to the pre-trained model to which the methods are applied; 2) PET and P-tuning achieve the best overall performance with RoBERTa and ERNIE respectively. Our benchmark is used in the few-shot learning contest of NLPCC 2021. In addition, we provide a user-friendly toolkit, as well as an online leaderboard to help facilitate further progress on Chinese few-shot learning. 
We provide a baseline performance on different learning methods, a reference for future research.",,arXiv,"['cs.cl', 'cs.ai']",, -692,true fewshot learning with prompts a realworld perspective,"['Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2111.13440v1.pdf,2021-11-26,," Prompt-based approaches are strong at few-shot learning. However, Perez et al. (2021) have recently cast doubt on their performance because they had difficulty getting good results in a ""true"" few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of PET, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, PET performs strongly in a true few-shot setting, i.e., without a dev set. Crucial for this strong performance is PET's ability to intelligently handle multiple prompts. We then put our findings to a real-world test by running PET on RAFT, a benchmark of tasks taken directly from realistic NLP applications for which no labeled dev or test sets are available. PET achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners like PET excel at true few-shot learning and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.",,arXiv,['cs.cl'],, -693,prompting electra fewshot learning with discriminative pretrained models,"['Mengzhou Xia', 'Mikel Artetxe', 'Jingfei Du', 'Danqi Chen', 'Ves Stoyanov']",http://arxiv.org/pdf/2205.15223v3.pdf,2022-05-30,," Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in full-shot settings, discriminative pre-trained models like ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a token is generated or original. We naturally extend that to prompt-based few-shot learning by training to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computation overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.",,arXiv,"['cs.cl', 'cs.lg']",, -694,reordering examples helps during primingbased fewshot learning,"['Sawan Kumar', 'Partha Talukdar']",http://arxiv.org/pdf/2106.01751v1.pdf,2021-06-03,," The ability to learn from limited data, or few-shot learning, is a desirable and often critical requirement for NLP systems. While many existing methods do poorly at learning from a handful of examples, large pretrained language models have recently been shown to be efficient few-shot learners. One approach to few-shot learning, which does not require finetuning of model parameters, is to augment the language model's input with priming text which is typically constructed using task specific descriptions and examples. In this work, we further explore priming-based few-shot learning, with focus on using examples as prompts. We show that presenting examples in the right order is key for generalization. We introduce PERO (Prompting with Examples in the Right Order), where we formulate few-shot learning as search over the set of permutations of the training examples. 
We show that PERO can learn to generalize efficiently using as few as 10 examples, in contrast to existing approaches. While the newline token is a natural choice for separating the examples in the prompt, we show that learning a new separator token can potentially provide further gains in performance. We demonstrate the effectiveness of the proposed method on the tasks of sentiment classification, natural language inference and fact retrieval. Finally, we analyze the learned prompts to reveal novel insights, including the idea that two training examples in the right order alone can provide competitive performance for sentiment classification and natural language inference.",,arXiv,['cs.cl'],, -695,tuning language models as training data generators for augmentationenhanced fewshot learning,"['Yu Meng', 'Martin Michalski', 'Jiaxin Huang', 'Yu Zhang', 'Tarek Abdelzaher', 'Jiawei Han']",http://arxiv.org/pdf/2211.03044v2.pdf,2022-11-06,," Recent studies have revealed the intriguing few-shot learning ability of pretrained language models (PLMs): They can quickly adapt to a new task when fine-tuned on a small amount of labeled data formulated as prompts, without requiring abundant task-specific annotations. Despite their promising performance, most existing few-shot approaches that only learn from the small training set still underperform fully supervised training by nontrivial margins. In this work, we study few-shot learning with PLMs from a different perspective: We first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large amount of novel training samples which augment the original training set. To encourage the generator to produce label-discriminative samples, we train it via weighted maximum likelihood where the weight of each token is automatically adjusted based on a discriminative meta-learning objective. A classification PLM can then be fine-tuned on both the few-shot and the synthetic samples with regularization for better generalization and stability. Our approach FewGen achieves an overall better result across seven classification tasks of the GLUE benchmark than existing few-shot learning methods, improving no-augmentation methods by 5+ average points, and outperforming augmentation methods by 3+ average points.",,arXiv,"['cs.cl', 'cs.lg']",, -696,cins comprehensive instruction for fewshot learning in taskoriented dialog systems,"['Fei Mi', 'Yitong Li', 'Yasheng Wang', 'Xin Jiang', 'Qun Liu']",http://arxiv.org/pdf/2109.04645v4.pdf,2021-09-10,," As labeling cost for different modules in task-oriented dialog (ToD) systems is high, a major challenge in practice is to learn different tasks with the least amount of labeled data. Recently, prompting methods over pre-trained language models (PLMs) have shown promising results for few-shot learning in ToD. To better utilize the power of PLMs, this paper proposes Comprehensive Instruction (CINS) that exploits PLMs with extra task-specific instructions. We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in ToD, i.e. intent classification, dialog state tracking, and natural language generation. A sequence-to-sequence model (T5) is adopted to solve these three tasks in a unified framework. Extensive experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data. 
Empirical results demonstrate that the proposed CINS approach consistently improves techniques that finetune PLMs with raw input or short prompts.",,arXiv,"['cs.cl', 'cs.lg']",, -697,exploring promptbased fewshot learning for grounded dialog generation,"['Chujie Zheng', 'Minlie Huang']",http://arxiv.org/pdf/2109.06513v2.pdf,2021-09-14,," Dialog models can be greatly strengthened through grounding on various external information, but grounded dialog corpora are usually not naturally accessible. In this work, we focus on the few-shot learning for grounded dialog generation (GDG). We first propose a simple prompting method for GDG tasks, where different constructs of model input, such as the grounding source and the conversation context, are distinguished through continuous or discrete prompts. On three typical GDG tasks, we empirically demonstrate and analyze in-depth the effectiveness of our method. We then conduct extensive experiments to thoroughly investigate how our prompting method works with different pre-trained models. We show that prompted language models perform superiorly to conversational models, and further analyze various factors that influence the effects of prompting. Overall, our work introduces a prompt-based perspective to the few-shot learning for GDG tasks, and provides valuable findings and insights for future research.",,arXiv,['cs.cl'],, -698,ontologyenhanced prompttuning for fewshot learning,"['Hongbin Ye', 'Ningyu Zhang', 'Shumin Deng', 'Xiang Chen', 'Hui Chen', 'Feiyu Xiong', 'Xi Chen', 'Huajun Chen']",http://arxiv.org/pdf/2201.11332v1.pdf,2022-01-27,," Few-shot Learning (FSL) is aimed to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries has been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by the existing methods suffer from challenging knowledge missing, knowledge noise, and knowledge heterogeneity, which hinder the performance for few-shot learning. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop the ontology transformation based on the external knowledge graph to address the knowledge missing issue, which fulfills and converts structure knowledge to text. We further introduce span-sensitive knowledge injection via a visible matrix to select informative knowledge to handle the knowledge noise issue. To bridge the gap between knowledge and text, we propose a collective training algorithm to optimize representations jointly. We evaluate our proposed OntoPrompt in three tasks, including relation extraction, event extraction, and knowledge graph completion, with eight datasets. Experimental results demonstrate that our approach can obtain better few-shot performance than baselines.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -699,impossible triangle what's next for pretrained language models,"['Chenguang Zhu', 'Michael Zeng']",http://arxiv.org/pdf/2204.06130v2.pdf,2022-04-13,," Recent development of large-scale pre-trained language models (PLM) have significantly improved the capability of models in various NLP tasks, in terms of performance after task-specific fine-tuning and zero-shot / few-shot learning. However, many of such models come with a dauntingly huge size that few institutions can afford to pre-train, fine-tune or even deploy, while moderate-sized models usually lack strong generalized few-shot learning capabilities. 
In this paper, we first elaborate the current obstacles of using PLM models in terms of the Impossible Triangle: 1) moderate model size, 2) state-of-the-art few-shot learning capability, and 3) state-of-the-art fine-tuning capability. We argue that all existing PLM models lack one or more properties from the Impossible Triangle. To remedy these missing properties of PLMs, various techniques have been proposed, such as knowledge distillation, data augmentation and prompt learning, which inevitably brings additional work to the application of PLMs in real scenarios. We then offer insights into future research directions of PLMs to achieve the Impossible Triangle, and break down the task into several key phases.",,arXiv,['cs.cl'],, -700,how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models,"['Hai Dang', 'Lukas Mecke', 'Florian Lehmann', 'Sven Goller', 'Daniel Buschek']",http://arxiv.org/pdf/2209.01390v1.pdf,2022-09-03,," Deep generative models have the potential to fundamentally change the way we create high-fidelity digital content but are often hard to control. Prompting a generative model is a promising recent development that in principle enables end-users to creatively leverage zero-shot and few-shot learning to assign new tasks to an AI ad-hoc, simply by writing them down. However, for the majority of end-users writing effective prompts is currently largely a trial and error process. To address this, we discuss the key opportunities and challenges for interactive creative applications that use prompting as a new paradigm for Human-AI interaction. Based on our analysis, we propose four design goals for user interfaces that support prompting. We illustrate these with concrete UI design sketches, focusing on the use case of creative writing. The research community in HCI and AI can take these as starting points to develop adequate user interfaces for models capable of zero- and few-shot learning.",,arXiv,"['cs.hc', 'cs.cl', 'h.5.2; i.2.7']",, -701,differentiable entailment for parameter efficient few shot learning,"['Ethan Kim', 'Jerry Yang']",http://arxiv.org/pdf/2301.13345v1.pdf,2023-01-31,," Few-shot learning allows pre-trained language models to adapt to downstream tasks while using a limited number of training examples. However, practical applications are limited when all model parameters must be optimized. In this work we apply a new technique for parameter efficient few shot learning while adopting a strict definition of parameter efficiency. Our training method combines 1) intermediate training by reformulating natural language tasks as entailment tasks \cite{wang_entailment_2021} and 2) differentiable optimization of template and label tokens \cite{zhang_differentiable_2021}. 
We quantify the tradeoff between parameter efficiency and performance in the few-shot regime and propose a simple model agnostic approach that can be extended to any task. By achieving competitive performance while only optimizing 3\% of a model's parameters and allowing for batched inference, we allow for more efficient practical deployment of models.",,arXiv,['cs.cl'],, -702,"multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence","['Markus Bayer', 'Tobias Frey', 'Christian Reuter']",http://arxiv.org/pdf/2207.11076v1.pdf,2022-07-22,," Gathering cyber threat intelligence from open sources is becoming increasingly important for maintaining and achieving a high level of security as systems become larger and more complex. However, these open sources are often subject to information overload. It is therefore useful to apply machine learning models that condense the amount of information to what is necessary. Yet, previous studies and applications have shown that existing classifiers are not able to extract specific information about emerging cybersecurity events due to their low generalization ability. Therefore, we propose a system to overcome this problem by training a new classifier for each new incident. Since this requires a lot of labelled data using standard training methods, we combine three different low-data regime techniques - transfer learning, data augmentation, and few-shot learning - to train a high-quality classifier from very few labelled instances. We evaluated our approach using a novel dataset derived from the Microsoft Exchange Server data breach of 2021 which was labelled by three experts. Our findings reveal an increase in F1 score of more than 21 points compared to standard training methods and more than 18 points compared to a state-of-the-art method in few-shot learning. Furthermore, the classifier trained with this method and 32 instances is only less than 5 F1 score points worse than a classifier trained with 1800 instances.",,arXiv,"['cs.cr', 'cs.cl']",, -703,multitask pretraining of modular prompt for chinese fewshot learning,"['Tianxiang Sun', 'Zhengfu He', 'Qin Zhu', 'Xipeng Qiu', 'Xuanjing Huang']",http://arxiv.org/pdf/2210.07565v3.pdf,2022-10-14,," Prompt tuning is a parameter-efficient approach to adapting pre-trained language models to downstream tasks. Although prompt tuning has been shown to match the performance of full model tuning when training data is sufficient, it tends to struggle in few-shot learning settings. In this paper, we present Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot learning. MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks. On downstream tasks, the pre-trained prompts are selectively activated and combined, leading to strong compositional generalization to unseen tasks. To bridge the gap between pre-training and fine-tuning, we formulate upstream and downstream tasks into a unified machine reading comprehension task. Extensive experiments under two learning paradigms, i.e., gradient descent and black-box tuning, show that MP2 significantly outperforms prompt tuning, full model tuning, and prior prompt pre-training methods in few-shot settings. 
In addition, we demonstrate that MP2 can achieve surprisingly fast and strong adaptation to downstream tasks by merely learning 8 parameters to combine the pre-trained modular prompts.",,arXiv,['cs.cl'],, -704,fewshot bot promptbased learning for dialogue systems,"['Andrea Madotto', 'Zhaojiang Lin', 'Genta Indra Winata', 'Pascale Fung']",http://arxiv.org/pdf/2110.08118v1.pdf,2021-10-15,," Learning to converse using only a few examples is a great challenge in conversational AI. The current best conversational models, which are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL), are language models (LMs) fine-tuned on large conversational datasets. Training these models is expensive, both in terms of computational resources and time, and it is hard to keep them up to date with new conversational skills. A simple yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020) which does not require gradient-based fine-tuning but instead uses a few examples in the LM context as the only source of learning. In this paper, we explore prompt-based few-shot learning in dialogue tasks. We benchmark LMs of different sizes in nine response generation tasks, which include four knowledge-grounded tasks, a task-oriented generations task, three open-chat tasks, and controlled stylistic generation, and five conversational parsing tasks, which include dialogue state tracking, graph path generation, persona information extraction, document retrieval, and internet query generation. The current largest released LM (GPT-J-6B) using prompt-based few-shot learning, and thus requiring no training, achieves competitive performance to fully trained state-of-the-art models. Moreover, we propose a novel prompt-based few-shot classifier, that also does not require any fine-tuning, to select the most appropriate prompt given a dialogue history. Finally, by combining the power of prompt-based few-shot learning and a Skill Selector, we create an end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only few dialogue examples per skill.",,arXiv,"['cs.cl', 'cs.ai']",, -705,"a neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level","['Iddo Drori', 'Sarah Zhang', 'Reece Shuttleworth', 'Leonard Tang', 'Albert Lu', 'Elizabeth Ke', 'Kevin Liu', 'Linda Chen', 'Sunny Tran', 'Newman Cheng', 'Roman Wang', 'Nikhil Singh', 'Taylor L. Patti', 'Jayson Lynch', 'Avi Shporer', 'Nakul Verma', 'Eugene Wu', 'Gilbert Strang']",http://arxiv.org/pdf/2112.15594v4.pdf,2021-12-31,," We demonstrate that a neural network pre-trained on text and fine-tuned on code solves mathematics course problems, explains solutions, and generates new questions at a human level. We automatically synthesize programs using few-shot learning and OpenAI's Codex transformer and execute them to solve course problems at 81% automatic accuracy. We curate a new dataset of questions from MIT's largest mathematics courses (Single Variable and Multivariable Calculus, Differential Equations, Introduction to Probability and Statistics, Linear Algebra, and Mathematics for Computer Science) and Columbia University's Computational Linear Algebra. 
We solve questions from a MATH dataset (on Prealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number Theory, and Precalculus), the latest benchmark of advanced mathematics problems designed to assess mathematical reasoning. We randomly sample questions and generate solutions with multiple modalities, including numbers, equations, and plots. The latest GPT-3 language model pre-trained on text automatically solves only 18.8% of these university questions using zero-shot learning and 30.8% using few-shot learning and the most recent chain of thought prompting. In contrast, program synthesis with few-shot learning using Codex fine-tuned on code generates programs that automatically solve 81% of these questions. Our approach improves the previous state-of-the-art automatic solution accuracy on the benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate the quality and difficulty of generated questions. This work is the first to automatically solve university-level mathematics course questions at a human level and the first work to explain and generate university-level mathematics course questions at scale, a milestone for higher education.",,arXiv,"['cs.lg', 'cs.ai']",, -706,detecting hate speech with gpt3,"['Ke-Li Chiu', 'Annie Collins', 'Rohan Alexander']",http://arxiv.org/pdf/2103.12407v4.pdf,2021-03-23,," Sophisticated language models such as OpenAI's GPT-3 can generate hateful text that targets marginalized groups. Given this capacity, we are interested in whether large language models can be used to identify hate speech and classify text as sexist or racist. We use GPT-3 to identify sexist and racist text passages with zero-, one-, and few-shot learning. We find that with zero- and one-shot learning, GPT-3 can identify sexist or racist text with an average accuracy between 55 per cent and 67 per cent, depending on the category of text and type of learning. With few-shot learning, the model's accuracy can be as high as 85 per cent. Large language models have a role to play in hate speech detection, and with further development they could eventually be used to counter hate speech.",,arXiv,['cs.cl'],, -707,true fewshot learning with language models,"['Ethan Perez', 'Douwe Kiela', 'Kyunghyun Cho']",http://arxiv.org/pdf/2105.11447v1.pdf,2021-05-24,," Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates (""prompts""). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. 
Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.",,arXiv,"['cs.cl', 'cs.lg', 'stat.ml']",, -708,"generate, annotate, and learn nlp with synthetic text","['Xuanli He', 'Islam Nassar', 'Jamie Kiros', 'Gholamreza Haffari', 'Mohammad Norouzi']",http://arxiv.org/pdf/2106.06168v3.pdf,2021-06-11,," This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We formulate a general framework called ``generate, annotate, and learn (GAL)'' to take advantage of synthetic text within knowledge distillation, self-training, and few-shot learning applications. To generate high-quality task-specific text, we either fine-tune LMs on inputs from the task of interest, or prompt large LMs with few examples. We use the best available classifier to annotate synthetic text with soft pseudo labels for knowledge distillation and self-training, and use LMs to obtain hard labels for few-shot learning. We train new supervised models on the combination of labeled and pseudo-labeled data, which results in significant gains across several applications. We investigate key components of GAL and present theoretical and empirical arguments against the use of class-conditional LMs to generate synthetic labeled text instead of unlabeled text. GAL achieves new state-of-the-art knowledge distillation results for 6-layer transformers on the GLUE leaderboard.",,arXiv,['cs.lg'],, -709,multimodal fewshot learning with frozen language models,"['Maria Tsimpoukelli', 'Jacob Menick', 'Serkan Cabi', 'S. M. Ali Eslami', 'Oriol Vinyals', 'Felix Hill']",http://arxiv.org/pdf/2106.13884v2.pdf,2021-06-25,," When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of multiple interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -710,incontext learning for fewshot dialogue state tracking,"['Yushi Hu', 'Chia-Hsuan Lee', 'Tianbao Xie', 'Tao Yu', 'Noah A. Smith', 'Mari Ostendorf']",http://arxiv.org/pdf/2203.08568v3.pdf,2022-03-16,," Collecting and annotating task-oriented dialogues is time-consuming and costly; thus, zero and few shot learning could greatly benefit dialogue state tracking (DST). In this work, we propose an in-context learning (ICL) framework for zero-shot and few-shot learning DST, where a large pre-trained language model (LM) takes a test instance and a few exemplars as input, and directly decodes the dialogue state without any parameter updates. To better leverage a tabular domain description in the LM prompt, we reformulate DST into a text-to-SQL problem. 
We also propose a novel approach to retrieve annotated dialogues as exemplars. Empirical results on MultiWOZ show that our method IC-DST substantially outperforms previous fine-tuned state-of-the-art models in few-shot settings. In addition, we test IC-DST in zero-shot settings, in which the model only takes a fixed task instruction as input, finding that it outperforms previous zero-shot methods by a large margin.",,arXiv,['cs.cl'],, -711,enabling classifiers to make judgements explicitly aligned with human values,"['Yejin Bang', 'Tiezheng Yu', 'Andrea Madotto', 'Zhaojiang Lin', 'Mona Diab', 'Pascale Fung']",http://arxiv.org/pdf/2210.07652v1.pdf,2022-10-14,," Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values. Yet, human values can vary under diverse cultural conditions. Therefore, we introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the command. Along with the task, we propose a practical approach that distills value-aligned knowledge from large-scale language models (LLMs) to construct value-aligned classifiers in two steps. First, we generate value-aligned training data from LLMs by prompt-based few-shot learning. Next, we fine-tune smaller classification models with the generated data for the task. Empirical results show that our VA-Models surpass multiple baselines by at least 15.56% on the F1-score, including few-shot learning with OPT-175B and existing text augmentation methods. We suggest that using classifiers with explicit human value input improves both inclusivity & explainability in AI.",,arXiv,"['cs.cl', 'cs.ai']",, -712,gps genetic prompt search for efficient fewshot learning,"['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang']",http://arxiv.org/pdf/2210.17041v1.pdf,2022-10-31,," Prompt-based techniques have demonstrated great potential for improving the few-shot generalization of pretrained language models. However, their performance heavily relies on the manual design of prompts and thus requires a lot of human efforts. In this paper, we introduce Genetic Prompt Search (GPS) to improve few-shot learning with prompts, which utilizes a genetic algorithm to automatically search for high-performing prompts. GPS is gradient-free and requires no update of model parameters but only a small validation set. Experiments on diverse datasets proved the effectiveness of GPS, which outperforms manual prompts by a large margin of 2.6 points. Our method is also better than other parameter-efficient tuning methods such as prompt tuning.",,arXiv,['cs.cl'],, -713,fewshot queryfocused summarization with prefixmerging,"['Ruifeng Yuan', 'Zili Wang', 'Ziqiang Cao', 'Wenjie Li']",http://arxiv.org/pdf/2211.16164v1.pdf,2022-11-29,," Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. 
Drawn inspiration from prefix-tuning, we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works.",,arXiv,"['cs.cl', 'cs.ai']",, -714,log parsing with promptbased fewshot learning,"['Van-Hoang Le', 'Hongyu Zhang']",http://arxiv.org/pdf/2302.07435v1.pdf,2023-02-15,," Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing.",,arXiv,['cs.se'],, -715,automated fewshot classification with instructionfinetuned language models,"['Rami Aly', 'Xingjian Shi', 'Kaixiang Lin', 'Aston Zhang', 'Andrew Gordon Wilson']",http://arxiv.org/pdf/2305.12576v2.pdf,2023-05-21,," A particularly successful class of approaches for few-shot learning combines language models with prompts -- hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models exhibit remarkable prompt robustness, and we subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over $12$ datasets, spanning $8$ classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.",,arXiv,['cs.cl'],, -716,evaluating the decency and consistency of data validation tests generated by llms,"['Rohan Alexander', 'Lindsay Katz', 'Callandra Moore', 'Zane Schwartz']",http://arxiv.org/pdf/2310.01402v1.pdf,2023-10-02,," We investigated the potential of large language models (LLMs) in developing dataset validation tests. 
We carried out 96 experiments each for both GPT-3.5 and GPT-4, examining different prompt scenarios, learning modes, temperature settings, and roles. The prompt scenarios were: 1) Asking for expectations, 2) Asking for expectations with a given context, 3) Asking for expectations after requesting a simulation, and 4) Asking for expectations with a provided data sample. For learning modes, we tested: 1) zero-shot, 2) one-shot, and 3) few-shot learning. We also tested four temperature settings: 0, 0.4, 0.6, and 1. Furthermore, two distinct roles were considered: 1) ""helpful assistant"", 2) ""expert data scientist"". To gauge consistency, every setup was tested five times. The LLM-generated responses were benchmarked against a gold standard suite, created by an experienced data scientist knowledgeable about the data in question. We find there are considerable returns to the use of few-shot learning, and that the more explicit the data setting can be the better. The best LLM configurations complement, rather than substitute, the gold standard results. This study underscores the value LLMs can bring to the data cleaning and preparation stages of the data science workflow.",,arXiv,['stat.me'],, -717,fewshot learning with multilingual language models,"['Xi Victoria Lin', 'Todor Mihaylov', 'Mikel Artetxe', 'Tianlu Wang', 'Shuohui Chen', 'Daniel Simig', 'Myle Ott', 'Naman Goyal', 'Shruti Bhosale', 'Jingfei Du', 'Ramakanth Pasunuru', 'Sam Shleifer', 'Punit Singh Koura', 'Vishrav Chaudhary', ""Brian O'Horo"", 'Jeff Wang', 'Luke Zettlemoyer', 'Zornitsa Kozareva', 'Mona Diab', 'Veselin Stoyanov', 'Xian Li']",http://arxiv.org/pdf/2112.10668v3.pdf,2021-12-20,," Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. 
Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.",,arXiv,"['cs.cl', 'cs.ai']",, -718,flamingo a visual language model for fewshot learning,"['Jean-Baptiste Alayrac', 'Jeff Donahue', 'Pauline Luc', 'Antoine Miech', 'Iain Barr', 'Yana Hasson', 'Karel Lenc', 'Arthur Mensch', 'Katie Millican', 'Malcolm Reynolds', 'Roman Ring', 'Eliza Rutherford', 'Serkan Cabi', 'Tengda Han', 'Zhitao Gong', 'Sina Samangooei', 'Marianne Monteiro', 'Jacob Menick', 'Sebastian Borgeaud', 'Andrew Brock', 'Aida Nematzadeh', 'Sahand Sharifzadeh', 'Mikolaj Binkowski', 'Ricardo Barreira', 'Oriol Vinyals', 'Andrew Zisserman', 'Karen Simonyan']",http://arxiv.org/pdf/2204.14198v2.pdf,2022-04-29,," Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, -719,"code generation tools (almost) for free a study of fewshot, pretrained language models on code","['Patrick Bareiß', 'Beatriz Souza', ""Marcelo d'Amorim"", 'Michael Pradel']",http://arxiv.org/pdf/2206.01335v2.pdf,2022-06-02,," Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code, e.g., how to complete a given code example, or even generate code snippets from scratch. The success of these models raises the question whether they could serve as a basis for building a wide range code generation tools. Traditionally, such tools are built manually and separately for each task. Instead, few-shot learning may allow to obtain different tools from a single pre-trained language model by simply providing a few examples or a natural language description of the expected tool behavior. This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose. We consider three code manipulation and code generation tasks targeted by a range of traditional tools: (i) code mutation; (ii) test oracle generation from natural language documentation; and (iii) test case generation. For each task, we compare few-shot learning to a manually built tool. 
Our results show that the model-based tools complement (code mutation), are on par (test oracle generation), or even outperform their respective traditionally built tool (test case generation), while imposing far less effort to develop them. By comparing the effectiveness of different variants of the model-based tools, we provide insights on how to design an appropriate input (""prompt"") to the model and what influence the size of the model has. For example, we find that providing a small natural language description of the code generation task is an easy way to improve predictions. Overall, we conclude that few-shot language models are surprisingly effective, yet there is still more work to be done, such as exploring more diverse ways of prompting and tackling even more involved tasks.",,arXiv,"['cs.se', 'cs.lg']",, -720,discrete and soft prompting for multilingual models,"['Mengjie Zhao', 'Hinrich Schütze']",http://arxiv.org/pdf/2109.03630v1.pdf,2021-09-08,," It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: Crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79%. We also demonstrate good performance of prompting with training data in multiple languages other than English.",,arXiv,['cs.cl'],, -721,sentence simplification via large language models,"['Yutao Feng', 'Jipeng Qiang', 'Yun Li', 'Yunhao Yuan', 'Yi Zhu']",http://arxiv.org/pdf/2302.11957v1.pdf,2023-02-23,," Sentence Simplification aims to rephrase complex sentences into simpler sentences while retaining original meaning. Large Language models (LLMs) have demonstrated the ability to perform a variety of natural language processing tasks. However, it is not yet known whether LLMs can be served as a high-quality sentence simplification system. In this work, we empirically analyze the zero-/few-shot learning ability of LLMs by evaluating them on a number of benchmark test sets. Experimental results show LLMs outperform state-of-the-art sentence simplification methods, and are judged to be on a par with human annotators.",,arXiv,"['cs.cl', 'cs.ai']",, -722,making pretrained language models better fewshot learners,"['Tianyu Gao', 'Adam Fisch', 'Danqi Chen']",http://arxiv.org/pdf/2012.15723v2.pdf,2020-12-31,," The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF--better few-shot fine-tuning of language models--a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. 
Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.",,arXiv,"['cs.cl', 'cs.lg']",, -723,gpt3 models are poor fewshot learners in the biomedical domain,"['Milad Moradi', 'Kathrin Blagec', 'Florian Haberl', 'Matthias Samwald']",http://arxiv.org/pdf/2109.02555v2.pdf,2021-09-06,," Deep neural language models have set new breakthroughs in many tasks of Natural Language Processing (NLP). Recent work has shown that deep transformer language models (pretrained on large amounts of texts) can achieve high levels of task-specific few-shot performance comparable to state-of-the-art models. However, the ability of these large language models in few-shot transfer learning has not yet been explored in the biomedical domain. We investigated the performance of two powerful transformer language models, i.e. GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental results showed that, to a great extent, both the models underperform a language model fine-tuned on the full training data. Although GPT-3 had already achieved near state-of-the-art results in few-shot knowledge transfer on open-domain NLP tasks, it could not perform as effectively as BioBERT, which is orders of magnitude smaller than GPT-3. Regarding that BioBERT was already pretrained on large biomedical text corpora, our study suggests that language models may largely benefit from in-domain pretraining in task-specific few-shot learning. However, in-domain pretraining seems not to be sufficient; novel pretraining and few-shot learning strategies are required in the biomedical NLP domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -724,list lite prompted selftraining makes parameterefficient fewshot learners,"['Yaqing Wang', 'Subhabrata Mukherjee', 'Xiaodong Liu', 'Jing Gao', 'Ahmed Hassan Awadallah', 'Jianfeng Gao']",http://arxiv.org/pdf/2110.06274v2.pdf,2021-10-12,," We present a new method LiST is short for Lite Prompted Self-Training for parameter-efficient fine-tuning of large pre-trained language models (PLMs) for few-shot learning. LiST improves over recent methods that adopt prompt-based fine-tuning (FN) using two key techniques. The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based FN in few-shot settings. We use self-training in conjunction with meta-learning for re-weighting noisy pseudo-prompt labels. Self-training is expensive as it requires updating all the model parameters repetitively. Therefore, we use a second technique for light-weight fine-tuning where we introduce a small number of task-specific parameters that are fine-tuned during self-training while keeping the PLM encoder frozen. 
Our experiments show that LiST can effectively leverage unlabeled data to improve the model performance for few-shot learning. Additionally, the fine-tuning is efficient as it only updates a small percentage of parameters and the overall model footprint is reduced since several tasks can share a common PLM encoder as backbone. A comprehensive study on six NLU tasks demonstrate LiST to improve by 35% over classic fine-tuning and 6% over prompt-based FN with 96% reduction in number of trainable parameters when fine-tuned with no more than 30 labeled examples from each task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context learning by 33% on few-shot NLU tasks.",,arXiv,['cs.cl'],, -725,fewshot stance detection via targetaware prompt distillation,"['Yan Jiang', 'Jinhua Gao', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2206.13214v1.pdf,2022-06-27,," Stance detection aims to identify whether the author of a text is in favor of, against, or neutral to a given target. The main challenge of this task comes two-fold: few-shot learning resulting from the varying targets and the lack of contextual information of the targets. Existing works mainly focus on solving the second issue by designing attention-based models or introducing noisy external knowledge, while the first issue remains under-explored. In this paper, inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners, we propose to introduce prompt-based fine-tuning for stance detection. PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts. Considering the crucial role of the target in stance detection task, we design target-aware prompts and propose a novel verbalizer. Instead of mapping each label to a concrete word, our verbalizer maps each label to a vector and picks the label that best captures the correlation between the stance and the target. Moreover, to alleviate the possible defect of dealing with varying targets with a single hand-crafted prompt, we propose to distill the information learned from multiple prompts. Experimental results show the superior performance of our proposed model in both full-data and few-shot scenarios.",,arXiv,['cs.cl'],, -726,multimodality helps unimodality crossmodal fewshot learning with multimodal models,"['Zhiqiu Lin', 'Samuel Yu', 'Zhiyi Kuang', 'Deepak Pathak', 'Deva Ramanan']",http://arxiv.org/pdf/2301.06267v4.pdf,2023-01-16,," The ability to quickly learn a new task with minimal instruction - known as few-shot learning - is a central aspect of intelligent agents. Classical few-shot benchmarks make use of few-shot samples from a single modality, but such samples may not be sufficient to characterize an entire concept class. In contrast, humans use cross-modal information to learn new concepts efficiently. In this work, we demonstrate that one can indeed build a better ${\bf visual}$ dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them bark. To do so, we exploit the fact that recent multimodal foundation models such as CLIP are inherently cross-modal, mapping different modalities to the same representation space. Specifically, we propose a simple cross-modal adaptation approach that learns from few-shot examples spanning different modalities. By repurposing class names as additional one-shot training samples, we achieve SOTA results with an embarrassingly simple linear classifier for vision-language adaptation. 
Furthermore, we show that our approach can benefit existing methods such as prefix tuning, adapters, and classifier ensembling. Finally, to explore other modalities beyond vision and language, we construct the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal training to improve the performance of both image and audio classification.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",, -727,rplkg robust prompt learning with knowledge graph,"['Yewon Kim', 'YongTaek Lim', 'Dokyung Yoon', 'KyungWoo Song']",http://arxiv.org/pdf/2304.10805v1.pdf,2023-04-21,," Large-scale pre-trained models have been known that they are transferable, and they generalize well on the unseen dataset. Recently, multimodal pre-trained models such as CLIP show significant performance improvement in diverse experiments. However, when the labeled dataset is limited, the generalization of a new dataset or domain is still challenging. To improve the generalization performance on few-shot learning, there have been diverse efforts, such as prompt learning and adapter. However, the current few-shot adaptation methods are not interpretable, and they require a high computation cost for adaptation. In this study, we propose a new method, robust prompt learning with knowledge graph (RPLKG). Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets. Our model obtains cached embeddings of prompt sets after one forwarding from a large pre-trained model. After that, model optimizes the prompt selection processes with GumbelSoftmax. In this way, our model is trained using relatively little memory and learning time. Also, RPLKG selects the optimal interpretable prompt automatically, depending on the dataset. In summary, RPLKG is i) interpretable, ii) requires small computation resources, and iii) easy to incorporate prior human knowledge. To validate the RPLKG, we provide comprehensive experimental results on few-shot learning, domain generalization and new class generalization setting. RPLKG shows a significant performance improvement compared to zero-shot learning and competitive performance against several prompt learning methods using much lower resources.",,arXiv,"['cs.ai', 'cs.lg']",, -728,adversarial robustness of promptbased fewshot learning for natural language understanding,"['Venkata Prabhakara Sarath Nookala', 'Gaurav Verma', 'Subhabrata Mukherjee', 'Srijan Kumar']",http://arxiv.org/pdf/2306.11066v2.pdf,2023-06-19,," State-of-the-art few-shot learning (FSL) methods leverage prompt-based fine-tuning to obtain remarkable results for natural language understanding (NLU) tasks. While much of the prior FSL methods focus on improving downstream task performance, there is a limited understanding of the adversarial robustness of such methods. In this work, we conduct an extensive study of several state-of-the-art FSL methods to assess their robustness to adversarial perturbations. To better understand the impact of various factors towards robustness (or the lack of it), we evaluate prompt-based FSL methods against fully fine-tuned models for aspects such as the use of unlabeled data, multiple prompts, number of few-shot examples, model size and type. Our results on six GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL methods lead to a notable relative drop in task performance (i.e., are less robust) in the face of adversarial perturbations. However, using (i) unlabeled data for prompt-based FSL and (ii) multiple prompts flip the trend. 
We further demonstrate that increasing the number of few-shot examples and model size lead to increased adversarial robustness of vanilla FSL methods. Broadly, our work sheds light on the adversarial robustness evaluation of prompt-based FSL methods for NLU tasks.",,arXiv,"['cs.cl', 'cs.lg']",, -729,unifiedskg unifying and multitasking structured knowledge grounding with texttotext language models,"['Tianbao Xie', 'Chen Henry Wu', 'Peng Shi', 'Ruiqi Zhong', 'Torsten Scholak', 'Michihiro Yasunaga', 'Chien-Sheng Wu', 'Ming Zhong', 'Pengcheng Yin', 'Sida I. Wang', 'Victor Zhong', 'Bailin Wang', 'Chengzu Li', 'Connor Boyle', 'Ansong Ni', 'Ziyu Yao', 'Dragomir Radev', 'Caiming Xiong', 'Lingpeng Kong', 'Rui Zhang', 'Noah A. Smith', 'Luke Zettlemoyer', 'Tao Yu']",http://arxiv.org/pdf/2201.05966v3.pdf,2022-01-16,," Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they have been studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UnifiedSKG framework, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset. We use UnifiedSKG to benchmark T5 with different sizes and show that T5, with simple modifications when necessary, achieves state-of-the-art performance on almost all of the 21 tasks. We further demonstrate that multi-task prefix-tuning improves the performance on most tasks, largely improving the overall performance. UnifiedSKG also facilitates the investigation of zero-shot and few-shot learning, and we show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. We also use UnifiedSKG to conduct a series of controlled experiments on structured knowledge encoding variants across SKG tasks. UnifiedSKG is easily extensible to more tasks, and it is open-sourced at https://github.com/hkunlp/unifiedskg.",,arXiv,['cs.cl'],, -730,a promptbased fewshot learning approach to software conflict detection,"['Robert K. Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2211.02709v1.pdf,2022-11-04,," A software requirement specification (SRS) document is an essential part of the software development life cycle which outlines the requirements that a software program in development must satisfy. This document is often specified by a diverse group of stakeholders and is subject to continual change, making the process of maintaining the document and detecting conflicts between requirements an essential task in software development. Notably, projects that do not address conflicts in the SRS document early on face considerable problems later in the development life cycle. These problems incur substantial costs in terms of time and money, and these costs often become insurmountable barriers that ultimately result in the termination of a software project altogether. As a result, early detection of SRS conflicts is critical to project sustainability. The conflict detection task is approached in numerous ways, many of which require a significant amount of manual intervention from developers, or require access to a large amount of labeled, task-specific training data. In this work, we propose using a prompt-based learning approach to perform few-shot learning for conflict detection. 
We compare our results to supervised learning approaches that use pretrained language models, such as BERT and its variants. Our results show that prompting with just 32 labeled examples can achieve a similar level of performance in many key metrics to that of supervised learning on training sets that are magnitudes larger in size. In contrast to many other conflict detection approaches, we make no assumptions about the type of underlying requirements, allowing us to analyze pairings of both functional and non-functional requirements. This allows us to omit the potentially expensive task of filtering out non-functional requirements from our dataset.",,arXiv,['cs.se'],, -731,"crosslingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing","['Tal Schuster', 'Ori Ram', 'Regina Barzilay', 'Amir Globerson']",http://arxiv.org/pdf/1902.09492v2.pdf,2019-02-25,," We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion. While contextual embeddings have been shown to yield richer representations of meaning compared to their static counterparts, aligning them poses a challenge due to their dynamic nature. To this end, we construct context-independent variants of the original monolingual spaces and utilize their mapping to derive an alignment for the context-dependent spaces. This mapping readily supports processing of a target language, improving transfer by context-aware embeddings. Our experimental results demonstrate the effectiveness of this approach for zero-shot and few-shot learning of dependency parsing. Specifically, our method consistently outperforms the previous state-of-the-art on 6 tested languages, yielding an improvement of 6.8 LAS points on average.",,arXiv,"['cs.cl', 'cs.lg']",, -732,calibrate before use improving fewshot performance of language models,"['Tony Z. Zhao', 'Eric Wallace', 'Shi Feng', 'Dan Klein', 'Sameer Singh']",http://arxiv.org/pdf/2102.09690v2.pdf,2021-02-19,," GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as ""N/A"". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across different choices of the prompt.",,arXiv,"['cs.cl', 'cs.lg']",, -733,noisy channel language model prompting for fewshot text classification,"['Sewon Min', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2108.04106v3.pdf,2021-08-09,," We introduce a noisy channel approach for language model prompting in few-shot text classification. 
Instead of computing the likelihood of the label given the input (referred as direct models), channel models compute the conditional probability of the input given the label, and are thereby required to explain every word in the input. We use channel models for recently proposed few-shot learning methods with no or very limited updates to the language model parameters, via either in-context demonstration or prompt tuning. Our experiments show that, for both methods, channel models significantly outperform their direct counterparts, which we attribute to their stability, i.e., lower variance and higher worst-case accuracy. We also present extensive ablations that provide recommendations for when to use channel prompt tuning instead of other competitive methods (e.g., direct head tuning): channel prompt tuning is preferred when the number of training examples is small, labels in the training data are imbalanced, or generalization to unseen labels is required.",,arXiv,"['cs.cl', 'cs.ai']",, -734,what's in a measurement using gpt3 on semeval 2021 task 8 measeval,"['Curt Kohler', 'Ron Daniel Jr']",http://arxiv.org/pdf/2106.14720v1.pdf,2021-06-28,," In the summer of 2020 OpenAI released its GPT-3 autoregressive language model to much fanfare. While the model has shown promise on tasks in several areas, it has not always been clear when the results were cherry-picked or when they were the unvarnished output. We were particularly interested in what benefits GPT-3 could bring to the SemEval 2021 MeasEval task - identifying measurements and their associated attributes in scientific literature. We had already experimented with multi-turn questions answering as a solution to this task. We wanted to see if we could use GPT-3's few-shot learning capabilities to more easily develop a solution that would have better performance than our prior work. Unfortunately, we have not been successful in that effort. This paper discusses the approach we used, challenges we encountered, and results we observed. Some of the problems we encountered were simply due to the state of the art. For example, the limits on the size of the prompt and answer limited the amount of the training signal that could be offered. Others are more fundamental. We are unaware of generative models that excel in retaining factual information. Also, the impact of changes in the prompts is unpredictable, making it hard to reliably improve performance.",,arXiv,['cs.cl'],, -735,flex unifying evaluation for fewshot nlp,"['Jonathan Bragg', 'Arman Cohan', 'Kyle Lo', 'Iz Beltagy']",http://arxiv.org/pdf/2107.07170v2.pdf,2021-07-15,," Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design. Consequently, the community does not know which techniques perform best or even if they outperform simple baselines. In response, we formulate the FLEX Principles, a set of requirements and best practices for unified, rigorous, valid, and cost-sensitive few-shot NLP evaluation. These principles include Sample Size Design, a novel approach to benchmark design that optimizes statistical accuracy and precision while keeping evaluation costs manageable. Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. 
In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing complex machinery of recent prompt-based approaches in adapting downstream task formats to language model pretraining objectives. We demonstrate that despite simplicity, UniFew achieves results competitive with both popular meta-learning and prompt-based approaches.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, -736,conqx semantic expansion of spoken queries for intent detection based on conditioned text generation,"['Eyup Halit Yilmaz', 'Cagri Toraman']",http://arxiv.org/pdf/2109.00729v1.pdf,2021-09-02,," Intent detection of spoken queries is a challenging task due to their noisy structure and short length. To provide additional information regarding the query and enhance the performance of intent detection, we propose a method for semantic expansion of spoken queries, called ConQX, which utilizes the text generation ability of an auto-regressive language model, GPT-2. To avoid off-topic text generation, we condition the input query to a structured context with prompt mining. We then apply zero-shot, one-shot, and few-shot learning. We lastly use the expanded queries to fine-tune BERT and RoBERTa for intent detection. The experimental results show that the performance of intent detection can be improved by our semantic expansion method.",,arXiv,"['cs.cl', 'cs.ai']",, -737,do promptbased models really understand the meaning of their prompts,"['Albert Webson', 'Ellie Pavlick']",http://arxiv.org/pdf/2109.01247v2.pdf,2021-09-02,," Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot learning with various prompt-based models. It is commonly argued that prompts help models to learn faster in the same way that humans learn faster when provided with task instructions expressed in natural language. In this study, we experiment with over 30 prompt templates manually written for natural language inference (NLI). We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively ""good"" prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on hundreds of prompts (Sanh et al., 2022). That is, instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots. In sum, notwithstanding prompt-based models' impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans' use of task instructions.",,arXiv,['cs.cl'],, -738,fewshot emotion recognition in conversation with sequential prototypical networks,"['Gaël Guibon', 'Matthieu Labeau', 'Hélène Flamein', 'Luce Lefeuvre', 'Chloé Clavel']",http://arxiv.org/pdf/2109.09366v1.pdf,2021-09-20,," Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. 
This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones.",,arXiv,"['cs.cl', 'cs.lg']",, -739,useridentifier implicit user representations for simple and effective personalized sentiment analysis,"['Fatemehsadat Mireshghallah', 'Vaishnavi Shrivastava', 'Milad Shokouhi', 'Taylor Berg-Kirkpatrick', 'Robert Sim', 'Dimitrios Dimitriadis']",http://arxiv.org/pdf/2110.00135v2.pdf,2021-10-01,," Global models are trained to be as generalizable as possible, with user invariance considered desirable since the models are shared across multitudes of users. As such, these models are often unable to produce personalized responses for individual users, based on their data. Contrary to widely-used personalization techniques based on few-shot learning, we propose UserIdentifier, a novel scheme for training a single shared model for all users. Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data. We empirically demonstrate that this proposed method outperforms the prefix-tuning based state-of-the-art approach by up to 13%, on a suite of sentiment analysis datasets. We also show that, unlike prior work, this method needs neither any additional model parameters nor any extra rounds of few-shot fine-tuning.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, -740,instanceaware prompt learning for language understanding and generation,"['Feihu Jin', 'Jinliang Lu', 'Jiajun Zhang', 'Chengqing Zong']",http://arxiv.org/pdf/2201.07126v1.pdf,2022-01-18,," Recently, prompt learning has become a new paradigm to utilize pre-trained language models (PLMs) and achieves promising results in downstream tasks with a negligible increase of parameters. The current usage of discrete and continuous prompts assumes that the prompt is fixed for a specific task and all samples in the task share the same prompt. However, a task may contain quite diverse samples in which some are easy and others are difficult, and diverse prompts are desirable. In this paper, we propose an instance-aware prompt learning method that learns a different prompt for each instance. Specifically, we suppose that each learnable prompt token has a different contribution to different instances, and we learn the contribution by calculating the relevance score between an instance and each prompt token. The contribution weighted prompt would be instance aware. We apply our method to both unidirectional and bidirectional PLMs on both language understanding and generation tasks. Extensive experiments demonstrate that our method obtains considerable improvements compared to strong baselines. 
Especially, our method achieves the state-of-the-art on the SuperGLUE few-shot learning benchmark.",,arXiv,['cs.cl'],, -741,generating training data with language models towards zeroshot language understanding,"['Yu Meng', 'Jiaxin Huang', 'Yu Zhang', 'Jiawei Han']",http://arxiv.org/pdf/2202.04538v2.pdf,2022-02-09,," Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks. While both types of models have achieved promising few-shot learning performance, their potential for zero-shot learning has been underexplored. In this paper, we present a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks without requiring any task-specific data: A unidirectional PLM generates class-conditioned texts guided by prompts, which are used as the training data for fine-tuning a bidirectional PLM. With quality training data selected based on the generation probability and regularization techniques (label smoothing and temporal ensembling) applied to the fine-tuning stage for better generalization and stability, our approach demonstrates strong performance across seven classification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and 92.8 on SST-2), significantly outperforming zero-shot prompting methods and achieving even comparable results to strong few-shot approaches using 32 training samples per class.",,arXiv,"['cs.cl', 'cs.lg']",, -742,variational autoencoder with disentanglement priors for lowresource taskspecific natural language generation,"['Zhuang Li', 'Lizhen Qu', 'Qiongkai Xu', 'Tongtong Wu', 'Tianyang Zhan', 'Gholamreza Haffari']",http://arxiv.org/pdf/2202.13363v3.pdf,2022-02-27,," In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples. In order to tackle compositional generalization across tasks, our model performs disentangled representation learning by introducing a conditional prior for the latent content space and another conditional prior for the latent label space. Both types of priors satisfy a novel property called $\epsilon$-disentangled. We show both empirically and theoretically that the novel priors can disentangle representations even without specific regularizations as in the prior work. The content prior enables directly sampling diverse content representations from the content space learned from the seen tasks, and fuse them with the representations of novel tasks for generating semantically diverse texts in the low-resource settings. Our extensive experiments demonstrate the superior performance of our model over competitive baselines in terms of i) data augmentation in continuous zero/few-shot learning, and ii) text style transfer in the few-shot setting.",,arXiv,['cs.cl'],, -743,claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification,"['Yucheng Zhou', 'Tao Shen', 'Xiubo Geng', 'Guodong Long', 'Daxin Jiang']",http://arxiv.org/pdf/2203.02225v2.pdf,2022-03-04,," Generating new events given context with correlated ones plays a crucial role in many event-centric reasoning tasks. Existing works either limit their scope to specific scenarios or overlook event-level correlations. 
In this paper, we propose to pre-train a general Correlation-aware context-to-Event Transformer (ClarET) for event-centric reasoning. To achieve this, we propose three novel event-centric objectives, i.e., whole event recovering, contrastive event-correlation encoding and prompt-based event locating, which highlight event-level correlations with effective training. The proposed ClarET is applicable to a wide range of event-centric reasoning scenarios, considering its versatility of (i) event-correlation types (e.g., causal, temporal, contrast), (ii) application formulations (i.e., generation and classification), and (iii) reasoning types (e.g., abductive, counterfactual and ending reasoning). Empirical fine-tuning results, as well as zero- and few-shot learning, on 9 benchmarks (5 generation and 4 classification tasks covering 4 reasoning types with diverse event correlations), verify its effectiveness and generalization ability.",,arXiv,['cs.cl'],, -744,pretrained tokenreplaced detection model as fewshot learner,"['Zicheng Li', 'Shoushan Li', 'Guodong Zhou']",http://arxiv.org/pdf/2203.03235v2.pdf,2022-03-07,," Pre-trained masked language models have demonstrated remarkable ability as few-shot learners. In this paper, as an alternative, we propose a novel approach to few-shot learning with pre-trained token-replaced detection models like ELECTRA. In this approach, we reformulate a classification or a regression task as a token-replaced detection problem. Specifically, we first define a template and label description words for each task and put them into the input to form a natural language prompt. Then, we employ the pre-trained token-replaced detection model to predict which label description word is the most original (i.e., least replaced) among all label description words in the prompt. A systematic evaluation on 16 datasets demonstrates that our approach outperforms few-shot learners with pre-trained masked language models in both one-sentence and two-sentence learning tasks.",,arXiv,"['cs.cl', 'cs.ai']",, -745,prototypical verbalizer for promptbased fewshot tuning,"['Ganqu Cui', 'Shengding Hu', 'Ning Ding', 'Longtao Huang', 'Zhiyuan Liu']",http://arxiv.org/pdf/2203.09770v1.pdf,2022-03-18,," Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers heavily depend on domain-specific prior knowledge and human efforts, while finding appropriate label words automatically still remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb) which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our codes are avaliable at https://github.com/thunlp/OpenPrompt.",,arXiv,"['cs.cl', 'cs.lg']",, -746,inverse is better! 
fast and accurate prompt for fewshot slot tagging,"['Yutai Hou', 'Cheng Chen', 'Xianzhen Luo', 'Bohan Li', 'Wanxiang Che']",http://arxiv.org/pdf/2204.00885v1.pdf,2022-04-02,," Prompting methods recently achieve impressive success in few-shot learning. These methods modify input samples with prompt sentence pieces, and decode label tokens to map samples to corresponding labels. However, such a paradigm is very inefficient for the task of slot tagging. Since slot tagging samples are multiple consecutive words in a sentence, the prompting methods have to enumerate all n-grams token spans to find all the possible slots, which greatly slows down the prediction. To tackle this, we introduce an inverse paradigm for prompting. Different from the classic prompts mapping tokens to labels, we reversely predict slot values given slot types. Such inverse prompting only requires a one-turn prediction for each slot type and greatly speeds up the prediction. Besides, we propose a novel Iterative Prediction Strategy, from which the model learns to refine predictions by considering the relations between different slot types. We find, somewhat surprisingly, the proposed method not only predicts faster but also significantly improves the effect (improve over 6.1 F1-scores on 10-shot setting) and achieves new state-of-the-art performance.",,arXiv,"['cs.cl', 'cs.ai']",, -747,leveraging pretrained language models for conversational information seeking from text,"['Patrizio Bellan', 'Mauro Dragoni', 'Chiara Ghidini']",http://arxiv.org/pdf/2204.03542v1.pdf,2022-03-31,," Recent advances in Natural Language Processing, and in particular on the construction of very large pre-trained language representation models, is opening up new perspectives on the construction of conversational information seeking (CIS) systems. In this paper we investigate the usage of in-context learning and pre-trained language representation models to address the problem of information extraction from process description documents, in an incremental question and answering oriented fashion. In particular we investigate the usage of the native GPT-3 (Generative Pre-trained Transformer 3) model, together with two in-context learning customizations that inject conceptual definitions and a limited number of samples in a few shot-learning fashion. The results highlight the potential of the approach and the usefulness of the in-context learning customizations, which can substantially contribute to address the ""training data challenge"" of deep learning based NLP techniques the BPM field. It also highlight the challenge posed by control flow relations for which further training needs to be devised.",,arXiv,"['cs.cl', 'cs.ai']",, -748,superprompting utilizing modelindependent contextual data to reduce data annotation required in visual commonsense tasks,"['Navid Rezaei', 'Marek Z. Reformat']",http://arxiv.org/pdf/2204.11922v1.pdf,2022-04-25,," Pre-trained language models have shown excellent results in few-shot learning scenarios using in-context learning. Although it is impressive, the size of language models can be prohibitive to make them usable in on-device applications, such as sensors or smartphones. With smaller language models, task-specific data annotation is needed to fine-tune the language model for a specific purpose. However, data annotation can have a substantial financial and time burden for small research groups, startups, and even companies. 
In this paper, we analyze different prompt-based fine-tuning techniques to improve results on both language and multimodal causal transformer models. To evaluate our results, we use a dataset focusing on visual commonsense reasoning in time. Our results show that by simple model-agnostic prompt-based fine-tuning, comparable results can be reached by only using 35%-40% of the fine-tuning training dataset. The proposed approaches result in significant time and financial savings. As the proposed methods make minimal architectural assumptions, other researchers can use the results in their transformer models with minimal adaptations. We plan to release the source code freely to make it easier for the community to use and contribute to our work.",,arXiv,"['cs.cl', 'cs.ai']",, -749,building a role specified opendomain dialogue system leveraging largescale language models,"['Sanghwan Bae', 'Donghyun Kwak', 'Sungdong Kim', 'Donghoon Ham', 'Soyoung Kang', 'Sang-Woo Lee', 'Woomyoung Park']",http://arxiv.org/pdf/2205.00176v1.pdf,2022-04-30,," Recent open-domain dialogue models have brought numerous breakthroughs. However, building a chat system is not scalable since it often requires a considerable volume of human-human dialogue data, especially when enforcing features such as persona, style, or safety. In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans. To accomplish this, the system must satisfy a role specification that includes certain conditions on the stated features as well as a system policy on whether or not certain types of utterances are allowed. For this, we propose an efficient data collection framework leveraging in-context few-shot learning of large-scale language models for building role-satisfying dialogue dataset from scratch. We then compare various architectures for open-domain dialogue systems in terms of meeting role specifications while maintaining conversational abilities. Automatic and human evaluations show that our models return few out-of-bounds utterances, keeping competitive performance on general metrics. We release a Korean dialogue dataset we built for further research.",,arXiv,['cs.cl'],, -750,easynlp a comprehensive and easytouse toolkit for natural language processing,"['Chengyu Wang', 'Minghui Qiu', 'Chen Shi', 'Taolin Zhang', 'Tingting Liu', 'Lei Li', 'Jianing Wang', 'Ming Wang', 'Jun Huang', 'Wei Lin']",http://arxiv.org/pdf/2205.00258v2.pdf,2022-04-30,," The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP). Yet, it is not easy to obtain high-performing models and deploy them online for industrial practitioners. To bridge this gap, EasyNLP is designed to make it easy to build NLP applications, which supports a comprehensive suite of NLP algorithms. It further features knowledge-enhanced pre-training, knowledge distillation and few-shot learning functionalities for large-scale PTMs, and provides a unified framework of model training, inference and deployment for real-world applications. 
Currently, EasyNLP has powered over ten business units within Alibaba Group and is seamlessly integrated to the Platform of AI (PAI) products on Alibaba Cloud. The source code of our EasyNLP toolkit is released at GitHub (https://github.com/alibaba/EasyNLP).",,arXiv,['cs.cl'],, -751,politics pretraining with samestory article comparison for ideology prediction and stance detection,"['Yujian Liu', 'Xinliang Frederick Zhang', 'David Wegsman', 'Nick Beauchamp', 'Lu Wang']",http://arxiv.org/pdf/2205.00619v1.pdf,2022-05-02,," Ideology is at the core of political science research. Yet, there still does not exist general-purpose tools to characterize and predict ideology across different genres of text. To this end, we study Pretrained Language Models using novel ideology-driven pretraining objectives that rely on the comparison of articles on the same story written by media of different ideologies. We further collect a large-scale dataset, consisting of more than 3.6M political news articles, for pretraining. Our model POLITICS outperforms strong baselines and the previous state-of-the-art models on ideology prediction and stance detection tasks. Further analyses show that POLITICS is especially good at understanding long or formally written texts, and is also robust in few-shot learning scenarios.",,arXiv,['cs.cl'],, -752,kecp knowledge enhanced contrastive prompting for fewshot extractive question answering,"['Jianing Wang', 'Chengyu Wang', 'Minghui Qiu', 'Qiuhui Shi', 'Hongbin Wang', 'Jun Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.03071v1.pdf,2022-05-06,," Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs). However, most existing approaches for MRC may perform poorly in the few-shot learning scenario. To solve this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a seminal paradigm for EQA that transform the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. Simultaneously, rich semantics from the external knowledge base (KB) and the passage context are support for enhancing the representations of the query. In addition, to boost the performance of PLMs, we jointly train the model by the MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.",,arXiv,"['cs.cl', 'cs.ai']",, -753,proqa structural promptbased pretraining for unified question answering,"['Wanjun Zhong', 'Yifan Gao', 'Ning Ding', 'Yujia Qin', 'Zhiyuan Liu', 'Ming Zhou', 'Jiahai Wang', 'Jian Yin', 'Nan Duan']",http://arxiv.org/pdf/2205.04040v2.pdf,2022-05-09,," Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. The specialty in QA research hinders systems from modeling commonalities between tasks and generalization for wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves the QA-centric ability by structural prompt-based pre-training. 
Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task. Furthermore, ProQA is pre-trained with structural prompt-formatted large-scale synthesized corpus, which empowers the model with the commonly-required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance on both full data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking the advantages of the structural prompt.",,arXiv,['cs.cl'],, -754,allsh active learning guided by local sensitivity and hardness,"['Shujian Zhang', 'Chengyue Gong', 'Xingchao Liu', 'Pengcheng He', 'Weizhu Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2205.04980v2.pdf,2022-05-10,," Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data. In this work, we propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function. The proposed method generates data copies through local perturbations and selects data points whose predictive likelihoods diverge the most from their copies. We further empower our acquisition function by injecting the select-worst case perturbation. Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks. Furthermore, we observe consistent improvements over the baselines on the study of prompt selection in prompt-based few-shot learning. These experiments demonstrate that our acquisition guided by local sensitivity and hardness can be effective and beneficial for many NLP tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -755,prototypical calibration for fewshot learning of language models,"['Zhixiong Han', 'Yaru Hao', 'Li Dong', 'Yutao Sun', 'Furu Wei']",http://arxiv.org/pdf/2205.10183v2.pdf,2022-05-20,," In-context learning of GPT-like models has been recognized as fragile across different hand-crafted templates, and demonstration permutations. In this work, we propose prototypical calibration to adaptively learn a more robust decision boundary for zero- and few-shot classification, instead of greedy decoding. Concretely, our method first adopts Gaussian mixture distribution to estimate the prototypical clusters for all categories. Then we assign each cluster to the corresponding label by solving a weighted bipartite matching problem. Given an example, its prediction is calibrated by the likelihood of prototypical clusters. Experimental results show that prototypical calibration yields a substantial improvement on a diverse set of tasks. Extensive analysis across different scales also indicates that our method calibrates the decision boundary as expected, greatly improving the robustness of GPT to templates, permutations, and class imbalance.",,arXiv,['cs.cl'],, -756,bbtv2 towards a gradientfree future with large language models,"['Tianxiang Sun', 'Zhengfu He', 'Hong Qian', 'Yunhua Zhou', 'Xuanjing Huang', 'Xipeng Qiu']",http://arxiv.org/pdf/2205.11200v2.pdf,2022-05-23,," Most downstream adaptation methods tune all or part of the parameters of pre-trained models (PTMs) through gradient descent, where the tuning cost increases linearly with the growth of the model size. 
By contrast, gradient-free methods only require the forward computation of the PTM to tune the prompt, retaining the benefits of efficient tuning and deployment. Though, past work on gradient-free tuning often introduces gradient descent to seek a good initialization of prompt and lacks versatility across tasks and PTMs. In this paper, we present BBTv2, an improved version of Black-Box Tuning, to drive PTMs for few-shot learning. We prepend continuous prompts to every layer of the PTM and propose a divide-and-conquer gradient-free algorithm to optimize the prompts at different layers alternately. Extensive experiments across various tasks and PTMs show that BBTv2 can achieve comparable performance to full model tuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA, BitFit, etc.) under few-shot settings while maintaining much fewer tunable parameters.",,arXiv,"['cs.cl', 'cs.ai']",, -757,neural prompt search,"['Yuanhan Zhang', 'Kaiyang Zhou', 'Ziwei Liu']",http://arxiv.org/pdf/2206.04673v2.pdf,2022-06-09,," The size of vision models has grown exponentially over the last few years, especially after the emergence of Vision Transformer. This has motivated the development of parameter-efficient tuning methods, such as learning adapter layers or visual prompt tokens, which allow a tiny portion of model parameters to be trained whereas the vast majority obtained from pre-training are frozen. However, designing a proper tuning method is non-trivial: one might need to try out a lengthy list of design choices, not to mention that each downstream dataset often requires custom designs. In this paper, we view the existing parameter-efficient tuning methods as ""prompt modules"" and propose Neural prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, the optimal design of prompt modules through a neural architecture search algorithm, specifically for each downstream dataset. By conducting extensive experiments on over 20 vision datasets, we demonstrate that NOAH (i) is superior to individual prompt modules, (ii) has a good few-shot learning ability, and (iii) is domain-generalizable. The code and models are available at https://github.com/Davidzhangyuanhan/NOAH.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, -758,prompting decision transformer for fewshot policy generalization,"['Mengdi Xu', 'Yikang Shen', 'Shun Zhang', 'Yuchen Lu', 'Ding Zhao', 'Joshua B. Tenenbaum', 'Chuang Gan']",http://arxiv.org/pdf/2206.13499v1.pdf,2022-06-27,," Humans can leverage prior experience and learn novel tasks from a handful of demonstrations. In contrast to offline meta-reinforcement learning, which aims to achieve quick adaptation through better algorithm design, we investigate the effect of architecture inductive bias on the few-shot learning capability. We propose a Prompt-based Decision Transformer (Prompt-DT), which leverages the sequential modeling ability of the Transformer architecture and the prompt framework to achieve few-shot adaptation in offline RL. We design the trajectory prompt, which contains segments of the few-shot demonstrations, and encodes task-specific information to guide policy generation. Our experiments in five MuJoCo control benchmarks show that Prompt-DT is a strong few-shot learner without any extra finetuning on unseen target tasks. Prompt-DT outperforms its variants and strong meta offline RL baselines by a large margin with a trajectory prompt containing only a few timesteps. 
Prompt-DT is also robust to prompt length changes and can generalize to out-of-distribution (OOD) environments.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ro']",, -759,fewshot training llms for projectspecific codesummarization,"['Toufique Ahmed', 'Premkumar Devanbu']",http://arxiv.org/pdf/2207.04237v2.pdf,2022-07-09,," Very large language models (LLMs), such as GPT-3 and Codex have achieved state-of-the-art performance on several natural-language tasks, and show great promise also for code. A particularly exciting aspect of LLMs is their knack for few-shot and zero-shot learning: they can learn to perform a task with very few examples. Few-shotting has particular synergies in software engineering, where there are a lot of phenomena (identifier names, APIs, terminology, coding patterns) that are known to be highly project-specific. However, project-specific data can be quite limited, especially early in the history of a project; thus the few-shot learning capacity of LLMs might be very relevant. In this paper, we investigate the use few-shot training with the very large GPT (Generative Pre-trained Transformer) Codex model, and find evidence suggesting that one can significantly surpass state-of-the-art models for code-summarization, leveraging project-specific training.",,arXiv,"['cs.se', 'cs.lg']",, -760,convolutional bypasses are better vision transformer adapters,"['Shibo Jie', 'Zhi-Hong Deng']",http://arxiv.org/pdf/2207.07039v3.pdf,2022-07-14,," The pretrain-then-finetune paradigm has been widely adopted in computer vision. But as the size of Vision Transformer (ViT) grows exponentially, the full finetuning becomes prohibitive in view of the heavier storage overhead. Motivated by parameter-efficient transfer learning (PETL) on language transformers, recent studies attempt to insert lightweight adaptation modules (e.g., adapter layers or prompt tokens) to pretrained ViT and only finetune these modules while the pretrained weights are frozen. However, these modules were originally proposed to finetune language models and did not take into account the prior knowledge specifically for visual tasks. In this paper, we propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation modules, introducing only a small amount (less than 0.5% of model parameters) of trainable parameters to adapt the large ViT. Different from other PETL methods, Convpass benefits from the hard-coded inductive bias of convolutional layers and thus is more suitable for visual tasks, especially in the low-data regime. Experimental results on VTAB-1K benchmark and few-shot learning datasets show that Convpass outperforms current language-oriented adaptation modules, demonstrating the necessity to tailor vision-oriented adaptation modules for adapting vision models.",,arXiv,['cs.cv'],, -761,selfsupervision can be a good fewshot learner,"['Yuning Lu', 'Liangjian Wen', 'Jianzhuang Liu', 'Yajing Liu', 'Xinmei Tian']",http://arxiv.org/pdf/2207.09176v1.pdf,2022-07-19,," Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which prevents them from leveraging abundant unlabeled data. From an information-theoretic perspective, we propose an effective unsupervised FSL method, learning representations with self-supervision. Following the InfoMax principle, our method learns comprehensive representations by capturing the intrinsic structure of the data. Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training. 
Rather than supervised pre-training focusing on the discriminable features of the seen classes, our self-supervised model has less bias toward the seen classes, resulting in better generalization for unseen classes. We explain that supervised pre-training and self-supervised pre-training are actually maximizing different MI objectives. Extensive experiments are further conducted to analyze their FSL performance with various training settings. Surprisingly, the results show that self-supervised pre-training can outperform supervised pre-training under the appropriate conditions. Compared with state-of-the-art FSL methods, our approach achieves comparable performance on widely used FSL benchmarks without any labels of the base classes.",,arXiv,['cs.cv'],, -762,language model cascades,"['David Dohan', 'Winnie Xu', 'Aitor Lewkowycz', 'Jacob Austin', 'David Bieber', 'Raphael Gontijo Lopes', 'Yuhuai Wu', 'Henryk Michalewski', 'Rif A. Saurous', 'Jascha Sohl-dickstein', 'Kevin Murphy', 'Charles Sutton']",http://arxiv.org/pdf/2207.10342v2.pdf,2022-07-21,," Prompted models have demonstrated impressive few-shot learning abilities. Repeated interactions at test-time with a single model, or the composition of multiple models together, further expands capabilities. These compositions are probabilistic models, and may be expressed in the language of graphical models with random variables whose values are complex data types such as strings. Cases with control flow and dynamic structure require techniques from probabilistic programming, which allow implementing disparate model structures and inference strategies in a unified language. We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use. We refer to the resulting programs as language model cascades.",,arXiv,"['cs.cl', 'cs.ai']",, -763,fewshot adaptation works with unpredictable data,"['Jun Shern Chan', 'Michael Pieler', 'Jonathan Jao', 'Jérémy Scheurer', 'Ethan Perez']",http://arxiv.org/pdf/2208.01009v2.pdf,2022-08-01,," Prior work on language models (LMs) shows that training on a large number of diverse tasks improves few-shot learning (FSL) performance on new tasks. We take this to the extreme, automatically extracting 413,299 tasks from internet tables - orders of magnitude more than the next-largest public datasets. Finetuning on the resulting dataset leads to improved FSL performance on Natural Language Processing (NLP) tasks, but not proportionally to dataset scale. In fact, we find that narrow subsets of our dataset sometimes outperform more diverse datasets. For example, finetuning on software documentation from support.google.com raises FSL performance by a mean of +7.5% on 52 downstream tasks, which beats training on 40 human-curated NLP datasets (+6.7%). Finetuning on various narrow datasets leads to similar broad improvements across test tasks, suggesting that the gains are not from domain adaptation but adapting to FSL in general. We do not observe clear patterns between the datasets that lead to FSL gains, leaving open questions about why certain data helps with FSL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -764,robotic interestingness via humaninformed fewshot object detection,"['Seungchan Kim', 'Chen Wang', 'Bowen Li', 'Sebastian Scherer']",http://arxiv.org/pdf/2208.01084v1.pdf,2022-08-01,," Interestingness recognition is crucial for decision making in autonomous exploration for mobile robots. 
Previous methods proposed an unsupervised online learning approach that can adapt to environments and detect interesting scenes quickly, but lack the ability to adapt to human-informed interesting objects. To solve this problem, we introduce a human-interactive framework, AirInteraction, that can detect human-informed objects via few-shot online learning. To reduce the communication bandwidth, we first apply an online unsupervised learning algorithm on the unmanned vehicle for interestingness recognition and then only send the potential interesting scenes to a base-station for human inspection. The human operator is able to draw and provide bounding box annotations for particular interesting objects, which are sent back to the robot to detect similar objects via few-shot learning. Only using few human-labeled examples, the robot can learn novel interesting object categories during the mission and detect interesting scenes that contain the objects. We evaluate our method on various interesting scene recognition datasets. To the best of our knowledge, it is the first human-informed few-shot object detection framework for autonomous exploration.",,arXiv,['cs.ro'],, -765,atlas fewshot learning with retrieval augmented language models,"['Gautier Izacard', 'Patrick Lewis', 'Maria Lomeli', 'Lucas Hosseini', 'Fabio Petroni', 'Timo Schick', 'Jane Dwivedi-Yu', 'Armand Joulin', 'Sebastian Riedel', 'Edouard Grave']",http://arxiv.org/pdf/2208.03299v3.pdf,2022-08-05,," Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts to store knowledge seem to be needed. Retrieval augmented models are known to excel at knowledge intensive tasks without the need for as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval augmented language model able to learn knowledge intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B parameters model by 3% despite having 50x fewer parameters.",,arXiv,['cs.cl'],, -766,limits of an ai program for solving college math problems,['Ernest Davis'],http://arxiv.org/pdf/2208.06906v1.pdf,2022-08-14,," Drori et al. (2022) report that ""A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level ... [It] automatically answers 81\% of university-level mathematics problems."" The system they describe is indeed impressive; however, the above description is very much overstated. The work of solving the problems is done, not by a neural network, but by the symbolic algebra package Sympy. Problems of various formats are excluded from consideration. The so-called ""explanations"" are just rewordings of lines of code. Answers are marked as correct that are not in the form specified in the problem. 
Most seriously, it seems that in many cases the system uses the correct answer given in the test corpus to guide its path to solving the problem.",,arXiv,['cs.ai'],, -767,efficient fewshot learning without prompts,"['Lewis Tunstall', 'Nils Reimers', 'Unso Eun Seo Jo', 'Luke Bates', 'Daniel Korat', 'Moshe Wasserblat', 'Oren Pereg']",http://arxiv.org/pdf/2209.11055v1.pdf,2022-09-22,," Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) and pattern exploiting training (PET), have achieved impressive results in label-scarce settings. However, they are difficult to employ since they are subject to high variability from manually crafted prompts, and typically require billion-parameter language models to achieve high accuracy. To address these shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers (ST). SetFit works by first fine-tuning a pretrained ST on a small number of text pairs, in a contrastive Siamese manner. The resulting model is then used to generate rich text embeddings, which are used to train a classification head. This simple framework requires no prompts or verbalizers, and achieves high accuracy with orders of magnitude less parameters than existing techniques. Our experiments show that SetFit obtains comparable results with PEFT and PET techniques, while being an order of magnitude faster to train. We also show that SetFit can be applied in multilingual settings by simply switching the ST body. Our code is available at https://github.com/huggingface/setfit and our datasets at https://huggingface.co/setfit .",,arXiv,['cs.cl'],, -768,core a retrievethenedit framework for counterfactual data generation,"['Tanay Dixit', 'Bhargavi Paranjape', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2210.04873v2.pdf,2022-10-10,," Counterfactual data augmentation (CDA) -- i.e., adding minimally perturbed inputs during training -- helps reduce model reliance on spurious correlations and improves generalization to out-of-distribution (OOD) data. Prior work on generating counterfactuals only considered restricted classes of perturbations, limiting their effectiveness. We present COunterfactual Generation via Retrieval and Editing (CORE), a retrieval-augmented generation framework for creating diverse counterfactual perturbations for CDA. For each training example, CORE first performs a dense retrieval over a task-related unlabeled text corpus using a learned bi-encoder and extracts relevant counterfactual excerpts. CORE then incorporates these into prompts to a large language model with few-shot learning capabilities, for counterfactual editing. Conditioning language model edits on naturally occurring data results in diverse perturbations. Experiments on natural language inference and sentiment analysis benchmarks show that CORE counterfactuals are more effective at improving generalization to OOD data compared to other DA approaches. We also show that the CORE retrieval framework can be used to encourage diversity in manually authored perturbations",,arXiv,['cs.cl'],, -769,continual training of language models for fewshot learning,"['Zixuan Ke', 'Haowei Lin', 'Yijia Shao', 'Hu Xu', 'Lei Shu', 'Bing Liu']",http://arxiv.org/pdf/2210.05549v1.pdf,2022-10-11,," Recent work on applying large language models (LMs) achieves impressive performance in many NLP applications. Adapting or posttraining an LM using an unlabeled domain corpus can produce even better performance for end-tasks in the domain. 
This paper proposes the problem of continually extending an LM by incrementally post-train the LM with a sequence of unlabeled domain corpora to expand its knowledge without forgetting its previous skills. The goal is to improve the few-shot end-task learning in these domains. The resulting system is called CPT (Continual PostTraining), which to our knowledge, is the first continual post-training system. Experimental results verify its effectiveness.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",, -770,knowledgegrounded dialog state tracking,"['Dian Yu', 'Mingqiu Wang', 'Yuan Cao', 'Izhak Shafran', 'Laurent El Shafey', 'Hagen Soltau']",http://arxiv.org/pdf/2210.06656v1.pdf,2022-10-13,," Knowledge (including structured knowledge such as schema and ontology, and unstructured knowledge such as web corpus) is a critical part of dialog understanding, especially for unseen tasks and domains. Traditionally, such domain-specific knowledge is encoded implicitly into model parameters for the execution of downstream tasks, which makes training inefficient. In addition, such models are not easily transferable to new tasks with different schemas. In this work, we propose to perform dialog state tracking grounded on knowledge encoded externally. We query relevant knowledge of various forms based on the dialog context where such information can ground the prediction of dialog states. We demonstrate superior performance of our proposed method over strong baselines, especially in the few-shot learning setting.",,arXiv,['cs.cl'],, -771,"visionlanguage pretraining basics, recent advances, and future trends","['Zhe Gan', 'Linjie Li', 'Chunyuan Li', 'Lijuan Wang', 'Zicheng Liu', 'Jianfeng Gao']",http://arxiv.org/pdf/2210.09263v1.pdf,2022-10-17,," This paper surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. We group these approaches into three categories: ($i$) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; ($ii$) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and ($iii$) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, we present a comprehensive review of state-of-the-art methods, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies. In addition, for each category, we discuss advanced topics being actively explored in the research community, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.",,arXiv,"['cs.cv', 'cs.cl']",, -772,better fewshot relation extraction with label prompt dropout,"['Peiyuan Zhang', 'Wei Lu']",http://arxiv.org/pdf/2210.13733v1.pdf,2022-10-25,," Few-shot relation extraction aims to learn to identify the relation between two entities based on very limited training examples. Recent efforts found that textual labels (i.e., relation names and relation descriptions) could be extremely useful for learning class representations, which will benefit the few-shot learning task. However, what is the best way to leverage such label information in the learning process is an important research question. Existing works largely assume such textual labels are always present during both learning and prediction. 
In this work, we argue that such approaches may not always lead to optimal results. Instead, we present a novel approach called label prompt dropout, which randomly removes label descriptions in the learning process. Our experiments show that our approach is able to lead to improved class representations, yielding significantly better results on the few-shot relation extraction task.",,arXiv,['cs.cl'],, -773,stprompt semanticguided and taskdriven prompts for effective fewshot classification,"['Jinta Weng', 'Yue Hu', 'Jing Qiu', 'Heyan Huan']",http://arxiv.org/pdf/2210.16489v1.pdf,2022-10-29,," The effectiveness of prompt learning has been demonstrated in different pre-trained language models. By formulating suitable template and choosing representative label mapping, prompt learning can be used as an efficient knowledge probe. However, finding suitable prompt in existing methods requires multiple experimental attempts or appropriate vector initialization on formulating suitable template and choosing representative label mapping, which it is more common in few-shot learning tasks. Motivating by PLM working process, we try to construct the prompt from task semantic perspective and thus propose the STPrompt -Semantic-guided and Task-driven Prompt model. Specifically, two novel prompts generated from the semantic dependency tree (Dep-prompt) and task-specific metadata description (Meta-prompt), are firstly constructed in a prompt augmented pool, and the proposed model would automatically select a suitable semantic prompt to motivating the prompt learning process. Our results show that the proposed model achieves the state-of-the-art performance in five different datasets of few-shot text classification tasks, which prove that more semantic and significant prompts could assume as a better knowledge proving tool.",,arXiv,"['cs.cl', 'cs.ai']",, -774,retrievalaugmented generative question answering for event argument extraction,"['Xinya Du', 'Heng Ji']",http://arxiv.org/pdf/2211.07067v1.pdf,2022-11-14,," Event argument extraction has long been studied as a sequential prediction problem with extractive-based methods, tackling each argument in isolation. Although recent work proposes generation-based methods to capture cross-argument dependency, they require generating and post-processing a complicated target sequence (template). Motivated by these observations and recent pretrained language models' capabilities of learning from demonstrations. We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction. It retrieves the most similar QA pair and augments it as prompt to the current example's context, then decodes the arguments as answers. Our approach outperforms substantially prior methods across various settings (i.e. fully supervised, domain transfer, and fewshot learning). Finally, we propose a clustering-based sampling strategy (JointEnc) and conduct a thorough analysis of how different strategies influence the few-shot learning performance. The implementations are available at https://github.com/xinyadu/RGQA",,arXiv,['cs.cl'],, -775,protsi prototypical siamese network with data augmentation for fewshot subjective answer evaluation,"['Yining Lu', 'Jingxi Qiu', 'Gaurav Gupta']",http://arxiv.org/pdf/2211.09855v1.pdf,2022-11-17,," Subjective answer evaluation is a time-consuming and tedious task, and the quality of the evaluation is heavily influenced by a variety of subjective personal characteristics. 
Instead, machine evaluation can effectively assist educators in saving time while also ensuring that evaluations are fair and realistic. However, most existing methods using regular machine learning and natural language processing techniques are generally hampered by a lack of annotated answers and poor model interpretability, making them unsuitable for real-world use. To solve these challenges, we propose ProtSi Network, a unique semi-supervised architecture that for the first time uses few-shot learning to subjective answer evaluation. To evaluate students' answers by similarity prototypes, ProtSi Network simulates the natural process of evaluator scoring answers by combining Siamese Network which consists of BERT and encoder layers with Prototypical Network. We employed an unsupervised diverse paraphrasing model ProtAugment, in order to prevent overfitting for effective few-shot text classification. By integrating contrastive learning, the discriminative text issue can be mitigated. Experiments on the Kaggle Short Scoring Dataset demonstrate that the ProtSi Network outperforms the most recent baseline models in terms of accuracy and quadratic weighted kappa.",,arXiv,['cs.cl'],, -776,tempera testtime prompting via reinforcement learning,"['Tianjun Zhang', 'Xuezhi Wang', 'Denny Zhou', 'Dale Schuurmans', 'Joseph E. Gonzalez']",http://arxiv.org/pdf/2211.11890v1.pdf,2022-11-21,," Careful prompt design is critical to the use of large language models in zero-shot or few-shot learning. As a consequence, there is a growing interest in automated methods to design optimal prompts. In this work, we propose Test-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge, is adaptive to different queries and provides an interpretable prompt for every query. To achieve this, we design a novel action space that allows flexible editing of the initial prompts covering a wide set of commonly-used components like instructions, few-shot exemplars, and verbalizers. The proposed method achieves significant gains compared with recent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across a variety of tasks including sentiment analysis, topic classification, natural language inference, and reading comprehension. Our method achieves 5.33x on average improvement in sample efficiency when compared to the traditional fine-tuning methods.",,arXiv,"['cs.cl', 'cs.ai']",, -777,fewshot nested named entity recognition,"['Hong Ming', 'Jiaoyun Yang', 'Lili Jiang', 'Yan Pan', 'Ning An']",http://arxiv.org/pdf/2212.00953v1.pdf,2022-12-02,," While Named Entity Recognition (NER) is a widely studied task, making inferences of entities with only a few labeled data has been challenging, especially for entities with nested structures. Unlike flat entities, entities and their nested entities are more likely to have similar semantic feature representations, drastically increasing difficulties in classifying different entity categories in the few-shot setting. Although prior work has briefly discussed nested structures in the context of few-shot learning, to our best knowledge, this paper is the first one specifically dedicated to studying the few-shot nested NER task. Leveraging contextual dependency to distinguish nested entities, we propose a Biaffine-based Contrastive Learning (BCL) framework. 
We first design a Biaffine span representation module for learning the contextual span dependency representation for each entity span rather than only learning its semantic representation. We then merge these two representations by the residual connection to distinguish nested entities. Finally, we build a contrastive learning framework to adjust the representation distribution for larger margin boundaries and more generalized domain transfer learning ability. We conducted experimental studies on three English, German, and Russian nested NER datasets. The results show that the BCL outperformed three baseline models on the 1-shot and 5-shot tasks in terms of F1 score.",,arXiv,"['cs.cl', 'cs.ai']",, -778,improving fewshot performance of language models via nearest neighbor calibration,"['Feng Nie', 'Meixi Chen', 'Zhirui Zhang', 'Xu Cheng']",http://arxiv.org/pdf/2212.02216v1.pdf,2022-12-05,," Pre-trained language models (PLMs) have exhibited remarkable few-shot learning capabilities when provided a few examples in a natural language prompt as demonstrations of test instances, i.e., in-context learning. However, the performance of in-context learning is susceptible to the choice of prompt format, training examples and the ordering of the training examples. In this paper, we propose a novel nearest-neighbor calibration framework for in-context learning to ease this issue. It is inspired by a phenomenon that the in-context learning paradigm produces incorrect labels when inferring training instances, which provides a useful supervised signal to calibrate predictions. Thus, our method directly augments the predictions with a $k$-nearest-neighbor ($k$NN) classifier over a datastore of cached few-shot instance representations obtained by PLMs and their corresponding labels. Then adaptive neighbor selection and feature regularization modules are introduced to make full use of a few support instances to reduce the $k$NN retrieval noise. Experiments on various few-shot text classification tasks demonstrate that our method significantly improves in-context learning, while even achieving comparable performance with state-of-the-art tuning-based approaches in some sentiment analysis tasks.",,arXiv,['cs.cl'],, -779,jampatoisnli a jamaican patois natural language inference dataset,"['Ruth-Ann Armstrong', 'John Hewitt', 'Christopher Manning']",http://arxiv.org/pdf/2212.03419v1.pdf,2022-12-07,," JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois. Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from a major world language and a distinctive grammar reflecting the languages of the original speakers and the process of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer from large monolingual or multilingual pretrained models. While our work, along with previous work, shows that transfer from these models to low-resource languages that are unrelated to languages in their training set is not very effective, we would expect stronger results from transfer to creoles. Indeed, our experiments show considerably better results from few-shot learning of JamPatoisNLI than for such unrelated languages, and help us begin to understand how the unique relationship between creoles and their high-resource base languages affect cross-lingual transfer. 
JamPatoisNLI, which consists of naturally-occurring premises and expert-written hypotheses, is a step towards steering research into a traditionally underserved language and a useful benchmark for understanding cross-lingual NLP.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, -780,learn to explore on bootstrapping interactive data exploration with metalearning,"['Yukun Cao', 'Xike Xie', 'Kexin Huang']",http://arxiv.org/pdf/2212.03423v4.pdf,2022-12-07,," Interactive data exploration (IDE) is an effective way of comprehending big data, whose volume and complexity are beyond human abilities. The main goal of IDE is to discover user interest regions from a database through multi-rounds of user labelling. Existing IDEs adopt active-learning framework, where users iteratively discriminate or label the interestingness of selected tuples. The process of data exploration can be viewed as the process of training a classifier, which determines whether a database tuple is interesting to a user. An efficient exploration thus takes very few iterations of user labelling to reach the data region of interest. In this work, we consider the data exploration as the process of few-shot learning, where the classifier is learned with only a few training examples, or exploration iterations. To this end, we propose a learning-to-explore framework, based on meta-learning, which learns how to learn a classifier with automatically generated meta-tasks, so that the exploration process can be much shortened. Extensive experiments on real datasets show that our proposal outperforms existing explore-by-example solutions in terms of accuracy and efficiency.",,arXiv,"['cs.db', 'cs.ai']",, -781,demystifying prompts in language models via perplexity estimation,"['Hila Gonen', 'Srini Iyer', 'Terra Blevins', 'Noah A. Smith', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2212.04037v1.pdf,2022-12-08,," Language models can be prompted to perform a wide variety of zero- and few-shot learning problems. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing using GPT3 and backtranslation and (2) choose the lowest perplexity prompts to get significant gains in performance.",,arXiv,['cs.cl'],, -782,localized latent updates for finetuning visionlanguage models,"['Moritz Ibing', 'Isaak Lim', 'Leif Kobbelt']",http://arxiv.org/pdf/2212.06556v1.pdf,2022-12-13,," Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities for many tasks, still it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we suggest a lightweight adapter, that only updates the models predictions close to seen datapoints. 
We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results both on classes seen and unseen during training are comparable with or improve on the state of the art.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -783,alert adapting language models to reasoning tasks,"['Ping Yu', 'Tianlu Wang', 'Olga Golovneva', 'Badr AlKhamissi', 'Siddharth Verma', 'Zhijing Jin', 'Gargi Ghosh', 'Mona Diab', 'Asli Celikyilmaz']",http://arxiv.org/pdf/2212.08286v2.pdf,2022-12-16,," Current large language models can perform reasonably well on complex tasks that require step-by-step reasoning with few-shot learning. Are these models applying reasoning skills they have learnt during pre-training and reason outside of their training context, or are they simply memorizing their training corpus at finer granularity and have learnt to better understand their context? To tease apart these possibilities, we introduce ALERT, a benchmark and suite of analyses for assessing language models' reasoning ability comparing pre-trained and finetuned models on complex tasks that require reasoning skills to solve. ALERT provides a test bed to asses any language model on fine-grained reasoning skills, which spans over 20 datasets and covers 10 different reasoning skills. We leverage ALERT to further investigate the role of finetuning. With extensive empirical analysis we find that language models learn more reasoning skills such as textual entailment, abductive reasoning, and analogical reasoning during finetuning stage compared to pretraining state. We also find that when language models are finetuned they tend to overfit to the prompt template, which hurts the robustness of models causing generalization problems.",,arXiv,['cs.cl'],, -784,learning from taxonomy multilabel fewshot classification for everyday sound recognition,"['Jinhua Liang', 'Huy Phan', 'Emmanouil Benetos']",http://arxiv.org/pdf/2212.08952v1.pdf,2022-12-17,," Everyday sound recognition aims to infer types of sound events in audio streams. While many works succeeded in training models with high performance in a fully-supervised manner, they are still restricted to the demand of large quantities of labelled data and the range of predefined classes. To overcome these drawbacks, this work firstly curates a new database named FSD-FS for multi-label few-shot audio classification. It then explores how to incorporate audio taxonomy in few-shot learning. Specifically, this work proposes label-dependent prototypical networks (LaD-protonet) to exploit parent-children relationships between labels. Plus, it applies taxonomy-aware label smoothing techniques to boost model performance. Experiments demonstrate that LaD-protonet outperforms original prototypical networks as well as other state-of-the-art methods. Moreover, its performance can be further boosted when combined with taxonomy-aware label smoothing.",,arXiv,"['cs.sd', 'eess.as']",, -785,a survey on fewshot knowledge graph completion with structural and commonsense knowledge,"['Haodi Ma', 'Daisy Zhe Wang']",http://arxiv.org/pdf/2301.01172v1.pdf,2023-01-03,," Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many know triples for training. 
In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to challenge the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -786,learning to initialize can meta learning improve crosstask generalization in prompt tuning,"['Chengwei Qin', 'Shafiq Joty', 'Qian Li', 'Ruochen Zhao']",http://arxiv.org/pdf/2302.08143v2.pdf,2023-02-16,," Prompt tuning (PT) which only tunes the embeddings of an additional sequence of tokens per task, keeping the pre-trained language model (PLM) frozen, has shown remarkable performance in few-shot learning. Despite this, PT has been shown to rely heavily on good initialization of the prompt embeddings. In this work, we study meta prompt tuning (MPT) to systematically explore how meta-learning can help improve (if it can) cross-task generalization in PT through learning to initialize the prompt embeddings from other relevant tasks. We empirically analyze a representative set of meta learning algorithms in a wide range of adaptation settings with different source/target task configurations on a large set of few-shot tasks. With extensive experiments and analysis, we demonstrate the effectiveness of MPT. We find the improvement to be significant particularly on classification tasks. For other kinds of tasks such as question answering, we observe that while MPT can outperform PT in most cases, it does not always outperform multi-task learning. We further provide an in-depth analysis from the perspective of task similarity.",,arXiv,"['cs.cl', 'cs.ai']",, -787,scalable prompt generation for semisupervised learning with language models,"['Yuhang Zhou', 'Suraj Maharjan', 'Beiye Liu']",http://arxiv.org/pdf/2302.09236v1.pdf,2023-02-18,," Prompt-based learning methods in semi-supervised learning (SSL) settings have been shown to be effective on multiple natural language understanding (NLU) datasets and tasks in the literature. However, manually designing multiple prompts and verbalizers requires domain knowledge and human effort, making it difficult and expensive to scale across different datasets. In this paper, we propose two methods to automatically design multiple prompts and integrate automatic verbalizer in SSL settings without sacrificing performance. The first method uses various demonstration examples with learnable continuous prompt tokens to create diverse prompt models. The second method uses a varying number of soft prompt tokens to encourage language models to learn different prompts. For the verbalizer, we use the prototypical verbalizer to replace the manual one. 
In summary, we obtained the best average accuracy of 73.2% (a relativeimprovement of 2.52% over even the previous state-of-the-art SSL method withmanual prompts and verbalizers) in different few-shot learning settings.",,arXiv,"['cs.cl', 'cs.ai']",, -788,language models are fewshot learners for prognostic prediction,"['Zekai Chen', 'Mariann Micsinai Balan', 'Kevin Brown']",http://arxiv.org/pdf/2302.12692v4.pdf,2023-02-24,," Clinical prediction is an essential task in the healthcare industry. However,the recent success of transformers, on which large language models are built,has not been extended to this domain. In this research, we explore the use oftransformers and language models in prognostic prediction for immunotherapyusing real-world patients' clinical data and molecular profiles. This paperinvestigates the potential of transformers to improve clinical predictioncompared to conventional machine learning approaches and addresses thechallenge of few-shot learning in predicting rare disease areas. The studybenchmarks the efficacy of baselines and language models on prognosticprediction across multiple cancer types and investigates the impact ofdifferent pretrained language models under few-shot regimes. The resultsdemonstrate significant improvements in accuracy and highlight the potential ofNLP in clinical research to improve early detection and intervention fordifferent diseases.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, -789,prefinetuning for fewshot emotional speech recognition,"['Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2302.12921v2.pdf,2023-02-24,," Speech models have long been known to overfit individual speakers for manyclassification tasks. This leads to poor generalization in settings where thespeakers are out-of-domain or out-of-distribution, as is common in productionenvironments. We view speaker adaptation as a few-shot learning problem andpropose investigating transfer learning approaches inspired by recent successwith pre-trained models in natural language tasks. We propose pre-finetuningspeech models on difficult tasks to distill knowledge into few-shot downstreamclassification objectives. We pre-finetune Wav2Vec2.0 on every permutation offour multiclass emotional speech recognition corpora and evaluate ourpre-finetuned models through 33,600 few-shot fine-tuning trials on theEmotional Speech Dataset.",,arXiv,"['cs.cl', 'cs.lg', 'cs.sd', 'eess.as']",, -790,mixture of soft prompts for controllable data generation,"['Derek Chen', 'Celine Lee', 'Yunan Lu', 'Domenic Rosati', 'Zhou Yu']",http://arxiv.org/pdf/2303.01580v2.pdf,2023-03-02,," Large language models (LLMs) effectively generate fluent text when the targetoutput follows natural language patterns. However, structured prediction tasksconfine the output format to a limited ontology, causing even very large modelsto struggle since they were never trained with such restrictions in mind. Thedifficulty of using LLMs for direct prediction is exacerbated in few-shotlearning scenarios, which commonly arise due to domain shift and resourcelimitations. We flip the problem on its head by leveraging the LLM as a toolfor data augmentation rather than direct prediction. Our proposed Mixture ofSoft Prompts (MSP) serves as a parameter-efficient procedure for generatingdata in a controlled manner. Denoising mechanisms are further applied toimprove the quality of synthesized data. Automatic metrics show our method iscapable of producing diverse and natural text, while preserving labelsemantics. 
Moreover, MSP achieves state-of-the-art results on three benchmarkswhen compared against strong baselines. Our method offers an alternatedata-centric approach for applying LLMs to complex prediction tasks.",,arXiv,['cs.cl'],, -791,prismer a visionlanguage model with an ensemble of experts,"['Shikun Liu', 'Linxi Fan', 'Edward Johns', 'Zhiding Yu', 'Chaowei Xiao', 'Anima Anandkumar']",http://arxiv.org/pdf/2303.02506v2.pdf,2023-03-04,," Recent vision-language models have shown impressive multi-modal generationcapabilities. However, typically they require training huge models on massivedatasets. As a more scalable alternative, we introduce Prismer, a data- andparameter-efficient vision-language model that leverages an ensemble of domainexperts. Prismer only requires training of a small number of components, withthe majority of network weights inherited from readily-available, pre-traineddomain experts, and kept frozen during training. By leveraging experts from awide range of domains, we show that Prismer can efficiently pool this expertknowledge and adapt it to various vision-language reasoning tasks. In ourexperiments, we show that Prismer achieves fine-tuned and few-shot learningperformance which is competitive with current state-of-the-art models, whilstrequiring up to two orders of magnitude less training data. Code is availableat https://github.com/NVlabs/prismer.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, -792,enhancing activity prediction models in drug discovery with the ability to understand human language,"['Philipp Seidl', 'Andreu Vall', 'Sepp Hochreiter', 'Günter Klambauer']",http://arxiv.org/pdf/2303.03363v2.pdf,2023-03-06,," Activity and property prediction models are the central workhorses in drugdiscovery and materials sciences, but currently they have to be trained orfine-tuned for new tasks. Without training or fine-tuning, scientific languagemodels could be used for such low-data tasks through their announced zero- andfew-shot capabilities. However, their predictive quality at activity predictionis lacking. In this work, we envision a novel type of activity prediction modelthat is able to adapt to new prediction tasks at inference time, viaunderstanding textual information describing the task. To this end, we proposea new architecture with separate modules for chemical and natural languageinputs, and a contrastive pre-training objective on data from large biochemicaldatabases. In extensive experiments, we show that our method CLAMP yieldsimproved predictive performance on few-shot learning benchmarks and zero-shotproblems in drug discovery. We attribute the advances of our method to themodularized architecture and to our pre-training objective.",,arXiv,"['q-bio.bm', 'cs.cl', 'cs.lg', 'stat.ml']",, -793,menucraft interactive menu system design with large language models,"['Amir Hossein Kargaran', 'Nafiseh Nikeghbal', 'Abbas Heydarnoori', 'Hinrich Schütze']",http://arxiv.org/pdf/2303.04496v2.pdf,2023-03-08,," Menu system design is a challenging task involving many design options andvarious human factors. For example, one crucial factor that designers need toconsider is the semantic and systematic relation of menu commands. However,capturing these relations can be challenging due to limited availableresources. With the advancement of neural language models, large languagemodels can utilize their vast pre-existing knowledge in designing and refiningmenu systems. 
In this paper, we propose MenuCraft, an AI-assisted designer formenu design that enables collaboration between the designer and a dialoguesystem to design menus. MenuCraft offers an interactive language-based menudesign tool that simplifies the menu design process and enables easycustomization of design options. MenuCraft supports a variety of interactionsthrough dialog that allows performing zero/few-shot learning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, -794,consistency analysis of chatgpt,"['Myeongjun Erik Jang', 'Thomas Lukasiewicz']",http://arxiv.org/pdf/2303.06273v3.pdf,2023-03-11,," ChatGPT has gained a huge popularity since its introduction. Its positiveaspects have been reported through many media platforms, and some analyses evenshowed that ChatGPT achieved a decent grade in professional exams, adding extrasupport to the claim that AI can now assist and even replace humans inindustrial fields. Others, however, doubt its reliability and trustworthiness.This paper investigates the trustworthiness of ChatGPT and GPT-4 regardinglogically consistent behaviour, focusing specifically on semantic consistencyand the properties of negation, symmetric, and transitive consistency. Ourfindings suggest that while both models appear to show an enhanced languageunderstanding and reasoning ability, they still frequently fall short ofgenerating logically consistent predictions. We also ascertain via experimentsthat prompt designing, few-shot learning and employing larger large languagemodels (LLMs) are unlikely to be the ultimate solution to resolve theinconsistency issue of LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, -795,learning expressive prompting with residuals for vision transformers,"['Rajshekhar Das', 'Yonatan Dukler', 'Avinash Ravichandran', 'Ashwin Swaminathan']",http://arxiv.org/pdf/2303.15591v1.pdf,2023-03-27,," Prompt learning is an efficient approach to adapt transformers by insertinglearnable set of parameters into the input and intermediate representations ofa pre-trained model. In this work, we present Expressive Prompts with Residuals(EXPRES) which modifies the prompt learning paradigm specifically for effectiveadaptation of vision transformers (ViT). Out method constructs downstreamrepresentations via learnable ``output'' tokens, that are akin to the learnedclass tokens of the ViT. Further for better steering of the downstreamrepresentation processed by the frozen transformer, we introduce residuallearnable tokens that are added to the output of various computations. We applyEXPRES for image classification, few shot learning, and semantic segmentation,and show our method is capable of achieving state of the art prompt tuning on3/3 categories of the VTAB benchmark. In addition to strong performance, weobserve that our approach is an order of magnitude more prompt efficient thanexisting visual prompting baselines. We analytically show the computationalbenefits of our approach over weight space adaptation techniques likefinetuning. Lastly we systematically corroborate the architectural design ofour method via a series of ablation experiments.",,arXiv,['cs.cv'],, -796,not all features matter enhancing fewshot clip with adaptive prior refinement,"['Xiangyang Zhu', 'Renrui Zhang', 'Bowei He', 'Aojun Zhou', 'Dong Wang', 'Bin Zhao', 'Peng Gao']",http://arxiv.org/pdf/2304.01195v1.pdf,2023-04-03,," The popularity of Contrastive Language-Image Pre-training (CLIP) haspropelled its application to diverse downstream vision tasks. 
To improve itscapacity on downstream tasks, few-shot learning has become a widely-adoptedtechnique. However, existing methods either exhibit limited performance orsuffer from excessive learnable parameters. In this paper, we propose APE, anAdaptive Prior rEfinement method for CLIP's pre-trained knowledge, whichachieves superior accuracy with high computational efficiency. Via a priorrefinement module, we analyze the inter-class disparity in the downstream dataand decouple the domain-specific knowledge from the CLIP-extracted cache model.On top of that, we introduce two model variants, a training-free APE and atraining-required APE-T. We explore the trilateral affinities between the testimage, prior cache model, and textual representations, and only enable alightweight category-residual module to be trained. For the average accuracyover 11 benchmarks, both APE and APE-T attain state-of-the-art and respectivelyoutperform the second-best by +1.59% and +1.99% under 16 shots with x30 lesslearnable parameters.",,arXiv,"['cs.cv', 'cs.ai', 'cs.mm']",, -797,sociocultural knowledge is needed for selection of shots in hate speech detection tasks,"['Antonis Maronikolakis', 'Abdullatif Köksal', 'Hinrich Schütze']",http://arxiv.org/pdf/2304.01890v4.pdf,2023-04-04,," We introduce HATELEXICON, a lexicon of slurs and targets of hate speech forthe countries of Brazil, Germany, India and Kenya, to aid training andinterpretability of models. We demonstrate how our lexicon can be used tointerpret model predictions, showing that models developed to classify extremespeech rely heavily on target words when making predictions. Further, wepropose a method to aid shot selection for training in low-resource settingsvia HATELEXICON. In few-shot learning, the selection of shots is of paramountimportance to model performance. In our work, we simulate a few-shot settingfor German and Hindi, using HASOC data for training and the MultilingualHateCheck (MHC) as a benchmark. We show that selecting shots based on ourlexicon leads to models performing better on MHC than models trained on shotssampled randomly. Thus, when given only a few training examples, using ourlexicon to select shots containing more sociocultural information leads tobetter few-shot performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -798,revisiting automated prompting are we actually doing better,"['Yulin Zhou', 'Yiren Zhao', 'Ilia Shumailov', 'Robert Mullins', 'Yarin Gal']",http://arxiv.org/pdf/2304.03609v2.pdf,2023-04-07,," Current literature demonstrates that Large Language Models (LLMs) are greatfew-shot learners, and prompting significantly increases their performance on arange of downstream tasks in a few-shot learning setting. An attempt toautomate human-led prompting followed, with some progress achieved. Inparticular, subsequent work demonstrates automation can outperform fine-tuningin certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six differentdownstream tasks and a larger range of K-shot learning settings. We find thatautomated prompting does not consistently outperform simple manual prompts. 
Ourwork suggests that, in addition to fine-tuning, manual prompts should be usedas a baseline in this line of research.",,arXiv,"['cs.cl', 'cs.lg']",, -799,information extraction from documents question answering vs token classification in realworld setups,"['Laurent Lam', 'Pirashanth Ratnamogan', 'Joël Tang', 'William Vanhuffel', 'Fabien Caspani']",http://arxiv.org/pdf/2304.10994v1.pdf,2023-04-21,," Research in Document Intelligence and especially in Document Key InformationExtraction (DocKIE) has been mainly solved as Token Classification problem.Recent breakthroughs in both natural language processing (NLP) and computervision helped building document-focused pre-training methods, leveraging amultimodal understanding of the document text, layout and image modalities.However, these breakthroughs also led to the emergence of a new DocKIE subtaskof extractive document Question Answering (DocQA), as part of the MachineReading Comprehension (MRC) research field. In this work, we compare theQuestion Answering approach with the classical token classification approachfor document key information extraction. We designed experiments to benchmarkfive different experimental setups : raw performances, robustness to noisyenvironment, capacity to extract long entities, fine-tuning speed on Few-ShotLearning and finally Zero-Shot Learning. Our research showed that when dealingwith clean and relatively short entities, it is still best to use tokenclassification-based approach, while the QA approach could be a goodalternative for noisy environment or long entities use-cases.",,arXiv,['cs.cl'],, -800,causal interventionsbased fewshot named entity recognition,"['Zhen Yang', 'Yongbin Liu', 'Chunping Ouyang']",http://arxiv.org/pdf/2305.01914v1.pdf,2023-05-03,," Few-shot named entity recognition (NER) systems aims at recognizing newclasses of entities based on a few labeled samples. A significant challenge inthe few-shot regime is prone to overfitting than the tasks with abundantsamples. The heavy overfitting in few-shot learning is mainly led by spuriouscorrelation caused by the few samples selection bias. To alleviate the problemof the spurious correlation in the few-shot NER, in this paper, we propose acausal intervention-based few-shot NER method. Based on the prototypicalnetwork, the method intervenes in the context and prototype via backdooradjustment during training. In particular, intervening in the context of theone-shot scenario is very difficult, so we intervene in the prototype viaincremental learning, which can also avoid catastrophic forgetting. Ourexperiments on different benchmarks show that our approach achieves newstate-of-the-art results (achieving up to 29% absolute improvement and 12% onaverage for all tasks).",,arXiv,['cs.cl'],, -801,data curation for image captioning with texttoimage generative models,"['Wenyan Li', 'Jonas F. Lotz', 'Chen Qiu', 'Desmond Elliott']",http://arxiv.org/pdf/2305.03610v1.pdf,2023-05-05,," Recent advances in image captioning are mainly driven by large-scalevision-language pretraining, relying heavily on computational resources andincreasingly large multimodal datasets. Instead of scaling up pretraining data,we ask whether it is possible to improve performance by improving the qualityof the samples in existing datasets. 
We pursue this question through twoapproaches to data curation: one that assumes that some examples should beavoided due to mismatches between the image and caption, and one that assumesthat the mismatch can be addressed by replacing the image, for which we use thestate-of-the-art Stable Diffusion model. These approaches are evaluated usingthe BLIP model on MS COCO and Flickr30K in both finetuning and few-shotlearning settings. Our simple yet effective approaches consistently outperformbaselines, indicating that better image captioning models can be trained bycurating existing resources. Finally, we conduct a human study to understandthe errors made by the Stable Diffusion model and highlight directions forfuture work in text-to-image generation.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, -802,make promptbased blackbox tuning colorful boosting model generalization from three orthogonal perspectives,"['Qiushi Sun', 'Chengcheng Han', 'Nuo Chen', 'Renyu Zhu', 'Jingyang Gong', 'Xiang Li', 'Ming Gao']",http://arxiv.org/pdf/2305.08088v1.pdf,2023-05-14,," Large language models (LLMs) have shown increasing power on various naturallanguage processing (NLP) tasks. However, tuning these models for downstreamtasks usually needs exorbitant costs or is unavailable due to commercialconsiderations. Recently, black-box tuning has been proposed to address thisproblem by optimizing task-specific prompts without accessing the gradients andhidden representations. However, most existing works have yet fully exploitedthe potential of gradient-free optimization under the scenario of few-shotlearning. In this paper, we describe BBT-RGB, a suite of straightforward andcomplementary techniques for enhancing the efficiency and performance ofblack-box optimization. Specifically, our method includes three plug-and-playcomponents: (1) Two-stage derivative-free optimization strategy thatfacilitates fast convergence and mitigates overfitting; (2) Automaticverbalizer construction with its novel usage under few-shot settings; (3)Better prompt initialization policy based on instruction search andauto-selected demonstration. Extensive experiments across various tasks onnatural language understanding and inference demonstrate the effectiveness ofour method. Our codes are publicly available athttps://github.com/QiushiSun/BBT-RGB.",,arXiv,"['cs.cl', 'cs.ai']",, -803,cplnovid contextaware promptbased learning for norm violation detection in online communities,"['Zihao He', 'Jonathan May', 'Kristina Lerman']",http://arxiv.org/pdf/2305.09846v2.pdf,2023-05-16,," Detecting norm violations in online communities is critical to maintaininghealthy and safe spaces for online discussions. Existing machine learningapproaches often struggle to adapt to the diverse rules and interpretationsacross different communities due to the inherent challenges of fine-tuningmodels for such context-specific tasks. In this paper, we introduceContext-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), anovel method that employs prompt-based learning to detect norm violationsacross various types of rules. CPL-NoViD outperforms the baseline byincorporating context through natural language prompts and demonstratesimproved performance across different rule types. Significantly, it not onlyexcels in cross-rule-type and cross-community norm violation detection but alsoexhibits adaptability in few-shot learning scenarios. Most notably, itestablishes a new state-of-the-art in norm violation detection, surpassingexisting benchmarks. 
Our work highlights the potential of prompt-based learningfor context-sensitive norm violation detection and paves the way for futureresearch on more adaptable, context-aware models to better support onlinecommunity moderators.",,arXiv,"['cs.cl', 'cs.si']",, -804,a weak supervision approach for fewshot aspect based sentiment,"['Robert Vacareanu', 'Siddharth Varia', 'Kishaloy Halder', 'Shuai Wang', 'Giovanni Paolini', 'Neha Anna John', 'Miguel Ballesteros', 'Smaranda Muresan']",http://arxiv.org/pdf/2305.11979v1.pdf,2023-05-19,," We explore how weak supervision on abundant unlabeled data can be leveragedto improve few-shot performance in aspect-based sentiment analysis (ABSA)tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and weuse it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. Wetest the resulting model on three widely used ABSA datasets, before and afterfine-tuning. Our proposed method preserves the full fine-tuning performancewhile showing significant improvements (15.84% absolute F1) in the few-shotlearning scenario for the harder tasks. In zero-shot (i.e., withoutfine-tuning), our method outperforms the previous state of the art on theaspect extraction sentiment classification (AESC) task and is, additionally,capable of performing the harder aspect sentiment triplet extraction (ASTE)task.",,arXiv,['cs.cl'],, -805,efficient open domain multihop question answering with fewshot data synthesis,"['Mingda Chen', 'Xilun Chen', 'Wen-tau Yih']",http://arxiv.org/pdf/2305.13691v1.pdf,2023-05-23,," Few-shot learning for open domain multi-hop question answering typicallyrelies on large language models (LLMs). While powerful, LLMs are inefficient atthe inference time. We propose a data synthesis framework for multi-hopquestion answering that allows for improving smaller language models with lessthan 10 human-annotated question answer pairs. The framework is built upon thedata generation functions parameterized by LLMs and prompts, which requiresminimal hand-crafted features. Empirically, we synthesize millions of multi-hopquestions and claims. After finetuning language models on the synthetic data,we evaluate the models on popular benchmarks on multi-hop question answeringand fact verification. Our experimental results show that finetuning on thesynthetic data improves model performance significantly, allowing our finetunedmodels to be competitive with prior models while being almost one-third thesize in terms of parameter counts.",,arXiv,['cs.cl'],, -806,images in language space exploring the suitability of large language models for vision & language tasks,"['Sherzod Hakimov', 'David Schlangen']",http://arxiv.org/pdf/2305.13782v1.pdf,2023-05-23,," Large language models have demonstrated robust performance on variouslanguage tasks using zero-shot or few-shot learning paradigms. While beingactively researched, multimodal models that can additionally handle images asinput have yet to catch up in size and generality with language-only models. Inthis work, we ask whether language-only models can be utilised for tasks thatrequire visual input -- but also, as we argue, often require a strong reasoningcomponent. Similar to some recent related work, we make visual informationaccessible to the language model using separate verbalisation models.Specifically, we investigate the performance of open-source, open-accesslanguage models against GPT-3 on five vision-language tasks when giventextually-encoded visual information. 
Our results suggest that language modelsare effective for solving vision-language tasks even with limited samples. Thisapproach also enhances the interpretability of a model's output by providing ameans of tracing the output back through the verbalised image content.",,arXiv,['cs.cl'],, -807,improving factuality and reasoning in language models through multiagent debate,"['Yilun Du', 'Shuang Li', 'Antonio Torralba', 'Joshua B. Tenenbaum', 'Igor Mordatch']",http://arxiv.org/pdf/2305.14325v1.pdf,2023-05-23,," Large language models (LLMs) have demonstrated remarkable capabilities inlanguage generation, understanding, and few-shot learning in recent years. Anextensive body of work has explored how their performance may be furtherimproved through the tools of prompting, ranging from verification,self-consistency, or intermediate scratchpads. In this paper, we present acomplementary approach to improve language responses where multiple languagemodel instances propose and debate their individual responses and reasoningprocesses over multiple rounds to arrive at a common final answer. Our findingsindicate that this approach significantly enhances mathematical and strategicreasoning across a number of tasks. We also demonstrate that our approachimproves the factual validity of generated content, reducing fallacious answersand hallucinations that contemporary models are prone to. Our approach may bedirectly applied to existing black-box models and uses identical procedure andprompts for all tasks we investigate. Overall, our findings suggest that such""society of minds"" approach has the potential to significantly advance thecapabilities of LLMs and pave the way for further breakthroughs in languagegeneration and understanding.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",, -808,training on thin air improve image classification with generated data,"['Yongchao Zhou', 'Hshmat Sahak', 'Jimmy Ba']",http://arxiv.org/pdf/2305.15316v1.pdf,2023-05-24,," Acquiring high-quality data for training discriminative models is a crucialyet challenging aspect of building effective predictive systems. In this paper,we present Diffusion Inversion, a simple yet effective method that leveragesthe pre-trained generative model, Stable Diffusion, to generate diverse,high-quality training data for image classification. Our approach captures theoriginal data distribution and ensures data coverage by inverting images to thelatent space of Stable Diffusion, and generates diverse novel training imagesby conditioning the generative model on noisy versions of these vectors. Weidentify three key components that allow our generated images to successfullysupplant the original dataset, leading to a 2-3x enhancement in samplecomplexity and a 6.5x decrease in sampling time. Moreover, our approachconsistently outperforms generic prompt-based steering methods and KNNretrieval baseline across a wide range of datasets. Additionally, wedemonstrate the compatibility of our approach with widely-used dataaugmentation techniques, as well as the reliability of the generated data insupporting various neural architectures and enhancing few-shot learning.",,arXiv,"['cs.cv', 'cs.lg']",, -809,paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation,"['Kuan-Hao Huang', 'Varun Iyer', 'I-Hung Hsu', 'Anoop Kumar', 'Kai-Wei Chang', 'Aram Galstyan']",http://arxiv.org/pdf/2305.16585v1.pdf,2023-05-26,," Paraphrase generation is a long-standing task in natural language processing(NLP). 
Supervised paraphrase generation models, which rely on human-annotatedparaphrase pairs, are cost-inefficient and hard to scale up. On the other hand,automatically annotated paraphrase pairs (e.g., by machine back-translation),usually suffer from the lack of syntactic diversity -- the generated paraphrasesentences are very similar to the source sentences in terms of syntax. In thiswork, we present ParaAMR, a large-scale syntactically diverse paraphrasedataset created by abstract meaning representation back-translation. Ourquantitative analysis, qualitative examples, and human evaluation demonstratethat the paraphrases of ParaAMR are syntactically more diverse compared toexisting large-scale paraphrase datasets while preserving good semanticsimilarity. In addition, we show that ParaAMR can be used to improve on threeNLP tasks: learning sentence embeddings, syntactically controlled paraphrasegeneration, and data augmentation for few-shot learning. Our results thusshowcase the potential of ParaAMR for improving various NLP applications.",,arXiv,['cs.cl'],, -810,adapting languageaudio models as fewshot audio learners,"['Jinhua Liang', 'Xubo Liu', 'Haohe Liu', 'Huy Phan', 'Emmanouil Benetos', 'Mark D. Plumbley', 'Wenwu Wang']",http://arxiv.org/pdf/2305.17719v1.pdf,2023-05-28,," We presented the Treff adapter, a training-efficient adapter for CLAP, toboost zero-shot classification performance by making use of a small set oflabelled data. Specifically, we designed CALM to retrieve the probabilitydistribution of text-audio clips over classes using a set of audio-label pairsand combined it with CLAP's zero-shot classification results. Furthermore, wedesigned a training-free version of the Treff adapter by using CALM as a cosinesimilarity measure. Experiments showed that the proposed Treff adapter iscomparable and even better than fully-supervised methods and adaptation methodsin low-shot and data-abundant scenarios. While the Treff adapter shows thatcombining large-scale pretraining and rapid learning of domain-specificknowledge is non-trivial for obtaining generic representations for few-shotlearning, it is still limited to audio classification tasks. In the future, wewill explore how to use audio-language models in diverse audio domains.",,arXiv,"['eess.as', 'cs.sd']",, -811,deeply coupled crossmodal prompt learning,"['Xuejing Liu', 'Wei Tang', 'Jinghui Lu', 'Rui Zhao', 'Zhaojun Guo', 'Fei Tan']",http://arxiv.org/pdf/2305.17903v2.pdf,2023-05-29,," Recent advancements in multimodal foundation models (e.g., CLIP) haveexcelled in zero-shot generalization. Prompt tuning involved in the knowledgetransfer from foundation models to downstream tasks has gained significantattention recently. Existing prompt-tuning methods in cross-modal learning,however, either solely focus on language branch, or learn vision-languageinteraction in a shallow mechanism. In this context, we propose a Deeplycoupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexiblyaccommodates the interplay between vision and language with a Cross-ModalPrompt Attention (CMPA) mechanism, which enables the mutual exchange ofrespective representation through a well-connected multi-head attention moduleprogressively and strongly. We then conduct comprehensive few-shot learningexperiments on 11 image classification datasets and analyze the robustness todomain shift as well. Thorough experimental analysis evidently demonstrates thesuperb few-shot generalization and compelling domain adaption capacity of awell-executed DCP. 
The code can be found at https://github.com/GingL/CMPA.",,arXiv,['cs.cv'],, -812,what does the failure to reason with respectively in zerofewshot settings tell us about language models,"['Ruixiang Cui', 'Seolhwa Lee', 'Daniel Hershcovich', 'Anders Søgaard']",http://arxiv.org/pdf/2305.19597v1.pdf,2023-05-31,," Humans can effortlessly understand the coordinate structure of sentences such as ""Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively"". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of ""respectively"". We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",,arXiv,"['cs.cl', 'cs.ai']",, -813,humanlike fewshot learning via bayesian reasoning over natural language,['Kevin Ellis'],http://arxiv.org/pdf/2306.02797v3.pdf,2023-06-05,," A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighed by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -814,few shot rationale generation using selftraining with dual teachers,"['Aditya Srikanth Veerubhotla', 'Lahari Poddar', 'Jun Yin', 'György Szarvas', 'Sharanya Eswaran']",http://arxiv.org/pdf/2306.03315v1.pdf,2023-06-05,," Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work we explore a self-training approach leveraging both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization using self-training and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR) which promotes explanations to be strongly conditioned on predicted labels.
Evaluation on three public datasetsdemonstrate that the proposed methods are effective in modeling task labels andgenerating faithful rationales.",,arXiv,"['cs.cl', 'cs.ai']",, -815,a new dataset and empirical study for sentence simplification in chinese,"['Shiping Yang', 'Renliang Sun', 'Xiaojun Wan']",http://arxiv.org/pdf/2306.04188v1.pdf,2023-06-07,," Sentence Simplification is a valuable technique that can benefit languagelearners and children a lot. However, current research focuses more on Englishsentence simplification. The development of Chinese sentence simplification isrelatively slow due to the lack of data. To alleviate this limitation, thispaper introduces CSS, a new dataset for assessing sentence simplification inChinese. We collect manual simplifications from human annotators and performdata analysis to show the difference between English and Chinese sentencesimplifications. Furthermore, we test several unsupervised and zero/few-shotlearning methods on CSS and analyze the automatic evaluation and humanevaluation results. In the end, we explore whether Large Language Models canserve as high-quality Chinese sentence simplification systems by evaluatingthem on CSS.",,arXiv,['cs.cl'],, -816,can ai moderate online communities,"['Henrik Axelsen', 'Johannes Rude Jensen', 'Sebastian Axelsen', 'Valdemar Licht', 'Omri Ross']",http://arxiv.org/pdf/2306.05122v1.pdf,2023-06-08,," The task of cultivating healthy communication in online communities becomesincreasingly urgent, as gaming and social media experiences becomeprogressively more immersive and life-like. We approach the challenge ofmoderating online communities by training student models using a large languagemodel (LLM). We use zero-shot learning models to distill and expand datasetsfollowed by a few-shot learning and a fine-tuning approach, leveragingopen-access generative pre-trained transformer models (GPT) from OpenAI. Ourpreliminary findings suggest, that when properly trained, LLMs can excel inidentifying actor intentions, moderating toxic comments, and rewarding positivecontributions. The student models perform above-expectation in non-contextualassignments such as identifying classically toxic behavior and performsufficiently on contextual assignments such as identifying positivecontributions to online discourse. Further, using open-access models likeOpenAI's GPT we experience a step-change in the development process for whathas historically been a complex modeling task. We contribute to the informationsystem (IS) discourse with a rapid development framework on the application ofgenerative AI in content online moderation and management of culture indecentralized, pseudonymous communities by providing a sample model suite ofindustrial-ready generative AI models based on open-access LLMs.",,arXiv,['cs.cy'],, -817,the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues,"['Adaeze Adigwe', 'Zheng Yuan']",http://arxiv.org/pdf/2306.05360v1.pdf,2023-06-08,," This paper presents the ADAIO team's system entry in the Building EducationalApplications (BEA) 2023 Shared Task on Generating AI Teacher Responses inEducational Dialogues. The task aims to assess the performance ofstate-of-the-art generative models as AI teachers in producing suitableresponses within a student-teacher dialogue. Our system comprises evaluatingvarious baseline models using OpenAI GPT-3 and designing diverse prompts toprompt the OpenAI models for teacher response generation. 
After the challenge,our system achieved second place by employing a few-shot prompt-based approachwith the OpenAI text-davinci-003 model. The results highlight the few-shotlearning capabilities of large-language models, particularly OpenAI's GPT-3, inthe role of AI teachers.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, -818,rethink the effectiveness of text data augmentation an empirical analysis,"['Zhengxiang Shi', 'Aldo Lipani']",http://arxiv.org/pdf/2306.07664v1.pdf,2023-06-13,," In recent years, language models (LMs) have made remarkable progress inadvancing the field of natural language processing (NLP). However, the impactof data augmentation (DA) techniques on the fine-tuning (FT) performance ofthese LMs has been a topic of ongoing debate. In this study, we evaluate theeffectiveness of three different FT methods in conjugation withback-translation across an array of 7 diverse NLP tasks, includingclassification and regression types, covering single-sentence and sentence-pairtasks. Contrary to prior assumptions that DA does not contribute to theenhancement of LMs' FT performance, our findings reveal that continuedpre-training on augmented data can effectively improve the FT performance ofthe downstream tasks. In the most favourable case, continued pre-trainingimproves the performance of FT by more than 10% in the few-shot learningsetting. Our finding highlights the potential of DA as a powerful tool forbolstering LMs' performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -819,neural finetuning search for fewshot learning,"['Panagiotis Eustratiadis', 'Łukasz Dudziak', 'Da Li', 'Timothy Hospedales']",http://arxiv.org/pdf/2306.09295v1.pdf,2023-06-15,," In few-shot recognition, a classifier that has been trained on one set ofclasses is required to rapidly adapt and generalize to a disjoint, novel set ofclasses. To that end, recent studies have shown the efficacy of fine-tuningwith carefully crafted adaptation architectures. However this raises thequestion of: How can one design the optimal adaptation strategy? In this paper,we study this question through the lens of neural architecture search (NAS).Given a pre-trained neural network, our algorithm discovers the optimalarrangement of adapters, which layers to keep frozen and which to fine-tune. Wedemonstrate the generality of our NAS method by applying it to both residualnetworks and vision transformers and report state-of-the-art performance onMeta-Dataset and Meta-Album.",,arXiv,"['cs.cv', 'cs.lg']",, -820,multilingual fewshot learning via language model retrieval,"['Genta Indra Winata', 'Liang-Kang Huang', 'Soumya Vadlamannati', 'Yash Chandarana']",http://arxiv.org/pdf/2306.10964v1.pdf,2023-06-19,," Transformer-based language models have achieved remarkable success infew-shot in-context learning and drawn a lot of research interest. However,these models' performance greatly depends on the choice of the example promptsand also has high variability depending on how samples are chosen. In thispaper, we conduct a comprehensive study of retrieving semantically similarfew-shot samples and using them as the context, as it helps the model decidethe correct label without any gradient update in the multilingual andcross-lingual settings. We evaluate the proposed method on five naturallanguage understanding datasets related to intent detection, questionclassification, sentiment analysis, and topic classification. 
The proposedmethod consistently outperforms random sampling in monolingual andcross-lingual tasks in non-English languages.",,arXiv,['cs.cl'],, -821,robut a systematic study of table qa robustness against humanannotated adversarial perturbations,"['Yilun Zhao', 'Chen Zhao', 'Linyong Nan', 'Zhenting Qi', 'Wenlin Zhang', 'Xiangru Tang', 'Boyu Mi', 'Dragomir Radev']",http://arxiv.org/pdf/2306.14321v1.pdf,2023-06-25,," Despite significant progress having been made in question answering ontabular data (Table QA), it's unclear whether, and to what extent existingTable QA models are robust to task-specific perturbations, e.g., replacing keyquestion entities or shuffling table columns. To systematically study therobustness of Table QA models, we propose a benchmark called RobuT, whichbuilds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) andincludes human-annotated adversarial perturbations in terms of table header,table content, and question. Our results indicate that both state-of-the-artTable QA models and large language models (e.g., GPT-3) with few-shot learningfalter in these adversarial sets. We propose to address this problem by usinglarge language models to generate adversarial examples to enhance training,which significantly improves the robustness of Table QA models. Our data andcode is publicly available at https://github.com/yilunzhao/RobuT.",,arXiv,"['cs.cl', 'cs.ai']",, -822,benchmarking large language model capabilities for conditional generation,"['Joshua Maynez', 'Priyanka Agrawal', 'Sebastian Gehrmann']",http://arxiv.org/pdf/2306.16793v1.pdf,2023-06-29,," Pre-trained large language models (PLMs) underlie most new developments innatural language processing. They have shifted the field fromapplication-specific model pipelines to a single model that is adapted to awide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongsidetechniques like few-shot learning, have additionally shifted the outputmodality to generation instead of classification or regression. Despite theirubiquitous use, the generation quality of language models is rarely evaluatedwhen these models are introduced. Additionally, it is unclear how existinggeneration tasks--while they can be used to compare systems at a highlevel--relate to the real world use cases for which people have been adoptingthem. In this work, we discuss how to adapt existing application-specificgeneration benchmarks to PLMs and provide an in-depth, empirical study of thelimitations and capabilities of PLMs in natural language generation tasks alongdimensions such as scale, architecture, input and output language. Our resultsshow that PLMs differ in their applicability to different data regimes andtheir generalization to multiple languages and inform which PLMs to use for agiven generation task setup. We share best practices to be taken intoconsideration when benchmarking generation capabilities during the developmentof upcoming PLMs.",,arXiv,['cs.cl'],, -823,on conditional and compositional language model differentiable prompting,"['Jonathan Pilault', 'Can Liu', 'Mohit Bansal', 'Markus Dreyer']",http://arxiv.org/pdf/2307.01446v1.pdf,2023-07-04,," Prompts have been shown to be an effective method to adapt a frozenPretrained Language Model (PLM) to perform well on downstream tasks. Promptscan be represented by a human-engineered word sequence or by a learnedcontinuous embedding. In this work, we investigate conditional andcompositional differentiable prompting. 
We propose a new model, PromptProduction System (PRopS), which learns to transform task instructions or inputmetadata, into continuous prompts that elicit task-specific outputs from thePLM. Our model uses a modular network structure based on our neural formulationof Production Systems, which allows the model to learn discrete rules -- neuralfunctions that learn to specialize in transforming particular prompt inputpatterns, making it suitable for compositional transfer learning and few-shotlearning. We present extensive empirical and theoretical analysis and show thatPRopS consistently surpasses other PLM adaptation techniques, and oftenimproves upon fully fine-tuned models, on compositional generalization tasks,controllable summarization and multilingual translation, while needing fewertrainable parameters.",,arXiv,"['cs.cl', 'cs.lg']",, -824,diverse retrievalaugmented incontext learning for dialogue state tracking,"['Brendan King', 'Jeffrey Flanigan']",http://arxiv.org/pdf/2307.01453v1.pdf,2023-07-04,," There has been significant interest in zero and few-shot learning fordialogue state tracking (DST) due to the high cost of collecting and annotatingtask-oriented dialogues. Recent work has demonstrated that in-context learningrequires very little data and zero parameter updates, and even outperformstrained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST,which advances the state of the art with three advancements to in-contextlearning for DST. First, we formulate DST as a Python programming task,explicitly modeling language coreference as variable reference in Python.Second, since in-context learning depends highly on the context examples, wepropose a method to retrieve a diverse set of relevant examples to improveperformance. Finally, we introduce a novel re-weighting method during decodingthat takes into account probabilities of competing surface forms, and producesa more accurate dialogue state prediction. We evaluate our approach usingMultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zeroand few-shot settings.",,arXiv,['cs.cl'],, -825,generating efficient training data via llmbased attribute manipulation,"['Letian Peng', 'Yuwei Zhang', 'Jingbo Shang']",http://arxiv.org/pdf/2307.07099v1.pdf,2023-07-14,," In this paper, we propose a novel method, Chain-of-Thoughts AttributeManipulation (CoTAM), to guide few-shot learning by carefully crafted data fromLarge Language Models (LLMs). The main idea is to create data with changes onlyin the attribute targeted by the task. Inspired by facial attributemanipulation, our approach generates label-switched data by leveraging LLMs tomanipulate task-specific attributes and reconstruct new sentences in acontrolled manner. Instead of conventional latent representation controlling,we implement chain-of-thoughts decomposition and reconstruction to adapt theprocedure to LLMs. Extensive results on text classification and other tasksverify the advantage of CoTAM over other LLM-based text generation methods withthe same number of training examples. 
Analysis visualizes the attributemanipulation effectiveness of CoTAM and presents the potential of LLM-guidedlearning with even less supervision.",,arXiv,['cs.cl'],, -826,overthinking the truth understanding how language models process false demonstrations,"['Danny Halawi', 'Jean-Stanislas Denain', 'Jacob Steinhardt']",http://arxiv.org/pdf/2307.09476v1.pdf,2023-07-18,," Modern language models can imitate complex patterns through few-shotlearning, enabling them to complete challenging tasks without fine-tuning.However, imitation can also lead models to reproduce inaccuracies or harmfulcontent if present in the context. We study harmful imitation through the lensof a model's internal representations, and identify two related phenomena:overthinking and false induction heads. The first phenomenon, overthinking,appears when we decode predictions from intermediate layers, given correct vs.incorrect few-shot demonstrations. At early layers, both demonstrations inducesimilar model behavior, but the behavior diverges sharply at some ""criticallayer"", after which the accuracy given incorrect demonstrations progressivelydecreases. The second phenomenon, false induction heads, are a possiblemechanistic cause of overthinking: these are heads in late layers that attendto and copy false information from previous demonstrations, and whose ablationreduces overthinking. Beyond scientific understanding, our results suggest thatstudying intermediate model computations could be a promising avenue forunderstanding and guarding against harmful model behaviors.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, -827,does correction remain a problem for large language models,"['Xiaowu Zhang', 'Xiaotian Zhang', 'Cheng Yang', 'Hang Yan', 'Xipeng Qiu']",http://arxiv.org/pdf/2308.01776v2.pdf,2023-08-03,," As large language models, such as GPT, continue to advance the capabilitiesof natural language processing (NLP), the question arises: does the problem ofcorrection still persist? This paper investigates the role of correction in thecontext of large language models by conducting two experiments. The firstexperiment focuses on correction as a standalone task, employing few-shotlearning techniques with GPT-like models for error correction. The secondexperiment explores the notion of correction as a preparatory task for otherNLP tasks, examining whether large language models can tolerate and performadequately on texts containing certain levels of noise or errors. By addressingthese experiments, we aim to shed light on the significance of correction inthe era of large language models and its implications for various NLPapplications.",,arXiv,['cs.cl'],, -828,thespian multicharacter text roleplaying game agents,"['Christopher Cui', 'Xiangyu Peng', 'Mark Riedl']",http://arxiv.org/pdf/2308.01872v1.pdf,2023-08-03,," Text-adventure games and text role-playing games are grand challenges forreinforcement learning game playing agents. Text role-playing games areopen-ended environments where an agent must faithfully play a particularcharacter. We consider the distinction between characters and actors, where anactor agent has the ability to play multiple characters. We present a frameworkwe call a thespian agent that can learn to emulate multiple characters alongwith a soft prompt that can be used to direct it as to which character to playat any time. We further describe an attention mechanism that allows the agentto learn new characters that are based on previously learned characters in afew-shot fashion. 
We show that our agent outperforms the state of the art agentframework in multi-character learning and few-shot learning.",,arXiv,"['cs.ai', 'cs.cl']",, -829,metalearning in healthcare a survey,"['Alireza Rafiei', 'Ronald Moore', 'Sina Jahromi', 'Farshid Hajati', 'Rishikesan Kamaleswaran']",http://arxiv.org/pdf/2308.02877v1.pdf,2023-08-05,," As a subset of machine learning, meta-learning, or learning to learn, aims atimproving the model's capabilities by employing prior knowledge and experience.A meta-learning paradigm can appropriately tackle the conventional challengesof traditional learning approaches, such as insufficient number of samples,domain shifts, and generalization. These unique characteristics positionmeta-learning as a suitable choice for developing influential solutions invarious healthcare contexts, where the available data is often insufficient,and the data collection methodologies are different. This survey discussesmeta-learning broad applications in the healthcare domain to provide insightinto how and where it can address critical healthcare challenges. We firstdescribe the theoretical foundations and pivotal methods of meta-learning. Wethen divide the employed meta-learning approaches in the healthcare domain intotwo main categories of multi/single-task learning and many/few-shot learningand survey the studies. Finally, we highlight the current challenges inmeta-learning research, discuss the potential solutions and provide futureperspectives on meta-learning in healthcare.",,arXiv,"['cs.lg', 'cs.ai']",, -830,autoconv automatically generating informationseeking conversations with large language models,"['Siheng Li', 'Cheng Yang', 'Yichun Yin', 'Xinyu Zhu', 'Zesen Cheng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu', 'Yujiu Yang']",http://arxiv.org/pdf/2308.06507v1.pdf,2023-08-12,," Information-seeking conversation, which aims to help users gather informationthrough conversation, has achieved great progress in recent years. However, theresearch is still stymied by the scarcity of training data. To alleviate thisproblem, we propose AutoConv for synthetic conversation generation, which takesadvantage of the few-shot learning ability and generation capacity of largelanguage models (LLM). Specifically, we formulate the conversation generationproblem as a language modeling task, then finetune an LLM with a few humanconversations to capture the characteristics of the information-seeking processand use it for generating synthetic conversations with high quality.Experimental results on two frequently-used datasets verify that AutoConv hassubstantial improvements over strong baselines and alleviates the dependence onhuman annotation. In addition, we also provide several analysis studies topromote future research.",,arXiv,['cs.cl'],, -831,distilled feature fields enable fewshot languageguided manipulation,"['William Shen', 'Ge Yang', 'Alan Yu', 'Jansen Wong', 'Leslie Pack Kaelbling', 'Phillip Isola']",http://arxiv.org/pdf/2308.07931v1.pdf,2023-07-27,," Self-supervised and language-supervised image models contain rich knowledgeof the world that is important for generalization. Many robotic tasks, however,require a detailed understanding of 3D geometry, which is often lacking in 2Dimage features. This work bridges this 2D-to-3D gap for robotic manipulation byleveraging distilled feature fields to combine accurate 3D geometry with richsemantics from 2D foundation models. 
We present a few-shot learning method for6-DOF grasping and placing that harnesses these strong spatial and semanticpriors to achieve in-the-wild generalization to unseen objects. Using featuresdistilled from a vision-language model, CLIP, we present a way to designatenovel objects for manipulation via free-text natural language, and demonstrateits ability to generalize to unseen expressions and novel categories ofobjects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",, -832,refashioning emotion recognition modelling the advent of generalised large models,"['Zixing Zhang', 'Liyizhe Peng', 'Tao Pang', 'Jing Han', 'Huan Zhao', 'Bjorn W. Schuller']",http://arxiv.org/pdf/2308.11578v1.pdf,2023-08-21,," After the inception of emotion recognition or affective computing, it hasincreasingly become an active research topic due to its broad applications.Over the past couple of decades, emotion recognition models have graduallymigrated from statistically shallow models to neural network-based deep models,which can significantly boost the performance of emotion recognition models andconsistently achieve the best results on different benchmarks. Therefore, inrecent years, deep models have always been considered the first option foremotion recognition. However, the debut of large language models (LLMs), suchas ChatGPT, has remarkably astonished the world due to their emergedcapabilities of zero/few-shot learning, in-context learning, chain-of-thought,and others that are never shown in previous deep models. In the present paper,we comprehensively investigate how the LLMs perform in emotion recognition interms of diverse aspects, including in-context learning, few-short learning,accuracy, generalisation, and explanation. Moreover, we offer some insights andpose other potential challenges, hoping to ignite broader discussions aboutenhancing emotion recognition in the new era of advanced and generalised largemodels.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -833,gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles,"['Georgi Pachov', 'Dimitar Dimitrov', 'Ivan Koychev', 'Preslav Nakov']",http://arxiv.org/pdf/2309.06844v1.pdf,2023-09-13,," The wide-spread use of social networks has given rise to subjective,misleading, and even false information on the Internet. Thus, subjectivitydetection can play an important role in ensuring the objectiveness and thequality of a piece of information. This paper presents the solution built bythe Gpachov team for the CLEF-2023 CheckThat! lab Task~2 on subjectivitydetection. Three different research directions are explored. The first one isbased on fine-tuning a sentence embeddings encoder model and dimensionalityreduction. The second one explores a sample-efficient few-shot learning model.The third one evaluates fine-tuning a multilingual transformer on an altereddataset, using data from multiple languages. Finally, the three approaches arecombined in a simple majority voting ensemble, resulting in 0.77 macro F1 onthe test set and achieving 2nd place on the English subtask.",,arXiv,"['cs.cl', 'cs.ai', 'cs.mm']",, -834,"an empathybased sandbox approach to bridge attitudes, goals, knowledge, and behaviors in the privacy paradox","['Chaoran Chen', 'Weijun Li', 'Wenxin Song', 'Yanfang Ye', 'Yaxing Yao', 'Toby Jia-jun Li']",http://arxiv.org/pdf/2309.14510v1.pdf,2023-09-25,," The ""privacy paradox"" describes the discrepancy between users' privacyattitudes and their actual behaviors. 
Mitigating this discrepancy requires solutions that account for both system opaqueness and users' hesitations in testing different privacy settings due to fears of unintended data exposure. We introduce an empathy-based approach that allows users to experience how privacy behaviors may alter system outcomes in a risk-free sandbox environment from the perspective of artificially generated personas. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models using few-shot learning, contextualization, and chain of thoughts. Our empirical studies demonstrated the adequate quality of generated personas and highlighted the changes in privacy-related applications (e.g., online advertising) caused by different personas. Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. We offered design implications for downstream applications in improving user privacy literacy and promoting behavior changes.",,arXiv,['cs.hc'],, -835,small visual language models can also be openended fewshot learners,"['Mohammad Mahdi Derakhshani', 'Ivona Najdenkoska', 'Cees G. M. Snoek', 'Marcel Worring', 'Yuki M. Asano']",http://arxiv.org/pdf/2310.00500v1.pdf,2023-09-30,," We present Self-Context Adaptation (SeCAt), a self-supervised approach that unlocks open-ended few-shot abilities of small visual language models. Our proposed adaptation algorithm explicitly learns from symbolic, yet self-supervised training tasks. Specifically, our approach imitates image captions in a self-supervised way based on clustering a large pool of images followed by assigning semantically-unrelated names to clusters. By doing so, we construct the `self-context', a training signal consisting of interleaved sequences of image and pseudo-caption pairs and a query image for which the model is trained to produce the right pseudo-caption. We demonstrate the performance and flexibility of SeCAt on several multimodal few-shot datasets, spanning various granularities. By using models with approximately 1B parameters we outperform the few-shot abilities of much larger models, such as Frozen and FROMAGe. SeCAt opens new possibilities for research in open-ended few-shot learning that otherwise requires access to large or proprietary models.",,arXiv,['cs.cv'],, -836,injecting a structural inductive bias into a seq2seq model by simulation,"['Matthias Lindemann', 'Alexander Koller', 'Ivan Titov']",http://arxiv.org/pdf/2310.00796v1.pdf,2023-10-01,," Strong inductive biases enable learning from little data and help generalization outside of the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the training distribution, e.g. with extrapolating to longer inputs, even when pre-trained on large amounts of text. We show how a structural inductive bias can be injected into a seq2seq model by pre-training it to simulate structural transformations on synthetic data. Specifically, we inject an inductive bias towards Finite State Transducers (FSTs) into a Transformer by pre-training it to simulate FSTs given their descriptions.
Our experiments show that our method imparts the desired inductive bias, resulting in improved systematic generalization and better few-shot learning for FST-like tasks.",,arXiv,['cs.cl'],, -837,tram benchmarking temporal reasoning for large language models,"['Yuqing Wang', 'Yun Zhao']",http://arxiv.org/pdf/2310.00835v2.pdf,2023-10-02,," Reasoning about time is essential for understanding the nuances of events described in natural language. Previous research on this topic has been limited in scope, characterized by a lack of standardized benchmarks that would allow for consistent evaluations across different studies. In this paper, we introduce TRAM, a temporal reasoning benchmark composed of ten datasets, encompassing various temporal aspects of events such as order, arithmetic, frequency, and duration, designed to facilitate a comprehensive evaluation of the temporal reasoning capabilities of large language models (LLMs). We conduct an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based models to establish the baseline evaluations. Our findings indicate that these models still trail human performance in temporal reasoning tasks. It is our aspiration that TRAM will spur further progress in enhancing the temporal reasoning abilities of LLMs.",,arXiv,['cs.cl'],, -838,procedural text mining with large language models,"['Anisa Rula', ""Jennifer D'Souza""]",http://arxiv.org/pdf/2310.03376v1.pdf,2023-10-05,," Recent advancements in the field of Natural Language Processing, particularly the development of large-scale language models that are pretrained on vast amounts of knowledge, are creating novel opportunities within the realm of Knowledge Engineering. In this paper, we investigate the usage of large language models (LLMs) in both zero-shot and in-context learning settings to tackle the problem of extracting procedures from unstructured PDF text in an incremental question-answering fashion. In particular, we leverage the current state-of-the-art GPT-4 (Generative Pre-trained Transformer 4) model, accompanied by two variations of in-context learning that involve an ontology with definitions of procedures and steps and a limited number of samples of few-shot learning. The findings highlight both the promise of this approach and the value of the in-context learning customisations. These modifications have the potential to significantly address the challenge of obtaining sufficient training data, a hurdle often encountered in deep learning-based Natural Language Processing techniques for procedure extraction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.it', 'math.it']",, -839,prototypeformer learning to explore prototype relationships for fewshot image classification,"['Feihong He', 'Gang Li', 'Lingyu Si', 'Leilei Yan', 'Fanzhang Li', 'Fuchun Sun']",http://arxiv.org/pdf/2310.03517v1.pdf,2023-10-05,," Few-shot image classification has received considerable attention for addressing the challenge of poor classification performance with limited samples in novel classes. However, numerous studies have employed sophisticated learning strategies and diversified feature extraction methods to address this issue. In this paper, we propose our method called PrototypeFormer, which aims to significantly advance traditional few-shot image classification approaches by exploring prototype relationships.
Specifically, we utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification. Additionally, during the model training process, we propose a contrastive learning-based optimization approach to optimize prototype features in few-shot learning scenarios. Despite its simplicity, the method performs remarkably well, with no bells and whistles. We have experimented with our approach on several popular few-shot image classification benchmark datasets, which shows that our method outperforms all current state-of-the-art methods. In particular, our method achieves 97.07% and 90.88% on 5-way 5-shot and 5-way 1-shot tasks of miniImageNet, which surpasses the state-of-the-art results with accuracy of 7.27% and 8.72%, respectively. The code will be released later.",,arXiv,['cs.cv'],, -840,a holistic evaluation of piano sound quality,"['Monan Zhou', 'Shangda Wu', 'Shaohua Ji', 'Zijin Li', 'Wei Li']",http://arxiv.org/pdf/2310.04722v1.pdf,2023-10-07,," This paper aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos. To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-training models of Convolutional Neural Networks (CNN). To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned CNN pre-trained backbone achieves a high accuracy of 98.3\% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance. To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.",,arXiv,"['cs.sd', 'cs.ai', 'eess.as']",, -841,argumentative stance prediction an exploratory study on multimodality and fewshot learning,"['Arushi Sharma', 'Abhibha Gupta', 'Maneesh Bilalpur']",http://arxiv.org/pdf/2310.07093v1.pdf,2023-10-11,," To advance argumentative stance prediction as a multimodal problem, the First Shared Task in Multimodal Argument Mining hosted stance prediction in crucial social topics of gun control and abortion. Our exploratory study attempts to evaluate the necessity of images for stance prediction in tweets and compare out-of-the-box text-based large-language models (LLM) in few-shot settings against fine-tuned unimodal and multimodal models. Our work suggests an ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms both the multimodal (0.677 F1-score) and text-based few-shot prediction using a recent state-of-the-art LLM (0.550 F1-score).
In addition to the differences in performance, our findings suggest that the multimodal models tend to perform better when image content is summarized as natural language over their native pixel structure and, using in-context examples improves few-shot performance of LLMs.",,arXiv,['cs.cl'],, -842,llmaugmented preference learning from natural language,"['Inwon Kang', 'Sikai Ruan', 'Tyler Ho', 'Jui-Chien Lin', 'Farhad Mohsin', 'Oshani Seneviratne', 'Lirong Xia']",http://arxiv.org/pdf/2310.08523v1.pdf,2023-10-12,," Finding preferences expressed in natural language is an important but challenging task. State-of-the-art (SotA) methods leverage transformer-based models such as BERT, RoBERTa, etc. and graph neural architectures such as graph attention networks. Since Large Language Models (LLMs) are equipped to deal with larger context lengths and have much larger model sizes than the transformer-based model, we investigate their ability to classify comparative text directly. This work aims to serve as a first step towards using LLMs for the CPC task. We design and conduct a set of experiments that format the classification task into an input prompt for the LLM and a methodology to get a fixed-format response that can be automatically evaluated. Comparing performances with existing methods, we see that pre-trained LLMs are able to outperform the previous SotA models with no fine-tuning involved. Our results show that the LLMs can consistently outperform the SotA when the target text is large -- i.e. composed of multiple sentences --, and are still comparable to the SotA performance in shorter text. We also find that few-shot learning yields better performance than zero-shot learning.",,arXiv,['cs.cl'],, -843,incontext fewshot relation extraction via pretrained language models,"['Yilmazcan Ozyurt', 'Stefan Feuerriegel', 'Ce Zhang']",http://arxiv.org/pdf/2310.11085v1.pdf,2023-10-17,," Relation extraction aims at inferring structured human knowledge from textual documents. State-of-the-art methods based on language models commonly have two limitations: (1) they require named entities to be either given as input or infer them, which introduces additional noise, and (2) they require human annotations of documents. As a remedy, we present a novel framework for in-context few-shot relation extraction via pre-trained language models. To the best of our knowledge, we are the first to reformulate the relation extraction task as a tailored in-context few-shot learning paradigm. Thereby, we achieve crucial benefits in that we eliminate the need for both named entity recognition and human annotation of documents. Unlike existing methods based on fine-tuning, our framework is flexible in that it can be easily updated for a new set of relations without re-training. We evaluate our framework using DocRED, the largest publicly available dataset for document-level relation extraction, and demonstrate that our framework achieves state-of-the-art performance. Finally, our framework allows us to identify missing annotations, and we thus show that our framework actually performs much better than the original labels from the development set of DocRED.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -844,group preference optimization fewshot alignment of large language models,"['Siyan Zhao', 'John Dang', 'Aditya Grover']",http://arxiv.org/pdf/2310.11523v1.pdf,2023-10-17,," Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups.
Existing alignment algorithms can be expensive to align for each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations. For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, -845,clara multilingual contrastive learning for audio representation acquisition,"['Kari A Noriy', 'Xiaosong Yang', 'Marcin Budka', 'Jian Jun Zhang']",http://arxiv.org/pdf/2310.11830v2.pdf,2023-10-18,," Multilingual speech processing requires understanding emotions, a task made difficult by limited labelled data. CLARA, minimizes reliance on labelled data, enhancing generalization across languages. It excels at fostering shared representations, aiding cross-lingual transfer of speech and emotions, even with little data. Our approach adeptly captures emotional nuances in speech, overcoming subjective assessment issues. Using a large multilingual audio corpus and self-supervised learning, CLARA develops speech representations enriched with emotions, advancing emotion-aware multilingual speech processing. Our method expands the data range using data augmentation, textual embedding for visual understanding, and transfers knowledge from high- to low-resource languages. CLARA demonstrates excellent performance in emotion recognition, language comprehension, and audio benchmarks, excelling in zero-shot and few-shot learning. It adapts to low-resource languages, marking progress in multilingual speech representation learning.",,arXiv,"['cs.sd', 'cs.lg', 'cs.mm', 'eess.as']",, -846,a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation,"['Giuseppe Attanasio', 'Flor Miriam Plaza-del-Arco', 'Debora Nozza', 'Anne Lauscher']",http://arxiv.org/pdf/2310.12127v2.pdf,2023-10-18,," Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices. In this work, we address this gap by investigating whether and to what extent such models exhibit gender bias in machine translation and how we can mitigate it. Concretely, we compute established gender bias metrics on the WinoMT corpus from English to German and Spanish. We discover that IFT models default to male-inflected translations, even disregarding female occupational stereotypes.
Next, using interpretability methods, we unveil that models systematically overlook the pronoun indicating the gender of a target occupation in misgendered translations. Finally, based on this finding, we propose an easy-to-implement and effective bias mitigation solution based on few-shot learning that leads to significantly fairer translations.",,arXiv,"['cs.cl', 'cs.lg']",, -847,an exploration of incontext learning for speech language model,"['Ming-Hao Hsu', 'Kai-Wei Chang', 'Shang-Wen Li', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.12477v1.pdf,2023-10-19,," Ever since the development of GPT-3 in the natural language processing (NLP) field, in-context learning (ICL) has played an important role in utilizing large language models (LLMs). By presenting the LM utterance-label demonstrations at the input, the LM can accomplish few-shot learning without relying on gradient descent or requiring explicit modification of its parameters. This enables the LM to learn and adapt in a black-box manner. Despite the success of ICL in NLP, little work is exploring the possibility of ICL in speech processing. This study proposes the first exploration of ICL with a speech LM without text supervision. We first show that the current speech LM does not have the ICL capability. With the proposed warmup training, the speech LM can, therefore, perform ICL on unseen tasks. In this work, we verify the feasibility of ICL for speech LM on speech classification tasks.",,arXiv,"['eess.as', 'cs.ai', 'cs.cl']",, -848,large language models are biased to overestimate profoundness,"['Eugenio Herrera-Berg', 'Tomás Vergara Browne', 'Pablo León-Villagrá', 'Marc-Lluís Vives', 'Cristian Buc Calderon']",http://arxiv.org/pdf/2310.14422v1.pdf,2023-10-22,," Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess similar reasoning abilities to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statements and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLMs ratings closer to humans. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), inducing an increase in the bias to overestimate the profoundness of statements.",,arXiv,['cs.cl'],, -849,improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning,"['Ananth Balashankar', 'Xiao Ma', 'Aradhana Sinha', 'Ahmad Beirami', 'Yao Qin', 'Jilin Chen', 'Alex Beutel']",http://arxiv.org/pdf/2310.16959v1.pdf,2023-10-25,," As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well. If we have only observed a few examples of violations of a new safety rule, how can we build a classifier to detect violations? In this paper, we study the novel setting of domain-generalized few-shot learning for LLM-based text safety classifiers.
Unlike prior few-shot work, these new safety issues can be hard to uncover and we do not get to choose the few examples. We demonstrate that existing few-shot techniques do not perform well in this setting, and rather we propose to do parameter-efficient fine-tuning (PEFT) combined with augmenting training data based on similar examples in prior existing rules. We empirically show that our approach of similarity-based data-augmentation + prompt-tuning (DAPT) consistently outperforms baselines that either do not rely on data augmentation or on PEFT by 7-17% F1 score in the Social Chemistry moral judgement and 9-13% AUC in the Toxicity detection tasks, even when the new rule is loosely correlated with existing ones.",,arXiv,['cs.lg'],, -850,retrofitting lightweight language models for emotions using supervised contrastive learning,"['Sapan Shah', 'Sreedhar Reddy', 'Pushpak Bhattacharyya']",http://arxiv.org/pdf/2310.18930v1.pdf,2023-10-29,," We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the fragments with different emotion content are pushed apart. While doing so, it also ensures that the linguistic knowledge already present in PLMs is not inadvertently perturbed. The language models retrofitted by our method, i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as evaluated through different clustering and retrieval metrics. For the downstream tasks on sentiment analysis and sarcasm detection, they perform better than their pre-trained counterparts (about 1% improvement in F1-score) and other existing approaches. Additionally, a more significant boost in performance is observed for the retrofitted models over pre-trained ones in few-shot learning setting.",,arXiv,['cs.cl'],, -851,nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection,"['Yunze Xiao', 'Firoj Alam']",http://arxiv.org/pdf/2311.03184v1.pdf,2023-11-06,," The spread of disinformation and propagandistic content poses a threat to societal harmony, undermining informed decision-making and trust in reliable sources. Online platforms often serve as breeding grounds for such content, and malicious actors exploit the vulnerabilities of audiences to shape public opinion. Although there have been research efforts aimed at the automatic identification of disinformation and propaganda in social media content, there remain challenges in terms of performance. The ArAIEval shared task aims to further research on these particular issues within the context of the Arabic language. In this paper, we discuss our participation in these shared tasks. We competed in subtasks 1A and 2A, where our submitted system secured positions 9th and 10th, respectively. Our experiments consist of fine-tuning transformer models and using zero- and few-shot learning with GPT-4.",,arXiv,"['cs.cl', 'cs.ai', 'cs.si', '68t50', 'f.2.2; i.2.7']",, -852,multilingual mathematical autoformalization,"['Albert Q. Jiang', 'Wenda Li', 'Mateja Jamnik']",http://arxiv.org/pdf/2311.03755v2.pdf,2023-11-07,," Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence.
Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models. But these methods suffer from data scarcity and formal language acquisition difficulty. In this work, we create $\texttt{MMA}$, a large, flexible, multilingual, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones. Experiments show that language models fine-tuned on $\texttt{MMA}$ produce $16-18\%$ of statements acceptable with minimal corrections on the $\texttt{miniF2F}$ and $\texttt{ProofNet}$ benchmarks, up from $0\%$ with the base model. We demonstrate that fine-tuning on multilingual formal data results in more capable autoformalization models even when deployed on monolingual tasks.",,arXiv,"['cs.cl', 'cs.lg']",, -853,braininspired globallocal learning incorporated with neuromorphic computing,"['Yujie Wu', 'Rong Zhao', 'Jun Zhu', 'Feng Chen', 'Mingkun Xu', 'Guoqi Li', 'Sen Song', 'Lei Deng', 'Guanrui Wang', 'Hao Zheng', 'Jing Pei', 'Youhui Zhang', 'Mingguo Zhao', 'Luping Shi']",http://arxiv.org/pdf/2006.03226v3.pdf,2020-06-05,," Two main routes of learning methods exist at present including error-driven global learning and neuroscience-oriented local learning. Integrating them into one network may provide complementary learning capabilities for versatile learning scenarios. At the same time, neuromorphic computing holds great promise, but still needs plenty of useful algorithms and algorithm-hardware co-designs for exploiting the advantages. Here, we report a neuromorphic hybrid learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model incorporating neuronal dynamics and synaptic plasticity. It can meta-learn local plasticity and receive top-down supervision information for multiscale synergic learning. We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors. It achieves significantly higher performance than single-learning methods, and shows promise in empowering neuromorphic applications revolution. We further implemented the hybrid model in the Tianjic neuromorphic platform by exploiting algorithm-hardware co-designs and proved that the model can fully utilize neuromorphic many-core architecture to develop hybrid computation paradigm.",,arXiv,"['cs.ne', 'cs.ai', 'q-bio.nc']",, -854,what makes good incontext examples for gpt$3$,"['Jiachang Liu', 'Dinghan Shen', 'Yizhe Zhang', 'Bill Dolan', 'Lawrence Carin', 'Weizhu Chen']",http://arxiv.org/pdf/2101.06804v1.pdf,2021-01-17,," GPT-$3$ has attracted lots of attention due to its superior performance across a wide range of NLP tasks, especially with its powerful and versatile in-context few-shot learning ability. Despite its success, we found that the empirical results of GPT-$3$ depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically-similar to a test sample to formulate its corresponding prompt.
Intuitively, the in-context examples selected with such a strategy may serve as more informative inputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random baseline. Moreover, it is observed that the sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (41.9% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset). We hope our investigation could help understand the behaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhance their few-shot capabilities.",,arXiv,['cs.cl'],, -855,robust retrieval augmented generation for zeroshot slot filling,"['Michael Glass', 'Gaetano Rossiello', 'Md Faisal Mahbub Chowdhury', 'Alfio Gliozzo']",http://arxiv.org/pdf/2108.13934v2.pdf,2021-08-31,," Automatically inducing high quality knowledge graphs from a given collection of documents still remains a challenging problem in AI. One way to make headway for this problem is through advancements in a related task known as slot filling. In this task, given an entity query in form of [Entity, Slot, ?], a system is asked to fill the slot by generating or extracting the missing value exploiting evidence extracted from relevant passage(s) in the given document collection. The recent works in the field try to solve this task in an end-to-end fashion using retrieval-based language models. In this paper, we present a novel approach to zero-shot slot filling that extends dense passage retrieval with hard negatives and robust training procedures for retrieval augmented generation models. Our model reports large improvements on both T-REx and zsRE slot filling datasets, improving both passage retrieval and slot value generation, and ranking at the top-1 position in the KILT leaderboard. Moreover, we demonstrate the robustness of our system showing its domain adaptation capability on a new variant of the TACRED dataset for slot filling, through a combination of zero/few-shot learning. We release the source code and pre-trained models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",, -856,dataefficient goaloriented conversation with dialogue knowledge transfer networks,"['Igor Shalyminov', 'Sungjin Lee', 'Arash Eshghi', 'Oliver Lemon']",http://arxiv.org/pdf/1910.01302v1.pdf,2019-10-03,," Goal-oriented dialogue systems are now being widely adopted in industry where it is of key importance to maintain a rapid prototyping cycle for new products and domains. Data-driven dialogue system development has to be adapted to meet this requirement --- therefore, reducing the amount of data and annotations necessary for training such systems is a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet), a state-of-the-art approach to goal-oriented dialogue generation which only uses a few example dialogues (i.e. few-shot learning), none of which has to be annotated. We achieve this by performing a 2-stage training. Firstly, we perform unsupervised dialogue representation pre-training on a large source of goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at the transfer stage, we train DiKTNet using this representation together with 2 other textual knowledge sources with different levels of generality: ELMo encoder and the main dataset's source domains.
Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate our model on it in terms of BLEU and Entity F1 scores, and show that our approach significantly and consistently improves upon a series of baseline models as well as over the previous state-of-the-art dialogue generation model, ZSDG. The improvement upon the latter --- up to 10% in Entity F1 and the average of 3% in BLEU score --- is achieved using only the equivalent of 10% of ZSDG's in-domain training data.",,arXiv,"['cs.cl', 'i.2.7']",, -857,metalearning with dynamicmemorybased prototypical network for fewshot event detection,"['Shumin Deng', 'Ningyu Zhang', 'Jiaojian Kang', 'Yichi Zhang', 'Wei Zhang', 'Huajun Chen']",http://arxiv.org/pdf/1910.11621v2.pdf,2019-10-25,," Event detection (ED), a sub-task of event extraction, involves identifying triggers and categorizing event mentions. Existing methods primarily rely upon supervised learning and require large-scale labeled event datasets which are unfortunately not readily available in many real-life applications. In this paper, we consider and reformulate the ED task with limited labeled data as a Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical Network (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learn better prototypes for event types, but also produce more robust sentence encodings for event mentions. Differing from vanilla prototypical networks simply computing event prototypes by averaging, which only consume event mentions once, our model is more robust and is capable of distilling contextual information from event mentions for multiple times due to the multi-hop mechanism of DMNs. The experiments show that DMB-PN not only deals with sample scarcity better than a series of baseline models but also performs more robustly when the variety of event types is relatively large and the instance quantity is extremely small.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -858,amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning,"['Sadaf Gull', 'Fayyaz Minhas']",http://arxiv.org/pdf/1911.06106v1.pdf,2019-10-28,," The evolution of drug-resistant microbial species is one of the major challenges to global health. The development of new antimicrobial treatments such as antimicrobial peptides needs to be accelerated to combat this threat. However, the discovery of novel antimicrobial peptides is hampered by low-throughput biochemical assays. Computational techniques can be used for rapid screening of promising antimicrobial peptide candidates prior to testing in the wet lab. The vast majority of existing antimicrobial peptide predictors are non-targeted in nature, i.e., they can predict whether a given peptide sequence is antimicrobial, but they are unable to predict whether the sequence can target a particular microbial species. In this work, we have developed a targeted antimicrobial peptide activity predictor that can predict whether a peptide is effective against a given microbial species or not. This has been made possible through zero-shot and few-shot machine learning. The proposed predictor called AMP0 takes in the peptide amino acid sequence and any N/C-termini modifications together with the genomic sequence of a target microbial species to generate targeted predictions. It is important to note that the proposed method can generate predictions for species that are not part of its training set.
The accuracy of predictions for novel test species can be further improved by providing a few example peptides for that species. Our computational cross-validation results show that the proposed scheme is particularly effective for targeted antimicrobial prediction in comparison to existing approaches and can be used for screening potential antimicrobial peptides in a targeted manner especially for cases in which the number of training examples is small. The webserver of the method is available at http://ampzero.pythonanywhere.com.",,arXiv,"['q-bio.bm', 'cs.lg', 'stat.ml']",, -859,direct multimodal fewshot learning of speech and images,"['Leanne Nortje', 'Herman Kamper']",http://arxiv.org/pdf/2012.05680v2.pdf,2020-12-10,," We propose direct multimodal few-shot models that learn a shared embedding space of spoken words and images from only a few paired examples. Imagine an agent is shown an image along with a spoken word describing the object in the picture, e.g. pen, book and eraser. After observing a few paired examples of each class, the model is asked to identify the ""book"" in a set of unseen pictures. Previous work used a two-step indirect approach relying on learned unimodal representations: speech-speech and image-image comparisons are performed across the support set of given speech-image pairs. We propose two direct models which instead learn a single multimodal space where inputs from different modalities are directly comparable: a multimodal triplet network (MTriplet) and a multimodal correspondence autoencoder (MCAE). To train these direct models, we mine speech-image pairs: the support set is used to pair up unlabelled in-domain speech and images. In a speech-to-image digit matching task, direct models outperform indirect models, with the MTriplet achieving the best multimodal five-shot accuracy. We show that the improvements are due to the combination of unsupervised and transfer learning in the direct models, and the absence of two-step compounding errors.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, -860,spirit distillation precise realtime semantic segmentation of road scenes with insufficient data,"['Zhiyuan Wu', 'Yu Jiang', 'Chupeng Cui', 'Zongmin Yang', 'Xinhui Xue', 'Hong Qi']",http://arxiv.org/pdf/2103.13733v2.pdf,2021-03-25,," Semantic segmentation of road scenes is one of the key technologies for realizing autonomous driving scene perception, and the effectiveness of deep Convolutional Neural Networks (CNNs) for this task has been demonstrated. State-of-art CNNs for semantic segmentation suffer from excessive computations as well as large-scale training data requirement. Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose a new knowledge distillation method for cross-domain knowledge transference and efficient data-insufficient network training, named Spirit Distillation (SD), which allow the student network to mimic the teacher network to extract general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes. Then, in order to further alleviate the trouble of insufficient data and improve the robustness of the student, an Enhanced Spirit Distillation (ESD) method is proposed, which commits to exploit a more comprehensive general features extraction capability by considering images from both the target and the proximity domains as input.
To our knowledge, this paper is a pioneering work on the application of knowledge distillation to few-shot learning. Persuasive experiments conducted on Cityscapes semantic segmentation with the prior knowledge transferred from COCO2017 and KITTI demonstrate that our methods can train a better student network (mIOU and high-precision accuracy boost by 1.4% and 8.2% respectively, with 78.2% segmentation variance) with only 41.8% FLOPs (see Fig. 1).",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, -861,modelling latent translations for crosslingual transfer,"['Edoardo Maria Ponti', 'Julia Kreutzer', 'Ivan Vulić', 'Siva Reddy']",http://arxiv.org/pdf/2107.11353v1.pdf,2021-07-23,," While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense reasoning, paraphrase identification, and natural language inference. We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, which are even more prominent for low-resource languages (e.g., Haitian Creole). Finally, we carry out in-depth analyses comparing different underlying NMT models and assessing the impact of alternative translations on the downstream performance.",,arXiv,['cs.cl'],, -862,prototransformer a metalearning approach to providing student feedback,"['Mike Wu', 'Noah Goodman', 'Chris Piech', 'Chelsea Finn']",http://arxiv.org/pdf/2107.14035v2.pdf,2021-07-23,," High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming question from just a few examples annotated by instructors. Because data for meta-training is limited, we propose a number of amendments to the typical few-shot learning framework, including task augmentation to create synthetic tasks, and additional side information to build stronger priors about each task. These additions are combined with a transformer architecture to embed discrete sequences (e.g. code) to a prototypical representation of a feedback class label. On a suite of few-shot natural language processing tasks, we match or outperform state-of-the-art performance.
Then, on a collection of student solutions to exam questions from an introductory university course, we show that our approach reaches an average precision of 88% on unseen questions, surpassing the 82% precision of teaching assistants. Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university. This is, to the best of our knowledge, the first successful deployment of a machine learning based feedback to open-ended student code.",,arXiv,"['cs.cy', 'cs.lg']",, -863,raft a realworld fewshot text classification benchmark,"['Neel Alex', 'Eli Lifland', 'Lewis Tunstall', 'Abhishek Thakur', 'Pegah Maham', 'C. Jess Riedel', 'Emmie Hine', 'Carolyn Ashurst', 'Paul Sedille', 'Alexis Carlier', 'Michael Noetel', 'Andreas Stuhlmüller']",http://arxiv.org/pdf/2109.14076v3.pdf,2021-09-28,," Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants? Existing benchmarks are not designed to measure progress in applied settings, and so don't directly answer this question. The RAFT benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. Baseline evaluations on RAFT reveal areas current techniques struggle with: reasoning over long texts and tasks with many classes. Human baselines show that some classification tasks are difficult for non-expert humans, reflecting that real-world value sometimes depends on domain expertise. Yet even non-expert human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets and leaderboard will track which model improvements translate into real-world benefits at https://raft.elicit.org .",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -864,lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5,"['Chengwei Qin', 'Shafiq Joty']",http://arxiv.org/pdf/2110.07298v3.pdf,2021-10-14,," Existing approaches to lifelong language learning rely on plenty of labeled data for learning a new task, which is hard to obtain in most real scenarios. Considering that humans can continually learn new tasks from a handful of examples, we expect the models also to be able to generalize well on new few-shot tasks without forgetting the previous ones. In this work, we define this more challenging yet practical problem as Lifelong Few-shot Language Learning (LFLL) and propose a unified framework for it based on prompt tuning of T5. Our framework called LFPT5 takes full advantage of PT's strong few-shot learning ability, and simultaneously trains the model as a task solver and a data generator. Before learning a new domain of the same task type, LFPT5 generates pseudo (labeled) samples of previously learned domains, and later gets trained on those samples to alleviate forgetting of previous knowledge as it learns the new domain. In addition, a KL divergence loss is minimized to achieve label consistency between the previous and the current model. While adapting to a new task type, LFPT5 includes and tunes additional prompt embeddings for the new task.
With extensive experiments, we demonstrate that LFPT5 can be applied to various different types of tasks and significantly outperform previous methods in different LFLL settings.",,arXiv,['cs.cl'],, -865,metaicl learning to learn in context,"['Sewon Min', 'Mike Lewis', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2110.15943v2.pdf,2021-10-29,," We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates. We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits. MetaICL outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of the meta-training tasks is key to improvements. We also show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x parameters. Finally, we show that MetaICL is complementary to human-written instructions, and the best performance can be achieved by combining both approaches.",,arXiv,"['cs.cl', 'cs.ai']",, -866,scaling asr improves zero and few shot learning,"['Alex Xiao', 'Weiyi Zheng', 'Gil Keren', 'Duc Le', 'Frank Zhang', 'Christian Fuegen', 'Ozlem Kalinli', 'Yatharth Saraf', 'Abdelrahman Mohamed']",http://arxiv.org/pdf/2111.05948v3.pdf,2021-11-10,," With 4.5 million hours of English speech from 10 different sources across 120 countries and models of up to 10 billion parameters, we explore the frontiers of scale for automatic speech recognition. We propose data selection techniques to efficiently scale training data to find the most valuable samples in massive datasets. To efficiently scale model sizes, we leverage various optimizations such as sparse transducer loss and model sharding. By training 1-10B parameter universal English ASR models, we push the limits of speech recognition performance across many domains. Furthermore, our models learn powerful speech representations with zero and few-shot capabilities on novel domains and styles of speech, exceeding previous results across multiple in-house and public benchmarks. For speakers with disorders due to brain damage, our best zero-shot and few-shot models achieve 22% and 60% relative improvement on the AphasiaBank test set, respectively, while realizing the best performance on public social media videos.
Furthermore, the same universal model reaches equivalent performance with 500x less in-domain data on the SPGISpeech financial-domain dataset.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, -867,pointclip point cloud understanding by clip,"['Renrui Zhang', 'Ziyu Guo', 'Wei Zhang', 'Kunchang Li', 'Xupeng Miao', 'Bin Cui', 'Yu Qiao', 'Peng Gao', 'Hongsheng Li']",http://arxiv.org/pdf/2112.02413v1.pdf,2021-12-04,," Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training (CLIP) have shown inspirational performance on 2D visual recognition, which learns to match images with their corresponding texts in open-vocabulary settings. However, it remains under explored that whether CLIP, pre-trained by large-scale image-text pairs in 2D, can be generalized to 3D recognition. In this paper, we identify such a setting is feasible by proposing PointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D category texts. Specifically, we encode a point cloud by projecting it into multi-view depth maps without rendering, and aggregate the view-wise zero-shot prediction to achieve knowledge transfer from 2D to 3D. On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in 2D. By just fine-tuning the lightweight adapter in the few-shot settings, the performance of PointCLIP could be largely improved. In addition, we observe the complementary property between PointCLIP and classical 3D-supervised networks. By simple ensembling, PointCLIP boosts baseline's performance and even surpasses state-of-the-art models. Therefore, PointCLIP is a promising alternative for effective 3D point cloud understanding via CLIP under low resource cost and data regime. We conduct thorough experiments on widely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to demonstrate the effectiveness of PointCLIP. The code is released at https://github.com/ZrrSkywalker/PointCLIP.",,arXiv,"['cs.cv', 'cs.ai', 'cs.ro']",, -868,"visionlanguage intelligence tasks, representation learning, and large models","['Feng Li', 'Hao Zhang', 'Yi-Fan Zhang', 'Shilong Liu', 'Jian Guo', 'Lionel M. Ni', 'PengChuan Zhang', 'Lei Zhang']",http://arxiv.org/pdf/2203.01922v1.pdf,2022-03-03,," This paper presents a comprehensive survey of vision-language (VL) intelligence from the perspective of time. This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension. We summarize the development in this field into three time periods, namely task-specific methods, vision-language pre-training (VLP) methods, and larger models empowered by large-scale weakly-labeled data. We first take some common VL tasks as examples to introduce the development of task-specific methods. Then we focus on VLP methods and comprehensively review key components of the model structures and training methods. After that, we show how recent work utilizes large-scale raw image-text data to learn language-aligned visual representations that generalize better on zero or few shot learning tasks. Finally, we discuss some potential future trends towards modality cooperation, unified representation, and knowledge incorporation.
We believe that this review will be of help for researchers and practitioners of AI and ML, especially those interested in computer vision and natural language processing.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, -869,rethinking task sampling for fewshot visionlanguage transfer learning,"['Zhenhailong Wang', 'Hang Yu', 'Manling Li', 'Han Zhao', 'Heng Ji']",http://arxiv.org/pdf/2203.04904v3.pdf,2022-03-09,," Despite achieving state-of-the-art zero-shot performance, existing vision-language models still fall short of few-shot transfer ability on domain-specific problems. Classical fine-tuning often fails to prevent highly expressive models from exploiting spurious correlations. Although model-agnostic meta-learning (MAML) presents as a natural alternative for few-shot transfer learning, the expensive computation due to implicit second-order optimization limits its use on large-scale vision-language models such as CLIP. While much literature has been devoted to exploring alternative optimization strategies, we identify another essential aspect towards effective few-shot transfer learning, task sampling, which is previously only be viewed as part of data pre-processing in MAML. To show the impact of task sampling, we propose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), which differentiates classical fine-tuning only on uniformly sampling multiple tasks. Despite its simplicity, we show that MAMF consistently outperforms classical fine-tuning on five few-shot vision-language classification tasks. We further show that the effectiveness of the bi-level optimization in MAML is highly sensitive to the zero-shot performance of a task in the context of few-shot vision-language classification. The goal of this paper is to provide new insights on what makes few-shot learning work, and encourage more research into investigating better task sampling strategies.",,arXiv,"['cs.mm', 'cs.cl', 'cs.cv']",, -870,mgpt fewshot learners go multilingual,"['Oleh Shliazhko', 'Alena Fenogenova', 'Maria Tikhonova', 'Vladislav Mikhailov', 'Anastasia Kozlova', 'Tatiana Shavrina']",http://arxiv.org/pdf/2204.07580v2.pdf,2022-04-15,," Recent studies report that autoregressive language models can successfully solve many NLP tasks via zero- and few-shot learning paradigms, which opens up new possibilities for using the pre-trained language models. This paper introduces two autoregressive GPT-like models with 1.3 billion and 13 billion parameters trained on 60 languages from 25 language families using Wikipedia and Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; Deepspeed and Megatron frameworks allow us to parallelize the training and inference steps effectively. The resulting models show performance on par with the recently released XGLM models by Facebook, covering more languages and enhancing NLP possibilities for low resource languages of CIS countries and Russian small nations. We detail the motivation for the choices of the architecture design, thoroughly describe the data preparation pipeline, and train five small versions of the model to choose the most optimal multilingual tokenization strategy. We measure the model perplexity in all covered languages and evaluate it on the wide spectre of multilingual tasks, including classification, generative, sequence labeling and knowledge probing. The models were evaluated with the zero-shot and few-shot methods. Furthermore, we compared the classification tasks with the state-of-the-art multilingual model XGLM.
The source code and the mGPT XL model are publicly released.",,arXiv,"['cs.cl', 'cs.ai', '68-06, 68-04, 68t50, 68t01', 'i.2; i.2.7']",, -871,opt open pretrained transformer language models,"['Susan Zhang', 'Stephen Roller', 'Naman Goyal', 'Mikel Artetxe', 'Moya Chen', 'Shuohui Chen', 'Christopher Dewan', 'Mona Diab', 'Xian Li', 'Xi Victoria Lin', 'Todor Mihaylov', 'Myle Ott', 'Sam Shleifer', 'Kurt Shuster', 'Daniel Simig', 'Punit Singh Koura', 'Anjali Sridhar', 'Tianlu Wang', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2205.01068v4.pdf,2022-05-02,," Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.",,arXiv,"['cs.cl', 'cs.lg']",, -872,relation extraction as openbook examination retrievalenhanced prompt tuning,"['Xiang Chen', 'Lei Li', 'Ningyu Zhang', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",http://arxiv.org/pdf/2205.02355v2.pdf,2022-05-04,," Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to those rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization regarding training data as a book and inference as the close-book test. Those long-tailed or hard patterns can hardly be memorized in parameters given few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval regarding prompt-based instance representations and corresponding relation labels as memorized key-value pairs. During inference, the model can infer relations by linearly interpolating the base output of PLM with the non-parametric nearest neighbor distribution over the datastore. In this way, our model not only infers relation through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method can achieve state-of-the-art in both standard supervised and few-shot settings. Code are available in https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -873,towards unified prompt tuning for fewshot text classification,"['Jianing Wang', 'Chengyu Wang', 'Fuli Luo', 'Chuanqi Tan', 'Minghui Qiu', 'Fei Yang', 'Qiuhui Shi', 'Songfang Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.05313v1.pdf,2022-05-11,," Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
Yet, PLMs are unfamiliar with prompt-style expressions duringpre-training, which limits the few-shot learning performance on downstreamtasks. It would be desirable if the models can acquire some prompting knowledgebefore adaptation to specific NLP tasks. We present the Unified Prompt Tuning(UPT) framework, leading to better few-shot text classification for BERT-stylemodels by explicitly capturing prompting semantics from non-target NLPdatasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed forjoint prompt learning across different NLP tasks, forcing PLMs to capturetask-invariant prompting knowledge. We further design a self-supervised tasknamed Knowledge-enhanced Selective Masked Language Modeling to improve thePLM's generalization abilities for accurate adaptation to previously unseentasks. After multi-task learning across multiple tasks, the PLM can be betterprompt-tuned towards any dissimilar target tasks in low-resourced settings.Experiments over a variety of NLP tasks show that UPT consistently outperformsstate-of-the-arts for prompt-based fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, -874,towards answering openended ethical quandary questions,"['Yejin Bang', 'Nayeon Lee', 'Tiezheng Yu', 'Leila Khalatbari', 'Yan Xu', 'Samuel Cahyawijaya', 'Dan Su', 'Bryan Wilie', 'Romain Barraud', 'Elham J. Barezi', 'Andrea Madotto', 'Hayden Kee', 'Pascale Fung']",http://arxiv.org/pdf/2205.05989v3.pdf,2022-05-12,," Considerable advancements have been made in various NLP tasks based on theimpressive power of large language models (LLMs) and many NLP applications aredeployed in our daily lives. In this work, we challenge the capability of LLMswith the new task of Ethical Quandary Generative Question Answering. Ethicalquandary questions are more challenging to address because multiple conflictinganswers may exist to a single quandary. We explore the current capability ofLLMs in providing an answer with a deliberative exchange of differentperspectives to an ethical quandary, in the approach of Socratic philosophy,instead of providing a closed answer like an oracle. We propose a model thatsearches for different ethical principles applicable to the ethical quandaryand generates an answer conditioned on the chosen principles throughprompt-based few-shot learning. We also discuss the remaining challenges andethical issues involved in this task and suggest the direction towarddeveloping responsible NLP systems by incorporating human values explicitly.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -875,promptda labelguided data augmentation for promptbased fewshot learners,"['Canyu Chen', 'Kai Shu']",http://arxiv.org/pdf/2205.09229v3.pdf,2022-05-18,," Recent advances in large pre-trained language models (PLMs) lead toimpressive gains in natural language understanding (NLU) tasks withtask-specific fine-tuning. However, directly fine-tuning PLMs heavily relies onsufficient labeled training instances, which are usually hard to obtain.Prompt-based tuning on PLMs has shown to be powerful for various downstreamfew-shot tasks. Existing works studying prompt-based tuning for few-shot NLUtasks mainly focus on deriving proper label words with a verbalizer orgenerating prompt templates to elicit semantics from PLMs. In addition,conventional data augmentation strategies such as synonym substitution, thoughwidely adopted in low-resource scenarios, only bring marginal improvements forprompt-based few-shot learning. 
Thus, an important research question arises:how to design effective data augmentation methods for prompt-based few-shottuning? To this end, considering the label semantics are essential inprompt-based tuning, we propose a novel label-guided data augmentationframework PromptDA, which exploits the enriched label semantic information fordata augmentation. Extensive experiment results on few-shot text classificationtasks demonstrate the superior performance of the proposed framework byeffectively leveraging label semantics and data augmentation for naturallanguage understanding. Our code is available athttps://github.com/canyuchen/PromptDA.",,arXiv,"['cs.cl', 'cs.ai']",, -876,what makes datatotext generation hard for pretrained language models,"['Moniba Keymanesh', 'Adrian Benton', 'Mark Dredze']",http://arxiv.org/pdf/2205.11505v1.pdf,2022-05-23,," Expressing natural language descriptions of structured facts or relations --data-to-text generation (D2T) -- increases the accessibility of structuredknowledge repositories. Previous work shows that pre-trained languagemodels(PLMs) perform remarkably well on this task after fine-tuning on asignificant amount of task-specific training data. On the other hand, whileauto-regressive PLMs can generalize from a few task examples, their efficacy atD2T is largely unexplored. Furthermore, we have an incomplete understanding ofthe limits of PLMs on D2T. In this work, we conduct an empirical study of both fine-tuned andauto-regressive PLMs on the DART multi-domain D2T dataset. We consider theirperformance as a function of the amount of task-specific data and how thesedata are incorporated into the models: zero and few-shot learning, andfine-tuning of model weights. In addition, we probe the limits of PLMs bymeasuring performance on subsets of the evaluation data: novel predicates andabstractive test examples. To improve the performance on these subsets, weinvestigate two techniques: providing predicate descriptions in the context andre-ranking generated candidates by information reflected in the source.Finally, we conduct a human evaluation of model errors and show that D2Tgeneration tasks would benefit from datasets with more careful manual curation.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -877,attempt parameterefficient multitask tuning via attentional mixtures of soft prompts,"['Akari Asai', 'Mohammadreza Salehi', 'Matthew E. Peters', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2205.11961v2.pdf,2022-05-24,," This work introduces a new multi-task, parameter-efficient language model(LM) tuning method that learns to transfer knowledge across different tasks viaa mixture of soft prompts-small prefix embedding vectors pre-trained fordifferent tasks. Our method, called ATTEMPT (ATTEntional Mixtures of PromptTuning), obtains source prompts as encodings of large-scale source tasks into asmall number of parameters and trains an attention module to interpolate thesource prompts and a newly initialized target prompt for every instance in thetarget task. During training, only the target task prompt and the attentionweights, which are shared between tasks in multi-task training, are updated,while the original LM and source prompts are intact. ATTEMPT is highlyparameter-efficient (e.g., updates 2,300 times fewer parameters than fullfine-tuning) while achieving high task performance using knowledge fromhigh-resource tasks. 
Moreover, it is modular using pre-trained soft prompts,and can flexibly add or remove source prompts for effective knowledge transfer.Our experimental results across 21 diverse NLP datasets show that ATTEMPTsignificantly outperforms prompt tuning and outperforms or matches fullyfine-tuned or other parameter-efficient tuning approaches that use over tentimes more parameters. Finally, ATTEMPT outperforms previous work in few-shotlearning settings.",,arXiv,['cs.cl'],, -878,making large language models better reasoners with stepaware verifier,"['Yifei Li', 'Zeqi Lin', 'Shizhuo Zhang', 'Qiang Fu', 'Bei Chen', 'Jian-Guang Lou', 'Weizhu Chen']",http://arxiv.org/pdf/2206.02336v3.pdf,2022-06-06,," Few-shot learning is a challenging task that requires language models togeneralize from limited examples. Large language models like GPT-3 and PaLMhave made impressive progress in this area, but they still face difficulties inreasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improvetheir reasoning skills, previous work has proposed to guide the language modelwith prompts that elicit a series of reasoning steps before giving the finalanswer, achieving a significant improvement on GSM8K from 17.9% to 58.1% inproblem-solving rate. In this paper, we present DIVERSE (Diverse Verifier onReasoning Step), a novel approach that further enhances the reasoningcapability of language models. DIVERSE has three main components: first, itgenerates diverse prompts to explore different reasoning paths for the samequestion; second, it uses a verifier to filter out incorrect answers based on aweighted voting scheme; and third, it verifies each reasoning step individuallyinstead of the whole chain. We evaluate DIVERSE on the latest language modelcode-davinci-002 and show that it achieves new state-of-the-art results on sixof eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).",,arXiv,"['cs.cl', 'cs.ai']",, -879,language models are generalpurpose interfaces,"['Yaru Hao', 'Haoyu Song', 'Li Dong', 'Shaohan Huang', 'Zewen Chi', 'Wenhui Wang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2206.06336v1.pdf,2022-06-13,," Foundation models have received much attention due to their effectivenessacross a broad range of downstream applications. Though there is a bigconvergence in terms of architecture, most pretrained models are typicallystill developed for specific tasks or modalities. In this work, we propose touse language models as a general-purpose interface to various foundationmodels. A collection of pretrained encoders perceive diverse modalities (suchas vision, and language), and they dock with a language model that plays therole of a universal task layer. We propose a semi-causal language modelingobjective to jointly pretrain the interface and the modular encoders. Wesubsume the advantages and capabilities from both causal and non-causalmodeling, thereby combining the best of two worlds. Specifically, the proposedmethod not only inherits the capabilities of in-context learning and open-endedgeneration from causal language modeling, but also is conducive to finetuningbecause of the bidirectional encoders. 
More importantly, our approachseamlessly unlocks the combinations of the above capabilities, e.g., enablingin-context learning or instruction following with finetuned encoders.Experimental results across various language-only and vision-languagebenchmarks show that our model outperforms or is competitive with specializedmodels on finetuning, zero-shot generalization, and few-shot learning.",,arXiv,['cs.cl'],, -880,fit parameter efficient fewshot transfer learning for personalized and federated image classification,"['Aliaksandra Shysheya', 'John Bronskill', 'Massimiliano Patacchiola', 'Sebastian Nowozin', 'Richard E Turner']",http://arxiv.org/pdf/2206.08671v2.pdf,2022-06-17,," Modern deep learning systems are increasingly deployed in situations such aspersonalization and federated learning where it is necessary to support i)learning on small amounts of data, and ii) communication efficient distributedtraining protocols. In this work, we develop FiLM Transfer (FiT) which fulfillsthese requirements in the image classification setting by combining ideas fromtransfer learning (fixed pretrained backbones and fine-tuned FiLM adapterlayers) and meta-learning (automatically configured Naive Bayes classifiers andepisodic training) to yield parameter efficient models with superiorclassification accuracy at low-shot. The resulting parameter efficiency is keyfor enabling few-shot learning, inexpensive model updates for personalization,and communication efficient federated learning. We experiment with FiT on awide range of downstream datasets and show that it achieves betterclassification accuracy than the leading Big Transfer (BiT) algorithm atlow-shot and achieves state-of-the art accuracy on the challenging VTAB-1kbenchmark, with fewer than 1% of the updateable parameters. Finally, wedemonstrate the parameter efficiency and superior accuracy of FiT indistributed low-shot applications including model personalization and federatedlearning where model update size is an important performance metric.",,arXiv,"['stat.ml', 'cs.cv', 'cs.lg']",, -881,a reinforcement learningbased offensive semantics censorship system for chatbots,"['Shaokang Cai', 'Dezhi Han', 'Zibin Zheng', 'Dun Li', ' NoelCrespi']",http://arxiv.org/pdf/2207.10569v1.pdf,2022-07-13,," The rapid development of artificial intelligence (AI) technology has enabledlarge-scale AI applications to land in the market and practice. However, whileAI technology has brought many conveniences to people in the productizationprocess, it has also exposed many security issues. Especially, attacks againstonline learning vulnerabilities of chatbots occur frequently. Therefore, thispaper proposes a semantics censorship chatbot system based on reinforcementlearning, which is mainly composed of two parts: the Offensive semanticscensorship model and the semantics purification model. Offensive semanticsreview can combine the context of user input sentences to detect the rapidevolution of Offensive semantics and respond to Offensive semantics responses.The semantics purification model For the case of chatting robot models, it hasbeen contaminated by large numbers of offensive semantics, by strengthening theoffensive reply learned by the learning algorithm, rather than rolling back tothe early versions. In addition, by integrating a once-through learningapproach, the speed of semantics purification is accelerated while reducing theimpact on the quality of replies. 
The experimental results show that ourproposed approach reduces the probability of the chat model generatingoffensive replies and that the integration of the few-shot learning algorithmimproves the training speed rapidly while effectively slowing down the declinein BLEU values.",,arXiv,['cs.cl'],, -882,alexatm 20b fewshot learning using a largescale multilingual seq2seq model,"['Saleh Soltan', 'Shankar Ananthakrishnan', 'Jack FitzGerald', 'Rahul Gupta', 'Wael Hamza', 'Haidar Khan', 'Charith Peris', 'Stephen Rawls', 'Andy Rosenbaum', 'Anna Rumshisky', 'Chandana Satya Prakash', 'Mukund Sridhar', 'Fabian Triefenbach', 'Apurv Verma', 'Gokhan Tur', 'Prem Natarajan']",http://arxiv.org/pdf/2208.01448v2.pdf,2022-08-02,," In this work, we demonstrate that multilingual large-scalesequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoisingand Causal Language Modeling (CLM) tasks, are more efficient few-shot learnersthan decoder-only models on various tasks. In particular, we train a 20 billionparameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B)and show that it achieves state-of-the-art (SOTA) performance on 1-shotsummarization tasks, outperforming a much larger 540B PaLM decoder model.AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially forlow-resource languages, across almost all language pairs supported by the model(Arabic, English, French, German, Hindi, Italian, Japanese, Marathi,Portuguese, Spanish, Tamil, and Telugu) on Flores-101 dataset. We also show inzero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE and SQuADv2datasets and provides SOTA performance on multilingual tasks such as XNLI,XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling casefor seq2seq models as a powerful alternative to decoder-only models forLarge-scale Language Model (LLM) training.",,arXiv,"['cs.cl', 'cs.lg']",, -883,unsupervisedly prompting alphafold2 for fewshot learning of accurate folding landscape and protein structure prediction,"['Jun Zhang', 'Sirui Liu', 'Mengyun Chen', 'Haotian Chu', 'Min Wang', 'Zidong Wang', 'Jialiang Yu', 'Ningxi Ni', 'Fan Yu', 'Diqing Chen', 'Yi Isaac Yang', 'Boxin Xue', 'Lijiang Yang', 'Yuan Liu', 'Yi Qin Gao']",http://arxiv.org/pdf/2208.09652v2.pdf,2022-08-20,," Data-driven predictive methods which can efficiently and accurately transformprotein sequences into biologically active structures are highly valuable forscientific research and medical development. Determining accurate foldinglandscape using co-evolutionary information is fundamental to the success ofmodern protein structure prediction methods. As the state of the art,AlphaFold2 has dramatically raised the accuracy without performing explicitco-evolutionary analysis. Nevertheless, its performance still shows strongdependence on available sequence homologs. Based on the interrogation on thecause of such dependence, we presented EvoGen, a meta generative model, toremedy the underperformance of AlphaFold2 for poor MSA targets. By promptingthe model with calibrated or virtually generated homologue sequences, EvoGenhelps AlphaFold2 fold accurately in low-data regime and even achieveencouraging performance with single-sequence predictions. Being able to makeaccurate predictions with few-shot MSA not only generalizes AlphaFold2 betterfor orphan sequences, but also democratizes its use for high-throughputapplications. 
Besides, EvoGen combined with AlphaFold2 yields a probabilisticstructure generation method which could explore alternative conformations ofprotein sequences, and the task-aware differentiable algorithm for sequencegeneration will benefit other related tasks including protein design.",,arXiv,"['cs.lg', 'cs.ai', 'physics.bio-ph']",, -884,disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective,"['Jiangmeng Li', 'Yanan Zhang', 'Wenwen Qiang', 'Lingyu Si', 'Chengbo Jiao', 'Xiaohui Hu', 'Changwen Zheng', 'Fuchun Sun']",http://arxiv.org/pdf/2208.12681v2.pdf,2022-08-26,," Few-shot learning models learn representations with limited humanannotations, and such a learning paradigm demonstrates practicability invarious tasks, e.g., image classification, object detection, etc. However,few-shot object detection methods suffer from an intrinsic defect that thelimited training data makes the model cannot sufficiently explore semanticinformation. To tackle this, we introduce knowledge distillation to thefew-shot object detection learning paradigm. We further run a motivatingexperiment, which demonstrates that in the process of knowledge distillation,the empirical error of the teacher model degenerates the prediction performanceof the few-shot object detection model as the student. To understand thereasons behind this phenomenon, we revisit the learning paradigm of knowledgedistillation on the few-shot object detection task from the causal theoreticstandpoint, and accordingly, develop a Structural Causal Model. Following thetheoretical guidance, we propose a backdoor adjustment-based knowledgedistillation method for the few-shot object detection task, namely Disentangleand Remerge (D&R), to perform conditional causal intervention toward thecorresponding Structural Causal Model. Empirically, the experiments onbenchmarks demonstrate that D&R can yield significant performance boosts infew-shot object detection. Code is available athttps://github.com/ZYN-1101/DandR.git.",,arXiv,['cs.cv'],, -885,neurips'22 crossdomain metadl competition design and baseline results,"['Dustin Carrión-Ojeda', 'Hong Chen', 'Adrian El Baz', 'Sergio Escalera', 'Chaoyu Guan', 'Isabelle Guyon', 'Ihsan Ullah', 'Xin Wang', 'Wenwu Zhu']",http://arxiv.org/pdf/2208.14686v1.pdf,2022-08-31,," We present the design and baseline results for a new challenge in theChaLearn meta-learning series, accepted at NeurIPS'22, focusing on""cross-domain"" meta-learning. Meta-learning aims to leverage experience gainedfrom previous tasks to solve new tasks efficiently (i.e., with betterperformance, little training data, and/or modest computational resources).While previous challenges in the series focused on within-domain few-shotlearning problems, with the aim of learning efficiently N-way k-shot tasks(i.e., N class classification problems with k training examples), thiscompetition challenges the participants to solve ""any-way"" and ""any-shot""problems drawn from various domains (healthcare, ecology, biology,manufacturing, and others), chosen for their humanitarian and societal impact.To that end, we created Meta-Album, a meta-dataset of 40 image classificationdatasets from 10 domains, from which we carve out tasks with any number of""ways"" (within the range 2-20) and any number of ""shots"" (within the range1-20). The competition is with code submission, fully blind-tested on theCodaLab challenge platform. 
The code of the winners will be open-sourced,enabling the deployment of automated machine learning solutions for few-shotimage classification across several domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ne']",, -886,automatic label sequence generation for prompting sequencetosequence models,"['Zichun Yu', 'Tianyu Gao', 'Zhengyan Zhang', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun', 'Jie Zhou']",http://arxiv.org/pdf/2209.09401v1.pdf,2022-09-20,," Prompting, which casts downstream applications as language modeling tasks,has shown to be sample efficient compared to standard fine-tuning withpre-trained models. However, one pitfall of prompting is the need ofmanually-designed patterns, whose outcome can be unintuitive and requires largevalidation sets to tune. To tackle the challenge, we propose AutoSeq, a fullyautomatic prompting method: (1) We adopt natural language prompts onsequence-to-sequence models, enabling free-form generation and larger labelsearch space; (2) We propose label sequences -- phrases with indefinite lengthsto verbalize the labels -- which eliminate the need of manual templates and aremore expressive than single label words; (3) We use beam search toautomatically generate a large amount of label sequence candidates and proposecontrastive re-ranking to get the best combinations. AutoSeq significantlyoutperforms other no-manual-design methods, such as soft prompt tuning, adaptertuning, and automatic search on single label words; the generated labelsequences are even better than curated manual ones on a variety of tasks. Ourmethod reveals the potential of sequence-to-sequence models in few-shotlearning and sheds light on a path to generic and automatic prompting. Thesource code of this paper can be obtained fromhttps://github.com/thunlp/Seq2Seq-Prompt.",,arXiv,"['cs.cl', 'cs.lg']",, -887,collaboration of pretrained models makes better fewshot learner,"['Renrui Zhang', 'Bohao Li', 'Wei Zhang', 'Hao Dong', 'Hongsheng Li', 'Peng Gao', 'Yu Qiao']",http://arxiv.org/pdf/2209.12255v2.pdf,2022-09-25,," Few-shot classification requires deep neural networks to learn generalizedrepresentations only from limited training images, which is challenging butsignificant in low-data regimes. Recently, CLIP-based methods have shownpromising few-shot performance benefited from the contrastive language-imagepre-training. Based on this point, we question if the large-scale pre-trainingcan alleviate the few-shot data deficiency and also assist the representationlearning by the pre-learned knowledge. In this paper, we propose CoMo, aCollaboration of pre-trained Models that incorporates diverse prior knowledgefrom various pre-training paradigms for better few-shot learning. Our CoMoincludes: CLIP's language-contrastive knowledge, DINO's vision-contrastiveknowledge, and DALL-E's language-generative knowledge. Specifically, CoMo worksin two aspects: few-shot data expansion and diverse knowledge ensemble. Forone, we generate synthetic images via zero-shot DALL-E to enrich the few-shottraining data without any manpower. For the other, we introduce a learnableMulti-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions fromCLIP and DINO. By such collaboration, CoMo can fully unleash the potential ofdifferent pre-training methods and unify them to perform state-of-the-art forfew-shot classification. 
We conduct extensive experiments on 11 datasets todemonstrate the superiority and generalization ability of our approach.",,arXiv,['cs.cv'],, -888,clip2point transfer clip to point cloud classification with imagedepth pretraining,"['Tianyu Huang', 'Bowen Dong', 'Yunhan Yang', 'Xiaoshui Huang', 'Rynson W. H. Lau', 'Wanli Ouyang', 'Wangmeng Zuo']",http://arxiv.org/pdf/2210.01055v3.pdf,2022-10-03,," Pre-training across 3D vision and language remains under development becauseof limited training data. Recent works attempt to transfer vision-languagepre-training models to 3D vision. PointCLIP converts point cloud data tomulti-view depth maps, adopting CLIP for shape classification. However, itsperformance is restricted by the domain gap between rendered depth maps andimages, as well as the diversity of depth distributions. To address this issue,we propose CLIP2Point, an image-depth pre-training method by contrastivelearning to transfer CLIP to the 3D domain, and adapt it to point cloudclassification. We introduce a new depth rendering setting that forms a bettervisual effect, and then render 52,460 pairs of images and depth maps fromShapeNet for pre-training. The pre-training scheme of CLIP2Point combinescross-modality learning to enforce the depth features for capturing expressivevisual and textual features and intra-modality learning to enhance theinvariance of depth aggregation. Additionally, we propose a novel Dual-PathAdapter (DPA) module, i.e., a dual-path structure with simplified adapters forfew-shot learning. The dual-path structure allows the joint use of CLIP andCLIP2Point, and the simplified adapter can well fit few-shot tasks withoutpost-search. Experimental results show that CLIP2Point is effective intransferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIPand other self-supervised 3D networks, achieving state-of-the-art results onzero-shot and few-shot classification.",,arXiv,['cs.cv'],, -889,"rarr researching and revising what language models say, using language models","['Luyu Gao', 'Zhuyun Dai', 'Panupong Pasupat', 'Anthony Chen', 'Arun Tejasvi Chaganty', 'Yicheng Fan', 'Vincent Y. Zhao', 'Ni Lao', 'Hongrae Lee', 'Da-Cheng Juan', 'Kelvin Guu']",http://arxiv.org/pdf/2210.08726v3.pdf,2022-10-17,," Language models (LMs) now excel at many tasks such as few-shot learning,question answering, reasoning, and dialog. However, they sometimes generateunsupported or misleading content. A user cannot easily determine whether theiroutputs are trustworthy or not, because most LMs do not have any built-inmechanism for attribution to external evidence. To enable attribution whilestill preserving all the powerful advantages of recent generation models, wepropose RARR (Retrofit Attribution using Research and Revision), a system that1) automatically finds attribution for the output of any text generation modeland 2) post-edits the output to fix unsupported content while preserving theoriginal output as much as possible. 
When applied to the output of severalstate-of-the-art LMs on a diverse set of generation tasks, we find that RARRsignificantly improves attribution while otherwise preserving the originalinput to a much greater degree than previously explored edit models.Furthermore, the implementation of RARR requires only a handful of trainingexamples, a large language model, and standard web search.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -890,tape assessing fewshot russian language understanding,"['Ekaterina Taktasheva', 'Tatiana Shavrina', 'Alena Fenogenova', 'Denis Shevelev', 'Nadezhda Katricheva', 'Maria Tikhonova', 'Albina Akhmetgareeva', 'Oleg Zinkevich', 'Anastasiia Bashmakova', 'Svetlana Iordanskaia', 'Alena Spiridonova', 'Valentina Kurenshchikova', 'Ekaterina Artemova', 'Vladislav Mikhailov']",http://arxiv.org/pdf/2210.12813v1.pdf,2022-10-23,," Recent advances in zero-shot and few-shot learning have shown promise for ascope of research and practical purposes. However, this fast-growing area lacksstandardized evaluation suites for non-English languages, hindering progressoutside the Anglo-centric paradigm. To address this line of research, wepropose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark thatincludes six more complex NLU tasks for Russian, covering multi-hop reasoning,ethical concepts, logic and commonsense knowledge. The TAPE's design focuses onsystematic zero-shot and few-shot NLU evaluation: (i) linguistic-orientedadversarial attacks and perturbations for analyzing robustness, and (ii)subpopulations for nuanced interpretation. The detailed analysis of testing theautoregressive baselines indicates that simple spelling-based perturbationsaffect the performance the most, while paraphrasing the input has a morenegligible effect. At the same time, the results demonstrate a significant gapbetween the neural and human baselines for most tasks. We publicly release TAPE(tape-benchmark.com) to foster research on robust LMs that can generalize tonew tasks when little to no supervision is available.",,arXiv,['cs.cl'],, -891,learning new tasks from a few examples with softlabel prototypes,"['Avyav Kumar Singh', 'Ekaterina Shutova', 'Helen Yannakoudakis']",http://arxiv.org/pdf/2210.17437v2.pdf,2022-10-31,," It has been experimentally demonstrated that humans are able to learn in amanner that allows them to make predictions on categories for which they havenot seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020)have recently presented a machine learning approach that aims to do the same.They utilise synthetically generated data and demonstrate that it is possibleto achieve sub-linear scaling and develop models that can learn to recognise Nclasses from M training samples where M is less than N - aka less-than-one shotlearning. Their method was, however, defined for univariate or simplemultivariate data (Sucholutsky et al., 2021). We extend it to work on large,high-dimensional and real-world datasets and empirically validate it in thisnew and challenging setting. We apply this method to learn previously unseenNLP tasks from very few examples (4, 8 or 16). We first generate compact,sophisticated less-than-one shot representations called soft-label prototypeswhich are fitted on training data, capturing the distribution of differentclasses across the input domain space. 
We then use a modified k-NearestNeighbours classifier to demonstrate that soft-label prototypes can classifydata competitively, even outperforming much more computationally complexfew-shot learning methods.",,arXiv,"['cs.lg', 'cs.cl']",, -892,explicit knowledge transfer for weaklysupervised code generation,"['Zhangir Azerbayev', 'Ansong Ni', 'Hailey Schoelkopf', 'Dragomir Radev']",http://arxiv.org/pdf/2211.16740v3.pdf,2022-11-30,," Large language models (LLMs) can acquire strong code-generation capabilitiesthrough few-shot learning. In contrast, supervised fine-tuning is still neededfor smaller models to achieve good performance. Such fine-tuning demands alarge number of task-specific NL-code pairs, which are expensive to obtain. Inthis paper, we attempt to transfer the code generation ability of an LLM to asmaller model with the aid of weakly-supervised data. More specifically, wepropose explicit knowledge transfer (EKT), which uses the few-shot capabilitiesof a teacher LLM to create NL-code pairs that we then filter for correctnessand fine-tune the student on. We evaluate EKT on the task of generating codesolutions to math word problems from the GSM8k dataset. We find that EKT notonly yields better performance than training with expert iteration, but alsooutperforms knowledge distillation, another form of knowledge transfer. AGPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4%pass@100 on GSM8k, while the same student and teacher trained with knowledgedistillation yield only a 3.7% pass@100. We also show that it is possible for astudent model to outperform the teacher using EKT.",,arXiv,['cs.cl'],, -893,can incontext learners learn a reasoning concept from demonstrations,"['Michal Štefánik', 'Marek Kadlčík']",http://arxiv.org/pdf/2212.01692v4.pdf,2022-12-03,," Language models exhibit an emergent ability to learn a new task from a smallnumber of input-output demonstrations. However, recent work shows thatin-context learners largely rely on their pre-trained knowledge, such as thesentiment of the labels, instead of learning new associations from the input.We argue that the commonly-used few-shot evaluation using a random selection ofin-context demonstrations can not disentangle models' reliance on such biases,as most of the randomly-selected demonstrations do not present relationsinformative for prediction beyond exposing the task's input-outputdistribution. Therefore, to evaluate models' in-context learning ability independent ofmodels' memory, we introduce a Concept-sharing few-shot learning methodchoosing the demonstrations that share an underlying concept with the predictedsample. We extract a set of such concepts from available human explanations andmeasure how much models can benefit from presenting these concepts in few-shotdemonstrations. We find that most of the recent in-context learners can not consistentlybenefit from the demonstrated concepts, irrespective of the model size.However, we note that T0 models are more sensitive to exhibited concepts,benefiting from concept-sharing demonstrations in 7 out of 8 evaluationscenarios.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -894,frozen clip model is an efficient point cloud backbone,"['Xiaoshui Huang', 'Sheng Li', 'Wentao Qu', 'Tong He', 'Yifan Zuo', 'Wanli Ouyang']",http://arxiv.org/pdf/2212.04098v2.pdf,2022-12-08,," The pretraining-finetuning paradigm has demonstrated great success in NLP and2D image fields because of the high-quality representation ability andtransferability of their pretrained models. 
However, pretraining such a strongmodel is difficult in the 3D point cloud field since the training data islimited and point cloud collection is expensive. This paper introducesEfficient Point Cloud Learning (EPCL), an effective and efficient point cloudlearner for directly training high-quality point cloud models with a frozenCLIP model. Our EPCL connects the 2D and 3D modalities by semantically aligningthe 2D features and point cloud features without paired 2D-3D data.Specifically, the input point cloud is divided into a sequence of tokens anddirectly fed into the frozen CLIP model to learn point cloud representation.Furthermore, we design a task token to narrow the gap between 2D images and 3Dpoint clouds. Comprehensive experiments on 3D detection, semantic segmentation,classification and few-shot learning demonstrate that the 2D CLIP model can bean efficient point cloud backbone and our method achieves state-of-the-artaccuracy on both real-world and synthetic downstream tasks. Code will beavailable.",,arXiv,['cs.cv'],, -895,federated fewshot learning for mobile nlp,"['Dongqi Cai', 'Shangguang Wang', 'Yaozong Wu', 'Felix Xiaozhu Lin', 'Mengwei Xu']",http://arxiv.org/pdf/2212.05974v2.pdf,2022-12-12,," Natural language processing (NLP) sees rich mobile applications. To supportvarious language understanding tasks, a foundation NLP model is oftenfine-tuned in a federated, privacy-preserving setting (FL). This processcurrently relies on at least hundreds of thousands of labeled training samplesfrom mobile clients; yet mobile users often lack willingness or knowledge tolabel their data. Such an inadequacy of data labels is known as a few-shotscenario; it becomes the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shotscenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling andprompt learning, we first establish a training pipeline that deliverscompetitive accuracy when only 0.05% (fewer than 100) of the training data islabeled and the remaining is unlabeled. To instantiate the workflow, we furtherpresent a system FeS, addressing the high execution cost with novel designs.(1) Curriculum pacing, which injects pseudo labels to the training workflow ata rate commensurate to the learning progress; (2) Representational diversity, amechanism for selecting the most learnable data, only for which pseudo labelswill be generated; (3) Co-planning of a model's training depth and layercapacity. Together, these designs reduce the training delay, client energy, andnetwork traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$,respectively. Through algorithm/system co-design, FFNLP demonstrates that FLcan apply to challenging settings where most training samples are unlabeled.",,arXiv,"['cs.lg', 'cs.cl']",, -896,fewfedweight fewshot federated learning framework across multiple nlp tasks,"['Weilong Dong', 'Xinwei Wu', 'Junzhuo Li', 'Shuangzhi Wu', 'Chao Bian', 'Deyi Xiong']",http://arxiv.org/pdf/2212.08354v1.pdf,2022-12-16,," Massively multi-task learning with large language models has recently madesubstantial progress on few-shot generalization. However, this is usuallyperformed in a centralized learning fashion, ignoring the privacy sensitivityissue of (annotated) data used in multiple tasks. To mitigate this issue, wepropose FewFedWeight, a few-shot federated learning framework across multipletasks, to achieve the best of both worlds: privacy preservation and cross-taskgeneralization. 
FewFedWeight trains client models in isolated devices withoutsharing data. It broadcasts the global model in the server to each client andproduces pseudo data for clients so that knowledge from the global model can beexplored to enhance few-shot learning of each client model. An energy-basedalgorithm is further proposed to weight pseudo samples in order to reduce thenegative impact of noise from the generated pseudo data. Adaptive model weightsof client models are also tuned according to their performance. We use thesemodel weights to dynamically aggregate client models to update the globalmodel. Experiments on 118 NLP tasks show that FewFedWeight can significantlyimprove the performance of client models on 61% tasks with an averageperformance improvement rate of 30.5% over the baseline and substantiallyoutperform FedAvg and other decentralized learning methods.",,arXiv,['cs.cl'],, -897,contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning,"['Chris Lengerich', 'Gabriel Synnaeve', 'Amy Zhang', 'Hugh Leather', 'Kurt Shuster', 'François Charton', 'Charysse Redwood']",http://arxiv.org/pdf/2212.11353v1.pdf,2022-12-21,," Traditional approaches to RL have focused on learning decision policiesdirectly from episodic decisions, while slowly and implicitly learning thesemantics of compositional representations needed for generalization. Whilesome approaches have been adopted to refine representations via auxiliaryself-supervised losses while simultaneously learning decision policies,learning compositional representations from hand-designed andcontext-independent self-supervised losses (multi-view) still adapts relativelyslowly to the real world, which contains many non-IID subspaces requiring rapiddistribution shift in both time and spatial attention patterns at varyinglevels of abstraction. In contrast, supervised language model cascades haveshown the flexibility to adapt to many diverse manifolds, and hints ofself-learning needed for autonomous task transfer. However, to date, transfermethods for language models like few-shot learning and fine-tuning stillrequire human supervision and transfer learning using self-learning methods hasbeen underexplored. We propose a self-supervised loss policy called contrastivedistillation which manifests latent variables with high mutual information withboth source and target tasks from weights to tokens. 
We show how thisoutperforms common methods of transfer learning and suggests a useful designaxis of trading off compute for generalizability for online transfer.Contrastive distillation is improved through sampling from memory and suggestsa simple algorithm for more efficiently sampling negative examples forcontrastive losses than random sampling.",,arXiv,"['cs.cl', 'cs.lg']",, -898,exploring efficient fewshot adaptation for vision transformers,"['Chengming Xu', 'Siqian Yang', 'Yabiao Wang', 'Zhanxiong Wang', 'Yanwei Fu', 'Xiangyang Xue']",http://arxiv.org/pdf/2301.02419v1.pdf,2023-01-06,," The task of Few-shot Learning (FSL) aims to do the inference on novelcategories containing only few labeled examples, with the help of knowledgelearned from base categories containing abundant labeled training samples.While there are numerous works into FSL task, Vision Transformers (ViTs) haverarely been taken as the backbone to FSL with few trials focusing on naivefinetuning of whole backbone or classification layer.} Essentially, despiteViTs have been shown to enjoy comparable or even better performance on othervision tasks, it is still very nontrivial to efficiently finetune the ViTs inreal-world FSL scenarios. To this end, we propose a novel efficient TransformerTuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The keynovelties come from the newly presented Attentive Prefix Tuning (APT) andDomain Residual Adapter (DRA) for the task and backbone tuning, individually.Specifically, in APT, the prefix is projected to new key and value pairs thatare attached to each self-attention layer to provide the model withtask-specific information. Moreover, we design the DRA in the form of learnableoffset vectors to handle the potential domain gaps between base and novel data.To ensure the APT would not deviate from the initial task-specific informationmuch, we further propose a novel prototypical regularization, which maximizesthe similarity between the projected distribution of prefix and initialprototypes, regularizing the update procedure. Our method receives outstandingperformance on the challenging Meta-Dataset. We conduct extensive experimentsto show the efficacy of our model.",,arXiv,['cs.cv'],, -899,unleashing the power of shared label structures for human activity recognition,"['Xiyuan Zhang', 'Ranak Roy Chowdhury', 'Jiayun Zhang', 'Dezhi Hong', 'Rajesh K. Gupta', 'Jingbo Shang']",http://arxiv.org/pdf/2301.03462v2.pdf,2023-01-01,," Current human activity recognition (HAR) techniques regard activity labels asinteger class IDs without explicitly modeling the semantics of class labels. Weobserve that different activity names often have shared structures. Forexample, ""open door"" and ""open fridge"" both have ""open"" as the action; ""kickingsoccer ball"" and ""playing tennis ball"" both have ""ball"" as the object. Suchshared structures in label names can be translated to the similarity in sensorydata and modeling common structures would help uncover knowledge acrossdifferent activities, especially for activities with limited samples. In thispaper, we propose SHARE, a HAR framework that takes into account sharedstructures of label names for different activities. To exploit the sharedstructures, SHARE comprises an encoder for extracting features from inputsensory time series and a decoder for generating label names as a tokensequence. 
We also propose three label augmentation techniques to help the modelmore effectively capture semantic structures across activities, including abasic token-level augmentation, and two enhanced embedding-level andsequence-level augmentations utilizing the capabilities of pre-trained models.SHARE outperforms state-of-the-art HAR models in extensive experiments on sevenHAR benchmark datasets. We also evaluate in few-shot learning and labelimbalance settings and observe even more significant performance gap.",,arXiv,"['cs.lg', 'cs.ai', 'eess.sp']",, -900,"see, think, confirm interactive prompting between vision and language models for knowledgebased visual reasoning","['Zhenfang Chen', 'Qinhong Zhou', 'Yikang Shen', 'Yining Hong', 'Hao Zhang', 'Chuang Gan']",http://arxiv.org/pdf/2301.05226v1.pdf,2023-01-12,," Large pre-trained vision and language models have demonstrated remarkablecapacities for various tasks. However, solving the knowledge-based visualreasoning tasks remains challenging, which requires a model to comprehensivelyunderstand image content, connect the external world knowledge, and performstep-by-step reasoning to answer the questions correctly. To this end, wepropose a novel framework named Interactive Prompting Visual Reasoner (IPVR)for few-shot knowledge-based visual reasoning. IPVR contains three stages, see,think and confirm. The see stage scans the image and grounds the visual conceptcandidates with a visual perception model. The think stage adopts a pre-trainedlarge language model (LLM) to attend to the key concepts from candidatesadaptively. It then transforms them into text context for prompting with avisual captioning model and adopts the LLM to generate the answer. The confirmstage further uses the LLM to generate the supporting rationale to the answer,verify the generated rationale with a cross-modality classifier and ensure thatthe rationale can infer the predicted output consistently. We conductexperiments on a range of knowledge-based visual reasoning datasets. We foundour IPVR enjoys several benefits, 1). it achieves better performance than theprevious few-shot learning baselines; 2). it enjoys the total transparency andtrustworthiness of the whole reasoning process by providing rationales for eachreasoning step; 3). it is computation-efficient compared with other fine-tuningbaselines.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, -901,large language models are latent variable models explaining and finding good demonstrations for incontext learning,"['Xinyi Wang', 'Wanrong Zhu', 'Michael Saxon', 'Mark Steyvers', 'William Yang Wang']",http://arxiv.org/pdf/2301.11916v3.pdf,2023-01-27,," In recent years, pre-trained large language models (LLMs) have demonstratedremarkable efficiency in achieving an inference-time few-shot learningcapability known as in-context learning. However, existing literature hashighlighted the sensitivity of this capability to the selection of few-shotdemonstrations. Current understandings of the underlying mechanisms by whichthis capability arises from regular language model pretraining objectivesremain disconnected from the real-world LLMs. This study aims to examine thein-context learning phenomenon through a Bayesian lens, viewing real-world LLMsas latent variable models. On this premise, we propose an algorithm to selectoptimal demonstrations from a set of annotated data with a small LM, and thendirectly generalize the selected demonstrations to larger LMs. 
We demonstratesignificant improvement over baselines, averaged over eight GPT models on eightreal-world text classification datasets. We also demonstrate the real-worldusefulness of our algorithm on GSM8K, a math word problem dataset. Ourempirical findings support our hypothesis that LLMs implicitly infer a latentvariable containing task information.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -902,language quantized autoencoders towards unsupervised textimage alignment,"['Hao Liu', 'Wilson Yan', 'Pieter Abbeel']",http://arxiv.org/pdf/2302.00902v2.pdf,2023-02-02,," Recent progress in scaling up large language models has shown impressivecapabilities in performing few-shot learning across a wide range of text-basedtasks. However, a key limitation is that these language models fundamentallylack visual perception - a crucial attribute needed to extend these models tobe able to interact with the real world and solve vision tasks, such as invisual-question answering and robotics. Prior works have largely connectedimage to text through pretraining and/or fine-tuning on curated image-textdatasets, which can be a costly and expensive process. In order to resolve thislimitation, we propose a simple yet effective approach calledLanguage-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns toalign text-image data in an unsupervised manner by leveraging pretrainedlanguage models (e.g., BERT, RoBERTa). Our main idea is to encode image assequences of text tokens by directly quantizing image embeddings using apretrained language codebook. We then apply random masking followed by a BERTmodel, and have the decoder reconstruct the original image from BERT predictedtext token embeddings. By doing so, LQAE learns to represent similar imageswith similar clusters of text tokens, thereby aligning these two modalitieswithout the use of aligned text-image pairs. This enables few-shot imageclassification with large language models (e.g., GPT-3) as well as linearclassification of images based on BERT text features. To the best of ourknowledge, our work is the first work that uses unaligned images for multimodaltasks by leveraging the power of pretrained language models.",,arXiv,"['cs.lg', 'cs.cl', 'cs.cv']",, -903,the unreasonable effectiveness of fewshot learning for machine translation,"['Xavier Garcia', 'Yamini Bansal', 'Colin Cherry', 'George Foster', 'Maxim Krikun', 'Fangxiaoyu Feng', 'Melvin Johnson', 'Orhan Firat']",http://arxiv.org/pdf/2302.01398v1.pdf,2023-02-02,," We demonstrate the potential of few-shot translation systems, trained withunpaired language data, for both high and low-resource language pairs. We showthat with only 5 examples of high-quality translation data shown at inference,a transformer decoder-only model trained solely with self-supervised learning,is able to match specialized supervised state-of-the-art models as well as moregeneral commercial translation systems. In particular, we outperform the bestperforming system on the WMT'21 English - Chinese news translation task by onlyusing five examples of English - Chinese parallel data at inference. Moreover,our approach in building these models does not necessitate joint multilingualtraining or back-translation, is conceptually simple and shows the potential toextend to the multilingual setting. Furthermore, the resulting models are twoorders of magnitude smaller than state-of-the-art language models. 
We thenanalyze the factors which impact the performance of few-shot translationsystems, and highlight that the quality of the few-shot demonstrations heavilydetermines the quality of the translations generated by our models. Finally, weshow that the few-shot paradigm also provides a way to control certainattributes of the translation -- we show that we are able to control forregional varieties and formality using only a five examples at inference,paving the way towards controllable machine translation systems.",,arXiv,['cs.cl'],, -904,crosscodebench benchmarking crosstask generalization of source code models,"['Changan Niu', 'Chuanyi Li', 'Vincent Ng', 'Bin Luo']",http://arxiv.org/pdf/2302.04030v2.pdf,2023-02-08,," Despite the recent advances showing that a model pre-trained on large-scalesource code data is able to gain appreciable generalization capability, itstill requires a sizeable amount of data on the target task for fine-tuning.And the effectiveness of the model generalization is largely affected by thesize and quality of the fine-tuning data, which is detrimental for target taskswith limited or unavailable resources. Therefore, cross-task generalization,with the goal of improving the generalization of the model to unseen tasks thathave not been seen before, is of strong research and application value. In this paper, we propose a large-scale benchmark that includes 216 existingcode-related tasks. Then, we annotate each task with the corresponding metainformation such as task description and instruction, which contains detailedinformation about the task and a solution guide. This also helps us to easilycreate a wide variety of ``training/evaluation'' task splits to evaluate thevarious cross-task generalization capabilities of the model. Then we performsome preliminary experiments to demonstrate that the cross-task generalizationof models can be largely improved by in-context learning methods such asfew-shot learning and learning from task instructions, which shows thepromising prospects of conducting cross-task learning research on ourbenchmark. We hope that the collection of the datasets and our benchmark willfacilitate future work that is not limited to cross-task generalization.",,arXiv,"['cs.se', 'cs.ai']",, -905,revilm retrievalaugmented visual language model for zero and fewshot image captioning,"['Zhuolin Yang', 'Wei Ping', 'Zihan Liu', 'Vijay Korthikanti', 'Weili Nie', 'De-An Huang', 'Linxi Fan', 'Zhiding Yu', 'Shiyi Lan', 'Bo Li', 'Ming-Yu Liu', 'Yuke Zhu', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Chaowei Xiao', 'Anima Anandkumar']",http://arxiv.org/pdf/2302.04858v2.pdf,2023-02-09,," Augmenting pretrained language models (LMs) with a vision encoder (e.g.,Flamingo) has obtained the state-of-the-art results in image-to-textgeneration. However, these models store all the knowledge within theirparameters, thus often requiring enormous model parameters to model theabundant visual concepts and very rich textual descriptions. Additionally, theyare inefficient in incorporating new data, requiring a computational-expensivefine-tuning process. In this work, we introduce a Retrieval-augmented VisualLanguage Model, Re-ViLM, built upon the Flamingo, that supports retrieving therelevant knowledge from the external database for zero and in-context few-shotimage-to-text generations. 
By storing certain knowledge explicitly in theexternal database, our approach reduces the number of model parameters and caneasily accommodate new data during evaluation by simply updating the database.We also construct an interleaved image and text data that facilitatesin-context few-shot learning capabilities. We demonstrate that Re-ViLMsignificantly boosts performance for image-to-text generation tasks, especiallyfor zero-shot and few-shot generation in out-of-domain settings with 4 timesless parameters compared with baseline methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.ir', 'cs.lg']",, -906,maskguided bert for few shot text classification,"['Wenxiong Liao', 'Zhengliang Liu', 'Haixing Dai', 'Zihao Wu', 'Yiyang Zhang', 'Xiaoke Huang', 'Yuzhong Chen', 'Xi Jiang', 'Wei Liu', 'Dajiang Zhu', 'Tianming Liu', 'Sheng Li', 'Xiang Li', 'Hongmin Cai']",http://arxiv.org/pdf/2302.10447v3.pdf,2023-02-21,," Transformer-based language models have achieved significant success invarious domains. However, the data-intensive nature of the transformerarchitecture requires much labeled data, which is challenging in low-resourcescenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is thedifficulty of training robust models on small amounts of samples, whichfrequently leads to overfitting. Here we present Mask-BERT, a simple andmodular framework to help BERT-based architectures tackle FSL. The proposedapproach fundamentally differs from existing FSL strategies such as prompttuning and meta-learning. The core idea is to selectively apply masks on textinputs and filter out irrelevant information, which guides the model to focuson discriminative tokens that influence prediction results. In addition, tomake the text representations from different categories more separable and thetext representations from the same category more compact, we introduce acontrastive learning loss function. Experimental results on public-domainbenchmark datasets demonstrate the effectiveness of Mask-BERT.",,arXiv,"['cs.cl', 'cs.ai']",, -907,metalearning with adaptive weighted loss for imbalanced coldstart recommendation,"['Minchang Kim', 'Yongjin Yang', 'Jung Hyun Ryu', 'Taesup Kim']",http://arxiv.org/pdf/2302.14640v2.pdf,2023-02-28,," Sequential recommenders have made great strides in capturing a user'spreferences. Nevertheless, the cold-start recommendation remains a fundamentalchallenge as they typically involve limited user-item interactions forpersonalization. Recently, gradient-based meta-learning approaches have emergedin the sequential recommendation field due to their fast adaptation andeasy-to-integrate abilities. The meta-learning algorithms formulate thecold-start recommendation as a few-shot learning problem, where each user isrepresented as a task to be adapted. While meta-learning algorithms generallyassume that task-wise samples are evenly distributed over classes or values,user-item interactions in real-world applications do not conform to such adistribution (e.g., watching favorite videos multiple times, leaving onlypositive ratings without any negative ones). Consequently, imbalanced userfeedback, which accounts for the majority of task training data, may dominatethe user adaptation process and prevent meta-learning algorithms from learningmeaningful meta-knowledge for personalized recommendations. 
To alleviate thislimitation, we propose a novel sequential recommendation framework based ongradient-based meta-learning that captures the imbalanced rating distributionof each user and computes adaptive loss for user-specific learning. Our work isthe first to tackle the impact of imbalanced ratings in cold-start sequentialrecommendation scenarios. Through extensive experiments conducted on real-worlddatasets, we demonstrate the effectiveness of our framework.",,arXiv,"['cs.ir', 'cs.lg']",, -908,knowledgeaugmented fewshot visual relation detection,"['Tianyu Yu', 'Yangning Li', 'Jiaoyan Chen', 'Yinghui Li', 'Hai-Tao Zheng', 'Xi Chen', 'Qingbin Liu', 'Wenqiang Liu', 'Dongxiao Huang', 'Bei Wu', 'Yexin Wang']",http://arxiv.org/pdf/2303.05342v1.pdf,2023-03-09,," Visual Relation Detection (VRD) aims to detect relationships between objectsfor image understanding. Most existing VRD methods rely on thousands oftraining samples of each relationship to achieve satisfactory performance. Somerecent papers tackle this problem by few-shot learning with elaboratelydesigned pipelines and pre-trained word vectors. However, the performance ofexisting few-shot VRD models is severely hampered by the poor generalizationcapability, as they struggle to handle the vast semantic diversity of visualrelationships. Nonetheless, humans have the ability to learn new relationshipswith just few examples based on their knowledge. Inspired by this, we devise aknowledge-augmented, few-shot VRD framework leveraging both textual knowledgeand visual relation knowledge to improve the generalization ability of few-shotVRD. The textual knowledge and visual relation knowledge are acquired from apre-trained language model and an automatically constructed visual relationknowledge graph, respectively. We extensively validate the effectiveness of ourframework. Experiments conducted on three benchmarks from the commonly usedVisual Genome dataset show that our performance surpasses existingstate-of-the-art models with a large improvement.",,arXiv,"['cs.cv', 'cs.ai']",, -909,hqp a humanannotated dataset for detecting online propaganda,"['Abdurahman Maarouf', 'Dominik Bär', 'Dominique Geissler', 'Stefan Feuerriegel']",http://arxiv.org/pdf/2304.14931v2.pdf,2023-04-28,," Online propaganda poses a severe threat to the integrity of societies.However, existing datasets for detecting online propaganda have a keylimitation: they were annotated using weak labels that can be noisy and evenincorrect. To address this limitation, our work makes the followingcontributions: (1) We present HQP: a novel dataset (N=30,000) for detectingonline propaganda with high-quality labels. To the best of our knowledge, HQPis the first dataset for detecting online propaganda that was created throughhuman annotation. (2) We show empirically that state-of-the-art language modelsfail in detecting online propaganda when trained with weak labels (AUC: 64.03).In contrast, state-of-the-art language models can accurately detect onlinepropaganda when trained with our high-quality labels (AUC: 92.25), which is animprovement of ~44%. (3) To address the cost of labeling, we extend our work tofew-shot learning. Specifically, we show that prompt-based learning using asmall sample of high-quality labels can still achieve a reasonable performance(AUC: 80.27). Finally, we discuss implications for the NLP community to balancethe cost and quality of labeling. 
Crucially, our work highlights the importance of high-quality labels for sensitive NLP tasks such as propaganda detection.",,arXiv,['cs.cl'],, -910,parameterefficient crosslingual transfer of vision and language models via translationbased alignment,"['Zhen Zhang', 'Jialu Wang', 'Xin Eric Wang']",http://arxiv.org/pdf/2305.03510v2.pdf,2023-05-02,," Pre-trained vision and language models such as CLIP have witnessed remarkable success in connecting images and texts with a primary focus on English texts. Despite recent efforts to extend CLIP to support other languages, disparities in performance among different languages have been observed due to uneven resource availability. Additionally, current cross-lingual transfer methods of those pre-trained models would consume excessive resources for a large number of languages. Therefore, we propose a new parameter-efficient cross-lingual transfer learning framework that utilizes a translation-based alignment method to mitigate multilingual disparities and explores parameter-efficient fine-tuning methods for parameter-efficient cross-lingual transfer. Extensive experiments on XTD and Multi30K datasets, covering 11 languages under zero-shot, few-shot, and full-dataset learning scenarios, show that our framework significantly reduces the multilingual disparities among languages and improves cross-lingual transfer results, especially in low-resource scenarios, while only keeping and fine-tuning an extremely small number of parameters compared to the full model (e.g., Our framework only requires 0.16\% additional parameters of a full-model for each language in the few-shot learning scenario). The codes are available at \url{https://github.com/eric-ai-lab/PECTVLM}. The codes are available at \url{https://github.com/eric-ai-lab/PECTVLM}.",,arXiv,"['cs.cl', 'cs.ai']",, -911,qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model,"['Jiageng Wu', 'Xian Wu', 'Zhaopeng Qiu', 'Minghui Li', 'Yefeng Zheng', 'Jie Yang']",http://arxiv.org/pdf/2305.10163v2.pdf,2023-05-17,," Generative Pre-Training (GPT) models like ChatGPT have demonstrated exceptional performance in various Natural Language Processing (NLP) tasks. Although ChatGPT has been integrated into the overall workflow to boost efficiency in many domains, the lack of flexibility in the finetuning process hinders its applications in areas that demand extensive domain expertise and semantic knowledge, such as healthcare. In this paper, we evaluate ChatGPT on the China National Medical Licensing Examination (CNMLE) and propose a novel approach to improve ChatGPT from two perspectives: integrating medical domain knowledge and enabling few-shot learning. By using a simple but effective retrieval method, medical background knowledge is extracted as semantic instructions to guide the inference of ChatGPT. Similarly, relevant medical questions are identified and fed as demonstrations to ChatGPT. Experimental results show that directly applying ChatGPT fails to qualify the CNMLE at a score of 51 (i.e., only 51\% of questions are answered correctly). While our knowledge-enhanced model achieves a high score of 70 on CNMLE-2022 which not only passes the qualification but also surpasses the average score of humans (61).
This research demonstrates the potential of knowledge-enhanced ChatGPT to serve as versatile medical assistants, capable of analyzing real-world medical problems in a more accessible, user-friendly, and adaptable manner.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, -912,sentiment analysis in the era of large language models a reality check,"['Wenxuan Zhang', 'Yue Deng', 'Bing Liu', 'Sinno Jialin Pan', 'Lidong Bing']",http://arxiv.org/pdf/2305.15005v1.pdf,2023-05-24,," Sentiment analysis (SA) has been a long-standing research area in natural language processing. It can offer rich insights into human sentiments and opinions and has thus seen considerable interest from both academia and industry. With the advent of large language models (LLMs) such as ChatGPT, there is a great potential for their employment on SA problems. However, the extent to which existing LLMs can be leveraged for different sentiment analysis tasks remains unclear. This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts. We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets. Our study reveals that while LLMs demonstrate satisfactory performance in simpler tasks, they lag behind in more complex tasks requiring deeper understanding or structured sentiment information. However, LLMs significantly outperform SLMs in few-shot learning settings, suggesting their potential when annotation resources are limited. We also highlight the limitations of current evaluation practices in assessing LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a more comprehensive and realistic evaluation. Data and code during our investigations are available at \url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.",,arXiv,['cs.cl'],, -913,impact of large language models on generating software specifications,"['Danning Xie', 'Byungwoo Yoo', 'Nan Jiang', 'Mijung Kim', 'Lin Tan', 'Xiangyu Zhang', 'Judy S. Lee']",http://arxiv.org/pdf/2306.03324v2.pdf,2023-06-06,," Software specifications are essential for ensuring the reliability of software systems. Existing specification extraction approaches, however, suffer from limited generalizability and require manual efforts. The recent emergence of Large Language Models (LLMs), which have been successfully applied to numerous software engineering tasks, offers a promising avenue for automating this process. In this paper, we conduct the first empirical study to evaluate the capabilities of LLMs for generating software specifications from software comments or documentation. We evaluate LLMs' performance with Few Shot Learning (FSL), enabling LLMs to generalize from a small number of examples, as well as different prompt construction strategies, and compare the performance of LLMs with traditional approaches. Additionally, we conduct a comparative diagnosis of the failure cases from both LLMs and traditional methods, identifying their unique strengths and weaknesses. Lastly, we conduct extensive experiments on 15 state of the art LLMs, evaluating their performance and cost effectiveness for generating software specifications. Our results show that with FSL, LLMs outperform traditional methods (by 5.6%), and more sophisticated prompt construction strategies can further enlarge this performance gap (up to 5.1 to 10.0%).
Yet, LLMs suffer from their unique challenges, such as ineffective prompts and the lack of domain knowledge, which together account for 53 to 60% of LLM unique failures. The strong performance of open source models (e.g., StarCoder) makes closed source models (e.g., GPT 3 Davinci) less desirable due to size and cost. Our study offers valuable insights for future research to improve specification generation.",,arXiv,['cs.se'],, -914,prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation,"['Balamurali Murugesan', 'Rukhshanda Hussain', 'Rajarshi Bhattacharya', 'Ismail Ben Ayed', 'Jose Dolz']",http://arxiv.org/pdf/2307.00097v2.pdf,2023-06-30,," Recently, CLIP-based approaches have exhibited remarkable performance on generalization and few-shot learning tasks, fueled by the power of contrastive language-vision pre-training. In particular, prompt tuning has emerged as an effective strategy to adapt the pre-trained language-vision models to downstream tasks by employing task-related textual tokens. Motivated by this progress, in this work we question whether other fundamental problems, such as weakly supervised semantic segmentation (WSSS), can benefit from prompt tuning. Our findings reveal two interesting observations that shed light on the impact of prompt tuning on WSSS. First, modifying only the class token of the text prompt results in a greater impact on the Class Activation Map (CAM), compared to arguably more complex strategies that optimize the context. And second, the class token associated with the image ground truth does not necessarily correspond to the category that yields the best CAM. Motivated by these observations, we introduce a novel approach based on a PrOmpt cLass lEarning (POLE) strategy. Through extensive experiments we demonstrate that our simple, yet efficient approach achieves SOTA performance in a well-known WSSS benchmark. These results highlight not only the benefits of language-vision models in WSSS but also the potential of prompt learning for this problem. The code is available at https://github.com/rB080/WSS_POLE.",,arXiv,['cs.cv'],, -915,text descriptions are compressive and invariant representations for visual learning,"['Zhili Feng', 'Anna Bair', 'J. Zico Kolter']",http://arxiv.org/pdf/2307.04317v2.pdf,2023-07-10,," Modern image classification is based upon directly predicting classes via large discriminative networks, which do not directly contain information about the intuitive visual features that may constitute a classification decision. Recently, work in vision-language models (VLM) such as CLIP has provided ways to specify natural language descriptions of image classes, but typically focuses on providing single descriptions for each class. In this work, we demonstrate that an alternative approach, in line with humans' understanding of multiple visual features per class, can also provide compelling performance in the robust few-shot learning setting. In particular, we introduce a novel method, \textit{SLR-AVD (Sparse Logistic Regression using Augmented Visual Descriptors)}. This method first automatically generates multiple visual descriptions of each class via a large language model (LLM), then uses a VLM to translate these descriptions to a set of visual feature embeddings of each image, and finally uses sparse logistic regression to select a relevant subset of these features to classify each image.
Core to our approach is the fact that, information-theoretically, these descriptive features are more invariant to domain shift than traditional image embeddings, even though the VLM training process is not explicitly designed for invariant representation learning. These invariant descriptive features also compose a better input compression scheme. When combined with finetuning, we show that SLR-AVD is able to outperform existing state-of-the-art finetuning approaches on both in-distribution and out-of-distribution performance.",,arXiv,"['cs.cv', 'cs.lg']",, -916,dialogstudio towards richest and most diverse unified dataset collection for conversational ai,"['Jianguo Zhang', 'Kun Qian', 'Zhiwei Liu', 'Shelby Heinecke', 'Rui Meng', 'Ye Liu', 'Zhou Yu', 'Huan Wang', 'Silvio Savarese', 'Caiming Xiong']",http://arxiv.org/pdf/2307.10172v2.pdf,2023-07-19,," Despite advancements in conversational AI, language models encounter challenges to handle diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness. To tackle these issues, we introduce DialogStudio: the largest and most diverse collection of dialogue datasets, unified under a consistent format while preserving their original information. Our collection encompasses data from open-domain dialogues, task-oriented dialogues, natural language understanding, conversational recommendation, dialogue summarization, and knowledge-grounded dialogues, making it an incredibly rich and diverse resource for dialogue research and model training. To further enhance the utility of DialogStudio, we identify the licenses for each dataset and design domain-aware prompts for selected dialogues to facilitate instruction-aware fine-tuning. Furthermore, we develop conversational AI models using the dataset collection, and our experiments in both zero-shot and few-shot learning scenarios demonstrate the superiority of DialogStudio. To improve transparency and support dataset and task-based research, as well as language model pre-training, all datasets, licenses, codes, and models associated with DialogStudio are made publicly accessible at https://github.com/salesforce/DialogStudio",,arXiv,"['cs.cl', 'cs.ai']",, -917,mutual reinforcement effects in japanese sentence classification and named entity recognition tasks,"['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori']",http://arxiv.org/pdf/2307.10291v2.pdf,2023-07-18,," Information extraction(IE) is a crucial subfield within natural language processing. However, for the traditionally segmented approach to sentence classification and Named Entity Recognition, the intricate interactions between these individual subtasks remain largely uninvestigated. In this study, we propose an integrative analysis, converging sentence classification with Named Entity Recognition, with the objective to unveil and comprehend the mutual reinforcement effect within these two information extraction subtasks. To achieve this, we introduce a Sentence Classification and Named Entity Recognition Multi-task (SCNM) approach that combines Sentence Classification (SC) and Named Entity Recognition (NER). We develop a Sentence-to-Label Generation (SLG) framework for SCNM and construct a Wikipedia dataset containing both SC and NER. Using a format converter, we unify input formats and employ a generative model to generate SC-labels, NER-labels, and associated text segments. We propose a Constraint Mechanism (CM) to improve generated format accuracy.
Our results show SC accuracy increased by 1.13 points and NER by 1.06 points in SCNM compared to standalone tasks, with CM raising format accuracy from 63.61 to 100. The findings indicate mutual reinforcement effects between SC and NER, and integration enhances both tasks' performance. We additionally implemented the SLG framework on single SC task. It yielded superior accuracies compared to the baseline on two distinct Japanese SC datasets. Notably, in the experiment of few-shot learning, SLG framework shows much better performance than fine-tune method. These empirical findings contribute additional evidence to affirm the efficacy of the SLG framework.",,arXiv,['cs.cl'],, -918,chatgpt for arabic grammatical error correction,"['Sang Yun Kwon', 'Gagan Bhatia', 'El Moatez Billah Nagoud', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2308.04492v1.pdf,2023-08-08,," Recently, large language models (LLMs) fine-tuned to follow human instruction have exhibited significant capabilities in various English NLP tasks. However, their performance in grammatical error correction (GEC) tasks, particularly in non-English languages, remains significantly unexplored. In this paper, we delve into abilities of instruction fine-tuned LLMs in Arabic GEC, a task made complex due to Arabic's rich morphology. Our findings suggest that various prompting methods, coupled with (in-context) few-shot learning, demonstrate considerable effectiveness, with GPT-4 achieving up to $65.49$ F\textsubscript{1} score under expert prompting (approximately $5$ points higher than our established baseline). This highlights the potential of LLMs in low-resource settings, offering a viable approach for generating useful synthetic data for model training. Despite these positive results, we find that instruction fine-tuned models, regardless of their size, significantly underperform compared to fully fine-tuned models of significantly smaller sizes. This disparity highlights a substantial room for improvements for LLMs. Inspired by methods from low-resource machine translation, we also develop a method exploiting synthetic data that significantly outperforms previous models on two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with $72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.",,arXiv,['cs.ai'],, -919,llmebench a flexible framework for accelerating llms benchmarking,"['Fahim Dalvi', 'Maram Hasanain', 'Sabri Boughorbel', 'Basel Mousi', 'Samir Abdaljalil', 'Nizi Nazar', 'Ahmed Abdelali', 'Shammur Absar Chowdhury', 'Hamdy Mubarak', 'Ahmed Ali', 'Majd Hawasly', 'Nadir Durrani', 'Firoj Alam']",http://arxiv.org/pdf/2308.04945v1.pdf,2023-08-09,," The recent development and success of Large Language Models (LLMs) necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, their customization capabilities for specific tasks and datasets are often complex for different users. In this study, we introduce the LLMeBench framework. Initially developed to evaluate Arabic NLP tasks using OpenAI's GPT and BLOOM models; it can be seamlessly customized for any NLP task and model, regardless of language. The framework also features zero- and few-shot learning settings. A new custom dataset can be added in less than 10 minutes, and users can use their own model API keys to evaluate the task at hand.
The developed framework has been already tested on 31 unique NLP tasks using 53 publicly available datasets within 90 experimental setups, involving approximately 296K data points. We plan to open-source the framework for the community (https://github.com/qcri/LLMeBench/). A video demonstrating the framework is available online (https://youtu.be/FkQn4UjYA0s).",,arXiv,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",, -920,codecot and beyond learning to program and test like a developer,"['Dong Huang', 'Qingwen Bu', 'Heming Cui']",http://arxiv.org/pdf/2308.08784v1.pdf,2023-08-17,," In natural language processing, transformer-based large language models (LLMs) like GPT-x models developed by OpenAI have revolutionized the landscape. Despite their impressive capabilities, these models often encounter challenges when handling tasks that differ from their training data, resulting in compromised performance. To address this, few-shot learning has emerged as a valuable technique, allowing LLMs to adapt with minimal task-specific data. One innovative strategy, known as Chain-of-Thought Prompting (CoT), has been introduced to guide LLMs in revealing cognitive processes during multi-step reasoning. In this paper, we propose Code Chain-of-Thought~(CodeCoT), which consists of two components: the Vanilla CodeCoT and the Self-exam CodeCoT. The latter incorporates self-examination, empowering the model to iteratively generate code, formulate test cases, and refine its outputs. Specifically, the process entails the generation of test examples by the model corresponding to the code it is tasked to implement. If it fails on the test examples, then it regenerates the code based on the erroneous code and associated error types. Through comprehensive experiments, we observed that both techniques significantly enhance code generation accuracy across various LLM variants. Our evaluation results reveal that CodeCoT improves the code generation effectiveness, including an unprecedented pass@1 accuracy of 79.27\% using the Self-exam CodeCoT approach on the gpt-3.5-turbo-0613 model in the HumanEval dataset.",,arXiv,"['cs.se', 'cs.ai']",, -921,diagnosing infeasible optimization problems using large language models,"['Hao Chen', 'Gonzalo E. Constante-Flores', 'Can Li']",http://arxiv.org/pdf/2308.12923v1.pdf,2023-08-23,," Decision-making problems can be represented as mathematical optimization models, finding wide applications in fields such as economics, engineering and manufacturing, transportation, and health care. Optimization models are mathematical abstractions of the problem of making the best decision while satisfying a set of requirements or constraints. One of the primary barriers to deploying these models in practice is the challenge of helping practitioners understand and interpret such models, particularly when they are infeasible, meaning no decision satisfies all the constraints. Existing methods for diagnosing infeasible optimization models often rely on expert systems, necessitating significant background knowledge in optimization. In this paper, we introduce OptiChat, a first-of-its-kind natural language-based system equipped with a chatbot GUI for engaging in interactive conversations about infeasible optimization models. OptiChat can provide natural language descriptions of the optimization model itself, identify potential sources of infeasibility, and offer suggestions to make the model feasible.
The implementation of OptiChat is built on GPT-4, which interfaces with an optimization solver to identify the minimal subset of constraints that render the entire optimization problem infeasible, also known as the Irreducible Infeasible Subset (IIS). We utilize few-shot learning, expert chain-of-thought, key-retrieve, and sentiment prompts to enhance OptiChat's reliability. Our experiments demonstrate that OptiChat assists both expert and non-expert users in improving their understanding of the optimization models, enabling them to quickly identify the sources of infeasibility.",,arXiv,"['cs.hc', 'cs.cl', 'cs.lg', 'math.oc']",, -922,"longbench a bilingual, multitask benchmark for long context understanding","['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Hongchang Lyu', 'Jiankai Tang', 'Zhidian Huang', 'Zhengxiao Du', 'Xiao Liu', 'Aohan Zeng', 'Lei Hou', 'Yuxiao Dong', 'Jie Tang', 'Juanzi Li']",http://arxiv.org/pdf/2308.14508v1.pdf,2023-08-28,," Although large language models (LLMs) demonstrate impressive performance for many language tasks, most of them can only handle texts a few thousand tokens long, limiting their applications on longer sequence inputs, such as books, reports, and codebases. Recent works have proposed methods to improve LLMs' long context capabilities by extending context windows and more sophisticated memory mechanisms. However, comprehensive benchmarks tailored for evaluating long context understanding are lacking. In this paper, we introduce LongBench, the first bilingual, multi-task benchmark for long context understanding, enabling a more rigorous evaluation of long context understanding. LongBench comprises 21 datasets across 6 task categories in both English and Chinese, with an average length of 6,711 words (English) and 13,386 characters (Chinese). These tasks cover key long-text application areas including single-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks, and code completion. All datasets in LongBench are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Upon comprehensive evaluation of 8 LLMs on LongBench, we find that: (1) Commercial model (GPT-3.5-Turbo-16k) outperforms other open-sourced models, but still struggles on longer contexts. (2) Scaled position embedding and fine-tuning on longer sequences lead to substantial improvement on long context understanding. (3) Context compression technique such as retrieval brings improvement for model with weak ability on long contexts, but the performance still lags behind models that have strong long context understanding capability. The code and datasets are available at https://github.com/THUDM/LongBench.",,arXiv,['cs.cl'],, -923,askit unified programming interface for programming with large language models,"['Katsumi Okuda', 'Saman Amarasinghe']",http://arxiv.org/pdf/2308.15645v1.pdf,2023-08-29,," In the evolving landscape of software development, Large Language Models (LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating adeptness across numerous tasks, from text summarization to code generation. While these abilities open up novel avenues in software design and crafting, their incorporation presents substantial challenges. Developers grapple with decisions surrounding the direct embedding of LLMs within applications versus employing them for code generation. Moreover, effective prompt design becomes a critical concern, given the necessity of data extraction from natural language outputs.
To address these intricacies, this paper introduces AskIt, a domain-specific language (DSL) specifically designed for LLMs. AskIt simplifies LLM integration, offering type-guided output control, template-based function definitions, and a unified interface that diminishes the distinction between LLM-based code generation and application integration. Furthermore, through Programming by Example (PBE), AskIt harnesses the power of few-shot learning at the programming language level. Our evaluations underscore AskIt's potency. Across 50 tasks, AskIt generated concise prompts for the given tasks, achieving a 16.14% reduction in prompt length relative to benchmarks. Additionally, by enabling the transition from direct LLM application usage to function generation, AskIt achieved significant speedups, as observed in our GSM8K benchmark experiments. Through these advancements, AskIt streamlines the integration of LLMs in software development, offering a more efficient, versatile approach for leveraging emergent abilities. The implementations of AskIt in TypeScript and Python are available at https://github.com/katsumiok/ts-askit and https://github.com/katsumiok/pyaskit, respectively.",,arXiv,"['cs.pl', 'cs.ai', 'cs.se']",, -924,zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model,"['Neel Bhate', 'Ansh Mittal', 'Zhe He', 'Xiao Luo']",http://arxiv.org/pdf/2309.05475v2.pdf,2023-09-11,," Demographics, Social determinants of health, and family history documented in the unstructured text within the electronic health records are increasingly being studied to understand how this information can be utilized with the structured data to improve healthcare outcomes. After the GPT models were released, many studies have applied GPT models to extract this information from the narrative clinical notes. Different from the existing work, our research focuses on investigating the zero-shot learning on extracting this information together by providing minimum information to the GPT model. We utilize de-identified real-world clinical notes annotated for demographics, various social determinants, and family history information. Given that the GPT model might provide text different from the text in the original data, we explore two sets of evaluation metrics, including the traditional NER evaluation metrics and semantic similarity evaluation metrics, to completely understand the performance. Our results show that the GPT-3.5 method achieved an average of 0.975 F1 on demographics extraction, 0.615 F1 on social determinants extraction, and 0.722 F1 on family history extraction. We believe these results can be further improved through model fine-tuning or few-shots learning. Through the case studies, we also identified the limitations of the GPT models, which need to be addressed in future research.",,arXiv,['cs.cl'],, -925,using large language model to solve and explain physics word problems approaching human level,"['Jingzhe Ding', 'Yan Cen', 'Xinyuan Wei']",http://arxiv.org/pdf/2309.08182v2.pdf,2023-09-15,," Our work demonstrates that large language model (LLM) pre-trained on texts can not only solve pure math word problems, but also physics word problems, whose solution requires calculation and inference based on prior physical knowledge.
We collect and annotate the first physics word problem dataset-PhysQA, which contains over 1000 junior high school physics word problems (covering Kinematics, Mass&Density, Mechanics, Heat, Electricity). Then we use OpenAI's GPT3.5 to generate the answer of these problems and found that GPT3.5 could automatically solve 49.3% of the problems through zero-shot learning and 73.2% through few-shot learning. This result demonstrates that by using similar problems and their answers as prompt, LLM could solve elementary physics word problems approaching human level performance. In addition to solving problems, GPT3.5 can also summarize the knowledge or topics covered by the problems, provide relevant explanations, and generate new physics word problems based on the input. Our work is the first research to focus on the automatic solving, explanation, and generation of physics word problems across various types and scenarios, and we achieve an acceptable and state-of-the-art accuracy. This underscores the potential of LLMs for further applications in secondary education.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, -926,nnsam plugandplay segment anything model improves nnunet performance,"['Yunxiang Li', 'Bowen Jing', 'Zihan Li', 'Jing Wang', 'You Zhang']",http://arxiv.org/pdf/2309.16967v2.pdf,2023-09-29,," The recent developments of foundation models in computer vision, especially the Segment Anything Model (SAM), allow scalable and domain-agnostic image segmentation to serve as a general-purpose segmentation tool. In parallel, the field of medical image segmentation has benefited significantly from specialized neural networks like the nnUNet, which is trained on domain-specific datasets and can automatically configure the network to tailor to specific segmentation challenges. To combine the advantages of foundation models and domain-specific models, we present nnSAM, which synergistically integrates the SAM model with the nnUNet model to achieve more accurate and robust medical image segmentation. The nnSAM model leverages the powerful and robust feature extraction capabilities of SAM, while harnessing the automatic configuration capabilities of nnUNet to promote dataset-tailored learning. Our comprehensive evaluation of nnSAM model on different sizes of training samples shows that it allows few-shot learning, which is highly relevant for medical image segmentation where high-quality, annotated data can be scarce and costly to obtain. By melding the strengths of both its predecessors, nnSAM positions itself as a potential new benchmark in medical image segmentation, offering a tool that combines broad applicability with specialized efficiency. The code is available at https://github.com/Kent0n-Li/Medical-Image-Segmentation.",,arXiv,"['cs.cv', 'eess.iv']",, -927,radit retrievalaugmented dual instruction tuning,"['Xi Victoria Lin', 'Xilun Chen', 'Mingda Chen', 'Weijia Shi', 'Maria Lomeli', 'Rich James', 'Pedro Rodriguez', 'Jacob Kahn', 'Gergely Szilvasy', 'Mike Lewis', 'Luke Zettlemoyer', 'Scott Yih']",http://arxiv.org/pdf/2310.01352v3.pdf,2023-10-02,," Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build. Existing approaches require either expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance.
We introduce Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third option by retrofitting any LLM with retrieval capabilities. Our approach operates in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use retrieved information, while (2) the other updates the retriever to return more relevant results, as preferred by the LM. By fine-tuning over tasks that require both knowledge utilization and contextual awareness, we demonstrate that each stage yields significant performance improvements, and using both leads to additional gains. Our best model, RA-DIT 65B, achieves state-of-the-art performance across a range of knowledge-intensive zero- and few-shot learning benchmarks, significantly outperforming existing in-context RALM approaches by up to +8.9% in 0-shot setting and +1.4% in 5-shot setting on average.",,arXiv,"['cs.cl', 'cs.ai']",, -928,unipredict large language models are universal tabular predictors,"['Ruiyu Wang', 'Zifeng Wang', 'Jimeng Sun']",http://arxiv.org/pdf/2310.03266v1.pdf,2023-10-05,," Tabular data prediction is a fundamental machine learning task for many applications. Existing methods predominantly employ discriminative modeling and operate under the assumption of a fixed target column, necessitating re-training for every new predictive task. Inspired by the generative power of large language models (LLMs), this paper exploits the idea of building universal tabular data predictors based on generative modeling, namely UniPredict. Here, we show that scaling up an LLM to extensive tabular datasets with the capability of comprehending diverse tabular inputs and predicting for target variables following the input instructions. Specifically, we train a single LLM on an aggregation of 169 tabular datasets with diverse targets and compare its performance against baselines that are trained on each dataset separately. We observe this versatile UniPredict model demonstrates an advantage over other models, ranging from 5.4% to 13.4%, when compared with the best tree-boosting baseline and the best neural network baseline, respectively. We further test UniPredict in few-shot learning settings on another 62 tabular datasets. Our method achieves strong performance in quickly adapting to new tasks, where our method outperforms XGBoost over 100% on the low-resource setup and shows a significant margin over all baselines. We envision that UniPredict sheds light on developing a universal tabular data prediction system that learns from data at scale and serves a wide range of prediction tasks.",,arXiv,['cs.lg'],, -929,longllmlingua accelerating and enhancing llms in long context scenarios via prompt compression,"['Huiqiang Jiang', 'Qianhui Wu', 'Xufang Luo', 'Dongsheng Li', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",http://arxiv.org/pdf/2310.06839v1.pdf,2023-10-10,," In long context scenarios, large language models (LLMs) face three main challenges: higher computational/financial cost, longer latency, and inferior performance. Some studies reveal that the performance of LLMs depends on both the density and the position of the key information (question relevant) in the input prompt. Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs' perception of the key information to simultaneously address the three challenges. We conduct evaluation on a wide range of long context scenarios including single-/multi-document QA, few-shot learning, summarization, synthetic tasks, and code completion.
The experimental results show that LongLLMLingua compressed prompt can derive higher performance with much less cost. The latency of the end-to-end system is also reduced. For example, on NaturalQuestions benchmark, LongLLMLingua gains a performance boost of up to 17.1% over the original prompt with ~4x fewer tokens as input to GPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000 samples from the LongBench and ZeroScrolls benchmark, respectively. Additionally, when compressing prompts of ~10k tokens at a compression rate of 2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Our code is available at https://aka.ms/LLMLingua.",,arXiv,"['cs.cl', 'cs.lg']",, -930,empower textattributed graphs learning with large language models (llms),"['Jianxiang Yu', 'Yuxiang Ren', 'Chenghua Gong', 'Jiaqi Tan', 'Xiang Li', 'Xuecang Zhang']",http://arxiv.org/pdf/2310.09872v1.pdf,2023-10-15,," Text-attributed graphs have recently garnered significant attention due to their wide range of applications in web domains. Existing methodologies employ word embedding models for acquiring text representations as node features, which are subsequently fed into Graph Neural Networks (GNNs) for training. Recently, the advent of Large Language Models (LLMs) has introduced their powerful capabilities in information retrieval and text generation, which can greatly enhance the text attributes of graph data. Furthermore, the acquisition and labeling of extensive datasets are both costly and time-consuming endeavors. Consequently, few-shot learning has emerged as a crucial problem in the context of graph learning tasks. In order to tackle this challenge, we propose a lightweight paradigm called ENG, which adopts a plug-and-play approach to empower text-attributed graphs through node generation using LLMs. Specifically, we utilize LLMs to extract semantic information from the labels and generate samples that belong to these categories as exemplars. Subsequently, we employ an edge predictor to capture the structural information inherent in the raw dataset and integrate the newly generated samples into the original graph. This approach harnesses LLMs for enhancing class-level information and seamlessly introduces labeled nodes and edges without modifying the raw dataset, thereby facilitating the node classification task in few-shot scenarios. Extensive experiments demonstrate the outstanding performance of our proposed paradigm, particularly in low-shot scenarios. For instance, in the 1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement over the baseline model.",,arXiv,['cs.lg'],, -931,incontext learning with iterative demonstration selection,"['Chengwei Qin', 'Aston Zhang', 'Anirudh Dagar', 'Wenming Ye']",http://arxiv.org/pdf/2310.09881v2.pdf,2023-10-15,," Spurred by advancements in scale, large language models (LLMs) have demonstrated strong few-shot learning ability via in-context learning (ICL). However, the performance of ICL has been shown to be highly sensitive to the selection of few-shot demonstrations. Selecting the most suitable examples as context remains an ongoing challenge and an open problem. Existing literature has highlighted the importance of selecting examples that are diverse or semantically similar to the test sample while ignoring the fact that the optimal selection dimension, i.e., diversity or similarity, is task-specific. Leveraging the merits of both dimensions, we propose Iterative Demonstration Selection (IDS).
Using zero-shot chain-of-thought reasoning (Zero-shot-CoT), IDS iteratively selects examples that are diverse but still strongly correlated with the test sample as ICL demonstrations. Specifically, IDS applies Zero-shot-CoT to the test sample before demonstration selection. The output reasoning path is then used to choose demonstrations that are prepended to the test sample for inference. The generated answer is accompanied by its corresponding reasoning path for extracting a new set of demonstrations in the next iteration. After several iterations, IDS adopts majority voting to obtain the final result. Through extensive experiments on tasks including commonsense reasoning, question answering, topic classification, and sentiment analysis, we demonstrate that IDS can consistently outperform existing ICL demonstration selection methods.",,arXiv,"['cs.cl', 'cs.ai']",, -932,the skipped beat a study of sociopragmatic understanding in llms for 64 languages,"['Chiyu Zhang', 'Khai Duy Doan', 'Qisheng Liao', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2310.14557v1.pdf,2023-10-23,," Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any of the existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific finetuned models with a gap of 12.19 SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW",,arXiv,['cs.cl'],, -933,a survey of large language models for autonomous driving,"['Zhenjie Yang', 'Xiaosong Jia', 'Hongyang Li', 'Junchi Yan']",http://arxiv.org/pdf/2311.01043v1.pdf,2023-11-02,," Autonomous driving technology, a catalyst for revolutionizing transportation and urban mobility, has the tend to transition from rule-based systems to data-driven strategies. Traditional module-based systems are constrained by cumulative errors among cascaded modules and inflexible pre-set rules. In contrast, end-to-end autonomous driving systems have the potential to avoid error accumulation due to their fully data-driven training process, although they often lack transparency due to their ``black box"" nature, complicating the validation and traceability of decisions. Recently, large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
A natural thought is to utilize these abilities to empower autonomous driving. By combining LLM with foundation vision models, it could open the door to open-world understanding, reasoning, and few-shot learning, which current autonomous driving systems are lacking. In this paper, we systematically review a research line about \textit{Large Language Models for Autonomous Driving (LLM4AD)}. This study evaluates the current state of technological advancements, distinctly outlining the principal challenges and prospective directions for the field. For the convenience of researchers in academia and industry, we provide real-time updates on the latest advances in the field as well as relevant open-source resources via the designated link: https://github.com/Thinklab-SJTU/Awesome-LLM4AD.",,arXiv,['cs.ai'],, -934,program synthesis with large language models,"['Jacob Austin', 'Augustus Odena', 'Maxwell Nye', 'Maarten Bosma', 'Henryk Michalewski', 'David Dohan', 'Ellen Jiang', 'Carrie Cai', 'Michael Terry', 'Quoc Le', 'Charles Sutton']",http://arxiv.org/pdf/2108.07732v1.pdf,2021-08-16,," This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 programming tasks, designed to be solvable by entry-level programmers. The MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914 problems that evaluate the ability of the models to synthesize code from more complex text. On both datasets, we find that synthesis performance scales log-linearly with model size. Our largest models, even without finetuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a held-out portion of the dataset improves performance by about 10 percentage points across most model sizes. On the MathQA-Python dataset, the largest fine-tuned model achieves 83.8 percent accuracy. Going further, we study the model's ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model's initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. We find that even our best models are generally unable to predict the output of a program given a specific input.",,arXiv,"['cs.pl', 'cs.lg']",, -935,"a minimalist dataset for systematic generalization of perception, syntax, and semantics","['Qing Li', 'Siyuan Huang', 'Yining Hong', 'Yixin Zhu', 'Ying Nian Wu', 'Song-Chun Zhu']",http://arxiv.org/pdf/2103.01403v3.pdf,2021-03-02,," Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with the chain of thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, the chain of thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, -936,large language models are zeroshot reasoners,"['Takeshi Kojima', 'Shixiang Shane Gu', 'Machel Reid', 'Yutaka Matsuo', 'Yusuke Iwasawa']",http://arxiv.org/pdf/2205.11916v4.pdf,2022-05-24,," Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding ""Let's think step by step"" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting.
We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -937,an empirical evaluation of using large language models for automated unit test generation,"['Max Schäfer', 'Sarah Nadi', 'Aryaz Eghbali', 'Frank Tip']",http://arxiv.org/pdf/2302.06527v3.pdf,2023-02-13,," Unit tests play a key role in ensuring the correctness of software. However, manually creating unit tests is a laborious task, motivating the need for automation. Large Language Models (LLMs) have recently been applied to this problem, utilizing additional training or few-shot learning on examples of existing tests. This paper presents a large-scale empirical evaluation on the effectiveness of LLMs for automated unit test generation without additional training or manual effort, providing the LLM with the signature and implementation of the function under test, along with usage examples extracted from documentation. We also attempt to repair failed generated tests by re-prompting the model with the failing test and error message. We implement our approach in TestPilot, a test generation tool for JavaScript that automatically generates unit tests for all API functions in an npm package. We evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a total of 1,684 API functions. The generated tests achieve a median statement coverage of 70.2% and branch coverage of 52.8%, significantly improving on Nessie, a recent feedback-directed JavaScript test generation technique, which achieves only 51.3% statement coverage and 25.6% branch coverage. We also find that 92.8% of TestPilot's generated tests have no more than 50% similarity with existing tests (as measured by normalized edit distance), with none of them being exact copies. Finally, we run TestPilot with two additional LLMs, OpenAI's older code-cushman-002 LLM and the open LLM StarCoder. Overall, we observed similar results with the former (68.2% median statement coverage), and somewhat worse results with the latter (54.0% median statement coverage), suggesting that the effectiveness of the approach is influenced by the size and training set of the LLM, but does not fundamentally depend on the specific model.",,arXiv,"['cs.se', 'cs.ai']",, -938,on the opportunities and challenges of foundation models for geospatial artificial intelligence,"['Gengchen Mai', 'Weiming Huang', 'Jin Sun', 'Suhang Song', 'Deepak Mishra', 'Ninghao Liu', 'Song Gao', 'Tianming Liu', 'Gao Cong', 'Yingjie Hu', 'Chris Cundy', 'Ziyuan Li', 'Rui Zhu', 'Ni Lao']",http://arxiv.org/pdf/2304.06798v1.pdf,2023-04-13,," Large pre-trained models, also known as foundation models (FMs), are trained in a task-agnostic manner on large-scale data and can be adapted to a wide range of downstream tasks by fine-tuning, few-shot, or even zero-shot learning. Despite their successes in language and vision tasks, we have yet seen an attempt to develop foundation models for geospatial artificial intelligence (GeoAI). In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI. We first investigate the potential of many existing FMs by testing their performances on seven tasks across multiple geospatial subdomains including Geospatial Semantics, Health Geography, Urban Geography, and Remote Sensing.
Our results indicate that on several geospatial tasks that only involve text modality such as toponym recognition, location description recognition, and US state-level/county-level dementia time series forecasting, these task-agnostic LLMs can outperform task-specific fully-supervised models in a zero-shot or few-shot learning setting. However, on other geospatial tasks, especially tasks that involve multiple data modalities (e.g., POI-based urban function classification, street view image-based urban noise intensity classification, and remote sensing image scene classification), existing foundation models still underperform task-specific models. Based on these observations, we propose that one of the major challenges of developing a FM for GeoAI is to address the multimodality nature of geospatial tasks. After discussing the distinct challenges of each geospatial data modality, we suggest the possibility of a multimodal foundation model which can reason over various types of geospatial data through geospatial alignments. We conclude this paper by discussing the unique risks and challenges to develop such a model for GeoAI.",,arXiv,"['cs.ai', 'cs.cl', 'cs.cv', 'i.2.0; i.2.4; i.2.7; i.2.10; i.5.1']",, -939,effective test generation using pretrained large language models and mutation testing,"['Arghavan Moradi Dakhel', 'Amin Nikanjam', 'Vahid Majdinasab', 'Foutse Khomh', 'Michel C. Desmarais']",http://arxiv.org/pdf/2308.16557v1.pdf,2023-08-31,," One of the critical phases in software development is software testing. Testing helps with identifying potential bugs and reducing maintenance costs. The goal of automated test generation tools is to ease the development of tests by suggesting efficient bug-revealing tests. Recently, researchers have leveraged Large Language Models (LLMs) of code to generate unit tests. While the code coverage of generated tests was usually assessed, the literature has acknowledged that the coverage is weakly correlated with the efficiency of tests in bug detection. To improve over this limitation, in this paper, we introduce MuTAP for improving the effectiveness of test cases generated by LLMs in terms of revealing bugs by leveraging mutation testing. Our goal is achieved by augmenting prompts with surviving mutants, as those mutants highlight the limitations of test cases in detecting bugs. MuTAP is capable of generating effective test cases in the absence of natural language descriptions of the Program Under Test (PUTs). We employ different LLMs within MuTAP and evaluate their performance on different benchmarks. Our results show that our proposed method is able to detect up to 28% more faulty human-written code snippets. Among these, 17% remained undetected by both the current state-of-the-art fully automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning approaches on LLMs.
Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57% on synthetic buggy code, outperforming all other approaches in our evaluation. Our findings suggest that although LLMs can serve as a useful tool to generate test cases, they require specific post-processing steps to enhance the effectiveness of the generated test cases which may suffer from syntactic or functional errors and may be ineffective in detecting certain types of bugs and testing corner cases PUTs.",,arXiv,['cs.se'],, -940,llm4sgg large language model for weakly supervised scene graph generation,"['Kibum Kim', 'Kanghoon Yoon', 'Jaehyeong Jeon', 'Yeonjun In', 'Jinyoung Moon', 'Donghyun Kim', 'Chanyoung Park']",http://arxiv.org/pdf/2310.10404v4.pdf,2023-10-16,," Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations. In this regard, studies on WSSGG have utilized image captions to obtain unlocalized triplets while primarily focusing on grounding the unlocalized triplets over image regions. However, they have overlooked the two issues involved in the triplet formation process from the captions: 1) Semantic over-simplification issue arises when extracting triplets from captions, where fine-grained predicates in captions are undesirably converted into coarse-grained predicates, resulting in a long-tailed predicate distribution, and 2) Low-density scene graph issue arises when aligning the triplets in the caption with entity/predicate classes of interest, where many triplets are discarded and not used in training, leading to insufficient supervision. To tackle the two issues, we propose a new approach, i.e., Large Language Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two issues by leveraging the LLM's in-depth understanding of language and reasoning ability during the extraction of triplets from captions and alignment of entity/predicate classes with target data. To further engage the LLM in these processes, we adopt the idea of Chain-of-Thought and the in-context few-shot learning strategy. To validate the effectiveness of LLM4SGG, we conduct extensive experiments on Visual Genome and GQA datasets, showing significant improvements in both Recall@K and mean Recall@K compared to the state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is data-efficient, enabling effective model training with a small amount of training images.",,arXiv,['cs.cv'],, -941,masakhanews news topic classification for african languages,"['David Ifeoluwa Adelani', 'Marek Masiak', 'Israel Abebe Azime', 'Jesujoba Alabi', 'Atnafu Lambebo Tonja', 'Christine Mwase', 'Odunayo Ogundepo', 'Bonaventure F. P.
Dossou', 'Akintunde Oladipo', 'Doreen Nixdorf', 'Chris Chinenye Emezue', 'sana al-azzawi', 'Blessing Sibanda', 'Davis David', 'Lolwethu Ndolela', 'Jonathan Mukiibi', 'Tunde Ajayi', 'Tatiana Moteu', 'Brian Odhiambo', 'Abraham Owodunni', 'Nnaemeka Obiefuna', 'Muhidin Mohamed', 'Shamsuddeen Hassan Muhammad', 'Teshome Mulugeta Ababu', 'Saheed Abdullahi Salahudeen', 'Mesay Gemeda Yigezu', 'Tajuddeen Gwadabe', 'Idris Abdulmumin', 'Mahlet Taye', 'Oluwabusayo Awoyomi', 'Iyanuoluwa Shode', 'Tolulope Adelani', 'Habiba Abdulganiyu', 'Abdul-Hakeem Omotayo', 'Adetola Adeeko', 'Abeeb Afolabi', 'Anuoluwapo Aremu', 'Olanrewaju Samuel', 'Clemencia Siro', 'Wangari Kimotho', 'Onyekachi Ogbu', 'Chinedu Mbonu', 'Chiamaka Chukwuneke', 'Samuel Fanijo', 'Jessica Ojo', 'Oyinkansola Awosan', 'Tadesse Kebede', 'Toadoum Sari Sakayo', 'Pamela Nyatsine', 'Freedmore Sidume', 'Oreen Yousuf', 'Mardiyyah Oduwole', 'Tshinu Tshinu', 'Ussen Kimanuka', 'Thina Diko', 'Siyanda Nxakama', 'Sinodos Nigusse', 'Abdulmejid Johar', 'Shafie Mohamed', 'Fuad Mire Hassan', 'Moges Ahmed Mehamed', 'Evrard Ngabire', 'Jules Jules', 'Ivan Ssenkungu', 'Pontus Stenetorp']",http://arxiv.org/pdf/2304.09972v2.pdf,2023-04-19,," African languages are severely under-represented in NLP research due to lackof datasets covering several NLP tasks. While there are individual languagespecific datasets that are being expanded to different tasks, only a handful ofNLP tasks (e.g. named entity recognition and machine translation) havestandardized benchmark datasets covering several geographical andtypologically-diverse African languages. In this paper, we develop MasakhaNEWS-- a new benchmark dataset for news topic classification covering 16 languageswidely spoken in Africa. We provide an evaluation of baseline models bytraining classical machine learning models and fine-tuning several languagemodels. Furthermore, we explore several alternatives to full fine-tuning oflanguage models that are better suited for zero-shot and few-shot learning suchas cross-lingual parameter-efficient fine-tuning (like MAD-X), patternexploiting training (PET), prompting language models (like ChatGPT), andprompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).Our evaluation in zero-shot setting shows the potential of prompting ChatGPTfor news topic classification in low-resource African languages, achieving anaverage performance of 70 F1 points without leveraging additional supervisionlike MAD-X. In few-shot setting, we show that with as little as 10 examples perlabel, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance offull supervised training (92.6 F1 points) leveraging the PET approach.",,arXiv,['cs.cl'],, -942,nspbert a promptbased fewshot learner through an original pretraining tasknext sentence prediction,"['Yi Sun', 'Yu Zheng', 'Chao Hao', 'Hangping Qiu']",http://arxiv.org/pdf/2109.03564v2.pdf,2021-09-08,," Using prompts to utilize language models to perform various downstream tasks,also known as prompt-based learning or prompt-learning, has lately gainedsignificant success in comparison to the pre-train and fine-tune paradigm.Nonetheless, virtually all prompt-based methods are token-level, meaning theyall utilize GPT's left-to-right language model or BERT's masked language modelto perform cloze-style tasks. In this paper, we attempt to accomplish severalNLP tasks in the zero-shot scenario using a BERT original pre-training taskabandoned by RoBERTa and other models--Next Sentence Prediction (NSP). 
Unliketoken-level techniques, our sentence-level prompt-based method NSP-BERT doesnot need to fix the length of the prompt or the position to be predicted,allowing it to handle tasks such as entity linking with ease. Based on thecharacteristics of NSP-BERT, we offer several quick building templates forvarious downstream tasks. We suggest a two-stage prompt method for word sensedisambiguation tasks in particular. Our strategies for mapping the labelssignificantly enhance the model's performance on sentence pair tasks. On theFewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most ofthese tasks and comes close to the few-shot methods.",,arXiv,"['cs.cl', 'cs.ai']",, -943,psg promptbased sequence generation for acronym extraction,"['Bin Li', 'Fei Xia', 'Yixuan Weng', 'Xiusheng Huang', 'Bin Sun', 'Shutao Li']",http://arxiv.org/pdf/2111.14301v2.pdf,2021-11-29,," Acronym extraction aims to find acronyms (i.e., short-forms) and theirmeanings (i.e., long-forms) from the documents, which is important forscientific document understanding (SDU@AAAI-22) tasks. Previous works aredevoted to modeling this task as a paragraph-level sequence labeling problem.However, it lacks the effective use of the external knowledge, especially whenthe datasets are in a low-resource setting. Recently, the prompt-based methodwith the vast pre-trained language model can significantly enhance theperformance of the low-resourced downstream tasks. In this paper, we propose aPrompt-based Sequence Generation (PSG) method for the acronym extraction task.Specifically, we design a template for prompting the extracted acronym textswith auto-regression. A position extraction algorithm is designed forextracting the position of the generated answers. The results on the acronymextraction of Vietnamese and Persian in a low-resource setting show that theproposed method outperforms all other competitive state-of-the-art (SOTA)methods.",,arXiv,"['cs.cl', 'cs.ai']",, -944,chemical identification and indexing in pubmed articles via bert and texttotext approaches,"['Virginia Adams', 'Hoo-Chang Shin', 'Carol Anderson', 'Bo Liu', 'Anas Abidin']",http://arxiv.org/pdf/2111.15622v1.pdf,2021-11-30,," The Biocreative VII Track-2 challenge consists of named entity recognition,entity-linking (or entity-normalization), and topic indexing tasks -- withentities and topics limited to chemicals for this challenge. Named entityrecognition is a well-established problem and we achieve our best performancewith BERT-based BioMegatron models. We extend our BERT-based approach to theentity linking task. After the second stage of pretraining BioBERT with ametric-learning loss strategy called self-alignment pretraining (SAP), we linkentities based on the cosine similarity between their SAP-BioBERT wordembeddings. Despite the success of our named entity recognition experiments, wefind the chemical indexing task generally more challenging. In addition to conventional NER methods, we attempt both named entityrecognition and entity linking with a novel text-to-text or ""prompt"" basedmethod that uses generative language models such as T5 and GPT. We achieveencouraging results with this new approach.",,arXiv,['cs.cl'],, -945,gpts at factify 2022 prompt aided factverification,"['Pawan Kumar Sahu', 'Saksham Aggarwal', 'Taneesh Gupta', 'Gyanendra Das']",http://arxiv.org/pdf/2206.14913v1.pdf,2022-06-29,," One of the most pressing societal issues is the fight against false news. Thefalse claims, as difficult as they are to expose, create a lot of damage. 
Totackle the problem, fact verification becomes crucial and thus has been a topicof interest among diverse research communities. Using only the textual form ofdata we propose our solution to the problem and achieve competitive resultswith other approaches. We present our solution based on two approaches - PLM(pre-trained language model) based method and Prompt based method. ThePLM-based approach uses the traditional supervised learning, where the model istrained to take 'x' as input and output prediction 'y' as P(y|x). Whereas,Prompt-based learning reflects the idea to design input to fit the model suchthat the original objective may be re-framed as a problem of (masked) languagemodeling. We may further stimulate the rich knowledge provided by PLMs tobetter serve downstream tasks by employing extra prompts to fine-tune PLMs. Ourexperiments showed that the proposed method performs better than justfine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset anda 7th position on the competition leader-board.",,arXiv,['cs.cl'],, -946,quantifying language models' sensitivity to spurious features in prompt design or how i learned to start worrying about prompt formatting,"['Melanie Sclar', 'Yejin Choi', 'Yulia Tsvetkov', 'Alane Suhr']",http://arxiv.org/pdf/2310.11324v1.pdf,2023-10-17,," As large language models (LLMs) are adopted as a fundamental component oflanguage technologies, it is crucial to accurately characterize theirperformance. Because choices in prompt design can strongly influence modelbehavior, this design process is critical in effectively using any modernpre-trained generative language model. In this work, we focus on LLMsensitivity to a quintessential class of meaning-preserving design choices:prompt formatting. We find that several widely used open-source LLMs areextremely sensitive to subtle changes in prompt formatting in few-shotsettings, with performance differences of up to 76 accuracy points whenevaluated using LLaMA-2-13B. Sensitivity remains even when increasing modelsize, the number of few-shot examples, or performing instruction tuning. Ouranalysis suggests that work evaluating LLMs with prompting-based methods wouldbenefit from reporting a range of performance across plausible prompt formats,instead of the currently-standard practice of reporting performance on a singleformat. We also show that format performance only weakly correlates betweenmodels, which puts into question the methodological validity of comparingmodels with an arbitrarily chosen, fixed prompt format. To facilitatesystematic analysis we propose FormatSpread, an algorithm that rapidlyevaluates a sampled set of plausible prompt formats for a given task, andreports the interval of expected performance without accessing model weights.Furthermore, we present a suite of analyses that characterize the nature ofthis sensitivity, including exploring the influence of particular atomicperturbations and the internal representation of particular formats.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -947,gpt3driven pedagogical agents for training children's curious questionasking skills,"['Rania Abdelghani', 'Yen-Hsiang Wang', 'Xingdi Yuan', 'Tong Wang', 'Pauline Lucas', 'Hélène Sauzéon', 'Pierre-Yves Oudeyer']",http://arxiv.org/pdf/2211.14228v6.pdf,2022-11-25,," In order to train children's ability to ask curiosity-driven questions,previous research has explored designing specific exercises relying onproviding semantic and linguistic cues to help formulate such questions. 
Butdespite showing pedagogical efficiency, this method is still limited as itrelies on generating the said cues by hand, which can be a very costly process.In this context, we propose to leverage advances in the natural languageprocessing field (NLP) and investigate the efficiency of using a large languagemodel (LLM) for automating the production of the pedagogical content of acurious question-asking (QA) training. We study generating the said contentusing the ""prompt-based"" method that consists of explaining the task to the LLMin natural text. We evaluate the output using human experts annotations andcomparisons with hand-generated content. Results suggested indeed the relevanceand usefulness of this content. We also conduct a field study in primary school(75 children aged 9-10), where we evaluate children's QA performance whenhaving this training. We compare 3 types of content : 1) hand-generated contentthat proposes ""closed"" cues leading to predefined questions; 2) GPT-3-generatedcontent that proposes the same type of cues; 3) GPT-3-generated content thatproposes ""open"" cues leading to several possible questions. We see a similar QAperformance between the two ""closed"" trainings (showing the scalability of theapproach using GPT-3), and a better one for participants with the ""open""training. These results suggest the efficiency of using LLMs to supportchildren in generating more curious questions, using a natural languageprompting approach that affords usability by teachers and other users notspecialists of AI techniques. Furthermore, results also show that open-endedcontent may be more suitable for training curious question-asking skills.",,arXiv,"['cs.cl', 'cs.hc']",, -948,mentalllm leveraging large language models for mental health prediction via online text data,"['Xuhai Xu', 'Bingsheng Yao', 'Yuanzhe Dong', 'Saadia Gabriel', 'Hong Yu', 'James Hendler', 'Marzyeh Ghassemi', 'Anind K. Dey', 'Dakuo Wang']",http://arxiv.org/pdf/2307.14385v3.pdf,2023-07-26,," Advances in large language models (LLMs) have empowered a variety ofapplications. However, there is still a significant gap in research when itcomes to understanding and enhancing the capabilities of LLMs in the field ofmental health. In this work, we present the first comprehensive evaluation ofmultiple LLMs, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4, onvarious mental health prediction tasks via online text data. We conduct a broadrange of experiments, covering zero-shot prompting, few-shot prompting, andinstruction fine-tuning. The results indicate a promising yet limitedperformance of LLMs with zero-shot and few-shot prompt designs for the mentalhealth tasks. More importantly, our experiments show that instructionfinetuning can significantly boost the performance of LLMs for all taskssimultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5,outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9%on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%.They further perform on par with the state-of-the-art task-specific languagemodel. We also conduct an exploratory case study on LLMs' capability on themental health reasoning tasks, illustrating the promising capability of certainmodels such as GPT-4. 
We summarize our findings into a set of action guidelinesfor potential methods to enhance LLMs' capability for mental health tasks.Meanwhile, we also emphasize the important limitations before achievingdeployability in real-world mental health settings, such as known racial andgender bias. We highlight the important ethical risks accompanying this line ofresearch.",,arXiv,"['cs.cl', '68u35', 'h.5.2; i.2.m']",, -949,towards zerolabel language learning,"['Zirui Wang', 'Adams Wei Yu', 'Orhan Firat', 'Yuan Cao']",http://arxiv.org/pdf/2109.09193v1.pdf,2021-09-19,," This paper explores zero-label learning in Natural Language Processing (NLP),whereby no human-annotated data is used anywhere during training and models aretrained purely on synthetic data. At the core of our framework is a novelapproach for better leveraging the powerful pretrained language models.Specifically, inspired by the recent success of few-shot inference on GPT-3, wepresent a training data creation procedure named Unsupervised Data Generation(UDG), which leverages few-shot prompts to synthesize high-quality trainingdata without real human annotations. Our method enables zero-label learning aswe train task-specific models solely on the synthetic data, yet we achievebetter or comparable results from strong baseline models trained onhuman-labeled data. Furthermore, when mixed with labeled data, our approachserves as a highly effective data augmentation procedure, achieving newstate-of-the-art results on the SuperGLUE benchmark.",,arXiv,"['cs.cl', 'cs.lg']",, -950,covid vaccine is against covid but oxford vaccine is made at oxford! semantic interpretation of proper noun compounds,"['Keshav Kolluru', 'Gabriel Stanovsky', ' Mausam']",http://arxiv.org/pdf/2210.13039v1.pdf,2022-10-24,," Proper noun compounds, e.g., ""Covid vaccine"", convey information in asuccinct manner (a ""Covid vaccine"" is a ""vaccine that immunizes against theCovid disease""). These are commonly used in short-form domains, such as newsheadlines, but are largely ignored in information-seeking applications. Toaddress this limitation, we release a new manually annotated dataset, ProNCI,consisting of 22.5K proper noun compounds along with their free-form semanticinterpretations. ProNCI is 60 times larger than prior noun compound datasetsand also includes non-compositional examples, which have not been previouslyexplored. We experiment with various neural models for automatically generatingthe semantic interpretations from proper noun compounds, ranging from few-shotprompting to supervised learning, with varying degrees of knowledge about theconstituent nouns. We find that adding targeted knowledge, particularly aboutthe common noun, results in performance gains of upto 2.8%. Finally, weintegrate our model generated interpretations with an existing Open IE systemand observe an 7.5% increase in yield at a precision of 85%. The dataset andcode are available at https://github.com/dair-iitd/pronci.",,arXiv,['cs.cl'],, -951,summqa at mediqachat 2023incontext learning with gpt4 for medical summarization,"['Yash Mathur', 'Sanketh Rangreji', 'Raghav Kapoor', 'Medha Palavalli', 'Amanda Bertsch', 'Matthew R. Gormley']",http://arxiv.org/pdf/2306.17384v1.pdf,2023-06-30,," Medical dialogue summarization is challenging due to the unstructured natureof medical conversations, the use of medical terminology in gold summaries, andthe need to identify key information across multiple symptom sets. 
We present anovel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA2023 Shared Task. Our approach for section-wise summarization (Task A) is atwo-stage process of selecting semantically similar dialogues and using thetop-k similar dialogues as in-context examples for GPT-4. For full-notesummarization (Task B), we use a similar solution with k=1. We achieved 3rdplace in Task A (2nd among all teams), 4th place in Task B Division WiseSummarization (2nd among all teams), 15th place in Task A Section HeaderClassification (9th among all teams), and 8th place among all teams in Task B.Our results highlight the effectiveness of few-shot prompting for this task,though we also identify several weaknesses of prompting-based approaches. Wecompare GPT-4 performance with several finetuned baselines. We find that GPT-4summaries are more abstractive and shorter. We make our code publiclyavailable.",,arXiv,['cs.cl'],, -952,ecologically valid explanations for label variation in nli,"['Nan-Jiang Jiang', 'Chenhao Tan', 'Marie-Catherine de Marneffe']",http://arxiv.org/pdf/2310.13850v1.pdf,2023-10-20,," Human label variation, or annotation disagreement, exists in many naturallanguage processing (NLP) tasks, including natural language inference (NLI). Togain direct evidence of how NLI label variation arises, we build LiveNLI, anEnglish dataset of 1,415 ecologically valid explanations (annotators explainthe NLI labels they chose) for 122 MNLI items (at least 10 explanations peritem). The LiveNLI explanations confirm that people can systematically vary ontheir interpretation and highlight within-label variation: annotators sometimeschoose the same label for different reasons. This suggests that explanationsare crucial for navigating label interpretations in general. We few-shot promptlarge language models to generate explanations but the results areinconsistent: they sometimes produces valid and informative explanations, butit also generates implausible ones that do not support the label, highlightingdirections for improvement.",,arXiv,['cs.cl'],, -953,apiassisted code generation for question answering on varied table structures,"['Yihan Cao', 'Shuyi Chen', 'Ryan Liu', 'Zhiruo Wang', 'Daniel Fried']",http://arxiv.org/pdf/2310.14687v1.pdf,2023-10-23,," A persistent challenge to table question answering (TableQA) by generatingexecutable programs has been adapting to varied table structures, typicallyrequiring domain-specific logical forms. In response, this paper introduces aunified TableQA framework that: (1) provides a unified representation forstructured tables as multi-index Pandas data frames, (2) uses Python as apowerful querying language, and (3) uses few-shot prompting to translate NLquestions into Python programs, which are executable on Pandas data frames.Furthermore, to answer complex relational questions with extended programfunctionality and external knowledge, our framework allows customized APIs thatPython programs can call. We experiment with four TableQA datasets that involvetables of different structures -- relational, multi-table, and hierarchicalmatrix shapes -- and achieve prominent improvements over past state-of-the-artsystems. 
In ablation studies, we (1) show benefits from our multi-indexrepresentation and APIs over baselines that use only an LLM, and (2)demonstrate that our approach is modular and can incorporate additional APIs.",,arXiv,"['cs.cl', 'cs.ai']",, -954,tree of clarifications answering ambiguous questions with retrievalaugmented large language models,"['Gangwoo Kim', 'Sungdong Kim', 'Byeongguk Jeon', 'Joonsuk Park', 'Jaewoo Kang']",http://arxiv.org/pdf/2310.14696v1.pdf,2023-10-23,," Questions in open-domain question answering are often ambiguous, allowingmultiple interpretations. One approach to handling them is to identify allpossible interpretations of the ambiguous question (AQ) and to generate along-form answer addressing them all, as suggested by Stelmakh et al., (2022).While it provides a comprehensive response without bothering the user forclarification, considering multiple dimensions of ambiguity and gatheringcorresponding knowledge remains a challenge. To cope with the challenge, wepropose a novel framework, Tree of Clarifications (ToC): It recursivelyconstructs a tree of disambiguations for the AQ -- via few-shot promptingleveraging external knowledge -- and uses it to generate a long-form answer.ToC outperforms existing baselines on ASQA in a few-shot setup across themetrics, while surpassing fully-supervised baselines trained on the wholetraining set in terms of Disambig-F1 and Disambig-ROUGE. Code is available athttps://github.com/gankim/tree-of-clarifications.",,arXiv,['cs.cl'],, -955,dissecting incontext learning of translations in gpts,"['Vikas Raunak', 'Hany Hassan Awadalla', 'Arul Menezes']",http://arxiv.org/pdf/2310.15987v1.pdf,2023-10-24,," Most of the recent work in leveraging Large Language Models (LLMs) such asGPT-3 for Machine Translation (MT) has focused on selecting the few-shotsamples for prompting. In this work, we try to better understand the role ofdemonstration attributes for the in-context learning of translations throughperturbations of high-quality, in-domain demonstrations. We find thatasymmetric perturbation of the source-target mappings yield vastly differentresults. We show that the perturbation of the source side has surprisinglylittle impact, while target perturbation can drastically reduce translationquality, suggesting that it is the output text distribution that provides themost important learning signal during in-context learning of translations. Wepropose a method named Zero-Shot-Context to add this signal automatically inZero-Shot prompting. We demonstrate that it improves upon the zero-shottranslation performance of GPT-3, even making it competitive with few-shotprompted translations.",,arXiv,"['cs.cl', 'cs.ai']",, -956,extraction of atypical aspects from customer reviews datasets and experiments with language models,"['Smita Nannaware', 'Erfan Al-Hossami', 'Razvan Bunescu']",http://arxiv.org/pdf/2311.02702v1.pdf,2023-11-05,," A restaurant dinner may become a memorable experience due to an unexpectedaspect enjoyed by the customer, such as an origami-making station in thewaiting area. If aspects that are atypical for a restaurant experience wereknown in advance, they could be leveraged to make recommendations that have thepotential to engender serendipitous experiences, further increasing usersatisfaction. Although relatively rare, whenever encountered, atypical aspectsoften end up being mentioned in reviews due to their memorable quality.Correspondingly, in this paper we introduce the task of detecting atypicalaspects in customer reviews. 
To facilitate the development of extractionmodels, we manually annotate benchmark datasets of reviews in three domains -restaurants, hotels, and hair salons, which we use to evaluate a number oflanguage models, ranging from fine-tuning the instruction-based text-to-texttransformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5.",,arXiv,"['cs.cl', 'cs.ai']",, -957,sqlprompt incontext texttosql with minimal labeled data,"['Ruoxi Sun', 'Sercan Ö. Arik', 'Rajarishi Sinha', 'Hootan Nakhost', 'Hanjun Dai', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2311.02883v1.pdf,2023-11-06,," Text-to-SQL aims to automate the process of generating SQL queries on adatabase from natural language text. In this work, we propose ""SQLPrompt"",tailored to improve the few-shot prompting capabilities of Text-to-SQL forLarge Language Models (LLMs). Our methods include innovative prompt design,execution-based consistency decoding strategy which selects the SQL with themost consistent execution outcome among other SQL proposals, and a method thataims to improve performance by diversifying the SQL proposals duringconsistency selection with different prompt designs (""MixPrompt"") andfoundation models (""MixLLMs""). We show that \emph{SQLPrompt} outperformsprevious approaches for in-context learning with few labeled data by a largemargin, closing the gap with finetuning state-of-the-art with thousands oflabeled data.",,arXiv,['cs.cl'],, -958,jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue,"['Lena Reed', 'Cecilia Li', 'Angela Ramirez', 'Liren Wu', 'Marilyn Walker']",http://arxiv.org/pdf/2110.08094v2.pdf,2021-10-15,," One challenge with open-domain dialogue systems is the need to producetruthful, high-quality responses on any topic. We aim to improve the qualityand coverage of Athena, an Alexa Prize dialogue system. We experiment withfew-shot prompt-based learning, comparing GPT-Neo to Jurassic-1, for themovies, music, TV, sports, and video game domains, both within andcross-domain, with different prompt set sizes (2, 3, 10), formats, and meaningrepresentations consisting of either sets of WikiData KG triples, or dialogueacts. Our evaluation uses BLEURT and human metrics, and shows that with 10-shotprompting, Athena-Jurassic's performance is significantly better for coherenceand semantic accuracy. Experiments with 2-shot cross-domain prompts results ina huge performance drop for Athena-GPT-Neo, whose semantic accuracy falls to0.41, and whose untrue hallucination rate increases to 12%. Experiments withdialogue acts for video games show that with 10-shot prompting, both modelslearn to control dialogue acts, but Athena-Jurassic has significantly highercoherence, and only 4% untrue hallucinations. Our results suggest thatAthena-Jurassic produces high enough quality outputs to be useful in livesystems with real users. To our knowledge, these are the first resultsdemonstrating that few-shot semantic prompt-based learning can create NLGs thatgeneralize to new domains, and produce high-quality, semantically-controlled,conversational responses directly from meaning representations.",,arXiv,['cs.cl'],, -959,codelmsec benchmark systematically evaluating and finding security vulnerabilities in blackbox code language models,"['Hossein Hajipour', 'Keno Hassler', 'Thorsten Holz', 'Lea Schönherr', 'Mario Fritz']",http://arxiv.org/pdf/2302.04012v2.pdf,2023-02-08,," Large language models (LLMs) for automatic code generation have achievedbreakthroughs in several programming tasks. 
Their advances in competition-levelprogramming problems have made them an essential pillar of AI-assisted pairprogramming, and tools such as GitHub Copilot have emerged as part of the dailyprogramming workflow used by millions of developers. The training data forthese models is usually collected from the Internet (e.g., from open-sourcerepositories) and is likely to contain faults and security vulnerabilities.This unsanitized training data can cause the language models to learn thesevulnerabilities and propagate them during the code generation procedure. Whilethese models have been extensively assessed for their ability to producefunctionally correct programs, there remains a lack of comprehensiveinvestigations and benchmarks addressing the security aspects of these models. In this work, we propose a method to systematically study the security issuesof code language models to assess their susceptibility to generating vulnerablecode. To this end, we introduce the first approach to automatically findgenerated code that contains vulnerabilities in black-box code generationmodels. To achieve this, we present an approach to approximate inversion of theblack-box code generation models based on few-shot prompting. We evaluate theeffectiveness of our approach by examining code language models in generatinghigh-risk security weaknesses. Furthermore, we establish a collection ofdiverse non-secure prompts for various vulnerability scenarios using ourmethod. This dataset forms a benchmark for evaluating and comparing thesecurity weaknesses in code language models.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.se']",, -960,scifix outperforming gpt3 on scientific factual error correction,"['Dhananjay Ashok', 'Atharva Kulkarni', 'Hai Pham', 'Barnabás Póczos']",http://arxiv.org/pdf/2305.14707v2.pdf,2023-05-24,," Due to the prohibitively high cost of creating error correction datasets,most Factual Claim Correction methods rely on a powerful verification model toguide the correction process. This leads to a significant drop in performancein domains like scientific claims, where good verification models do not alwaysexist. In this work, we introduce SciFix, a scientific claim correction systemthat does not require a verifier but can outperform existing methods by aconsiderable margin -- achieving correction accuracy of 84% on the SciFactdataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to nextbest accuracies of 7%, 5%, and 15% on the same datasets respectively. Ourmethod leverages the power of prompting with LLMs during training to create arichly annotated dataset that can be used for fully supervised training andregularization. We additionally use a claim-aware decoding procedure to improvethe quality of corrected claims. Our method outperforms the very LLM that wasused to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5achieving 58%, 61%, and 64% on the respective datasets, a consistently lowercorrection accuracy, despite using nearly 800 times as many parameters as ourmodel.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -961,diffender diffusionbased adversarial defense against patch attacks,"['Caixin Kang', 'Yinpeng Dong', 'Zhengyi Wang', 'Shouwei Ruan', 'Yubo Chen', 'Hang Su', 'Xingxing Wei']",http://arxiv.org/pdf/2306.09124v2.pdf,2023-06-15,," Adversarial attacks, particularly patch attacks, pose significant threats tothe robustness and reliability of deep learning models. 
Developing reliabledefenses against patch attacks is crucial for real-world applications, yetcurrent research in this area is not satisfactory. In this paper, we proposeDIFFender, a novel defense method that leverages a text-guided diffusion modelto defend against adversarial patches. DIFFender includes two main stages:patch localization and patch restoration. In the localization stage, we findand exploit an intriguing property of the diffusion model to effectivelyidentify the locations of adversarial patches. In the restoration stage, weemploy the diffusion model to reconstruct the adversarial regions in the imageswhile preserving the integrity of the visual content. Importantly, these twostages are carefully guided by a unified diffusion model, thus we can utilizethe close interaction between them to improve the whole defense performance.Moreover, we propose a few-shot prompt-tuning algorithm to fine-tune thediffusion model, enabling the pre-trained diffusion model to easily adapt tothe defense task. We conduct extensive experiments on the image classificationand face recognition tasks, demonstrating that our proposed method exhibitssuperior robustness under strong adaptive attacks and generalizes well acrossvarious scenarios, diverse classifiers, and multiple patch attack methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cr', 'cs.lg']",, -962,steering large language models for machine translation with finetuning and incontext learning,"['Duarte M. Alves', 'Nuno M. Guerreiro', 'João Alves', 'José Pombal', 'Ricardo Rei', 'José G. C. de Souza', 'Pierre Colombo', 'André F. T. Martins']",http://arxiv.org/pdf/2310.13448v1.pdf,2023-10-20,," Large language models (LLMs) are a promising avenue for machine translation(MT). However, current LLM-based MT systems are brittle: their effectivenesshighly depends on the choice of few-shot examples and they often require extrapost-processing due to overgeneration. Alternatives such as finetuning ontranslation instructions are computationally expensive and may weakenin-context learning capabilities, due to overspecialization. In this paper, weprovide a closer look at this problem. We start by showing that adapter-basedfinetuning with LoRA matches the performance of traditional finetuning whilereducing the number of training parameters by a factor of 50. This method alsooutperforms few-shot prompting and eliminates the need for post-processing orin-context examples. However, we show that finetuning generally degradesfew-shot performance, hindering adaptation capabilities. Finally, to obtain thebest of both worlds, we propose a simple approach that incorporates few-shotexamples during finetuning. Experiments on 10 language pairs show that ourproposed approach recovers the original few-shot capabilities while keeping theadded benefits of finetuning.",,arXiv,['cs.cl'],, -963,on bilingual lexicon induction with large language models,"['Yaoyiran Li', 'Anna Korhonen', 'Ivan Vulić']",http://arxiv.org/pdf/2310.13995v1.pdf,2023-10-21,," Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP thatstill, to a large extent, relies on calculating cross-lingual wordrepresentations. Inspired by the global paradigm shift in NLP towards LargeLanguage Models (LLMs), we examine the potential of the latest generation ofLLMs for the development of bilingual lexicons. We ask the following researchquestion: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) forBLI, and how does this approach compare against and complement current BLIapproaches? 
To this end, we systematically study 1) zero-shot prompting forunsupervised BLI and 2) few-shot in-context prompting with a set of seedtranslation pairs, both without any LLM fine-tuning, as well as 3) standardBLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-sourcetext-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on twostandard BLI benchmarks covering a range of typologically diverse languages.Our work is the first to demonstrate strong BLI capabilities of text-to-textmLLMs. The results reveal that few-shot prompting with in-context examples fromnearest neighbours achieves the best performance, establishing newstate-of-the-art BLI scores for many language pairs. We also conduct a seriesof in-depth analyses and ablation studies, providing more insights on BLI with(m)LLMs, also along with their limitations.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, -964,an early evaluation of gpt4v(ision),"['Yang Wu', 'Shilong Wang', 'Hao Yang', 'Tian Zheng', 'Hongbo Zhang', 'Yanyan Zhao', 'Bing Qin']",http://arxiv.org/pdf/2310.16534v1.pdf,2023-10-25,," In this paper, we evaluate different abilities of GPT-4V including visualunderstanding, language understanding, visual puzzle solving, and understandingof other modalities such as depth, thermal, video, and audio. To estimateGPT-4V's performance, we manually construct 656 test instances and carefullyevaluate the results of GPT-4V. The highlights of our findings are as follows:(1) GPT-4V exhibits impressive performance on English visual-centric benchmarksbut fails to recognize simple Chinese texts in the images; (2) GPT-4V showsinconsistent refusal behavior when answering questions related to sensitivetraits such as gender, race, and age; (3) GPT-4V obtains worse results thanGPT-4 (API) on language understanding tasks including general languageunderstanding benchmarks and visual commonsense knowledge evaluationbenchmarks; (4) Few-shot prompting can improve GPT-4V's performance on bothvisual understanding and language understanding; (5) GPT-4V struggles to findthe nuances between two similar images and solve the easy math picture puzzles;(6) GPT-4V shows non-trivial performance on the tasks of similar modalities toimage, such as video and thermal. Our experimental results reveal the abilityand limitations of GPT-4V and we hope our paper can provide some insights intothe application and research of GPT-4V.",,arXiv,"['cs.cl', 'cs.cv']",, -965,you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation,"['Allyson Ettinger', 'Jena D. Hwang', 'Valentina Pyatkin', 'Chandra Bhagavatula', 'Yejin Choi']",http://arxiv.org/pdf/2310.17793v1.pdf,2023-10-26,," Large language models (LLMs) show amazing proficiency and fluency in the useof language. Does this mean that they have also acquired insightful linguisticknowledge about the language, to an extent that they can serve as an ""expertlinguistic annotator""? In this paper, we examine the successes and limitationsof the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaningstructure, focusing on the Abstract Meaning Representation (AMR; Banarescu etal. 2013) parsing formalism, which provides rich graphical representations ofsentence meaning structure while abstracting away from surface forms. 
Wecompare models' analysis of this semantic structure across two settings: 1)direct production of AMR parses based on zero- and few-shot prompts, and 2)indirect partial reconstruction of AMR via metalinguistic natural languagequeries (e.g., ""Identify the primary event of this sentence, and the predicatecorresponding to that event.""). Across these settings, we find that models canreliably reproduce the basic format of AMR, and can often capture core event,argument, and modifier structure -- however, model outputs are prone tofrequent and major errors, and holistic analysis of parse acceptability showsthat even with few-shot demonstrations, models have virtually 0% success inproducing fully accurate parses. Eliciting natural language responses producessimilar patterns of errors. Overall, our findings indicate that these modelsout-of-the-box can capture aspects of semantic structure, but there remain keylimitations in their ability to support fully accurate semantic analyses orparses.",,arXiv,"['cs.cl', 'cs.ai']",, -966,styleaware radiology report generation with radgraph and fewshot prompting,"['Benjamin Yan', 'Ruochen Liu', 'David E. Kuo', 'Subathra Adithan', 'Eduardo Pontes Reis', 'Stephen Kwak', 'Vasantha Kumar Venugopal', ""Chloe P. O'Connell"", 'Agustina Saenz', 'Pranav Rajpurkar', 'Michael Moor']",http://arxiv.org/pdf/2310.17811v2.pdf,2023-10-26,," Automatically generated reports from medical images promise to improve theworkflow of radiologists. Existing methods consider an image-to-report modelingtask by directly generating a fully-fledged report from an image. However, thisconflates the content of the report (e.g., findings and their attributes) withits style (e.g., format and choice of words), which can lead to clinicallyinaccurate reports. To address this, we propose a two-step approach forradiology report generation. First, we extract the content from an image; then,we verbalize the extracted content into a report that matches the style of aspecific radiologist. For this, we leverage RadGraph -- a graph representationof reports -- together with large language models (LLMs). In our quantitativeevaluations, we find that our approach leads to beneficial performance. Ourhuman evaluation with clinical raters highlights that the AI-generated reportsare indistinguishably tailored to the style of individual radiologist despiteleveraging only a few examples as context.",,arXiv,"['cs.ai', 'cs.cl']",, -967,mentallama interpretable mental health analysis on social media with large language models,"['Kailai Yang', 'Tianlin Zhang', 'Ziyan Kuang', 'Qianqian Xie', 'Sophia Ananiadou', 'Jimin Huang']",http://arxiv.org/pdf/2309.13567v2.pdf,2023-09-24,," With the development of web technology, social media texts are becoming arich source for automatic mental health analysis. As traditional discriminativemethods bear the problem of low interpretability, the recent large languagemodels have been explored for interpretable mental health analysis on socialmedia, which aims to provide detailed explanations along with predictions. Theresults show that ChatGPT can generate approaching-human explanations for itscorrect classifications. However, LLMs still achieve unsatisfactoryclassification performance in a zero-shot/few-shot manner. Domain-specificfinetuning is an effective solution, but faces 2 challenges: 1) lack ofhigh-quality training data. 2) no open-source LLMs for interpretable mentalhealth analysis were released to lower the finetuning cost. 
To alleviate theseproblems, we build the first multi-task and multi-source interpretable mentalhealth instruction (IMHI) dataset on social media, with 105K data samples. Theraw social media data are collected from 10 existing sources covering 8 mentalhealth analysis tasks. We use expert-written few-shot prompts and collectedlabels to prompt ChatGPT and obtain explanations from its responses. To ensurethe reliability of the explanations, we perform strict automatic and humanevaluations on the correctness, consistency, and quality of generated data.Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA,the first open-source LLM series for interpretable mental health analysis withinstruction-following capability. We also evaluate the performance ofMentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where theircorrectness for making predictions and the quality of explanations areexamined. The results show that MentalLLaMA approaches state-of-the-artdiscriminative methods in correctness and generates high-quality explanations.",,arXiv,['cs.cl'],, -968,acecoder utilizing existing code to enhance code generation,"['Jia Li', 'Yunfei Zhao', 'Yongmin Li', 'Ge Li', 'Zhi Jin']",http://arxiv.org/pdf/2303.17780v3.pdf,2023-03-31,," Large Language Models (LLMs) have shown great success in code generation.LLMs take as the input a prompt and output the code. A key question is how tomake prompts (i.e., Prompting Techniques). Existing prompting techniques aredesigned for natural language generation and have low accuracy in codegeneration. In this paper, we propose a new prompting technique named AceCoder. Ourmotivation is that code generation meets two unique challenges (i.e.,requirement understanding and code implementation). AceCoder contains two novelmechanisms (i.e., guided code generation and example retrieval) to solve thesechallenges. (1) Guided code generation asks LLMs first to analyze requirementsand output an intermediate preliminary (e.g., test cases). The preliminary isused to clarify requirements and tell LLMs ""what to write"". (2) Exampleretrieval selects similar programs as examples in prompts, which provide lotsof relevant content (e.g., algorithms, APIs) and teach LLMs ""how to write"". Weapply AceCoder to three LLMs (e.g., Codex) and evaluate it on three publicbenchmarks using the Pass@k. Results show that AceCoder can significantlyimprove the performance of LLMs on code generation. (1) In terms of Pass@1,AceCoder outperforms the state-of-the-art baseline by up to 56.4% in MBPP,70.7% in MBJP, and 88.4% in MBJSP. (2) AceCoder is effective in LLMs withdifferent sizes (i.e., 6B to 13B) and different languages (i.e., Python, Java,and JavaScript). (3) Human evaluation shows human developers prefer programsfrom AceCoder.",,arXiv,"['cs.se', 'cs.ai']",, -969,compositional semantic parsing with large language models,"['Andrew Drozdov', 'Nathanael Schärli', 'Ekin Akyürek', 'Nathan Scales', 'Xinying Song', 'Xinyun Chen', 'Olivier Bousquet', 'Denny Zhou']",http://arxiv.org/pdf/2209.15003v2.pdf,2022-09-29,," Humans can reason compositionally when presented with new tasks. Previousresearch shows that appropriate prompting techniques enable large languagemodels (LLMs) to solve artificial compositional generalization tasks such asSCAN. In this work, we identify additional challenges in more realisticsemantic parsing tasks with larger vocabulary and refine these promptingtechniques to address them. 
Our best method is based on least-to-mostprompting: it decomposes the problem using prompting-based syntactic parsing,then uses this decomposition to select appropriate exemplars and tosequentially generate the semantic parse. This method allows us to set a newstate of the art for CFQ while requiring only 1% of the training data used bytraditional approaches. Due to the general nature of our approach, we expectsimilar efforts will lead to new results in other tasks and domains, especiallyfor knowledge-intensive applications.",,arXiv,"['cs.cl', 'cs.ai']",, -970,gembamqm detecting translation quality error spans with gpt4,"['Tom Kocmi', 'Christian Federmann']",http://arxiv.org/pdf/2310.13988v1.pdf,2023-10-21,," This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed todetect translation quality errors, specifically for the quality estimationsetting without the need for human reference translations. Based on the powerof large language models (LLM), GEMBA-MQM employs a fixed three-shot promptingtechnique, querying the GPT-4 model to mark error quality spans. Compared toprevious works, our method has language-agnostic prompts, thus avoiding theneed for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-artaccuracy for system ranking, we advise caution when using it in academic worksto demonstrate improvements over other methods due to its dependence on theproprietary, black-box GPT model.",,arXiv,['cs.cl'],, -971,utilizing language models for energy load forecasting,"['Hao Xue', 'Flora D. Salim']",http://arxiv.org/pdf/2310.17788v1.pdf,2023-10-26,," Energy load forecasting plays a crucial role in optimizing resourceallocation and managing energy consumption in buildings and cities. In thispaper, we propose a novel approach that leverages language models for energyload forecasting. We employ prompting techniques to convert energy consumptiondata into descriptive sentences, enabling fine-tuning of language models. Byadopting an autoregressive generating approach, our proposed method enablespredictions of various horizons of future energy load consumption. Throughextensive experiments on real-world datasets, we demonstrate the effectivenessand accuracy of our proposed method. Our results indicate that utilizinglanguage models for energy load forecasting holds promise for enhancing energyefficiency and facilitating intelligent decision-making in energy systems.",,arXiv,"['cs.ai', 'cs.cl']",, -972,eliciting topic hierarchies from large language models,"['Grace Li', 'Tao Long', 'Lydia B. Chilton']",http://arxiv.org/pdf/2310.19275v1.pdf,2023-10-30,," Finding topics to write about can be a mentally demanding process. However,topic hierarchies can help writers explore topics of varying levels ofspecificity. In this paper, we use large language models (LLMs) to helpconstruct topic hierarchies. Although LLMs have access to such knowledge, itcan be difficult to elicit due to issues of specificity, scope, and repetition.We designed and tested three different prompting techniques to find one thatmaximized accuracy. We found that prepending the general topic area to a promptyielded the most accurate results with 85% accuracy. 
We discuss applications ofthis research including STEM writing, education, and content creation.",,arXiv,['cs.hc'],, -973,structured chainofthought prompting for code generation,"['Jia Li', 'Ge Li', 'Yongmin Li', 'Zhi Jin']",http://arxiv.org/pdf/2305.06599v3.pdf,2023-05-11,," Large Language Models (LLMs) (e.g., ChatGPT) have shown impressiveperformance in code generation. LLMs take prompts as inputs, andChain-of-Thought (CoT) prompting is the state-of-the-art prompting technique.CoT prompting asks LLMs first to generate CoTs (i.e., intermediate naturallanguage reasoning steps) and then output the code. However, CoT prompting isdesigned for natural language generation and has low accuracy in codegeneration. In this paper, we propose Structured CoTs (SCoTs) and present a novelprompting technique for code generation, named SCoT prompting. Our motivationis source code contains rich structural information and any code can becomposed of three program structures (i.e., sequence, branch, and loopstructures). Intuitively, structured intermediate reasoning steps make forstructured source code. Thus, we ask LLMs to use program structures to buildCoTs, obtaining SCoTs. Then, LLMs generate the final code based on SCoTs.Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to thinkabout how to solve requirements from the view of source code and further theperformance of LLMs in code generation. We apply SCoT prompting to two LLMs(i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval,MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline- CoT prompting by up to 13.79% in Pass@1. (2) Human evaluation shows humandevelopers prefer programs from SCoT prompting. (3) SCoT prompting is robust toexamples and achieves substantial improvements.",,arXiv,"['cs.se', 'cs.cl']",, -974,the impact of ai in physics education a comprehensive review from gcse to university levels,"['Will Yeadon', 'Tom Hardy']",http://arxiv.org/pdf/2309.05163v1.pdf,2023-09-10,," With the rapid evolution of Artificial Intelligence (AI), its potentialimplications for higher education have become a focal point of interest. Thisstudy delves into the capabilities of AI in Physics Education and offersactionable AI policy recommendations. Using a Large Language Model (LLM), weassessed its ability to answer 1337 Physics exam questions spanning GCSE,A-Level, and Introductory University curricula. We employed various AIprompting techniques: Zero Shot, In Context Learning, and ConfirmatoryChecking, which merges Chain of Thought reasoning with Reflection. The AI'sproficiency varied across academic levels: it scored an average of 83.4% onGCSE, 63.8% on A-Level, and 37.4% on university-level questions, with anoverall average of 59.9% using the most effective prompting technique. In aseparate test, the LLM's accuracy on 5000 mathematical operations was found todecrease as the number of digits increased. Furthermore, when evaluated as amarking tool, the LLM's concordance with human markers averaged at 50.8%, withnotable inaccuracies in marking straightforward questions, likemultiple-choice. Given these results, our recommendations underscore caution:while current LLMs can consistently perform well on Physics questions atearlier educational stages, their efficacy diminishes with advanced content andcomplex calculations. LLM outputs often showcase novel methods not in thesyllabus, excessive verbosity, and miscalculations in basic arithmetic. 
Thissuggests that at university, there's no substantial threat from LLMs fornon-invigilated Physics questions. However, given the LLMs' considerableproficiency in writing Physics essays and coding abilities, non-invigilatedexaminations of these skills in Physics are highly vulnerable to automatedcompletion by LLMs. This vulnerability also extends to Physics questionspitched at lower academic levels.",,arXiv,['physics.ed-ph'],, -975,languagespecific representation of emotionconcept knowledge causally supports emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhiyuan Liu', 'Dan Zhang']",http://arxiv.org/pdf/2302.09582v4.pdf,2023-02-19,," Understanding how language supports emotion inference remains a topic ofdebate in emotion science. The present study investigated whetherlanguage-derived emotion-concept knowledge would causally support emotioninference by manipulating the language-specific knowledge representations inlarge language models. Using the prompt technique, 14 attributes of emotionconcepts were found to be represented by distinct artificial neuronpopulations. By manipulating these attribute-related neurons, the majority ofthe emotion inference tasks showed performance deterioration compared to randommanipulations. The attribute-specific performance deterioration was related tothe importance of different attributes in human mental space. Our findingsprovide causal evidence in support of a language-based mechanism for emotioninference and highlight the contributions of emotion-concept knowledge.",,arXiv,"['cs.ai', 'cs.cl']",, -976,posqa probe the world models of llms with size comparisons,"['Chang Shu', 'Jiuzhou Han', 'Fangyu Liu', 'Ehsan Shareghi', 'Nigel Collier']",http://arxiv.org/pdf/2310.13394v1.pdf,2023-10-20,," Embodied language comprehension emphasizes that language understanding is notsolely a matter of mental processing in the brain but also involvesinteractions with the physical and social environment. With the explosivegrowth of Large Language Models (LLMs) and their already ubiquitous presence inour daily lives, it is becoming increasingly necessary to verify theirreal-world understanding. Inspired by cognitive theories, we propose POSQA: aPhysical Object Size Question Answering dataset with simple size comparisonquestions to examine the extremity and analyze the potential mechanisms of theembodied comprehension of the latest LLMs. We show that even the largest LLMs today perform poorly under the zero-shotsetting. We then push their limits with advanced prompting techniques andexternal knowledge augmentation. Furthermore, we investigate whether theirreal-world comprehension primarily derives from contextual information orinternal weights and analyse the impact of prompt formats and report bias ofdifferent objects. 
Our results show that real-world understanding that LLMsshaped from textual data can be vulnerable to deception and confusion by thesurface form of prompts, which makes it less aligned with human behaviours.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, -977,musr testing the limits of chainofthought with multistep soft reasoning,"['Zayne Sprague', 'Xi Ye', 'Kaj Bostrom', 'Swarat Chaudhuri', 'Greg Durrett']",http://arxiv.org/pdf/2310.16049v1.pdf,2023-10-24,," While large language models (LLMs) equipped with techniques likechain-of-thought prompting have demonstrated impressive capabilities, theystill fall short in their ability to reason robustly in complex settings.However, evaluating LLM reasoning is challenging because system capabilitiescontinue to grow while benchmark datasets for tasks like logical deduction haveremained static. We introduce MuSR, a dataset for evaluating language models onmultistep soft reasoning tasks specified in a natural language narrative. Thisdataset has two crucial features. First, it is created through a novelneurosymbolic synthetic-to-natural generation algorithm, enabling theconstruction of complex reasoning instances that challenge GPT-4 (e.g., murdermysteries roughly 1000 words in length) and which can be scaled further as morecapable LLMs are released. Second, our dataset instances are free textnarratives corresponding to real-world domains of reasoning; this makes itsimultaneously much more challenging than other synthetically-craftedbenchmarks while remaining realistic and tractable for human annotators tosolve with high accuracy. We evaluate a range of LLMs and prompting techniqueson this dataset and characterize the gaps that remain for techniques likechain-of-thought to perform robust reasoning.",,arXiv,['cs.cl'],, -978,"supercharging academic writing with generative ai framework, techniques, and caveats",['Zhicheng Lin'],http://arxiv.org/pdf/2310.17143v1.pdf,2023-10-26,," Academic writing is an indispensable yet laborious part of the researchenterprise. This Perspective maps out principles and methods for usinggenerative artificial intelligence (AI), specifically large language models(LLMs), to elevate the quality and efficiency of academic writing. We introducea human-AI collaborative framework that delineates the rationale (why), process(how), and nature (what) of AI engagement in writing. The framework pinpointsboth short-term and long-term reasons for engagement and their underlyingmechanisms (e.g., cognitive offloading and imaginative stimulation). It revealsthe role of AI throughout the writing process, conceptualized through atwo-stage model for human-AI collaborative writing, and the nature of AIassistance in writing, represented through a model of writing-assistance typesand levels. 
Building on this framework, we describe effective prompting techniques for incorporating AI into the writing routine (outlining, drafting, and editing) as well as strategies for maintaining rigorous scholarship, adhering to varied journal policies, and avoiding overreliance on AI. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.",,arXiv,"['cs.cy', 'cs.cl']",, -979,little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,"['Neema Kotonya', 'Saran Krishnasamy', 'Joel Tetreault', 'Alejandro Jaimes']",http://arxiv.org/pdf/2311.00686v1.pdf,2023-11-01,," This paper describes and analyzes our participation in the 2023 Eval4NLP shared task, which focuses on assessing the effectiveness of prompt-based techniques to empower Large Language Models to handle the task of quality estimation, particularly in the context of evaluating machine translations and summaries. We conducted systematic experiments with various prompting techniques, including standard prompting, prompts informed by annotator instructions, and innovative chain-of-thought prompting. In addition, we integrated these approaches with zero-shot and one-shot learning methods to maximize the efficacy of our evaluation procedures. Our work reveals that combining these approaches using a ""small"", open source model (orca_mini_v3_7B) yields competitive results.",,arXiv,['cs.cl'],, -980,can large language models design accurate label functions,"['Naiqing Guan', 'Kaiwen Chen', 'Nick Koudas']",http://arxiv.org/pdf/2311.00739v1.pdf,2023-11-01,," Programmatic weak supervision methodologies facilitate the expedited labeling of extensive datasets through the use of label functions (LFs) that encapsulate heuristic data sources. Nonetheless, the creation of precise LFs necessitates domain expertise and substantial endeavors. Recent advances in pre-trained language models (PLMs) have exhibited substantial potential across diverse tasks. However, the capacity of PLMs to autonomously formulate accurate LFs remains an underexplored domain. In this research, we address this gap by introducing DataSculpt, an interactive framework that harnesses PLMs for the automated generation of LFs. Within DataSculpt, we incorporate an array of prompting techniques, instance selection strategies, and LF filtration methods to explore the expansive design landscape. Ultimately, we conduct a thorough assessment of DataSculpt's performance on 12 real-world datasets, encompassing a range of tasks. This evaluation unveils both the strengths and limitations of contemporary PLMs in LF design.",,arXiv,"['cs.cl', 'cs.db', 'cs.lg', 'h.2.8; i.5.4']",, -981,once boosting contentbased recommendation with both open and closedsource large language models,"['Qijiong Liu', 'Nuo Chen', 'Tetsuya Sakai', 'Xiao-Ming Wu']",http://arxiv.org/pdf/2305.06566v4.pdf,2023-05-11,," Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services. However, existing recommenders face significant challenges in understanding the content of items. Large language models (LLMs), which possess deep semantic comprehension and extensive knowledge from pretraining, have proven to be effective in various natural language processing tasks.
In this study, we explore the potential of leveraging both open- and closed-source LLMs to enhance content-based recommendation. With open-source LLMs, we utilize their deep layers as content encoders, enriching the representation of content at the embedding level. For closed-source LLMs, we employ prompting techniques to enrich the training data at the token level. Through comprehensive experiments, we demonstrate the high effectiveness of both types of LLMs and show the synergistic relationship between them. Notably, we observed a significant relative improvement of up to 19.32% compared to existing state-of-the-art recommendation models. These findings highlight the immense potential of both open- and closed-source of LLMs in enhancing content-based recommendation systems. We will make our code and LLM-generated data available for other researchers to reproduce our results.",,arXiv,"['cs.ir', 'cs.cl']",, -982,crosslingual prompting improving zeroshot chainofthought reasoning across languages,"['Libo Qin', 'Qiguang Chen', 'Fuxuan Wei', 'Shijue Huang', 'Wanxiang Che']",http://arxiv.org/pdf/2310.14799v1.pdf,2023-10-23,," Chain-of-thought (CoT) is capable of eliciting models to explicitly generate reasoning paths, thus promoting reasoning accuracy and attracting increasing attention. Specifically, zero-shot CoT achieves remarkable improvements in a wide range of reasoning tasks by simply instructing the LLM with the prompt ""Let's think step by step!"". Despite the success of zero-shot CoT, the existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages and hindering global development. In this work, we introduce cross-lingual prompting (CLP), aiming to improve zero-shot CoT reasoning across languages. Specifically, CLP consists of two main components: (1) cross-lingual alignment prompting and (2) task-specific solver prompting. The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task. In addition, we further introduce cross-lingual self-consistent prompting (CLSP) to ensemble different reasoning paths across languages. Our experimental evaluations on several benchmarks demonstrate that CLP and CLSP significantly outperform the existing prompting methods and achieve state-of-the-art performance. We hope this work will inspire further breakthroughs in cross-lingual CoT.",,arXiv,"['cs.cl', 'cs.ai']",, -983,hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks,"['Yihong Ma', 'Ning Yan', 'Jiayu Li', 'Masood Mortazavi', 'Nitesh V. Chawla']",http://arxiv.org/pdf/2310.15318v1.pdf,2023-10-23,," Graphs have emerged as a natural choice to represent and analyze the intricate patterns and rich information of the Web, enabling applications such as online page classification and social recommendation. The prevailing ""pre-train, fine-tune"" paradigm has been widely adopted in graph machine learning tasks, particularly in scenarios with limited labeled nodes. However, this approach often exhibits a misalignment between the training objectives of pretext tasks and those of downstream tasks. This gap can result in the ""negative transfer"" problem, wherein the knowledge gained from pre-training adversely affects performance in the downstream tasks.
The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a ""pre-train, prompt"" paradigm to graphs as an alternative. However, existing graph prompting techniques are tailored to homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a general post-training prompting framework to improve the predictive performance of pre-trained heterogeneous graph neural networks (HGNNs). The key is the design of a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt, with the aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-view neighborhood aggregation mechanism, capturing the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification.",,arXiv,"['cs.lg', 'cs.ai']",, -984,llm4dyg can large language models solve problems on dynamic graphs,"['Zeyang Zhang', 'Xin Wang', 'Ziwei Zhang', 'Haoyang Li', 'Yijian Qin', 'Simin Wu', 'Wenwu Zhu']",http://arxiv.org/pdf/2310.17110v1.pdf,2023-10-26,," In an era marked by the increasing adoption of Large Language Models (LLMs) for various tasks, there is a growing focus on exploring LLMs' capabilities in handling web data, particularly graph data. Dynamic graphs, which capture temporal network evolution patterns, are ubiquitous in real-world web data. Evaluating LLMs' competence in understanding spatial-temporal information on dynamic graphs is essential for their adoption in web applications, which remains unexplored in the literature. In this paper, we bridge the gap via proposing to evaluate LLMs' spatial-temporal understanding abilities on dynamic graphs, to the best of our knowledge, for the first time. Specifically, we propose the LLM4DyG benchmark, which includes nine specially designed tasks considering the capability evaluation of LLMs from both temporal and spatial dimensions. Then, we conduct extensive experiments to analyze the impacts of different data generators, data statistics, prompting techniques, and LLMs on the model performance. Finally, we propose Disentangled Spatial-Temporal Thoughts (DST2) for LLMs on dynamic graphs to enhance LLMs' spatial-temporal understanding abilities. Our main observations are: 1) LLMs have preliminary spatial-temporal understanding abilities on dynamic graphs, 2) Dynamic graph tasks show increasing difficulties for LLMs as the graph size and density increase, while not sensitive to the time span and data generation mechanism, 3) the proposed DST2 prompting method can help to improve LLMs' spatial-temporal understanding abilities on dynamic graphs for most tasks. The data and codes will be open-sourced at publication time.",,arXiv,['cs.lg'],, -985,which is better exploring prompting strategy for llmbased metrics,"['Joonghoon Kim', 'Saeran Park', 'Kiyoon Jeong', 'Sangmin Lee', 'Seung Hun Han', 'Jiyoon Lee', 'Pilsung Kang']",http://arxiv.org/pdf/2311.03754v1.pdf,2023-11-07,," This paper describes the DSBA submissions to the Prompting Large Language Models as Explainable Metrics shared task, where systems were submitted to two tracks: small and large summarization tracks. With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has become increasingly paramount.
Traditional similarity-based metrics such as BLEU and ROUGE have shown to misalign with human evaluation and are ill-suited for open-ended generation tasks. To address this issue, we explore the potential capability of LLM-based metrics, especially leveraging open-source LLMs. In this study, wide range of prompts and prompting techniques are systematically analyzed with three approaches: prompting strategy, score aggregation, and explainability. Our research focuses on formulating effective prompt templates, determining the granularity of NLG quality scores and assessing the impact of in-context examples on LLM-based evaluation. Furthermore, three aggregation strategies are compared to identify the most reliable method for aggregating NLG quality scores. To examine explainability, we devise a strategy that generates rationales for the scores and analyzes the characteristics of the explanation produced by the open-source LLMs. Extensive experiments provide insights regarding evaluation capabilities of open-source LLMs and suggest effective prompting strategies.",,arXiv,['cs.cl'],, -986,autonomous treesearch ability of large language models,"['Zheyu Zhang', 'Zhuorui Ye', 'Yikang Shen', 'Chuang Gan']",http://arxiv.org/pdf/2310.10686v1.pdf,2023-10-14,," Large Language Models have excelled in remarkable reasoning capabilities with advanced prompting techniques, but they fall short on tasks that require exploration, strategic foresight, and sequential decision-making. Recent works propose to utilize external programs to define search logic, such that LLMs can perform passive tree search to solve more challenging reasoning tasks. Though impressive results have been achieved, there are several fundamental limitations of these approaches. First, passive tree searches are not efficient as they usually require multiple rounds of LLM API calls to solve one single problem. Moreover, passive search methods are not flexible since they need task-specific program designs. Then a natural question arises: can we maintain the tree-search capability of LLMs without the aid of external programs, and can still generate responses that clearly demonstrate the process of a tree-structure search? To this end, we propose a new concept called autonomous tree-search ability of LLM, which can automatically generate a response containing search trajectories for the correct answer. Concretely, we perform search trajectories using capable LLM API via a fixed system prompt, allowing them to perform autonomous tree-search (ATS) right out of the box. Experiments on 4 puzzle games demonstrate our method can achieve huge improvements. The ATS-BFS method outperforms the Chain of Thought approach by achieving an average accuracy improvement of 33%. Compared to Tree of Thoughts, it requires 65.6% or 47.7% less GPT-api cost to attain a comparable level of accuracy. Moreover, we have collected data using the ATS prompt method and fine-tuned LLaMA. This approach yield a greater improvement compared to the ones fine-tuned on CoT data. Specifically, it outperforms CoT-tuned LLaMAs by an average of 40.6% and 38.5% for LLaMA2-7B and LLaMA2-13B, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, -987,s$^3$hqa a threestage approach for multihop texttable hybrid question answering,"['Fangyu Lei', 'Xiang Li', 'Yifan Wei', 'Shizhu He', 'Yiming Huang', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2305.11725v1.pdf,2023-05-19,," Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task.
Existing models mainly adopt a retriever-reader framework, which have several deficiencies, such as noisy labeling in training retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework S3HQA, which comprises of retriever, selector, and reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adapting a reading comprehension module like in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator~(first time used in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.",,arXiv,['cs.cl'],, -988,autoplan automatic planning of interactive decisionmaking tasks with large language models,"['Siqi Ouyang', 'Lei Li']",http://arxiv.org/pdf/2305.15064v3.pdf,2023-05-24,," Recent large language models (LLMs) are promising for making decisions in grounded environments. However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8% on HotpotQA. The code is available at https://github.com/owaski/AutoPlan.",,arXiv,['cs.cl'],, -989,a mlllm pairing for better code comment classification,['Hanna Abi Akl'],http://arxiv.org/pdf/2310.10275v1.pdf,2023-10-13,," The ""Information Retrieval in Software Engineering (IRSE)"" at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance.
Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.",,arXiv,"['cs.se', 'cs.ai']",, -990,multistage large language model correction for speech recognition,"['Jie Pu', 'Thai-Son Nguyen', 'Sebastian Stüker']",http://arxiv.org/pdf/2310.11532v1.pdf,2023-10-17,," In this paper, we investigate the usage of large language models (LLMs) to improve the performance of competitive speech recognition systems. Different from traditional language models that focus on one single data domain, the rise of LLMs brings us the opportunity to push the limit of state-of-the-art ASR performance, and at the same time to achieve higher robustness and generalize effectively across multiple domains. Motivated by this, we propose a novel multi-stage approach to combine traditional language model re-scoring and LLM prompting. Specifically, the proposed method has two stages: the first stage uses a language model to re-score an N-best list of ASR hypotheses and run a confidence check; The second stage uses prompts to a LLM to perform ASR error correction on less confident results from the first stage. Our experimental results demonstrate the effectiveness of the proposed method by showing a 10% ~ 20% relative improvement in WER over a competitive ASR system -- across multiple test domains.",,arXiv,"['cs.cl', 'eess.as']",, -991,omnifill domainagnostic form filling suggestions using multifaceted context,"['Timothy J. Aveni', 'Armando Fox', 'Björn Hartmann']",http://arxiv.org/pdf/2310.17826v1.pdf,2023-10-27,," Predictive suggestion systems offer contextually-relevant text entry completions. Existing approaches, like autofill, often excel in narrowly-defined domains but fail to generalize to arbitrary workflows. We introduce a conceptual framework to analyze the compound demands of a particular suggestion context, yielding unique opportunities for large language models (LLMs) to infer suggestions for a wide range of domain-agnostic form-filling tasks that were out of reach with prior approaches. We explore these opportunities in OmniFill, a prototype that collects multi-faceted context including browsing and text entry activity to construct an LLM prompt that offers suggestions in situ for arbitrary structured text entry interfaces. Through a user study with 18 participants, we found that OmniFill offered valuable suggestions and we identified four themes that characterize users' behavior and attitudes: an ""opportunistic scrapbooking"" approach; a trust placed in the system; value in partial success; and a need for visibility into prompt context.",,arXiv,['cs.hc'],, -992,knowledgeinfused prompting assessing and advancing clinical text data generation with large language models,"['Ran Xu', 'Hejie Cui', 'Yue Yu', 'Xuan Kan', 'Wenqi Shi', 'Yuchen Zhuang', 'Wei Jin', 'Joyce Ho', 'Carl Yang']",http://arxiv.org/pdf/2311.00287v1.pdf,2023-11-01,," Clinical natural language processing requires methods that can address domain-specific challenges, such as complex medical terminology and clinical contexts. Recently, large language models (LLMs) have shown promise in this domain. Yet, their direct deployment can lead to privacy issues and are constrained by resources. To address this challenge, we delve into synthetic clinical text generation using LLMs for clinical NLP tasks. We propose an innovative, resource-efficient approach, ClinGen, which infuses knowledge into the process.
Our model involves clinical knowledge extraction and context-informed LLM prompting. Both clinical topics and writing styles are drawn from external domain-specific knowledge graphs and LLMs to guide data generation. Our extensive empirical study across 7 clinical NLP tasks and 16 datasets reveals that ClinGen consistently enhances performance across various tasks, effectively aligning the distribution of real datasets and significantly enriching the diversity of generated training instances. We will publish our code and all the generated data in \url{https://github.com/ritaranx/ClinGen}.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, -993,mathdial a dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems,"['Jakub Macina', 'Nico Daheim', 'Sankalan Pal Chowdhury', 'Tanmay Sinha', 'Manu Kapur', 'Iryna Gurevych', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.14536v2.pdf,2023-05-23,," While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MathDial and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.",,arXiv,['cs.cl'],, -994,fewshot reranking for multihop qa via language model prompting,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2205.12650v3.pdf,2022-05-25,," We study few-shot reranking for multi-hop QA with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language models prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples -- 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval.
Code available at https://github.com/mukhal/PromptRank",,arXiv,"['cs.cl', 'cs.ir']",, -995,metaincontext learning in large language models,"['Julian Coda-Forno', 'Marcel Binz', 'Zeynep Akata', 'Matthew Botvinick', 'Jane X. Wang', 'Eric Schulz']",http://arxiv.org/pdf/2305.12907v1.pdf,2023-05-22,," Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems where we observe competitive performance to traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environment they are applied purely through meta-in-context learning rather than traditional finetuning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -996,metavl transferring incontext learning ability from language models to visionlanguage models,"['Masoud Monajatipoor', 'Liunian Harold Li', 'Mozhdeh Rouhsedaghat', 'Lin F. Yang', 'Kai-Wei Chang']",http://arxiv.org/pdf/2306.01311v1.pdf,2023-06-02,," Large-scale language models have shown the ability to adapt to a new task via conditioning on a few demonstrations (i.e., in-context learning). However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models? In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to VL domain? Specifically, we first meta-trains a language model to perform in-context learning on NLP tasks (as in MetaICL); then we transfer this model to perform VL tasks by attaching a visual encoder. Our experiments suggest that indeed in-context learning ability can be transferred cross modalities: our model considerably improves the in-context learning capability on VL tasks and can even compensate for the size of the model significantly. On VQA, OK-VQA, and GQA, our method could outperform the baseline model while having 20 times fewer parameters.",,arXiv,['cs.cl'],, -997,an explanation of incontext learning as implicit bayesian inference,"['Sang Michael Xie', 'Aditi Raghunathan', 'Percy Liang', 'Tengyu Ma']",http://arxiv.org/pdf/2111.02080v6.pdf,2021-11-03,," Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output examples. The LM learns from these examples without being explicitly pretrained to learn. Thus, it is unclear what enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining documents have long-range coherence.
Here, the LM must infer a latent document-level concept to generate coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order, and instances where zero-shot is better than few-shot in-context learning.",,arXiv,"['cs.cl', 'cs.lg']",, -998,rethinking the role of scale for incontext learning an interpretabilitybased case study at 66 billion scale,"['Hritik Bansal', 'Karthik Gopalakrishnan', 'Saket Dingliwal', 'Sravan Bodapati', 'Katrin Kirchhoff', 'Dan Roth']",http://arxiv.org/pdf/2212.09095v2.pdf,2022-12-18,," Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to in-context learn-perform a task is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and number of in-context examples. We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, reinforcing arguments by Olsson et al. (arXiv:2209.11895) regarding induction head generality to more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained for in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, -999,a closer look at incontext learning under distribution shifts,"['Kartik Ahuja', 'David Lopez-Paz']",http://arxiv.org/pdf/2305.16704v1.pdf,2023-05-26,," In-context learning, a capability that enables a model to learn from input examples on the fly without necessitating weight updates, is a defining characteristic of large language models. In this work, we follow the setting proposed in (Garg et al., 2022) to better understand the generality and limitations of in-context learning from the lens of the simple yet fundamental task of linear regression. The key question we aim to address is: Are transformers more adept than some natural and simpler architectures at performing in-context learning under varying distribution shifts? To compare transformers, we propose to use a simple architecture based on set-based Multi-Layer Perceptrons (MLPs).
We find that both transformers and set-based MLPs exhibit in-context learning under in-distribution evaluations, but transformers more closely emulate the performance of ordinary least squares (OLS). Transformers also display better resilience to mild distribution shifts, where set-based MLPs falter. However, under severe distribution shifts, both models' in-context learning abilities diminish.",,arXiv,"['cs.lg', 'stat.ml']",, -1000,exploring the relationship between model architecture and incontext learning ability,"['Ivan Lee', 'Nan Jiang', 'Taylor Berg-Kirkpatrick']",http://arxiv.org/pdf/2310.08049v1.pdf,2023-10-12,," What is the relationship between model architecture and the ability to perform in-context learning? In this empirical study, we take the first steps towards answering this question. In particular, we evaluate fifteen model architectures across a suite of synthetic in-context learning tasks. The selected architectures represent a broad range of paradigms, including recurrent and convolution-based neural networks, transformers, and emerging attention alternatives. We discover that all considered architectures can perform in-context learning under certain conditions. However, contemporary architectures are found to be the best performing, especially as task complexity grows. Additionally, our follow-up experiments delve into various factors that influence in-context learning. We observe varied sensitivities among architectures with respect to hyperparameter settings. Our study of training dynamics reveals that certain architectures exhibit a smooth, progressive learning trajectory, while others demonstrate periods of stagnation followed by abrupt mastery of the task. Finally, and somewhat surprisingly, we find that several emerging attention alternatives are more robust in-context learners than transformers; since such approaches have constant-sized memory footprints at inference time, this result opens the future possibility of scaling up in-context learning to vastly larger numbers of in-context examples.",,arXiv,['cs.lg'],, -1001,what can transformers learn incontext a case study of simple function classes,"['Shivam Garg', 'Dimitris Tsipras', 'Percy Liang', 'Gregory Valiant']",http://arxiv.org/pdf/2208.01066v3.pdf,2022-08-01,," In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn ""most"" functions from this class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions -- that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator.
In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the model and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes -- namely sparse linear functions, two-layer neural networks, and decision trees -- with performance that matches or exceeds task-specific learning algorithms. Our code and models are available at https://github.com/dtsip/in-context-learning .",,arXiv,"['cs.cl', 'cs.lg']",, -1002,"structured prompting scaling incontext learning to 1,000 examples","['Yaru Hao', 'Yutao Sun', 'Li Dong', 'Zhixiong Han', 'Yuxian Gu', 'Furu Wei']",http://arxiv.org/pdf/2212.06713v1.pdf,2022-12-13,," Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective to absorb supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism. So we can scale the number of exemplars with linear complexity instead of quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structured-prompting.",,arXiv,['cs.cl'],, -1003,pretraining to learn in context,"['Yuxian Gu', 'Li Dong', 'Furu Wei', 'Minlie Huang']",http://arxiv.org/pdf/2305.09137v1.pdf,2023-05-16,," In-context learning, where pre-trained language models learn to perform tasks from task examples and instructions in their contexts, has attracted much attention in the NLP community. However, the ability of in-context learning is not fully exploited because language models are not explicitly trained to learn in context. To this end, we propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability by pre-training the model on a large collection of ""intrinsic tasks"" in the general plain-text corpus using the simple language modeling objective. PICL encourages the model to infer and perform tasks by conditioning on the contexts while maintaining task generalization of pre-trained models. We evaluate the in-context learning performance of the model trained with PICL on seven widely-used text classification datasets and the Super-NaturalInstrctions benchmark, which contains 100+ NLP tasks formulated to text generation. Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x parameters.
The code is publicly available at https://github.com/thu-coai/PICL.",,arXiv,['cs.cl'],, -1004,raven incontext learning with retrieval augmented encoderdecoder language models,"['Jie Huang', 'Wei Ping', 'Peng Xu', 'Mohammad Shoeybi', 'Kevin Chen-Chuan Chang', 'Bryan Catanzaro']",http://arxiv.org/pdf/2308.07922v1.pdf,2023-08-15,," In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models. We first conduct a comprehensive analysis of the state-of-the-art ATLAS model and identify its limitations in in-context learning, primarily due to a mismatch between pretraining and testing, as well as a restricted context length. To address these issues, we propose RAVEN, a model that combines retrieval-augmented masked language modeling and prefix language modeling. We further introduce Fusion-in-Context Learning to enhance the few-shot performance by enabling the model to leverage more in-context examples without requiring additional training or model modifications. Through extensive experiments, we demonstrate that RAVEN significantly outperforms ATLAS and achieves results comparable to the most advanced language models in certain scenarios, despite having substantially fewer parameters. Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning and encourages further research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1005,incontext learning dynamics with random binary sequences,"['Eric J. Bigelow', 'Ekdeep Singh Lubana', 'Robert P. Dick', 'Hidenori Tanaka', 'Tomer D. Ullman']",http://arxiv.org/pdf/2310.17639v1.pdf,2023-10-26,," Large language models (LLMs) trained on huge corpora of text datasets demonstrate complex, emergent capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often mysterious, and different prompts can elicit different capabilities through in-context learning. We propose a Cognitive Interpretability framework that enables us to analyze in-context learning dynamics to understand latent concepts in LLMs underlying behavioral patterns. This provides a more nuanced understanding than success-or-failure evaluation benchmarks, but does not require observing internal activations as a mechanistic interpretation of circuits would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study dynamics of in-context learning by manipulating properties of context data, such as sequence length. In the latest GPT-3.5+ models, we find emergent abilities to generate pseudo-random numbers and learn basic formal languages, with striking in-context learning dynamics where model outputs transition sharply from pseudo-random behaviors to deterministic repetition.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",, -1006,incontext learning with many demonstration examples,"['Mukai Li', 'Shansan Gong', 'Jiangtao Feng', 'Yiheng Xu', 'Jun Zhang', 'Zhiyong Wu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.04931v1.pdf,2023-02-09,," Large pre-training language models (PLMs) have shown promising in-context learning abilities. However, due to the backbone transformer architecture, existing PLMs are bottlenecked by the memory and computational cost when scaling up to a large context size, leaving instruction tuning and in-context learning of many demonstration examples, as well as long-range language modeling under-explored.
In this study, we propose a long-range language model EVALM based on an efficient transformer mechanism. EVALM is trained with 8k tokens per batch line and can test up to 256k-lengthed contexts with extrapolation, 128 times to the limit of existing PLMs (e.g. GPT3). Based on EVALM, we scale up the size of examples efficiently in both instruction tuning and in-context learning to explore the boundary of the benefits from more annotated data. Experimental results on a diverse set of tasks show that EVALM achieves 4.1% higher accuracy on average, and the average length of achieving the best accuracy score over tasks is around 12k. We find that in-context learning can achieve higher performance with more demonstrations under many-shot instruction tuning (8k), and further extending the length of instructions (16k) can further improve the upper bound of scaling in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, -1007,the learnability of incontext learning,"['Noam Wies', 'Yoav Levine', 'Amnon Shashua']",http://arxiv.org/pdf/2303.07895v1.pdf,2023-03-14,," In-context learning is a surprising and important phenomenon that emerged when modern language models were scaled to billions of learned parameters. Without modifying a large language model's weights, it can be tuned to perform various downstream natural language tasks simply by including concatenated training examples of these tasks in its input. Though disruptive for many practical applications of large language models, this emergent learning paradigm is not well understood from a theoretical perspective. In this paper, we propose a first-of-its-kind PAC based framework for in-context learnability, and use it to provide the first finite sample complexity results for the in-context learning setup. Our framework includes an initial pretraining phase, which fits a function to the pretraining distribution, and then a second in-context learning phase, which keeps this function constant and concatenates training examples of the downstream task in its input. We use our framework in order to prove that, under mild assumptions, when the pretraining distribution is a mixture of latent tasks (a model often considered for natural language pretraining), these tasks can be efficiently learned via in-context learning, even though the model's weights are unchanged and the input significantly diverges from the pretraining distribution. Our theoretical analysis reveals that in this setting, in-context learning is more about identifying the task than about learning it, a result which is in line with a series of recent empirical findings. We hope that the in-context learnability framework presented in this paper will facilitate future progress towards a deeper understanding of this important new learning paradigm.",,arXiv,['cs.cl'],, -1008,sinc selfsupervised incontext learning for visionlanguage tasks,"['Yi-Syuan Chen', 'Yun-Zhu Song', 'Cheng Yu Yeo', 'Bei Liu', 'Jianlong Fu', 'Hong-Han Shuai']",http://arxiv.org/pdf/2307.07742v2.pdf,2023-07-15,," Large Pre-trained Transformers exhibit an intriguing capacity for in-context learning. Without gradient updates, these models can rapidly construct new predictors from demonstrations presented in the inputs. Recent works promote this ability in the vision-language domain by incorporating visual information into large language models that can already make in-context predictions. However, these methods could inherit issues in the language domain, such as template sensitivity and hallucination.
Also, the scale of these language models raises a significant demand for computations, making learning and operating these models resource-intensive. To this end, we raise a question: ``How can we enable in-context learning without relying on the intrinsic in-context ability of large language models?"". To answer it, we propose a succinct and general framework, Self-supervised IN-Context learning (SINC), that introduces a meta-model to learn on self-supervised prompts consisting of tailored demonstrations. The learned models can be transferred to downstream tasks for making in-context predictions on-the-fly. Extensive experiments show that SINC outperforms gradient-based methods in various vision-language tasks under few-shot settings. Furthermore, the designs of SINC help us investigate the benefits of in-context learning across different tasks, and the analysis further reveals the essential components for the emergence of in-context learning in the vision-language domain.",,arXiv,"['cs.cv', 'cs.ai']",, -1009,selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator,"['Hyuhng Joon Kim', 'Hyunsoo Cho', 'Junyeob Kim', 'Taeuk Kim', 'Kang Min Yoo', 'Sang-goo Lee']",http://arxiv.org/pdf/2206.08082v1.pdf,2022-06-16,," Large-scale pre-trained language models (PLMs) are well-known for being capable of solving a task simply by conditioning a few input-label pairs dubbed demonstrations on a prompt without being explicitly tuned for the desired downstream task. Such a process (i.e., in-context learning), however, naturally leads to high reliance on the demonstrations which are usually selected from external datasets. In this paper, we propose self-generated in-context learning (SG-ICL), which generates demonstrations for in-context learning from PLM itself to minimize the reliance on the external demonstration. We conduct experiments on four different text classification tasks and show SG-ICL significantly outperforms zero-shot learning and is generally worth approximately 0.6 gold training samples. Moreover, our generated demonstrations show more consistent performance with low variance compared to randomly selected demonstrations from the training dataset.",,arXiv,['cs.cl'],, -1010,active example selection for incontext learning,"['Yiming Zhang', 'Shi Feng', 'Chenhao Tan']",http://arxiv.org/pdf/2211.04486v1.pdf,2022-11-08,," With a handful of demonstration examples, large-scale language models show strong capability to perform various tasks by in-context learning from these examples, without any fine-tuning. We demonstrate that in-context learning performance can be highly unstable across samples of examples, indicating the idiosyncrasies of how language models acquire information. We formulate example selection for in-context learning as a sequential decision problem, and propose a reinforcement learning algorithm for identifying generalizable policies to select demonstration examples. For GPT-2, our learned policies demonstrate strong abilities of generalizing to unseen tasks in training, with a $5.8\%$ improvement on average. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models, suggesting emerging capabilities of large language models.",,arXiv,"['cs.cl', 'cs.ai']",, -1011,bayesian optimization of catalysts with incontext learning,"['Mayk Caldas Ramos', 'Shane S. Michtavy', 'Marc D. Porosoff', 'Andrew D.
White']",http://arxiv.org/pdf/2304.05341v1.pdf,2023-04-11,," Large language models (LLMs) are able to do accurate classification with zero or only a few examples (in-context learning). We show a prompting system that enables regression with uncertainty for in-context learning with frozen LLM (GPT-3, GPT-3.5, and GPT-4) models, allowing predictions without features or architecture tuning. By incorporating uncertainty, our approach enables Bayesian optimization for catalyst or molecule optimization using natural language, eliminating the need for training or simulation. Here, we performed the optimization using the synthesis procedure of catalysts to predict properties. Working with natural language mitigates difficulty synthesizability since the literal synthesis procedure is the model's input. We showed that in-context learning could improve past a model context window (maximum number of tokens the model can process at once) as data is gathered via example selection, allowing the model to scale better. Although our method does not outperform all baselines, it requires zero training, feature selection, and minimal computing while maintaining satisfactory performance. We also find Gaussian Process Regression on text embeddings is strong at Bayesian optimization. The code is available in our GitHub repository: https://github.com/ur-whitelab/BO-LIFT",,arXiv,"['physics.chem-ph', 'cs.lg']",, -1012,incontext learning unlocked for diffusion models,"['Zhendong Wang', 'Yifan Jiang', 'Yadong Lu', 'Yelong Shen', 'Pengcheng He', 'Weizhu Chen', 'Zhangyang Wang', 'Mingyuan Zhou']",http://arxiv.org/pdf/2305.01115v2.pdf,2023-05-01,," We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and a text guidance, our model automatically understands the underlying task and performs the same task on a new query image following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks and a diffusion model that takes it as input. The diffusion model is trained jointly over six different tasks using these prompts. The resulting Prompt Diffusion model is the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation on the trained tasks and generalizes effectively to new, unseen vision tasks with their respective prompts. Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Prompt-Diffusion.",,arXiv,['cs.cv'],, -1013,large language models can be lazy learners analyze shortcuts in incontext learning,"['Ruixiang Tang', 'Dehan Kong', 'Longtao Huang', 'Hui Xue']",http://arxiv.org/pdf/2305.17256v2.pdf,2023-05-26,," Large language models (LLMs) have recently shown great potential for in-context learning, where LLMs learn a new task simply by conditioning on a few input-label pairs (prompts). Despite their potential, our understanding of the factors influencing end-task performance and the robustness of in-context learning remains limited. This paper aims to bridge this knowledge gap by investigating the reliance of LLMs on shortcuts or spurious correlations within prompts.
Through comprehensive experiments on classification and extraction tasks, we reveal that LLMs are ""lazy learners"" that tend to exploit shortcuts in prompts for downstream tasks. Additionally, we uncover a surprising finding that larger models are more likely to utilize shortcuts in prompts during inference. Our findings provide a new perspective on evaluating robustness in in-context learning and pose new challenges for detecting and mitigating the use of shortcuts in prompts.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1014,multidimensional evaluation of text summarization with incontext learning,"['Sameer Jain', 'Vaishakh Keshava', 'Swarnashree Mysore Sathyendra', 'Patrick Fernandes', 'Pengfei Liu', 'Graham Neubig', 'Chunting Zhou']",http://arxiv.org/pdf/2306.01200v1.pdf,2023-06-01,," Evaluation of natural language generation (NLG) is complex and multi-dimensional. Generated text can be evaluated for fluency, coherence, factuality, or any other dimensions of interest. Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets. In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning, obviating the need for large training datasets. Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization, establishing state-of-the-art on dimensions such as relevance and factual consistency. We then analyze the effects of factors such as the selection and number of in-context examples on performance. Finally, we study the efficacy of in-context learning based evaluators in evaluating zero-shot summaries written by large language models such as GPT-3.",,arXiv,['cs.cl'],, -1015,exploring the integration of large language models into automatic speech recognition systems an empirical study,"['Zeping Min', 'Jinbo Wang']",http://arxiv.org/pdf/2307.06530v1.pdf,2023-07-13,," This paper explores the integration of Large Language Models (LLMs) into Automatic Speech Recognition (ASR) systems to improve transcription accuracy. The increasing sophistication of LLMs, with their in-context learning capabilities and instruction-following behavior, has drawn significant attention in the field of Natural Language Processing (NLP). Our primary focus is to investigate the potential of using an LLM's in-context learning capabilities to enhance the performance of ASR systems, which currently face challenges such as ambient noise, speaker accents, and complex linguistic contexts. We designed a study using the Aishell-1 and LibriSpeech datasets, with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities. Unfortunately, our initial experiments did not yield promising results, indicating the complexity of leveraging LLM's in-context learning for ASR applications. Despite further exploration with varied settings and models, the corrected sentences from the LLMs frequently resulted in higher Word Error Rates (WER), demonstrating the limitations of LLMs in speech applications.
This paper provides a detailed overview of these experiments, their results, and implications, establishing that using LLMs' in-context learning capabilities to correct potential errors in speech recognition transcriptions is still a challenging task at the current stage.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, -1016,actsql incontext learning for texttosql with automaticallygenerated chainofthought,"['Hanchong Zhang', 'Ruisheng Cao', 'Lu Chen', 'Hongshen Xu', 'Kai Yu']",http://arxiv.org/pdf/2310.17342v1.pdf,2023-10-26,," Recently Large Language Models (LLMs) have been proven to have strong abilities in various domains and tasks. We study the problem of prompt designing in the text-to-SQL task and attempt to improve the LLMs' reasoning ability when generating SQL queries. Besides the trivial few-shot in-context learning setting, we design our chain-of-thought (CoT) prompt with a similar method to schema linking. We provide a method named ACT-SQL to automatically generate auto-CoT exemplars and thus the whole process doesn't need manual labeling. Our approach is cost-saving since we only use the LLMs' API call once when generating one SQL query. Furthermore, we extend our in-context learning method to the multi-turn text-to-SQL task. The experiment results show that the LLMs' performance can benefit from our ACT-SQL approach. Our approach achieves SOTA performance on the Spider dev set among existing in-context learning approaches.",,arXiv,['cs.cl'],, -1017,cosmic data efficient instructiontuning for speech incontext learning,"['Jing Pan', 'Jian Wu', 'Yashesh Gaur', 'Sunit Sivasankaran', 'Zhuo Chen', 'Shujie Liu', 'Jinyu Li']",http://arxiv.org/pdf/2311.02248v1.pdf,2023-11-03,," We present a data and cost efficient way of incorporating the speech modality into a large language model (LLM). The resulting multi-modal LLM is a COntextual Speech Model with Instruction-following/in-context-learning Capabilities - COSMIC. Speech comprehension test question-answer (SQA) pairs are generated using GPT-3.5 based on the speech transcriptions as a part of the supervision for the instruction tuning. With fewer than 20M trainable parameters and as little as 450 hours of English speech data for SQA generation, COSMIC exhibits emergent instruction-following and in-context learning capabilities in speech-to-text tasks. The model is able to follow the given text instructions to generate text response even on the unseen EN$\to$X speech-to-text translation (S2TT) task with zero-shot setting. We evaluate the model's in-context learning via various tasks such as EN$\to$X S2TT and few-shot domain adaptation. And instruction-following capabilities are evaluated through a contextual biasing benchmark. Our results demonstrate the efficacy of the proposed low cost recipe for building a speech LLM and that with the new instruction-tuning data.",,arXiv,"['cs.cl', 'cs.ai', 'eess.as']",, -1018,thinking about gpt3 incontext learning for biomedical ie think again,"['Bernal Jiménez Gutiérrez', 'Nikolas McNeal', 'Clay Washington', 'You Chen', 'Lang Li', 'Huan Sun', 'Yu Su']",http://arxiv.org/pdf/2203.08410v3.pdf,2022-03-16,," The strong few-shot in-context learning capability of large pre-trained language models (PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine, which feature high and diverse demands of language technologies but also high data annotation costs.
In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks, named entity recognition and relation extraction. We follow the true few-shot setting to avoid overestimating models' few-shot performance by model selection over a large validation set. We also optimize GPT-3's performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. Our in-depth analyses further reveal issues of the in-context learning setting that may be detrimental to information extraction tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides guidance for biomedical researchers and practitioners towards more promising directions such as fine-tuning small PLMs.",,arXiv,"['cs.cl', 'cs.ir']",, -1019,exploring effective factors for improving visual incontext learning,"['Yanpeng Sun', 'Qiang Chen', 'Jian Wang', 'Jingdong Wang', 'Zechao Li']",http://arxiv.org/pdf/2304.04748v1.pdf,2023-04-10,," The In-Context Learning (ICL) is to understand a new task via a few demonstrations (aka. prompt) and predict new inputs without tuning the models. While it has been widely studied in NLP, it is still a relatively new area of research in computer vision. To reveal the factors influencing the performance of visual in-context learning, this paper shows that prompt selection and prompt fusion are two major factors that have a direct impact on the inference performance of visual context learning. Prompt selection is the process of identifying the most appropriate prompt or example to help the model understand new tasks. This is important because providing the model with relevant prompts can help it learn more effectively and efficiently. Prompt fusion involves combining knowledge from different positions within the large-scale visual model. By doing this, the model can leverage the diverse knowledge stored in different parts of the model to improve its performance on new tasks. Based on these findings, we propose a simple framework prompt-SelF for visual in-context learning. Specifically, we first use the pixel-level retrieval method to select a suitable prompt, and then use different prompt fusion methods to activate all the knowledge stored in the large-scale model, and finally ensemble the prediction results obtained from different prompt fusion methods to obtain the final prediction results. And we conduct extensive experiments on single-object segmentation and detection tasks to demonstrate the effectiveness of prompt-SelF. Remarkably, the prompt-SelF has outperformed OSLSM based meta-learning in 1-shot segmentation for the first time. This indicated the great potential of visual in-context learning.
The source code and models will be available at \url{https://github.com/syp2ysy/prompt-SelF}.",,arXiv,['cs.cv'],, -1020,dissecting chainofthought compositionality through incontext filtering and learning,"['Yingcong Li', 'Kartik Sreenivasan', 'Angeliki Giannou', 'Dimitris Papailiopoulos', 'Samet Oymak']",http://arxiv.org/pdf/2305.18869v2.pdf,2023-05-30,," Chain-of-thought (CoT) is a method that enables language models to handle complex reasoning tasks by decomposing them into simpler steps. Despite its success, the underlying mechanics of CoT are not yet fully understood. In an attempt to shed light on this, our study investigates the impact of CoT on the ability of transformers to in-context learn a simple to study, yet general family of compositional functions: multi-layer perceptrons (MLPs). In this setting, we find that the success of CoT can be attributed to breaking down in-context learning of a compositional function into two distinct phases: focusing on and filtering data related to each step of the composition and in-context learning the single-step composition function. Through both experimental and theoretical evidence, we demonstrate how CoT significantly reduces the sample complexity of in-context learning (ICL) and facilitates the learning of complex functions that non-CoT methods struggle with. Furthermore, we illustrate how transformers can transition from vanilla in-context learning to mastering a compositional function with CoT by simply incorporating additional layers that perform the necessary data-filtering for CoT via the attention mechanism. In addition to these test-time benefits, we show CoT helps accelerate pretraining by learning shortcuts to represent complex functions and filtering plays an important role in this process. These findings collectively provide insights into the mechanics of CoT, inviting further investigation of its role in complex reasoning tasks.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, -1021,incontext learning through the bayesian prism,"['Kabir Ahuja', 'Madhur Panwar', 'Navin Goyal']",http://arxiv.org/pdf/2306.04891v1.pdf,2023-06-08,," In-context learning is one of the surprising and useful features of large language models. How it works is an active area of research. Recently, stylized meta-learning-like setups have been devised that train these models on a sequence of input-output pairs $(x, f(x))$ from a function class using the language modeling loss and observe generalization to unseen functions from the same class. One of the main discoveries in this line of research has been that for several problems such as linear regression, trained transformers learn algorithms for learning functions in context. However, the inductive biases of these models resulting in this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. It has been shown that high-capacity transformers mimic the Bayesian predictor for linear regression. In this paper, we show empirical evidence of transformers exhibiting the behavior of this ideal learner across different linear and non-linear function classes. We also extend the previous setups to work in the multitask setting and verify that transformers can do in-context learning in this setup as well and the Bayesian perspective sheds light on this setting also.
Finally, via the example of learning Fourier series, we study the inductive bias for in-context learning. We find that in-context learning may or may not have simplicity bias depending on the pretraining data distribution.",,arXiv,"['cs.lg', 'cs.cl']",, -1022,explore incontext learning for 3d point cloud understanding,"['Zhongbin Fang', 'Xiangtai Li', 'Xia Li', 'Joachim M. Buhmann', 'Chen Change Loy', 'Mengyuan Liu']",http://arxiv.org/pdf/2306.08659v1.pdf,2023-06-14,," With the rise of large-scale models trained on broad data, in-context learning has become a new learning paradigm that has demonstrated significant potential in natural language processing and computer vision tasks. Meanwhile, in-context learning is still largely unexplored in the 3D point cloud domain. Although masked modeling has been successfully applied for in-context learning in 2D vision, directly extending it to 3D point clouds remains a formidable challenge. In the case of point clouds, the tokens themselves are the point cloud positions (coordinates) that are masked during inference. Moreover, position embedding in previous works may inadvertently introduce information leakage. To address these challenges, we introduce a novel framework, named Point-In-Context, designed especially for in-context learning in 3D point clouds, where both inputs and outputs are modeled as coordinates for each task. Additionally, we propose the Joint Sampling module, carefully designed to work in tandem with the general point sampling operator, effectively resolving the aforementioned technical issues. We conduct extensive experiments to validate the versatility and adaptability of our proposed methods in handling a wide range of tasks. Furthermore, with a more effective prompt selection strategy, our framework surpasses the results of individually trained models.",,arXiv,['cs.cv'],, -1023,dqlore dual queries with low rank approximation reranking for incontext learning,"['Jing Xiong', 'Zixuan Li', 'Chuanyang Zheng', 'Zhijiang Guo', 'Yichun Yin', 'Enze Xie', 'Zhicheng Yang', 'Qingxing Cao', 'Haiming Wang', 'Xiongwei Han', 'Jing Tang', 'Chengming Li', 'Xiaodan Liang']",http://arxiv.org/pdf/2310.02954v4.pdf,2023-10-04,," Recent advances in natural language processing, primarily propelled by Large Language Models (LLMs), have showcased their remarkable capabilities grounded in in-context learning. A promising avenue for guiding LLMs in intricate reasoning tasks involves the utilization of intermediate reasoning steps within the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies in the effective selection of exemplars for facilitating in-context learning. In this study, we introduce a framework that leverages Dual Queries and Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars for in-context learning. Dual Queries first query LLM to obtain LLM-generated knowledge such as CoT, then query the retriever to obtain the final exemplars via both question and the knowledge. Moreover, for the second query, LoRe employs dimensionality reduction techniques to refine exemplar selection, ensuring close alignment with the input question's knowledge. Through extensive experiments, we demonstrate that DQ-LoRe significantly outperforms prior state-of-the-art methods in the automatic selection of exemplars for GPT-4, enhancing performance from 92.5% to 94.2%.
Our comprehensive analysis further reveals that DQ-LoRe consistently outperforms retrieval-based approaches in terms of both performance and adaptability, especially in scenarios characterized by distribution shifts. DQ-LoRe pushes the boundaries of in-context learning and opens up new avenues for addressing complex reasoning challenges. We will release the code soon.",,arXiv,['cs.cl'],, -1024,overprompt enhancing chatgpt capabilities through an efficient incontext learning approach,"['Jiazheng Li', 'Runcong Zhao', 'Yulan He', 'Lin Gui']",http://arxiv.org/pdf/2305.14973v1.pdf,2023-05-24,," The exceptional performance of pre-trained large language models has revolutionised various applications, but their adoption in production environments is hindered by prohibitive costs and inefficiencies, particularly when utilising long prompts. This paper proposes OverPrompt, an in-context learning method aimed at improving LLM efficiency and performance by processing multiple inputs in parallel. Evaluated across diverse datasets, OverPrompt enhances task efficiency and integrates a diverse range of examples for improved performance. Particularly, it amplifies fact-checking and sentiment analysis tasks when supplemented with contextual information. Synthetic data grouping further enhances performance, suggesting a viable approach for data augmentation.",,arXiv,['cs.cl'],, -1025,crosslingual retrieval augmented incontext learning for bangla,"['Xiaoqian Li', 'Ercong Nie', 'Sheng Liang']",http://arxiv.org/pdf/2311.00587v1.pdf,2023-11-01,," The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla. To address this, our paper presents a pioneering approach that utilizes cross-lingual retrieval augmented in-context learning. By strategically sourcing semantically similar prompts from high-resource language, we enable multilingual pretrained language models (MPLMs), especially the generative model BLOOMZ, to successfully boost performance on Bangla tasks. Our extensive evaluation highlights that the cross-lingual retrieval augmented prompts bring steady improvements to MPLMs over the zero-shot performance.",,arXiv,['cs.cl'],, -1026,compositional exemplars for incontext learning,"['Jiacheng Ye', 'Zhiyong Wu', 'Jiangtao Feng', 'Tao Yu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.05698v3.pdf,2023-02-11,," Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL) ability, where the model learns to do an unseen task via a prompt consisting of input-output examples as the demonstration, without any parameter updates. The performance of ICL is highly dominated by the quality of the selected in-context examples. However, previous selection methods are mostly based on simple heuristics, leading to sub-optimal performance. In this work, we formulate in-context example selection as a subset selection problem. We propose CEIL (Compositional Exemplars for In-context Learning), which is instantiated by Determinantal Point Processes (DPPs) to model the interaction between the given input and in-context examples, and optimized through a carefully-designed contrastive learning objective to obtain preference from LMs. We validate CEIL on 12 classification and generation datasets from 7 distinct NLP tasks, including sentiment analysis, paraphrase detection, natural language inference, commonsense reasoning, open-domain question answering, code generation, and semantic parsing.
Extensive experiments demonstrate not only the state-of-the-art performance but also the transferability and compositionality of CEIL, shedding new light on effective and efficient in-context learning. Our code is released at https://github.com/HKUNLP/icl-ceil.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1027,learning to retrieve prompts for incontext learning,"['Ohad Rubin', 'Jonathan Herzig', 'Jonathan Berant']",http://arxiv.org/pdf/2112.08633v2.pdf,2021-12-16,," In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters. However, performance has been shown to strongly depend on the selected training examples (termed prompt). In this work, we propose an efficient method for retrieving prompts for in-context learning using annotated data and a LM. Given an input-output pair, we estimate the probability of the output given the input and a candidate training example as the prompt, and label training examples as positive or negative based on this probability. We then train an efficient dense retriever from this data, which is used to retrieve training examples as prompts at test time. We evaluate our approach on three sequence-to-sequence tasks where language utterances are mapped to meaning representations, and find that it substantially outperforms prior work and multiple baselines across the board.",,arXiv,"['cs.cl', 'cs.lg']",, -1028,semanticoriented unlabeled priming for largescale language models,"['Yanchen Liu', 'Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2202.06133v1.pdf,2022-02-12,," Due to the high costs associated with finetuning large language models, various recent works propose to adapt them to specific tasks without any parameter updates through in-context learning. Unfortunately, for in-context learning there is currently no way to leverage unlabeled data, which is often much easier to obtain in large quantities than labeled examples. In this work, we therefore investigate ways to make use of unlabeled examples to improve the zero-shot performance of pretrained language models without any finetuning: We introduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifies examples by retrieving semantically similar unlabeled examples, assigning labels to them in a zero-shot fashion, and then using them for in-context learning. We also propose bag-of-contexts priming, a new priming strategy that is more suitable for our setting and enables the usage of more examples than fit into the context window.",,arXiv,['cs.cl'],, -1029,diverse demonstrations improve incontext compositional generalization,"['Itay Levy', 'Ben Bogin', 'Jonathan Berant']",http://arxiv.org/pdf/2212.06800v3.pdf,2022-12-13,," In-context learning has shown great success in i.i.d semantic parsing splits, where the training and test sets are drawn from the same distribution. In this setup, models are typically prompted with demonstrations that are similar to the input utterance. However, in the setup of compositional generalization, where models are tested on outputs with structures that are absent from the training set, selecting similar demonstrations is insufficient, as often no example will be similar enough to the input.
In this work, we propose a method to select diverse demonstrations that aims to collectively cover all of the structures required in the output program, in order to encourage the model to generalize to new structures from these demonstrations. We empirically show that combining diverse demonstrations with in-context learning substantially improves performance across three compositional generalization semantic parsing datasets in the pure in-context learning setup and when combined with finetuning.",,arXiv,['cs.cl'],, -1030,the impact of symbolic representations on incontext learning for fewshot reasoning,"['Hanlin Zhang', 'Yi-Fan Zhang', 'Li Erran Li', 'Eric Xing']",http://arxiv.org/pdf/2212.08686v1.pdf,2022-12-16,," Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations (or ``chain-of-thought'' (CoT)) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To make progress towards understanding in-context learning, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from knowledge bases (KBs). Then we revisit neuro-symbolic approaches and use Language Models as Logic Programmer (LMLP) that learns from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog's backward chaining algorithm. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than 25% higher accuracy than CoT on length generalization benchmarks even with fewer parameters.",,arXiv,['cs.cl'],, -1031,selfadaptive incontext learning an information compression perspective for incontext example selection and ordering,"['Zhiyong Wu', 'Yaoxiang Wang', 'Jiacheng Ye', 'Lingpeng Kong']",http://arxiv.org/pdf/2212.10375v2.pdf,2022-12-20,," Despite the surprising few-shot performance of in-context learning (ICL), it is still a common practice to randomly sample examples to serve as context. This paper advocates a new principle for ICL: self-adaptive in-context learning. The self-adaption mechanism is introduced to help each sample find an in-context example permutation (i.e., selection and ordering) that can derive the correct prediction, thus maximizing performance. To validate the effectiveness of self-adaptive ICL, we propose a general select-then-rank framework and instantiate it with new selection and ranking algorithms. Upon extensive evaluation on eight different NLP datasets, our self-adaptive ICL method achieves a 40% relative improvement over the common practice setting. Further analysis reveals the enormous potential of self-adaptive ICL that it might be able to close the gap between ICL and finetuning given more advanced algorithms. Our code is released to facilitate future research in this area: https://github.com/Shark-NLP/self-adaptive-ICL",,arXiv,"['cs.cl', 'cs.ai']",, -1032,privacypreserving incontext learning for large language models,"['Tong Wu', 'Ashwinee Panda', 'Jiachen T. Wang', 'Prateek Mittal']",http://arxiv.org/pdf/2305.01639v2.pdf,2023-05-02,," In-context learning (ICL) is an important capability of Large Language Models (LLMs), enabling these models to dynamically adapt based on specific, in-context exemplars, thereby improving accuracy and relevance. However, LLM's responses may leak the sensitive private information contained in in-context exemplars.
To address this challenge, we propose Differentially Private In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The key idea for DP-ICL paradigm is generating differentially private responses through a noisy consensus among an ensemble of LLM's responses based on disjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiate several techniques showing how to privatize ICL for text classification and language generation. We evaluate DP-ICL on four text classification benchmarks and two language generation tasks, and our empirical results show that DP-ICL achieves a strong utility-privacy tradeoff.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cr']",, -1033,incontext learning as maintaining coherency a study of onthefly machine translation using large language models,"['Suzanna Sia', 'Kevin Duh']",http://arxiv.org/pdf/2305.03573v1.pdf,2023-05-05,," The phenomena of in-context learning has typically been thought of as ""learning from examples"". In this work which focuses on Machine Translation, we present a perspective of in-context learning as the desired generation task maintaining coherency with its context, i.e., the prompt examples. We first investigate randomly sampled prompts across 4 domains, and find that translation performance improves when shown in-domain prompts. Next, we investigate coherency for the in-domain setting, which uses prompt examples from a moving window. We study this with respect to other factors that have previously been identified in the literature such as length, surface similarity and sentence embedding similarity. Our results across 3 models (GPTNeo2.7B, Bloom3B, XGLM2.9B), and three translation directions (\texttt{en}$\rightarrow$\{\texttt{pt, de, fr}\}) suggest that the long-term coherency of the prompts and the test sentence is a good indicator of downstream translation performance. In doing so, we demonstrate the efficacy of In-context Machine Translation for on-the-fly adaptation.",,arXiv,"['cs.cl', 'cs.ai']",, -1034,small models are valuable plugins for large language models,"['Canwen Xu', 'Yichong Xu', 'Shuohang Wang', 'Yang Liu', 'Chenguang Zhu', 'Julian McAuley']",http://arxiv.org/pdf/2305.08848v1.pdf,2023-05-15,," Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their weights are often publicly unavailable and their immense sizes make the models difficult to be tuned with common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. As an alternative, In-Context Learning (ICL) can only use a small number of supervised examples due to context length limits. In this paper, we propose Super In-Context Learning (SuperICL) which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning. Furthermore, SuperICL can enhance the capabilities of smaller models, such as multilinguality and interpretability.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1035,gptfinre incontext learning for financial relation extraction using large language models,"['Pawan Kumar Rajpoot', 'Ankur Parikh']",http://arxiv.org/pdf/2306.17519v2.pdf,2023-06-30,," Relation extraction (RE) is a crucial task in natural language processing (NLP) that aims to identify and classify relationships between entities mentioned in text.
In the financial domain, relation extraction plays a vital role in extracting valuable information from financial documents, such as news articles, earnings reports, and company filings. This paper describes our solution to relation extraction on one such dataset REFinD. The dataset was released along with shared task as a part of the Fourth Workshop on Knowledge Discovery from Unstructured Data in Financial Services, co-located with SIGIR 2023. In this paper, we employed OpenAI models under the framework of in-context learning (ICL). We utilized two retrieval strategies to find top K relevant in-context learning demonstrations / examples from training data for a given test example. The first retrieval mechanism, we employed, is a learning-free dense retriever and the other system is a learning-based retriever. We were able to achieve 3rd rank overall. Our best F1-score is 0.718.",,arXiv,['cs.cl'],, -1036,codestyle incontext learning for knowledgebased question answering,"['Zhijie Nie', 'Richong Zhang', 'Zhongyuan Wang', 'Xudong Liu']",http://arxiv.org/pdf/2309.04695v1.pdf,2023-09-09,," Current methods for Knowledge-Based Question Answering (KBQA) usually rely on complex training techniques and model frameworks, leading to many limitations in practical applications. Recently, the emergence of In-Context Learning (ICL) capabilities in Large Language Models (LLMs) provides a simple and training-free semantic parsing paradigm for KBQA: Given a small number of questions and their labeled logical forms as demo examples, LLMs can understand the task intent and generate the logic form for a new question. However, current powerful LLMs have little exposure to logic forms during pre-training, resulting in a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation process of unfamiliar logical form into the more familiar code generation process for LLMs. Experimental results on three mainstream datasets show that our method dramatically mitigated the formatting error problem in generating logic forms while realizing a new SOTA on WebQSP, GrailQA, and GraphQ under the few-shot setting.",,arXiv,"['cs.cl', 'cs.ai']",, -1037,iclef incontext learning with expert feedback for explainable style transfer,"['Arkadiy Saakyan', 'Smaranda Muresan']",http://arxiv.org/pdf/2309.08583v1.pdf,2023-09-15,," While state-of-the-art language models excel at the style transfer task, current work does not address explainability of style transfer systems. Explanations could be generated using large language models such as GPT-3.5 and GPT-4, but the use of such complex systems is inefficient when smaller, widely distributed, and transparent alternatives are available. We propose a framework to augment and improve a formality style transfer dataset with explanations via model distillation from ChatGPT. To further refine the generated explanations, we propose a novel way to incorporate scarce expert human feedback using in-context learning (ICLEF: In-Context Learning from Expert Feedback) by prompting ChatGPT to act as a critic to its own outputs. We use the resulting dataset of 9,960 explainable formality style transfer instances (e-GYAFC) to show that current openly distributed instruction-tuned models (and, in some settings, ChatGPT) perform poorly on the task, and that fine-tuning on our high-quality dataset leads to significant improvements as shown by automatic evaluation.
In human evaluation, we show that models much smaller than ChatGPT fine-tuned on our data align better with expert preferences. Finally, we discuss two potential applications of models fine-tuned on the explainable style transfer task: interpretable authorship verification and interpretable adversarial attacks on AI-generated text detectors.",,arXiv,['cs.cl'],, -1038,utilising a large language model to annotate subject metadata a case study in an australian national research data catalogue,"['Shiwei Zhang', 'Mingfang Wu', 'Xiuzhen Zhang']",http://arxiv.org/pdf/2310.11318v1.pdf,2023-10-17,," In support of open and reproducible research, there has been a rapidly increasing number of datasets made available for research. As the availability of datasets increases, it becomes more important to have quality metadata for discovering and reusing them. Yet, it is a common issue that datasets often lack quality metadata due to limited resources for data curation. Meanwhile, technologies such as artificial intelligence and large language models (LLMs) are progressing rapidly. Recently, systems based on these technologies, such as ChatGPT, have demonstrated promising capabilities for certain data curation tasks. This paper proposes to leverage LLMs for cost-effective annotation of subject metadata through the LLM-based in-context learning. Our method employs GPT-3.5 with prompts designed for annotating subject metadata, demonstrating promising performance in automatic metadata annotation. However, models based on in-context learning cannot acquire discipline-specific rules, resulting in lower performance in several categories. This limitation arises from the limited contextual information available for subject inference. To the best of our knowledge, we are introducing, for the first time, an in-context learning method that harnesses large language models for automated subject metadata annotation.",,arXiv,"['cs.cl', 'cs.ai']",, -1039,hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks,"['Yifan Wang', 'Qingyan Guo', 'Xinzhe Ni', 'Chufan Shi', 'Lemao Liu', 'Haiyun Jiang', 'Yujiu Yang']",http://arxiv.org/pdf/2311.01949v1.pdf,2023-11-03,," In-context learning (ICL) ability has emerged with the increasing scale of large language models (LLMs), enabling them to learn input-label mappings from demonstrations and perform well on downstream tasks. However, under the standard ICL setting, LLMs may sometimes neglect query-related information in demonstrations, leading to incorrect predictions. To address this limitation, we propose a new paradigm called Hint-enhanced In-Context Learning (HICL) to explore the power of ICL in open-domain question answering, an important form in knowledge-intensive tasks. HICL leverages LLMs' reasoning ability to extract query-related knowledge from demonstrations, then concatenates the knowledge to prompt LLMs in a more explicit way. Furthermore, we track the source of this knowledge to identify specific examples, and introduce a Hint-related Example Retriever (HER) to select informative examples for enhanced demonstrations.
We evaluate HICL with HER on 3 open-domain QA benchmarks, and observe average performance gains of 2.89 EM score and 2.52 F1 score on gpt-3.5-turbo, 7.62 EM score and 7.27 F1 score on LLaMA-2-Chat-7B compared with standard setting.",,arXiv,['cs.cl'],, -1040,rethinking the role of demonstrations what makes incontext learning work,"['Sewon Min', 'Xinxi Lyu', 'Ari Holtzman', 'Mikel Artetxe', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2202.12837v2.pdf,2022-02-25,," Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.",,arXiv,"['cs.cl', 'cs.ai']",, -1041,fewshot anaphora resolution in scientific protocols via mixtures of incontext experts,"['Nghia T. Le', 'Fan Bai', 'Alan Ritter']",http://arxiv.org/pdf/2210.03690v2.pdf,2022-10-07,," Anaphora resolution is an important task for information extraction across a range of languages, text genres, and domains, motivating the need for methods that do not require large annotated datasets. In-context learning has emerged as a promising approach, yet there are a number of challenges in applying in-context learning to resolve anaphora. For example, encoding a single in-context demonstration that consists of: an anaphor, a paragraph-length context, and a list of corresponding antecedents, requires conditioning a language model on a long sequence of tokens, limiting the number of demonstrations per prompt. In this paper, we present MICE (Mixtures of In-Context Experts), which we demonstrate is effective for few-shot anaphora resolution in scientific protocols (Tamari et al., 2021). Given only a handful of training examples, MICE combines the predictions of hundreds of in-context experts, yielding a 30% increase in F1 score over a competitive prompt retrieval baseline. Furthermore, we show MICE can be used to train compact student models without sacrificing performance. As far as we are aware, this is the first work to present experimental results demonstrating the effectiveness of in-context learning on the task of few-shot anaphora resolution in scientific protocols.",,arXiv,"['cs.cl', 'cs.ai']",, -1042,adaptive machine translation with large language models,"['Yasmin Moslem', 'Rejwanul Haque', 'John D. Kelleher', 'Andy Way']",http://arxiv.org/pdf/2301.13294v3.pdf,2023-01-30,," Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and adapt to corrected translations in domain-specific projects.
Machine translation (MT) has achieved significant progress in the area of domain adaptation. However, real-time adaptation remains challenging. Large-scale language models (LLMs) have recently shown interesting capabilities of in-context learning, where they learn to replicate certain input-output text generation patterns, without further fine-tuning. By feeding an LLM at inference time with a prompt that consists of a list of translation pairs, it can then simulate the domain and style characteristics. This work aims to investigate how we can utilize in-context learning to improve real-time adaptive MT. Our extensive experiments show promising results at translation time. For example, LLMs can adapt to a set of in-domain sentence pairs and/or terminology while translating a new sentence. We observe that the translation quality with few-shot in-context learning can surpass that of strong encoder-decoder MT systems, especially for high-resource languages. Moreover, we investigate whether we can combine MT from strong encoder-decoder models with fuzzy matches, which can further improve translation quality, especially for less supported languages. We conduct our experiments across five diverse language pairs, namely English-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French (EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).",,arXiv,['cs.cl'],, -1043,scattershot interactive incontext example curation for text transformation,"['Tongshuang Wu', 'Hua Shen', 'Daniel S. Weld', 'Jeffrey Heer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2302.07346v1.pdf,2023-02-14,," The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in underspecified in-context functions that fall short on unseen cases. Further, it is hard to know when ""enough"" examples have been included even for known patterns. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set. In simulation studies on two text perturbation scenarios, ScatterShot sampling improves the resulting few-shot functions by 4-5 percentage points over random sampling, with less variance as more examples are added. In a user study, ScatterShot greatly helps users in covering different patterns in the input space and labeling in-context examples more efficiently, resulting in better in-context learning and less user effort.",,arXiv,"['cs.hc', 'cs.cl']",, -1044,resources and fewshot learners for incontext learning in slavic languages,"['Michal Štefánik', 'Marek Kadlčík', 'Piotr Gramacki', 'Petr Sojka']",http://arxiv.org/pdf/2304.01922v1.pdf,2023-04-04,," Despite the rapid recent progress in creating accurate and compact in-context learners, most recent work focuses on in-context learning (ICL) for tasks in English. However, the ability to interact with users of languages outside English presents a great potential for broadening the applicability of language technologies to non-English speakers.
In this work, we collect the infrastructure necessary for training and evaluation of ICL in a selection of Slavic languages: Czech, Polish, and Russian. We link a diverse set of datasets and cast these into a unified instructional format through a set of transformations and newly-crafted templates written purely in target languages. Using the newly-curated dataset, we evaluate a set of the most recent in-context learners and compare their results to the supervised baselines. Finally, we train, evaluate and publish a set of in-context learning models that we train on the collected resources and compare their performance to previous work. We find that ICL models tuned in English are also able to learn some tasks from non-English contexts, but multilingual instruction fine-tuning consistently improves the ICL ability. We also find that the massive multitask training can be outperformed by single-task training in the target language, uncovering the potential for specializing in-context learners to the language(s) of their application.",,arXiv,['cs.cl'],, -1045,unified demonstration retriever for incontext learning,"['Xiaonan Li', 'Kai Lv', 'Hang Yan', 'Tianyang Lin', 'Wei Zhu', 'Yuan Ni', 'Guotong Xie', 'Xiaoling Wang', 'Xipeng Qiu']",http://arxiv.org/pdf/2305.04320v2.pdf,2023-05-07,," In-context learning is a new learning paradigm where a language model conditions on a few input-output pairs (demonstrations) and a test input, and directly outputs the prediction. It has been shown highly dependent on the provided demonstrations and thus promotes the research of demonstration retrieval: given a test input, relevant examples are retrieved from the training set to serve as informative demonstrations for in-context learning. While previous works focus on training task-specific retrievers for several tasks separately, these methods are often hard to transfer and scale on various tasks, and separately trained retrievers incur a lot of parameter storage and deployment cost. In this paper, we propose Unified Demonstration Retriever (\textbf{UDR}), a single model to retrieve demonstrations for a wide range of tasks. To train UDR, we cast various tasks' training signals into a unified list-wise ranking formulation by language model's feedback. Then we propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates, which can help UDR fully incorporate various tasks' signals. Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines. Further analyses show the effectiveness of each proposed component and UDR's strong ability in various scenarios including different LMs (1.3B - 175B), unseen datasets, varying demonstration quantities, etc.",,arXiv,['cs.cl'],, -1046,efficient prompting via dynamic incontext learning,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.11170v1.pdf,2023-05-18,," The primary way of building AI applications is shifting from training specialist models to prompting generalist models. A common practice for prompting generalist models, often referred to as in-context learning, is to append a few examples (demonstrations) to the prompt to help the model better understand the task. While effective, in-context learning can be inefficient because it makes the input prompt much longer, consuming valuable space in the context window and leading to larger computational costs.
In this paper, we propose DynaICL, a recipe for efficient prompting with black-box generalist models that dynamically allocate in-context examples according to the input complexity and the computational budget. To achieve this, we train a meta controller that predicts the number of in-context examples suitable for the generalist model to make a good prediction based on the performance-efficiency trade-off for a specific input. We then dynamically allocate the number of demonstrations for an input according to predictions from the meta controller and the given computation budget. Experimental results show that dynamic example allocation helps achieve a better performance-efficiency trade-off in two practical settings where computational resources or the required performance is constrained. Specifically, DynaICL saves up to 46% token budget compared to the common practice that allocates the same number of in-context examples to each input. We also find that a meta controller trained on a certain backbone model and tasks can successfully generalize to unseen models and tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1047,post hoc explanations of language models can improve language models,"['Satyapriya Krishna', 'Jiaqi Ma', 'Dylan Slack', 'Asma Ghandeharioun', 'Sameer Singh', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2305.11426v2.pdf,2023-05-19,," Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance the performance of these models, particularly on tasks that require reasoning capabilities. However, incorporating such rationales poses challenges in terms of scalability as this requires a high degree of human involvement. In this work, we present a novel framework, Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY), which addresses the aforementioned challenges by automating the process of rationale generation. To this end, we leverage post hoc explanation methods which output attribution scores (explanations) capturing the influence of each of the input features on model predictions. More specifically, we construct automated natural language rationales that embed insights from post hoc explanations to provide corrective signals to LLMs. Extensive experimentation with real-world datasets demonstrates that our framework, AMPLIFY, leads to prediction accuracy improvements of about 10-25% over a wide range of tasks, including those where prior approaches which rely on human-annotated rationales such as Chain-of-Thought prompting fall short. Our work makes one of the first attempts at highlighting the potential of post hoc explanations as valuable tools for enhancing the effectiveness of LLMs. Furthermore, we conduct additional empirical analyses and ablation studies to demonstrate the impact of each of the components of AMPLIFY, which, in turn, leads to critical insights for refining in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, -1048,reticl sequential retrieval of incontext examples with reinforcement learning,"['Alexander Scarlatos', 'Andrew Lan']",http://arxiv.org/pdf/2305.14502v1.pdf,2023-05-23,," Many recent developments in large language models focus on prompting them to perform specific tasks. One effective prompting method is in-context learning, where the model performs a (possibly new) generation/prediction task given one (or more) examples.
Past work has shown that the choice of examples can make a large impact on task performance. However, finding good examples is not straightforward since the definition of a representative group of examples can vary greatly depending on the task. While there are many existing methods for selecting in-context examples, they generally score examples independently, ignoring the dependency between them and the order in which they are provided to the large language model. In this work, we propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning. We frame the problem of sequential example selection as a Markov decision process, design an example retriever model using an LSTM, and train it using proximal policy optimization (PPO). We validate RetICL on math problem solving datasets and show that it outperforms both heuristic and learnable baselines, and achieves state-of-the-art accuracy on the TabMWP dataset. We also use case studies to show that RetICL implicitly learns representations of math problem solving strategies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1049,metricbased incontext learning a case study in text simplification,"['Subha Vadlamannati', 'Gözde Gül Şahin']",http://arxiv.org/pdf/2307.14632v1.pdf,2023-07-27,," In-context learning (ICL) for large language models has proven to be a powerful approach for many natural language processing tasks. However, determining the best method to select examples for ICL is nontrivial as the results can vary greatly depending on the quality, quantity, and order of examples used. In this paper, we conduct a case study on text simplification (TS) to investigate how to select the best and most robust examples for ICL. We propose Metric-Based in-context Learning (MBL) method that utilizes commonly used TS metrics such as SARI, compression ratio, and BERT-Precision for selection. Through an extensive set of experiments with various-sized GPT models on standard TS benchmarks such as TurkCorpus and ASSET, we show that examples selected by the top SARI scores perform the best on larger models such as GPT-175B, while the compression ratio generally performs better on smaller models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is generally robust to example orderings and out-of-domain test sets, and outperforms strong baselines and state-of-the-art finetuned language models. Finally, we show that the behaviour of large GPT models can be implicitly controlled by the chosen metric. Our research provides a new framework for selecting examples in ICL, and demonstrates its effectiveness in text simplification tasks, breaking new ground for more accurate and efficient NLG systems.",,arXiv,"['cs.cl', 'cs.ai']",, -1050,hicl hashtagdriven incontext learning for social media natural language understanding,"['Hanzhuo Tan', 'Chunpu Xu', 'Jing Li', 'Yuqun Zhang', 'Zeyang Fang', 'Zeyu Chen', 'Baohua Lai']",http://arxiv.org/pdf/2308.09985v1.pdf,2023-08-19,," Natural language understanding (NLU) is integral to various social media applications. However, existing NLU models rely heavily on context for semantic learning, resulting in compromised performance when faced with short and noisy social media content. To address this issue, we leverage in-context learning (ICL), wherein language models learn to make inferences by conditioning on a handful of demonstrations to enrich the context and propose a novel hashtag-driven in-context learning (HICL) framework.
Concretely, we pre-train a model #Encoder, which employs #hashtags (user-annotated topic labels) to drive BERT-based pre-training through contrastive learning. Our objective here is to enable #Encoder to gain the ability to incorporate topic-related semantic information, which allows it to retrieve topic-related posts to enrich contexts and enhance social media NLU with noisy contexts. To further integrate the retrieved context with the source text, we employ a gradient-based method to identify trigger terms useful in fusing information from both sources. For empirical studies, we collected 45M tweets to set up an in-context NLU benchmark, and the experimental results on seven downstream tasks show that HICL substantially advances the previous state-of-the-art results. Furthermore, we conducted extensive analyses and found that: (1) combining source input with a top-retrieved post from #Encoder is more effective than using semantically similar posts; (2) trigger words can largely benefit in merging context from the source and retrieved posts.",,arXiv,['cs.cl'],, -1051,incontext convergence of transformers,"['Yu Huang', 'Yuan Cheng', 'Yingbin Liang']",http://arxiv.org/pdf/2310.05249v1.pdf,2023-10-08,," Transformers have recently revolutionized many domains in modern machine learning and one salient discovery is their remarkable in-context learning capability, where models can solve an unseen task by utilizing task-specific prompts without further parameters fine-tuning. This also inspired recent theoretical studies aiming to understand the in-context learning mechanism of transformers, which however focused only on linear transformers. In this work, we take the first step toward studying the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent in order to in-context learn linear function classes. We consider a structured data model, where each token is randomly sampled from a set of feature vectors in either balanced or imbalanced fashion. For data with balanced features, we establish the finite-time convergence guarantee with near-zero prediction error by navigating our analysis over two phases of the training dynamics of the attention map. More notably, for data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process, where the transformer first converges to a near-zero prediction error for the query tokens of dominant features, and then converges later to a near-zero prediction error for the query tokens of under-represented features, respectively via one and four training phases. Our proof features new techniques for analyzing the competing strengths of two types of attention weights, the change of which determines different training phases.",,arXiv,"['cs.lg', 'cs.ai', 'math.oc', 'stat.ml']",, -1052,large language modelaware incontext learning for code generation,"['Jia Li', 'Ge Li', 'Chongyang Tao', 'Jia Li', 'Huangzhao Zhang', 'Fang Liu', 'Zhi Jin']",http://arxiv.org/pdf/2310.09748v1.pdf,2023-10-15,," Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation. LLMs take a prompt consisting of requirement-code examples and a new requirement as input, and output new programs. Existing studies have found that ICL is highly dominated by the examples and thus arises research on example selection. However, existing approaches randomly select examples or only consider the textual similarity of requirements to retrieve, leading to sub-optimal performance.
In this paper, we propose a novel learning-based selection approach named LAIL (LLM-Aware In-context Learning) for code generation. Given a candidate example, we exploit LLMs themselves to estimate it by considering the generation probabilities of ground-truth programs given a requirement and the example. We then label candidate examples as positive or negative through the probability feedback. Based on the labeled data, we import a contrastive learning objective to train an effective retriever that acquires the preference of LLMs in code generation. We apply LAIL to three LLMs and evaluate it on three representative datasets (e.g., MBJP, MBPP, and MBCPP). LAIL outperforms the state-of-the-art baselines by 11.58%, 6.89%, and 5.07% on CodeGen, and 4.38%, 2.85%, and 2.74% on GPT-3.5 in terms of Pass@1, respectively.",,arXiv,"['cs.se', 'cs.cl']",, -1053,on the relation between sensitivity and accuracy in incontext learning,"['Yanda Chen', 'Chen Zhao', 'Zhou Yu', 'Kathleen McKeown', 'He He']",http://arxiv.org/pdf/2209.07661v2.pdf,2022-09-16,," In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios. We study the sensitivity of ICL with respect to multiple perturbation types. First, we find that label bias obscures the true sensitivity, and therefore prior work may have significantly underestimated ICL sensitivity. Second, we observe a strong negative correlation between ICL sensitivity and accuracy: predictions sensitive to perturbations are less likely to be correct. Motivated by these findings, we propose \textsc{SenSel}, a few-shot selective prediction method that abstains from sensitive predictions. Experiments on ten classification datasets show that \textsc{SenSel} consistently outperforms two commonly used confidence-based and entropy-based baselines on abstention decisions.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1054,winodict probing language models for incontext word acquisition,"['Julian Martin Eisenschlos', 'Jeremy R. Cole', 'Fangyu Liu', 'William W. Cohen']",http://arxiv.org/pdf/2209.12153v1.pdf,2022-09-25,," We introduce a new in-context learning paradigm to measure Large Language Models' (LLMs) ability to learn novel words during inference. In particular, we rewrite Winograd-style co-reference resolution problems by replacing the key concept word with a synthetic but plausible word that the model must understand to complete the task. Solving this task requires the model to make use of the dictionary definition of the new word given in the prompt. This benchmark addresses word acquisition, one important aspect of the diachronic degradation known to afflict LLMs. As LLMs are frozen in time at the moment they are trained, they are normally unable to reflect the way language changes over time. We show that the accuracy of LLMs compared to the original Winograd tasks decreases radically in our benchmark, thus identifying a limitation of current models and providing a benchmark to measure future improvements in LLMs ability to do in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, -1055,data curation alone can stabilize incontext learning,"['Ting-Yun Chang', 'Robin Jia']",http://arxiv.org/pdf/2212.10378v2.pdf,2022-12-20,," In-context learning (ICL) enables large language models (LLMs) to perform new tasks by prompting them with a sequence of training examples. However, it is known that ICL is very sensitive to the choice of training examples: randomly sampling examples from a training set leads to high variance in performance.
In this paper, we show that carefully curating a subset of training data greatly stabilizes ICL performance without any other changes to the ICL algorithm (e.g., prompt retrieval or calibration). We introduce two methods to choose training subsets -- both score training examples individually, then select the highest-scoring ones. CondAcc scores a training example by its average dev-set ICL accuracy when combined with random training examples, while Datamodels learns linear regressors that estimate how the presence of each training example influences LLM outputs. Across five tasks and two LLMs, sampling from stable subsets selected by CondAcc and Datamodels improves average accuracy over sampling from the entire training set by 7.7% and 6.3%, respectively. Surprisingly, the stable subset examples are not especially diverse in content or low in perplexity, in contrast with other work suggesting that diversity and perplexity are important when prompting LLMs.",,arXiv,['cs.cl'],, -1056,a survey on incontext learning,"['Qingxiu Dong', 'Lei Li', 'Damai Dai', 'Ce Zheng', 'Zhiyong Wu', 'Baobao Chang', 'Xu Sun', 'Jingjing Xu', 'Lei Li', 'Zhifang Sui']",http://arxiv.org/pdf/2301.00234v3.pdf,2022-12-31,," With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few examples. It has been a new trend to explore ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques, including training strategies, demonstration designing strategies, as well as related analysis. Finally, we discuss the challenges of ICL and provide potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and improving ICL.",,arXiv,"['cs.cl', 'cs.ai']",, -1057,towards fewshot identification of morality frames using incontext learning,"['Shamik Roy', 'Nishanth Sridhar Nakshatri', 'Dan Goldwasser']",http://arxiv.org/pdf/2302.02029v1.pdf,2023-02-03,," Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has been recently applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et al., 2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text which is expensive. In this paper, we propose prompting-based approaches using pretrained Large Language Models for identification of morality frames, relying only on few-shot exemplars.
We compare our models'performance with few-shot RoBERTa and found promising results.",,arXiv,['cs.cl'],, -1058,openicl an opensource framework for incontext learning,"['Zhenyu Wu', 'YaoXiang Wang', 'Jiacheng Ye', 'Jiangtao Feng', 'Jingjing Xu', 'Yu Qiao', 'Zhiyong Wu']",http://arxiv.org/pdf/2303.02913v1.pdf,2023-03-06,," In recent years, In-context Learning (ICL) has gained increasing attentionand emerged as the new paradigm for large language model (LLM) evaluation.Unlike traditional fine-tuning methods, ICL instead adapts the pre-trainedmodels to unseen tasks without any parameter updates. However, theimplementation of ICL is sophisticated due to the diverse retrieval andinference methods involved, as well as the varying pre-processing requirementsfor different models, datasets, and tasks. A unified and flexible framework forICL is urgently needed to ease the implementation of the aforementionedcomponents. To facilitate ICL research, we introduce OpenICL, an open-sourcetoolkit for ICL and LLM evaluation. OpenICL is research-friendly with a highlyflexible architecture that users can easily combine different components tosuit their needs. It also provides various state-of-the-art retrieval andinference methods to streamline the process of adapting ICL to cutting-edgeresearch. The effectiveness of OpenICL has been validated on a wide range ofNLP tasks, including classification, QA, machine translation, and semanticparsing. As a side-product, we found OpenICL to be an efficient yet robust toolfor LLMs evaluation. OpenICL is released athttps://github.com/Shark-NLP/OpenICL",,arXiv,['cs.cl'],, -1059,the scope of incontext learning for the extraction of medical temporal constraints,"['Parker Seegmiller', 'Joseph Gatto', 'Madhusudan Basak', 'Diane Cook', 'Hassan Ghasemzadeh', 'John Stankovic', 'Sarah Preum']",http://arxiv.org/pdf/2303.09366v2.pdf,2023-03-16,," Medications often impose temporal constraints on everyday patient activity.Violations of such medical temporal constraints (MTCs) lead to a lack oftreatment adherence, in addition to poor health outcomes and increasedhealthcare expenses. These MTCs are found in drug usage guidelines (DUGs) inboth patient education materials and clinical texts. Computationallyrepresenting MTCs in DUGs will advance patient-centric healthcare applicationsby helping to define safe patient activity patterns. We define a novel taxonomyof MTCs found in DUGs and develop a novel context-free grammar (CFG) basedmodel to computationally represent MTCs from unstructured DUGs. Additionally,we release three new datasets with a combined total of N = 836 DUGs labeledwith normalized MTCs. We develop an in-context learning (ICL) solution forautomatically extracting and normalizing MTCs found in DUGs, achieving anaverage F1 score of 0.62 across all datasets. 
Finally, we rigorouslyinvestigate ICL model performance against a baseline model, across datasets andMTC types, and through in-depth error analysis.",,arXiv,"['cs.cl', 'cs.lg']",, -1060,gptre incontext learning for relation extraction using large language models,"['Zhen Wan', 'Fei Cheng', 'Zhuoyuan Mao', 'Qianying Liu', 'Haiyue Song', 'Jiwei Li', 'Sadao Kurohashi']",http://arxiv.org/pdf/2305.02105v2.pdf,2023-05-03,," In spite of the potential for ground-breaking achievements offered by largelanguage models (LLMs) (e.g., GPT-3), they still lag significantly behindfully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).This is due to the two major shortcomings of LLMs in RE: (1) low relevanceregarding entity and relation in retrieved demonstrations for in-contextlearning; and (2) the strong inclination to wrongly classify NULL examples intoother pre-defined labels. In this paper, we propose GPT-RE to bridge the gap between LLMs andfully-supervised baselines. GPT-RE successfully addresses the aforementionedissues by (1) incorporating task-specific entity representations indemonstration retrieval; and (2) enriching the demonstrations with goldlabel-induced reasoning logic. We evaluate GPT-RE on four widely-used REdatasets, and observe that GPT-RE achieves improvements over not only existingGPT-3 baselines, but also fully-supervised baselines. Specifically, GPT-REachieves SOTA performances on the Semeval and SciERC datasets, and competitiveperformances on the TACRED and ACE05 datasets.",,arXiv,['cs.cl'],, -1061,gersteinlab at mediqachat 2023 clinical note summarization from doctorpatient conversations through finetuning and incontext learning,"['Xiangru Tang', 'Andrew Tran', 'Jeffrey Tan', 'Mark Gerstein']",http://arxiv.org/pdf/2305.05001v1.pdf,2023-05-08,," This paper presents our contribution to the MEDIQA-2023 Dialogue2Note sharedtask, encompassing both subtask A and subtask B. We approach the task as adialogue summarization problem and implement two distinct pipelines: (a) afine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b)few-shot in-context learning (ICL) using a large language model, GPT-4. Bothmethods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1(deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421,respectively. Additionally, we predict the associated section headers usingRoBERTa and SciBERT based classification models. Our team ranked fourth amongall teams, while each team is allowed to submit three runs as part of theirsubmission. We also utilize expert annotations to demonstrate that the notesgenerated through the ICL GPT-4 are better than all other baselines. The codefor our submission is available.",,arXiv,['cs.cl'],, -1062,can we edit factual knowledge by incontext learning,"['Ce Zheng', 'Lei Li', 'Qingxiu Dong', 'Yuxuan Fan', 'Zhiyong Wu', 'Jingjing Xu', 'Baobao Chang']",http://arxiv.org/pdf/2305.12740v1.pdf,2023-05-22,," Previous studies have shown that large language models (LLMs) like GPTs storemassive factual knowledge in their parameters. However, the stored knowledgecould be false or out-dated. Traditional knowledge editing methods refine LLMsvia fine-tuning on texts containing specific knowledge. However, with theincreasing scales of LLMs, these gradient-based approaches bring largecomputation costs. The trend of model-as-a-service also makes it impossible tomodify knowledge in black-box LMs. 
Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of parameters like OPT-175B, which shows the scalability of our method. The code is available at https://github.com/Zce1112zslx/IKE.",,arXiv,['cs.cl'],, -1063,coveragebased example selection for incontext learning,"['Shivanshu Gupta', 'Matt Gardner', 'Sameer Singh']",http://arxiv.org/pdf/2305.14907v3.pdf,2023-05-24,," In-context learning (ICL), the ability of large language models to perform novel tasks by conditioning on a prompt with a few task examples, requires these examples to be informative about the test instance. The standard approach of independently ranking and selecting the most similar examples selects redundant examples while omitting important information. In this work, we show that BERTScore-Recall (BSR) selects better examples that demonstrate more of the salient aspects, e.g. reasoning patterns, of the test input. We further extend BSR and many standard metrics to easily optimizable set-level metrics, giving still better coverage of those salient aspects. On 15 datasets spanning 6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric for in-context example selection across the board, and (2) for compositional tasks, set selection using Set-BSR outperforms independent ranking by up to 17 points on average and, despite being training-free, surpasses methods that leverage task or LLM-specific training.",,arXiv,['cs.cl'],, -1064,leveraging large language models for scalable vector graphicsdriven image understanding,"['Mu Cai', 'Zeyi Huang', 'Yuheng Li', 'Haohan Wang', 'Yong Jae Lee']",http://arxiv.org/pdf/2306.06094v1.pdf,2023-06-09,," Recently, large language models (LLMs) have made significant advancements in natural language understanding and generation. However, their potential in computer vision remains largely unexplored. In this paper, we introduce a new, exploratory approach that enables LLMs to process images using the Scalable Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions of SVG representations instead of raster images, we aim to bridge the gap between the visual and textual modalities, allowing LLMs to directly understand and manipulate images without the need for parameterized visual components. Our method facilitates simple image classification, generation, and in-context learning using only LLM capabilities. We demonstrate the promise of our approach across discriminative and generative tasks, highlighting its (i) robustness against distribution shift, (ii) substantial improvements achieved by tapping into the in-context learning abilities of LLMs, and (iii) image understanding and generation capabilities with human guidance. 
Our code, data, and models can be found here https://github.com/mu-cai/svg-llm.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, -1065,exploring the incontext learning ability of large language model for biomedical concept linking,"['Qinyong Wang', 'Zhenxiang Gao', 'Rong Xu']",http://arxiv.org/pdf/2307.01137v1.pdf,2023-07-03,," The biomedical field relies heavily on concept linking in various areas such as literature mining, graph alignment, information retrieval, question-answering, data, and knowledge integration. Although large language models (LLMs) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. This research investigates a method that exploits the in-context learning (ICL) capabilities of large models for biomedical concept linking. The proposed approach adopts a two-stage retrieve-and-rank framework. Initially, biomedical concepts are embedded using language models, and then embedding similarity is utilized to retrieve the top candidates. These candidates' contextual information is subsequently incorporated into the prompt and processed by a large language model to re-rank the concepts. This approach achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization, exhibiting a competitive performance relative to supervised learning methods. Further, it showed a significant improvement, with an over 20-point absolute increase in F1 score on an oncology matching dataset. Extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain were discussed.",,arXiv,"['cs.cl', 'cs.ai']",, -1066,learning to retrieve incontext examples for large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",http://arxiv.org/pdf/2307.07164v1.pdf,2023-07-14,," Large language models (LLMs) have demonstrated their ability to learn in-context, allowing them to perform various tasks based on a few input-output examples. However, the effectiveness of in-context learning is heavily reliant on the quality of the selected examples. In this paper, we propose a novel framework to iteratively train dense retrievers that can identify high-quality in-context examples for LLMs. Our framework initially trains a reward model based on LLM feedback to evaluate the quality of candidate examples, followed by knowledge distillation to train a bi-encoder based dense retriever. Our experiments on a suite of 30 tasks demonstrate that our framework significantly enhances in-context learning performance. Furthermore, we show the generalization ability of our framework to unseen tasks during training. An in-depth analysis reveals that our model improves performance by retrieving examples with similar patterns, and the gains are consistent across LLMs of varying sizes.",,arXiv,"['cs.cl', 'cs.ir']",, -1067,incontext learning learns label relationships but is not conventional learning,"['Jannik Kossen', 'Yarin Gal', 'Tom Rainforth']",http://arxiv.org/pdf/2307.12375v3.pdf,2023-07-23,," The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input--label relationship in the context. However, there is currently no consensus about how this in-context learning (ICL) ability of LLMs works. For example, while Xie et al. (2021) liken ICL to a general-purpose learning algorithm, Min et al. 
(2022)argue ICL does not even learn label relationships from in-context examples. Inthis paper, we provide novel insights into how ICL leverages label information,revealing both capabilities and limitations. To ensure we obtain acomprehensive picture of ICL behavior, we study probabilistic aspects of ICLpredictions and thoroughly examine the dynamics of ICL as more examples areprovided. Our experiments show that ICL predictions almost always depend onin-context labels, and that ICL can learn truly novel tasks in-context.However, we also find that ICL struggles to fully overcome predictionpreferences acquired from pre-training data, and, further, that ICL does notconsider all in-context information equally.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1068,exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning,"['Hunter McNichols', 'Wanyong Feng', 'Jaewook Lee', 'Alexander Scarlatos', 'Digory Smith', 'Simon Woodhead', 'Andrew Lan']",http://arxiv.org/pdf/2308.03234v1.pdf,2023-08-07,," Multiple-choice questions (MCQs) are ubiquitous in almost all levels ofeducation since they are easy to administer, grade, and are a reliable formatin both assessments and practices. An important aspect of MCQs is thedistractors, i.e., incorrect options that are designed to target specificmisconceptions or insufficient knowledge among students. To date, the task ofcrafting high-quality distractors has largely remained a labor-intensiveprocess for teachers and learning content designers, which has limitedscalability. In this work, we explore the task of automated distractor andcorresponding feedback message generation in math MCQs using large languagemodels. We establish a formulation of these two tasks and propose a simple,in-context learning-based solution. Moreover, we explore using two non-standardmetrics to evaluate the quality of the generated distractors and feedbackmessages. We conduct extensive experiments on these tasks using a real-worldMCQ dataset that contains student response information. Our findings suggestthat there is a lot of room for improvement in automated distractor andfeedback generation. We also outline several directions for future work",,arXiv,['cs.cl'],, -1069,causallm is not optimal for incontext learning,"['Nan Ding', 'Tomer Levinboim', 'Jialin Wu', 'Sebastian Goodman', 'Radu Soricut']",http://arxiv.org/pdf/2308.06912v2.pdf,2023-08-14,," Recent empirical evidence indicates that transformer based in-contextlearning performs better when using a prefix language model (prefixLM), inwhich in-context samples can all attend to each other, compared to causallanguage models (causalLM), which use auto-regressive attention that prohibitsin-context samples to attend to future samples. While this result is intuitive,it is not understood from a theoretical perspective. In this paper we take atheoretical approach and analyze the convergence behavior of prefixLM andcausalLM under a certain parameter construction. Our analysis shows that bothLM types converge to their stationary points at a linear rate, but that whileprefixLM converges to the optimal solution of linear regression, causalLMconvergence dynamics follows that of an online gradient descent algorithm,which is not guaranteed to be optimal even as the number of samples growsinfinitely. We supplement our theoretical claims with empirical experimentsover synthetic and real tasks and using various types of transformers. 
Ourexperiments verify that causalLM consistently underperforms prefixLM in allsettings.",,arXiv,"['cs.lg', 'cs.cl']",, -1070,exploring demonstration ensembling for incontext learning,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2308.08780v2.pdf,2023-08-17,," In-context learning (ICL) operates by showing language models (LMs) examplesof input-output pairs for a given task, i.e., demonstrations. The standardapproach for ICL is to prompt the LM with concatenated demonstrations followedby the test input. This approach suffers from some issues. First, concatenationoffers almost no control over the contribution of each demo to the modelprediction. This can be sub-optimal when some demonstrations are irrelevant tothe test example. Second, due to the input length limit of some transformermodels, it might be infeasible to fit many examples into the context,especially when dealing with long-input tasks. In this work, we exploreDemonstration Ensembling (DENSE) as an alternative to simple concatenation.DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations andthen combines the output probabilities resulting from each subset to producethe final prediction. We study different ensembling methods using GPT-j andexperiment on 12 language tasks. Our experiments show weighted max ensemblingto outperform vanilla concatenation by as large as 2.4 average points. Codeavailable at https://github.com/mukhal/icl-ensembling.",,arXiv,"['cs.cl', 'cs.ai']",, -1071,context is environment,"['Sharut Gupta', 'Stefanie Jegelka', 'David Lopez-Paz', 'Kartik Ahuja']",http://arxiv.org/pdf/2309.09888v2.pdf,2023-09-18,," Two lines of work are taking the central stage in AI research. On the onehand, the community is making increasing efforts to build models that discardspurious correlations and generalize better in novel test environments.Unfortunately, the bitter lesson so far is that no proposal convincinglyoutperforms a simple empirical risk minimization baseline. On the other hand,large language models (LLMs) have erupted as algorithms able to learnin-context, generalizing on-the-fly to eclectic contextual circumstances thatusers enforce by means of prompting. In this paper, we argue that context isenvironment, and posit that in-context learning holds the key to better domaingeneralization. Via extensive theory and experiments, we show that payingattention to context$\unicode{x2013}\unicode{x2013}$unlabeled examples as theyarrive$\unicode{x2013}\unicode{x2013}$allows our proposed In-Context RiskMinimization (ICRM) algorithm to zoom-in on the test environment riskminimizer, leading to significant out-of-distribution performance improvements.From all of this, two messages are worth taking home. Researchers in domaingeneralization should consider environment as context, and harness the adaptivepower of in-context learning. Researchers in LLMs should consider context asenvironment, to better structure data towards generalization.",,arXiv,"['cs.lg', 'cs.ai', 'stat.ml']",, -1072,"prompt, condition, and generate classification of unsupported claims with incontext learning","['Peter Ebert Christensen', 'Srishti Yadav', 'Serge Belongie']",http://arxiv.org/pdf/2309.10359v1.pdf,2023-09-19,," Unsupported and unfalsifiable claims we encounter in our daily lives caninfluence our view of the world. Characterizing, summarizing, and -- moregenerally -- making sense of such claims, however, can be challenging. 
In thiswork, we focus on fine-grained debate topics and formulate a new task ofdistilling, from such claims, a countable set of narratives. We present acrowdsourced dataset of 12 controversial topics, comprising more than 120karguments, claims, and comments from heterogeneous sources, each annotated witha narrative label. We further investigate how large language models (LLMs) canbe used to synthesise claims using In-Context Learning. We find that generatedclaims with supported evidence can be used to improve the performance ofnarrative classification models and, additionally, that the same model caninfer the stance and aspect using a few training examples. Such a model can beuseful in applications which rely on narratives , e.g. fact-checking.",,arXiv,['cs.cl'],, -1073,incontext learning for text classification with many labels,"['Aristides Milios', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2309.10954v1.pdf,2023-09-19,," In-context learning (ICL) using large language models for tasks with manylabels is challenging due to the limited context window, which makes itdifficult to fit a sufficient number of examples in the prompt. In this paper,we use a pre-trained dense retrieval model to bypass this limitation, givingthe model only a partial view of the full label space for each inference call.Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the artperformance in few-shot settings for three common intent classificationdatasets, with no finetuning. We also surpass fine-tuned performance onfine-grained sentiment classification in certain cases. We analyze theperformance across number of in-context examples and different model scales,showing that larger models are necessary to effectively and consistently makeuse of larger context lengths for ICL. By running several ablations, we analyzethe model's use of: a) the similarity of the in-context examples to the currentinput, b) the semantic content of the class names, and c) the correctcorrespondence between examples and labels. We demonstrate that all three areneeded to varying degrees depending on the domain, contrary to certain recentworks.",,arXiv,"['cs.cl', 'cs.lg']",, -1074,privacypreserving incontext learning with differentially private fewshot generation,"['Xinyu Tang', 'Richard Shin', 'Huseyin A. Inan', 'Andre Manoel', 'Fatemehsadat Mireshghallah', 'Zinan Lin', 'Sivakanth Gopi', 'Janardhan Kulkarni', 'Robert Sim']",http://arxiv.org/pdf/2309.11765v1.pdf,2023-09-21,," We study the problem of in-context learning (ICL) with large language models(LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leakor regurgitate the private examples demonstrated in the prompt. We propose anovel algorithm that generates synthetic few-shot demonstrations from theprivate dataset with formal differential privacy (DP) guarantees, and showempirically that it can achieve effective ICL. We conduct extensive experimentson standard benchmarks and compare our algorithm with non-private ICL andzero-shot solutions. Our results demonstrate that our algorithm can achievecompetitive performance with strong privacy levels. 
These results open up new possibilities for ICL with privacy protection for a broad range of applications.",,arXiv,"['cs.lg', 'cs.cr']",, -1075,hrot hybrid prompt strategy and retrieval of thought for tabletext hybrid question answering,"['Tongxu Luo', 'Fangyu Lei', 'Jiahe Lei', 'Weihao Liu', 'Shihu He', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2309.12669v1.pdf,2023-09-22,," Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language models, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.",,arXiv,['cs.cl'],, -1076,allure auditing and improving llmbased evaluation of text using iterative incontextlearning,"['Hosein Hasanbeig', 'Hiteshi Sharma', 'Leo Betthauser', 'Felipe Vieira Frujeri', 'Ida Momennejad']",http://arxiv.org/pdf/2309.13701v2.pdf,2023-09-24,," From grading papers to summarizing medical documents, large language models (LLMs) are evermore used for evaluation of text generated by humans and AI alike. However, despite their extensive utility, LLMs exhibit distinct failure modes, necessitating a thorough audit and improvement of their text evaluation capabilities. Here we introduce ALLURE, a systematic approach to Auditing Large Language Models Understanding and Reasoning Errors. ALLURE involves comparing LLM-generated evaluations with annotated data, and iteratively incorporating instances of significant deviation into the evaluator, which leverages in-context learning (ICL) to enhance and improve robust evaluation of text by LLMs. Through this iterative process, we refine the performance of the evaluator LLM, ultimately reducing reliance on human annotators in the evaluation process. We anticipate ALLURE to serve diverse applications of LLMs in various domains related to evaluation of textual data, such as medical summarization, education, and productivity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, -1077,dynamic demonstrations controller for incontext learning,"['Fei Zhao', 'Taotian Pang', 'Zhen Wu', 'Zheng Ma', 'Shujian Huang', 'Xinyu Dai']",http://arxiv.org/pdf/2310.00385v1.pdf,2023-09-30,," In-Context Learning (ICL) is a new paradigm for natural language processing (NLP), where a large language model (LLM) observes a small number of demonstrations and a test instance as its input, and directly makes predictions without updating model parameters. Previous studies have revealed that ICL is sensitive to the selection and the ordering of demonstrations. However, there are few studies regarding the impact of the demonstration number on the ICL performance within a limited input length of LLM, because it is commonly believed that the number of demonstrations is positively correlated with model performance. In this paper, we found this conclusion does not always hold true. Through pilot experiments, we discover that increasing the number of demonstrations does not necessarily lead to improved performance. 
Building upon this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller), which can improve the ICL performance by adjusting the number of demonstrations dynamically. The experimental results show that D$^2$Controller yields a 5.4% relative improvement on eight different sizes of LLMs across ten datasets. Moreover, we also extend our method to previous ICL models and achieve competitive results.",,arXiv,"['cs.cl', 'cs.ai']",, -1078,not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning,"['Zhe Yang', 'Damai Dai', 'Peiyi Wang', 'Zhifang Sui']",http://arxiv.org/pdf/2310.08309v1.pdf,2023-10-12,," Large Language Models (LLMs) have recently gained the In-Context Learning (ICL) ability with the models scaling up, allowing them to quickly adapt to downstream tasks with only a few demonstration examples prepended in the input sequence. Nonetheless, the current practice of ICL treats all demonstration examples equally, which still warrants improvement, as the quality of examples is usually uneven. In this paper, we investigate how to determine approximately optimal weights for demonstration examples and how to apply them during ICL. To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance. To expedite the weight-searching process, we discretize the continuous weight space and adopt beam search. With approximately optimal weights obtained, we further propose two strategies to apply them to demonstrations at different model positions. Experimental results on 8 text classification tasks show that our approach outperforms conventional ICL by a large margin. Our code is publicly available at https://github.com/Zhe-Young/WICL.",,arXiv,['cs.cl'],, -1079,how many pretraining tasks are needed for incontext learning of linear regression,"['Jingfeng Wu', 'Difan Zou', 'Zixiang Chen', 'Vladimir Braverman', 'Quanquan Gu', 'Peter L. Bartlett']",http://arxiv.org/pdf/2310.08391v1.pdf,2023-10-12,," Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.",,arXiv,"['stat.ml', 'cs.lg']",, -1080,generative calibration for incontext learning,"['Zhongtao Jiang', 'Yuanzhe Zhang', 'Cao Liu', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2310.10266v1.pdf,2023-10-16,," As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing. While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. 
In this paper, we for the first timetheoretically and empirically identify that such a paradox is mainly due to thelabel shift of the in-context model to the data distribution, in which LLMsshift the label marginal $p(y)$ while having a good label conditional $p(x|y)$.With this understanding, we can simply calibrate the in-context predictivedistribution by adjusting the label marginal, which is estimated viaMonte-Carlo sampling over the in-context model, i.e., generation of LLMs. Wecall our approach as generative calibration. We conduct exhaustive experimentswith 12 text classification tasks and 12 LLMs scaling from 774M to 33B,generally find that the proposed method greatly and consistently outperformsthe ICL as well as state-of-the-art calibration methods, by up to 27% absolutein macro-F1. Meanwhile, the proposed method is also stable under differentprompt configurations.",,arXiv,['cs.cl'],, -1081,magnifico evaluating the incontext learning ability of large language models to generalize to novel interpretations,"['Arkil Patel', 'Satwik Bhattamishra', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2310.11634v1.pdf,2023-10-18,," Humans possess a remarkable ability to assign novel interpretations tolinguistic expressions, enabling them to learn new words and understandcommunity-specific connotations. However, Large Language Models (LLMs) have aknowledge cutoff and are costly to finetune repeatedly. Therefore, it iscrucial for LLMs to learn novel interpretations in-context. In this paper, wesystematically analyse the ability of LLMs to acquire novel interpretationsusing in-context learning. To facilitate our study, we introduce MAGNIFICo, anevaluation suite implemented within a text-to-SQL semantic parsing frameworkthat incorporates diverse tokens and prompt settings to simulate real-worldcomplexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit asurprisingly robust capacity for comprehending novel interpretations fromnatural language descriptions as well as from discussions within longconversations. Nevertheless, our findings also highlight the need for furtherimprovements, particularly when interpreting unfamiliar words or when composingmultiple novel interpretations simultaneously in the same example.Additionally, our analysis uncovers the semantic predispositions in LLMs andreveals the impact of recency bias for information presented in long contexts.",,arXiv,['cs.cl'],, -1082,which examples to annotate for incontext learning towards effective and efficient selection,"['Costas Mavromatis', 'Balasubramaniam Srinivasan', 'Zhengyuan Shen', 'Jiani Zhang', 'Huzefa Rangwala', 'Christos Faloutsos', 'George Karypis']",http://arxiv.org/pdf/2310.20046v1.pdf,2023-10-30,," Large Language Models (LLMs) can adapt to new tasks via in-context learning(ICL). ICL is efficient as it does not require any parameter updates to thetrained LLM, but only few annotated examples as input for the LLM. In thiswork, we investigate an active learning approach for ICL, where there is alimited budget for annotating examples. We propose a model-adaptiveoptimization-free algorithm, termed AdaICL, which identifies examples that themodel is uncertain about, and performs semantic diversity-based exampleselection. Diversity-based sampling improves overall effectiveness, whileuncertainty sampling improves budget efficiency and helps the LLM learn newinformation. 
Moreover, AdaICL poses its sampling strategy as a Maximum Coverageproblem, that dynamically adapts based on the model's feedback and can beapproximately solved via greedy algorithms. Extensive experiments on ninedatasets and seven LLMs show that AdaICL improves performance by 4.4% accuracypoints over SOTA (7.7% relative improvement), is up to 3x more budget-efficientthan performing annotations uniformly at random, while it outperforms SOTA with2x fewer ICL examples.",,arXiv,['cs.cl'],, -1083,dail data augmentation for incontext learning via selfparaphrase,"['Dawei Li', 'Yaxuan Li', 'Dheeraj Mekala', 'Shuyao Li', 'Yulin wang', 'Xueqi Wang', 'William Hogan', 'Jingbo Shang']",http://arxiv.org/pdf/2311.03319v1.pdf,2023-11-06,," In-Context Learning (ICL) combined with pre-trained large language models hasachieved promising results on various NLP tasks. However, ICL requireshigh-quality annotated demonstrations which might not be available inreal-world scenarios. To overcome this limitation, we propose \textbf{D}ata\textbf{A}ugmentation for \textbf{I}n-Context \textbf{L}earning(\textbf{DAIL}). DAIL leverages the intuition that large language models aremore familiar with the content generated by themselves. It first utilizes thelanguage model to generate paraphrases of the test sample and employs majorityvoting to determine the final result based on individual predictions. Ourextensive empirical evaluation shows that DAIL outperforms the standard ICLmethod and other ensemble-based methods in the low-resource scenario.Additionally, we explore the use of voting consistency as a confidence score ofthe model when the logits of predictions are inaccessible. We believe our workwill stimulate further research on ICL in low-resource settings.",,arXiv,"['cs.cl', 'cs.ai']",, -1084,incontext exemplars as clues to retrieving from large associative memory,['Jiachen Zhao'],http://arxiv.org/pdf/2311.03498v1.pdf,2023-11-06,," Recently, large language models (LLMs) have made remarkable progress innatural language processing. The most representative ability of LLMs isin-context learning (ICL), which enables LLMs to learn patterns from in-contextexemplars without training. The performance of ICL greatly depends on theexemplars used. However, how to choose exemplars remains unclear due to thelack of understanding of how in-context learning works. In this paper, wepresent a novel perspective on ICL by conceptualizing it as contextualretrieval from a model of associative memory. We establish a theoreticalframework of ICL based on Hopfield Networks. Based on our framework, we lookinto how in-context exemplars influence the performance of ICL and propose moreefficient active exemplar selection. Our study sheds new light on the mechanismof ICL by connecting it to memory retrieval, with potential implications foradvancing the understanding of LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, -1085,selective annotation makes language models better fewshot learners,"['Hongjin Su', 'Jungo Kasai', 'Chen Henry Wu', 'Weijia Shi', 'Tianlu Wang', 'Jiayi Xin', 'Rui Zhang', 'Mari Ostendorf', 'Luke Zettlemoyer', 'Noah A. Smith', 'Tao Yu']",http://arxiv.org/pdf/2209.01975v1.pdf,2022-09-05,," Many recent approaches to natural language tasks are built on the remarkableabilities of large language models. Large language models can performin-context learning, where they learn a new task from a few taskdemonstrations, without any parameter updates. 
This work examines the implications of in-context learning for the creation of datasets for new natural language tasks. Departing from recent in-context learning methods, we formulate an annotation-efficient, two-step framework: selective annotation that chooses a pool of examples to annotate from unlabeled data in advance, followed by prompt retrieval that retrieves task examples from the annotated pool at test time. Based on this framework, we propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate. Extensive experiments on 10 datasets (covering classification, commonsense reasoning, dialogue, and text/code generation) demonstrate that our selective annotation method improves the task performance by a large margin. On average, vote-k achieves a 12.9%/11.4% relative gain under an annotation budget of 18/100, as compared to randomly selecting examples to annotate. Compared to state-of-the-art supervised finetuning approaches, it yields similar performance with 10-100x less annotation cost across 10 tasks. We further analyze the effectiveness of our framework in various scenarios: language models with varying sizes, alternative selective annotation methods, and cases where there is a test data domain shift. We hope that our studies will serve as a basis for data annotations as large language models are increasingly applied to new tasks. Our code is available at https://github.com/HKUNLP/icl-selective-annotation.",,arXiv,['cs.cl'],, -1086,incontext example selection with influences,"['Tai Nguyen', 'Eric Wong']",http://arxiv.org/pdf/2302.11042v2.pdf,2023-02-21,," In-context learning (ICL) is a powerful paradigm emerged from large language models (LLMs). Despite its promises, ICL performance is known to be highly sensitive to input examples. In this work, we use $\textit{in-context influences}$ to analyze few-shot ICL performance directly from the in-context examples. Our proposed influence-based example selection method can identify both positive and negative examples, outperforming several baselines when evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$ performance gap between using the most negative in-context examples compared to the most positive. In a case study, we apply our influence-based framework to quantify the phenomena of recency bias in example ordering for few-shot ICL.",,arXiv,"['cs.cl', 'cs.lg']",, -1087,"tabular representation, noisy operators, and impacts on table structure understanding tasks in llms","['Ananya Singha', 'José Cambronero', 'Sumit Gulwani', 'Vu Le', 'Chris Parnin']",http://arxiv.org/pdf/2310.10358v1.pdf,2023-10-16,," Large language models (LLMs) are increasingly applied for tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLMs ability to process the table. Inspired by prior work, we generate a collection of self-supervised structural tasks (e.g. navigate to a cell and row; transpose the table) and evaluate the performance differences when using 8 formats. 
In contrast to past work, we introduce 8 noise operations inspired byreal-world messy data and adversarial inputs, and show that such operations canimpact LLM performance across formats for different structural understandingtasks.",,arXiv,"['cs.cl', 'cs.ai']",, -1088,evaluating the impact of model scale for compositional generalization in semantic parsing,"['Linlu Qiu', 'Peter Shaw', 'Panupong Pasupat', 'Tianze Shi', 'Jonathan Herzig', 'Emily Pitler', 'Fei Sha', 'Kristina Toutanova']",http://arxiv.org/pdf/2205.12253v2.pdf,2022-05-24,," Despite their strong performance on many tasks, pre-trained language modelshave been shown to struggle on out-of-distribution compositionalgeneralization. Meanwhile, recent work has shown considerable improvements onmany NLP tasks from model scaling. Can scaling up model size also improvecompositional generalization in semantic parsing? We evaluate encoder-decodermodels up to 11B parameters and decoder-only models up to 540B parameters, andcompare model scaling curves for three different methods for applying apre-trained language model to a new task: fine-tuning all parameters, prompttuning, and in-context learning. We observe that fine-tuning generally has flator negative scaling curves on out-of-distribution compositional generalizationin semantic parsing evaluations. In-context learning has positive scalingcurves, but is generally outperformed by much smaller fine-tuned models.Prompt-tuning can outperform fine-tuning, suggesting further potentialimprovements from scaling as it exhibits a more positive scaling curve.Additionally, we identify several error trends that vary with model scale. Forexample, larger models are generally better at modeling the syntax of theoutput space, but are also more prone to certain types of overfitting. Overall,our study highlights limitations of current techniques for effectivelyleveraging model scale for compositional generalization, while our analysisalso suggests promising directions for future work.",,arXiv,['cs.cl'],, -1089,controllable dialogue simulation with incontext learning,"['Zekun Li', 'Wenhu Chen', 'Shiyang Li', 'Hong Wang', 'Jing Qian', 'Xifeng Yan']",http://arxiv.org/pdf/2210.04185v4.pdf,2022-10-09,," Building dialogue systems requires a large corpus of annotated dialogues.Such datasets are usually created via crowdsourcing, which is expensive andtime-consuming. In this paper, we propose \textsc{Dialogic}, a novel dialoguesimulation method based on large language model in-context learning to automatedataset creation. Seeded with a few annotated dialogues, \textsc{Dialogic}automatically selects in-context examples for demonstration and prompts GPT-3to generate new dialogues and annotations in a controllable way. Our method canrapidly expand a small set of dialogue data with minimum or zero \textit{humaninvolvement} and \textit{parameter update} and is thus much more cost-efficientand time-saving than crowdsourcing. Experimental results on the MultiWOZdataset demonstrate that training a model on the simulated dialogues leads toeven better performance than using the same amount of human-generated dialoguesunder the challenging low-resource settings, with as few as 85 dialogues as aseed. When enough data is available, our method can still serve as an effectivedata augmentation method. Human evaluation results also show that our simulateddialogues have near-human fluency and annotation accuracy. 
The code and dataare available at \textbf{\url{https://github.com/Leezekun/dialogic}}.",,arXiv,"['cs.cl', 'cs.ai']",, -1090,xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing,"['Peng Shi', 'Rui Zhang', 'He Bai', 'Jimmy Lin']",http://arxiv.org/pdf/2210.13693v1.pdf,2022-10-25,," In-context learning using large language models has recently shown surprisingresults for semantic parsing tasks such as Text-to-SQL translation. PromptingGPT-3 or Codex using several examples of question-SQL pairs can produceexcellent results, comparable to state-of-the-art finetuning-based models.However, existing work primarily focuses on English datasets, and it is unknownwhether large language models can serve as competitive semantic parsers forother languages. To bridge this gap, our work focuses on cross-lingualText-to-SQL semantic parsing for translating non-English utterances into SQLqueries based on an English schema. We consider a zero-shot transfer learningsetting with the assumption that we do not have any labeled examples in thetarget language (but have annotated examples in English). This work introducesthe XRICL framework, which learns to retrieve relevant English exemplars for agiven query to construct prompts. We also include global translation exemplarsfor a target language to facilitate the translation process for large languagemodels. To systematically evaluate our model, we construct two new benchmarkdatasets, XSpider and XKaggle-dbqa, which include questions in Chinese,Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectivelyleverages large pre-trained language models to outperform existing baselines.Data and code are publicly available at https://github.com/Impavidity/XRICL.",,arXiv,['cs.cl'],, -1091,how many demonstrations do you need for incontext learning,"['Jiuhai Chen', 'Lichang Chen', 'Chen Zhu', 'Tianyi Zhou']",http://arxiv.org/pdf/2303.08119v3.pdf,2023-03-14,," Large language models (LLMs) are capable to perform complex reasoning byin-context learning (ICL) when provided with a few input-output demonstrations(demos) and more powerful when intermediate reasoning steps (""chain of thoughts(CoT)"") of the demos are given. Is it necessary to use multi-demo in ICL? Inthis paper, we study ICL using fewer demos for each test query on the tasksin~\cite{wei2022chain}. Surprisingly, we do not observe significant degradationwhen using only one randomly chosen demo. To study this phenomenon, for eachtest query, we categorize demos into ""correct demos"" leading to the correctanswer, and ""wrong demos"" resulting in wrong answers. Our analysis reveals aninherent bias in those widely studied datasets: most demos are correct for amajority of test queries, which explains the good performance of using onerandom demo. Moreover, ICL (with and w/o CoT) using only one correct demosignificantly outperforms all-demo ICL adopted by most previous works,indicating the weakness of LLMs in finding correct demo(s) for input queries,which is difficult to evaluate on the biased datasets. Furthermore, we observea counterintuitive behavior of ICL using multi-demo, i.e., its accuracydegrades(improves) when given more correct(wrong) demos. This implies that ICLcan be easily misguided by interference among demos and their spuriouscorrelations. 
Our analyses highlight several fundamental challenges that needto be addressed in LLMs training, ICL, and benchmark design.",,arXiv,['cs.ai'],, -1092,improving visual question answering models through robustness analysis and incontext learning with a chain of basic questions,"['Jia-Hong Huang', 'Modar Alfadly', 'Bernard Ghanem', 'Marcel Worring']",http://arxiv.org/pdf/2304.03147v1.pdf,2023-04-06,," Deep neural networks have been critical in the task of Visual QuestionAnswering (VQA), with research traditionally focused on improving modelaccuracy. Recently, however, there has been a trend towards evaluating therobustness of these models against adversarial attacks. This involves assessingthe accuracy of VQA models under increasing levels of noise in the input, whichcan target either the image or the proposed query question, dubbed the mainquestion. However, there is currently a lack of proper analysis of this aspectof VQA. This work proposes a new method that utilizes semantically relatedquestions, referred to as basic questions, acting as noise to evaluate therobustness of VQA models. It is hypothesized that as the similarity of a basicquestion to the main question decreases, the level of noise increases. Togenerate a reasonable noise level for a given main question, a pool of basicquestions is ranked based on their similarity to the main question, and thisranking problem is cast as a LASSO optimization problem. Additionally, thiswork proposes a novel robustness measure, R_score, and two basic questiondatasets to standardize the analysis of VQA model robustness. The experimentalresults demonstrate that the proposed evaluation method effectively analyzesthe robustness of VQA models. Moreover, the experiments show that in-contextlearning with a chain of basic questions can enhance model accuracy.",,arXiv,"['cs.cv', 'cs.ai']",, -1093,genegpt augmenting large language models with domain tools for improved access to biomedical information,"['Qiao Jin', 'Yifan Yang', 'Qingyu Chen', 'Zhiyong Lu']",http://arxiv.org/pdf/2304.09667v3.pdf,2023-04-19,," While large language models (LLMs) have been successfully applied to varioustasks, they still face challenges with hallucinations. Augmenting LLMs withdomain-specific tools such as database utilities can facilitate easier and moreprecise access to specialized knowledge. In this paper, we present GeneGPT, anovel method for teaching LLMs to use the Web APIs of the National Center forBiotechnology Information (NCBI) for answering genomics questions.Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIsby in-context learning and an augmented decoding algorithm that can detect andexecute API calls. Experimental results show that GeneGPT achievesstate-of-the-art performance on eight tasks in the GeneTuring benchmark with anaverage score of 0.83, largely surpassing retrieval-augmented LLMs such as thenew Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), aswell as GPT-3 (0.16) and ChatGPT (0.12). 
Our further analyses suggest that: (1)API demonstrations have good cross-task generalizability and are more usefulthan documentations for in-context learning; (2) GeneGPT can generalize tolonger chains of API calls and answer multi-hop questions in GeneHop, a noveldataset introduced in this work; (3) Different types of errors are enriched indifferent tasks, providing valuable insights for future improvements.",,arXiv,"['cs.cl', 'cs.ai', 'q-bio.gn']",, -1094,dinsql decomposed incontext learning of texttosql with selfcorrection,"['Mohammadreza Pourreza', 'Davood Rafiei']",http://arxiv.org/pdf/2304.11015v3.pdf,2023-04-21,," There is currently a significant gap between the performance of fine-tunedmodels and prompting approaches using Large Language Models (LLMs) on thechallenging task of text-to-SQL, as evaluated on datasets such as Spider. Toimprove the performance of LLMs in the reasoning process, we study howdecomposing the task into smaller sub-tasks can be effective. In particular, weshow that breaking down the generation problem into sub-problems and feedingthe solutions of those sub-problems into LLMs can be an effective approach forsignificantly improving their performance. Our experiments with three LLMs showthat this approach consistently improves their simple few-shot performance byroughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On theholdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9and the new SOTA at the time of this writing using our approach is 85.3. Ourapproach with in-context learning beats many heavily fine-tuned models by atleast 5%. Additionally, when evaluated on the BIRD benchmark, our approachachieved an execution accuracy of 55.9%, setting a new SOTA on its holdout testset.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db', 'cs.hc']",, -1095,fewshot incontext learning for knowledge base question answering,"['Tianle Li', 'Xueguang Ma', 'Alex Zhuang', 'Yu Gu', 'Yu Su', 'Wenhu Chen']",http://arxiv.org/pdf/2305.01750v2.pdf,2023-05-02,," Question answering over knowledge bases is considered a difficult problem dueto the challenge of generalizing to a wide variety of possible natural languagequestions. Additionally, the heterogeneity of knowledge base schema itemsbetween different knowledge bases often necessitates specialized training fordifferent knowledge base question-answering (KBQA) datasets. To handlequestions over diverse KBQA datasets with a unified training-free framework, wepropose KB-BINDER, which for the first time enables few-shot in-contextlearning over KBQA tasks. Firstly, KB-BINDER leverages large language modelslike Codex to generate logical forms as the draft for a specific question byimitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledgebase to bind the generated draft to an executable one with BM25 score matching.The experimental results on four public heterogeneous KBQA datasets show thatKB-BINDER can achieve a strong performance with only a few in-contextdemonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can evenoutperform the state-of-the-art trained models. On GrailQA and WebQSP, ourmodel is also on par with other fully-trained models. We believe KB-BINDER canserve as an important baseline for future research. 
Our code is available athttps://github.com/ltl3A87/KB-BINDER.",,arXiv,"['cs.cl', 'cs.ai']",, -1096,text classification via large language models,"['Xiaofei Sun', 'Xiaoya Li', 'Jiwei Li', 'Fei Wu', 'Shangwei Guo', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2305.08377v3.pdf,2023-05-15,," Despite the remarkable success of large-scale Language Models (LLMs) such asGPT-3, their performances still significantly underperform fine-tuned models inthe task of text classification. This is due to (1) the lack of reasoningability in addressing complex linguistic phenomena (e.g., intensification,contrast, irony etc); (2) limited number of tokens allowed in in-contextlearning. In this paper, we introduce Clue And Reasoning Prompting (CARP). CARP adoptsa progressive reasoning strategy tailored to addressing the complex linguisticphenomena involved in text classification: CARP first prompts LLMs to findsuperficial clues (e.g., keywords, tones, semantic relations, references, etc),based on which a diagnostic reasoning process is induced for final decisions.To further address the limited-token issue, CARP uses a fine-tuned model on thesupervised dataset for $k$NN demonstration search in the in-context learning,allowing the model to take the advantage of both LLM's generalization abilityand the task-specific evidence provided by the full labeled dataset.Remarkably, CARP yields new SOTA performances on 4 out of 5 widely-usedtext-classification benchmarks, 97.39 (+1.24) on SST-2, 96.40 (+0.72) onAGNews, 98.78 (+0.25) on R8 and 96.95 (+0.6) on R52, and a performancecomparable to SOTA on MR (92.39 v.s. 93.3). More importantly, we find that CARPdelivers impressive abilities on low-resource and domain-adaptation setups.Specifically, using 16 examples per class, CARP achieves comparableperformances to supervised models with 1,024 examples per class.",,arXiv,['cs.cl'],, -1097,exploring incontext learning capabilities of foundation models for generating knowledge graphs from text,"['Hanieh Khorashadizadeh', 'Nandana Mihindukulasooriya', 'Sanju Tiwari', 'Jinghua Groppe', 'Sven Groppe']",http://arxiv.org/pdf/2305.08804v1.pdf,2023-05-15,," Knowledge graphs can represent information about the real-world usingentities and their relations in a structured and semantically rich manner andthey enable a variety of downstream applications such as question-answering,recommendation systems, semantic search, and advanced analytics. However, atthe moment, building a knowledge graph involves a lot of manual effort and thushinders their application in some situations and the automation of this processmight benefit especially for small organizations. Automatically generatingstructured knowledge graphs from a large volume of natural language is still achallenging task and the research on sub-tasks such as named entity extraction,relation extraction, entity and relation linking, and knowledge graphconstruction aims to improve the state of the art of automatic construction andcompletion of knowledge graphs from text. The recent advancement of foundationmodels with billions of parameters trained in a self-supervised manner withlarge volumes of training data that can be adapted to a variety of downstreamtasks has helped to demonstrate high performance on a large range of NaturalLanguage Processing (NLP) tasks. 
In this context, one emerging paradigm isin-context learning where a language model is used as it is with a prompt thatprovides instructions and some examples to perform a task without changing theparameters of the model using traditional approaches such as fine-tuning. Thisway, no computing resources are needed for re-training/fine-tuning the modelsand the engineering effort is minimal. Thus, it would be beneficial to utilizesuch capabilities for generating knowledge graphs from text.",,arXiv,['cs.cl'],, -1098,what incontext learning learns incontext disentangling task recognition and task learning,"['Jane Pan', 'Tianyu Gao', 'Howard Chen', 'Danqi Chen']",http://arxiv.org/pdf/2305.09731v1.pdf,2023-05-16,," Large language models (LLMs) exploit in-context learning (ICL) to solve taskswith only a few demonstrations, but its mechanisms are not yet well-understood.Some works suggest that LLMs only recall already learned concepts frompre-training, while others hint that ICL performs implicit learning overdemonstrations. We characterize two ways through which ICL leveragesdemonstrations. Task recognition (TR) captures the extent to which LLMs canrecognize a task through demonstrations -- even without ground-truth labels --and apply their pre-trained priors, whereas task learning (TL) is the abilityto capture new input-label mappings unseen in pre-training. Using a wide rangeof classification datasets and three LLM families (GPT-3, LLaMA and OPT), wedesign controlled experiments to disentangle the roles of TR and TL in ICL. Weshow that (1) models can achieve non-trivial performance with only TR, and TRdoes not further improve with larger models or more demonstrations; (2) LLMsacquire TL as the model scales, and TL's performance consistently improves withmore demonstrations in context. Our findings unravel two different forcesbehind ICL and we advocate for discriminating them in future ICL research dueto their distinct nature.",,arXiv,"['cs.cl', 'cs.lg']",, -1099,temporal knowledge graph forecasting without knowledge using incontext learning,"['Dong-Ho Lee', 'Kian Ahrabian', 'Woojeong Jin', 'Fred Morstatter', 'Jay Pujara']",http://arxiv.org/pdf/2305.10613v3.pdf,2023-05-17,," Temporal knowledge graph (TKG) forecasting benchmarks challenge models topredict future facts using knowledge of past facts. In this paper, we applylarge language models (LLMs) to these benchmarks using in-context learning(ICL). We investigate whether and to what extent LLMs can be used for TKGforecasting, especially without any fine-tuning or explicit modules forcapturing structural and temporal information. For our experiments, we presenta framework that converts relevant historical facts into prompts and generatesranked predictions using token probabilities. Surprisingly, we observe thatLLMs, out-of-the-box, perform on par with state-of-the-art TKG models carefullydesigned and trained for TKG forecasting. Our extensive evaluation presentsperformances across several models and datasets with different characteristics,compares alternative heuristics for preparing contextual information, andcontrasts to prominent TKG methods and simple frequency and recency baselines.We also discover that using numerical indices instead of entity/relation names,i.e., hiding semantic information, does not significantly affect theperformance ($\pm$0.4\% Hit@1). This shows that prior semantic knowledge isunnecessary; instead, LLMs can leverage the existing patterns in the context toachieve such performance. 
Our analysis also reveals that ICL enables LLMs tolearn irregular patterns from the historical context, going beyond simplepredictions based on common or recent information.",,arXiv,['cs.cl'],, -1100,learning incontext learning for named entity recognition,"['Jiawei Chen', 'Yaojie Lu', 'Hongyu Lin', 'Jie Lou', 'Wei Jia', 'Dai Dai', 'Hua Wu', 'Boxi Cao', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2305.11038v3.pdf,2023-05-18,," Named entity recognition in real-world applications suffers from thediversity of entity types, the emergence of new entity types, and the lack ofhigh-quality annotations. To address the above problems, this paper proposes anin-context learning-based NER approach, which can effectively inject in-contextNER ability into PLMs and recognize entities of novel types on-the-fly usingonly a few demonstrative instances. Specifically, we model PLMs as ameta-function $\mathcal{ \lambda_ {\text{instruction, demonstrations, text}}.M}$, and a new entity extractor can be implicitly constructed by applying newinstruction and demonstrations to PLMs, i.e., $\mathcal{ (\lambda . M)}$(instruction, demonstrations) $\to$ $\mathcal{F}$ where $\mathcal{F}$ will bea new entity extractor, i.e., $\mathcal{F}$: text $\to$ entities. To inject theabove in-context NER ability into PLMs, we propose a meta-function pre-trainingalgorithm, which pre-trains PLMs by comparing the (instruction,demonstration)-initialized extractor with a surrogate golden extractor.Experimental results on 4 few-shot NER datasets show that our method caneffectively inject in-context NER ability into PLMs and significantlyoutperforms the PLMs+fine-tuning counterparts.",,arXiv,['cs.cl'],, -1101,plugmed improving specificity in patientcentered medical dialogue generation using incontext learning,"['Chengfeng Dou', 'Zhi Jin', 'Wenping Jiao', 'Haiyan Zhao', 'Zhenwei Tao', 'Yongqiang Zhao']",http://arxiv.org/pdf/2305.11508v2.pdf,2023-05-19,," The patient-centered medical dialogue systems strive to offer diagnosticinterpretation services to users who are less knowledgeable about medicalknowledge, through emphasizing the importance of providing responses specificto the patients. It is difficult for the large language models (LLMs) toguarantee the specificity of responses in spite of its promising performanceeven in some tasks in medical field. Inspired by in-context learning, wepropose PlugMed, a Plug-and-Play Medical Dialogue System, for addressing thischallenge. PlugMed is equipped with two modules, the prompt generation (PG)module and the response ranking (RR) module, to enhances LLMs' dialoguestrategies for improving the specificity of the dialogue. The PG module isdesigned to stimulate the imitative ability of LLMs by providing them with realdialogues from similar patients as prompts. The RR module incorporatesfine-tuned small model as response filter to enable the selection ofappropriate responses generated by LLMs. Furthermore, we introduce a newevaluation method based on matching both user's intent and high-frequencymedical term to effectively assess the specificity of the responses. 
We conductexperimental evaluations on three medical dialogue datasets, and the results,including both automatic and human evaluation, demonstrate the effectiveness ofour approach.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, -1102,toolkengpt augmenting frozen language models with massive tools via tool embeddings,"['Shibo Hao', 'Tianyang Liu', 'Zhen Wang', 'Zhiting Hu']",http://arxiv.org/pdf/2305.11554v3.pdf,2023-05-19,," Augmenting large language models (LLMs) with external tools has emerged as apromising approach to solving complex problems. However, traditional methods,which finetune LLMs with tool demonstration data, can be both costly andrestricted to a predefined set of tools. Recent in-context learning paradigmalleviates these issues, but the limited context length only allows for a fewshots of demonstrations, leading to suboptimal understandings of the tools.Moreover, when there are numerous tools to choose from, in-context learningcould completely fail to work. In this paper, we propose an alternativeapproach, $\textbf{ToolkenGPT}$, which combines the benefits of both sides. Ourapproach represents each $\underline{tool}$ as a to$\underline{ken}$($\textit{toolken}$) and learns an embedding for it, enabling tool calls in thesame way as generating a regular word token. Once a toolken is triggered, theLLM is prompted to complete arguments for the tool to execute. ToolkenGPToffers the flexibility to plug in an arbitrary number of tools by expanding theset of toolkens on the fly. In addition, it improves tool use by allowingextensive demonstration data for learning the toolken embeddings. In diversedomains, including numerical reasoning, knowledge-based question answering, andembodied plan generation, our approach effectively augments LLMs with tools andsubstantially outperforms various latest baselines. ToolkenGPT demonstrates thepromising ability to use relevant tools from a large tool set in complexscenarios.",,arXiv,"['cs.cl', 'cs.lg']",, -1103,measuring inductive biases of incontext learning with underspecified demonstrations,"['Chenglei Si', 'Dan Friedman', 'Nitish Joshi', 'Shi Feng', 'Danqi Chen', 'He He']",http://arxiv.org/pdf/2305.13299v1.pdf,2023-05-22,," In-context learning (ICL) is an important paradigm for adapting largelanguage models (LLMs) to new tasks, but the generalization behavior of ICLremains poorly understood. We investigate the inductive biases of ICL from theperspective of feature bias: which feature ICL is more likely to use given aset of underspecified demonstrations in which two features are equallypredictive of the labels. First, we characterize the feature biases of GPT-3models by constructing underspecified demonstrations from a range of NLPdatasets and feature combinations. We find that LLMs exhibit clear featurebiases - for example, demonstrating a strong bias to predict labels accordingto sentiment rather than shallow lexical features, like punctuation. Second, weevaluate the effect of different interventions that are designed to impose aninductive bias in favor of a particular feature, such as adding a naturallanguage instruction or using semantically relevant label words. We find that,while many interventions can influence the learner to prefer a particularfeature, it can be difficult to overcome strong prior biases. 
Overall, ourresults provide a broader picture of the types of features that ICL may be morelikely to exploit and how to impose inductive biases that are better alignedwith the intended task.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1104,buffet benchmarking large language models for fewshot crosslingual transfer,"['Akari Asai', 'Sneha Kudugunta', 'Xinyan Velocity Yu', 'Terra Blevins', 'Hila Gonen', 'Machel Reid', 'Yulia Tsvetkov', 'Sebastian Ruder', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2305.14857v1.pdf,2023-05-24,," Despite remarkable advancements in few-shot generalization in naturallanguage processing, most models are developed and evaluated primarily inEnglish. To facilitate research on few-shot cross-lingual transfer, weintroduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across54 languages in a sequence-to-sequence format and provides a fixed set offew-shot examples and instructions. BUFFET is designed to establish a rigorousand equitable evaluation framework for few-shot cross-lingual transfer across abroad range of tasks and languages. Using BUFFET, we perform thoroughevaluations of state-of-the-art multilingual large language models withdifferent transfer methods, namely in-context learning and fine-tuning. Ourfindings reveal significant room for improvement in few-shot in-contextcross-lingual transfer. In particular, ChatGPT with in-context learning oftenperforms worse than much smaller mT5-base models fine-tuned on English taskdata and few-shot in-language examples. Our analysis suggests various avenuesfor future research in few-shot cross-lingual transfer, such as improvedpretraining, understanding, and future evaluations.",,arXiv,['cs.cl'],, -1105,measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing,"['Shufan Wang', 'Sebastien Jean', 'Sailik Sengupta', 'James Gung', 'Nikolaos Pappas', 'Yi Zhang']",http://arxiv.org/pdf/2305.15338v1.pdf,2023-05-24,," In executable task-oriented semantic parsing, the system aims to translateusers' utterances in natural language to machine-interpretable programs (APIcalls) that can be executed according to pre-defined API specifications. Withthe popularity of Large Language Models (LLMs), in-context learning offers astrong baseline for such scenarios, especially in data-limited regimes.However, LLMs are known to hallucinate and therefore pose a formidablechallenge in constraining generated content. Thus, it remains uncertain if LLMscan effectively perform task-oriented utterance-to-API generation whererespecting API's structural and task-specific constraints is crucial. In this work, we seek to measure, analyze and mitigate such constraintsviolations. First, we identify the categories of various constraints inobtaining API-semantics from task-oriented utterances, and define fine-grainedmetrics that complement traditional ones. Second, we leverage these metrics toconduct a detailed error analysis of constraints violations seen instate-of-the-art LLMs, which motivates us to investigate two mitigationstrategies: Semantic-Retrieval of Demonstrations (SRD) and API-awareConstrained Decoding (API-CD). 
Our experiments show that these strategies areeffective at reducing constraints violations and improving the quality of thegenerated API calls, but require careful consideration given theirimplementation complexity and latency.",,arXiv,"['cs.ai', 'cs.cl']",, -1106,what can large language models do in chemistry a comprehensive benchmark on eight tasks,"['Taicheng Guo', 'Kehan Guo', 'Bozhao Nan', 'Zhenwen Liang', 'Zhichun Guo', 'Nitesh V. Chawla', 'Olaf Wiest', 'Xiangliang Zhang']",http://arxiv.org/pdf/2305.18365v2.pdf,2023-05-27,," Large Language Models (LLMs) with strong abilities in natural languageprocessing tasks have emerged and have been applied in various kinds of areassuch as science, finance and software engineering. However, the capability ofLLMs to advance the field of chemistry remains unclear. In this paper, ratherthan pursuing state-of-the-art performance, we aim to evaluate capabilities ofLLMs in a wide range of tasks across the chemistry domain. We identify threekey chemistry-related capabilities including understanding, reasoning andexplaining to explore in LLMs and establish a benchmark containing eightchemistry tasks. Our analysis draws on widely recognized datasets facilitatinga broad exploration of the capacities of LLMs within the context of practicalchemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) areevaluated for each chemistry task in zero-shot and few-shot in-context learningsettings with carefully selected demonstration examples and specially craftedprompts. Our investigation found that GPT-4 outperformed other models and LLMsexhibit different competitive levels in eight chemistry tasks. In addition tothe key findings from the comprehensive benchmark analysis, our work providesinsights into the limitation of current LLMs and the impact of in-contextlearning settings on LLMs' performance across various chemistry tasks. The codeand datasets used in this study are available athttps://github.com/ChemFoundationModels/ChemLLMBench.",,arXiv,"['cs.cl', 'cs.ai']",, -1107,mitigating label biases for incontext learning,"['Yu Fei', 'Yifan Hou', 'Zeming Chen', 'Antoine Bosselut']",http://arxiv.org/pdf/2305.19148v3.pdf,2023-05-28,," Various design settings for in-context learning (ICL), such as the choice andorder of the in-context examples, can bias a model toward a particularprediction without being reflective of an understanding of the task. While manystudies discuss these design choices, there have been few systematicinvestigations into categorizing them and mitigating their impact. In thiswork, we define a typology for three types of label biases in ICL for textclassification: vanilla-label bias, context-label bias, and domain-label bias(which we conceptualize and detect for the first time). Our analysis demonstrates that prior label bias calibration methods fallshort of addressing all three types of biases. Specifically, domain-label biasrestricts LLMs to random-level performance on many tasks regardless of thechoice of in-context examples. To mitigate the effect of these biases, wepropose a simple bias calibration method that estimates a language model'slabel bias using random in-domain words from the task corpus. After controllingfor this estimated bias when making predictions, our novel domain-contextcalibration significantly improves the ICL performance of GPT-J and GPT-3 on awide range of tasks. The gain is substantial on tasks with large domain-labelbias (up to 37% in Macro-F1). 
Furthermore, our results generalize to modelswith different scales, pretraining methods, and manually-designed taskinstructions, showing the prevalence of label biases in ICL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1108,pretraining task diversity and the emergence of nonbayesian incontext learning for regression,"['Allan Raventós', 'Mansheej Paul', 'Feng Chen', 'Surya Ganguli']",http://arxiv.org/pdf/2306.15063v2.pdf,2023-06-26,," Pretrained transformers exhibit the remarkable ability of in-context learning(ICL): they can learn tasks from just a few examples provided in the promptwithout updating any weights. This raises a foundational question: can ICLsolve fundamentally $\textit{new}$ tasks that are very different from thoseseen during pretraining? To probe this question, we examine ICL's performanceon linear regression while varying the diversity of tasks in the pretrainingdataset. We empirically demonstrate a $\textit{task diversity threshold}$ forthe emergence of ICL. Below this threshold, the pretrained transformer cannotsolve unseen regression tasks, instead behaving like a Bayesian estimator withthe $\textit{non-diverse pretraining task distribution}$ as the prior. Beyondthis threshold, the transformer significantly outperforms this estimator; itsbehavior aligns with that of ridge regression, corresponding to a Gaussianprior over $\textit{all tasks}$, including those not seen during pretraining.Thus, when pretrained on data with task diversity greater than the threshold,transformers $\textit{can}$ optimally solve fundamentally new tasks in-context.Importantly, this capability hinges on it deviating from the Bayes optimalestimator with the pretraining distribution as the prior. This study alsoexplores the effect of regularization, model capacity and task structure andunderscores, in a concrete example, the critical role of task diversity,alongside data and model scale, in the emergence of ICL. Code is available athttps://github.com/mansheej/icl-task-diversity.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, -1109,understanding incontext learning via supportive pretraining data,"['Xiaochuang Han', 'Daniel Simig', 'Todor Mihaylov', 'Yulia Tsvetkov', 'Asli Celikyilmaz', 'Tianlu Wang']",http://arxiv.org/pdf/2306.15091v1.pdf,2023-06-26,," In-context learning (ICL) improves language models' performance on a varietyof NLP tasks by simply demonstrating a handful of examples at inference time.It is not well understood why ICL ability emerges, as the model has never beenspecifically trained on such demonstrations. Unlike prior work that exploresimplicit mechanisms behind ICL, we study ICL via investigating the pretrainingdata. Specifically, we first adapt an iterative, gradient-based approach tofind a small subset of pretraining data that supports ICL. We observe that acontinued pretraining on this small subset significantly improves the model'sICL ability, by up to 18%. We then compare the supportive subset constrastivelywith random subsets of pretraining data and discover: (1) The supportivepretraining data to ICL do not have a higher domain relevance to downstreamtasks. (2) The supportive pretraining data have a higher mass of rarelyoccurring, long-tail tokens. (3) The supportive pretraining data arechallenging examples where the information gain from long-range context isbelow average, indicating learning to incorporate difficult long-range contextencourages ICL. Our work takes a first step towards understanding ICL viaanalyzing instance-level pretraining data. 
Our insights have a potential toenhance the ICL ability of language models by actively guiding the constructionof pretraining data in the future.",,arXiv,['cs.cl'],, -1110,schemalearning and rebinding as mechanisms of incontext learning and emergence,"['Sivaramakrishnan Swaminathan', 'Antoine Dedieu', 'Rajkumar Vasudeva Raju', 'Murray Shanahan', 'Miguel Lazaro-Gredilla', 'Dileep George']",http://arxiv.org/pdf/2307.01201v1.pdf,2023-06-16,," In-context learning (ICL) is one of the most powerful and most unexpectedcapabilities to emerge in recent transformer-based large language models(LLMs). Yet the mechanisms that underlie it are poorly understood. In thispaper, we demonstrate that comparable ICL capabilities can be acquired by analternative sequence prediction learning method using clone-structured causalgraphs (CSCGs). Moreover, a key property of CSCGs is that, unliketransformer-based LLMs, they are {\em interpretable}, which considerablysimplifies the task of explaining how ICL works. Specifically, we show that ituses a combination of (a) learning template (schema) circuits for patterncompletion, (b) retrieving relevant templates in a context-sensitive manner,and (c) rebinding of novel tokens to appropriate slots in the templates. We goon to marshall evidence for the hypothesis that similar mechanisms underlie ICLin LLMs. For example, we find that, with CSCGs as with LLMs, differentcapabilities emerge at different levels of overparameterization, suggestingthat overparameterization helps in learning more complex template (schema)circuits. By showing how ICL can be achieved with small models and datasets, weopen up a path to novel architectures, and take a vital step towards a moregeneral understanding of the mechanics behind this important capability.",,arXiv,"['cs.cl', 'cs.ai']",, -1111,towards understanding incontext learning with contrastive demonstrations and saliency maps,"['Zongxia Li', 'Paiheng Xu', 'Fuxiao Liu', 'Hyemi Song']",http://arxiv.org/pdf/2307.05052v1.pdf,2023-07-11,," We investigate the role of various demonstration components in the in-contextlearning (ICL) performance of large language models (LLMs). Specifically, weexplore the impacts of ground-truth labels, input distribution, andcomplementary explanations, particularly when these are altered or perturbed.We build on previous work, which offers mixed findings on how these elementsinfluence ICL. To probe these questions, we employ explainable NLP (XNLP)methods and utilize saliency maps of contrastive demonstrations for bothqualitative and quantitative analysis. Our findings reveal that flippingground-truth labels significantly affects the saliency, though it's morenoticeable in larger LLMs. Our analysis of the input distribution at a granularlevel reveals that changing sentiment-indicative terms in a sentiment analysistask to neutral ones does not have as substantial an impact as alteringground-truth labels. 
Finally, we find that the effectiveness of complementaryexplanations in boosting ICL performance is task-dependent, with limitedbenefits seen in sentiment analysis tasks compared to symbolic reasoning tasks.These insights are critical for understanding the functionality of LLMs andguiding the development of effective demonstrations, which is increasinglyrelevant in light of the growing use of LLMs in applications such as ChatGPT.Our research code is publicly available at https://github.com/paihengxu/XICL.",,arXiv,"['cs.cl', 'cs.ai']",, -1112,incontext learning for modelfree system identification,"['Marco Forgione', 'Filippo Pura', 'Dario Piga']",http://arxiv.org/pdf/2308.13380v1.pdf,2023-08-25,," In traditional system identification, we estimate a model of an unknowndynamical system based on given input/output sequences and available physicalknowledge. Yet, is it also possible to understand the intricacies of dynamicalsystems not solely from their input/output patterns, but by observing thebehavior of other systems within the same class? This central question drivesthe study presented in this paper. In response to this query, we introduce a novel paradigm for systemidentification, addressing two primary tasks: one-step-ahead prediction andmulti-step simulation. Unlike conventional methods, we do not directly estimatea model for the specific system. Instead, we pretrain a meta model thatrepresents a class of dynamical systems. This meta model is trained from apotentially infinite stream of synthetic data, generated by systems randomlyextracted from a certain distribution. At its core, the meta model serves as animplicit representation of the main characteristics of a class of dynamicalsystems. When provided with a brief context from a new system - specifically, ashort input/output sequence - the meta model implicitly discerns its dynamics,enabling predictions of its behavior. The proposed approach harnesses the power of Transformer architectures,renowned for their in-context learning capabilities in Natural LanguageProcessing tasks. For one-step prediction, a GPT-like decoder-only architectureis utilized, whereas the simulation problem employs an encoder-decoderstructure. Initial experimental results affirmatively answer our foundational question,opening doors to fresh research avenues in system identification.",,arXiv,"['eess.sy', 'cs.lg', 'cs.sy']",, -1113,ambiguityaware incontext learning with large language models,"['Lingyu Gao', 'Aditi Chaudhary', 'Krishna Srinivasan', 'Kazuma Hashimoto', 'Karthik Raman', 'Michael Bendersky']",http://arxiv.org/pdf/2309.07900v1.pdf,2023-09-14,," In-context learning (ICL) i.e. showing LLMs only a few task-specificdemonstrations has led to downstream gains with no task-specific fine-tuningrequired. However, LLMs are sensitive to the choice of prompts, and therefore acrucial research question is how to select good demonstrations for ICL. Oneeffective strategy is leveraging semantic similarity between the ICLdemonstrations and test inputs by using a text retriever, which however issub-optimal as that does not consider the LLM's existing knowledge about thattask. From prior work (Min et al., 2022), we already know that labels pairedwith the demonstrations bias the model predictions. This leads us to ourhypothesis whether considering LLM's existing knowledge about the task,especially with respect to the output label space can help in a betterdemonstration selection strategy. 
Through extensive experimentation on threetext classification tasks, we find that it is beneficial to not only choosesemantically similar ICL demonstrations but also to choose those demonstrationsthat help resolve the inherent label ambiguity surrounding the test example.Interestingly, we find that including demonstrations that the LLM previouslymis-classified and also fall on the test example's decision boundary, bringsthe most performance gain.",,arXiv,"['cs.cl', 'cs.ir']",, -1114,beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning,"['Mustafa Shukor', 'Alexandre Rame', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2310.00647v1.pdf,2023-10-01,," Following the success of Large Language Models (LLMs), Large MultimodalModels (LMMs), such as the Flamingo model and its subsequent competitors, havestarted to emerge as natural steps towards generalist agents. However,interacting with recent LMMs reveals major limitations that are hardly capturedby the current evaluation benchmarks. Indeed, task performances (e.g., VQAaccuracy) alone do not provide enough clues to understand their realcapabilities, limitations, and to which extent such models are aligned to humanexpectations. To refine our understanding of those flaws, we deviate from thecurrent evaluation paradigm and propose the EvALign-ICL framework, in which we(1) evaluate 8 recent open-source LMMs (based on the Flamingo architecture suchas OpenFlamingo and IDEFICS) on 5 different axes; hallucinations, abstention,compositionality, explainability and instruction following. Our evaluation onthese axes reveals major flaws in LMMs. To efficiently address these problems,and inspired by the success of in-context learning (ICL) in LLMs, (2) weexplore ICL as a solution and study how it affects these limitations. Based onour ICL study, (3) we push ICL further and propose new multimodal ICLapproaches such as; Multitask-ICL, Chain-of-Hindsight-ICL, andSelf-Correcting-ICL. Our findings are as follows; (1) Despite their success,LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICLon LMMs flaws is nuanced; despite its effectiveness for improvedexplainability, abstention, and instruction following, ICL does not improvecompositional abilities, and actually even amplifies hallucinations. (3) Theproposed ICL variants are promising as post-hoc approaches to efficientlytackle some of those flaws. The code is available here:https://evalign-icl.github.io/",,arXiv,"['cs.cv', 'cs.mm']",, -1115,understanding incontext learning in transformers and llms by learning to learn discrete functions,"['Satwik Bhattamishra', 'Arkil Patel', 'Phil Blunsom', 'Varun Kanade']",http://arxiv.org/pdf/2310.03016v1.pdf,2023-10-04,," In order to understand the in-context learning phenomenon, recent works haveadopted a stylized experimental framework and demonstrated that Transformerscan learn gradient-based learning algorithms for various classes of real-valuedfunctions. However, the limitations of Transformers in implementing learningalgorithms, and their ability to learn other forms of algorithms are not wellunderstood. Additionally, the degree to which these capabilities are confinedto attention-based models is unclear. Furthermore, it remains to be seenwhether the insights derived from these stylized settings can be extrapolatedto pretrained Large Language Models (LLMs). 
In this work, we take a steptowards answering these questions by demonstrating the following: (a) On atest-bed with a variety of Boolean function classes, we find that Transformerscan nearly match the optimal learning algorithm for 'simpler' tasks, whiletheir performance deteriorates on more 'complex' tasks. Additionally, we findthat certain attention-free models perform (almost) identically to Transformerson a range of tasks. (b) When provided a teaching sequence, i.e. a set ofexamples that uniquely identifies a function in a class, we show thatTransformers learn more sample-efficiently. Interestingly, our results showthat Transformers can learn to implement two distinct algorithms to solve asingle task, and can adaptively select the more sample-efficient algorithmdepending on the sequence of in-context examples. (c) Lastly, we show thatextant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselineson prediction tasks that are guaranteed to not be in their training set.",,arXiv,"['cs.lg', 'cs.cl']",, -1116,demonstrations are all you need advancing offensive content paraphrasing using incontext learning,"['Anirudh Som', 'Karan Sikka', 'Helen Gent', 'Ajay Divakaran', 'Andreas Kathol', 'Dimitra Vergyri']",http://arxiv.org/pdf/2310.10707v1.pdf,2023-10-16,," Paraphrasing of offensive content is a better alternative to content removaland helps improve civility in a communication environment. Supervisedparaphrasers; however, rely heavily on large quantities of labelled data tohelp preserve meaning and intent. They also retain a large portion of theoffensiveness of the original content, which raises questions on their overallusability. In this paper we aim to assist practitioners in developing usableparaphrasers by exploring In-Context Learning (ICL) with large language models(LLMs), i.e., using a limited number of input-label demonstration pairs toguide the model in generating desired outputs for specific queries. Our studyfocuses on key factors such as -- number and order of demonstrations, exclusionof prompt instruction, and reduction in measured toxicity. We performprincipled evaluation on three datasets, including our proposed Context-AwarePolite Paraphrase dataset, comprising of dialogue-style rude utterances, politeparaphrases, and additional dialogue context. We evaluate our approach usingtwo closed source and one open source LLM. Our results reveal that ICL iscomparable to supervised methods in generation quality, while beingqualitatively better by 25% on human evaluation and attaining lower toxicity by76%. Also, ICL-based paraphrasers only show a slight reduction in performanceeven with just 10% training data.",,arXiv,"['cs.cl', 'cs.ai']",, -1117,pretraining data mixtures enable narrow model selection capabilities in transformer models,"['Steve Yadlowsky', 'Lyric Doshi', 'Nilesh Tripuraneni']",http://arxiv.org/pdf/2311.00871v1.pdf,2023-11-01,," Transformer models, notably large language models (LLMs), have the remarkableability to perform in-context learning (ICL) -- to perform new tasks whenprompted with unseen input-output examples without any explicit model training.In this work, we study how effectively transformers can bridge between theirpretraining data mixture, comprised of multiple distinct task families, toidentify and learn new tasks in-context which are both inside and outside thepretraining distribution. 
Building on previous work, we investigate thisquestion in a controlled setting, where we study transformer models trained onsequences of $(x, f(x))$ pairs rather than natural language. Our empiricalresults show transformers demonstrate near-optimal unsupervised model selectioncapabilities, in their ability to first in-context identify different taskfamilies and in-context learn within them when the task families arewell-represented in their pretraining data. However when presented with tasksor functions which are out-of-domain of their pretraining data, we demonstratevarious failure modes of transformers and degradation of their generalizationfor even simple extrapolation tasks. Together our results highlight that theimpressive ICL abilities of high-capacity sequence models may be more closelytied to the coverage of their pretraining data mixtures than inductive biasesthat create fundamental generalization capabilities.",,arXiv,"['cs.lg', 'cs.cl', 'stat.ml']",, -1118,large language models are fewshot summarizers multiintent comment generation via incontext learning,"['Mingyang Geng', 'Shangwen Wang', 'Dezun Dong', 'Haotian Wang', 'Ge Li', 'Zhi Jin', 'Xiaoguang Mao', 'Xiangke Liao']",http://arxiv.org/pdf/2304.11384v3.pdf,2023-04-22,," Code comment generation aims at generating natural language descriptions fora code snippet to facilitate developers' program comprehension activities.Despite being studied for a long time, a bottleneck for existing approaches isthat given a code snippet, they can only generate one comment while developersusually need to know information from diverse perspectives such as what is thefunctionality of this code snippet and how to use it. To tackle thislimitation, this study empirically investigates the feasibility of utilizinglarge language models (LLMs) to generate comments that can fulfill developers'diverse intents. Our intuition is based on the facts that (1) the code and itspairwise comment are used during the pre-training process of LLMs to build thesemantic connection between the natural language and programming language, and(2) comments in the real-world projects, which are collected for thepre-training, usually contain different developers' intents. We thus postulatethat the LLMs can already understand the code from different perspectives afterthe pre-training. Indeed, experiments on two large-scale datasets demonstratethe rationale of our insights: by adopting the in-context learning paradigm andgiving adequate prompts to the LLM (e.g., providing it with ten or moreexamples), the LLM can significantly outperform a state-of-the-art supervisedlearning approach on generating comments with multiple intents. Results alsoshow that customized strategies for constructing the prompts andpost-processing strategies for reranking the results can both boost the LLM'sperformances, which shed light on future research directions for using LLMs toachieve comment generation.",,arXiv,['cs.se'],, -1119,the inductive bias of incontext learning rethinking pretraining example design,"['Yoav Levine', 'Noam Wies', 'Daniel Jannai', 'Dan Navon', 'Yedid Hoshen', 'Amnon Shashua']",http://arxiv.org/pdf/2110.04541v3.pdf,2021-10-09,," Pretraining Neural Language Models (NLMs) over a large corpus involveschunking the text into training examples, which are contiguous text segments ofsizes processable by the neural architecture. 
We highlight a bias introduced bythis common practice: we prove that the pretrained NLM can model much strongerdependencies between text segments that appeared in the same training example,than it can between text segments that appeared in different training examples.This intuitive result has a twofold role. First, it formalizes the motivationbehind a broad line of recent successful NLM training heuristics, proposed forthe pretraining and fine-tuning stages, which do not necessarily appear relatedat first glance. Second, our result clearly indicates further improvements tobe made in NLM pretraining for the benefit of Natural Language Understandingtasks. As an example, we propose ""kNN-Pretraining"": we show that includingsemantically related non-neighboring sentences in the same pretraining exampleyields improved sentence representations and open domain question answeringabilities. This theoretically motivated degree of freedom for pretrainingexample design indicates new training schemes for self-improvingrepresentations.",,arXiv,"['cs.cl', 'cs.lg']",, -1120,instruction induction from few examples to natural language task descriptions,"['Or Honovich', 'Uri Shaham', 'Samuel R. Bowman', 'Omer Levy']",http://arxiv.org/pdf/2205.10782v1.pdf,2022-05-22,," Large language models are able to perform a task by conditioning on a fewinput-output demonstrations - a paradigm known as in-context learning. We showthat language models can explicitly infer an underlying task from a fewdemonstrations by prompting them to generate a natural language instructionthat fits the examples. To explore this ability, we introduce the instructioninduction challenge, compile a dataset consisting of 24 tasks, and define anovel evaluation metric based on executing the generated instruction. Wediscover that, to a large extent, the ability to generate instructions doesindeed emerge when using a model that is both large enough and aligned tofollow instructions; InstructGPT achieves 65.7% of human performance in ourexecution-based metric, while the original GPT-3 model reaches only 9.8% ofhuman performance. This surprising result suggests that instruction inductionmight be a viable learning paradigm in and of itself, where instead of fittinga set of latent continuous parameters to the data, one searches for the bestdescription in the natural language hypothesis space.",,arXiv,['cs.cl'],, -1121,large language models are few(1)shot table reasoners,['Wenhu Chen'],http://arxiv.org/pdf/2210.06710v2.pdf,2022-10-13,," Recent literature has shown that large language models (LLMs) are generallyexcellent few-shot reasoners to solve text reasoning tasks. However, thecapability of LLMs on table reasoning tasks is yet to be explored. In thispaper, we aim at understanding how well LLMs can perform table-related taskswith few-shot in-context learning. Specifically, we evaluated LLMs on populartable QA and fact verification datasets like WikiTableQuestion, FetaQA,TabFact, and FEVEROUS and found that LLMs are competent at complex reasoningover table structures, though these models are not pre-trained on any tablecorpus. When combined with `chain of thoughts' prompting, LLMs can achieve verystrong performance with only a 1-shot demonstration, even on par with some SoTAmodels. We show that LLMs are even more competent at generating comprehensivelong-form answers on FetaQA than tuned T5-large. 
We further manually studiedthe reasoning chains elicited from LLMs and found that these reasoning chainsare highly consistent with the underlying semantic form. We believe that LLMscan serve as a simple yet generic baseline for future research. The code anddata are released in https://github.com/wenhuchen/TableCoT.",,arXiv,['cs.cl'],, -1122,selfprompting large language models for zeroshot opendomain qa,"['Junlong Li', 'Zhuosheng Zhang', 'Hai Zhao']",http://arxiv.org/pdf/2212.08635v2.pdf,2022-12-16,," Open-Domain Question Answering (ODQA) aims at answering factoid questionswithout explicitly providing specific background documents. In a zero-shotsetting, this task is more challenging since no data is available to traincustomized models like Retriever-Readers. Recently, Large Language Models(LLMs) like GPT-3 have shown their power in zero-shot ODQA with directprompting methods, but these methods are still far from releasing the fullpowerfulness of LLMs only in an implicitly invoking way. In this paper, wepropose a Self-Prompting framework to explicitly utilize the massive knowledgestored in the parameters of LLMs and their strong instruction understandingabilities. Concretely, we prompt LLMs step by step to generate multiple pseudoQA pairs with background passages and explanations from scratch and then usethose generated elements for in-context learning. Experimental results show ourmethod surpasses previous SOTA methods significantly on three widely-used ODQAdatasets, and even achieves comparable performance with some Retriever-Readermodels fine-tuned on full training data.",,arXiv,"['cs.cl', 'cs.ai']",, -1123,ontologically faithful generation of nonplayer character dialogues,"['Nathaniel Weir', 'Ryan Thomas', ""Randolph D'Amore"", 'Kellie Hill', 'Benjamin Van Durme', 'Harsh Jhamtani']",http://arxiv.org/pdf/2212.10618v2.pdf,2022-12-20,," We introduce a language generation task grounded in a popular video gameenvironment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration)requires models to produce trees of dialogue between video game characters thataccurately reflect quest and entity specifications stated in natural language.KNUDGE is constructed from side quest dialogues drawn directly from game dataof Obsidian Entertainment's The Outer Worlds, leading to real-worldcomplexities in generation: (1) dialogues are branching trees as opposed tolinear chains of utterances; (2) utterances must remain faithful to the gamelore -- character personas, backstories, and entity relationships; and (3) adialogue must accurately reveal new quest details to the human player. Wereport results for a set of neural generation models using supervised andin-context learning techniques; we find competent performance but room forfuture work addressing the challenges of creating realistic, game-qualitydialogues.",,arXiv,['cs.cl'],, -1124,batch prompting efficient inference with large language model apis,"['Zhoujun Cheng', 'Jungo Kasai', 'Tao Yu']",http://arxiv.org/pdf/2301.08721v2.pdf,2023-01-19,," Performing inference on large volumes of samples with large language models(LLMs) can be computationally and financially costly in industry and real-worlduse. We propose batch prompting, a simple yet effective prompting approach thatenables the LLM to run inference in batches, instead of one sample at a time.Our method reduces both token and time costs while retaining downstreamperformance. 
We theoretically demonstrate that under a few-shot in-contextlearning setting, the inference costs decrease almost inverse linearly with thenumber of samples in each batch. We extensively validate the effectiveness ofbatch prompting on ten datasets across commonsense QA, arithmetic reasoning,and NLI/NLU: batch prompting significantly~(up to 5x with six samples in batch)reduces the LLM (Codex) inference token and time costs while achieving betteror comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5and GPT-4, we show the benefits of batch prompting also hold. Further analysisshows that the number of samples in each batch and the complexity of tasksaffect its performance. Moreover, batch prompting can be applied acrossdifferent reasoning methods using LLMs. Our code can be found at the sitehttps://github.com/xlang-ai/batch-prompting.",,arXiv,"['cs.cl', 'cs.ai']",, -1125,finding support examples for incontext learning,"['Xiaonan Li', 'Xipeng Qiu']",http://arxiv.org/pdf/2302.13539v3.pdf,2023-02-27,," Additionally, the strong dependency among in-context examples makes it anNP-hard combinatorial optimization problem and enumerating all permutations isinfeasible. Hence we propose LENS, a fiLter-thEN-Search method to tackle thischallenge in two stages: First we filter the dataset to obtain informativein-context examples individually. Specifically, we propose a novel metric,InfoScore, to evaluate the example's in-context informativeness based on thelanguage model's feedback, and further propose a progressive filtering processto filter out uninformative examples. Then we propose diversity-guided examplesearch which iteratively refines and evaluates the selected examplepermutations, to find examples that fully depict the task. The experimentalresults show that LENS significantly outperforms a wide range of baselines.",,arXiv,['cs.cl'],, -1126,incontext instruction learning,"['Seonghyeon Ye', 'Hyeonbin Hwang', 'Sohee Yang', 'Hyeongu Yun', 'Yireun Kim', 'Minjoon Seo']",http://arxiv.org/pdf/2302.14691v1.pdf,2023-02-28,," Instruction learning of Large Language Models (LLMs) has enabled zero-shottask generalization. However, instruction learning has been predominantlyapproached as a fine-tuning problem, including instruction tuning andreinforcement learning from human feedback, where LLMs are multi-taskfine-tuned on various tasks with instructions. In this paper, we present asurprising finding that applying in-context learning to instruction learning,referred to as In-Context Instruction Learning (ICIL), significantly improvesthe zero-shot task generalization performance for both pretrained andinstruction-fine-tuned models. One of the core advantages of ICIL is that ituses a single fixed prompt to evaluate all tasks, which is a concatenation ofcross-task demonstrations. In particular, we demonstrate that the most powerfulinstruction-fine-tuned baseline (text-davinci-003) also benefits from ICIL by9.3%, indicating that the effect of ICIL is complementary to instruction-basedfine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, -1127,selfplanning code generation with large language models,"['Xue Jiang', 'Yihong Dong', 'Lecheng Wang', 'Zheng Fang', 'Qiwei Shang', 'Ge Li', 'Zhi Jin', 'Wenpin Jiao']",http://arxiv.org/pdf/2303.06689v2.pdf,2023-03-12,," Although large language models have demonstrated impressive ability in codegeneration, they are still struggling to address the complicated intentprovided by humans. 
It is widely acknowledged that humans typically employplanning to decompose complex problems and schedule the solution steps prior toimplementation. Thus we introduce planning into code generation to help themodel understand complex intent and reduce the difficulty of problem solving.This paper proposes a self-planning code generation method with large languagemodel, which consists of two phases, namely planning phase and implementationphase. Specifically, in the planning phase, the language model plans out thesolution steps from the intent combined with in-context learning. Then itenters the implementation phase, where the model generates code step by step,guided by the solution steps. The effectiveness of self-planning codegeneration has been rigorously evaluated on multiple code generation datasetsand the results have demonstrated a marked superiority over naive directgeneration approaches with language model. The improvement in performance issubstantial, highlighting the significance of self-planning in code generationtasks.",,arXiv,['cs.se'],, -1128,gpt is becoming a turing machine here are some ways to program it,"['Ana Jojic', 'Zhen Wang', 'Nebojsa Jojic']",http://arxiv.org/pdf/2303.14310v1.pdf,2023-03-25,," We demonstrate that, through appropriate prompting, GPT-3 family of modelscan be triggered to perform iterative behaviours necessary to execute (ratherthan just write or recall) programs that involve loops, including severalpopular algorithms found in computer science curricula or software developerinterviews. We trigger execution and description of Iterations by RegimentingSelf-Attention (IRSA) in one (or a combination) of three ways: 1) Using strongrepetitive structure in an example of an execution path of a target program forone particular input, 2) Prompting with fragments of execution paths, and 3)Explicitly forbidding (skipping) self-attention to parts of the generated text.On a dynamic program execution, IRSA leads to larger accuracy gains thanreplacing the model with the much more powerful GPT-4. IRSA has promisingapplications in education, as the prompts and responses resemble studentassignments in data structures and algorithms classes. Our findings holdimplications for evaluating LLMs, which typically target the in-contextlearning: We show that prompts that may not even cover one full task examplecan trigger algorithmic behaviour, allowing solving problems previously thoughtof as hard for LLMs, such as logical puzzles. Consequently, prompt design playsan even more critical role in LLM performance than previously recognized.",,arXiv,['cs.cl'],, -1129,is chatgpt a highly fluent grammatical error correction system a comprehensive evaluation,"['Tao Fang', 'Shu Yang', 'Kaixin Lan', 'Derek F. Wong', 'Jinpeng Hu', 'Lidia S. Chao', 'Yue Zhang']",http://arxiv.org/pdf/2304.01746v1.pdf,2023-04-04,," ChatGPT, a large-scale language model based on the advanced GPT-3.5architecture, has shown remarkable potential in various Natural LanguageProcessing (NLP) tasks. However, there is currently a dearth of comprehensivestudy exploring its potential in the area of Grammatical Error Correction(GEC). To showcase its capabilities in GEC, we design zero-shotchain-of-thought (CoT) and few-shot CoT settings using in-context learning forChatGPT. Our evaluation involves assessing ChatGPT's performance on fiveofficial test sets in three different languages, along with threedocument-level GEC test sets in English. 
Our experimental results and humanevaluations demonstrate that ChatGPT has excellent error detection capabilitiesand can freely correct errors to make the corrected sentences very fluent,possibly due to its over-correction tendencies and not adhering to theprinciple of minimal edits. Additionally, its performance in non-English andlow-resource settings highlights its potential in multilingual GEC tasks.However, further analysis of various types of errors at the document-level hasshown that ChatGPT cannot effectively correct agreement, coreference, tenseerrors across sentences, and cross-sentence boundary errors.",,arXiv,['cs.cl'],, -1130,a latent space theory for emergent abilities in large language models,['Hui Jiang'],http://arxiv.org/pdf/2304.09960v3.pdf,2023-04-19,," Languages are not created randomly but rather to communicate information.There is a strong association between languages and their underlying meanings,resulting in a sparse joint distribution that is heavily peaked according totheir correlations. Moreover, these peak values happen to match with themarginal distribution of languages due to the sparsity. With the advent of LLMstrained on big data and large models, we can now precisely assess the marginaldistribution of languages, providing a convenient means of exploring the sparsestructures in the joint distribution for effective inferences. In this paper,we categorize languages as either unambiguous or {\epsilon}-ambiguous andpresent quantitative results to demonstrate that the emergent abilities ofLLMs, such as language understanding, in-context learning, chain-of-thoughtprompting, and effective instruction fine-tuning, can all be attributed toBayesian inference on the sparse joint distribution of languages.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1131,"stance detection with supervised, zeroshot, and fewshot applications",['Michael Burnham'],http://arxiv.org/pdf/2305.01723v1.pdf,2023-05-02,," Stance detection is the identification of an author's beliefs about a subjectfrom a document. Researchers widely rely on sentiment analysis to accomplishthis. However, recent research has show that sentiment analysis is only looselycorrelated with stance, if at all. This paper advances methods in text analysisby precisely defining the task of stance detection, providing a generalizedframework for the task, and then presenting three distinct approaches forperforming stance detection: supervised classification, zero-shotclassification with NLI classifiers, and in-context learning. In doing so, Idemonstrate how zero-shot and few-shot language classifiers can replace humanlabelers for a variety of tasks and discuss how their application andlimitations differ from supervised classifiers. Finally, I demonstrate anapplication of zero-shot stance detection by replicating Block Jr et al.(2022).",,arXiv,['cs.cl'],, -1132,wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models,"['John Giorgi', 'Augustin Toma', 'Ronald Xie', 'Sondra S. Chen', 'Kevin R. An', 'Grace X. Zheng', 'Bo Wang']",http://arxiv.org/pdf/2305.02220v2.pdf,2023-05-03,," This paper describes our submission to the MEDIQA-Chat 2023 shared task forautomatic clinical note generation from doctor-patient conversations. We reportresults for two approaches: the first fine-tunes a pre-trained language model(PLM) on the shared task data, and the second uses few-shot in-context learning(ICL) with a large language model (LLM). 
Both achieve high performance asmeasured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second andfirst, respectively, of all submissions to the shared task. Expert humanscrutiny indicates that notes generated via the ICL-based approach with GPT-4are preferred about as often as human-written notes, making it a promising pathtoward automated note generation from doctor-patient conversations.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1133,how good are commercial large language models on african languages,"['Jessica Ojo', 'Kelechi Ogueji']",http://arxiv.org/pdf/2305.06530v1.pdf,2023-05-11,," Recent advancements in Natural Language Processing (NLP) has led to theproliferation of large pretrained language models. These models have been shownto yield good performance, using in-context learning, even on unseen tasks andlanguages. They have also been exposed as commercial APIs as a form oflanguage-model-as-a-service, with great adoption. However, their performance onAfrican languages is largely unknown. We present a preliminary analysis ofcommercial large language models on two tasks (machine translation and textclassification) across eight African languages, spanning different languagefamilies and geographical areas. Our results suggest that commercial languagemodels produce below-par performance on African languages. We also find thatthey perform better on text classification than machine translation. Ingeneral, our findings present a call-to-action to ensure African languages arewell represented in commercial large language models, given their growingpopularity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1134,chainofdictionary prompting elicits translation in large language models,"['Hongyuan Lu', 'Haoyang Huang', 'Dongdong Zhang', 'Haoran Yang', 'Wai Lam', 'Furu Wei']",http://arxiv.org/pdf/2305.06575v3.pdf,2023-05-11,," Large language models (LLMs) have shown surprisingly good performance inmultilingual neural machine translation (MNMT) even when trained withoutparallel data. Yet, despite the fact that the amount of training data isgigantic, they still struggle with translating rare words, particularly forlow-resource languages. Even worse, it is usually unrealistic to retrieverelevant demonstrations for in-context learning with low-resource languages onLLMs, which restricts the practical use of LLMs for translation -- how shouldwe mitigate this problem? To this end, we present a novel method, CoD, whichaugments LLMs with prior knowledge with the chains of multilingual dictionariesfor a subset of input words to elicit translation abilities for LLMs. Extensiveexperiments indicate that augmenting ChatGPT with CoD elicits large gains by upto 13x chrF++ points for MNMT (3.08 to 42.63 for English to Serbian written inCyrillic script) on FLORES-200 full devtest set. We further demonstrate theimportance of chaining the multilingual dictionaries, as well as thesuperiority of CoD to few-shot demonstration for low-resource languages.",,arXiv,['cs.cl'],, -1135,autotrial prompting language models for clinical trial design,"['Zifeng Wang', 'Cao Xiao', 'Jimeng Sun']",http://arxiv.org/pdf/2305.11366v2.pdf,2023-05-19,," Clinical trials are critical for drug development. Constructing theappropriate eligibility criteria (i.e., the inclusion/exclusion criteria forpatient recruitment) is essential for the trial's success. Proper design ofclinical trial protocols should consider similar precedent trials and theireligibility criteria to ensure sufficient patient coverage. 
In this paper, wepresent a method named AutoTrial to aid the design of clinical eligibilitycriteria using language models. It allows (1) controllable generation underinstructions via a hybrid of discrete and neural prompting, (2) scalableknowledge incorporation via in-context learning, and (3) explicit reasoningchains to provide rationales for understanding the outputs. Experiments on over70K clinical trials verify that AutoTrial generates high-quality criteria textsthat are fluent and coherent and with high accuracy in capturing the relevantclinical concepts to the target trial. It is noteworthy that our method, with amuch smaller parameter size, gains around 60% winning rate against the GPT-3.5baselines via human evaluations.",,arXiv,['cs.cl'],, -1136,"how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings","['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2305.11853v2.pdf,2023-05-19,," Large language models (LLMs) with in-context learning have demonstratedremarkable capability in the text-to-SQL task. Previous research has promptedLLMs with various demonstration-retrieval strategies and intermediate reasoningsteps to enhance the performance of LLMs. However, those works often employvaried strategies when constructing the prompt text for text-to-SQL inputs,such as databases and demonstration examples. This leads to a lack ofcomparability in both the prompt constructions and their primary contributions.Furthermore, selecting an effective prompt construction has emerged as apersistent problem for future research. To address this limitation, wecomprehensively investigate the impact of prompt constructions across varioussettings and provide insights for future work.",,arXiv,['cs.cl'],, -1137,factchecking complex claims with programguided reasoning,"['Liangming Pan', 'Xiaobao Wu', 'Xinyuan Lu', 'Anh Tuan Luu', 'William Yang Wang', 'Min-Yen Kan', 'Preslav Nakov']",http://arxiv.org/pdf/2305.12744v1.pdf,2023-05-22,," Fact-checking real-world claims often requires collecting multiple pieces ofevidence and applying complex multi-step reasoning. In this paper, we presentProgram-Guided Fact-Checking (ProgramFC), a novel fact-checking model thatdecomposes complex claims into simpler sub-tasks that can be solved using ashared library of specialized functions. We first leverage the in-contextlearning ability of large language models to generate reasoning programs toguide the verification process. Afterward, we execute the program by delegatingeach sub-task to the corresponding sub-task handler. This process makes ourmodel both explanatory and data-efficient, providing clear explanations of itsreasoning process and requiring minimal training data. We evaluate ProgramFC ontwo challenging fact-checking datasets and show that it outperforms sevenfact-checking baselines across different settings of evidence availability,with explicit output programs that benefit human debugging. Our codes and dataare publicly available at https://github.com/mbzuai-nlp/ProgramFC.",,arXiv,"['cs.cl', 'cs.ai']",, -1138,mailex email event and argument extraction,"['Saurabh Srivastava', 'Gaurav Singh', 'Shou Matsumoto', 'Ali Raz', 'Paulo Costa', 'Joshua Poore', 'Ziyu Yao']",http://arxiv.org/pdf/2305.13469v2.pdf,2023-05-22,," In this work, we present the first dataset, MailEx, for performing eventextraction from conversational email threads. To this end, we first proposed anew taxonomy covering 10 event types and 76 arguments in the email domain. 
Our final dataset includes 1.5K email threads and ~4K emails, which are annotated with totally ~8K event instances. To understand the task challenges, we conducted a series of experiments comparing three types of approaches, i.e., fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot in-context learning. Our results showed that the task of email event extraction is far from being addressed, due to challenges lying in, e.g., extracting non-continuous, shared trigger spans, extracting non-named entity arguments, and modeling the email conversational history. Our work thus suggests more future investigations in this domain-specific event extraction task.",,arXiv,"['cs.cl', 'cs.ai']",, -1139,can chatgpt detect intent evaluating large language models for spoken language understanding,"['Mutian He', 'Philip N. Garner']",http://arxiv.org/pdf/2305.13512v2.pdf,2023-05-22,," Recently, large pretrained language models have demonstrated strong language understanding capabilities. This is particularly reflected in their zero-shot and in-context learning abilities on downstream tasks through prompting. To assess their impact on spoken language understanding (SLU), we evaluate several such models like ChatGPT and OPT of different sizes on multiple benchmarks. We verify the emergent ability unique to the largest models as they can reach intent classification accuracy close to that of supervised models with zero or few shots on various languages given oracle transcripts. By contrast, the results for smaller models fitting a single GPU fall far behind. We note that the error cases often arise from the annotation scheme of the dataset; responses from ChatGPT are still reasonable. We show, however, that the model is worse at slot filling, and its performance is sensitive to ASR errors, suggesting serious challenges for the application of those textual models on SLU.",,arXiv,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",, -1140,logicllm exploring selfsupervised logicenhanced training for large language models,"['Fangkai Jiao', 'Zhiyang Teng', 'Shafiq Joty', 'Bosheng Ding', 'Aixin Sun', 'Zhengyuan Liu', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13718v2.pdf,2023-05-23,," Existing efforts to improve logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks. The development of Large Language Models (LLMs) has demonstrated the capacity of compressing abundant knowledge into a single proxy, enabling them to tackle multiple tasks effectively. Our preliminary experiments, nevertheless, show that LLMs do not show capability on logical reasoning. The performance of LLMs on logical reasoning benchmarks is far behind the existing state-of-the-art baselines. In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training, and activating it via in-context learning, which we termed as LogicLLM. Specifically, we devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to 13 billion. The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive ablation studies to analyze the key factors in designing logic-oriented proxy tasks.",,arXiv,['cs.cl'],, -1141,make a choice!
knowledge base question answering with incontext learning,"['Chuanyuan Tan', 'Yuehe Chen', 'Wenbiao Shao', 'Wenliang Chen']",http://arxiv.org/pdf/2305.13972v1.pdf,2023-05-23,," Question answering over knowledge bases (KBQA) aims to answer factoid questions with a given knowledge base (KB). Due to the large scale of KB, annotated data is impossible to cover all fact schemas in KB, which poses a challenge to the generalization ability of methods that require a sufficient amount of annotated data. Recently, LLMs have shown strong few-shot performance in many NLP tasks. We expect LLM can help existing methods improve their generalization ability, especially in low-resource situations. In this paper, we present McL-KBQA, a framework that incorporates the few-shot ability of LLM into the KBQA method via ICL-based multiple choice and then improves the effectiveness of the QA tasks. Experimental results on two KBQA datasets demonstrate the competitive performance of McL-KBQA with strong improvements in generalization. We expect to explore a new way to QA tasks from KBQA in conjunction with LLM, how to generate answers normatively and correctly with strong generalization.",,arXiv,['cs.cl'],, -1142,ctqscorer combining multiple features for incontext example selection for machine translation,"['Aswanth Kumar', 'Ratish Puduppully', 'Raj Dabre', 'Anoop Kunchukuttan']",http://arxiv.org/pdf/2305.14105v2.pdf,2023-05-23,," Large language models have demonstrated the capability to perform on machine translation when the input is prompted with a few examples (in-context learning). Translation quality depends on various features of the selected examples, such as their quality and relevance, but previous work has predominantly focused on individual features in isolation. In this paper, we propose a general framework for combining different features influencing example selection. We learn a regression model, CTQ Scorer (Contextual Translation Quality), that selects examples based on multiple features in order to maximize the translation quality. On multiple language pairs and language models, we show that CTQ Scorer helps significantly outperform random selection as well as strong single-factor baselines reported in the literature. We also see an improvement of over 2.5 COMET points on average with respect to a strong BM25 retrieval-based baseline.",,arXiv,"['cs.cl', 'cs.ai']",, -1143,empowering llmbased machine translation with cultural awareness,"['Binwei Yao', 'Ming Jiang', 'Diyi Yang', 'Junjie Hu']",http://arxiv.org/pdf/2305.14328v1.pdf,2023-05-23,," Traditional neural machine translation (NMT) systems often fail to translate sentences that contain culturally specific information. Most previous NMT methods have incorporated external cultural knowledge during training, which requires fine-tuning on low-frequency items specific to the culture. Recent in-context learning utilizes lightweight prompts to guide large language models (LLMs) to perform machine translation, however, whether such an approach works in terms of injecting culture awareness into machine translation remains unclear. To this end, we introduce a new data curation pipeline to construct a culturally relevant parallel corpus, enriched with annotations of cultural-specific entities. Additionally, we design simple but effective prompting strategies to assist this LLM-based translation.
Extensive experiments show that our approaches can largely help incorporate cultural knowledge into LLM-based machine translation, outperforming traditional NMT systems in translating cultural-specific sentences.",,arXiv,['cs.cl'],, -1144,selfchecker plugandplay modules for factchecking with large language models,"['Miaoran Li', 'Baolin Peng', 'Zhu Zhang']",http://arxiv.org/pdf/2305.14623v1.pdf,2023-05-24,," Fact-checking is an essential task in NLP that is commonly utilized for validating the factual accuracy of claims. Prior work has mainly focused on fine-tuning pre-trained languages models on specific datasets, which can be computationally intensive and time-consuming. With the rapid development of large language models (LLMs), such as ChatGPT and GPT-3, researchers are now exploring their in-context learning capabilities for a wide range of tasks. In this paper, we aim to assess the capacity of LLMs for fact-checking by introducing Self-Checker, a framework comprising a set of plug-and-play modules that facilitate fact-checking by purely prompting LLMs in an almost zero-shot setting. This framework provides a fast and efficient way to construct fact-checking systems in low-resource environments. Empirical results demonstrate the potential of Self-Checker in utilizing LLMs for fact-checking. However, there is still significant room for improvement compared to SOTA fine-tuned models, which suggests that LLM adoption could be a promising approach for future fact-checking research.",,arXiv,['cs.cl'],, -1145,expertprompting instructing large language models to be distinguished experts,"['Benfeng Xu', 'An Yang', 'Junyang Lin', 'Quan Wang', 'Chang Zhou', 'Yongdong Zhang', 'Zhendong Mao']",http://arxiv.org/pdf/2305.14688v1.pdf,2023-05-24,," The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answer conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96\% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at \url{https://github.com/OFA-Sys/ExpertLLaMA}.",,arXiv,"['cs.cl', 'cs.ai']",, -1146,getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning,"['Tianqing Fang', 'Zhaowei Wang', 'Wenxuan Zhou', 'Hongming Zhang', 'Yangqiu Song', 'Muhao Chen']",http://arxiv.org/pdf/2305.14970v1.pdf,2023-05-24,," Event temporal reasoning aims at identifying the temporal relations between two or more events. However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model. We first systematically define distinct kinds of bias in event temporal reasoning, which include event relation prior bias, tense bias, narrative bias, and dependency bias, as indicators to study knowledge conflicts.
To mitigate such event-related knowledge conflict, we introduce a Counterfactual Data Augmentation based method that can be applied to both Pre-trained Language Models (PLMs) and Large Language Models (LLMs) either as additional training data or demonstrations for In-Context Learning. Experiments suggest the importance of mitigating knowledge conflicts in event temporal reasoning tasks for reducing hallucination and highlight the potential of counterfactual data augmentation for improving model performance.",,arXiv,"['cs.cl', 'cs.ai']",, -1147,boosting crosslingual transferability in multilingual models via incontext learning,"['Sunkyoung Kim', 'Dayeon Ki', 'Yireun Kim', 'Jinsik Lee']",http://arxiv.org/pdf/2305.15233v1.pdf,2023-05-24,," Existing cross-lingual transfer (CLT) prompting methods are only concerned with monolingual demonstration examples in the source language. In this paper, we propose In-CLT, a novel cross-lingual transfer prompting method that leverages both source and target languages to construct the demonstration examples. We conduct comprehensive evaluations on multilingual benchmarks, focusing on question answering tasks. Experiment results show that In-CLT prompt not only improves multilingual models' cross-lingual transferability, but also demonstrates remarkable unseen language generalization ability. In-CLT prompting, in particular, improves model performance by 10 to 20\% points on average when compared to prior cross-lingual transfer approaches. We also observe the surprising performance gain on the other multilingual benchmarks, especially in reasoning tasks. Furthermore, we investigate the relationship between lexical similarity and pre-training corpora in terms of the cross-lingual transfer gap.",,arXiv,"['cs.cl', 'cs.ai']",, -1148,a mechanism for solving relational tasks in transformer language models,"['Jack Merullo', 'Carsten Eickhoff', 'Ellie Pavlick']",http://arxiv.org/pdf/2305.16130v2.pdf,2023-05-25,," A primary criticism towards language models (LMs) is their inscrutability. This paper presents evidence that, despite their size and complexity, LMs sometimes exploit a simple computational mechanism to solve one-to-one relational tasks (e.g., capital_of(Poland)=Warsaw). We investigate a range of language model sizes (from 124M parameters to 176B parameters) in an in-context learning setting, and find that for a variety of tasks (involving capital cities, upper-casing, and past-tensing) a key part of the mechanism reduces to a simple linear update typically applied by the feedforward (FFN) networks. These updates also tend to promote the output of the relation in a content-independent way (e.g., encoding Poland:Warsaw::China:Beijing), revealing a predictable pattern that these models take in solving these tasks. We further show that this mechanism is specific to tasks that require retrieval from pretraining memory, rather than retrieval from local context.
Our results contribute to a growing body of work on the mechanistic interpretability of LLMs, and offer reason to be optimistic that, despite the massive and non-linear nature of the models, the strategies they ultimately use to solve tasks can sometimes reduce to familiar and even intuitive algorithms.",,arXiv,"['cs.cl', 'cs.lg']",, -1149,augmenting large language model translators via translation memories,"['Yongyu Mu', 'Abudurexiti Reheman', 'Zhiquan Cao', 'Yuchun Fan', 'Bei Li', 'Yinqiao Li', 'Tong Xiao', 'Chunliang Zhang', 'Jingbo Zhu']",http://arxiv.org/pdf/2305.17367v1.pdf,2023-05-27,," Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to ``understand'' prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.",,arXiv,['cs.cl'],, -1150,towards explainable conversational recommender systems,"['Shuyu Guo', 'Shuo Zhang', 'Weiwei Sun', 'Pengjie Ren', 'Zhumin Chen', 'Zhaochun Ren']",http://arxiv.org/pdf/2305.18363v1.pdf,2023-05-27,," Explanations in conventional recommender systems have demonstrated benefits in helping the user understand the rationality of the recommendations and improving the system's efficiency, transparency, and trustworthiness. In the conversational environment, multiple contextualized explanations need to be generated, which poses further challenges for explanations. To better measure explainability in conversational recommender systems (CRS), we propose ten evaluation perspectives based on concepts from conventional recommender systems together with the characteristics of CRS. We assess five existing CRS benchmark datasets using these metrics and observe the necessity of improving the explanation quality of CRS. To achieve this, we conduct manual and automatic approaches to extend these dialogues and construct a new CRS dataset, namely Explainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues with over 2,000 high-quality rewritten explanations. We compare two baseline approaches to perform explanation generation based on E-ReDial. Experimental results suggest that models trained on E-ReDial can significantly improve explainability while introducing knowledge into the models can further improve the performance. GPT-3 in the in-context learning setting can generate more realistic and diverse movie descriptions. In contrast, T5 training on E-ReDial can better generate clear reasons for recommendations based on user preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.",,arXiv,"['cs.ir', 'cs.ai']",, -1151,grammar prompting for domainspecific language generation with large language models,"['Bailin Wang', 'Zi Wang', 'Xuezhi Wang', 'Yuan Cao', 'Rif A. Saurous', 'Yoon Kim']",http://arxiv.org/pdf/2305.19234v3.pdf,2023-05-30,," Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples.
However, for generating strings from highly structured languages (e.g., semantic parsing to complex domain-specific languages), it is challenging for the LLM to generalize from just a few exemplars. We propose \emph{grammar prompting}, a simple approach to enable LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus--Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. For inference, the LLM first predicts a BNF grammar given a test input, and then generates the output according to the rules of the grammar. Experiments demonstrate that grammar prompting can enable LLMs to perform competitively on a diverse set of DSL generation tasks, including semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and SMILES-based molecule generation.",,arXiv,"['cs.cl', 'cs.ai']",, -1152,prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models,"['Fengzhu Zeng', 'Wei Gao']",http://arxiv.org/pdf/2306.02569v1.pdf,2023-06-05,," Few-shot or zero-shot fact verification only relies on a few or no labeled training examples. In this paper, we propose a novel method called ProToCo, to \underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be \underline{Co}nsistent, for improving the factuality assessment capability of PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair, ProToCo generates multiple variants of the claim with different relations and frames a simple consistency mechanism as constraints for making compatible predictions across these variants. We update PLMs by using parameter-efficient fine-tuning (PEFT), leading to more accurate predictions in few-shot and zero-shot fact verification tasks. Our experiments on three public verification datasets show that ProToCo significantly outperforms state-of-the-art few-shot fact verification baselines. With a small number of unlabeled instances, ProToCo also outperforms the strong zero-shot learner T0 on zero-shot verification. Compared to large PLMs using in-context learning (ICL) method, ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in both few- and zero-shot settings.",,arXiv,['cs.cl'],, -1153,modular visual question answering via code generation,"['Sanjay Subramanian', 'Medhini Narasimhan', 'Kushal Khangaonkar', 'Kevin Yang', 'Arsha Nagrani', 'Cordelia Schmid', 'Andy Zeng', 'Trevor Darrell', 'Dan Klein']",http://arxiv.org/pdf/2306.05392v1.pdf,2023-06-08,," We present a framework that formulates visual question answering as modular code generation. In contrast to prior work on modular approaches to VQA, our approach requires no additional training and relies on pre-trained language models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA examples used for in-context learning. The generated Python programs invoke and compose the outputs of the visual models using arithmetic and conditional logic. Our approach improves accuracy on the COVR dataset by at least 3% and on the GQA dataset by roughly 2% compared to the few-shot baseline that does not employ code generation.",,arXiv,['cs.cl'],, -1154,disasterresponsegpt large language models for accelerated plan of action development in disaster response scenarios,"['Vinicius G. Goecks', 'Nicholas R.
Waytowich']",http://arxiv.org/pdf/2306.17271v1.pdf,2023-06-29,," The development of plans of action in disaster response scenarios is a time-consuming process. Large Language Models (LLMs) offer a powerful solution to expedite this process through in-context learning. This study presents DisasterResponseGPT, an algorithm that leverages LLMs to generate valid plans of action quickly by incorporating disaster response and planning guidelines in the initial prompt. In DisasterResponseGPT, users input the scenario description and receive a plan of action as output. The proposed method generates multiple plans within seconds, which can be further refined following the user's feedback. Preliminary results indicate that the plans of action developed by DisasterResponseGPT are comparable to human-generated ones while offering greater ease of modification in real-time. This approach has the potential to revolutionize disaster response operations by enabling rapid updates and adjustments during the plan's execution.",,arXiv,"['cs.lg', 'i.2.7; j.7; k.4.0']",, -1155,metareasoning semanticssymbol deconstruction for large language models,"['Yiming Wang', 'Zhuosheng Zhang', 'Rui Wang']",http://arxiv.org/pdf/2306.17820v2.pdf,2023-06-30,," Neural-symbolic methods have shown their effectiveness in enhancing the reasoning abilities of large language models (LLMs). However, existing methods primarily rely on mapping natural languages to more syntactically complete formal languages (e.g., Python and SQL). Those approaches necessitate that reasoning tasks be convertible into programs, which cater more to the computer execution mindset and deviate from human reasoning habits. To expand the real-world applicability and flexibility of symbolic methods, we propose Meta-Reasoning from the scope of linguistics itself. This method empowers LLMs to deconstruct questions and effectively capture more generalized knowledge autonomously. We find that Meta-Reasoning achieves improved in-context learning efficiency, reasoning accuracy, and output stability in six arithmetic and symbolic reasoning tasks. In particular, when applied to symbolic reasoning tasks such as Tracking Shuffled Objects, GPT-3 (text-davinci-002) surpasses the few-shot Chain-of-Thought prompting approach (+37.7%), with 99% accuracy after a single demonstration of Meta-Reasoning.",,arXiv,['cs.cl'],, -1156,reasoning before responding integrating commonsensebased causality explanation for empathetic response generation,"['Yahui Fu', 'Koji Inoue', 'Chenhui Chu', 'Tatsuya Kawahara']",http://arxiv.org/pdf/2308.00085v2.pdf,2023-07-28,," Recent approaches to empathetic response generation try to incorporate commonsense knowledge or reasoning about the causes of emotions to better understand the user's experiences and feelings. However, these approaches mainly focus on understanding the causalities of context from the user's perspective, ignoring the system's perspective. In this paper, we propose a commonsense-based causality explanation approach for diverse empathetic response generation that considers both the user's perspective (user's desires and reactions) and the system's perspective (system's intentions and reactions). We enhance ChatGPT's ability to reason for the system's perspective by integrating in-context learning with commonsense knowledge. Then, we integrate the commonsense-based causality explanation with both ChatGPT and a T5-based model.
Experimental evaluations demonstrate that our method outperforms other comparable methods on both automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",, -1157,jen1 textguided universal music generation with omnidirectional diffusion models,"['Peike Li', 'Boyu Chen', 'Yao Yao', 'Yikai Wang', 'Allen Wang', 'Alex Wang']",http://arxiv.org/pdf/2308.04729v1.pdf,2023-08-09,," Music generation has attracted growing interest with the advancement of deep generative models. However, generating music conditioned on textual descriptions, known as text-to-music, remains challenging due to the complexity of musical structures and high sampling rate requirements. Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization. This paper introduces JEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is a diffusion model incorporating both autoregressive and non-autoregressive training. Through in-context learning, JEN-1 performs various generation tasks including text-guided music generation, music inpainting, and continuation. Evaluations demonstrate JEN-1's superior performance over state-of-the-art methods in text-music alignment and music quality while maintaining computational efficiency. Our demos are available at http://futureverse.com/research/jen/demos/jen1",,arXiv,"['cs.sd', 'cs.ai', 'cs.lg', 'cs.mm', 'eess.as']",, -1158,algorithm of thoughts enhancing exploration of ideas in large language models,"['Bilgehan Sel', 'Ahmad Al-Tawaha', 'Vanshaj Khattar', 'Ruoxi Jia', 'Ming Jin']",http://arxiv.org/pdf/2308.10379v2.pdf,2023-08-20,," Current literature, aiming to surpass the ""Chain-of-Thought"" approach, often resorts to an external modus operandi involving halting, modifying, and then resuming the generation process to boost Large Language Models' (LLMs) reasoning capacities. This mode escalates the number of query requests, leading to increased costs, memory, and computational overheads. Addressing this, we propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through algorithmic reasoning pathways, pioneering a new mode of in-context learning. By employing algorithmic examples, we exploit the innate recurrence dynamics of LLMs, expanding their idea exploration with merely one or a few queries. Our technique outperforms earlier single-query methods and stands on par with a recent multi-query strategy that employs an extensive tree search algorithm. Intriguingly, our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself, hinting at LLM's inherent ability to weave its intuition into optimized searches. We probe into the underpinnings of our method's efficacy and its nuances in application.",,arXiv,"['cs.cl', 'cs.ai']",, -1159,building emotional support chatbots in the era of llms,"['Zhonghua Zheng', 'Lizi Liao', 'Yang Deng', 'Liqiang Nie']",http://arxiv.org/pdf/2308.11584v1.pdf,2023-08-17,," The integration of emotional support into various conversational scenarios presents profound societal benefits, such as social interactions, mental health counseling, and customer service. However, there are unsolved challenges that hinder real-world applications in this field, including limited data availability and the absence of well-accepted model training paradigms. This work endeavors to navigate these challenges by harnessing the capabilities of Large Language Models (LLMs).
We introduce an innovative methodology that synthesizes human insights with the computational prowess of LLMs to curate an extensive emotional support dialogue dataset. Our approach is initiated with a meticulously designed set of dialogues spanning diverse scenarios as generative seeds. By utilizing the in-context learning potential of ChatGPT, we recursively generate an ExTensible Emotional Support dialogue dataset, named ExTES. Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions. An exhaustive assessment of the resultant model showcases its proficiency in offering emotional support, marking a pivotal step in the realm of emotional support bots and paving the way for subsequent research and implementations.",,arXiv,"['cs.cl', 'cs.ai']",, -1160,breaking the bank with chatgpt fewshot text classification for finance,"['Lefteris Loukas', 'Ilias Stogiannidis', 'Prodromos Malakasiotis', 'Stavros Vassos']",http://arxiv.org/pdf/2308.14634v1.pdf,2023-08-28,," We propose the use of conversational GPT models for easy and quick few-shot text classification in the financial domain using the Banking77 dataset. Our approach involves in-context learning with GPT-3.5 and GPT-4, which minimizes the technical expertise required and eliminates the need for expensive GPU computing while yielding quick and accurate results. Additionally, we fine-tune other pre-trained, masked language models with SetFit, a recent contrastive learning technique, to achieve state-of-the-art results both in full-data and few-shot settings. Our findings show that querying GPT-3.5 and GPT-4 can outperform fine-tuned, non-generative models even with fewer examples. However, subscription fees associated with these solutions may be considered costly for small organizations. Lastly, we find that generative models perform better on the given task when shown representative samples selected by a human expert rather than when shown random ones. We conclude that a) our proposed methods offer a practical solution for few-shot tasks in datasets with limited label availability, and b) our state-of-the-art results can inspire future work in the area.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-fin.cp']",, -1161,genderspecific machine translation with large language models,"['Eduardo Sánchez', 'Pierre Andrews', 'Pontus Stenetorp', 'Mikel Artetxe', 'Marta R. Costa-jussà']",http://arxiv.org/pdf/2309.03175v1.pdf,2023-09-06,," Decoder-only Large Language Models (LLMs) have demonstrated potential in machine translation (MT), albeit with performance slightly lagging behind traditional encoder-decoder Neural Machine Translation (NMT) systems. However, LLMs offer a unique advantage: the ability to control the properties of the output through prompts. In this study, we harness this flexibility to explore LLaMa's capability to produce gender-specific translations for languages with grammatical gender. Our results indicate that LLaMa can generate gender-specific translations with competitive accuracy and gender bias mitigation when compared to NLLB, a state-of-the-art multilingual NMT system. Furthermore, our experiments reveal that LLaMa's translations are robust, showing significant performance drops when evaluated against opposite-gender references in gender-ambiguous datasets but maintaining consistency in less ambiguous contexts.
This research provides insights into the potential and challenges of using LLMs for gender-specific translations and highlights the importance of in-context learning to elicit new tasks in LLMs.",,arXiv,['cs.cl'],, -1162,improving open information extraction with large language models a study on demonstration uncertainty,"['Chen Ling', 'Xujiang Zhao', 'Xuchao Zhang', 'Yanchi Liu', 'Wei Cheng', 'Haoyu Wang', 'Zhengzhang Chen', 'Takao Osaki', 'Katsushi Matsuda', 'Haifeng Chen', 'Liang Zhao']",http://arxiv.org/pdf/2309.03433v1.pdf,2023-09-07,," Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text, typically in the form of (subject, relation, object) triples. Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant context from relevant relations and generate structured output due to the restrictions on fine-tuning the model. Second, LLMs generates responses autoregressively based on probability, which makes the predicted relations lack confidence. In this paper, we assess the capabilities of LLMs in improving the OIE task. Particularly, we propose various in-context learning strategies to enhance LLM's instruction-following ability and a demonstration uncertainty quantification module to enhance the confidence of the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively.",,arXiv,['cs.cl'],, -1163,epa easy prompt augmentation on large language models via multiple sources and multiple targets,"['Hongyuan Lu', 'Wai Lam']",http://arxiv.org/pdf/2309.04725v1.pdf,2023-09-09,," Large language models (LLMs) have shown promising performance on various NLP tasks via task prompting. And their performance can be further improved by appending task demonstrations to the head of the prompt. And usually, a better performance can be achieved with more demonstrations. However, asking the users to write the demonstrations can be cumbersome. As a simple yet cost-effective workaround, this paper proposes a novel method called EPA (\textbf{E}asy \textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considers augmenting prompts via demonstrations, we name it EPA as the name EDA is already taken by a well-known NLP method \citep{wei-zou-2019-eda}.} that effectively minimizes user efforts in writing demonstrations while improving the model performance at the same time. EPA achieves these goals by automatically augmenting the demonstrations with multiple sources/targets, where each of them paraphrases each other. This is well motivated as augmenting data via paraphrasing effectively improves neural language models. EPA thus employs paraphrasing as an augmentation method for in-context learning. Extensive experiments indicate that EPA effectively improves both NLU and NLG tasks, covering from natural language inference to machine translation in translating tens of languages.\footnote{Code and data will be released upon publication.}",,arXiv,['cs.cl'],, -1164,converser fewshot conversational dense retrieval with synthetic data generation,"['Chao-Wei Huang', 'Chen-Yu Hsu', 'Tsu-Yuan Hsu', 'Chen-An Li', 'Yun-Nung Chen']",http://arxiv.org/pdf/2309.06748v1.pdf,2023-09-13,," Conversational search provides a natural interface for information retrieval (IR).
Recent approaches have demonstrated promising results in applying dense retrieval to conversational IR. However, training dense retrievers requires large amounts of in-domain paired data. This hinders the development of conversational dense retrievers, as abundant in-domain conversations are expensive to collect. In this paper, we propose CONVERSER, a framework for training conversational dense retrievers with at most 6 examples of in-domain dialogues. Specifically, we utilize the in-context learning capability of large language models to generate conversational queries given a passage in the retrieval corpus. Experimental results on conversational retrieval benchmarks OR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparable performance to fully-supervised models, demonstrating the effectiveness of our proposed framework in few-shot conversational dense retrieval. All source code and generated datasets are available at https://github.com/MiuLab/CONVERSER",,arXiv,"['cs.cl', 'cs.ir']",, -1165,"bridging topic, domain, and language shifts an evaluation of comprehensive outofdistribution scenarios","['Andreas Waldis', 'Iryna Gurevych']",http://arxiv.org/pdf/2309.08316v1.pdf,2023-09-15,," Language models (LMs) excel in in-distribution (ID) scenarios where train and test data are independent and identically distributed. However, their performance often degrades in real-world applications like argument mining. Such degradation happens when new topics emerge, or other text domains and languages become relevant. To assess LMs' generalization abilities in such out-of-distribution (OOD) scenarios, we simulate such distribution shifts by deliberately withholding specific instances for testing, as from the social media domain or the topic Solar Energy. Unlike prior studies focusing on specific shifts and metrics in isolation, we comprehensively analyze OOD generalization. We define three metrics to pinpoint generalization flaws and propose eleven classification tasks covering topic, domain, and language shifts. Overall, we find superior performance of prompt-based fine-tuning, notably when train and test splits primarily differ semantically. Simultaneously, in-context learning is more effective than prompt-based or vanilla fine-tuning for tasks when training data embodies heavy discrepancies in label distribution compared to testing data. This reveals a crucial drawback of gradient-based learning: it biases LMs regarding such structural obstacles.",,arXiv,['cs.cl'],, -1166,fewshot adaptation for parsing contextual utterances with llms,"['Kevin Lin', 'Patrick Xia', 'Hao Fang']",http://arxiv.org/pdf/2309.10168v1.pdf,2023-09-18,," We evaluate the ability of semantic parsers based on large language models (LLMs) to handle contextual utterances. In real-world settings, there typically exists only a limited number of annotated contextual utterances due to annotation cost, resulting in an imbalance compared to non-contextual utterances. Therefore, parsers must adapt to contextual utterances with a few training examples. We examine four major paradigms for doing so in conversational semantic parsing i.e., Parse-with-Utterance-History, Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To facilitate such cross-paradigm comparisons, we construct SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with additional annotations.
Experiments with in-context learning and fine-tuning suggest that Rewrite-then-Parse is the most promising paradigm when holistically considering parsing accuracy, annotation cost, and error types.",,arXiv,['cs.cl'],, -1167,toward unified controllable text generation via regular expression instruction,"['Xin Zheng', 'Hongyu Lin', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2309.10447v2.pdf,2023-09-19,," Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types. However, these approaches often require significant architectural or decoding modifications, making them challenging to apply to additional constraints or resolve different constraint combinations. To address this, our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints. Specifically, our REI supports all popular fine-grained controllable generation constraints, i.e., lexical, positional, and length, as well as their complex combinations, via regular expression-style instructions. Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations. Experiments demonstrate that our straightforward approach yields high success rates and adaptability to various constraints while maintaining competitiveness in automatic metrics and outperforming most previous baselines.",,arXiv,"['cs.cl', 'cs.ai']",, -1168,languageoriented communication with semantic coding and knowledge distillation for texttoimage generation,"['Hyelin Nam', 'Jihong Park', 'Jinho Choi', 'Mehdi Bennis', 'Seong-Lyun Kim']",http://arxiv.org/pdf/2309.11127v1.pdf,2023-09-20,," By integrating recent advances in large language models (LLMs) and generative models into the emerging semantic communication (SC) paradigm, in this article we put forward to a novel framework of language-oriented semantic communication (LSC). In LSC, machines communicate using human language messages that can be interpreted and manipulated via natural language processing (NLP) techniques for SC efficiency. To demonstrate LSC's potential, we introduce three innovative algorithms: 1) semantic source coding (SSC) which compresses a text prompt into its key head words capturing the prompt's syntactic essence while maintaining their appearance order to keep the prompt's context; 2) semantic channel coding (SCC) that improves robustness against errors by substituting head words with their lenghthier synonyms; and 3) semantic knowledge distillation (SKD) that produces listener-customized prompts via in-context learning the listener's language style. In a communication task for progressive text-to-image generation, the proposed methods achieve higher perceptual similarities with fewer transmissions while enhancing robustness in noisy communication channels.",,arXiv,"['eess.sp', 'cs.ai', 'cs.cl']",, -1169,towards effective disambiguation for machine translation with large language models,"['Vivek Iyer', 'Pinzhen Chen', 'Alexandra Birch']",http://arxiv.org/pdf/2309.11668v2.pdf,2023-09-20,," Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation.
Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate ""ambiguous sentences"" - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.",,arXiv,['cs.cl'],, -1170,incontext interference in chatbased large language models,"['Eric Nuertey Coleman', 'Julio Hurtado', 'Vincenzo Lomonaco']",http://arxiv.org/pdf/2309.12727v1.pdf,2023-09-22,," Large language models (LLMs) have had a huge impact on society due to their impressive capabilities and vast knowledge of the world. Various applications and tools have been created that allow users to interact with these models in a black-box scenario. However, one limitation of this scenario is that users cannot modify the internal knowledge of the model, and the only way to add or modify internal knowledge is by explicitly mentioning it to the model during the current interaction. This learning process is called in-context training, and it refers to training that is confined to the user's current session or context. In-context learning has significant applications, but also has limitations that are seldom studied. In this paper, we present a study that shows how the model can suffer from interference between information that continually flows in the context, causing it to forget previously learned knowledge, which can reduce the model's performance. Along with showing the problem, we propose an evaluation benchmark based on the bAbI dataset.",,arXiv,"['cs.ai', 'cs.cl']",, -1171,affect recognition in conversations using large language models,"['Shutong Feng', 'Guangzhi Sun', 'Nurul Lubis', 'Chao Zhang', 'Milica Gašić']",http://arxiv.org/pdf/2309.12881v1.pdf,2023-09-22,," Affect recognition, encompassing emotions, moods, and feelings, plays a pivotal role in human communication. In the realm of conversational artificial intelligence (AI), the ability to discern and respond to human affective cues is a critical factor for creating engaging and empathetic interactions. This study delves into the capacity of large language models (LLMs) to recognise human affect in conversations, with a focus on both open-domain chit-chat dialogues and task-oriented dialogues. Leveraging three diverse datasets, namely IEMOCAP, EmoWOZ, and DAIC-WOZ, covering a spectrum of dialogues from casual conversations to clinical interviews, we evaluated and compared LLMs' performance in affect recognition. Our investigation explores the zero-shot and few-shot capabilities of LLMs through in-context learning (ICL) as well as their model capacities through task-specific fine-tuning.
Additionally, this study takes into account the potential impact of automatic speech recognition (ASR) errors on LLM predictions. With this work, we aim to shed light on the extent to which LLMs can replicate human-like affect recognition capabilities in conversations.",,arXiv,['cs.cl'],, -1172,calibrating llmbased evaluator,"['Yuxuan Liu', 'Tianchi Yang', 'Shaohan Huang', 'Zihan Zhang', 'Haizhen Huang', 'Furu Wei', 'Weiwei Deng', 'Feng Sun', 'Qi Zhang']",http://arxiv.org/pdf/2309.13308v1.pdf,2023-09-23,," Recent advancements in large language models (LLMs) on language modeling and emergent capabilities make them a promising reference-free evaluator of natural language generation quality, and a competent alternative to human evaluation. However, hindered by the closed-source or high computational demand to host and tune, there is a lack of practice to further calibrate an off-the-shelf LLM-based evaluator towards better human alignment. In this work, we propose AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate and align an LLM-based evaluator toward human preference. Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels. Then, an initial set of scoring criteria is drafted by the language model itself, leveraging in-context learning on different few-shot examples. To further calibrate this set of criteria, we select the best performers and re-draft them with self-refinement. Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration. Our comprehensive qualitative analysis conveys insightful intuitions and observations on the essence of effective scoring criteria.",,arXiv,['cs.cl'],, -1173,mededit model editing for medical question answering with external knowledge bases,"['Yucheng Shi', 'Shaochen Xu', 'Zhengliang Liu', 'Tianming Liu', 'Xiang Li', 'Ninghao Liu']",http://arxiv.org/pdf/2309.16035v1.pdf,2023-09-27,," Large Language Models (LLMs), although powerful in general domains, often perform poorly on domain-specific tasks like medical question answering (QA). Moreover, they tend to function as ""black-boxes,"" making it challenging to modify their behavior. Addressing this, our study delves into model editing utilizing in-context learning, aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then we incorporate them into the query prompt for the LLM. Focusing on medical QA using the MedQA-SMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM. Notably, our edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. This work underscores the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of black-box LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, -1174,towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method,"['Xuan Zhang', 'Wei Gao']",http://arxiv.org/pdf/2310.00305v1.pdf,2023-09-30,," While large pre-trained language models (LLMs) have shown their impressive capabilities in various NLP tasks, they are still under-explored in the misinformation domain.
In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that only with 4-shot demonstration examples, the performance of several prompting methods can be comparable with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to separate a claim into several subclaims and then verify each of them via multiple questions-answering steps progressively. Experiment results on two public misinformation datasets show that HiSS prompting outperforms state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.",,arXiv,['cs.cl'],, -1175,fool your (vision and) language model with embarrassingly simple permutations,"['Yongshuo Zong', 'Tingyang Yu', 'Bingchen Zhao', 'Ruchika Chavhan', 'Timothy Hospedales']",http://arxiv.org/pdf/2310.01651v1.pdf,2023-10-02,," Large language and vision-language models are rapidly being deployed in practice thanks to their impressive capabilities in instruction following, in-context learning, and so on. This raises an urgent need to carefully analyse their robustness so that stakeholders can understand if and when such models are trustworthy enough to be relied upon in any given application. In this paper, we highlight a specific vulnerability in popular models, namely permutation sensitivity in multiple-choice question answering (MCQA). Specifically, we show empirically that popular models are vulnerable to adversarial permutation in answer sets for multiple-choice prompting, which is surprising as models should ideally be as invariant to prompt permutation as humans are. These vulnerabilities persist across various model sizes, and exist in very recent language and vision-language models. Code is available at \url{https://github.com/ys-zong/FoolyourVLLMs}.",,arXiv,['cs.lg'],, -1176,improving automatic vqa evaluation using large language models,"['Oscar Mañas', 'Benno Krojer', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2310.02567v1.pdf,2023-10-04,," 8 years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the IID evaluation setting. However, our community is undergoing a shift towards open-ended generative models and OOD evaluation. In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate the proposed metric better correlates with human judgment compared to existing metrics across several VQA models and benchmarks. We hope wide adoption of our metric will contribute to better estimating the research progress on the VQA task.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, -1177,a languageagent approach to formal theoremproving,"['Amitayush Thakur', 'Yeming Wen', 'Swarat Chaudhuri']",http://arxiv.org/pdf/2310.04353v1.pdf,2023-10-06,," Language agents, which use a large language model (LLM) capable of in-context learning to interact with an external environment, have recently emerged as a promising approach to control tasks.
We present the first language-agent approach to formal theorem-proving. Our method, COPRA, uses a high-capacity, black-box LLM (GPT-4) as part of a policy for a stateful backtracking search. During the search, the policy can select proof tactics and retrieve lemmas and definitions from an external database. Each selected tactic is executed in the underlying proof framework, and the execution feedback is used to build the prompt for the next policy invocation. The search also tracks selected information from its history and uses it to reduce hallucinations and unnecessary LLM queries. We evaluate COPRA on the miniF2F benchmark for Lean and a set of Coq tasks from the Compcert project. On these benchmarks, COPRA is significantly better than one-shot invocations of GPT-4, as well as state-of-the-art models fine-tuned on proof data, at finding correct proofs quickly.",,arXiv,"['cs.lg', 'cs.ai', 'cs.lo', 'cs.pl']",, -1178,guideline learning for incontext information extraction,"['Chaoxu Pang', 'Yixuan Cao', 'Qiang Ding', 'Ping Luo']",http://arxiv.org/pdf/2310.05066v2.pdf,2023-10-08,," Large language models (LLMs) can perform a new task by merely conditioning on task instructions and a few input-output examples, without optimizing any parameters. This is called In-Context Learning (ICL). In-context Information Extraction (IE) has recently garnered attention in the research community. However, the performance of In-context IE generally lags behind the state-of-the-art supervised expert models. We highlight a key reason for this shortfall: underspecified task description. The limited-length context struggles to thoroughly express the intricate IE task instructions and various edge cases, leading to misalignment in task comprehension with humans. In this paper, we propose a Guideline Learning (GL) framework for In-context IE which reflectively learns and follows guidelines. During the learning phrase, GL automatically synthesizes a set of guidelines based on a few error cases, and during inference, GL retrieves helpful guidelines for better ICL. Moreover, we propose a self-consistency-based active learning method to enhance the efficiency of GL. Experiments on event extraction and relation extraction show that GL can significantly improve the performance of in-context IE.",,arXiv,"['cs.cl', 'cs.lg']",, -1179,harnessing the power of large language models for empathetic response generation empirical investigations and improvements,"['Yushan Qian', 'Wei-Nan Zhang', 'Ting Liu']",http://arxiv.org/pdf/2310.05140v1.pdf,2023-10-08,," Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of a helpful AI. Previous approaches are mainly based on fine small-scale language models. With the advent of ChatGPT, the application effect of large language models (LLMs) in this field has attracted great attention. This work empirically investigates the performance of LLMs in generating empathetic responses and proposes three improvement methods of semantically similar in-context learning, two-stage interactive generation, and combination with the knowledge base. Extensive experiments show that LLMs can significantly benefit from our proposed methods and is able to achieve state-of-the-art performance in both automatic and human evaluations.
Additionally, we explore the possibility of GPT-4 simulating human evaluators.",,arXiv,"['cs.cl', 'cs.ai']",, -1180,selective demonstrations for crossdomain texttosql,"['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2310.06302v1.pdf,2023-10-10,," Large language models (LLMs) with in-context learning have demonstrated impressive generalization capabilities in the cross-domain text-to-SQL task, without the use of in-domain annotations. However, incorporating in-domain demonstration examples has been found to greatly enhance LLMs' performance. In this paper, we delve into the key factors within in-domain examples that contribute to the improvement and explore whether we can harness these benefits without relying on in-domain annotations. Based on our findings, we propose a demonstration selection framework ODIS which utilizes both out-of-domain examples and synthetically generated in-domain examples to construct demonstrations. By retrieving demonstrations from hybrid sources, ODIS leverages the advantages of both, showcasing its effectiveness compared to baseline methods that rely on a single data source. Furthermore, ODIS outperforms state-of-the-art approaches on two cross-domain text-to-SQL datasets, with improvements of 1.1 and 11.8 points in execution accuracy, respectively.",,arXiv,['cs.cl'],, -1181,jailbreak and guard aligned language models with only few incontext demonstrations,"['Zeming Wei', 'Yifei Wang', 'Yisen Wang']",http://arxiv.org/pdf/2310.06387v1.pdf,2023-10-10,," Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged. In this paper, we explore the power of In-Context Learning (ICL) in manipulating the alignment ability of LLMs. We find that by providing just few in-context demonstrations without fine-tuning, LLMs can be manipulated to increase or decrease the probability of jailbreaking, i.e. answering malicious prompts. Based on these observations, we propose In-Context Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding aligned language model purposes. ICA crafts malicious contexts to guide models in generating harmful outputs, while ICD enhances model robustness by demonstrations of rejecting to answer harmful prompts. Our experiments show the effectiveness of ICA and ICD in increasing or reducing the success rate of adversarial jailbreaking attacks. Overall, we shed light on the potential of ICL to influence LLM behavior and provide a new perspective for enhancing the safety and alignment of LLMs.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cr']",, -1182,a search for prompts generating structured answers from contracts,"['Adam Roegiest', 'Radha Chitta', 'Jonathan Donnelly', 'Maya Lash', 'Alexandra Vtyurina', 'François Longtin']",http://arxiv.org/pdf/2310.10141v1.pdf,2023-10-16,," In many legal processes being able to action on the concrete implication of a legal question can be valuable to automating human review or signalling certain conditions (e.g., alerts around automatic renewal). To support such tasks, we present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause. After showing that unstructured generative question answering can have questionable outcomes for such a task, we discuss our exploration methodology for legal question answering prompts using OpenAI's \textit{GPT-3.5-Turbo} and provide a summary of insights.
Using insights gleaned from our qualitative experiences, we compare our proposed template prompts against a common semantic matching approach and find that our prompt templates are far more accurate despite being less reliable in the exact response return. With some additional tweaks to prompts and the use of in-context learning, we are able to further improve the performance of our proposed strategy while maximizing the reliability of responses as best we can.",,arXiv,['cs.cv'],, -1183,large language models meet openworld intent discovery and recognition an evaluation of chatgpt,"['Xiaoshuai Song', 'Keqing He', 'Pei Wang', 'Guanting Dong', 'Yutao Mou', 'Jingang Wang', 'Yunsen Xian', 'Xunliang Cai', 'Weiran Xu']",http://arxiv.org/pdf/2310.10176v1.pdf,2023-10-16,," The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial to task-oriented dialogue (TOD) systems. Previous methods address them by fine-tuning discriminative models. Recently, although some studies have been exploring the application of large language models (LLMs) represented by ChatGPT to various downstream tasks, it is still unclear for the ability of ChatGPT to discover and incrementally extent OOD intents. In this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT exhibits consistent advantages under zero-shot settings, but is still at a disadvantage compared to fine-tuned models. More deeply, through a series of analytical experiments, we summarize and discuss the challenges faced by LLMs including clustering, domain-specific understanding, and cross-domain in-context learning scenarios. Finally, we provide empirical guidance for future directions to address these challenges.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1184,moconvq unified physicsbased motion control via scalable discrete representations,"['Heyuan Yao', 'Zhenhua Song', 'Yuyang Zhou', 'Tenglong Ao', 'Baoquan Chen', 'Libin Liu']",http://arxiv.org/pdf/2310.10198v2.pdf,2023-10-16,," In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations. Building upon vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning, our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resultant motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications.
Wedemonstrate the versatility of MoConVQ through several applications: universaltracking control from various motion sources, interactive character controlwith latent motion representations using supervised learning, physics-basedmotion generation from natural language descriptions using the GPT framework,and, most interestingly, seamless integration with large language models (LLMs)with in-context learning to tackle complex and abstract tasks.",,arXiv,"['cs.cv', 'cs.gr']",, -1185,semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking,"['Yuxiang Wu', 'Guanting Dong', 'Weiran Xu']",http://arxiv.org/pdf/2310.10520v2.pdf,2023-10-16,," Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiringand annotating task-oriented dialogues, which can be time consuming and costly.However, DST extends beyond simple slot-filling and requires effective updatingstrategies for tracking dialogue state as conversations progress. In thispaper, we propose ParsingDST, a new In-Context Learning (ICL) method, tointroduce additional intricate updating strategies in zero-shot DST. Ourapproach reformulates the DST task by leveraging powerful Large Language Models(LLMs) and translating the original dialogue text to JSON through semanticparsing as an intermediate state. We also design a novel framework thatincludes more modules to ensure the effectiveness of updating strategies in thetext-to-JSON process. Experimental results demonstrate that our approachoutperforms existing zero-shot DST methods on MultiWOZ, exhibiting significantimprovements in Joint Goal Accuracy (JGA) and slot accuracy compared toexisting ICL methods.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1186,mastering the task of open information extraction with large language models and consistent reasoning environment,"['Ji Qi', 'Kaixuan Ji', 'Xiaozhi Wang', 'Jifan Yu', 'Kaisheng Zeng', 'Lei Hou', 'Juanzi Li', 'Bin Xu']",http://arxiv.org/pdf/2310.10590v1.pdf,2023-10-16,," Open Information Extraction (OIE) aims to extract objective structuredknowledge from natural texts, which has attracted growing attention to builddedicated models with human experience. As the large language models (LLMs)have exhibited remarkable in-context learning capabilities, a question arisesas to whether the task of OIE can be effectively tackled with this paradigm? Inthis paper, we explore solving the OIE problem by constructing an appropriatereasoning environment for LLMs. Specifically, we first propose a method toeffectively estimate the discrepancy of syntactic distribution between a LLMand test samples, which can serve as correlation evidence for preparingpositive demonstrations. Upon the evidence, we introduce a simple yet effectivemechanism to establish the reasoning environment for LLMs on specific tasks.Without bells and whistles, experimental results on the standard CaRB benchmarkdemonstrate that our $6$-shot approach outperforms state-of-the-art supervisedmethod, achieving an $55.3$ $F_1$ score. Further experiments on TACRED andACE05 show that our method can naturally generalize to other informationextraction tasks, resulting in improvements of $5.7$ and $6.8$ $F_1$ scores,respectively.",,arXiv,['cs.cl'],, -1187,exploring automatic evaluation methods based on a decoderbased llm for text generation,"['Tomohito Kasahara', 'Daisuke Kawahara']",http://arxiv.org/pdf/2310.11026v1.pdf,2023-10-17,," Automatic evaluation of text generation is essential for improving theaccuracy of generation tasks. 
In light of the current trend towardsincreasingly larger decoder-based language models, we investigate automaticevaluation methods based on such models for text generation. This papercompares various methods, including tuning with encoder-based models and largelanguage models under equal conditions, on two different tasks, machinetranslation evaluation and semantic textual similarity, in two languages,Japanese and English. Experimental results show that compared to the tunedencoder-based models, the tuned decoder-based models perform poorly. Theanalysis of the causes for this suggests that the decoder-based models focus onsurface word sequences and do not capture meaning. It is also revealed thatin-context learning of very large decoder-based models such as ChatGPT makes itdifficult to identify fine-grained semantic differences.",,arXiv,['cs.cl'],, -1188,learning from red teaming gender bias provocation and mitigation in large language models,"['Hsuan Su', 'Cheng-Chu Cheng', 'Hua Farn', 'Shachi H Kumar', 'Saurav Sahay', 'Shang-Tse Chen', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.11079v1.pdf,2023-10-17,," Recently, researchers have made considerable improvements in dialogue systemswith the progress of large language models (LLMs) such as ChatGPT and GPT-4.These LLM-based chatbots encode the potential biases while retainingdisparities that can harm humans during interactions. The traditional biasesinvestigation methods often rely on human-written test cases. However, thesetest cases are usually expensive and limited. In this work, we propose afirst-of-its-kind method that automatically generates test cases to detectLLMs' potential gender bias. We apply our method to three well-known LLMs andfind that the generated test cases effectively identify the presence of biases.To address the biases identified, we propose a mitigation strategy that usesthe generated test cases as demonstrations for in-context learning tocircumvent the need for parameter fine-tuning. The experimental results showthat LLMs generate fairer responses with the proposed approach.",,arXiv,"['cs.cl', 'cs.ai']",, -1189,evaluating llms for privilegeescalation scenarios,"['Andreas Happe', 'Aaron Kaplan', 'Jürgen Cito']",http://arxiv.org/pdf/2310.11409v2.pdf,2023-10-17,," Penetration testing, an essential component of cybersecurity, allowsorganizations to proactively identify and remediate vulnerabilities in theirsystems, thus bolstering their defense mechanisms against potentialcyberattacks. One recent advancement in the realm of penetration testing is theutilization of Language Models (LLMs). We explore the intersection of LLMs andpenetration testing to gain insight into their capabilities and challenges inthe context of privilige escalation. We create an automated Linuxprivilege-escalation benchmark utilizing local virtual machines. We introducean LLM-guided privilege-escalation tool designed for evaluating different LLMsand prompt strategies against our benchmark. We analyze the impact of differentprompt designs, the benefits of in-context learning, and the advantages ofoffering high-level guidance to LLMs. 
We discuss challenging areas for LLMs,including maintaining focus during testing, coping with errors, and finallycomparing them with both stochastic parrots as well as with human hackers.",,arXiv,"['cs.cr', 'cs.ai']",, -1190,measuring pointwise $mathcal{v}$usable information incontextly,"['Sheng Lu', 'Shan Chen', 'Yingya Li', 'Danielle Bitterman', 'Guergana Savova', 'Iryna Gurevych']",http://arxiv.org/pdf/2310.12300v1.pdf,2023-10-18,," In-context learning (ICL) is a new learning paradigm that has gainedpopularity along with the development of large language models. In this work,we adapt a recently proposed hardness metric, pointwise $\mathcal{V}$-usableinformation (PVI), to an in-context version (in-context PVI). Compared to theoriginal PVI, in-context PVI is more efficient in that it requires only a fewexemplars and does not require fine-tuning. We conducted a comprehensiveempirical analysis to evaluate the reliability of in-context PVI. Our findingsindicate that in-context PVI estimates exhibit similar characteristics to theoriginal PVI. Specific to the in-context setting, we show that in-context PVIestimates remain consistent across different exemplar selections and numbers ofshots. The variance of in-context PVI estimates across different exemplarselections is insignificant, which suggests that in-context PVI are stable.Furthermore, we demonstrate how in-context PVI can be employed to identifychallenging instances. Our work highlights the potential of in-context PVI andprovides new insights into the capabilities of ICL.",,arXiv,['cs.cl'],, -1191,attack prompt generation for red teaming and defending large language models,"['Boyi Deng', 'Wenjie Wang', 'Fuli Feng', 'Yang Deng', 'Qifan Wang', 'Xiangnan He']",http://arxiv.org/pdf/2310.12505v1.pdf,2023-10-19,," Large language models (LLMs) are susceptible to red teaming attacks, whichcan induce LLMs to generate harmful content. Previous research constructsattack prompts via manual or automatic methods, which have their ownlimitations on construction cost and quality. To address these issues, wepropose an integrated approach that combines manual and automatic methods toeconomically generate high-quality attack prompts. Specifically, consideringthe impressive capabilities of newly emerged LLMs, we propose an attackframework to instruct LLMs to mimic human-generated prompts through in-contextlearning. Furthermore, we propose a defense framework that fine-tunes victimLLMs through iterative interactions with the attack framework to enhance theirsafety against red teaming attacks. Extensive experiments on different LLMsvalidate the effectiveness of our proposed attack and defense frameworks.Additionally, we release a series of attack prompts datasets named SAP withvarying sizes, facilitating the safety evaluation and enhancement of more LLMs.Our code and dataset is available on https://github.com/Aatrox103/SAP .",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",, -1192,are structural concepts universal in transformer language models towards interpretable crosslingual generalization,"['Ningyu Xu', 'Qi Zhang', 'Jingting Ye', 'Menghan Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2310.12794v1.pdf,2023-10-19,," Large language models (LLMs) have exhibited considerable cross-lingualgeneralization abilities, whereby they implicitly transfer knowledge acrosslanguages. However, the transfer is not equally successful for all languages,especially for low-resource ones, which poses an ongoing challenge. 
It isunclear whether we have reached the limits of implicit cross-lingualgeneralization and if explicit knowledge transfer is viable. In this paper, weinvestigate the potential for explicitly aligning conceptual correspondencebetween languages to enhance cross-lingual generalization. Using the syntacticaspect of language as a testbed, our analyses of 43 languages reveal a highdegree of alignability among the spaces of structural concepts within eachlanguage for both encoder-only and decoder-only LLMs. We then propose ameta-learning-based method to learn to align conceptual spaces of differentlanguages, which facilitates zero-shot and few-shot generalization in conceptclassification and also offers insights into the cross-lingual in-contextlearning phenomenon. Experiments on syntactic analysis tasks show that ourapproach achieves competitive results with state-of-the-art methods and narrowsthe performance gap between languages, particularly benefiting those withlimited resources.",,arXiv,['cs.cl'],, -1193,mind the instructions a holistic evaluation of consistency and interactions in promptbased learning,"['Lucas Weber', 'Elia Bruni', 'Dieuwke Hupkes']",http://arxiv.org/pdf/2310.13486v1.pdf,2023-10-20,," Finding the best way of adapting pre-trained language models to a task is abig challenge in current NLP. Just like the previous generation of task-tunedmodels (TT), models that are adapted to tasks via in-context-learning (ICL) arerobust in some setups but not in others. Here, we present a detailed analysisof which design choices cause instabilities and inconsistencies in LLMpredictions. First, we show how spurious correlations between inputdistributions and labels -- a known issue in TT models -- form only a minorproblem for prompted models. Then, we engage in a systematic, holisticevaluation of different factors that have been found to influence predictionsin a prompting setup. We test all possible combinations of a range of factorson both vanilla and instruction-tuned (IT) LLMs of different scale andstatistically analyse the results to show which factors are the mostinfluential, interactive or stable. Our results show which factors can be usedwithout precautions and which should be avoided or handled with care in mostsettings.",,arXiv,"['cs.cl', 'cs.ai']",, -1194,a simple baseline for knowledgebased visual question answering,"['Alexandros Xenos', 'Themos Stafylakis', 'Ioannis Patras', 'Georgios Tzimiropoulos']",http://arxiv.org/pdf/2310.13570v2.pdf,2023-10-20,," This paper is on the problem of Knowledge-Based Visual Question Answering(KB-VQA). Recent works have emphasized the significance of incorporating bothexplicit (through external databases) and implicit (through LLMs) knowledge toanswer questions requiring external knowledge effectively. A common limitationof such approaches is that they consist of relatively complicated pipelines andoften heavily rely on accessing GPT-3 API. Our main contribution in this paperis to propose a much simpler and readily reproducible pipeline which, in anutshell, is based on efficient in-context learning by prompting LLaMA (1 and2) using question-informative captions as contextual information. Contrary torecent approaches, our method is training-free, does not require access toexternal databases or APIs, and yet achieves state-of-the-art accuracy on theOK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies tounderstand important aspects of our method. 
Our code is publicly available athttps://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA",,arXiv,['cs.cv'],, -1195,an incontext schema understanding method for knowledge base question answering,"['Yantao Liu', 'Zixuan Li', 'Xiaolong Jin', 'Long Bai', 'Saiping Guan', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2310.14174v1.pdf,2023-10-22,," The Knowledge Base Question Answering (KBQA) task aims to answer naturallanguage questions based on a given knowledge base. As a kind of common methodfor this task, semantic parsing-based ones first convert natural languagequestions to logical forms (e.g., SPARQL queries) and then execute them onknowledge bases to get answers. Recently, Large Language Models (LLMs) haveshown strong abilities in language understanding and may be adopted as semanticparsers in such kinds of methods. However, in doing so, a great challenge forLLMs is to understand the schema of knowledge bases. Therefore, in this paper,we propose an In-Context Schema Understanding (ICSU) method for facilitatingLLMs to be used as a semantic parser in KBQA. Specifically, ICSU adopts theIn-context Learning mechanism to instruct LLMs to generate SPARQL queries withexamples. In order to retrieve appropriate examples from annotatedquestion-query pairs, which contain comprehensive schema information related toquestions, ICSU explores four different retrieval strategies. Experimentalresults on the largest KBQA benchmark, KQA Pro, show that ICSU with all thesestrategies outperforms that with a random retrieval strategy significantly(from 12\% to 78.76\% in accuracy).",,arXiv,['cs.cl'],, -1196,from chaos to clarity claim normalization to empower factchecking,"['Megha Sundriyal', 'Tanmoy Chakraborty', 'Preslav Nakov']",http://arxiv.org/pdf/2310.14338v2.pdf,2023-10-22,," With the rise of social media, users are exposed to many misleading claims.However, the pervasive noise inherent in these posts presents a challenge inidentifying precise and prominent claims that require verification. Extractingthe important claims from such posts is arduous and time-consuming, yet it isan underexplored problem. Here, we aim to bridge this gap. We introduce a noveltask, Claim Normalization (aka ClaimNorm), which aims to decompose complex andnoisy social media posts into more straightforward and understandable forms,termed normalized claims. We propose CACN, a pioneering approach that leverageschain-of-thought and claim check-worthiness estimation, mimicking humanreasoning processes, to comprehend intricate claims. Moreover, we capitalize onthe in-context learning capabilities of large language models to provideguidance and to improve claim normalization. To evaluate the effectiveness ofour proposed model, we meticulously compile a comprehensive real-world dataset,CLAN, comprising more than 6k instances of social media posts alongside theirrespective normalized claims. Our experiments demonstrate that CACN outperformsseveral baselines across various evaluation measures. Finally, our rigorouserror analysis validates CACN's capabilities and pitfalls.",,arXiv,"['cs.cl', 'cs.ai']",, -1197,retrievalaugmented chainofthought in semistructured domains,"['Vaibhav Mavi', 'Abulhair Saparov', 'Chen Zhao']",http://arxiv.org/pdf/2310.14435v1.pdf,2023-10-22,," Applying existing question answering (QA) systems to specialized domains likelaw and finance presents challenges that necessitate domain expertise. 
Althoughlarge language models (LLMs) have shown impressive language comprehension andin-context learning capabilities, their inability to handle very longinputs/contexts is well known. Tasks specific to these domains need significantbackground knowledge, leading to contexts that can often exceed the maximumlength that existing LLMs can process. This study explores leveraging thesemi-structured nature of legal and financial data to efficiently retrieverelevant context, enabling the use of LLMs for domain-specialized QA. Theresulting system outperforms contemporary models and also provides usefulexplanations for the answers, encouraging the integration of LLMs into legaland financial NLP systems for future research.",,arXiv,"['cs.cl', 'cs.ai']",, -1198,statistical depth for ranking and characterizing transformerbased text embeddings,"['Parker Seegmiller', 'Sarah Masud Preum']",http://arxiv.org/pdf/2310.15010v1.pdf,2023-10-23,," The popularity of transformer-based text embeddings calls for betterstatistical tools for measuring distributions of such embeddings. One such toolwould be a method for ranking texts within a corpus by centrality, i.e.assigning each text a number signifying how representative that text is of thecorpus as a whole. However, an intrinsic center-outward ordering ofhigh-dimensional text representations is not trivial. A statistical depth is afunction for ranking k-dimensional objects by measuring centrality with respectto some observed k-dimensional distribution. We adopt a statistical depth tomeasure distributions of transformer-based text embeddings, transformer-basedtext embedding (TTE) depth, and introduce the practical use of this depth forboth modeling and distributional inference in NLP pipelines. We first defineTTE depth and an associated rank sum test for determining whether two corporadiffer significantly in embedding space. We then use TTE depth for the task ofin-context learning prompt selection, showing that this approach reliablyimproves performance over statistical baseline approaches across six textclassification tasks. Finally, we use TTE depth and the associated rank sumtest to characterize the distributions of synthesized and human-generatedcorpora, showing that five recent synthetic data augmentation processes cause ameasurable distributional shift away from associated human-generated text.",,arXiv,['cs.cl'],, -1199,the bla benchmark investigating basic language abilities of pretrained multimodal models,"['Xinyi Chen', 'Raquel Fernández', 'Sandro Pezzelle']",http://arxiv.org/pdf/2310.15061v1.pdf,2023-10-23,," Despite the impressive performance achieved by pre-trainedlanguage-and-vision models in downstream tasks, it remains an open questionwhether this reflects a proper understanding of image-text interaction. In thiswork, we explore to what extent they handle basic linguistic constructions --active-passive voice, coordination, and relative clauses -- that even preschoolchildren can typically master. We present BLA, a novel, automaticallyconstructed benchmark to evaluate multimodal models on these Basic LanguageAbilities. We show that different types of Transformer-based systems, such asCLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting,in line with previous findings. Our experiments, in particular, show that mostof the tested models only marginally benefit when fine-tuned or prompted withconstruction-specific samples. Yet, the generative BLIP2 shows promisingtrends, especially in an in-context learning setting. 
This opens the door tousing BLA not only as an evaluation benchmark but also to improve models' basiclanguage abilities.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv']",, -1200,llmintheloop leveraging large language model for thematic analysis,"['Shih-Chieh Dai', 'Aiping Xiong', 'Lun-Wei Ku']",http://arxiv.org/pdf/2310.15100v1.pdf,2023-10-23,," Thematic analysis (TA) has been widely used for analyzing qualitative data inmany disciplines and fields. To ensure reliable analysis, the same piece ofdata is typically assigned to at least two human coders. Moreover, to producemeaningful and useful analysis, human coders develop and deepen their datainterpretation and coding over multiple iterations, making TA labor-intensiveand time-consuming. Recently the emerging field of large language models (LLMs)research has shown that LLMs have the potential replicate human-like behaviorin various tasks: in particular, LLMs outperform crowd workers ontext-annotation tasks, suggesting an opportunity to leverage LLMs on TA. Wepropose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conductTA with in-context learning (ICL). This framework provides the prompt to framediscussions with a LLM (e.g., GPT-3.5) to generate the final codebook for TA.We demonstrate the utility of this framework using survey datasets on theaspects of the music listening experience and the usage of a password manager.Results of the two case studies show that the proposed framework yields similarcoding quality to that of human coders but reduces TA's labor and time demands.",,arXiv,['cs.cl'],, -1201,ui layout generation with llms guided by ui grammar,"['Yuwen Lu', 'Ziang Tong', 'Qinyi Zhao', 'Chengzhi Zhang', 'Toby Jia-Jun Li']",http://arxiv.org/pdf/2310.15455v1.pdf,2023-10-24,," The recent advances in Large Language Models (LLMs) have stimulated interestamong researchers and industry professionals, particularly in their applicationto tasks concerning mobile user interfaces (UIs). This position paperinvestigates the use of LLMs for UI layout generation. Central to ourexploration is the introduction of UI grammar -- a novel approach we proposedto represent the hierarchical structure inherent in UI screens. The aim of thisapproach is to guide the generative capacities of LLMs more effectively andimprove the explainability and controllability of the process. Initialexperiments conducted with GPT-4 showed the promising capability of LLMs toproduce high-quality user interfaces via in-context learning. Furthermore, ourpreliminary comparative study suggested the potential of the grammar-basedapproach in improving the quality of generative results in specific aspects.",,arXiv,"['cs.hc', 'cs.ai']",, -1202,poe process of elimination for multiple choice reasoning,"['Chenkai Ma', 'Xinya Du']",http://arxiv.org/pdf/2310.15575v1.pdf,2023-10-24,," Language models (LMs) are capable of conducting in-context learning formultiple choice reasoning tasks, but the options in these tasks are treatedequally. As humans often first eliminate wrong options before picking the finalcorrect answer, we argue a similar two-step strategy can make LMs better atthese tasks. To this end, we present the Process of Elimination (POE), atwo-step scoring method. In the first step, POE scores each option, andeliminates seemingly wrong options. In the second step, POE masks these wrongoptions, and makes the final prediction from the remaining options. 
Zero-shotexperiments on 8 reasoning tasks illustrate the effectiveness of POE, and afollowing analysis finds our method to be especially performant on logicalreasoning tasks. We further analyze the effect of masks, and show that POEapplies to few-shot settings and large language models (LLMs) like ChatGPT.",,arXiv,['cs.cl'],, -1203,webwise web interface control and sequential exploration with large language models,"['Heyi Tao', 'Sethuraman T V', 'Michal Shlapentokh-Rothman', 'Derek Hoiem']",http://arxiv.org/pdf/2310.16042v2.pdf,2023-10-24,," The paper investigates using a Large Language Model (LLM) to automaticallyperform web software tasks using click, scroll, and text input operations.Previous approaches, such as reinforcement learning (RL) or imitation learning,are inefficient to train and task-specific. Our method uses filtered DocumentObject Model (DOM) elements as observations and performs tasks step-by-step,sequentially generating small programs based on the current observations. Weuse in-context learning, either benefiting from a single manually providedexample, or an automatically generated example based on a successful zero-shottrial. We evaluate the proposed method on the MiniWob++ benchmark. With onlyone in-context example, our WebWISE method achieves similar or betterperformance than other methods that require many demonstrations or trials.",,arXiv,"['cs.cl', 'cs.ai']",, -1204,from heuristic to analytic cognitively motivated strategies for coherent physical commonsense reasoning,"['Zheyuan Zhang', 'Shane Storks', 'Fengyuan Hu', 'Sungryull Sohn', 'Moontae Lee', 'Honglak Lee', 'Joyce Chai']",http://arxiv.org/pdf/2310.18364v1.pdf,2023-10-24,," Pre-trained language models (PLMs) have shown impressive performance invarious language tasks. However, they are prone to spurious correlations, andoften generate illusory information. In real-world applications, PLMs shouldjustify decisions with formalized, coherent reasoning chains, but thischallenge remains under-explored. Cognitive psychology theorizes that humansare capable of utilizing fast and intuitive heuristic thinking to makedecisions based on past experience, then rationalizing the decisions throughslower and deliberative analytic reasoning. We incorporate these interlinkeddual processes in fine-tuning and in-context learning with PLMs, applying themto two language understanding tasks that require coherent physical commonsensereasoning. We show that our proposed Heuristic-Analytic Reasoning (HAR)strategies drastically improve the coherence of rationalizations for modeldecisions, yielding state-of-the-art results on Tiered Reasoning for IntuitivePhysics (TRIP). We also find that this improved coherence is a direct result ofmore faithful attention to relevant language context in each step of reasoning.Our findings suggest that human-like reasoning strategies can effectivelyimprove the coherence and reliability of PLM reasoning.",,arXiv,"['cs.cl', 'cs.ai']",, -1205,the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities,"['Yuxiang Zhou', 'Jiazheng Li', 'Yanzheng Xiang', 'Hanqi Yan', 'Lin Gui', 'Yulan He']",http://arxiv.org/pdf/2311.00237v1.pdf,2023-11-01,," Understanding emergent abilities, such as in-context learning (ICL) andchain-of-thought (CoT) prompting in large language models (LLMs), is of utmostimportance. 
This importance stems not only from the better utilization of thesecapabilities across various tasks, but also from the proactive identificationand mitigation of potential risks, including concerns of truthfulness, bias,and toxicity, that may arise alongside these capabilities. In this paper, wepresent a thorough survey on the interpretation and analysis of emergentabilities of LLMs. First, we provide a concise introduction to the backgroundand definition of emergent abilities. Then, we give an overview of advancementsfrom two perspectives: 1) a macro perspective, emphasizing studies on themechanistic interpretability and delving into the mathematical foundationsbehind emergent abilities; and 2) a micro-perspective, concerning studies thatfocus on empirical interpretability by examining factors associated with theseabilities. We conclude by highlighting the challenges encountered andsuggesting potential avenues for future research. We believe that our workestablishes the basis for further exploration into the interpretation ofemergent abilities.",,arXiv,['cs.cl'],, -1206,narrowing the gap between zero and fewshot machine translation by matching styles,"['Weiting Tan', 'Haoran Xu', 'Lingfeng Shen', 'Shuyue Stella Li', 'Kenton Murray', 'Philipp Koehn', 'Benjamin Van Durme', 'Yunmo Chen']",http://arxiv.org/pdf/2311.02310v1.pdf,2023-11-04,," Large language models trained primarily in a monolingual setting havedemonstrated their ability to generalize to machine translation using zero- andfew-shot examples with in-context learning. However, even though zero-shottranslations are relatively good, there remains a discernible gap comparingtheir performance with the few-shot setting. In this paper, we investigate thefactors contributing to this gap and find that this gap can largely be closed(for about 70%) by matching the writing styles of the target corpus.Additionally, we explore potential approaches to enhance zero-shot baselineswithout the need for parallel demonstration examples, providing valuableinsights into how these methods contribute to improving translation metrics.",,arXiv,['cs.cl'],, -1207,instructed language models with retrievers are powerful entity linkers,"['Zilin Xiao', 'Ming Gong', 'Jie Wu', 'Xingyao Zhang', 'Linjun Shou', 'Jian Pei', 'Daxin Jiang']",http://arxiv.org/pdf/2311.03250v1.pdf,2023-11-06,," Generative approaches powered by large language models (LLMs) havedemonstrated emergent abilities in tasks that require complex reasoningabilities. Yet the generative nature still makes the generated content sufferfrom hallucinations, thus unsuitable for entity-centric tasks like entitylinking (EL) requiring precise entity predictions over a large knowledge base.We present Instructed Generative Entity Linker (INSGENEL), the first approachthat enables casual language models to perform entity linking over knowledgebases. Several methods to equip language models with EL capability wereproposed in this work, including (i) a sequence-to-sequence training ELobjective with instruction-tuning, (ii) a novel generative EL framework basedon a light-weight potential mention retriever that frees the model from heavyand non-parallelizable decoding, achieving 4$\times$ speedup without compromiseon linking metrics. INSGENEL outperforms previous generative alternatives with+6.8 F1 points gain on average, also with a huge advantage in training dataefficiency and training compute consumption. 
In addition, our skillfullyengineered in-context learning (ICL) framework for EL still lags behindINSGENEL significantly, reaffirming that the EL task remains a persistenthurdle for general LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, -1208,metalearning via language model incontext tuning,"['Yanda Chen', 'Ruiqi Zhong', 'Sheng Zha', 'George Karypis', 'He He']",http://arxiv.org/pdf/2110.07814v2.pdf,2021-10-15,," The goal of meta-learning is to learn to adapt to a new task with only a fewlabeled examples. To tackle this problem in NLP, we propose $\textit{in-contexttuning}$, which recasts adaptation and prediction as a simple sequenceprediction problem: to form the input sequence, we concatenate the taskinstruction, the labeled examples, and the target input to predict; tometa-train the model to learn from in-context examples, we fine-tune apre-trained language model (LM) to predict the target label from the inputsequences on a collection of tasks. We benchmark our method on two collections of text classification tasks: LAMAand BinaryClfs. Compared to first-order MAML which adapts the model withgradient descent, our method better leverages the inductive bias of LMs toperform pattern matching, and outperforms MAML by an absolute $6\%$ AUC ROCscore on BinaryClfs, with increasing advantage w.r.t. model size. Compared tonon-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuningdirectly learns to learn from in-context examples. On BinaryClfs, in-contexttuning improves the average AUC-ROC score by an absolute $10\%$, and reducesthe variance with respect to example ordering by 6x and example choices by 2x.",,arXiv,"['cs.cl', 'cs.lg']",, -1209,glam efficient scaling of language models with mixtureofexperts,"['Nan Du', 'Yanping Huang', 'Andrew M. Dai', 'Simon Tong', 'Dmitry Lepikhin', 'Yuanzhong Xu', 'Maxim Krikun', 'Yanqi Zhou', 'Adams Wei Yu', 'Orhan Firat', 'Barret Zoph', 'Liam Fedus', 'Maarten Bosma', 'Zongwei Zhou', 'Tao Wang', 'Yu Emma Wang', 'Kellie Webster', 'Marie Pellat', 'Kevin Robinson', 'Kathleen Meier-Hellstern', 'Toju Duke', 'Lucas Dixon', 'Kun Zhang', 'Quoc V Le', 'Yonghui Wu', 'Zhifeng Chen', 'Claire Cui']",http://arxiv.org/pdf/2112.06905v2.pdf,2021-12-13,," Scaling language models with more data, compute and parameters has drivensignificant progress in natural language processing. For example, thanks toscaling, GPT-3 was able to achieve strong results on in-context learning tasks.However, training these large dense models requires significant amounts ofcomputing resources. In this paper, we propose and develop a family of languagemodels named GLaM (Generalist Language Model), which uses a sparsely activatedmixture-of-experts architecture to scale the model capacity while alsoincurring substantially less training cost compared to dense variants. Thelargest GLaM has 1.2 trillion parameters, which is approximately 7x larger thanGPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires halfof the computation flops for inference, while still achieving better overallzero-shot and one-shot performance across 29 NLP tasks.",,arXiv,['cs.cl'],, -1210,can language models learn from explanations in context,"['Andrew K. Lampinen', 'Ishita Dasgupta', 'Stephanie C. Y. Chan', 'Kory Matthewson', 'Michael Henry Tessler', 'Antonia Creswell', 'James L. McClelland', 'Jane X. Wang', 'Felix Hill']",http://arxiv.org/pdf/2204.02329v4.pdf,2022-04-05,," Language Models (LMs) can perform new tasks by adapting to a few in-contextexamples. 
For humans, explanations that connect examples to task principles canimprove learning. We therefore investigate whether explanations of few-shotexamples can help LMs. We annotate questions from 40 challenging tasks withanswer explanations, and various matched control explanations. We evaluate howdifferent types of explanations, instructions, and controls affect zero- andfew-shot performance. We analyze these results using statistical multilevelmodeling techniques that account for the nested dependencies among conditions,tasks, prompts, and models. We find that explanations can improve performance-- even without tuning. Furthermore, explanations hand-tuned for performance ona small validation set offer substantially larger benefits, and building aprompt by selecting examples and explanations together substantially improvesperformance over selecting examples alone. Finally, even untuned explanationsoutperform carefully matched controls, suggesting that the benefits are due tothe link between an example and its explanation, rather than lower-levelfeatures. However, only large models benefit. In summary, explanations cansupport the in-context learning of large LMs on challenging tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1211,automatic short math answer grading via incontext metalearning,"['Mengxue Zhang', 'Sami Baral', 'Neil Heffernan', 'Andrew Lan']",http://arxiv.org/pdf/2205.15219v3.pdf,2022-05-30,," Automatic short answer grading is an important research direction in theexploration of how to use artificial intelligence (AI)-based tools to improveeducation. Current state-of-the-art approaches use neural language models tocreate vectorized representations of students responses, followed byclassifiers to predict the score. However, these approaches have several keylimitations, including i) they use pre-trained language models that are notwell-adapted to educational subject domains and/or student-generated text andii) they almost always train one model per question, ignoring the linkageacross a question and result in a significant model storage problem due to thesize of advanced language models. In this paper, we study the problem ofautomatic short answer grading for students' responses to math questions andpropose a novel framework for this task. First, we use MathBERT, a variant ofthe popular language model BERT adapted to mathematical content, as our basemodel and fine-tune it for the downstream task of student response grading.Second, we use an in-context learning approach that provides scoring examplesas input to the language model to provide additional context information andpromote generalization to previously unseen questions. We evaluate ourframework on a real-world dataset of student responses to open-ended mathquestions and show that our framework (often significantly) outperformsexisting approaches, especially for new questions that are not seen duringtraining.",,arXiv,"['cs.cl', 'cs.lg']",, -1212,large language models can implement policy iteration,"['Ethan Brooks', 'Logan Walls', 'Richard L. Lewis', 'Satinder Singh']",http://arxiv.org/pdf/2210.03821v2.pdf,2022-10-07,," This work presents In-Context Policy Iteration, an algorithm for performingReinforcement Learning (RL), in-context, using foundation models. 
While theapplication of foundation models to RL has received considerable attention,most approaches rely on either (1) the curation of expert demonstrations(either through manual design or task-specific pretraining) or (2) adaptationto the task of interest using gradient methods (either fine-tuning or trainingof adapter layers). Both of these techniques have drawbacks. Collectingdemonstrations is labor-intensive, and algorithms that rely on them do notoutperform the experts from which the demonstrations were derived. All gradienttechniques are inherently slow, sacrificing the ""few-shot"" quality that madein-context learning attractive to begin with. In this work, we present analgorithm, ICPI, that learns to perform RL tasks without expert demonstrationsor gradients. Instead we present a policy-iteration method in which the promptcontent is the entire locus of learning. ICPI iteratively updates the contentsof the prompt from which it derives its policy through trial-and-errorinteraction with an RL environment. In order to eliminate the role ofin-weights learning (on which approaches like Decision Transformer relyheavily), we demonstrate our algorithm using Codex, a language model with noprior knowledge of the domains on which we evaluate it.",,arXiv,['cs.lg'],, -1213,transformers generalize differently from information stored in context vs in weights,"['Stephanie C. Y. Chan', 'Ishita Dasgupta', 'Junkyung Kim', 'Dharshan Kumaran', 'Andrew K. Lampinen', 'Felix Hill']",http://arxiv.org/pdf/2210.05675v2.pdf,2022-10-11,," Transformer models can use two fundamentally different kinds of information:information stored in weights during training, and information provided``in-context'' at inference time. In this work, we show that transformersexhibit different inductive biases in how they represent and generalize fromthe information in these two sources. In particular, we characterize whetherthey generalize via parsimonious rules (rule-based generalization) or viadirect comparison with observed examples (exemplar-based generalization). Thisis of important practical consequence, as it informs whether to encodeinformation in weights or in context, depending on how we want models to usethat information. In transformers trained on controlled stimuli, we find thatgeneralization from weights is more rule-based whereas generalization fromcontext is largely exemplar-based. In contrast, we find that in transformerspre-trained on natural language, in-context learning is significantlyrule-based, with larger models showing more rule-basedness. We hypothesise thatrule-based generalization from in-context information might be an emergentconsequence of large-scale training on language, which has sparse rule-likestructure. Using controlled stimuli, we verify that transformers pretrained ondata containing sparse rule-like structure exhibit more rule-basedgeneralization.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1214,large language models meet harry potter a bilingual dataset for aligning dialogue agents with characters,"['Nuo Chen', 'Yan Wang', 'Haiyun Jiang', 'Deng Cai', 'Yuhan Li', 'Ziyang Chen', 'Longyue Wang', 'Jia Li']",http://arxiv.org/pdf/2211.06869v4.pdf,2022-11-13,," In recent years, Dialogue-style Large Language Models (LLMs) such as ChatGPTand GPT4 have demonstrated immense potential in constructing open-domaindialogue agents. 
However, aligning these agents with specific characters orindividuals remains a considerable challenge due to the complexities ofcharacter representation and the lack of comprehensive annotations. In thispaper, we introduce the Harry Potter Dialogue (HPD) dataset, designed toadvance the study of dialogue agents and character alignment. The datasetencompasses all dialogue sessions (in both English and Chinese) from the HarryPotter series and is annotated with vital background information, includingdialogue scenes, speakers, character relationships, and attributes. Theseextensive annotations may empower LLMs to unlock character-driven dialoguecapabilities. Furthermore, it can serve as a universal benchmark for evaluatinghow well can a LLM aligning with a specific character. We benchmark LLMs on HPDusing both fine-tuning and in-context learning settings. Evaluation resultsreveal that although there is substantial room for improvement in generatinghigh-quality, character-aligned responses, the proposed dataset is valuable inguiding models toward responses that better align with the character of HarryPotter.",,arXiv,"['cs.cl', 'cs.ai']",, -1215,retrievalaugmented multimodal language modeling,"['Michihiro Yasunaga', 'Armen Aghajanyan', 'Weijia Shi', 'Rich James', 'Jure Leskovec', 'Percy Liang', 'Mike Lewis', 'Luke Zettlemoyer', 'Wen-tau Yih']",http://arxiv.org/pdf/2211.12561v2.pdf,2022-11-22,," Recent multimodal models such as DALL-E and CM3 have achieved remarkableprogress in text-to-image and image-to-text generation. However, these modelsstore all learned knowledge (e.g., the appearance of the Eiffel Tower) in themodel parameters, requiring increasingly larger models and training data tocapture more knowledge. To integrate knowledge in a more scalable and modularway, we propose a retrieval-augmented multimodal model, which enables a basemultimodal model (generator) to refer to relevant text and images fetched by aretriever from external memory (e.g., documents on the web). Specifically, forthe retriever, we use a pretrained CLIP, and for the generator, we train a CM3Transformer on the LAION dataset. Our resulting model, namedRetrieval-Augmented CM3 (RA-CM3), is the first multimodal model that canretrieve and generate both text and images. We show that RA-CM3 significantlyoutperforms baseline multimodal models such as DALL-E and CM3 on both image andcaption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), whilerequiring much less compute for training (<30% of DALL-E). Moreover, we showthat RA-CM3 exhibits novel capabilities, such as faithful image generation andmultimodal in-context learning (e.g., image generation from demonstrations).",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -1216,"operationalizing specifications, in addition to test sets for evaluating constrained generative models","['Vikas Raunak', 'Matt Post', 'Arul Menezes']",http://arxiv.org/pdf/2212.00006v1.pdf,2022-11-19,," In this work, we present some recommendations on the evaluation ofstate-of-the-art generative models for constrained generation tasks. Theprogress on generative models has been rapid in recent years. These large-scalemodels have had three impacts: firstly, the fluency of generation in bothlanguage and vision modalities has rendered common average-case evaluationmetrics much less useful in diagnosing system errors. 
Secondly, the samesubstrate models now form the basis of a number of applications, driven both bythe utility of their representations as well as phenomena such as in-contextlearning, which raise the abstraction level of interacting with such models.Thirdly, the user expectations around these models and their feted publicreleases have made the technical challenge of out of domain generalization muchless excusable in practice. Subsequently, our evaluation methodologies haven'tadapted to these changes. More concretely, while the associated utility andmethods of interacting with generative models have expanded, a similarexpansion has not been observed in their evaluation practices. In this paper,we argue that the scale of generative models could be exploited to raise theabstraction level at which evaluation itself is conducted and providerecommendations for the same. Our recommendations are based on leveragingspecifications as a powerful instrument to evaluate generation quality and arereadily applicable to a variety of tasks.",,arXiv,"['cs.hc', 'cs.cl', 'cs.cv', 'cs.cy']",, -1217,language model acceptability judgements are not always robust to context,"['Koustuv Sinha', 'Jon Gauthier', 'Aaron Mueller', 'Kanishka Misra', 'Keren Fuentes', 'Roger Levy', 'Adina Williams']",http://arxiv.org/pdf/2212.08979v1.pdf,2022-12-18,," Targeted syntactic evaluations of language models ask whether models showstable preferences for syntactically acceptable content over minimal-pairunacceptable inputs. Most targeted syntactic evaluation datasets ask models tomake these judgements with just a single context-free sentence as input. Thisdoes not match language models' training regime, in which input sentences arealways highly contextualized by the surrounding corpus. This mismatch raises animportant question: how robust are models' syntactic judgements in differentcontexts? In this paper, we investigate the stability of language models'performance on targeted syntactic evaluations as we vary properties of theinput context: the length of the context, the types of syntactic phenomena itcontains, and whether or not there are violations of grammaticality. We findthat model judgements are generally robust when placed in randomly sampledlinguistic contexts. However, they are substantially unstable for contextscontaining syntactic structures matching those in the critical test content.Among all tested models (GPT-2 and five variants of OPT), we significantlyimprove models' judgements by providing contexts with matching syntacticstructures, and conversely significantly worsen them using unacceptablecontexts with matching but violated syntactic structures. This effect isamplified by the length of the context, except for unrelated inputs. We showthat these changes in model performance are not explainable by simple featuresmatching the context and the test inputs, such as lexical overlap anddependency overlap. This sensitivity to highly specific syntactic features ofthe context can only be explained by the models' implicit in-context learningabilities.",,arXiv,"['cs.cl', 'cs.lg']",, -1218,lowresource authorship style transfer can nonfamous authors be imitated,"['Ajay Patel', 'Nicholas Andrews', 'Chris Callison-Burch']",http://arxiv.org/pdf/2212.08986v2.pdf,2022-12-18,," Authorship style transfer involves altering text to match the style of atarget author whilst preserving the original meaning. 
Existing unsupervisedapproaches like STRAP have largely focused on style transfer to target authorswith many examples of their writing style in books, speeches, or otherpublished works. This high-resource training data requirement (often greaterthan 100,000 words) makes these approaches primarily useful for style transferto published authors, politicians, or other well-known figures and authorshipstyles, while style transfer to non-famous authors has not been well-studied.We introduce the \textit{low-resource authorship style transfer} task, a morechallenging class of authorship style transfer where only a limited amount oftext in the target author's style may exist. In our experiments, wespecifically choose source and target authors from Reddit and style transfertheir Reddit posts, limiting ourselves to just 16 posts (on average ~500 words)of the target author's style. Style transfer accuracy is typically measured byhow often a classifier or human judge will classify an output as written by thetarget author. Recent authorship representations models excel at authorshipidentification even with just a few writing samples, making automaticevaluation of this task possible for the first time through evaluation metricswe propose. Our results establish an in-context learning technique we developas the strongest baseline, though we find current approaches do not yet achievemastery of this challenging task. We release our data and implementations toencourage further investigation.",,arXiv,['cs.cl'],, -1219,training trajectories of language models across scales,"['Mengzhou Xia', 'Mikel Artetxe', 'Chunting Zhou', 'Xi Victoria Lin', 'Ramakanth Pasunuru', 'Danqi Chen', 'Luke Zettlemoyer', 'Ves Stoyanov']",http://arxiv.org/pdf/2212.09803v3.pdf,2022-12-19,," Scaling up language models has led to unprecedented performance gains, butlittle is understood about how the training dynamics change as models getlarger. How do language models of different sizes learn during pre-training?Why do larger language models demonstrate more desirable behaviors? In thispaper, we analyze the intermediate training checkpoints of differently sizedOPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-tokenprediction, sequence-level generation, and downstream tasks. We find that 1) ata given perplexity and independent of model sizes, a similar subset of trainingtokens see the most significant reduction in loss, with the rest stagnating orshowing double-descent behavior; 2) early in training, all models learn toreduce the perplexity of grammatical sequences that contain hallucinations,with small models halting at this suboptimal distribution and larger oneseventually learning to assign these sequences lower probabilities; 3)perplexity is a strong predictor of in-context learning performance on 74multiple-choice tasks from BIG-Bench, and this holds independent of the modelsize. Together, these results show that perplexity is more predictive of modelbehaviors than model size or training computation.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1220,dialog2api taskoriented dialogue with api description and example programs,"['Raphael Shu', 'Elman Mansimov', 'Tamer Alkhouli', 'Nikolaos Pappas', 'Salvatore Romeo', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang', 'Dan Roth']",http://arxiv.org/pdf/2212.09946v1.pdf,2022-12-20,," Functionality and dialogue experience are two important factors oftask-oriented dialogue systems. 
Conventional approaches with closed schema(e.g., conversational semantic parsing) often fail as both the functionalityand dialogue experience are strongly constrained by the underlying schema. Weintroduce a new paradigm for task-oriented dialogue - Dialog2API - to greatlyexpand the functionality and provide seamless dialogue experience. Theconversational model interacts with the environment by generating and executingprograms triggering a set of pre-defined APIs. The model also manages thedialogue policy and interact with the user through generating appropriatenatural language responses. By allowing generating free-form programs,Dialog2API supports composite goals by combining different APIs, whereasunrestricted program revision provides natural and robust dialogue experience.To facilitate Dialog2API, the core model is provided with API documents, anexecution environment and optionally some example dialogues annotated withprograms. We propose an approach tailored for the Dialog2API, where thedialogue states are represented by a stack of programs, with most recentlymentioned program on the top of the stack. Dialog2API can work with manyapplication scenarios such as software automation and customer service. In thispaper, we construct a dataset for AWS S3 APIs and present evaluation results ofin-context learning baselines.",,arXiv,['cs.cl'],, -1221,hint hypernetwork instruction tuning for efficient zero & fewshot generalisation,"['Hamish Ivison', 'Akshita Bhagia', 'Yizhong Wang', 'Hannaneh Hajishirzi', 'Matthew Peters']",http://arxiv.org/pdf/2212.10315v2.pdf,2022-12-20,," Recent NLP models have shown the remarkable ability to effectively generalise`zero-shot' to new tasks using only natural language instructions as guidance.However, many of these approaches suffer from high computational costs due totheir reliance on concatenating lengthy instructions with every input example,resulting in costly reprocessing of the instruction. To avoid this, weintroduce Hypernetworks for INstruction Tuning (HINT), which convert taskinstructions and examples into parameter-efficient modules inserted into anunderlying model using a pretrained text encoder, eliminating the need toinclude instructions in the model input. The hypernetwork in HINT also producesan encoded instruction, which we concatenate with encoded inputs duringdecoding to further improve performance. HINT models outperform strongstate-of-the-art baselines by over 10% when controlling for compute (measuredin FLOPs). By converting instructions into modules, HINT models can effectivelydisregard the length of instructions and few-shot example inputs in terms ofcompute usage. As a result, HINT can enhance its performance by up to 25% byincorporating additional few-shot data, while utilizing only up to 5% morecompute. This combines the strengths of parameter-efficient fine-tuning andin-context learning.",,arXiv,['cs.cl'],, -1222,parallel context windows for large language models,"['Nir Ratner', 'Yoav Levine', 'Yonatan Belinkov', 'Ori Ram', 'Inbal Magar', 'Omri Abend', 'Ehud Karpas', 'Amnon Shashua', 'Kevin Leyton-Brown', 'Yoav Shoham']",http://arxiv.org/pdf/2212.10947v3.pdf,2022-12-21,," When applied to processing long text, Large Language Models (LLMs) arelimited by their context window. Existing efforts to address this limitationinvolve training specialized architectures, and cannot be easily applied tooff-the-shelf LLMs. 
We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training. The key to the approach is to carve a long context into chunks (``windows''), restrict the attention mechanism to apply only within each window, and re-use the positional embeddings across the windows. Our main results test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters, and show substantial improvements for tasks with diverse input and output spaces. We show additional benefits in other settings where long context windows may be beneficial: multi-hop questions and retrieval-augmented question answering with multiple retrieved documents. Our results highlight Parallel Context Windows as a promising method for applying off-the-shelf LLMs in a range of settings that require long text sequences. We make our code publicly available at https://github.com/ai21labs/parallel-context-windows.",,arXiv,['cs.cl'],, -1223,distinguishability calibration to incontext learning,"['Hongjing Li', 'Hanqi Yan', 'Yanran Li', 'Li Qian', 'Yulan He', 'Lin Gui']",http://arxiv.org/pdf/2302.06198v3.pdf,2023-02-13,," Recent years have witnessed increasing interests in prompt-based learning in which models can be trained on only a few annotated instances, making them suitable in low-resource settings. When using prompt-based learning for text classification, the goal is to use a pre-trained language model (PLM) to predict a missing token in a pre-defined template given an input text, which can be mapped to a class label. However, PLMs built on the transformer architecture tend to generate similar output embeddings, making it difficult to discriminate between different class labels. The problem is further exacerbated when dealing with classification tasks involving many fine-grained class labels. In this work, we alleviate this information diffusion issue, i.e., different tokens share a large proportion of similar information after going through stacked multiple self-attention layers in a transformer, by proposing a calibration method built on feature transformations through rotation and scaling to map a PLM-encoded embedding into a new metric space to guarantee the distinguishability of the resulting embeddings. Furthermore, we take the advantage of hyperbolic embeddings to capture the hierarchical relations among fine-grained class-associated token embedding by a coarse-to-fine metric learning strategy to enhance the distinguishability of the learned output embeddings. Extensive experiments on the three datasets under various settings demonstrate the effectiveness of our approach. Our code can be found at https://github.com/donttal/TARA.",,arXiv,['cs.cl'],, -1224,do we still need clinical language models,"['Eric Lehman', 'Evan Hernandez', 'Diwakar Mahajan', 'Jonas Wulff', 'Micah J. Smith', 'Zachary Ziegler', 'Daniel Nadler', 'Peter Szolovits', 'Alistair Johnson', 'Emily Alsentzer']",http://arxiv.org/pdf/2302.08091v1.pdf,2023-02-16,," Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models trained primarily with general web text are the right tool in highly specialized, safety critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. 
With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC III and IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.",,arXiv,['cs.cl'],, -1225,epalm efficient perceptual augmentation of language models,"['Mustafa Shukor', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2303.11403v4.pdf,2023-03-20,," Large Language Models (LLMs) have so far impressed the world, with unprecedented capabilities that emerge in models at large scales. On the vision side, transformer models (i.e., ViT) are following the same trend, achieving the best performance on challenging benchmarks. With the abundance of such unimodal models, a natural question arises; do we need also to follow this trend to tackle multimodal tasks? In this work, we propose to rather direct effort to efficient adaptations of existing models, and propose to augment Language Models with perception. Existing approaches for adapting pretrained models for vision-language tasks still rely on several key components that hinder their efficiency. In particular, they still train a large number of parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) trained on huge image-text datasets, and add significant inference overhead. In addition, most of these approaches have focused on Zero-Shot and In Context Learning, with little to no effort on direct finetuning. We investigate the minimal computational effort needed to adapt unimodal models for multimodal tasks and propose a new challenging setup, alongside different approaches, that efficiently adapts unimodal pretrained models. We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning across Image, Video, and Audio modalities, following the proposed setup. The code is available here: https://github.com/mshukor/eP-ALM.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -1226,towards making the most of chatgpt for machine translation,"['Keqin Peng', 'Liang Ding', 'Qihuang Zhong', 'Li Shen', 'Xuebo Liu', 'Min Zhang', 'Yuanxin Ouyang', 'Dacheng Tao']",http://arxiv.org/pdf/2303.13780v4.pdf,2023-03-24,," ChatGPT shows remarkable capabilities for machine translation (MT). Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages, but lags behind in complex tasks, e.g., low-resource and distant-language-pairs translation. However, they usually adopt simple prompts which can not fully elicit the capability of ChatGPT. 
In this paper, we aim to further mine ChatGPT's translation ability by revisiting several aspects: temperature, task information, and domain information, and correspondingly propose an optimal temperature setting and two (simple but effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts (DSP). We show that: 1) The performance of ChatGPT depends largely on temperature, and a lower temperature usually can achieve better performance; 2) Emphasizing the task information can further improve ChatGPT's performance, particularly in complex MT tasks; 3) Introducing domain information can elicit ChatGPT's generalization ability and improve its performance in the specific domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT tasks, which can be partially addressed by our proposed prompts but still need to be highlighted for the MT/NLP community. We also explore the effects of advanced in-context learning strategies and find a (negative but interesting) observation: the powerful chain-of-thought prompt leads to word-by-word translation behavior, thus bringing significant translation degradation.",,arXiv,['cs.cl'],, -1227,$k$nn prompting beyondcontext learning with calibrationfree nearest neighbor inference,"['Benfeng Xu', 'Quan Wang', 'Zhendong Mao', 'Yajuan Lyu', 'Qiaoqiao She', 'Yongdong Zhang']",http://arxiv.org/pdf/2303.13824v1.pdf,2023-03-24,," In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing utilization of LLMs. In this paper, we first disclose an actual predicament for this typical usage that it can not scale up with training data due to context length restriction. Besides, existing works have shown that ICL also suffers from various biases and requires delicate calibration treatment. To address both challenges, we advocate a simple and effective solution, $k$NN Prompting, which first queries LLM with training data for distributed representations, then predicts test instances by simply referring to nearest neighbors. We conduct comprehensive experiments to demonstrate its two-fold superiority: 1) Calibration-Free: $k$NN Prompting does not directly align LLM output distribution with task-specific label space, instead leverages such distribution to align test and training instances. It significantly outperforms state-of-the-art calibration-based methods under comparable few-shot scenario. 2) Beyond-Context: $k$NN Prompting can further scale up effectively with as many training data as are available, continually bringing substantial improvements. The scaling trend holds across 10 orders of magnitude ranging from 2 shots to 1024 shots as well as different LLMs scales ranging from 0.8B to 30B. It successfully bridges data scaling into model scaling, and brings new potentials for the gradient-free paradigm of LLM deployment. Code is publicly available.",,arXiv,"['cs.cl', 'cs.ai']",, -1228,what makes good incontext demonstrations for code intelligence tasks with llms,"['Shuzheng Gao', 'Xin-Cheng Wen', 'Cuiyun Gao', 'Wenxuan Wang', 'Hongyu Zhang', 'Michael R. Lyu']",http://arxiv.org/pdf/2304.07575v2.pdf,2023-04-15,," Pre-trained models of source code have gained widespread popularity in many code intelligence tasks. Recently, with the scaling of the model and corpus size, large language models have shown the ability of in-context learning (ICL). 
ICL employs task instructions and a few examples as demonstrations, and then inputs the demonstrations to the language models for making predictions. This new learning paradigm is training-free and has shown impressive performance in various natural language processing and code intelligence tasks. However, the performance of ICL heavily relies on the quality of demonstrations, e.g., the selected examples. It is important to systematically investigate how to construct a good demonstration for code-related tasks. In this paper, we empirically explore the impact of three key factors on the performance of ICL in code intelligence tasks: the selection, order, and number of demonstration examples. We conduct extensive experiments on three code intelligence tasks including code summarization, bug fixing, and program synthesis. Our experimental results demonstrate that all the above three factors dramatically impact the performance of ICL in code intelligence tasks. Additionally, we summarize our findings and provide takeaway suggestions on how to construct effective demonstrations, taking into account these three perspectives. We also show that a carefully-designed demonstration based on our findings can lead to substantial improvements over widely-used demonstration construction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%, 175.96%, and 50.81% on code summarization, bug fixing, and program synthesis, respectively",,arXiv,['cs.se'],, -1229,sparks of gpts in edge intelligence for metaverse caching and inference for mobile aigc services,"['Minrui Xu', 'Dusit Niyato', 'Hongliang Zhang', 'Jiawen Kang', 'Zehui Xiong', 'Shiwen Mao', 'Zhu Han']",http://arxiv.org/pdf/2304.08782v2.pdf,2023-04-18,," Aiming at achieving artificial general intelligence (AGI) for Metaverse, pretrained foundation models (PFMs), e.g., generative pretrained transformers (GPTs), can effectively provide various AI services, such as autonomous driving, digital twins, and AI-generated content (AIGC) for extended reality. With the advantages of low latency and privacy-preserving, serving PFMs of mobile AI services in edge intelligence is a viable solution for caching and executing PFMs on edge servers with limited computing resources and GPU memory. However, PFMs typically consist of billions of parameters that are computation and memory-intensive for edge servers during loading and execution. In this article, we investigate edge PFM serving problems for mobile AIGC services of Metaverse. First, we introduce the fundamentals of PFMs and discuss their characteristic fine-tuning and inference methods in edge intelligence. Then, we propose a novel framework of joint model caching and inference for managing models and allocating resources to satisfy users' requests efficiently. Furthermore, considering the in-context learning ability of PFMs, we propose a new metric to evaluate the freshness and relevance between examples in demonstrations and executing tasks, namely the Age of Context (AoC). 
Finally, we propose a least context algorithm for managing cached models at edge servers by balancing the tradeoff among latency, energy consumption, and accuracy.",,arXiv,['cs.ni'],, -1230,controlled text generation with natural language instructions,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ethan Wilcox', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2304.14293v2.pdf,2023-04-27,," Large language models generate fluent texts and can follow natural language instructions to solve a wide range of tasks without task-specific training. Nevertheless, it is notoriously difficult to control their generation to satisfy the various constraints required by different applications. In this work, we present InstructCTG, a controlled text generation framework that incorporates different constraints by conditioning on natural language descriptions and demonstrations of the constraints. In particular, we first extract the underlying constraints of natural texts through a combination of off-the-shelf NLP tools and simple heuristics. We then verbalize the constraints into natural language instructions to form weakly supervised training data. By prepending natural language descriptions of the constraints and a few demonstrations, we fine-tune a pre-trained language model to incorporate various types of constraints. Compared to existing search-based or score-based methods, InstructCTG is more flexible to different constraint types and has a much smaller impact on the generation quality and speed because it does not modify the decoding procedure. Additionally, InstructCTG allows the model to adapt to new constraints without re-training through the use of few-shot task generalization and in-context learning abilities of instruction-tuned language models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1231,tallrec an effective and efficient tuning framework to align large language model with recommendation,"['Keqin Bao', 'Jizhi Zhang', 'Yang Zhang', 'Wenjie Wang', 'Fuli Feng', 'Xiangnan He']",http://arxiv.org/pdf/2305.00447v3.pdf,2023-04-30,," Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains, thereby prompting researchers to explore their potential for use in recommendation systems. Initial attempts have leveraged the exceptional capabilities of LLMs, such as rich knowledge and strong generalization through In-context Learning, which involves phrasing the recommendation task as prompts. Nevertheless, the performance of LLMs in recommendation tasks remains suboptimal due to a substantial disparity between the training tasks for LLMs and recommendation tasks, as well as inadequate recommendation data during pre-training. To bridge the gap, we consider building a Large Recommendation Language Model by tunning LLMs with recommendation data. To this end, we propose an efficient and effective Tuning framework for Aligning LLMs with Recommendation, namely TALLRec. We have demonstrated that the proposed TALLRec framework can significantly enhance the recommendation capabilities of LLMs in the movie and book domains, even with a limited dataset of fewer than 100 samples. Additionally, the proposed framework is highly efficient and can be executed on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM exhibits robust cross-domain generalization. 
Our code and data are available at https://github.com/SAI990323/TALLRec.",,arXiv,['cs.ir'],, -1232,using chatgpt for entity matching,"['Ralph Peeters', 'Christian Bizer']",http://arxiv.org/pdf/2305.03423v2.pdf,2023-05-05,," Entity Matching is the task of deciding if two entity descriptions refer to the same real-world entity. State-of-the-art entity matching methods often rely on fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacks of using these models for entity matching are that (i) the models require significant amounts of fine-tuning data for reaching a good performance and (ii) the fine-tuned models are not robust concerning out-of-distribution entities. In this paper, we investigate using ChatGPT for entity matching as a more robust, training data-efficient alternative to traditional Transformer models. We perform experiments along three dimensions: (i) general prompt design, (ii) in-context learning, and (iii) provision of higher-level matching knowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model, reaching a zero-shot performance of 82.35% F1 on a challenging matching task on which RoBERTa requires 2000 training examples for reaching a similar performance. Adding in-context demonstrations to the prompts further improves the F1 by up to 7.85% when using similarity-based example selection. Always using the same set of 10 handpicked demonstrations leads to an improvement of 4.92% over the zero-shot performance. Finally, we show that ChatGPT can also be guided by adding higher-level matching knowledge in the form of rules to the prompts. Providing matching rules leads to similar performance gains as providing in-context demonstrations.",,arXiv,['cs.cl'],, -1233,can language models solve graph problems in natural language,"['Heng Wang', 'Shangbin Feng', 'Tianxing He', 'Zhaoxuan Tan', 'Xiaochuang Han', 'Yulia Tsvetkov']",http://arxiv.org/pdf/2305.10037v2.pdf,2023-05-17,," Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures, such as planning in robotics, multi-hop question answering or knowledge probing, structured commonsense reasoning, and more. While LLMs have advanced the state-of-the-art on these tasks with structure implications, whether LLMs could explicitly process textual descriptions of graphs and structures, map them to grounded conceptual spaces, and perform structured operations remains underexplored. To this end, we propose NLGraph (Natural Language Graph), a comprehensive benchmark of graph-based problem solving designed in natural language. NLGraph contains 29,370 problems, covering eight graph reasoning tasks with varying complexity from simple tasks such as connectivity and shortest path up to complex problems such as maximum flow and simulating graph neural networks. We evaluate LLMs (GPT-3/4) with various prompting approaches on the NLGraph benchmark and find that 1) language models do demonstrate preliminary graph reasoning abilities, 2) the benefit of advanced prompting and in-context learning diminishes on more complex graph problems, while 3) LLMs are also (un)surprisingly brittle in the face of spurious correlations in graph and problem settings. 
We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems. Build-a-Graph and Algorithmic prompting improve the performance of LLMs on NLGraph by 3.07% to 16.85% across multiple tasks and settings, while how to solve the most complicated graph reasoning tasks in our setup with language models remains an open research question. The NLGraph benchmark and evaluation code are available at https://github.com/Arthur-Heng/NLGraph.",,arXiv,"['cs.cl', 'cs.ai']",, -1234,joint foundation model caching and inference of generative ai services for edge intelligence,"['Minrui Xu', 'Dusit Niyato', 'Hongliang Zhang', 'Jiawen Kang', 'Zehui Xiong', 'Shiwen Mao', 'Zhu Han']",http://arxiv.org/pdf/2305.12130v1.pdf,2023-05-20,," With the rapid development of artificial general intelligence (AGI), various multimedia services based on pretrained foundation models (PFMs) need to be effectively deployed. With edge servers that have cloud-level computing power, edge intelligence can extend the capabilities of AGI to mobile edge networks. However, compared with cloud data centers, resource-limited edge servers can only cache and execute a small number of PFMs, which typically consist of billions of parameters and require intensive computing power and GPU memory during inference. To address this challenge, in this paper, we propose a joint foundation model caching and inference framework that aims to balance the tradeoff among inference latency, accuracy, and resource consumption by managing cached PFMs and user requests efficiently during the provisioning of generative AI services. Specifically, considering the in-context learning ability of PFMs, a new metric named the Age of Context (AoC), is proposed to model the freshness and relevance between examples in past demonstrations and current service requests. Based on the AoC, we propose a least context caching algorithm to manage cached PFMs at edge servers with historical prompts and inference results. The numerical results demonstrate that the proposed algorithm can reduce system costs compared with existing baselines by effectively utilizing contextual information.",,arXiv,['cs.ni'],, -1235,enhancing fewshot texttosql capabilities of large language models a study on prompt design strategies,"['Linyong Nan', 'Yilun Zhao', 'Weijin Zou', 'Narutatsu Ri', 'Jaesung Tae', 'Ellen Zhang', 'Arman Cohan', 'Dragomir Radev']",http://arxiv.org/pdf/2305.12586v1.pdf,2023-05-21,," In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions. In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources, and improve Text-to-SQL systems by exploring various prompt design strategies for employing LLMs. We conduct a systematic investigation into different demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task. Our approach involves leveraging the syntactic structure of an example's SQL query to retrieve demonstrations, and we demonstrate that pursuing both diversity and similarity in demonstration selection leads to enhanced performance. Furthermore, we show that LLMs benefit from database-related knowledge augmentations. 
Our most effective strategy outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and the best fine-tuned system by 5.1 points on the Spider dataset. These results highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL task, and we present an analysis of the factors contributing to the success of our strategy.",,arXiv,['cs.cl'],, -1236,exploring chainofthought style prompting for texttosql,"['Chang-You Tai', 'Ziru Chen', 'Tianshu Zhang', 'Xiang Deng', 'Huan Sun']",http://arxiv.org/pdf/2305.14215v2.pdf,2023-05-23,," In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs' reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.",,arXiv,['cs.cl'],, -1237,increasing probability mass on answer choices does not always improve accuracy,"['Sarah Wiegreffe', 'Matthew Finlayson', 'Oyvind Tafjord', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2305.14596v2.pdf,2023-05-24,," When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren't among the given answer choices. Spreading probability mass across multiple surface forms with identical meaning (such as ""bath"" and ""bathtub"") is thought to cause an underestimation of a model's true performance, referred to as the ""surface form competition"" (SFC) hypothesis. This has motivated the introduction of various probability normalization methods. However, many core questions remain unanswered. How do we measure SFC? Are there direct ways of reducing it, and does doing so improve task performance? We propose a mathematical formalism for SFC which allows us to quantify and bound its impact for the first time. We identify a simple method for reducing it -- namely, increasing probability mass on the given answer choices by a) including them in the prompt and b) using in-context learning with even just one example. We show this method eliminates the impact of SFC in the majority of instances. Our experiments on three diverse datasets and six LMs reveal several additional surprising findings. For example, both normalization and prompting methods for reducing SFC can be ineffective or even detrimental to task performance for some LMs. We conclude with practical insights for effectively prompting LMs for multiple-choice tasks.",,arXiv,"['cs.cl', 'cs.lg']",, -1238,universal selfadaptive prompting,"['Xingchen Wan', 'Ruoxi Sun', 'Hootan Nakhost', 'Hanjun Dai', 'Julian Martin Eisenschlos', 'Sercan O. 
Arik', 'Tomas Pfister']",http://arxiv.org/pdf/2305.14926v2.pdf,2023-05-24,," A hallmark of modern large language models (LLMs) is their impressive general zero-shot and few-shot abilities, often elicited through in-context learning (ICL) via prompting. However, while highly coveted and being the most general, zero-shot performances in LLMs are still typically weaker due to the lack of guidance and the difficulty of applying existing automatic prompt design methods in general tasks when ground-truth labels are unavailable. In this study, we address this by presenting Universal Self-Adaptive Prompting (USP), an automatic prompt design approach specifically tailored for zero-shot learning (while compatible with few-shot). Requiring only a small amount of unlabeled data and an inference-only LLM, USP is highly versatile: to achieve universal prompting, USP categorizes a possible NLP task into one of the three possible task types and then uses a corresponding selector to select the most suitable queries and zero-shot model-generated responses as pseudo-demonstrations, thereby generalizing ICL to the zero-shot setup in a fully automated way. We evaluate USP with PaLM and PaLM 2 models and demonstrate performances that are considerably stronger than standard zero-shot baselines and often comparable to or even superior to few-shot baselines across more than 40 natural language understanding, natural language generation, and reasoning tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1239,are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization,"['Aman Priyanshu', 'Supriti Vijay', 'Ayush Kumar', 'Rakshit Naidu', 'Fatemehsadat Mireshghallah']",http://arxiv.org/pdf/2305.15008v1.pdf,2023-05-24,," LLM-powered chatbots are becoming widely adopted in applications such as healthcare, personal assistants, industry hiring decisions, etc. In many of these cases, chatbots are fed sensitive, personal information in their prompts, as samples for in-context learning, retrieved records from a database, or as part of the conversation. The information provided in the prompt could directly appear in the output, which might have privacy ramifications if there is sensitive information there. As such, in this paper, we aim to understand the input copying and regurgitation capabilities of these models during inference and how they can be directly instructed to limit this copying by complying with regulations such as HIPAA and GDPR, based on their internal knowledge of them. More specifically, we find that when ChatGPT is prompted to summarize cover letters of a 100 candidates, it would retain personally identifiable information (PII) verbatim in 57.4% of cases, and we find this retention to be non-uniform between different subgroups of people, based on attributes such as gender identity. We then probe ChatGPT's perception of privacy-related policies and privatization mechanisms by directly instructing it to provide compliant outputs and observe a significant omission of PII from output.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, -1240,finetuning language models with just forward passes,"['Sadhika Malladi', 'Tianyu Gao', 'Eshaan Nichani', 'Alex Damian', 'Jason D. Lee', 'Danqi Chen', 'Sanjeev Arora']",http://arxiv.org/pdf/2305.17333v2.pdf,2023-05-27,," Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory. 
Zeroth-order (ZO) methods can in principle estimate gradients using only two forward passes but are theorized to be catastrophically slow for optimizing large models. In this work, we propose a memory-efficient zeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate in-place, thereby fine-tuning LMs with the same memory footprint as inference. For example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter model, whereas fine-tuning with backpropagation can train only a 2.7B LM with the same budget. We conduct comprehensive experiments across model types (masked and autoregressive LMs), model scales (up to 66B), and downstream tasks (classification, multiple-choice, and generation). Our results demonstrate that (1) MeZO significantly outperforms in-context learning and linear probing; (2) MeZO achieves comparable performance to fine-tuning with backpropagation across multiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction in our implementation; (3) MeZO is compatible with both full-parameter and parameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO can effectively optimize non-differentiable objectives (e.g., maximizing accuracy or F1). We support our empirical findings with theoretical insights, highlighting how adequate pre-training and task prompts enable MeZO to fine-tune huge models, despite classical ZO analyses suggesting otherwise.",,arXiv,"['cs.lg', 'cs.cl']",, -1241,do large language models know what they don't know,"['Zhangyue Yin', 'Qiushi Sun', 'Qipeng Guo', 'Jiawen Wu', 'Xipeng Qiu', 'Xuanjing Huang']",http://arxiv.org/pdf/2305.18153v2.pdf,2023-05-29,," Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks. Current research focuses on enhancing their performance within their existing knowledge. Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend. Therefore, the ability to understand their own limitations on the unknows, referred to as self-knowledge, is of paramount importance. This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, SelfAware, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovering an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge.",,arXiv,['cs.cl'],, -1242,improving clip training with language rewrites,"['Lijie Fan', 'Dilip Krishnan', 'Phillip Isola', 'Dina Katabi', 'Yonglong Tian']",http://arxiv.org/pdf/2305.20088v2.pdf,2023-05-31,," Contrastive Language-Image Pre-training (CLIP) stands as one of the most effective and scalable methods for training transferable vision models using paired image and text data. CLIP models are trained using contrastive loss, which typically relies on data augmentations to prevent overfitting and shortcuts. 
However, in the CLIP training paradigm, data augmentations are exclusively applied to image inputs, while language inputs remain unchanged throughout the entire training process, limiting the exposure of diverse texts to the same image. In this paper, we introduce Language augmented CLIP (LaCLIP), a simple yet highly effective approach to enhance CLIP training through language rewrites. Leveraging the in-context learning capability of large language models, we rewrite the text descriptions associated with each image. These rewritten texts exhibit diversity in sentence structure and vocabulary while preserving the original key concepts and meanings. During training, LaCLIP randomly selects either the original texts or the rewritten versions as text augmentations for each image. Extensive experiments on CC3M, CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training with language rewrites significantly improves the transfer performance without computation or memory overhead during training. Specifically for ImageNet zero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% on LAION-400M. Code is available at https://github.com/LijieFan/LaCLIP.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -1243,sqlpalm improved large language model adaptation for texttosql,"['Ruoxi Sun', 'Sercan O. Arik', 'Hootan Nakhost', 'Hanjun Dai', 'Rajarishi Sinha', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2306.00739v3.pdf,2023-05-26,," One impressive emergent capability of large language models (LLMs) is generation of code, including Structured Query Language (SQL) for databases. For the task of converting natural language text to SQL queries, Text-to-SQL, adaptation of LLMs is of paramount importance, both in in-context learning and fine-tuning settings, depending on the amount of adaptation data used. In this paper, we propose an LLM-based Text-to-SQL model SQL-PaLM, leveraging on PaLM-2, that pushes the state-of-the-art in both settings. Few-shot SQL-PaLM is based on an execution-based self-consistency prompting approach designed for Text-to-SQL, and achieves 77.3% in test-suite accuracy on Spider, which to our best knowledge is the first to outperform previous state-of-the-art with fine-tuning by a significant margin, 4%. Furthermore, we demonstrate that the fine-tuned SQL-PALM outperforms it further by another 1%. Towards applying SQL-PaLM to real-world scenarios we further evaluate its robustness on other challenging variants of Spider and demonstrate the superior generalization capability of SQL-PaLM. In addition, via extensive case studies, we demonstrate the impressive intelligent capabilities and various success enablers of LLM-based Text-to-SQL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db']",, -1244,zeroshot 3d shape correspondence,"['Ahmed Abdelreheem', 'Abdelrahman Eldesokey', 'Maks Ovsjanikov', 'Peter Wonka']",http://arxiv.org/pdf/2306.03253v2.pdf,2023-06-05,," We propose a novel zero-shot approach to computing correspondences between 3D shapes. Existing approaches mainly focus on isometric and near-isometric shape pairs (e.g., human vs. human), but less attention has been given to strongly non-isometric and inter-class shape matching (e.g., human vs. cow). To this end, we introduce a fully automatic method that exploits the exceptional reasoning capabilities of recent foundation models in language and vision to tackle difficult shape correspondence problems. Our approach comprises multiple stages. 
First, we classify the 3D shapes in a zero-shot manner by feeding rendered shape views to a language-vision model (e.g., BLIP2) to generate a list of class proposals per shape. These proposals are unified into a single class per shape by employing the reasoning capabilities of ChatGPT. Second, we attempt to segment the two shapes in a zero-shot manner, but in contrast to the co-segmentation problem, we do not require a mutual set of semantic regions. Instead, we propose to exploit the in-context learning capabilities of ChatGPT to generate two different sets of semantic regions for each shape and a semantic mapping between them. This enables our approach to match strongly non-isometric shapes with significant differences in geometric structure. Finally, we employ the generated semantic mapping to produce coarse correspondences that can further be refined by the functional maps framework to produce dense point-to-point maps. Our approach, despite its simplicity, produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes. Project webpage: https://samir55.github.io/3dshapematch/.",,arXiv,['cs.cv'],, -1245,mimicit multimodal incontext instruction tuning,"['Bo Li', 'Yuanhan Zhang', 'Liangyu Chen', 'Jinghao Wang', 'Fanyi Pu', 'Jingkang Yang', 'Chunyuan Li', 'Ziwei Liu']",http://arxiv.org/pdf/2306.05425v1.pdf,2023-06-08,," High-quality instructions and responses are essential for the zero-shot performance of large language models on interactive natural language tasks. For interactive vision-language tasks involving intricate visual scenes, a large quantity of diverse and creative instruction-response pairs should be imperative to tune vision-language models (VLMs). Nevertheless, the current availability of vision-language instruction-response pairs in terms of quantity, diversity, and creativity remains limited, posing challenges to the generalization of interactive VLMs. Here we present MultI-Modal In-Context Instruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodal instruction-response pairs, with 2.2 million unique instructions derived from images and videos. Each pair is accompanied by multi-modal in-context information, forming conversational contexts aimed at empowering VLMs in perception, reasoning, and planning. The instruction-response collection process, dubbed as Syphus, is scaled using an automatic annotation pipeline that combines human expertise with GPT's capabilities. Using the MIMIC-IT dataset, we train a large VLM named Otter. Based on extensive evaluations conducted on vision-language benchmarks, it has been observed that Otter demonstrates remarkable proficiency in multi-modal perception, reasoning, and in-context learning. Human evaluation reveals it effectively aligns with the user's intentions. 
We release the MIMIC-IT dataset, instruction-response collection pipeline, benchmarks, and the Otter model.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.hc']",, -1246,medfmc a realworld dataset and benchmark for foundation model adaptation in medical image classification,"['Dequan Wang', 'Xiaosong Wang', 'Lilong Wang', 'Mengzhang Li', 'Qian Da', 'Xiaoqiang Liu', 'Xiangyu Gao', 'Jun Shen', 'Junjun He', 'Tian Shen', 'Qi Duan', 'Jie Zhao', 'Kang Li', 'Yu Qiao', 'Shaoting Zhang']",http://arxiv.org/pdf/2306.09579v1.pdf,2023-06-16,," Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications. Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples, e.g., in-context learning. Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks. In this paper, we aim at approaches adapting the foundation models for medical image classification and present a novel dataset and benchmark for the evaluation, i.e., examining the overall performance of accommodating the large-scale foundation models downstream on a set of diverse real-world clinical tasks. We collect five sets of medical imaging data from multiple institutes targeting a variety of real-world clinical tasks (22,349 images in total), i.e., thoracic diseases screening in X-rays, pathological lesion tissue screening, lesion detection in endoscopy images, neonatal jaundice evaluation, and diabetic retinopathy grading. Results of multiple baseline methods are demonstrated using the proposed dataset from both accuracy and cost-effective perspectives.",,arXiv,['cs.cv'],, -1247,jiuzhang 20 a unified chinese pretrained language model for multitask mathematical problem solving,"['Wayne Xin Zhao', 'Kun Zhou', 'Beichen Zhang', 'Zheng Gong', 'Zhipeng Chen', 'Yuanhang Zhou', 'Ji-Rong Wen', 'Jing Sha', 'Shijin Wang', 'Cong Liu', 'Guoping Hu']",http://arxiv.org/pdf/2306.11027v1.pdf,2023-06-19,," Although pre-trained language models~(PLMs) have recently advanced the research progress in mathematical reasoning, they are not specially designed as a capable multi-task solver, suffering from high cost for multi-task deployment (\eg a model copy for a task) and inferior performance on complex mathematical problems in practical applications. To address these issues, in this paper, we propose \textbf{JiuZhang~2.0}, a unified Chinese PLM specially for multi-task mathematical problem solving. Our idea is to maintain a moderate-sized model and employ the \emph{cross-task knowledge sharing} to improve the model capacity in a multi-task setting. Specially, we construct a Mixture-of-Experts~(MoE) architecture for modeling mathematical text, so as to capture the common mathematical knowledge across tasks. For optimizing the MoE architecture, we design \emph{multi-task continual pre-training} and \emph{multi-task fine-tuning} strategies for multi-task adaptation. These training strategies can effectively decompose the knowledge from the task data and establish the cross-task sharing via expert networks. In order to further improve the general capacity of solving different complex tasks, we leverage large language models~(LLMs) as complementary models to iteratively refine the generated solution by our PLM, via in-context learning. 
Extensive experiments have demonstrated the effectiveness of our model.",,arXiv,"['cs.cl', 'cs.ai']",, -1248,a chain of aibased solutions for resolving fqns and fixing syntax errors in partial code,"['Qing Huang', 'Jiahui Zhu', 'Zhenchang Xing', 'Huan Jin', 'Changjing Wang', 'Xiwei Xu']",http://arxiv.org/pdf/2306.11981v1.pdf,2023-06-21,," API documentation, technical blogs and programming Q&A sites contain numerous partial code that can be reused in programming tasks, but often these code are uncompilable due to unresolved names and syntax errors. To facilitate partial code reuse, we propose the Partial Code Reuse Chain (PCR-Chain) for resolving fully-qualified names (FQNs) and fixing last-mile syntax errors in partial code based on a giant large language model (LLM) like ChatGPT. Methodologically, PCR-Chain is backed up by the underlying global-level prompt architecture (which combines three design ideas: hierarchical task breakdown, prompt composition, and a mix of prompt-based AI and non-AI units) and the local-level prompt design. Technically, we propose PCR-Chain, which employs in-context learning rather than symbolic, costly training methods. Experimental results demonstrate that in dynamically-typed languages (Python), PCR-Chain outperforms current state-of-the-art (SOTA) 5% accuracy like RING. For statically-type languages (Java), our approach achieves high accuracy of 80.5% in resolving both non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) that can only address last-mile syntax errors. The correct execution of the unit, module, and PCR-Chain demonstrates the effectiveness of the prompt design, composition, and architecture and opens up possibilities for building software engineering tools based on LLMs, replacing traditional program analysis methods.",,arXiv,['cs.se'],, -1249,generative multimodal entity linking,"['Senbao Shi', 'Zhenran Xu', 'Baotian Hu', 'Min Zhang']",http://arxiv.org/pdf/2306.12725v2.pdf,2023-06-22,," Multimodal Entity Linking (MEL) is the task of mapping mentions with multimodal contexts to the referent entities from a knowledge base (e.g. Wikipedia). Existing MEL methods mainly focus on designing complex multimodal interaction mechanisms and require fine-tuning all model parameters, which can be prohibitively costly and difficult to scale in the era of Large Language Models (LLMs). In this work, we propose GEMEL, a simple yet effective Generative Multimodal Entity Linking framework based on LLMs, which directly generates target entity names. We keep the vision and language model frozen and only train a feature mapper to enable cross-modality interactions. To adapt LLMs to the MEL task, we take advantage of the emergent in-context learning capability of LLMs by retrieving multimodal instances as demonstrations. Extensive experiments show that, with only ~0.3% of the model parameters fine-tuned, GEMEL achieves state-of-the-art results on two well-established MEL datasets (7.7% accuracy gains on WikiDiverse and 8.8% accuracy gains on WikiMEL). The performance gain stems from mitigating the popularity bias of LLM predictions and disambiguating less common entities effectively. Further analysis verifies the generality and scalability of GEMEL. 
Our approach is compatible with any off-the-shelf language model, paving the way towards an efficient and general solution for utilizing LLMs in the MEL task.",,arXiv,['cs.cl'],, -1250,kosmos2 grounding multimodal large language models to the world,"['Zhiliang Peng', 'Wenhui Wang', 'Li Dong', 'Yaru Hao', 'Shaohan Huang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2306.14824v3.pdf,2023-06-26,," We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.",,arXiv,"['cs.cl', 'cs.cv']",, -1251,supervised pretraining can learn incontext reinforcement learning,"['Jonathan N. Lee', 'Annie Xie', 'Aldo Pacchiano', 'Yash Chandak', 'Chelsea Finn', 'Ofir Nachum', 'Emma Brunskill']",http://arxiv.org/pdf/2306.14892v1.pdf,2023-06-26,," Large transformer models trained on diverse datasets have shown a remarkable ability to learn in-context, achieving high few-shot performance on tasks they were not explicitly trained to solve. In this paper, we study the in-context learning capabilities of transformers in decision-making problems, i.e., reinforcement learning (RL) for bandits and Markov decision processes. To do so, we introduce and study Decision-Pretrained Transformer (DPT), a supervised pretraining method where the transformer predicts an optimal action given a query state and an in-context dataset of interactions, across a diverse set of tasks. This procedure, while simple, produces a model with several surprising capabilities. We find that the pretrained transformer can be used to solve a range of RL problems in-context, exhibiting both exploration online and conservatism offline, despite not being explicitly trained to do so. The model also generalizes beyond the pretraining distribution to new tasks and automatically adapts its decision-making strategies to unknown structure. Theoretically, we show DPT can be viewed as an efficient implementation of Bayesian posterior sampling, a provably sample-efficient RL algorithm. We further leverage this connection to provide guarantees on the regret of the in-context algorithm yielded by DPT, and prove that it can learn faster than algorithms used to generate the pretraining data. 
These results suggest a promising yet simple path towards instilling strong in-context decision-making abilities in transformers.",,arXiv,"['cs.lg', 'cs.ai']",, -1252,a gpt4 reticular chemist for guiding mof discovery,"['Zhiling Zheng', 'Zichao Rong', 'Nakul Rampal', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.14915v2.pdf,2023-06-20,," We present a new framework integrating the AI model GPT-4 into the iterative process of reticular chemistry experimentation, leveraging a cooperative workflow of interaction between AI and a human researcher. This GPT-4 Reticular Chemist is an integrated system composed of three phases. Each of these utilizes GPT-4 in various capacities, wherein GPT-4 provides detailed instructions for chemical experimentation and the human provides feedback on the experimental outcomes, including both success and failures, for the in-context learning of AI in the next iteration. This iterative human-AI interaction enabled GPT-4 to learn from the outcomes, much like an experienced chemist, by a prompt-learning strategy. Importantly, the system is based on natural language for both development and operation, eliminating the need for coding skills, and thus, make it accessible to all chemists. Our collaboration with GPT-4 Reticular Chemist guided the discovery of an isoreticular series of MOFs, with each synthesis fine-tuned through iterative feedback and expert suggestions. This workflow presents a potential for broader applications in scientific research by harnessing the capability of large language models like GPT-4 to enhance the feasibility and efficiency of research activities.",,arXiv,"['cs.ai', 'cond-mat.mtrl-sci', 'physics.chem-ph']",, -1253,voicebox textguided multilingual universal speech generation at scale,"['Matthew Le', 'Apoorv Vyas', 'Bowen Shi', 'Brian Karrer', 'Leda Sari', 'Rashel Moritz', 'Mary Williamson', 'Vimal Manohar', 'Yossi Adi', 'Jay Mahadeokar', 'Wei-Ning Hsu']",http://arxiv.org/pdf/2306.15687v2.pdf,2023-06-23,," Large-scale generative models such as GPT and DALL-E have revolutionized the research community. These models not only generate high fidelity outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization. In this paper, we present Voicebox, the most versatile text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech, given audio context and text, trained on over 50K hours of speech that are not filtered or enhanced. Similar to GPT, Voicebox can perform many different tasks through in-context learning, but is more flexible as it can also condition on future context. Voicebox can be used for mono or cross-lingual zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation. In particular, Voicebox outperforms the state-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9% vs 1.9% word error rates) and audio similarity (0.580 vs 0.681) while being up to 20 times faster. Audio samples can be found in \url{https://voicebox.metademolab.com}.",,arXiv,"['eess.as', 'cs.cl', 'cs.lg', 'cs.sd']",, -1254,spae semantic pyramid autoencoder for multimodal generation with frozen llms,"['Lijun Yu', 'Yong Cheng', 'Zhiruo Wang', 'Vivek Kumar', 'Wolfgang Macherey', 'Yanping Huang', 'David A. Ross', 'Irfan Essa', 'Yonatan Bisk', 'Ming-Hsuan Yang', 'Kevin Murphy', 'Alexander G. 
Hauptmann', 'Lu Jiang']",http://arxiv.org/pdf/2306.17842v3.pdf,2023-06-30,," In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling frozen LLMs to perform both understanding and generation tasks involving non-linguistic modalities such as images or videos. SPAE converts between raw pixels and interpretable lexical tokens (or words) extracted from the LLM's vocabulary. The resulting tokens capture both the semantic meaning and the fine-grained details needed for visual reconstruction, effectively translating the visual content into a language comprehensible to the LLM, and empowering it to perform a wide array of multimodal tasks. Our approach is validated through in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set of image understanding and generation tasks. Our method marks the first successful attempt to enable a frozen LLM to generate image content while surpassing state-of-the-art performance in image understanding tasks, under the same setting, by over 25%.",,arXiv,"['cs.cv', 'cs.cl', 'cs.mm']",, -1255,recallm an adaptable memory mechanism with temporal understanding for large language models,"['Brandon Kynoch', 'Hugo Latapie', 'Dwane van der Sluis']",http://arxiv.org/pdf/2307.02738v3.pdf,2023-07-06,," Large Language Models (LLMs) have made extraordinary progress in the field of Artificial Intelligence and have demonstrated remarkable capabilities across a large variety of tasks and domains. However, as we venture closer to creating Artificial General Intelligence (AGI) systems, we recognize the need to supplement LLMs with long-term memory to overcome the context window limitation and more importantly, to create a foundation for sustained reasoning, cumulative learning and long-term user interaction. In this paper we propose RecallM, a novel architecture for providing LLMs with an adaptable and updatable long-term memory mechanism. Unlike previous methods, the RecallM architecture is particularly effective at belief updating and maintaining a temporal understanding of the knowledge provided to it. We demonstrate through various experiments the effectiveness of this architecture. Furthermore, through our own temporal understanding and belief updating experiments, we show that RecallM is four times more effective than using a vector database for updating knowledge previously stored in long-term memory. We also demonstrate that RecallM shows competitive performance on general question-answering and in-context learning tasks.",,arXiv,"['cs.ai', 'cs.cl', 'cs.sc']",, -1256,one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention,"['Arvind Mahankali', 'Tatsunori B. Hashimoto', 'Tengyu Ma']",http://arxiv.org/pdf/2307.03576v1.pdf,2023-07-07,," Recent works have empirically analyzed in-context learning and shown that transformers trained on synthetic linear regression tasks can learn to implement ridge regression, which is the Bayes-optimal predictor, given sufficient capacity [Aky\""urek et al., 2023], while one-layer transformers with linear self-attention and no MLP layer will learn to implement one step of gradient descent (GD) on a least-squares linear regression objective [von Oswald et al., 2022]. However, the theory behind these observations remains poorly understood. 
We theoretically study transformers with a single layer of linear self-attention, trained on synthetic noisy linear regression data. First, we mathematically show that when the covariates are drawn from a standard Gaussian distribution, the one-layer transformer which minimizes the pre-training loss will implement a single step of GD on the least-squares linear regression objective. Then, we find that changing the distribution of the covariates and weight vector to a non-isotropic Gaussian distribution has a strong impact on the learned algorithm: the global minimizer of the pre-training loss now implements a single step of $\textit{pre-conditioned}$ GD. However, if only the distribution of the responses is changed, then this does not have a large effect on the learned algorithm: even when the response comes from a more general family of $\textit{nonlinear}$ functions, the global minimizer of the pre-training loss still implements a single step of GD on a least-squares linear regression objective.",,arXiv,['cs.lg'],, -1257,large language models as general pattern machines,"['Suvir Mirchandani', 'Fei Xia', 'Pete Florence', 'Brian Ichter', 'Danny Driess', 'Montserrat Gonzalez Arenas', 'Kanishka Rao', 'Dorsa Sadigh', 'Andy Zeng']",http://arxiv.org/pdf/2307.04721v2.pdf,2023-07-10,," We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences -- from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics -- from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ro']",, -1258,megatts 2 zeroshot texttospeech with arbitrary length speech prompts,"['Ziyue Jiang', 'Jinglin Liu', 'Yi Ren', 'Jinzheng He', 'Chen Zhang', 'Zhenhui Ye', 'Pengfei Wei', 'Chunfeng Wang', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao']",http://arxiv.org/pdf/2307.07218v2.pdf,2023-07-14,," Zero-shot text-to-speech aims at synthesizing voices with unseen speech prompts. Previous large-scale multispeaker TTS models have successfully achieved this goal with an enrolled recording within 10 seconds. However, most of them are designed to utilize only short speech prompts. The limited information in short speech prompts significantly hinders the performance of fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multispeaker TTS model that is capable of synthesizing speech for unseen speakers with arbitrary-length prompts.
Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference speeches; 2) and train a prosody language model with arbitrary-length speech prompts; With these designs, our model is suitable for prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverages the probabilities derived from multiple P-LLM outputs to produce expressive and controlled prosody. Furthermore, we propose a phoneme-level auto-regressive duration model to introduce in-context learning capabilities to duration modeling. Experiments demonstrate that our method could not only synthesize identity-preserving speech with a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found in https://mega-tts.github.io/mega2_demo/.",,arXiv,"['eess.as', 'cs.sd']",, -1259,do emergent abilities exist in quantized large language models an empirical study,"['Peiyu Liu', 'Zikang Liu', 'Ze-Feng Gao', 'Dawei Gao', 'Wayne Xin Zhao', 'Yaliang Li', 'Bolin Ding', 'Ji-Rong Wen']",http://arxiv.org/pdf/2307.08072v2.pdf,2023-07-16,," Despite the superior performance, Large Language Models~(LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs as well as increasing the inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation. It is important to understand how quantization impacts the capacity of LLMs. Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models. Specially, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction-following in quantized LLMs. Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation on the test of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) fine-gained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings to understand the impact of quantization on emergent abilities, and sheds lights on the possibilities of extremely low-bit quantization for LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, -1260,generating mathematical derivations with large language models,"['Jordan Meadows', 'Marco Valentino', 'Andre Freitas']",http://arxiv.org/pdf/2307.09998v3.pdf,2023-07-19,," The derivation of mathematical results in specialised fields, using Large Language Models (LLMs), is an emerging research direction that can help identify models' limitations, and potentially support mathematical discovery. In this paper, we leverage a symbolic engine to generate derivations of equations at scale, and investigate the capabilities of LLMs when deriving goal equations from premises. Specifically, we employ in-context learning for GPT and fine-tune a range of T5 models to compare the robustness and generalisation of pre-training strategies to specialised models.
Empirical results show that fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and out-of-distribution test sets in conventional scores. However, an in-depth analysis reveals that the fine-tuned models are more sensitive to perturbations involving unseen symbols and (to a lesser extent) changes to equation structure. In addition, we analyse 1.7K equations, and over 200 derivations, to highlight common reasoning errors such as the inclusion of incorrect, irrelevant, and redundant equations. Finally, we explore the suitability of existing metrics for evaluating mathematical derivations and find evidence that, while they can capture general properties such as sensitivity to perturbations, they fail to highlight fine-grained reasoning errors and essential differences between models. Overall, this work demonstrates that training models on synthetic data may improve their math capabilities beyond much larger LLMs, but current metrics are not appropriately assessing the quality of generated mathematical text.",,arXiv,"['cs.cl', 'math.ho']",, -1261,lorahub efficient crosstask generalization via dynamic lora composition,"['Chengsong Huang', 'Qian Liu', 'Bill Yuchen Lin', 'Tianyu Pang', 'Chao Du', 'Min Lin']",http://arxiv.org/pdf/2307.13269v1.pdf,2023-07-25,," Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a strategic framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a novel task, LoraHub enables the fluid combination of multiple LoRA modules, eradicating the need for human expertise. Notably, the composition requires neither additional model parameters nor gradients. Our empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest that LoraHub can effectively mimic the performance of in-context learning in few-shot scenarios, excluding the necessity of in-context examples alongside each inference input. A significant contribution of our research is the fostering of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks. We anticipate this resource will widen access to and spur advancements in general intelligence as well as LLMs in production. Code will be available at https://github.com/sail-sg/lorahub.",,arXiv,"['cs.cl', 'cs.ai']",, -1262,layoutllmt2i eliciting layout guidance from llm for texttoimage generation,"['Leigang Qu', 'Shengqiong Wu', 'Hao Fei', 'Liqiang Nie', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.05095v2.pdf,2023-08-09,," In the text-to-image generation field, recent remarkable progress in Stable Diffusion makes it possible to generate rich kinds of novel photorealistic images. However, current models still face misalignment issues (e.g., problematic spatial relation understanding and numeration failure) in complex natural scenes, which impedes the high-faithfulness text-to-image generation. Although recent efforts have been made to improve controllability by giving fine-grained guidance (e.g., sketch and scribbles), this issue has not been fundamentally tackled since users have to provide such guidance information manually. In this work, we strive to synthesize high-fidelity images that are semantically aligned with a given textual prompt without any guidance.
Toward this end, we propose a coarse-to-fine paradigm to achieve layout planning and image generation. Concretely, we first generate the coarse-grained layout conditioned on a given textual prompt via in-context learning based on Large Language Models. Afterward, we propose a fine-grained object-interaction diffusion method to synthesize high-faithfulness images conditioned on the prompt and the automatically generated layout. Extensive experiments demonstrate that our proposed method outperforms the state-of-the-art models in terms of layout and image generation. Our code and settings are available at https://layoutllm-t2i.github.io.",,arXiv,"['cs.cv', 'cs.ai']",, -1263,audioldm 2 learning holistic audio generation with selfsupervised pretraining,"['Haohe Liu', 'Qiao Tian', 'Yi Yuan', 'Xubo Liu', 'Xinhao Mei', 'Qiuqiang Kong', 'Yuping Wang', 'Wenwu Wang', 'Yuxuan Wang', 'Mark D. Plumbley']",http://arxiv.org/pdf/2308.05734v2.pdf,2023-08-10,," Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called ""language of audio"" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at https://audioldm.github.io/audioldm2.",,arXiv,"['cs.sd', 'cs.ai', 'cs.mm', 'eess.as', 'eess.sp']",, -1264,time travel in llms tracing data contamination in large language models,"['Shahriar Golchin', 'Mihai Surdeanu']",http://arxiv.org/pdf/2308.08493v2.pdf,2023-08-16,," Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ ""guided instruction:"" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas.
The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a ""general instruction"" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",, -1265,inductivebias learning generating code models with large language model,"['Toma Tanaka', 'Naofumi Emoto', 'Tsukasa Yumibayashi']",http://arxiv.org/pdf/2308.09890v1.pdf,2023-08-19,," Large Language Models(LLMs) have been attracting attention due to a ability called in-context learning(ICL). ICL, without updating the parameters of a LLM, it is possible to achieve highly accurate inference based on rules ``in the context'' by merely inputting a training data into the prompt. Although ICL is a developing field with many unanswered questions, LLMs themselves serves as a inference model, seemingly realizing inference without explicitly indicate ``inductive bias''. On the other hand, a code generation is also a highlighted application of LLMs. The accuracy of code generation has dramatically improved, enabling even non-engineers to generate code to perform the desired tasks by crafting appropriate prompts. In this paper, we propose a novel ``learning'' method called an ``Inductive-Bias Learning (IBL)'', which combines the techniques of ICL and code generation. An idea of IBL is straightforward. Like ICL, IBL inputs a training data into the prompt and outputs a code with a necessary structure for inference (we referred to as ``Code Model'') from a ``contextual understanding''. Despite being a seemingly simple approach, IBL encompasses both a ``property of inference without explicit inductive bias'' inherent in ICL and a ``readability and explainability'' of the code generation. Surprisingly, generated Code Models have been found to achieve predictive accuracy comparable to, and in some cases surpassing, ICL and representative machine learning models. Our IBL code is open source: https://github.com/fuyu-quant/IBLM",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, -1266,exploring parameterefficient finetuning techniques for code generation with large language models,"['Martin Weyssow', 'Xin Zhou', 'Kisub Kim', 'David Lo', 'Houari Sahraoui']",http://arxiv.org/pdf/2308.10462v1.pdf,2023-08-21,," Large Language Models (LLMs) possess impressive capabilities to generate meaningful code snippets given natural language intents in zero-shot, i.e., without the need for specific fine-tuning. In the perspective of unleashing their full potential, prior work has demonstrated the benefits of fine-tuning the models to task-specific data. However, fine-tuning process demands heavy computational costs and is intractable when resources are scarce, especially for models with billions of parameters.
In light of these challenges, previous studies explored In-Context Learning (ICL) as an effective strategy to generate contextually appropriate code without fine-tuning. However, it operates at inference time and does not involve learning task-specific parameters, potentially limiting the model's performance on downstream tasks. In this context, we foresee that Parameter-Efficient Fine-Tuning (PEFT) techniques carry a high potential for efficiently specializing LLMs to task-specific data. In this paper, we deliver a comprehensive study of LLMs with the impact of PEFT techniques under the automated code generation scenario. Our experimental results reveal the superiority and potential of such techniques over ICL on a wide range of LLMs in reducing the computational burden and improving performance. Therefore, the study opens opportunities for broader applications of PEFT in software engineering scenarios.",,arXiv,"['cs.se', 'cs.cl', 'cs.lg']",, -1267,causal intersectionality and dual form of gradient descent for multimodal analysis a case study on hateful memes,"['Yosuke Miyanishi', 'Minh Le Nguyen']",http://arxiv.org/pdf/2308.11585v1.pdf,2023-08-19,," In the wake of the explosive growth of machine learning (ML) usage, particularly within the context of emerging Large Language Models (LLMs), comprehending the semantic significance rooted in their internal workings is crucial. While causal analyses focus on defining semantics and its quantification, the gradient-based approach is central to explainable AI (XAI), tackling the interpretation of the black box. By synergizing these approaches, the exploration of how a model's internal mechanisms illuminate its causal effect has become integral for evidence-based decision-making. A parallel line of research has revealed that intersectionality - the combinatory impact of multiple demographics of an individual - can be structured in the form of an Averaged Treatment Effect (ATE). Initially, this study illustrates that the hateful memes detection problem can be formulated as an ATE, assisted by the principles of intersectionality, and that a modality-wise summarization of gradient-based attention attribution scores can delineate the distinct behaviors of three Transformerbased models concerning ATE. Subsequently, we show that the latest LLM LLaMA2 has the ability to disentangle the intersectional nature of memes detection in an in-context learning setting, with their mechanistic properties elucidated via meta-gradient, a secondary form of gradient. In conclusion, this research contributes to the ongoing dialogue surrounding XAI and the multifaceted nature of ML models.",,arXiv,"['cs.ai', 'cs.cl']",, -1268,empowering dynamicsaware texttovideo diffusion with large language models,"['Hao Fei', 'Shengqiong Wu', 'Wei Ji', 'Hanwang Zhang', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.13812v1.pdf,2023-08-26,," Text-to-video (T2V) synthesis has gained increasing attention in the community, in which the recently emerged diffusion models (DMs) have promisingly shown stronger performance than the past approaches. While existing state-of-the-art DMs are competent to achieve high-resolution video generation, they may largely suffer from key limitations (e.g., action occurrence disorders, crude video motions) with respect to the intricate temporal dynamics modeling, one of the crux of video synthesis. In this work, we investigate strengthening the awareness of video dynamics for DMs, for high-quality T2V generation.
Inspired by human intuition, we design an innovative dynamic scene manager (dubbed as Dysen) module, which includes (step-1) extracting from input text the key actions with proper time-order arrangement, (step-2) transforming the action schedules into the dynamic scene graph (DSG) representations, and (step-3) enriching the scenes in the DSG with sufficient and reasonable details. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) via in-context learning, Dysen realizes (nearly) human-level temporal dynamics understanding. Finally, the resulting video DSG with rich action scene details is encoded as fine-grained spatio-temporal features, integrated into the backbone T2V DM for video generating. Experiments on popular T2V datasets suggest that our framework consistently outperforms prior arts with significant margins, especially in the scenario with complex actions. Project page at https://haofei.vip/Dysen-VDM",,arXiv,"['cs.ai', 'cs.cv']",, -1269,identifying and mitigating the security risks of generative ai,"['Clark Barrett', 'Brad Boyd', 'Elie Burzstein', 'Nicholas Carlini', 'Brad Chen', 'Jihye Choi', 'Amrita Roy Chowdhury', 'Mihai Christodorescu', 'Anupam Datta', 'Soheil Feizi', 'Kathleen Fisher', 'Tatsunori Hashimoto', 'Dan Hendrycks', 'Somesh Jha', 'Daniel Kang', 'Florian Kerschbaum', 'Eric Mitchell', 'John Mitchell', 'Zulfikar Ramzan', 'Khawaja Shams', 'Dawn Song', 'Ankur Taly', 'Diyi Yang']",http://arxiv.org/pdf/2308.14840v3.pdf,2023-08-28,," Every major technical invention resurfaces the dual-use dilemma -- the new technology has the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such as large language models (LLMs) and diffusion models, have shown remarkable capabilities (e.g., in-context learning, code-completion, and text-to-image generation and editing). However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks. This paper reports the findings of a workshop held at Google (co-organized by Stanford University and the University of Wisconsin-Madison) on the dual-use dilemma posed by GenAI. This paper is not meant to be comprehensive, but is rather an attempt to synthesize some of the interesting findings from the workshop. We discuss short-term and long-term goals for the community on this topic. We hope this paper provides both a launching point for a discussion on this important topic as well as interesting problems that the research community can work to address.",,arXiv,['cs.ai'],, -1270,anomalygpt detecting industrial anomalies using large visionlanguage models,"['Zhaopeng Gu', 'Bingke Zhu', 'Guibo Zhu', 'Yingying Chen', 'Ming Tang', 'Jinqiao Wang']",http://arxiv.org/pdf/2308.15366v3.pdf,2023-08-29,," Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have demonstrated the capability of understanding images and achieved remarkable performance in various visual tasks. Despite their strong abilities in recognizing common objects due to extensive training datasets, they lack specific domain knowledge and have a weaker understanding of localized details within objects, which hinders their effectiveness in the Industrial Anomaly Detection (IAD) task. On the other hand, most existing IAD methods only provide anomaly scores and necessitate the manual setting of thresholds to distinguish between normal and abnormal samples, which restricts their practical implementation.
In this paper, we explore the utilization of LVLM to address the IAD problem and propose AnomalyGPT, a novel IAD approach based on LVLM. We generate training data by simulating anomalous images and producing corresponding textual descriptions for each image. We also employ an image decoder to provide fine-grained semantic and design a prompt learner to fine-tune the LVLM using prompt embeddings. Our AnomalyGPT eliminates the need for manual threshold adjustments, thus directly assesses the presence and locations of anomalies. Additionally, AnomalyGPT supports multi-turn dialogues and exhibits impressive few-shot in-context learning capabilities. With only one normal shot, AnomalyGPT achieves the state-of-the-art performance with an accuracy of 86.1%, an image-level AUC of 94.1%, and a pixel-level AUC of 95.3% on the MVTec-AD dataset. Code is available at https://github.com/CASIA-IVA-Lab/AnomalyGPT.",,arXiv,['cs.cv'],, -1271,business process text sketch automation generation using large language model,"['Rui Zhu', 'Quanzhou Hu', 'Wenxin Li', 'Honghao Xiao', 'Chaogang Wang', 'Zixin Zhou']",http://arxiv.org/pdf/2309.01071v1.pdf,2023-09-03,," Business Process Management (BPM) is gaining increasing attention as it has the potential to cut costs while boosting output and quality. Business process document generation is a crucial stage in BPM. However, due to a shortage of datasets, data-driven deep learning techniques struggle to deliver the expected results. We propose an approach to transform Conditional Process Trees (CPTs) into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs). The traditional prompting approach (Few-shot In-Context Learning) tries to get the correct answer in one go, and it can find the pattern of transforming simple CPTs into BPTSs, but for close-domain and CPTs with complex hierarchy, the traditional prompts perform weakly and with low correctness. We suggest using this technique to break down a difficult CPT into a number of basic CPTs and then solve each one in turn, drawing inspiration from the divide-and-conquer strategy. We chose 100 process trees with depths ranging from 2 to 5 at random, as well as CPTs with many nodes, many degrees of selection, and cyclic nesting. Experiments show that our method can achieve a correct rate of 93.42%, which is 45.17% better than traditional prompting methods. Our proposed method provides a solution for business process document generation in the absence of datasets, and secondly, it becomes potentially possible to provide a large number of datasets for the process model extraction (PME) domain.",,arXiv,['cs.cl'],, -1272,textbooks are all you need ii phi15 technical report,"['Yuanzhi Li', 'Sébastien Bubeck', 'Ronen Eldan', 'Allie Del Giorno', 'Suriya Gunasekar', 'Yin Tat Lee']",http://arxiv.org/pdf/2309.05463v1.pdf,2023-09-11,," We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality"" data as a way to enhance the learning process compared to traditional web data.
We follow the ``Textbooks Are All You Need"" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step"" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.",,arXiv,"['cs.cl', 'cs.ai']",, -1273,uncovering mesaoptimization algorithms in transformers,"['Johannes von Oswald', 'Eyvind Niklasson', 'Maximilian Schlegel', 'Seijin Kobayashi', 'Nicolas Zucchet', 'Nino Scherrer', 'Nolan Miller', 'Mark Sandler', 'Blaise Agüera y Arcas', 'Max Vladymyrov', 'Razvan Pascanu', 'João Sacramento']",http://arxiv.org/pdf/2309.05858v1.pdf,2023-09-11,," Transformers have become the dominant model in deep learning, but the reason for their superior performance is poorly understood. Here, we hypothesize that the strong performance of Transformers stems from an architectural bias towards mesa-optimization, a learned process running within the forward pass of a model consisting of the following two steps: (i) the construction of an internal learning objective, and (ii) its corresponding solution found through optimization. To test this hypothesis, we reverse-engineer a series of autoregressive Transformers trained on simple sequence modeling tasks, uncovering underlying gradient-based mesa-optimization algorithms driving the generation of predictions. Moreover, we show that the learned forward-pass optimization algorithm can be immediately repurposed to solve supervised few-shot tasks, suggesting that mesa-optimization might underlie the in-context learning capabilities of large language models. Finally, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context. We find that this layer can lead to improved performance in synthetic and preliminary language modeling experiments, adding weight to our hypothesis that mesa-optimization is an important operation hidden within the weights of trained Transformers.",,arXiv,"['cs.lg', 'cs.ai']",, -1274,narrowing the gap between supervised and unsupervised sentence representation learning with large language model,"['Mingxin Li', 'Richong Zhang', 'Zhijie Nie', 'Yongyi Mao']",http://arxiv.org/pdf/2309.06453v1.pdf,2023-09-12,," Sentence Representation Learning (SRL) is a fundamental task in Natural Language Processing (NLP), with Contrastive learning of Sentence Embeddings (CSE) as the mainstream technique due to its superior performance. An intriguing phenomenon in CSE is the significant performance gap between supervised and unsupervised methods, even when their sentence encoder and loss function are the same. Previous works attribute this performance gap to differences in two representation properties (alignment and uniformity). However, alignment and uniformity only measure the results, which means they cannot answer ""What happens during the training process that leads to the performance gap?"" and ""How can the performance gap be narrowed?"".
In this paper, we conduct empirical experiments to answer these ""What"" and ""How"" questions. We first answer the ""What"" question by thoroughly comparing the behavior of supervised and unsupervised CSE during their respective training processes. From the comparison, We observe a significant difference in fitting difficulty. Thus, we introduce a metric, called Fitting Difficulty Increment (FDI), to measure the fitting difficulty gap between the evaluation dataset and the held-out training dataset, and use the metric to answer the ""What"" question. Then, based on the insights gained from the ""What"" question, we tackle the ""How"" question by increasing the fitting difficulty of the training dataset. We achieve this by leveraging the In-Context Learning (ICL) capability of the Large Language Model (LLM) to generate data that simulates complex patterns. By utilizing the hierarchical patterns in the LLM-generated data, we effectively narrow the gap between supervised and unsupervised CSE.",,arXiv,"['cs.cl', 'cs.lg']",, -1275,understanding catastrophic forgetting in language models via implicit inference,"['Suhas Kotha', 'Jacob Mitchell Springer', 'Aditi Raghunathan']",http://arxiv.org/pdf/2309.10105v1.pdf,2023-09-18,," Fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback) is a crucial step in training language models to robustly carry out tasks of interest. However, we lack a systematic understanding of the effects of fine-tuning, particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on tasks within the fine-tuning data distribution comes at the expense of suppressing model capabilities on other tasks. This degradation is especially pronounced for tasks ""closest"" to the fine-tuning distribution. We hypothesize that language models implicitly infer the task of the prompt corresponds, and the fine-tuning process predominantly skews this task inference towards tasks in the fine-tuning distribution. To test this hypothesis, we propose Conjugate Prompting to see if we can recover pretrained capabilities. Conjugate prompting artificially makes the task look farther from the fine-tuning distribution while requiring the same capability. We find that conjugate prompting systematically recovers some of the pretraining capabilities on our synthetic setup. We then apply conjugate prompting to real-world LLMs using the observation that fine-tuning distributions are typically heavily skewed towards English. We find that simply translating the prompts to different languages can cause the fine-tuned models to respond like their pretrained counterparts instead. This allows us to recover the in-context learning abilities lost via instruction tuning, and more concerningly, to recover harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT.",,arXiv,"['cs.cl', 'cs.lg']",, -1276,gpt4aigchip towards nextgeneration ai accelerator design automation via large language models,"['Yonggan Fu', 'Yongan Zhang', 'Zhongzhi Yu', 'Sixu Li', 'Zhifan Ye', 'Chaojian Li', 'Cheng Wan', 'Yingyan Lin']",http://arxiv.org/pdf/2309.10730v1.pdf,2023-09-19,," The remarkable capabilities and intricate nature of Artificial Intelligence (AI) have dramatically escalated the imperative for specialized AI accelerators. Nonetheless, designing these accelerators for various AI workloads remains both labor- and time-intensive.
While existing design exploration and automation tools can partially alleviate the need for extensive human involvement, they still demand substantial hardware expertise, posing a barrier to non-experts and stifling AI accelerator development. Motivated by the astonishing potential of large language models (LLMs) for generating high-quality content in response to human language instructions, we embark on this work to examine the possibility of harnessing LLMs to automate AI accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework intended to democratize AI accelerator design by leveraging human natural languages instead of domain-specific languages. Specifically, we first perform an in-depth investigation into LLMs' limitations and capabilities for AI accelerator design, thus aiding our understanding of our current position and garnering insights into LLM-powered automated AI accelerator design. Furthermore, drawing inspiration from the above insights, we develop a framework called GPT4AIGChip, which features an automated demo-augmented prompt-generation pipeline utilizing in-context learning to guide LLMs towards creating high-quality AI accelerator design. To our knowledge, this work is the first to demonstrate an effective pipeline for LLM-powered automated AI accelerator generation. Accordingly, we anticipate that our insights and framework can serve as a catalyst for innovations in next-generation LLM-powered design automation tools.",,arXiv,"['cs.lg', 'cs.ar']",, -1277,a benchmark for learning to translate a new language from one grammar book,"['Garrett Tanzer', 'Mirac Suzgun', 'Eline Visser', 'Dan Jurafsky', 'Luke Melas-Kyriazi']",http://arxiv.org/pdf/2309.16575v1.pdf,2023-09-28,," Large language models (LLMs) can perform impressive feats with in-context learning or lightweight finetuning. It is natural to wonder how well these models adapt to genuinely new tasks, but how does one find tasks that are unseen in internet-scale training sets? We turn to a field that is explicitly motivated and bottlenecked by a scarcity of web data: low-resource languages. In this paper, we introduce MTOB (Machine Translation from One Book), a benchmark for learning to translate between English and Kalamang -- a language with less than 200 speakers and therefore virtually no presence on the web -- using several hundred pages of field linguistics reference materials. This task framing is novel in that it asks a model to learn a language from a single human-readable book of grammar explanations, rather than a large mined corpus of in-domain data, more akin to L2 learning than L1 acquisition. We demonstrate that baselines using current LLMs are promising but fall short of human performance, achieving 44.7 chrF on Kalamang to English translation and 45.8 chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a human who learned Kalamang from the same reference materials.
We hope that MTOB will help measure LLM capabilities along a new dimension, and that the methods developed to solve it could help expand access to language technology for underserved communities by leveraging qualitatively different kinds of data than traditional machine translation.",,arXiv,['cs.cl'],, -1278,benchmarking cognitive biases in large language models as evaluators,"['Ryan Koo', 'Minhwa Lee', 'Vipul Raheja', 'Jong Inn Park', 'Zae Myung Kim', 'Dongyeop Kang']",http://arxiv.org/pdf/2309.17012v1.pdf,2023-09-29,," Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 15 LLMs of four different size ranges and evaluate their output responses by preference ranking from the other LLMs as evaluators, such as System Star is better than System Square. We then evaluate the quality of ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark to measure six different cognitive biases in LLM evaluation outputs, such as the Egocentric bias where a model prefers to rank its own outputs highly in evaluation. We find that LLMs are biased text quality evaluators, exhibiting strong indications on our bias benchmark (average of 40% of comparisons across all models) within each of their evaluations that question their robustness as evaluators. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine preferences are misaligned with humans. According to our findings, LLMs may still be unable to be utilized for automatic annotation aligned with human preferences. Our project page is at: https://minnesotanlp.github.io/cobbler.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1279,fewertoken neural speech codec with timeinvariant codes,"['Yong Ren', 'Tao Wang', 'Jiangyan Yi', 'Le Xu', 'Jianhua Tao', 'Chuyuan Zhang', 'Junzuo Zhou']",http://arxiv.org/pdf/2310.00014v1.pdf,2023-09-15,," Language model based text-to-speech (TTS) models, like VALL-E, have gained attention for their outstanding in-context learning capability in zero-shot scenarios. Neural speech codec is a critical component of these models, which can convert speech into discrete token representations. However, excessive token sequences from the codec may negatively affect prediction accuracy and restrict the progression of Language model based TTS models. To address this issue, this paper proposes a novel neural speech codec with time-invariant codes named TiCodec. By encoding and quantizing time-invariant information into a separate code, TiCodec can reduce the amount of frame-level information that needs encoding, effectively decreasing the number of tokens as codes of speech. Furthermore, this paper introduces a time-invariant encoding consistency loss to enhance the consistency of time-invariant code within an utterance and force it to capture more global information, which can benefit the zero-shot TTS task. Experimental results demonstrate that TiCodec can not only enhance the quality of reconstruction speech with fewer tokens but also increase the similarity and naturalness, as well as reduce the word error rate of the synthesized speech by the TTS model.",,arXiv,"['cs.sd', 'eess.as']",, -1280,reactable enhancing react for table question answering,"['Yunjia Zhang', 'Jordan Henkel', 'Avrilia Floratou', 'Joyce Cahoon', 'Shaleen Deep', 'Jignesh M.
Patel']",http://arxiv.org/pdf/2310.00815v1.pdf,2023-10-01,," Table Question Answering (TQA) presents a substantial challenge at the intersection of natural language processing and data analytics. This task involves answering natural language (NL) questions on top of tabular data, demanding proficiency in logical reasoning, understanding of data semantics, and fundamental analytical capabilities. Due to its significance, a substantial volume of research has been dedicated to exploring a wide range of strategies aimed at tackling this challenge including approaches that leverage Large Language Models (LLMs) through in-context learning or Chain-of-Thought (CoT) prompting as well as approaches that train and fine-tune custom models. Nonetheless, a conspicuous gap exists in the research landscape, where there is limited exploration of how innovative foundational research, which integrates incremental reasoning with external tools in the context of LLMs, as exemplified by the ReAct paradigm, could potentially bring advantages to the TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable (ReAct for Table Question Answering tasks), a framework inspired by the ReAct paradigm that is carefully enhanced to address the challenges uniquely appearing in TQA tasks such as interpreting complex data semantics, dealing with errors generated by inconsistent data and generating intricate data transformations. ReAcTable relies on external tools such as SQL and Python code executors, to progressively enhance the data by generating intermediate data representations, ultimately transforming it into a more accessible format for answering the questions with greater ease. We demonstrate that ReAcTable achieves remarkable performance even when compared to fine-tuned approaches. In particular, it outperforms the best prior result on the WikiTQ benchmark, achieving an accuracy of 68.0% without requiring training a new model or fine-tuning.",,arXiv,['cs.db'],, -1281,graphtext graph reasoning in text space,"['Jianan Zhao', 'Le Zhuo', 'Yikang Shen', 'Meng Qu', 'Kai Liu', 'Michael Bronstein', 'Zhaocheng Zhu', 'Jian Tang']",http://arxiv.org/pdf/2310.01089v1.pdf,2023-10-02,," Large Language Models (LLMs) have gained the ability to assimilate human knowledge and facilitate natural language interactions with both humans and other LLMs. However, despite their impressive achievements, LLMs have not made significant advancements in the realm of graph machine learning. This limitation arises because graphs encapsulate distinct relational data, making it challenging to transform them into natural language that LLMs understand. In this paper, we bridge this gap with a novel framework, GraphText, that translates graphs into natural language. GraphText derives a graph-syntax tree for each graph that encapsulates both the node attributes and inter-node relationships. Traversal of the tree yields a graph text sequence, which is then processed by an LLM to treat graph tasks as text generation tasks. Notably, GraphText offers multiple advantages. It introduces training-free graph reasoning: even without training on graph data, GraphText with ChatGPT can achieve on par with, or even surpassing, the performance of supervised-trained graph neural networks through in-context learning (ICL). Furthermore, GraphText paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
These capabilities underscore the vast, yet-to-be-explored potential of LLMs in the domain of graph machine learning.",,arXiv,"['cs.cl', 'cs.lg']",, -1282,llmparser a llmbased log parsing framework,"['Zhihan Jiang', 'Jinyang Liu', 'Zhuangbin Chen', 'Yichen Li', 'Junjie Huang', 'Yintong Huo', 'Pinjia He', 'Jiazhen Gu', 'Michael R. Lyu']",http://arxiv.org/pdf/2310.01796v1.pdf,2023-10-03,," The process of log parsing, which converts log messages into structured formats, is a crucial step for various log analysis tasks. Although numerous log parsers have been proposed, their effectiveness on complex log data is often hindered due to reliance on human-made rules or learning-based models with limited training data. The recent rise of powerful large language models (LLMs) shows potential for log parsing due to their extensive pre-trained knowledge related to code and logging. However, their accuracy is currently limited due to the lack of specialized log parsing capabilities. Additionally, the inconsistency of their answers and significant overhead obstruct the practical implementation of LLM-based log parsing. To tackle these challenges, we introduce LLMParser, the first practical LLM-based log parsing framework. LLMParser enables accurate and robust log parsing by leveraging the in-context learning (ICL) capability of the LLM, employing a hierarchical candidate sampling algorithm, and selecting high-quality demonstrations. LLMParser also includes a novel adaptive parsing cache component to store and refine the templates generated by the LLM. This design aids in addressing the inefficiency of LLMs by rapid matching to previously parsed log templates. LLMParser also adaptively updates the templates in the parsing cache to ensure consistent parsed results. Extensive evaluation on large-scale public datasets demonstrates that LLMParser surpasses the state-of-the-art methods. Furthermore, LLMParser significantly reduces the query times to LLMs, achieving efficiency comparable to the most efficient baseline, Drain.",,arXiv,['cs.se'],, -1283,uncovering hidden geometry in transformers via disentangling position and context,"['Jiajun Song', 'Yiqiao Zhong']",http://arxiv.org/pdf/2310.04861v1.pdf,2023-10-07,," Transformers are widely used to extract complex semantic meanings from input tokens, yet they usually operate as black-box models. In this paper, we present a simple yet informative decomposition of hidden states (or embeddings) of trained transformers into interpretable components. For any layer, embedding vectors of input sequence samples are represented by a tensor $\boldsymbol{h} \in \mathbb{R}^{C \times T \times d}$. Given embedding vector $\boldsymbol{h}_{c,t} \in \mathbb{R}^d$ at sequence position $t \le T$ in a sequence (or context) $c \le C$, extracting the mean effects yields the decomposition \[ \boldsymbol{h}_{c,t} = \boldsymbol{\mu} + \mathbf{pos}_t + \mathbf{ctx}_c + \mathbf{resid}_{c,t} \] where $\boldsymbol{\mu}$ is the global mean vector, $\mathbf{pos}_t$ and $\mathbf{ctx}_c$ are the mean vectors across contexts and across positions respectively, and $\mathbf{resid}_{c,t}$ is the residual vector.
For popular transformer architectures and diverse text datasets, empirically we find pervasive mathematical structure: (1) $(\mathbf{pos}_t)_{t}$ forms a low-dimensional, continuous, and often spiral shape across layers, (2) $(\mathbf{ctx}_c)_c$ shows clear cluster structure that falls into context topics, and (3) $(\mathbf{pos}_t)_{t}$ and $(\mathbf{ctx}_c)_c$ are mutually incoherent -- namely $\mathbf{pos}_t$ is almost orthogonal to $\mathbf{ctx}_c$ -- which is canonical in compressed sensing and dictionary learning. This decomposition offers structural insights about input formats in in-context learning (especially for induction heads) and in arithmetic tasks.",,arXiv,"['cs.lg', 'cs.ai', 'stat.ml']",, -1284,lightweight incontext tuning for multimodal unified models,"['Yixin Chen', 'Shuai Zhang', 'Boran Han', 'Jiaya Jia']",http://arxiv.org/pdf/2310.05109v1.pdf,2023-10-08,," In-context learning (ICL) involves reasoning from given contextual examples. As more modalities comes, this procedure is becoming more challenging as the interleaved input modalities convolutes the understanding process. This is exemplified by the observation that multimodal models often struggle to effectively extrapolate from contextual examples to perform ICL. To address these challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), a lightweight module to enhance the ICL capabilities of multimodal unified models. The proposed M$^2$IXT module perceives an expandable context window to incorporate various labeled examples of multiple modalities (e.g., text, image, and coordinates). It can be prepended to various multimodal unified models (e.g., OFA, Unival, LLaVA) of different architectures and trained via a mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and datasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boost the few-shot ICL performance significantly (e.g., 18\% relative increase for OFA), and obtained state-of-the-art results across an array of tasks including visual question answering, image captioning, visual grounding, and visual entailment, while being considerably small in terms of model parameters (e.g., $\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibility and effectiveness of M$^2$IXT as a multimodal in-context learner.",,arXiv,['cs.cv'],, -1285,explainable claim verification via knowledgegrounded reasoning with large language models,"['Haoran Wang', 'Kai Shu']",http://arxiv.org/pdf/2310.05253v2.pdf,2023-10-08,," Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified.
Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. Our code and data are available.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1286,glitter or gold deriving structured insights from sustainability reports via large language models,"['Marco Bronzini', 'Carlo Nicolini', 'Bruno Lepri', 'Andrea Passerini', 'Jacopo Staiano']",http://arxiv.org/pdf/2310.05628v2.pdf,2023-10-09,," Over the last decade, several regulatory bodies have started requiring the disclosure of non-financial information from publicly listed companies, in light of the investors' increasing attention to Environmental, Social, and Governance (ESG) issues. Such information is publicly released in a variety of non-structured and multi-modal documentation. Hence, it is not straightforward to aggregate and consolidate such data in a cohesive framework to further derive insights about sustainability practices across companies and markets. Given these premises, it is natural to resort to Information Extraction (IE) techniques to provide concise, informative, and actionable data to the stakeholders. Moving beyond traditional text processing techniques, in this work we leverage Large Language Models (LLMs), along with the prominent in-context learning technique and the Retrieved Augmented Generation (RAG) paradigm, to extract semantically structured ESG-related information from companies' sustainability reports. We then adopt graph-based representations to conduct meaningful statistical, similarity and correlation analyses concerning the ESG-related actions disclosed by companies in their sustainability reports. These analyses unveiled that companies address ESG-related issues through several actions encompassing recognition, compliance, and partnerships; highlighting the complexity and joint efforts needed to address them. Moreover, disclosure similarities emerged among companies from the same region or sector. Lastly, we investigate which factual aspects impact the most on companies' ESG scores using our findings and other company information. This analysis unveiled that companies' disclosures affect ESG scores more than other financial or company characteristics.",,arXiv,"['cs.cl', 'cs.ce', 'cs.cy']",, -1287,are large language models post hoc explainers,"['Nicholas Kroeger', 'Dan Ley', 'Satyapriya Krishna', 'Chirag Agarwal', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.05797v2.pdf,2023-10-09,," Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers.
In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, opening up new frontiers in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1288,opseval a comprehensive taskoriented aiops benchmark for large language models,"['Yuhe Liu', 'Changhua Pei', 'Longlong Xu', 'Bohan Chen', 'Mingze Sun', 'Zhirui Zhang', 'Yongqian Sun', 'Shenglin Zhang', 'Kun Wang', 'Haiming Zhang', 'Jianhui Li', 'Gaogang Xie', 'Xidao Wen', 'Xiaohui Nie', 'Dan Pei']",http://arxiv.org/pdf/2310.07637v2.pdf,2023-10-11,," Large language models (LLMs) have exhibited remarkable capabilities in NLP-related tasks such as translation, summarizing, and generation. The application of LLMs in specific areas, notably AIOps (Artificial Intelligence for IT Operations), holds great potential due to their advanced abilities in information summarizing, report analyzing, and ability of API calling. Nevertheless, the performance of current LLMs in AIOps tasks is yet to be determined. Furthermore, a comprehensive benchmark is required to steer the optimization of LLMs tailored for AIOps. Compared with existing benchmarks that focus on evaluating specific fields like network configuration, in this paper, we present \textbf{OpsEval}, a comprehensive task-oriented AIOps benchmark designed for LLMs. For the first time, OpsEval assesses LLMs' proficiency in three crucial scenarios (Wired Network Operation, 5G Communication Operation, and Database Operation) at various ability levels (knowledge recall, analytical thinking, and practical application). The benchmark includes 7,200 questions in both multiple-choice and question-answer (QA) formats, available in English and Chinese. With quantitative and qualitative results, we show how various LLM tricks can affect the performance of AIOps, including zero-shot, chain-of-thought, and few-shot in-context learning. We find that GPT4-score is more consistent with experts than widely used Bleu and Rouge, which can be used to replace automatic metrics for large-scale qualitative evaluations.",,arXiv,"['cs.ai', 'cs.ni']",, -1289,eipetext evaluationguided iterative plan extraction for longform narrative text generation,"['Wang You', 'Wenshan Wu', 'Yaobo Liang', 'Shaoguang Mao', 'Chenfei Wu', 'Maosong Cao', 'Yuzhe Cai', 'Yiduo Guo', 'Yan Xia', 'Furu Wei', 'Nan Duan']",http://arxiv.org/pdf/2310.08185v1.pdf,2023-10-12,," Plan-and-Write is a common hierarchical approach in long-form narrative text generation, which first creates a plan to guide the narrative writing. Following this approach, several studies rely on simply prompting large language models for planning, which often yields suboptimal results.
In this paper, we propose a new framework called Evaluation-guided Iterative Plan Extraction for long-form narrative text generation (EIPE-text), which extracts plans from the corpus of narratives and utilizes the extracted plans to construct a better planner. EIPE-text has three stages: plan extraction, learning, and inference. In the plan extraction stage, it iteratively extracts and improves plans from the narrative corpus and constructs a plan corpus. We propose a question answer (QA) based evaluation mechanism to automatically evaluate the plans and generate detailed plan refinement instructions to guide the iterative improvement. In the learning stage, we build a better planner by fine-tuning with the plan corpus or in-context learning with examples in the plan corpus. Finally, we leverage a hierarchical approach to generate long-form narratives. We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling. Both GPT-4-based evaluations and human evaluations demonstrate that our method can generate more coherent and relevant long-form narratives. Our code will be released in the future.",,arXiv,"['cs.cl', 'cs.ai']",, -1290,prompting large language models with chainofthought for fewshot knowledge base question generation,"['Yuanyuan Liang', 'Jianing Wang', 'Hanlun Zhu', 'Lei Wang', 'Weining Qian', 'Yunshi Lan']",http://arxiv.org/pdf/2310.08395v3.pdf,2023-10-12,," The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. For the sake of expensive cost of large-scale question annotation, the methods of KBQG under low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotated data for fine-tuning, which is not well-suited for few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for reasoning, we formulate KBQG task as a reasoning problem, where the generation of a complete question is splitted into a series of sub-question generation. Our proposed prompting method KQG-CoT first retrieves supportive logical forms from the unlabeled data pool taking account of the characteristics of the logical form. Then, we write a prompt to explicit the reasoning chain of generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, -1291,do pretrained transformers really learn incontext by gradient descent,"['Lingfeng Shen', 'Aayush Mishra', 'Daniel Khashabi']",http://arxiv.org/pdf/2310.08540v1.pdf,2023-10-12,," Is In-Context Learning (ICL) implicitly equivalent to Gradient Descent (GD)? Several recent works draw analogies between the dynamics of GD and the emergent behavior of ICL in large language models. However, these works make assumptions far from the realistic natural language setting in which language models are trained.
Such discrepancies between theory and practice, therefore, necessitate further investigation to validate their applicability. We start by highlighting the weaknesses in prior works that construct Transformer weights to simulate gradient descent. Their experiments with training Transformers on ICL objective, inconsistencies in the order sensitivity of ICL and GD, sparsity of the constructed weights, and sensitivity to parameter changes are some examples of a mismatch from the real-world setting. Furthermore, we probe and compare the ICL vs. GD hypothesis in a natural setting. We conduct comprehensive empirical analyses on language models pretrained on natural data (LLaMa-7B). Our comparisons on various performance metrics highlight the inconsistent behavior of ICL and GD as a function of various factors such as datasets, models, and number of demonstrations. We observe that ICL and GD adapt the output distribution of language models differently. These results indicate that the equivalence between ICL and GD is an open hypothesis, requires nuanced considerations and calls for further studies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1292,mastering robot manipulation with multimodal prompts through pretraining and multitask finetuning,"['Jiachen Li', 'Qiaozi Gao', 'Michael Johnston', 'Xiaofeng Gao', 'Xuehai He', 'Suhaila Shakiah', 'Hangjie Shi', 'Reza Ghanadan', 'William Yang Wang']",http://arxiv.org/pdf/2310.09676v1.pdf,2023-10-14,," Prompt-based learning has been demonstrated as a compelling paradigm contributing to large language models' tremendous success (LLMs). Inspired by their success in language tasks, existing research has leveraged LLMs in embodied instruction following and task planning. However, not much attention has been paid to embodied tasks with multimodal prompts, combining vision signals with text descriptions. This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals. In this work, we introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts from multi-task expert trajectories. Our methods consist of a two-stage training pipeline that performs inverse dynamics pretraining and multi-task finetuning. To facilitate multimodal understanding, we design our multimodal prompt encoder by augmenting a pretrained LM with a residual connection to the visual input and model the dependencies among action dimensions. Empirically, we evaluate the efficacy of our method on the VIMA-BENCH and establish a new state-of-the-art (10% improvement in success rate). Moreover, we demonstrate that our model exhibits remarkable in-context learning ability.",,arXiv,"['cs.ro', 'cs.ai']",, -1293,unifying image processing as visual prompting question answering,"['Yihao Liu', 'Xiangyu Chen', 'Xianzheng Ma', 'Xintao Wang', 'Jiantao Zhou', 'Yu Qiao', 'Chao Dong']",http://arxiv.org/pdf/2310.10513v1.pdf,2023-10-16,," Image processing is a fundamental task in computer vision, which aims at enhancing image quality and extracting essential features for subsequent vision applications. Traditionally, task-specific models are developed for individual tasks and designing such models requires distinct expertise. Building upon the success of large language models (LLMs) in natural language processing (NLP), there is a similar trend in computer vision, which focuses on developing large-scale models through pretraining and in-context learning.
This paradigm shift reduces the reliance on task-specific models, yielding a powerful unified model to deal with various tasks. However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, \textit{etc}. Our proposed framework, named PromptGIP, unifies these diverse image processing tasks within a universal framework. Inspired by NLP question answering (QA) techniques, we employ a visual prompting question answering paradigm. Specifically, we treat the input-output image pair as a structured question-answer sentence, thereby reprogramming the image processing task as a prompting QA problem. PromptGIP can undertake diverse \textbf{cross-domain} tasks using provided visual prompts, eliminating the need for task-specific finetuning. Our methodology offers a universal and adaptive solution to general image processing. While PromptGIP has demonstrated a certain degree of out-of-domain task generalization capability, further research is expected to fully explore its more powerful emergent generalization.",,arXiv,"['cs.cv', 'eess.iv']",, -1294,incontext pretraining language modeling beyond document boundaries,"['Weijia Shi', 'Sewon Min', 'Maria Lomeli', 'Chunting Zhou', 'Margaret Li', 'Xi Victoria Lin', 'Noah A. Smith', 'Luke Zettlemoyer', 'Scott Yih', 'Mike Lewis']",http://arxiv.org/pdf/2310.10638v3.pdf,2023-10-16,," Large language models (LMs) are currently trained to predict tokens given document prefixes, enabling them to directly perform long-form generation and prompting-style tasks which can be reduced to document completion. Existing pretraining pipelines train LMs by concatenating random sets of short documents to create input contexts but the prior documents provide no signal for predicting the next document. We instead present In-Context Pretraining, a new approach where language models are pretrained on a sequence of related documents, thereby explicitly encouraging them to read and reason across document boundaries. We can do In-Context Pretraining by simply changing the document ordering so that each context contains related documents, and directly applying existing pretraining pipelines. However, this document sorting problem is challenging. There are billions of documents and we would like the sort to maximize contextual similarity for every document without repeating any data. To do this, we introduce approximate algorithms for finding related documents with efficient nearest neighbor search and constructing coherent input contexts with a graph traversal algorithm. Our experiments show In-Context Pretraining offers a simple and scalable approach to significantly enhance LMs' performance: we see notable improvements in tasks that require more complex contextual reasoning, including in-context learning (+8%), reading comprehension (+15%), faithfulness to previous contexts (+16%), long-context reasoning (+5%), and retrieval augmentation (+9%).",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1295,ideal influencedriven selective annotations empower incontext learners in large language models,"['Shaokun Zhang', 'Xiaobo Xia', 'Zhaoqing Wang', 'Ling-Hao Chen', 'Jiale Liu', 'Qingyun Wu', 'Tongliang Liu']",http://arxiv.org/pdf/2310.10873v1.pdf,2023-10-16,," In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models.
These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is lastly introduced. It iteratively selects the data if it provides a maximum marginal gain with respect to quantified influence. Compared with previous efforts on selective annotations, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance under lower time consumption during subset selection. The project page is available at https://skzhang1.github.io/IDEAL/.",,arXiv,['cs.cl'],, -1296,eureka humanlevel reward design via coding large language models,"['Yecheng Jason Ma', 'William Liang', 'Guanzhi Wang', 'De-An Huang', 'Osbert Bastani', 'Dinesh Jayaraman', 'Yuke Zhu', 'Linxi Fan', 'Anima Anandkumar']",http://arxiv.org/pdf/2310.12931v1.pdf,2023-10-19,," Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, -1297,selfprompted chainofthought on large language models for opendomain multihop reasoning,"['Jinyuan Wang', 'Junlong Li', 'Hai Zhao']",http://arxiv.org/pdf/2310.13552v2.pdf,2023-10-20,," In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense.
To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack of quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT selection and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling $\sim$50\% of intermediate answers on MuSiQue-Ans dataset.",,arXiv,"['cs.cl', 'cs.ai']",, -1298,explainable depression symptom detection in social media,"['Eliseo Bao Souto', 'Anxo Pérez', 'Javier Parapar']",http://arxiv.org/pdf/2310.13664v2.pdf,2023-10-20,," Users of social platforms often perceive these sites as supportive spaces to post about their mental health issues. Those conversations contain important traces about individuals' health risks. Recently, researchers have exploited this online information to construct mental health detection models, which aim to identify users at risk on platforms like Twitter, Reddit or Facebook. Most of these models are centred on achieving good classification results, ignoring the explainability and interpretability of the decisions. Recent research has pointed out the importance of using clinical markers, such as the use of symptoms, to improve trust in the computational models by health professionals. In this paper, we propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings. We present two approaches: i) train a model to classify, and another one to explain the classifier's decision separately and ii) unify the two tasks simultaneously using a single model. Additionally, for this latter manner, we also investigated the performance of recent conversational LLMs when using in-context learning. Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms, enhancing trust in the automated process. We evaluate our approach using recent symptom-based datasets, employing both offline and expert-in-the-loop metrics to assess the quality of the explanations generated by our models.
The experimental results show that it is possible to achieve good classification results while generating interpretable symptom-based explanations.",,arXiv,['cs.cl'],, -1299,ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms,"['Young-Suk Lee', 'Md Arafat Sultan', 'Yousef El-Kurdi', 'Tahira Naseem Asim Munawar', 'Radu Florian', 'Salim Roukos', 'Ramón Fernandez Astudillo']",http://arxiv.org/pdf/2310.13961v1.pdf,2023-10-21,," Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision. One limitation of these approaches is that they resort to very large language models (around 175B parameters) that are also proprietary and non-public. Here we explore the application of such techniques to language models that are much smaller (around 10B--40B parameters) and have permissive licenses. We find the Self-Instruct approach to be less effective at these sizes and propose new ICL methods that draw on two main ideas: (a) Categorization and simplification of the ICL templates to make prompt learning easier for the LM, and (b) Ensembling over multiple LM outputs to help select high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct seed tasks and employs separate pipelines for instructions that require an input and instructions that do not. Empirical investigations with different LMs show that: (1) Our proposed method yields higher-quality instruction tuning data than Self-Instruct, (2) It improves performances of both vanilla and instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned LMs generate more useful outputs than their larger un-tuned counterparts. Our codebase is available at https://github.com/IBM/ensemble-instruct.",,arXiv,"['cs.cl', 'cs.ai']",, -1300,investigating the fairness of large language models for predictions on tabular data,"['Yanchen Liu', 'Srishti Gautam', 'Jiaqi Ma', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.14607v1.pdf,2023-10-23,," Recent literature has suggested the potential of using large language models (LLMs) to make predictions for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in the society. To this end, as well as the widespread use of tabular data in many high-stake applications, it is imperative to explore the following questions: what sources of information do LLMs draw upon when making predictions for tabular tasks; whether and to what extent are LLM predictions for tabular tasks influenced by social biases and stereotypes; and what are the consequential implications for fairness? Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data which significantly impact their fairness in tabular prediction tasks. Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and fine-tuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pre-training corpus, not only from the downstream task datasets.
Besides, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, -1301,large language models are visual reasoning coordinators,"['Liangyu Chen', 'Bo Li', 'Sheng Shen', 'Jingkang Yang', 'Chunyuan Li', 'Kurt Keutzer', 'Trevor Darrell', 'Ziwei Liu']",http://arxiv.org/pdf/2310.15166v1.pdf,2023-10-23,," Visual reasoning requires multimodal perception and commonsense cognition of the world. Recently, multiple vision-language models (VLMs) have been proposed with excellent commonsense reasoning ability in various domains. However, how to harness the collective power of these complementary VLMs is rarely explored. Existing methods like ensemble still struggle to aggregate these models with the desired higher-order communications. In this work, we propose Cola, a novel paradigm that coordinates multiple VLMs for visual reasoning. Our key insight is that a large language model (LLM) can efficiently coordinate multiple VLMs by facilitating natural language communication that leverages their distinct and complementary capabilities. Extensive experiments demonstrate that our instruction tuning variant, Cola-FT, achieves state-of-the-art performance on visual question answering (VQA), outside knowledge VQA, visual entailment, and visual spatial reasoning tasks. Moreover, we show that our in-context learning variant, Cola-Zero, exhibits competitive performance in zero and few-shot settings, without finetuning. Through systematic ablation studies and visualizations, we validate that a coordinator LLM indeed comprehends the instruction prompts as well as the separate functionalities of VLMs; it then coordinates them to enable impressive visual reasoning capabilities.",,arXiv,"['cs.cv', 'cs.cl']",, -1302,function vectors in large language models,"['Eric Todd', 'Millicent L. Li', 'Arnab Sen Sharma', 'Aaron Mueller', 'Byron C. Wallace', 'David Bau']",http://arxiv.org/pdf/2310.15213v1.pdf,2023-10-23,," We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs such as zero-shot and natural text settings that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers. We investigate the internal structure of FVs and find while that they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV. Finally, we test semantic vector composition in FVs, and find that to some extent they can be summed to create vectors that trigger new complex tasks.
Taken together, our findings suggest that LLMs contain internal abstractions of general-purpose functions that can be invoked in a variety of contexts.",,arXiv,"['cs.cl', 'cs.lg']",, -1303,tcrallm token compression retrieval augmented large language model for inference cost reduction,"['Junyi Liu', 'Liangzhi Li', 'Tong Xiang', 'Bowen Wang', 'Yiming Qian']",http://arxiv.org/pdf/2310.15556v2.pdf,2023-10-24,," Since ChatGPT released its API for public use, the number of applications built on top of commercial large language models (LLMs) increase exponentially. One popular usage of such models is leveraging its in-context learning ability and generating responses given user queries leveraging knowledge obtained by retrieval augmentation. One problem of deploying commercial retrieval-augmented LLMs is the cost due to the additionally retrieved context that largely increases the input token size of the LLMs. To mitigate this, we propose a token compression scheme that includes two methods: summarization compression and semantic compression. The first method applies a T5-based model that is fine-tuned by datasets generated using self-instruct containing samples with varying lengths and reduce token size by doing summarization. The second method further compresses the token size by removing words with lower impact on the semantic. In order to adequately evaluate the effectiveness of the proposed methods, we propose and utilize a dataset called Food-Recommendation DB (FRDB) focusing on food recommendation for women around pregnancy period or infants. Our summarization compression can reduce 65% of the retrieval token size with further 0.3% improvement on the accuracy; semantic compression provides a more flexible way to trade-off the token size with performance, for which we can reduce the token size by 20% with only 1.6% of accuracy drop.",,arXiv,"['cs.cl', 'cs.ir']",, -1304,testing the limits unusual text inputs generation for mobile app crash detection with large language model,"['Zhe Liu', 'Chunyang Chen', 'Junjie Wang', 'Mengzhuo Chen', 'Boyu Wu', 'Xing Che', 'Dandan Wang', 'Qing Wang']",http://arxiv.org/pdf/2310.15657v1.pdf,2023-10-24,," Mobile applications have become a ubiquitous part of our daily life, providing users with access to various services and utilities. Text input, as an important interaction channel between users and applications, plays an important role in core functionality such as search queries, authentication, messaging, etc. However, certain special text (e.g., -18 for Font Size) can cause the app to crash, and generating diversified unusual inputs for fully testing the app is highly demanded. Nevertheless, this is also challenging due to the combination of explosion dilemma, high context sensitivity, and complex constraint relations. This paper proposes InputBlaster which leverages the LLM to automatically generate unusual text inputs for mobile app crash detection. It formulates the unusual inputs generation problem as a task of producing a set of test generators, each of which can yield a batch of unusual text inputs under the same mutation rule. In detail, InputBlaster leverages LLM to produce the test generators together with the mutation rules serving as the reasoning chain, and utilizes the in-context learning schema to demonstrate the LLM with examples for boosting the performance. InputBlaster is evaluated on 36 text input widgets with cash bugs involving 31 popular Android apps, and results show that it achieves 78% bug detection rate, with 136% higher than the best baseline.
Besides, we integrate it with the automated GUI testing tool and detect 37 unseen crashes in real-world apps from Google Play.",,arXiv,['cs.se'],, -1305,unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving,"['Zhan Ling', 'Yunhao Fang', 'Xuanlin Li', 'Tongzhou Mu', 'Mingu Lee', 'Reza Pourreza', 'Roland Memisevic', 'Hao Su']",http://arxiv.org/pdf/2311.00694v1.pdf,2023-11-01,," Large Language Models (LLMs) have achieved tremendous progress, yet they still often struggle with challenging reasoning problems. Current approaches address this challenge by sampling or searching detailed and low-level reasoning chains. However, these methods are still limited in their exploration capabilities, making it challenging for correct solutions to stand out in the huge solution space. In this work, we unleash LLMs' creative potential for exploring multiple diverse problem solving strategies by framing an LLM as a hierarchical policy via in-context learning. This policy comprises of a visionary leader that proposes multiple diverse high-level problem-solving tactics as hints, accompanied by a follower that executes detailed problem-solving processes following each of the high-level instruction. The follower uses each of the leader's directives as a guide and samples multiple reasoning chains to tackle the problem, generating a solution group for each leader proposal. Additionally, we propose an effective and efficient tournament-based approach to select among these explored solution groups to reach the final answer. Our approach produces meaningful and inspiring hints, enhances problem-solving strategy exploration, and improves the final answer accuracy on challenging problems in the MATH dataset. Code will be released at https://github.com/lz1oceani/LLM-As-Hierarchical-Policy.",,arXiv,"['cs.ai', 'cs.cl']",, -1306,sentiment analysis through llm negotiations,"['Xiaofei Sun', 'Xiaoya Li', 'Shengyu Zhang', 'Shuhe Wang', 'Fei Wu', 'Jiwei Li', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2311.01876v1.pdf,2023-11-03,," A standard paradigm for sentiment analysis is to rely on a singular LLM and makes the decision in a single round under the framework of in-context learning. This framework suffers the key disadvantage that the single-turn output generated by a single LLM might not deliver the perfect decision, just as humans sometimes need multiple attempts to get things right. This is especially true for the task of sentiment analysis where deep reasoning is required to address the complex linguistic phenomenon (e.g., clause composition, irony, etc) in the input. To address this issue, this paper introduces a multi-LLM negotiation framework for sentiment analysis. The framework consists of a reasoning-infused generator to provide decision along with rationale, a explanation-deriving discriminator to evaluate the credibility of the generator. The generator and the discriminator iterate until a consensus is reached. The proposed framework naturally addressed the aforementioned challenge, as we are able to take the complementary abilities of two LLMs, have them use rationale to persuade each other for correction.
Experiments on a wide range of sentiment analysis benchmarks (SST-2, Movie Review, Twitter, yelp, amazon, IMDB) demonstrate the effectiveness of proposed approach: it consistently yields better performances than the ICL baseline across all benchmarks, and even superior performances to supervised baselines on the Twitter and movie review datasets.",,arXiv,['cs.cl'],, -1307,chef a comprehensive evaluation framework for standardized assessment of multimodal large language models,"['Zhelun Shi', 'Zhipin Wang', 'Hongxing Fan', 'Zhenfei Yin', 'Lu Sheng', 'Yu Qiao', 'Jing Shao']",http://arxiv.org/pdf/2311.02692v1.pdf,2023-11-05,," Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content with myriad potential downstream tasks. However, even though a list of benchmarks has been proposed, the capabilities and limitations of MLLMs are still not comprehensively understood, due to a lack of a standardized and holistic evaluation framework. To this end, we present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs. First, we structure ChEF as four modular components, i.e., Scenario as scalable multimodal datasets, Instruction as flexible instruction retrieving formulae, Inferencer as reliable question answering strategies, and Metric as indicative task-specific score functions. Based on them, ChEF facilitates versatile evaluations in a standardized framework, and new evaluations can be built by designing new Recipes (systematic selection of these four components). Notably, current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, we introduce 6 new recipes to quantify competent MLLMs' desired capabilities (or called desiderata, i.e., calibration, in-context learning, instruction following, language performance, hallucination, and robustness) as reliable agents that can perform real-world multimodal interactions. Third, we conduct a large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. Our evaluation summarized over 20 valuable observations concerning the generalizability of MLLMs across various scenarios and the composite capability of MLLMs required for multimodal interactions. We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models, so that ChEF can be a growing evaluation framework for the MLLM community.",,arXiv,['cs.cv'],, -1308,kinematicaware prompting for generalizable articulated object manipulation with llms,"['Wenke Xia', 'Dong Wang', 'Xincheng Pang', 'Zhigang Wang', 'Bin Zhao', 'Di Hu']",http://arxiv.org/pdf/2311.02847v2.pdf,2023-11-06,," Generalizable articulated object manipulation is essential for home-assistant robots. Recent efforts focus on imitation learning from demonstrations or reinforcement learning in simulation, however, due to the prohibitive costs of real-world data collection and precise object simulation, it still remains challenging for these works to achieve broad adaptability across diverse articulated objects. Recently, many works have tried to utilize the strong in-context learning ability of Large Language Models (LLMs) to achieve generalizable robotic manipulation, but most of these researches focus on high-level task planning, sidelining low-level robotic control.
In this work, building on the idea that the kinematic structure of the object determines how we can manipulate it, we propose a kinematic-aware prompting framework that prompts LLMs with kinematic knowledge of objects to generate low-level motion trajectory waypoints, supporting various object manipulation. To effectively prompt LLMs with the kinematic structure of different objects, we design a unified kinematic knowledge parser, which represents various articulated objects as a unified textual description containing kinematic joints and contact location. Building upon this unified description, a kinematic-aware planner model is proposed to generate precise 3D manipulation waypoints via a designed kinematic-aware chain-of-thoughts prompting method. Our evaluation spanned 48 instances across 16 distinct categories, revealing that our framework not only outperforms traditional methods on 8 seen categories but also shows a powerful zero-shot capability for 8 unseen articulated object categories. Moreover, the real-world experiments on 7 different object categories prove our framework's adaptability in practical scenarios. Code is released at \href{https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main}{here}.",,arXiv,"['cs.ro', 'cs.ai']",, -1309,incontext learning for knowledge base question answering for unmanned systems based on large language models,"['Yunlong Chen', 'Yaming Zhang', 'Jianfei Yu', 'Li Yang', 'Rui Xia']",http://arxiv.org/pdf/2311.02956v1.pdf,2023-11-06,," Knowledge Base Question Answering (KBQA) aims to answer factoid questions based on knowledge bases. However, generating the most appropriate knowledge base query code based on Natural Language Questions (NLQ) poses a significant challenge in KBQA. In this work, we focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems. Inspired by the recent success of large language models (LLMs) like ChatGPT and GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL) generation framework to generate the most appropriate CQL based on the given NLQ. Our generative framework contains six parts: an auxiliary model predicting the syntax-related information of CQL based on the given NLQ, a proper noun matcher extracting proper nouns from the given NLQ, a demonstration example selector retrieving similar examples of the input sample, a prompt constructor designing the input template of ChatGPT, a ChatGPT-based generation model generating the CQL, and an ensemble model to obtain the final answers from diversified outputs. With our ChatGPT-based CQL generation framework, we achieved the second place in the CCKS 2023 Question Answering with Knowledge Graph Inference for Unmanned Systems competition, achieving an F1-score of 0.92676.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, -1310,retrievalaugmented code generation for universal information extraction,"['Yucan Guo', 'Zixuan Li', 'Xiaolong Jin', 'Yantao Liu', 'Yutao Zeng', 'Wenxuan Liu', 'Xiang Li', 'Pan Yang', 'Long Bai', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.02962v1.pdf,2023-11-06,," Information Extraction (IE) aims to extract structural knowledge (e.g., entities, relations, events) from natural language texts, which brings challenges to existing methods due to task-specific schemas and complex text expressions. Code, as a typical kind of formalized language, is capable of describing structural knowledge under various schemas in a universal way.
On the other hand, Large Language Models (LLMs) trained on both codes and texts have demonstrated powerful capabilities of transforming texts into codes, which provides a feasible solution to IE tasks. Therefore, in this paper, we propose a universal retrieval-augmented code generation framework based on LLMs, called Code4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define task-specific schemas of various structural knowledge in a universal way. By so doing, extracting knowledge under these schemas can be transformed into generating codes that instantiate the predefined Python classes with the information in texts. To generate these codes more precisely, Code4UIE adopts the in-context learning mechanism to instruct LLMs with examples. In order to obtain appropriate examples for different tasks, Code4UIE explores several example retrieval strategies, which can retrieve examples semantically similar to the given texts. Extensive experiments on five representative IE tasks across nine datasets demonstrate the effectiveness of the Code4UIE framework.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ir']",, -1311,unified lowresource sequence labeling by sampleaware dynamic sparse finetuning,"['Sarkar Snigdha Sarathi Das', 'Ranran Haoran Zhang', 'Peng Shi', 'Wenpeng Yin', 'Rui Zhang']",http://arxiv.org/pdf/2311.03748v1.pdf,2023-11-07,," Unified Sequence Labeling that articulates different sequence labeling problems such as Named Entity Recognition, Relation Extraction, Semantic Role Labeling, etc. in a generalized sequence-to-sequence format opens up the opportunity to make the maximum utilization of large language model knowledge toward structured prediction. Unfortunately, this requires formatting them into specialized augmented format unknown to the base pretrained language model (PLMs) necessitating finetuning to the target format. This significantly bounds its usefulness in data-limited settings where finetuning large models cannot properly generalize to the target format. To address this challenge and leverage PLM knowledge effectively, we propose FISH-DIP, a sample-aware dynamic sparse finetuning strategy that selectively focuses on a fraction of parameters, informed by feedback from highly regressing examples, during the fine-tuning process. By leveraging the dynamism of sparsity, our approach mitigates the impact of well-learned samples and prioritizes underperforming instances for improvement in generalization. Across five tasks of sequence labeling, we demonstrate that FISH-DIP can smoothly optimize the model in low resource settings offering upto 40% performance improvements over full fine-tuning depending on target evaluation settings. Also, compared to in-context learning and other parameter-efficient fine-tuning approaches, FISH-DIP performs comparably or better, notably in extreme low-resource settings.",,arXiv,['cs.cl'],, -1312,ul2 unifying language learning paradigms,"['Yi Tay', 'Mostafa Dehghani', 'Vinh Q. Tran', 'Xavier Garcia', 'Jason Wei', 'Xuezhi Wang', 'Hyung Won Chung', 'Siamak Shakeri', 'Dara Bahri', 'Tal Schuster', 'Huaixiu Steven Zheng', 'Denny Zhou', 'Neil Houlsby', 'Donald Metzler']",http://arxiv.org/pdf/2205.05131v3.pdf,2022-05-10,," Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups.
We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized & unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 & GPT-like models across multiple diverse setups. By scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised finetuning based NLP tasks. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20B also works well with chain-of-thought prompting and reasoning, making it an appealing choice for research into reasoning at a small to medium scale of 20B parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model, achieving MMLU and Big-Bench scores competitive to FLAN-PaLM 62B. We release Flax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B.",,arXiv,['cs.cl'],, -1313,humantimescale adaptation in an openended task space,"[' Adaptive Agent Team', 'Jakob Bauer', 'Kate Baumli', 'Satinder Baveja', 'Feryal Behbahani', 'Avishkar Bhoopchand', 'Nathalie Bradley-Schmieg', 'Michael Chang', 'Natalie Clay', 'Adrian Collister', 'Vibhavari Dasagi', 'Lucy Gonzalez', 'Karol Gregor', 'Edward Hughes', 'Sheleem Kashem', 'Maria Loks-Thompson', 'Hannah Openshaw', 'Jack Parker-Holder', 'Shreya Pathak', 'Nicolas Perez-Nieves', 'Nemanja Rakicevic', 'Tim Rocktäschel', 'Yannick Schroecker', 'Jakub Sygnowski', 'Karl Tuyls', 'Sarah York', 'Alexander Zacherl', 'Lei Zhang']",http://arxiv.org/pdf/2301.07608v1.pdf,2023-01-18,," Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution.
We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.ne']",, -1314,deidgpt zeroshot medical text deidentification by gpt4,"['Zhengliang Liu', 'Xiaowei Yu', 'Lu Zhang', 'Zihao Wu', 'Chao Cao', 'Haixing Dai', 'Lin Zhao', 'Wei Liu', 'Dinggang Shen', 'Quanzheng Li', 'Tianming Liu', 'Dajiang Zhu', 'Xiang Li']",http://arxiv.org/pdf/2303.11032v1.pdf,2023-03-20,," The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy. HIPAA (Health Insurance Portability and Accountability Act) mandates removing re-identifying information before the dissemination of medical records. Thus, effective and efficient solutions for de-identifying medical data, especially those in free-text forms, are highly needed. While various computer-assisted de-identification methods, including both rule-based and learning-based, have been developed and used in prior practice, such solutions still lack generalizability or need to be fine-tuned according to different scenarios, significantly imposing restrictions in wider use. The advancement of large language models (LLM), such as ChatGPT and GPT-4, have shown great potential in processing text data in the medical domain with zero-shot in-context learning, especially in the task of privacy protection, as these models can identify confidential information by their powerful named entity recognition (NER) capability. In this work, we developed a novel GPT4-enabled de-identification framework (""DeID-GPT"") to automatically identify and remove the identifying information. Compared to existing commonly used medical text data de-identification methods, our developed DeID-GPT showed the highest accuracy and remarkable reliability in masking private information from the unstructured medical text while preserving the original structure and meaning of the text. This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text data processing and de-identification, which provides insights for further research and solution development on the use of LLMs such as ChatGPT/GPT-4 in healthcare. Codes and benchmarking data information are available at https://github.com/yhydhx/ChatGPT-API.",,arXiv,"['cs.cl', 'cs.cy']",, -1315,taskmatrixai completing tasks by connecting foundation models with millions of apis,"['Yaobo Liang', 'Chenfei Wu', 'Ting Song', 'Wenshan Wu', 'Yan Xia', 'Yu Liu', 'Yang Ou', 'Shuai Lu', 'Lei Ji', 'Shaoguang Mao', 'Yun Wang', 'Linjun Shou', 'Ming Gong', 'Nan Duan']",http://arxiv.org/pdf/2303.16434v1.pdf,2023-03-29,," Artificial Intelligence (AI) has made incredible progress recently. On the one hand, advanced foundation models like ChatGPT can offer powerful conversation, in-context learning and code generation abilities on a broad range of open-domain tasks. They can also generate high-level solution outlines for domain-specific tasks based on the common sense knowledge they have acquired. However, they still face difficulties with some specialized tasks because they lack enough domain-specific data during pre-training or they often have errors in their neural network computations on those tasks that need accurate executions. On the other hand, there are also many existing models and systems (symbolic-based or neural-based) that can do some domain-specific tasks very well.
However, due to the different implementation or working mechanisms, they are not easily accessible or compatible with foundation models. Therefore, there is a clear and pressing need for a mechanism that can leverage foundation models to propose task solution outlines and then automatically match some of the sub-tasks in the outlines to the off-the-shelf models and systems with special functionalities to complete them. Inspired by this, we introduce TaskMatrix.AI as a new AI ecosystem that connects foundation models with millions of APIs for task completion. Unlike most previous work that aimed to improve a single AI model, TaskMatrix.AI focuses more on using existing foundation models (as a brain-like central system) and APIs of other AI models and systems (as sub-task solvers) to achieve diversified tasks in both digital and physical domains. As a position paper, we will present our vision of how to build such an ecosystem, explain each key component, and use study cases to illustrate both the feasibility of this vision and the main challenges we need to address next.",,arXiv,"['cs.ai', 'cs.cl']",, -1316,subjectdriven texttoimage generation via apprenticeship learning,"['Wenhu Chen', 'Hexiang Hu', 'Yandong Li', 'Nataniel Ruiz', 'Xuhui Jia', 'Ming-Wei Chang', 'William W. Cohen']",http://arxiv.org/pdf/2304.00186v5.pdf,2023-04-01,," Recent text-to-image generation models like DreamBooth have made remarkable progress in generating highly customized images of a target subject, by fine-tuning an ``expert model'' for a given subject from a few examples. However, this process is expensive, since a new expert model must be learned for each subject. In this paper, we present SuTI, a Subject-driven Text-to-Image generator that replaces subject-specific fine tuning with in-context learning. Given a few demonstrations of a new subject, SuTI can instantly generate novel renditions of the subject in different scenes, without any subject-specific optimization. SuTI is powered by apprenticeship learning, where a single apprentice model is learned from data generated by a massive number of subject-specific expert models. Specifically, we mine millions of image clusters from the Internet, each centered around a specific visual subject. We adopt these clusters to train a massive number of expert models, each specializing in a different subject. The apprentice model SuTI then learns to imitate the behavior of these fine-tuned experts. SuTI can generate high-quality and customized subject-specific images 20x faster than optimization-based SoTA methods. On the challenging DreamBench and DreamBench-v2, our human evaluation shows that SuTI significantly outperforms existing models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt, Re-Imagen and DreamBooth, especially on the subject and text alignment aspects.",,arXiv,"['cs.cv', 'cs.ai']",, -1317,large language models are edgecase fuzzers testing deep learning libraries via fuzzgpt,"['Yinlin Deng', 'Chunqiu Steven Xia', 'Chenyuan Yang', 'Shizhuo Dylan Zhang', 'Shujing Yang', 'Lingming Zhang']",http://arxiv.org/pdf/2304.02014v1.pdf,2023-04-04,," Deep Learning (DL) library bugs affect downstream DL applications, emphasizing the need for reliable systems. Generating valid input programs for fuzzing DL libraries is challenging due to the need for satisfying both language syntax/semantics and constraints for constructing valid computational graphs.
Recently, the TitanFuzz work demonstrates that modern Large Language Models (LLMs) can be directly leveraged to implicitly learn all the constraints to generate valid DL programs for fuzzing. However, LLMs tend to generate ordinary programs following similar patterns seen in their massive training corpora, while fuzzing favors unusual inputs that cover edge cases or are unlikely to be manually produced. To fill this gap, this paper proposes FuzzGPT, the first technique to prime LLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the well-known hypothesis that historical bug-triggering programs may include rare/valuable code ingredients important for bug finding. Traditional techniques leveraging such historical information require intensive human efforts to design dedicated generators and ensure the validity of generated programs. FuzzGPT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (including fine-tuning and in-context learning), while being generalizable and applicable to challenging domains. While FuzzGPT can be applied with different LLMs, this paper focuses on the powerful GPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potential of directly leveraging the instruct-following capability of the recent ChatGPT for effective fuzzing. Evaluation on two popular DL libraries (PyTorch and TensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz, detecting 76 bugs, with 49 already confirmed as previously unknown bugs, including 11 high-priority bugs or security vulnerabilities.",,arXiv,['cs.se'],, -1318,improving language model negotiation with selfplay and incontext learning from ai feedback,"['Yao Fu', 'Hao Peng', 'Tushar Khot', 'Mirella Lapata']",http://arxiv.org/pdf/2305.10142v1.pdf,2023-05-17,," We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing. We are interested in this question because if LLMs were able to improve each other, it would imply the possibility of creating strong AI agents with minimal human intervention. We ask two LLMs to negotiate with each other, playing the roles of a buyer and a seller, respectively. They aim to reach a deal with the buyer targeting a lower price and the seller a higher one. A third language model, playing the critic, provides feedback to a player to improve the player's negotiation strategies. We let the two agents play multiple rounds, using previous negotiation history and AI feedback as in-context demonstrations to improve the model's negotiation strategy iteratively. We use different LLMs (GPT and Claude) for different roles and use the deal price as the evaluation metric. Our experiments reveal multiple intriguing findings: (1) Only a subset of the language models we consider can self-play and improve the deal price from AI feedback, weaker models either do not understand the game's rules or cannot incorporate AI feedback for further improvement. (2) Models' abilities to learn from the feedback differ when playing different roles. For example, it is harder for Claude-instant to improve as the buyer than as the seller. (3) When unrolling the game to multiple rounds, stronger agents can consistently improve their performance by meaningfully using previous experiences and iterative AI feedback, yet have a higher risk of breaking the deal.
We hope our work provides insightful initial explorations of having models autonomously improve each other with game playing and AI feedback.",,arXiv,['cs.cl'],, -1319,xtremeup a usercentric scarcedata benchmark for underrepresented languages,"['Sebastian Ruder', 'Jonathan H. Clark', 'Alexander Gutkin', 'Mihir Kale', 'Min Ma', 'Massimo Nicosia', 'Shruti Rijhwani', 'Parker Riley', 'Jean-Michel A. Sarr', 'Xinyi Wang', 'John Wieting', 'Nitish Gupta', 'Anna Katanova', 'Christo Kirov', 'Dana L. Dickinson', 'Brian Roark', 'Bidisha Samanta', 'Connie Tao', 'David I. Adelani', 'Vera Axelrod', 'Isaac Caswell', 'Colin Cherry', 'Dan Garrette', 'Reeve Ingle', 'Melvin Johnson', 'Dmitry Panteleev', 'Partha Talukdar']",http://arxiv.org/pdf/2305.11938v2.pdf,2023-05-19,," Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models",,arXiv,['cs.cl'],, -1320,memoryefficient finetuning of compressed large language models via sub4bit integer quantization,"['Jeonghoon Kim', 'Jung Hyun Lee', 'Sungdong Kim', 'Joonsuk Park', 'Kang Min Yoo', 'Se Jung Kwon', 'Dongsoo Lee']",http://arxiv.org/pdf/2305.14152v2.pdf,2023-05-23,," Large language models (LLMs) face the challenges in fine-tuning and deployment due to their high memory demands and computational costs. While parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage of the optimizer state during fine-tuning, the inherent size of pre-trained LLM weights continues to be a pressing concern. Even though quantization techniques are widely proposed to ease memory demands and accelerate LLM inference, most of these techniques are geared towards the deployment phase. To bridge this gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation (PEQA) - a simple yet effective method that combines the advantages of PEFT with quantized LLMs. By updating solely the quantization scales, PEQA can be directly applied to quantized LLMs, ensuring seamless task transitions. Parallel to existing PEFT methods, PEQA significantly reduces the memory overhead associated with the optimizer state. Furthermore, it leverages the advantages of quantization to substantially reduce model sizes. Even after fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, allowing for accelerated inference on the deployment stage.
We employ PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion parameters. To assess the logical reasoning and language comprehension of PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using an instruction dataset. Our results show that even when LLMs are quantized to below 4-bit precision, their capabilities in language modeling, few-shot in-context learning, and comprehension can be resiliently restored to (or even improved over) their full-precision original performances with PEQA.",,arXiv,"['cs.lg', 'cs.ai']",, -1321,palix on scaling up a multilingual vision and language model,"['Xi Chen', 'Josip Djolonga', 'Piotr Padlewski', 'Basil Mustafa', 'Soravit Changpinyo', 'Jialin Wu', 'Carlos Riquelme Ruiz', 'Sebastian Goodman', 'Xiao Wang', 'Yi Tay', 'Siamak Shakeri', 'Mostafa Dehghani', 'Daniel Salz', 'Mario Lucic', 'Michael Tschannen', 'Arsha Nagrani', 'Hexiang Hu', 'Mandar Joshi', 'Bo Pang', 'Ceslee Montgomery', 'Paulina Pietrzyk', 'Marvin Ritter', 'AJ Piergiovanni', 'Matthias Minderer', 'Filip Pavetic', 'Austin Waters', 'Gang Li', 'Ibrahim Alabdulmohsin', 'Lucas Beyer', 'Julien Amelot', 'Kenton Lee', 'Andreas Peter Steiner', 'Yang Li', 'Daniel Keysers', 'Anurag Arnab', 'Yuanzhong Xu', 'Keran Rong', 'Alexander Kolesnikov', 'Mojtaba Seyedhosseini', 'Anelia Angelova', 'Xiaohua Zhai', 'Neil Houlsby', 'Radu Soricut']",http://arxiv.org/pdf/2305.18565v1.pdf,2023-05-29,," We present the training recipe and results of scaling up PaLI-X, a multilingual vision and language model, both in terms of size of the components and the breadth of its training task mixture. Our model achieves new levels of performance on a wide-range of varied and complex tasks, including multiple image-based captioning and question-answering tasks, image-based document understanding and few-shot (in-context) learning, as well as object detection, video question answering, and video captioning. PaLI-X advances the state-of-the-art on most vision-and-language benchmarks considered (25+ of them). Finally, we observe emerging capabilities, such as complex counting and multilingual object detection, tasks that are not explicitly in the training mix.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, -1322,transformers as statisticians provable incontext learning with incontext algorithm selection,"['Yu Bai', 'Fan Chen', 'Huan Wang', 'Caiming Xiong', 'Song Mei']",http://arxiv.org/pdf/2306.04637v2.pdf,2023-06-07,," Neural sequence models based on the transformer architecture have demonstrated remarkable \emph{in-context learning} (ICL) abilities, where they can perform new tasks when prompted with training and test examples, without any parameter update to the model. This work first provides a comprehensive statistical theory for transformers to perform ICL. Concretely, we show that transformers can implement a broad class of standard machine learning algorithms in context, such as least squares, ridge regression, Lasso, learning generalized linear models, and gradient descent on two-layer neural networks, with near-optimal predictive power on various in-context data distributions. Using an efficient implementation of in-context gradient descent as the underlying mechanism, our transformer constructions admit mild size bounds, and can be learned with polynomially many pretraining sequences.
Building on these ``base'' ICL algorithms, intriguingly, we show that transformers can implement more complex ICL procedures involving \emph{in-context algorithm selection}, akin to what a statistician can do in real life -- A \emph{single} transformer can adaptively select different base ICL algorithms -- or even perform qualitatively different tasks -- on different input sequences, without any explicit prompting of the right algorithm or task. We both establish this in theory by explicit constructions, and also observe this phenomenon experimentally. In theory, we construct two general mechanisms for algorithm selection with concrete examples: pre-ICL testing, and post-ICL validation. As an example, we use the post-ICL validation mechanism to construct a transformer that can perform nearly Bayes-optimal ICL on a challenging task -- noisy linear models with mixed noise levels. Experimentally, we demonstrate the strong in-context algorithm selection capabilities of standard transformer architectures.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'math.st', 'stat.ml', 'stat.th']",, -1323,instruction tuned models are quick learners,"['Himanshu Gupta', 'Saurabh Arjun Sawant', 'Swaroop Mishra', 'Mutsumi Nakamura', 'Arindam Mitra', 'Santosh Mashetty', 'Chitta Baral']",http://arxiv.org/pdf/2306.05539v1.pdf,2023-05-17,," Instruction tuning of language models has demonstrated the ability to enhance model generalization to unseen tasks via in-context learning using a few examples. However, typical supervised learning still requires a plethora of downstream training data for finetuning. Often in real-world situations, there is a scarcity of data available for finetuning, falling somewhere between few shot inference and fully supervised finetuning. In this work, we demonstrate the sample efficiency of instruction tuned models over various tasks by estimating the minimal downstream training data required by them to perform transfer learning and match the performance of state-of-the-art (SOTA) supervised models. We conduct experiments on 119 tasks from Super Natural Instructions (SuperNI) in both the single task learning (STL) and multi task learning (MTL) settings. Our findings reveal that, in the STL setting, instruction tuned models equipped with 25% of the downstream train data surpass the SOTA performance on the downstream tasks. In the MTL setting, an instruction tuned model trained on only 6% of downstream training data achieve SOTA, while using 100% of the training data results in a 3.69% points improvement (ROUGE-L 74.68) over the previous SOTA. We conduct an analysis on T5 vs Tk-Instruct by developing several baselines to demonstrate that instruction tuning aids in increasing both sample efficiency and transfer learning. Additionally, we observe a consistent ~4% performance increase in both settings when pre-finetuning is performed with instructions. Finally, we conduct a categorical study and find that contrary to previous results, tasks in the question rewriting and title generation categories suffer from instruction tuning.",,arXiv,['cs.cl'],, -1324,synapse trajectoryasexemplar prompting with memory for computer control,"['Longtao Zheng', 'Rundong Wang', 'Xinrun Wang', 'Bo An']",http://arxiv.org/pdf/2306.07863v2.pdf,2023-06-13,," Building agents using large language models (LLMs) to control computers is an emerging research field, where the agent perceives computer states and performs actions to accomplish complex tasks.
Previous computer agents have demonstrated the benefits of in-context learning (ICL); however, their performance is hindered by several issues. First, the limited context length of LLMs and complex computer states restrict the number of exemplars, as a single webpage can consume the entire context. Second, the exemplars in current methods, such as high-level plans and multi-choice questions, cannot represent complete trajectories, leading to suboptimal performance in tasks that require many steps or repeated actions. Third, existing computer agents rely on task-specific exemplars and overlook the similarity among tasks, resulting in poor generalization to novel tasks. To address these challenges, we introduce Synapse, featuring three key components: i) state abstraction, which filters out task-irrelevant information from raw states, allowing more exemplars within the limited context, ii) trajectory-as-exemplar prompting, which prompts the LLM with complete trajectories of the abstracted states and actions for improved multi-step decision-making, and iii) exemplar memory, which stores the embeddings of exemplars and retrieves them via similarity search for generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse achieves a 99.2% average success rate (a 10% relative improvement) across 64 tasks using demonstrations from only 48 tasks. Notably, Synapse is the first ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a 53% relative improvement in average step success rate over the previous state-of-the-art prompting scheme in Mind2Web.",,arXiv,['cs.ai'],, -1325,language to rewards for robotic skill synthesis,"['Wenhao Yu', 'Nimrod Gileadi', 'Chuyuan Fu', 'Sean Kirmani', 'Kuang-Huei Lee', 'Montse Gonzalez Arenas', 'Hao-Tien Lewis Chiang', 'Tom Erez', 'Leonard Hasenclever', 'Jan Humplik', 'Brian Ichter', 'Ted Xiao', 'Peng Xu', 'Andy Zeng', 'Tingnan Zhang', 'Nicolas Heess', 'Dorsa Sadigh', 'Jie Tan', 'Yuval Tassa', 'Fei Xia']",http://arxiv.org/pdf/2306.08647v2.pdf,2023-06-14,," Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions are shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized and accomplish variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections to low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system.
To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles 90% of the designed tasks, while a baseline using primitive skills as the interface with Code-as-policies achieves 50% of the tasks. We further validated our method on a real robot arm where complex manipulation skills such as non-prehensile pushing emerge through our interactive system.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, -1326,trained transformers learn linear models incontext,"['Ruiqi Zhang', 'Spencer Frei', 'Peter L. Bartlett']",http://arxiv.org/pdf/2306.09927v3.pdf,2023-06-16,," Attention-based neural networks such as transformers have demonstrated a remarkable ability to exhibit in-context learning (ICL): Given a short prompt sequence of tokens from an unseen task, they can formulate relevant per-token and next-token predictions without any parameter updates. By embedding a sequence of labeled training data and unlabeled test data as a prompt, this allows for transformers to behave like supervised learning algorithms. Indeed, recent work has shown that when training transformer architectures over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares. Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of ICL in transformers with a single linear self-attention layer trained by gradient flow on linear regression tasks. We show that despite non-convexity, gradient flow with a suitable random initialization finds a global minimum of the objective function. At this global minimum, when given a test prompt of labeled examples from a new prediction task, the transformer achieves prediction error competitive with the best linear predictor over the test prompt distribution. We additionally characterize the robustness of the trained transformer to a variety of distribution shifts and show that although a number of shifts are tolerated, shifts in the covariate distribution of the prompts are not. Motivated by this, we consider a generalized ICL setting where the covariate distributions can vary across prompts. We show that although gradient flow succeeds at finding a global minimum in this setting, the trained transformer is still brittle under mild covariate shifts. We complement this finding with experiments on large, nonlinear transformer architectures which we show are more robust under covariate shifts.",,arXiv,"['stat.ml', 'cs.ai', 'cs.cl', 'cs.lg']",, -1327,hyenadna longrange genomic sequence modeling at single nucleotide resolution,"['Eric Nguyen', 'Michael Poli', 'Marjan Faizi', 'Armin Thomas', 'Callum Birch-Sykes', 'Michael Wornow', 'Aman Patel', 'Clayton Rabideau', 'Stefano Massaroli', 'Yoshua Bengio', 'Stefano Ermon', 'Stephen A. Baccus', 'Chris Ré']",http://arxiv.org/pdf/2306.15794v2.pdf,2023-06-27,," Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA.
In addition, these methods rely on tokenizers or fixed k-mers to aggregate meaningful DNA units, losing single nucleotide resolution where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level - an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 18 datasets using a model with orders of magnitude less parameters and pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on 7 of 8 datasets on average by +10 accuracy points. Code at https://github.com/HazyResearch/hyena-dna.",,arXiv,"['cs.lg', 'q-bio.gn']",, -1328,generative type inference for python,"['Yun Peng', 'Chaozheng Wang', 'Wenxuan Wang', 'Cuiyun Gao', 'Michael R. Lyu']",http://arxiv.org/pdf/2307.09163v1.pdf,2023-07-18,," Python is a popular dynamic programming language, evidenced by its ranking as the second most commonly used language on GitHub. However, its dynamic type system can lead to potential type errors, leading researchers to explore automatic type inference approaches for Python programs. The rule-based type inference approaches can ensure the accuracy of predicted variable types, but they suffer from low coverage problems. Supervised type inference approaches, while feature-agnostic, require large, high-quality annotated datasets and are limited to pre-defined types. As zero-shot approaches, the cloze-style approaches reformulate the type inference problem into a fill-in-the-blank problem. However, their performance is limited. This paper introduces TypeGen, a few-shot generative type inference approach that incorporates static domain knowledge from static analysis. TypeGen creates chain-of-thought (COT) prompts by translating the type inference steps of static analysis into prompts based on the type dependency graphs (TDGs), enabling language models to learn from how static analysis infers types. By combining COT prompts with code slices and type hints, TypeGen constructs example prompts from human annotations. TypeGen only requires very few annotated examples to teach language models to generate similar COT prompts via in-context learning. Moreover, TypeGen enhances the interpretability of results through the use of the input-explanation-output strategy. Experiments show that TypeGen outperforms the best baseline Type4Py by 10.0% for argument type prediction and 22.5% in return value type prediction in terms of top-1 Exact Match by using only five examples.
Furthermore, TypeGen achieves substantial improvements of 27% to 84% compared to the zero-shot performance of large language models with parameter sizes ranging from 1.3B to 175B in terms of top-1 Exact Match.",,arXiv,['cs.se'],, -1329,entity matching using large language models,"['Ralph Peeters', 'Christian Bizer']",http://arxiv.org/pdf/2310.11244v1.pdf,2023-10-17,," Entity Matching is the task of deciding whether two entity descriptions refer to the same real-world entity. Entity Matching is a central step in most data integration pipelines and an enabler for many e-commerce applications which require to match products offers from different vendors. State-of-the-art entity matching methods often rely on pre-trained language models (PLMs) such as BERT or RoBERTa. Two major drawbacks of these models for entity matching are that (i) the models require significant amounts of task-specific training data and (ii) the fine-tuned models are not robust concerning out-of-distribution entities. In this paper, we investigate using large language models (LLMs) for entity matching as a less domain-specific training data reliant and more robust alternative to PLM-based matchers. Our study covers hosted LLMs, such as GPT3.5 and GPT4, as well as open source LLMs based on Llama2 which can be run locally. We evaluate these models in a zero-shot scenario as well as a scenario where task-specific training data is available. We compare different prompt designs as well as the prompt sensitivity of the models in the zero-shot scenario. We investigate (i) the selection of in-context demonstrations, (ii) the generation of matching rules, as well as (iii) fine-tuning GPT3.5 in the second scenario using the same pool of training data across the different approaches. Our experiments show that GPT4 without any task-specific training data outperforms fine-tuned PLMs (RoBERTa and Ditto) on three out of five benchmark datasets reaching F1 scores around 90%. The experiments with in-context learning and rule generation show that all models beside of GPT4 benefit from these techniques (on average 5.9% and 2.2% F1), while GPT4 does not need such additional guidance in most cases...",,arXiv,"['cs.cl', 'cs.lg']",, -1330,cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment,"['Jixiang Hong', 'Quan Tu', 'Changyu Chen', 'Xing Gao', 'Ji Zhang', 'Rui Yan']",http://arxiv.org/pdf/2310.16271v1.pdf,2023-10-25,," Language models trained on large-scale corpus often generate content that is harmful, toxic, or contrary to human preferences, making their alignment with human values a critical concern. Reinforcement learning from human feedback (RLHF) with algorithms like PPO is a prevalent approach for alignment but is often complex, unstable, and resource-intensive. Recently, ranking-based alignment methods have emerged, offering stability and effectiveness by replacing the RL framework with supervised fine-tuning, but they are costly due to the need for annotated data. Considering that existing large language models (LLMs) like ChatGPT are already relatively well-aligned and cost-friendly, researchers have begun to align the language model with human preference from AI feedback. The common practices, which unidirectionally distill the instruction-following responses from LLMs, are constrained by their bottleneck. Thus we introduce CycleAlign to distill alignment capabilities from parameter-invisible LLMs (black-box) to a parameter-visible model (white-box) in an iterative manner.
With in-context learning (ICL) as the core of the cycle, the black-box models are able to rank the model-generated responses guided by human-craft instruction and demonstrations about their preferences. During iterative interaction, the white-box models also have a judgment about responses generated by them. Consequently, the agreement ranking could be viewed as a pseudo label to dynamically update the in-context demonstrations and improve the preference ranking ability of black-box models. Through multiple interactions, the CycleAlign framework could align the white-box model with the black-box model effectively in a low-resource way. Empirical results illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing methods, and achieves the state-of-the-art performance in alignment with human value.",,arXiv,"['cs.cl', 'cs.ai']",, -1331,transformers are efficient incontext estimators for wireless communication,"['Vicram Rajagopalan', 'Vishnu Teja Kunde', 'Chandra Shekhara Kaushik Valmeekam', 'Krishna Narayanan', 'Srinivas Shakkottai', 'Dileep Kalathil', 'Jean-Francois Chamberland']",http://arxiv.org/pdf/2311.00226v1.pdf,2023-11-01,," Pre-trained transformers can perform in-context learning, where they adapt to a new task using only a small number of prompts without any explicit model optimization. Inspired by this attribute, we propose a novel approach, called in-context estimation, for the canonical communication problem of estimating transmitted symbols from received symbols. A communication channel is essentially a noisy function that maps transmitted symbols to received symbols, and this function can be represented by an unknown parameter whose statistics depend on an (also unknown) latent context. Conventional approaches ignore this hierarchical structure and simply attempt to use known transmissions, called pilots, to perform a least-squares estimate of the channel parameter, which is then used to estimate successive, unknown transmitted symbols. We make the basic connection that transformers show excellent contextual sequence completion with a few prompts, and so they should be able to implicitly determine the latent context from pilot symbols to perform end-to-end in-context estimation of transmitted symbols. Furthermore, the transformer should use information efficiently, i.e., it should utilize any pilots received to attain the best possible symbol estimates. Through extensive simulations, we show that in-context estimation not only significantly outperforms standard approaches, but also achieves the same performance as an estimator with perfect knowledge of the latent context within a few context examples. Thus, we make a strong case that transformers are efficient in-context estimators in the communication setting.",,arXiv,"['eess.sp', 'cs.lg']",, -1332,2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge multimodal prompting for datacentric anomaly detection,"['Yunkang Cao', 'Xiaohao Xu', 'Chen Sun', 'Yuqi Cheng', 'Liang Gao', 'Weiming Shen']",http://arxiv.org/pdf/2306.09067v2.pdf,2023-06-15,," This technical report introduces the winning solution of the team Segment Any Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. Going beyond uni-modal prompt, e.g., language prompt, we present a novel framework, i.e., Segment Any Anomaly + (SAA$+$), for zero-shot anomaly segmentation with multi-modal prompts for the regularization of cascaded modern foundation models.
Inspired by the great zero-shot generalization ability of foundation models like Segment Anything, we first explore their assembly (SAA) to leverage diverse multi-modal prior knowledge for anomaly localization. Subsequently, we further introduce multimodal prompts (SAA$+$) derived from domain expert knowledge and target image context to enable the non-parameter adaptation of foundation models to anomaly segmentation. The proposed SAA$+$ model achieves state-of-the-art performance on several anomaly segmentation benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will release the code of our winning solution for the CVPR2023 VAN.",,arXiv,['cs.cv'],, -1333,similarityaware multimodal prompt learning for fake news detection,"['Ye Jiang', 'Xiaomin Yu', 'Yimin Wang', 'Xiaoman Xu', 'Xingyi Song', 'Diana Maynard']",http://arxiv.org/pdf/2304.04187v3.pdf,2023-04-09,," The standard paradigm for fake news detection mainly utilizes text information to model the truthfulness of news. However, the discourse of online fake news is typically subtle and it requires expert knowledge to use textual information to debunk fake news. Recently, studies focusing on multimodal fake news detection have outperformed text-only methods. Recent approaches utilizing the pre-trained model to extract unimodal features, or fine-tuning the pre-trained model directly, have become a new paradigm for detecting fake news. Again, this paradigm either requires a large number of training instances, or updates the entire set of pre-trained model parameters, making real-world fake news detection impractical. Furthermore, traditional multimodal methods fuse the cross-modal features directly without considering that the uncorrelated semantic representation might inject noise into the multimodal features. This paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE) framework. First, we incorporate prompt learning into multimodal fake news detection. Prompt learning, which only tunes prompts with a frozen language model, can reduce memory usage significantly and achieve comparable performances, compared with fine-tuning. We analyse three prompt templates with a soft verbalizer to detect fake news. In addition, we introduce the similarity-aware fusing method to adaptively fuse the intensity of multimodal representation and mitigate the noise injection via uncorrelated cross-modal features. For evaluation, SAMPLE surpasses the F1 and the accuracies of previous works on two benchmark multimodal datasets, demonstrating the effectiveness of the proposed method in detecting fake news. In addition, SAMPLE also is superior to other approaches regardless of few-shot and data-rich settings.",,arXiv,['cs.cl'],, -1334,multitask multimodal prompted training for interactive embodied task completion,"['Georgios Pantazopoulos', 'Malvina Nikandrou', 'Amit Parekh', 'Bhathiya Hemanthage', 'Arash Eshghi', 'Ioannis Konstas', 'Verena Rieser', 'Oliver Lemon', 'Alessandro Suglia']",http://arxiv.org/pdf/2311.04067v1.pdf,2023-11-07,," Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation.
By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Different to previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, -1335,parameterefficient tuning of largescale multimodal foundation model,"['Haixin Wang', 'Xinlong Yang', 'Jianlong Chang', 'Dian Jin', 'Jinan Sun', 'Shikun Zhang', 'Xiao Luo', 'Qi Tian']",http://arxiv.org/pdf/2305.08381v3.pdf,2023-05-15,," Driven by the progress of large-scale pre-training, parameter-efficient transfer learning has gained immense popularity across different subfields of Artificial Intelligence. The core is to adapt the model to downstream tasks with only a small set of parameters. Recently, researchers have leveraged such proven techniques in multimodal tasks and achieve promising results. However, two critical issues remain unresolved: how to further reduce the complexity with lightweight design and how to boost alignment between modalities under extremely low parameters. In this paper, we propose A graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges. Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning, which explores the low intrinsic dimension with only 0.04% parameters of the pre-trained model. Then, for better modality alignment, we propose the Informative Context Enhancement and Gated Query Transformation module under extremely few parameters scenes. A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach. Our code is available at: https://github.com/WillDreamer/Aurora.",,arXiv,['cs.cv'],, -1336,reframing instructional prompts to gptk's language,"['Swaroop Mishra', 'Daniel Khashabi', 'Chitta Baral', 'Yejin Choi', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2109.07830v3.pdf,2021-09-16,," What kinds of instructional prompts are easier to follow for Language Models (LMs)? We study this question by conducting extensive empirical analysis that shed light on important features of successful instructional prompts. Specifically, we study several classes of reframing techniques for manual reformulation of prompts into more effective ones. Some examples include decomposing a complex task instruction into multiple simpler tasks or itemizing instructions into sequential steps. Our experiments compare the zero-shot and few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks across 6 categories. Compared with original instructions, our reframed instructions lead to significant improvements across LMs with different sizes. For example, the same reframed prompts boost few-shot performance of GPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all tasks. Furthermore, reframed instructions reduce the number of examples required to prompt LMs in the few-shot setting.
We hope these empirically-driven techniques will pave the way towards more effective future prompting algorithms.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1337,large language models encode clinical knowledge,"['Karan Singhal', 'Shekoofeh Azizi', 'Tao Tu', 'S. Sara Mahdavi', 'Jason Wei', 'Hyung Won Chung', 'Nathan Scales', 'Ajay Tanwani', 'Heather Cole-Lewis', 'Stephen Pfohl', 'Perry Payne', 'Martin Seneviratne', 'Paul Gamble', 'Chris Kelly', 'Nathaneal Scharli', 'Aakanksha Chowdhery', 'Philip Mansfield', 'Blaise Aguera y Arcas', 'Dale Webster', 'Greg S. Corrado', 'Yossi Matias', 'Katherine Chou', 'Juraj Gottweis', 'Nenad Tomasev', 'Yun Liu', 'Alvin Rajkomar', 'Joelle Barral', 'Christopher Semturs', 'Alan Karthikesalingam', 'Vivek Natarajan']",http://arxiv.org/pdf/2212.13138v1.pdf,2022-12-26,," Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.",,arXiv,['cs.cl'],, -1338,instructuie multitask instruction tuning for unified information extraction,"['Xiao Wang', 'Weikang Zhou', 'Can Zu', 'Han Xia', 'Tianze Chen', 'Yuansen Zhang', 'Rui Zheng', 'Junjie Ye', 'Qi Zhang', 'Tao Gui', 'Jihua Kang', 'Jingsheng Yang', 'Siyuan Li', 'Chunsai Du']",http://arxiv.org/pdf/2304.08085v1.pdf,2023-04-17,," Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset, which is significantly lower than the state-of-the-art performance.
In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves comparable performance to Bert in supervised settings and significantly outperforms the state-of-the-art and gpt3.5 in zero-shot settings.",,arXiv,"['cs.cl', 'cs.ai']",, -1339,fewshot instruction prompts for pretrained language models to detect social biases,"['Shrimai Prabhumoye', 'Rafal Kocielnik', 'Mohammad Shoeybi', 'Anima Anandkumar', 'Bryan Catanzaro']",http://arxiv.org/pdf/2112.07868v2.pdf,2021-12-15,," Detecting social bias in text is challenging due to nuance, subjectivity, and difficulty in obtaining good quality labeled datasets at scale, especially given the evolving nature of social biases and society. To address these challenges, we propose a few-shot instruction-based method for prompting pre-trained language models (LMs). We select a few class-balanced exemplars from a small support repository that are closest to the query to be labeled in the embedding space. We then provide the LM with instruction that consists of this subset of labeled exemplars, the query text to be classified, a definition of bias, and prompt it to make a decision. We demonstrate that large LMs used in a few-shot context can detect different types of fine-grained biases with similar and sometimes superior accuracy to fine-tuned models. We observe that the largest 530B parameter model is significantly more effective in detecting social bias compared to smaller models (achieving at least 13% improvement in AUC metric compared to other models). It also maintains a high AUC (dropping less than 2%) when the labeled repository is reduced to as few as $100$ samples. Large pretrained language models thus make it easier and quicker to build new bias detectors.",,arXiv,"['cs.cl', 'cs.ai']",, -1340,benchmarking a foundation llm on its ability to relabel structure names in accordance with the aapm tg263 report,"['Jason Holmes', 'Lian Zhang', 'Yuzhen Ding', 'Hongying Feng', 'Zhengliang Liu', 'Tianming Liu', 'William W. Wong', 'Sujay A. Vora', 'Jonathan B. Ashman', 'Wei Liu']",http://arxiv.org/pdf/2310.03874v1.pdf,2023-10-05,," Purpose: To introduce the concept of using large language models (LLMs) to re-label structure names in accordance with the American Association of Physicists in Medicine (AAPM) Task Group (TG)-263 standard, and to establish a benchmark for future studies to reference. Methods and Materials: The Generative Pre-trained Transformer (GPT)-4 application programming interface (API) was implemented as a Digital Imaging and Communications in Medicine (DICOM) storage server, which upon receiving a structure set DICOM file, prompts GPT-4 to re-label the structure names of both target volumes and normal tissues according to the AAPM TG-263. Three disease sites, prostate, head and neck, and thorax were selected for evaluation. For each disease site category, 150 patients were randomly selected for manually tuning the instructions prompt (in batches of 50) and 50 patients were randomly selected for evaluation. Structure names that were considered were those that were most likely to be relevant for studies utilizing structure contours for many patients.
Results: The overall re-labeling accuracy of both target volumes and normal tissues for prostate, head and neck, and thorax cases was 96.0%, 98.5%, and 96.9% respectively. Re-labeling of target volumes was less accurate on average except for prostate - 100%, 93.1%, and 91.1% respectively. Conclusions: Given the accuracy of GPT-4 in re-labeling structure names of both target volumes and normal tissues as presented in this work, LLMs are poised to be the preferred method for standardizing structure names in radiation oncology, especially considering the rapid advancements in LLM capabilities that are likely to continue.",,arXiv,"['physics.med-ph', 'cs.cl']",, -1341,healthprompt a zeroshot learning paradigm for clinical natural language processing,"['Sonish Sivarajkumar', 'Yanshan Wang']",http://arxiv.org/pdf/2203.05061v1.pdf,2022-03-09,," Deep learning algorithms are dependent on the availability of large-scale annotated clinical text datasets. The lack of such publicly available datasets is the biggest bottleneck for the development of clinical Natural Language Processing (NLP) systems. Zero-Shot Learning (ZSL) refers to the use of deep learning models to classify instances from new classes of which no training data have been seen before. Prompt-based learning is an emerging ZSL technique where we define task-based templates for NLP tasks. We developed a novel prompt-based clinical NLP framework called HealthPrompt and applied the paradigm of prompt-based learning on clinical texts. In this technique, rather than fine-tuning a Pre-trained Language Model (PLM), the task definitions are tuned by defining a prompt template. We performed an in-depth analysis of HealthPrompt on six different PLMs in a no-data setting. Our experiments prove that prompts effectively capture the context of clinical texts and perform remarkably well without any training data.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",, -1342,a fewshot approach to resume information extraction via prompts,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2209.09450v2.pdf,2022-09-20,," Prompt learning's fine-tune performance on text classification tasks has attracted the NLP community. This paper applies it to resume information extraction, improving existing methods for this task. We created manual templates and verbalizers tailored to resume texts and compared the performance of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt template design across NLP tasks. We present the Manual Knowledgeable Verbalizer (MKV), a rule for constructing verbalizers for specific applications. Our tests show that MKV rules yield more effective, robust templates and verbalizers than existing methods. Our MKV approach resolved sample imbalance, surpassing current automatic prompt methods. This study underscores the value of tailored prompt learning for resume extraction, stressing the importance of custom-designed templates and verbalizers.",,arXiv,['cs.cl'],, -1343,the prompt artists,"['Minsuk Chang', 'Stefania Druga', 'Alex Fiannaca', 'Pedro Vergani', 'Chinmay Kulkarni', 'Carrie Cai', 'Michael Terry']",http://arxiv.org/pdf/2303.12253v1.pdf,2023-03-22,," This paper examines the art practices, artwork, and motivations of prolific users of the latest generation of text-to-image models. Through interviews, observations, and a user survey, we present a sampling of the artistic styles and describe the developed community of practice around generative AI.
We find that: 1) the text prompt and the resulting image can be considered collectively as an art piece prompts as art and 2) prompt templates (prompts with ``slots'' for others to fill in with their own words) are developed to create generative art styles. We discover that the value placed by this community on unique outputs leads to artists seeking specialized vocabulary to produce distinctive art pieces (e.g., by reading architectural blogs to find phrases to describe images). We also find that some artists use ""glitches"" in the model that can be turned into artistic styles of their own right. From these findings, we outline specific implications for design regarding future prompting and image editing options.",,arXiv,['cs.hc'],, -1344,estimating uncertainty in multimodal foundation models using public internet data,"['Shiladitya Dutta', 'Hongbo Wei', 'Lars van der Laan', 'Ahmed M. Alaa']",http://arxiv.org/pdf/2310.09926v1.pdf,2023-10-15,," Foundation models are trained on vast amounts of data at scale using self-supervised learning, enabling adaptation to a wide range of downstream tasks. At test time, these models exhibit zero-shot capabilities through which they can classify previously unseen (user-specified) categories. In this paper, we address the problem of quantifying uncertainty in these zero-shot predictions. We propose a heuristic approach for uncertainty estimation in zero-shot settings using conformal prediction with web data. Given a set of classes at test time, we conduct zero-shot classification with CLIP-style models using a prompt template, e.g., ""an image of a "", and use the same template as a search query to source calibration data from the open web. Given a web-based calibration set, we apply conformal prediction with a novel conformity score that accounts for potential errors in retrieved web data. We evaluate the utility of our proposed method in Biomedical foundation models; our preliminary results show that web-based conformal prediction sets achieve the target coverage with satisfactory efficiency on a variety of biomedical datasets.",,arXiv,['cs.ai'],, -1345,beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels,"['Honglei Zhuang', 'Zhen Qin', 'Kai Hui', 'Junru Wu', 'Le Yan', 'Xuanhui Wang', 'Michael Bendersky']",http://arxiv.org/pdf/2310.14122v2.pdf,2023-10-21,," Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like ""Yes"" and ""No"". However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query. We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels.
Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.",,arXiv,['cs.ir'],, -1346,"large language models can share images, too!","['Young-Jun Lee', 'Jonghwan Hyeon', 'Ho-Jin Choi']",http://arxiv.org/pdf/2310.14804v1.pdf,2023-10-23,," This paper explores the image-sharing capability of Large Language Models (LLMs), such as InstructGPT, ChatGPT, and GPT-4, in a zero-shot setting, without the help of visual foundation models. Inspired by the two-stage process of image-sharing in human dialogues, we propose a two-stage framework that allows LLMs to predict potential image-sharing turns and generate related image descriptions using our effective restriction-based prompt template. With extensive experiments, we unlock the \textit{image-sharing} capability of LLMs in zero-shot prompting, with GPT-4 achieving the best performance. Additionally, we uncover the emergent \textit{image-sharing} ability in zero-shot prompting, demonstrating the effectiveness of restriction-based prompts in both stages of our framework. Based on this framework, we augment the PhotoChat dataset with images generated by Stable Diffusion at predicted turns, namely PhotoChat++. To our knowledge, this is the first study to assess the image-sharing ability of LLMs in a zero-shot setting without visual foundation models. The source code and the dataset will be released after publication.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, -1347,promptbased zeroshot relation extraction with semantic knowledge augmentation,"['Jiaying Gong', 'Hoda Eldardiry']",http://arxiv.org/pdf/2112.04539v2.pdf,2021-12-08,," In relation triplet extraction (RTE), recognizing unseen (new) relations for which there are no training instances is a challenging task. Efforts have been made to recognize unseen relations based on question-answering models or relation descriptions. However, these approaches miss the semantic information about connections between seen and unseen relations. In this paper, We propose a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize unseen relations under the zero-shot setting. We present a new word-level analogy-based sentence translation rule and generate augmented instances with unseen relations from instances with seen relations using that new rule. We design prompts with weighted virtual label construction based on an external knowledge graph to integrate semantic knowledge information learned from seen relations. Instead of using the actual label sets in the prompt template, we construct weighted virtual label words. We learn the representations of both seen and unseen relations with augmented instances and prompts. We then calculate the distance between the generated representations using prototypical networks to predict unseen relations. Extensive experiments conducted on three public datasets FewRel, Wiki-ZSL, and NYT, show that ZS-SKA outperforms state-of-the-art methods under the zero-shot scenarios. Our experimental results also demonstrate the effectiveness and robustness of ZS-SKA.",,arXiv,['cs.cl'],, -1348,adapting prompt for fewshot tabletotext generation,"['Zhixin Guo', 'Minyxuan Yan', 'Jiexing Qi', 'Jianping Zhou', 'Ziwei He', 'Zhouhan Lin', 'Guanjie Zheng', 'Xinbing Wang']",http://arxiv.org/pdf/2302.12468v2.pdf,2023-02-24,," Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks.
However, the lack of domain-specific knowledge makes it challenging to bridge the topological gap between tabular data and text, especially in real-world applications with limited resources. To mitigate the limitation of insufficient labeled data, we propose a novel framework: Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt prompt templates of domain-specific knowledge into the model, which brings at least three benefits: (1) it injects representation of normal table-related descriptions to bridge the topological gap between tabular data and texts; (2) it enables us to use large amounts of unlabeled domain-specific knowledge fully, which can alleviate the PLMs' inherent shortcomings of lacking domain knowledge; (3) it allows us to design various tasks to explore the domain-specific knowledge. Extensive experiments and analyses are conducted on three open-domain few-shot natural language generation (NLG) data sets: Humans, Songs, and Books. Compared to previous state-of-the-art approaches, our model achieves superior performance in terms of both fluency and accuracy.",,arXiv,['cs.cl'],, -1349,prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge,"['Jinyuan Li', 'Han Li', 'Zhuo Pan', 'Di Sun', 'Jiahao Wang', 'Wenkun Zhang', 'Gang Pan']",http://arxiv.org/pdf/2305.12212v2.pdf,2023-05-20,," Multimodal Named Entity Recognition (MNER) on social media aims to enhance textual entity prediction by incorporating image-based clues. Existing studies mainly focus on maximizing the utilization of pertinent image information or incorporating external knowledge from explicit knowledge bases. However, these methods either neglect the necessity of providing the model with external knowledge, or encounter issues of high redundancy in the retrieved knowledge. In this paper, we present PGIM -- a two-stage framework that aims to leverage ChatGPT as an implicit knowledge base and enable it to heuristically generate auxiliary knowledge for more efficient entity prediction. Specifically, PGIM contains a Multimodal Similar Example Awareness module that selects suitable examples from a small number of predefined artificial samples. These examples are then integrated into a formatted prompt template tailored to the MNER and guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired knowledge is integrated with the original text and fed into a downstream model for further processing. Extensive experiments show that PGIM outperforms state-of-the-art methods on two classic MNER datasets and exhibits a stronger robustness and generalization capability.",,arXiv,['cs.cl'],, -1350,revisit input perturbation problems for llms a unified robustness evaluation framework for noisy slot filling task,"['Guanting Dong', 'Jinxu Zhao', 'Tingfeng Hui', 'Daichi Guo', 'Wenlong Wan', 'Boqi Feng', 'Yueyan Qiu', 'Zhuoma Gongque', 'Keqing He', 'Zechen Wang', 'Weiran Xu']",http://arxiv.org/pdf/2310.06504v1.pdf,2023-10-10,," With the increasing capabilities of large language models (LLMs), these high-performance models have achieved state-of-the-art results on a wide range of natural language processing (NLP) tasks. However, the models' performance on commonly-used benchmark datasets often fails to accurately reflect their reliability and robustness when applied to real-world noisy data.
To address these challenges, we propose a unified robustness evaluation framework based on the slot-filling task to systematically evaluate the dialogue understanding capability of LLMs in diverse input perturbation scenarios. Specifically, we construct an input perturbation evaluation dataset, Noise-LLM, which contains five types of single perturbation and four types of mixed perturbation data. Furthermore, we utilize a multi-level data augmentation method (character, word, and sentence levels) to construct a candidate data pool, and carefully design two ways of automatic task demonstration construction strategies (instance-level and entity-level) with various prompt templates. Our aim is to assess how well various robustness methods of LLMs perform in real-world noisy scenarios. The experiments have demonstrated that the current open-source LLMs generally achieve limited perturbation robustness performance. Based on these experimental observations, we make some forward-looking suggestions to fuel the research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, -1351,do language models learn about legal entity types during pretraining,"['Claire Barale', 'Michael Rovatsos', 'Nehal Bhuta']",http://arxiv.org/pdf/2310.13092v1.pdf,2023-10-19,," Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been limited research conducted on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, serving as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and a foundational task to numerous downstream legal NLP applications. Through systematic evaluation and analysis and two types of prompting (cloze sentences and QA-based templates) and to clarify the nature of these acquired cues, we compare diverse types and lengths of entities both general and domain-specific entities, semantics or syntax signals, and different LM pretraining corpus (generic and legal-oriented) and architectures (encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpus, (3) LMs demonstrate the ability to type entities even in the case of multi-token entities, (4) all models struggle with entities belonging to sub-domains of the law (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.",,arXiv,['cs.cl'],, -1352,llamarec twostage recommendation using large language models for ranking,"['Zhenrui Yue', 'Sara Rabhi', 'Gabriel de Souza Pereira Moreira', 'Dong Wang', 'Even Oldridge']",http://arxiv.org/pdf/2311.02089v1.pdf,2023-10-25,," Recently, large language models (LLMs) have exhibited significant progress in language understanding and generation. By leveraging textual features, customized LLMs are also applied for recommendation and demonstrate improvements across diverse recommendation scenarios. Yet the majority of existing methods perform training-free recommendation that heavily relies on pretrained knowledge (e.g., movie recommendation). In addition, inference on LLMs is slow due to autoregressive generation, rendering existing methods less effective for real-time recommendation.
As such, we propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec). In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history. Then, both history and retrieved items are fed to the LLM in text via a carefully designed prompt template. Instead of generating next-item titles, we adopt a verbalizer-based approach that transforms output logits into probability distributions over the candidate items. Therefore, the proposed LlamaRec can efficiently rank items without generating long text. To validate the effectiveness of the proposed framework, we compare against state-of-the-art baseline methods on benchmark datasets. Our experimental results demonstrate the performance of LlamaRec, which consistently achieves superior performance in both recommendation performance and efficiency.",,arXiv,"['cs.ir', 'cs.ai', 'cs.cl']",, -1353,alt towards finegrained alignment between language and ctr models for clickthrough rate prediction,"['Hangyu Wang', 'Jianghao Lin', 'Xiangyang Li', 'Bo Chen', 'Chenxu Zhu', 'Ruiming Tang', 'Weinan Zhang', 'Yong Yu']",http://arxiv.org/pdf/2310.19453v1.pdf,2023-10-30,," Click-through rate (CTR) prediction plays as a core function module in various personalized online services. According to the data modality and input format, the models for CTR prediction can be mainly classified into two categories. The first one is the traditional CTR models that take as inputs the one-hot encoded ID features of tabular modality, which aims to capture the collaborative signals via feature interaction modeling. The second category takes as inputs the sentences of textual modality obtained by hard prompt templates, where pretrained language models (PLMs) are adopted to extract the semantic knowledge. These two lines of research generally focus on different characteristics of the same input data (i.e., textual and tabular modalities), forming a distinct complementary relationship with each other. Therefore, in this paper, we propose to conduct fine-grained feature-level Alignment between Language and CTR models (ALT) for CTR prediction. Apart from the common CLIP-like instance-level contrastive learning, we further design a novel joint reconstruction pretraining task for both masked language and tabular modeling. Specifically, the masked data of one modality (i.e., tokens or features) has to be recovered with the help of the other modality, which establishes the feature-level interaction and alignment via sufficient mutual information extraction between dual modalities. Moreover, we propose three different finetuning strategies with the option to train the aligned language and CTR models separately or jointly for downstream CTR prediction tasks, thus accommodating the varying efficacy and efficiency requirements for industrial applications. Extensive experiments on three real-world datasets demonstrate that ALT outperforms SOTA baselines, and is highly compatible for various language and CTR models.",,arXiv,"['cs.ir', 'cs.ai']",, -1354,"large language model is not a good fewshot information extractor, but a good reranker for hard samples!","['Yubo Ma', 'Yixin Cao', 'YongChing Hong', 'Aixin Sun']",http://arxiv.org/pdf/2303.08559,2023-03-15,,"Large Language Models (LLMs) have made remarkable strides in various tasks. Whether LLMs are competitive few-shot solvers for information extraction (IE) tasks, however, remains an open problem. In this work, we aim to provide a thorough answer to this question.
Through extensive experiments on nine datasets across four IE tasks, we demonstrate that current advanced LLMs consistently exhibit inferior performance, higher latency, and increased budget requirements compared to fine-tuned SLMs under most settings. Therefore, we conclude that LLMs are not effective few-shot information extractors in general. Nonetheless, we illustrate that with appropriate prompting strategies, LLMs can effectively complement SLMs and tackle challenging samples that SLMs struggle with. And moreover, we propose an adaptive filter-then-rerank paradigm to combine the strengths of LLMs and SLMs. In this paradigm, SLMs serve as filters and LLMs serve as rerankers. By prompting LLMs to rerank a small portion of difficult samples identified by SLMs, our preliminary system consistently achieves promising improvements (2.4% F1-gain on average) on various IE tasks, with an acceptable time and cost investment.",0100785773b8217c44606ab260e3212f93b0a4fd,Semantic Scholar,,somewhat relevant,"The paper describes the use of LLMs that are prompted to extract values or synthesize code, which demonstrates an application of prompting techniques in NLP tasks." -1355,when do programofthoughts work for reasoning,"['Zhen Bi', 'Ningyu Zhang', 'Yinuo Jiang', 'Shumin Deng', 'Guozhou Zheng', 'Huajun Chen']",https://arxiv.org/pdf/2308.15452,2023-08-29,,"In the realm of embodied artificial intelligence, the reasoning capabilities of Large Language Models (LLMs) play a pivotal role. Although there are effective methods like program-of-thought prompting for LLMs which uses programming language to tackle complex reasoning tasks, the specific impact of code data on the improvement of reasoning capabilities remains under-explored. To address this gap, we propose complexity-impacted reasoning score (CIRS), which combines structural and logical attributes, to measure the correlation between code and reasoning abilities. Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity by considering the difficulty and the cyclomatic complexity. Through an empirical analysis, we find not all code data of complexity can be learned or understood by LLMs. Optimal level of complexity is critical to the improvement of reasoning abilities by program-aided prompting. Then we design an auto-synthesizing and stratifying algorithm, and apply it to instruction generation for mathematical reasoning and code data filtering for code generation tasks. Extensive results demonstrates the effectiveness of our proposed approach. Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.",023f0045686f86332a26856f8d8c3203566925ad,Semantic Scholar,,highly relevant,"The paper discusses using prompt tuning on large pre-trained visual language models to achieve compositional zero-shot learning, which is directly related to prompt engineering." -1356,retrieving supporting evidence for generative question answering,"['Siqing Huo', 'Negar Arabzadeh', 'Charles L. A. Clarke']",https://arxiv.org/pdf/2309.11392,2023-09-20,,"Current large language models (LLMs) can exhibit near-human levels of performance on many natural language-based tasks, including open-domain question answering. Unfortunately, at this time, they also convincingly hallucinate incorrect answers, so that responses to questions must be verified against external sources before they can be accepted at face value. 
In this paper, we report two simple experiments to automatically validate generated answers against a corpus. We base our experiments on questions and passages from the MS MARCO (V1) test collection, and a retrieval pipeline consisting of sparse retrieval, dense retrieval and neural rerankers. In the first experiment, we validate the generated answer in its entirety. After presenting a question to an LLM and receiving a generated answer, we query the corpus with the combination of the question + generated answer. We then present the LLM with the combination of the question + generated answer + retrieved answer, prompting it to indicate if the generated answer can be supported by the retrieved answer. In the second experiment, we consider the generated answer at a more granular level, prompting the LLM to extract a list of factual statements from the answer and verifying each statement separately. We query the corpus with each factual statement and then present the LLM with the statement and the corresponding retrieved evidence. The LLM is prompted to indicate if the statement can be supported and make necessary edits using the retrieved material. With an accuracy of over 80%, we find that an LLM is capable of verifying its generated answer when a corpus of supporting material is provided. However, manual assessment of a random sample of questions reveals that incorrect generated answers are missed by this verification process. While this verification process can reduce hallucinations, it can not entirely eliminate them.",0630a18fe3fe4765132ad52a591f9776cf3284bf,Semantic Scholar,,highly relevant,"The paper specifically involves the development and usage of prompts to extract clinical information using a large language model, which is directly related to prompt engineering." -1357,can large language models write good propertybased tests,"['Vasudev Vikram', 'Caroline Lemieux', 'Rohan Padhye']",https://arxiv.org/pdf/2307.04346,2023-07-10,,"Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for property-based tests. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we explore the potential of using LLMs to synthesize property-based tests. We call our approach PBT-GPT, and propose three different strategies of prompting the LLM for PBT. We characterize various failure modes of PBT-GPT and detail an evaluation methodology for automatically synthesized property-based tests. PBT-GPT achieves promising results in our preliminary studies on sample Python library APIs in $\texttt{numpy}$, $\texttt{networkx}$, and $\texttt{datetime}$.",16707317eb7f71b1b4d47f27d703a2cdb5142baf,Semantic Scholar,,somewhat relevant,"The paper discusses using prompting techniques to generate weak financial sentiment labels with a large language model, which is related to prompt engineering." -1358,conavgpt multirobot cooperative visual semantic navigation using large language models,"['Bangguo Yu', 'H. 
Kasaei', 'Ming Cao']",https://arxiv.org/pdf/2310.07937,2023-10-11,,"In advanced human-robot interaction tasks, visual target navigation is crucial for autonomous robots navigating unknown environments. While numerous approaches have been developed in the past, most are designed for single-robot operations, which often suffer from reduced efficiency and robustness due to environmental complexities. Furthermore, learning policies for multi-robot collaboration are resource-intensive. To address these challenges, we propose Co-NavGPT, an innovative framework that integrates Large Language Models (LLMs) as a global planner for multi-robot cooperative visual target navigation. Co-NavGPT encodes the explored environment data into prompts, enhancing LLMs' scene comprehension. It then assigns exploration frontiers to each robot for efficient target search. Experimental results on Habitat-Matterport 3D (HM3D) demonstrate that Co-NavGPT surpasses existing models in success rates and efficiency without any learning process, demonstrating the vast potential of LLMs in multi-robot collaboration domains. The supplementary video, prompts, and code can be accessed via the following link: \href{https://sites.google.com/view/co-navgpt}{https://sites.google.com/view/co-navgpt}.",16ecaa7cf142605331fc21c9be73c7b13e8c1acd,Semantic Scholar,,highly relevant,"The paper explores how the structure of prompts affects the performance of large language models in dialog evaluation tasks, which is directly related to prompt engineering." -1359,retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain,"['Chunxi Guo', 'Zhiliang Tian', 'Jintao Tang', 'Shasha Li', 'Zhihua Wen', 'Kaixuan Wang', 'Ting Wang']",https://arxiv.org/pdf/2307.05074,2023-07-11,,"Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases. Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL. However, it faces challenges with strict SQL syntax requirements. Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large. In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain. Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question. To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval. Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions. To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL. 
Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.",191e300e381d4128b749d16fe3d83c8643a3bd1f,Semantic Scholar,,highly relevant,"The paper discusses improving zero-shot chain-of-thought reasoning by introducing advanced prompting strategies, which is directly related to the topic of hard prefix prompt engineering." -1360,roco dialectic multirobot collaboration with large language models,"['Zhao Mandi', 'Shreeya Jain', 'Shuran Song']",https://arxiv.org/pdf/2307.04738,2023-07-10,,"We propose a novel approach to multi-robot collaboration that harnesses the power of pre-trained large language models (LLMs) for both high-level communication and low-level path planning. Robots are equipped with LLMs to discuss and collectively reason task strategies. They then generate sub-task plans and task space waypoint paths, which are used by a multi-arm motion planner to accelerate trajectory planning. We also provide feedback from the environment, such as collision checking, and prompt the LLM agents to improve their plan and waypoints in-context. For evaluation, we introduce RoCoBench, a 6-task benchmark covering a wide range of multi-robot collaboration scenarios, accompanied by a text-only dataset for agent representation and reasoning. We experimentally demonstrate the effectiveness of our approach -- it achieves high success rates across all tasks in RoCoBench and adapts to variations in task semantics. Our dialog setup offers high interpretability and flexibility -- in real world experiments, we show RoCo easily incorporates human-in-the-loop, where a user can communicate and collaborate with a robot agent to complete tasks together. See project website https://project-roco.github.io for videos and code.",19e5b780a2dd1ffa1962e392976308b9fe644c7f,Semantic Scholar,,highly relevant,"The paper mentions using a chain-of-thought prompting approach to generate responses, which constitutes a form of prompt engineering, especially as it relates to tailoring prompts to user status." -1361,regionblip a unified multimodal pretraining framework for holistic and regional comprehension,"['Qiang Zhou', 'Chaohui Yu', 'Shaofeng Zhang', 'Sitong Wu', 'Zhibin Wang', 'Fan Wang']",https://arxiv.org/pdf/2308.02299,2023-08-03,,"In this work, we investigate extending the comprehension of Multi-modal Large Language Models (MLLMs) to regional objects. To this end, we propose to extract features corresponding to regional objects as soft prompts for LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning. To effectively extract regional features from regular image features and irregular point cloud features, we present a novel and unified position-assisted feature extraction module. Furthermore, training an MLLM from scratch is highly time-consuming. Thus, we propose incrementally extending existing pre-trained MLLMs to comprehend more modalities and the regional objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2, an impressive MLLM, and optimize the modality-specific Lora parameters in Q-Former and LLM for each newly introduced modality. The freezing of the Q-Former eliminates the need for extensive pre-training on massive image-text data. The freezed Q-Former pre-trained from massive image-text data is also beneficial for the pre-training on image-region-text data. We name our framework RegionBLIP. 
We pre-train RegionBLIP on image-region-text, point-cloud-text, and point-cloud-region-text data. Experimental results verify that \Ours{} can preserve the image comprehension capability of BILP-2 and further gain a comprehension of the newly introduced point cloud modality and regional objects. The Data, Code, and Pre-trained models will be available at https://github.com/mightyzau/RegionBLIP.",1ee8c8dd9d04247515b33775532b72df7b8ec0f3,Semantic Scholar,,highly relevant,"The paper discusses the process of creating prompts to guide a large language model in annotation tasks, which is a component of prompt engineering." -1362,rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought,"['Tianci Xue', 'Ziqi Wang', 'Zhenhailong Wang', 'Chi Han', 'Pengfei Yu', 'Heng Ji']",https://arxiv.org/pdf/2305.11499,2023-05-19,,"Large language Models (LLMs) have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use coarse-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCoT (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and rectifying factual inconsistency in LLMs, generated solutions. To detect factual inconsistency, RCoT first asks LLMs to reconstruct the problem based on generated solutions. Then fine-grained comparisons between the original problem and the reconstructed problem expose the factual inconsistency in the original solutions. To rectify the solution, RCoT formulates detected factual inconsistency into fine-grained feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCoT over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT reaches 94.6% accuracy on GSM8K), encouraging the community to further explore the fine-grained feedback generation methods.",22d5459d1f47341b355feeb1becc37208d6ec365,Semantic Scholar,,highly relevant,"The abstract mentions the use of 'abstracted prompting procedures' which indicates a focus on prompting mechanisms in LLMs, relevant to the topic of prompt engineering." -1363,language models enable simple systems for generating structured views of heterogeneous data lakes,"['Simran Arora', 'Brandon Yang', 'Sabri Eyuboglu', 'A. Narayan', 'Andrew Hojel', 'Immanuel Trummer', 'Christopher Ré']",http://arxiv.org/pdf/2304.09433,2023-04-19,,"A long standing goal of the data management community is to develop general, automated systems that ingest semi-structured documents and output queryable tables without human effort or domain specific customization. Given the sheer variety of potential documents, state-of-the art systems make simplifying assumptions and use domain specific training. In this work, we ask whether we can maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad data, can perform diverse downstream tasks simply conditioned on natural language task descriptions. We propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. 
We identify two fundamentally different strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction. Our key insight is to generate many candidate functions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only outperforms the state-of-the art systems, but does so using a sublinear pass over the documents with the LLM. This equates to a 110x reduction in the number of tokens the LLM needs to process, averaged across 16 real-world evaluation settings of 10k documents each.",2ef1c2438c3a4552db9e7080e15d8c51bc071f58,Semantic Scholar,,highly relevant,"The paper is highly relevant as it directly deals with the concept of prompting, specifically the use of natural language prompts to modulate the functionalities of LLMs and the risks associated with adversarial prompting." -1364,prompting languageinformed distribution for compositional zeroshot learning,"['Wentao Bao', 'Lichang Chen', 'Heng Huang', 'Yu Kong']",https://arxiv.org/pdf/2305.14428,2023-05-23,,"Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to the prompt tuning on large pre-trained visual language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, i.e., state and object, are not properly addressed in existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, aka., PLID, for the CZSL task. Specifically, the PLID leverages pre-trained large language models (LLM) to 1) formulate the language-informed class distributions which are diverse and informative, and 2) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module and a stochastic logit mixup (SLM) strategy are proposed to dynamically fuse the decisions from the compositional and the primitive logit space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization. Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.",2ff69c238e26c473a6d8bcbb9292ded74d7fd1c2,Semantic Scholar,,highly relevant,"The paper details the optimization of prompts for multilingual performance in language models, relevant to prompt engineering." -1365,prompt middleware mapping prompts for large language models to ui affordances,"['S. 
Macneil', 'Andrew Tran', 'Joanne Kim', 'Ziheng Huang', 'Seth Bernstein', 'Dan Mogil']",http://arxiv.org/pdf/2307.01142,2023-07-03,,"To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.",34b35c89e192b5aa3118f667ce0a3cc0d89d82c3,Semantic Scholar,,highly relevant,The paper describes a novel prompting method called Chain-of-Knowledge (CoK) which explicitly relates to the design of prompts to elicit better responses from language models. -1366,developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer,"['Hyeon Seok Choi', 'Jun Yeong Song', 'Kyung Hwan Shin', 'Ji Hyun Chang', 'B. Jang']",https://www.e-roj.org/upload/pdf/roj-2023-00633.pdf,2023-09-01,,"Purpose We aimed to evaluate the time and cost of developing prompts using large language model (LLM), tailored to extract clinical factors in breast cancer patients and their accuracy. Materials and Methods We collected data from reports of surgical pathology and ultrasound from breast cancer patients who underwent radiotherapy from 2020 to 2022. We extracted the information using the Generative Pre-trained Transformer (GPT) for Sheets and Docs extension plugin and termed this the “LLM” method. The time and cost of developing the prompts with LLM methods were assessed and compared with those spent on collecting information with “full manual” and “LLM-assisted manual” methods. To assess accuracy, 340 patients were randomly selected, and the extracted information by LLM method were compared with those collected by “full manual” method. Results Data from 2,931 patients were collected. We developed 12 prompts for Extract function and 12 for Format function to extract and standardize the information. The overall accuracy was 87.7%. For lymphovascular invasion, it was 98.2%. Developing and processing the prompts took 3.5 hours and 15 minutes, respectively. Utilizing the ChatGPT application programming interface cost US $65.8 and when factoring in the estimated wage, the total cost was US $95.4. In an estimated comparison, “LLM-assisted manual” and “LLM” methods were time- and cost-efficient compared to the “full manual” method. Conclusion Developing and facilitating prompts for LLM to derive clinical factors was efficient to extract crucial information from huge medical records. 
This study demonstrated the potential of the application of natural language processing using LLM model in breast cancer patients. Prompts from the current study can be re-used for other research to collect clinical information.",35d855c49334ef1b8f945f13e9bc84868dab55c9,Semantic Scholar,,highly relevant,"The paper describes the use of engineered prompts with a Large Language Model to simulate human responses to influential inputs, which directly pertains to the application of prompt engineering." -1367,sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation,"['Chen Dun', 'Mirian Hipolito Garcia', 'Guoqing Zheng', 'A. Awadallah', 'Anastasios Kyrillidis', 'Robert Sim']",https://arxiv.org/pdf/2310.02842,2023-10-04,,"Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new -- but often individual -- downstream tasks. Thus, how one would expand prompt tuning to handle -- concomitantly -- heterogeneous tasks and data distributions is a widely open question. To address this gap, we suggest the use of \emph{Mixture of Prompts}, or MoPs, associated with smart gating functionality: the latter -- whose design is one of the contributions of this paper -- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task. Additionally, MoPs are empirically agnostic to any model compression technique applied -- for efficiency reasons -- as well as instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training""interference""in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations. As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario.",45ee010607cad91728ae7fbad6cce3d805b93526,Semantic Scholar,,highly relevant,"The paper discusses the use of carefully crafted prompts to generate richer descriptions for text-based action generation, which indicates a use of prompt engineering in the context of LLMs." -1368,prompt sapper llmempowered software engineering infrastructure for ainative services,"['Zhenchang Xing', 'Qing Huang', 'Yu Cheng', 'Liming Zhu', 'Qinghua Lu', 'Xiwei Xu']",http://arxiv.org/pdf/2306.02230,2023-06-04,,"Foundation models, such as GPT-4, DALL-E have brought unprecedented AI""operating system""effect and new forms of human-AI interaction, sparking a wave of innovation in AI-native services, where natural language prompts serve as executable""code""directly (prompt as executable code), eliminating the need for programming language as an intermediary and opening up the door to personal AI. Prompt Sapper has emerged in response, committed to support the development of AI-native services by AI chain engineering. It creates a large language model (LLM) empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence, unleashing the AI innovation potential of every individual, and forging a future where everyone can be a master of AI innovation. 
This article will introduce the R\&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.",486a8c8655b81c7f87ff257141466ec1186d4aea,Semantic Scholar,,somewhat relevant,"The abstract mentions sourcing training material for a retrieval model from an LLM using prompting, which implies the use of prompt engineering techniques." -1369,human emotion knowledge representation emerges in large language model and supports discrete emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhi-Yun Liu', 'Dan Zhang']",https://arxiv.org/pdf/2302.09582,,,"How humans infer discrete emotions is a fundamental research question in the field of psychology. While conceptual knowledge about emotions (emotion knowledge) has been suggested to be essential for emotion inference, evidence to date is mostly indirect and inconclusive. As the large language models (LLMs) have been shown to support effective representations of various human conceptual knowledge, the present study further employed artificial neurons in LLMs to investigate the mechanism of human emotion inference. With artificial neurons activated by prompts, the LLM (RoBERTa) demonstrated a similar conceptual structure of 27 discrete emotions as that of human behaviors. Furthermore, the LLM-based conceptual structure revealed a human-like reliance on 14 underlying conceptual attributes of emotions for emotion inference. Most importantly, by manipulating attribute-specific neurons, we found that the corresponding LLM's emotion inference performance deteriorated, and the performance deterioration was correlated to the effectiveness of representations of the conceptual attributes on the human side. Our findings provide direct evidence for the emergence of emotion knowledge representation in large language models and suggest its casual support for discrete emotion inference. # These authors contributed equally: liming16@tsinghua.org.cn, yushengsu.thu@gmail.com * Corresponding authors: {liuzy, dzhang}@tsinghua.edu.cn The source code can be obtained from https://github.com/thunlp/Model_Emotion.",4a8fe7ecf225e5bada08642fcd77d3cbb322b967,Semantic Scholar,,somewhat relevant,"The paper discusses the use of context-aware prompting of LLMs to rewrite queries during the training phase, which is a form of prompt engineering." -1370,what do llms know about financial markets a case study on reddit market sentiment analysis,"['Xiang Deng', 'Vasilisa Bashlovkina', 'Feng Han', 'Simon Baumgartner', 'Michael Bendersky']",http://arxiv.org/pdf/2212.11311,2022-12-21,,"Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. Instead, we approach this problem using semi-supervised learning with a large language model (LLM). Our pipeline generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while using a regression loss further improves distillation quality. 
With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model’s competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.",52136f813243ac3de8e277906112a41590a376d4,Semantic Scholar,,highly relevant,"The paper focuses on enhancing LLMs' contextual faithfulness using carefully designed prompting strategies without additional training, which aligns with the topic of prompt engineering." -1371,understanding the effectiveness of very large language models on dialog evaluation,"['Jessica Huynh', 'Cathy Jiao', 'Prakhar Gupta', 'Shikib Mehri', 'Payal Bajaj', 'Vishrav Chaudhary', 'M. Eskénazi']",http://arxiv.org/pdf/2301.12004,2023-01-27,,"Language models have steadily increased in size over the past few years. They achieve a high level of performance on various natural language processing (NLP) tasks such as question answering and summarization. Large language models (LLMs) have been used for generation and can now output human-like text. Due to this, there are other downstream tasks in the realm of dialog that can now harness the LLMs' language understanding capabilities. Dialog evaluation is one task that this paper will explore. It concentrates on prompting with LLMs: BLOOM, OPT, GPT-3, Flan-T5, InstructDial and TNLGv2. The paper shows that the choice of datasets used for training a model contributes to how well it performs on a task as well as on how the prompt should be structured. Specifically, the more diverse and relevant the group of datasets that a model is trained on, the better dialog evaluation performs. This paper also investigates how the number of examples in the prompt and the type of example selection used affect the model's performance.",5882dd04d95c9c88cdec389059fcf44d56cbb789,Semantic Scholar,,somewhat relevant,"The paper describes a retrieval augmentation approach for constructing personalized prompts, which is relevant to prompt engineering, although the main focus appears to be personalization." -1372,planandsolve prompting improving zeroshot chainofthought reasoning by large language models,"['Lei Wang', 'Wanyu Xu', 'Yihuai Lan', 'Zhiqiang Hu', 'Yunshi Lan', 'R. Lee', 'Ee-Peng Lim']",http://arxiv.org/pdf/2305.04091,2023-05-06,,"Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, Few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual efforts, Zero-shot-CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan. To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting. We evaluate our proposed prompting strategy on ten datasets across three reasoning problems. 
The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.",62176de125738e3b95850d1227bac81fd646b78e,Semantic Scholar,,somewhat relevant,"The use of prompts for labeling with large language models in the context of medical wearable surveillance is mentioned, which implies a use-case for prompt engineering." -1373,chainofthought prompting for responding to indepth dialogue questions with llm,"['Hongru Wang', 'Rui Wang', 'Fei Mi', 'Zezhong Wang', 'Rui-Lan Xu', 'Kam-Fai Wong']",http://arxiv.org/pdf/2305.11792,,,"The way and content in which users ask questions can provide insight into their current status, including their personality, emotions, and psychology. Instead of directly prompting the large language models (LLMs), we explore how chain-of-thought prompting helps in this scenario to perform reasoning and planning according to user status, aiming to provide a more personalized and engaging experience for the user query. To this end, we first construct a benchmark of 6 dialogue or question-answering datasets in both English and Chinese, covering 3 different aspects of user status ( including personality , emotion , and psychology ). Then we prompt the LLMs to generate the response regarding the user status as intermediate reasoning processing. We propose a novel demonstration selection strategy using the semantic similarity of intermediate reasoning instead of test queries. To evaluate the effectiveness and robustness of our approach, we conduct extensive experiments with 7 LLMs under zero-shot and one-shot settings. The experimental results show that our approach consistently outperforms standard prompting in terms of both helpfulness and acceptness across all datasets, regardless of the LLMs used. The code and dataset can be found at https://github.com/ruleGreen/ Dialogue_CoT.git .",70916fbeb446ab7dc811ab74b193365d789bf1eb,Semantic Scholar,,highly relevant,"The paper studies the impact of different types of CoT prompt modifications on the performance of GPT-3, which is directly related to prompt engineering for large language models." -1374,annollm making large language models to be better crowdsourced annotators,"['Xingwei He', 'Zheng-Wen Lin', 'Yeyun Gong', 'Alex Jin', 'Hang Zhang', 'Chen Lin', 'Jian Jiao', 'S. Yiu', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2303.16854,2023-03-30,,"Many natural language processing (NLP) tasks rely on labeled data to train machine learning models to achieve high performance. However, data annotation can be a time-consuming and expensive process, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator by providing them with sufficient guidance and demonstrated examples. To make LLMs to be better annotators, we propose a two-step approach, 'explain-then-annotate'. 
To be more precise, we begin by creating prompts for every demonstrated example, which we subsequently utilize to prompt a LLM to provide an explanation for why the specific ground truth answer/label was chosen for that particular example. Following this, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data. We conduct experiments on three tasks, including user input and keyword relevance assessment, BoolQ and WiC. The annotation results from GPT-3.5 surpasses those from crowdsourced annotation for user input and keyword relevance assessment. Additionally, for the other two tasks, GPT-3.5 achieves results that are comparable to those obtained through crowdsourced annotation.",70da4fb798a86cbe8cad96c27ced0415885bbd9d,Semantic Scholar,,highly relevant,"The paper discusses 'prompt-tuning large language models' which clearly falls under the domain of prompt engineering, even though it may not specify 'hard prefix' prompting." -1375,enhancing small medical learners with privacypreserving contextual prompting,"['Xinlu Zhang', 'SHIYANG LI', 'Xianjun Yang', 'Chenxin Tian', 'Yao Qin', 'Linda Petzold']",http://arxiv.org/pdf/2305.12723,2023-05-22,,"Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.",74b94891f8f7ac8d73d9df817b6720e1cb792bcc,Semantic Scholar,,highly relevant,The paper is highly relevant because it involves generating discriminative prompts with large language models as part of its proposed framework to improve zero-shot classification. -1376,corrpus codebased structured prompting for neurosymbolic story understanding,"['Yi Dong', 'Lara J. Martin', 'Chris Callison-Burch']",https://aclanthology.org/2023.findings-acl.832.pdf,2022-12-21,,"Story generation and understanding -- as with all NLG/NLU tasks -- has seen a surge in neurosymbolic work. Researchers have recognized that, while large language models (LLMs) have tremendous utility, they can be augmented with symbolic means to be even better and to make up for any flaws that the neural networks might have. However, symbolic methods are extremely costly in terms of the amount of time and expertise needed to create them. 
In this work, we capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking the state of stories and aiding in story understanding. We show that our CoRRPUS system and abstracted prompting procedures can beat current state-of-the-art structured LLM techniques on pre-existing story understanding tasks (bAbI Task 2 and Re^3) with minimal hand engineering. We hope that this work can help highlight the importance of symbolic representations and specialized prompting for LLMs as these models require some guidance for performing reasoning tasks properly.",76f54657eb0893a0b203da57dcf0b4fffeebfc2c,Semantic Scholar,,somewhat relevant,"The abstract indicates the use of 'task-related knowledge prompted from large language models', which suggests engagement with prompt engineering techniques." -1377,selfcheckgpt zeroresource blackbox hallucination detection for generative large language models,"['Potsawee Manakul', 'Adian Liusie', 'M. Gales']",https://arxiv.org/pdf/2303.08896,2023-03-15,,"Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose""SelfCheckGPT"", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.",7c1707db9aafd209aa93db3251e7ebd593d55876,Semantic Scholar,,highly relevant,"The paper discusses the development and optimization of meta-prompts for GPT models to improve systematic review processes, which is directly related to prompt engineering." -1378,can large language models truly understand prompts a case study with negated prompts,"['Joel Jang', 'Seonghyeon Ye', 'Minjoon Seo']",http://arxiv.org/pdf/2209.12711,2022-09-26,,"Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, but instead shows an inverse scaling law. 
We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT&GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap between the human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches of developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms",7ce0c89a452e3c2917b63847495533865697c79c,Semantic Scholar,,somewhat relevant,"The paper discusses Chain-of-Thought-Prompting in Large Language Models, which implies the application of prompting techniques, relevant to prompt engineering." -1379,the student becomes the master matching gpt3 on scientific factual error correction,"['D. Ashok', 'Atharva Kulkarni', 'Hai Pham', 'B. Póczos']",https://arxiv.org/pdf/2305.14707,,,"Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like Scientific Claim Correction, where good verification models do not always exist. In this work, we introduce a claim correction system that makes no domain assumptions and does not require a verifier but is able to outperform existing methods by an order of magnitude — achieving 94% correction accuracy on the SciFact dataset, and 62.5% on the SciFact-Open dataset, compared to the next best meth-ods 0.5% and 1.50% respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method is competitive with the very LLM that was used to generate the annotated dataset — with GPT3.5 achieving 89.5% and 60% correction accuracy on SciFact and SciFact-Open, despite using 1250 times as many parameters as our model.",80ae1347b2dda02748f8f09da8a738121f5edfb5,Semantic Scholar,,somewhat relevant,"The paper describes a new prompt learning mechanism to generate aspects, indicating a use of prompting as a technique, which is relevant to the area of prompt engineering." -1380,more than you've asked for a comprehensive analysis of novel prompt injection threats to applicationintegrated large language models,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'C. Endres', 'Thorsten Holz', 'Mario Fritz']",http://arxiv.org/pdf/2302.12173,,,"We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting . 
Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following . So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs ) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viabil-ity of our attacks, we implemented specific demonstrations",8fdd34153d1035d09dd4a6efa9cb0c91d23d0045,Semantic Scholar,,highly relevant,"The abstract describes the use of Zero-shot Dynamic Strategy Chain (DSC) prompting in conjunction with Large Language Models for text generation in the context of mental health support, which falls under the category of prompt engineering." -1381,boosting language models reasoning with chainofknowledge prompting,"['J. Wang', 'Qiushi Sun', 'Nuo Chen', 'Xiang Lorraine Li', 'Ming Gao']",https://arxiv.org/pdf/2306.06427,2023-06-10,,"Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks, which aims at designing a simple prompt like ``Let's think step by step'' or multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. However, the generated rationales often come with mistakes, making unfactual and unfaithful reasoning chains. To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structure triple. This is inspired by our human behaviors, i.e., we can draw a mind map or knowledge map as the reasoning evidence in the brain before answering a complex question. Benefiting from CoK, we additionally introduce a F^2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For the unreliable response, the wrong evidence can be indicated to prompt the LLM to rethink. Extensive experiments demonstrate that our method can further improve the performance of commonsense, factual, symbolic, and arithmetic reasoning tasks.",9efa81ec4954b0859c47dad8f42edfaf8bced69b,Semantic Scholar,,highly relevant,"The paper introduces Self-Critique prompting as a method to improve LLMs' responses to user instructions, which is a form of prompt engineering." -1382,susceptibility to influence of large language models,"['L. D. Griffin', 'Bennett Kleinberg', 'Maximilian Mozes', 'Kimberly T. Mai', 'Maria Vau', 'M. Caldwell', 'Augustine Marvor-Parker']",http://arxiv.org/pdf/2303.06074,2023-03-10,,"Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. 
The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier exposure to a statement (through, for example, rating its interest) boosts a later truthfulness test rating. Data was collected from 1000 human participants using an online experiment, and 1000 simulated participants using engineered prompts and LLM completion. 64 ratings per participant were collected, using all exposure-test combinations of the attributes: truth, interest, sentiment and importance. The results for human participants reconfirmed the ITE, and demonstrated an absence of effect for attributes other than truth, and when the same attribute is used for exposure and test. The same pattern of effects was found for LLM-simulated participants. The second study concerns a specific mode of influence - populist framing of news to increase its persuasion and political mobilization. Data from LLM-simulated participants was collected and compared to previously published data from a 15-country experiment on 7286 human participants. Several effects previously demonstrated from the human study were replicated by the simulated study, including effects that surprised the authors of the human study by contradicting their theoretical expectations (anti-immigrant framing of news decreases its persuasion and mobilization); but some significant relationships found in human data (modulation of the effectiveness of populist framing according to relative deprivation of the participant) were not present in the LLM data. Together the two studies support the view that LLMs have potential to act as models of the effect of influence.",ab90169f7213482efff246cc5f5f057351265f18,Semantic Scholar,,highly relevant,"The paper focuses on developing a prompt, which is a direct example of prompt engineering relevant to the topic." -1383,zerotop zeroshot taskoriented semantic parsing using large language models,"['Dheeraj Mekala', 'J. Wolfe', 'Subhro Roy']",http://arxiv.org/pdf/2212.10815,2022-12-21,,"We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on the publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. 
Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.",b8d06dd769f89d08bdd9997d7bd363c89ede845b,Semantic Scholar,,somewhat relevant,"The paper's core focus is on the systematic rectification of language models and does not address prompt engineering directly, but mention of 'prompt-based token elimination' suggests some relevance." -1384,large language models as batteriesincluded zeroshot esco skills matchers,"['Benjamin Clavié', ""Guillaume Souli'e""]",https://arxiv.org/pdf/2307.03539,2023-07-07,,"Understanding labour market dynamics requires accurately identifying the skills required for and possessed by the workforce. Automation techniques are increasingly being developed to support this effort. However, automatically extracting skills from job postings is challenging due to the vast number of existing skills. The ESCO (European Skills, Competences, Qualifications and Occupations) framework provides a useful reference, listing over 13,000 individual skills. However, skills extraction remains difficult and accurately matching job posts to the ESCO taxonomy is an open problem. In this work, we propose an end-to-end zero-shot system for skills extraction from job descriptions based on large language models (LLMs). We generate synthetic training data for the entirety of ESCO skills and train a classifier to extract skill mentions from job posts. We also employ a similarity retriever to generate skill candidates which are then re-ranked using a second LLM. Using synthetic data achieves an RP@10 score 10 points higher than previous distant supervision approaches. Adding GPT-4 re-ranking improves RP@10 by over 22 points over previous methods. We also show that Framing the task as mock programming when prompting the LLM can lead to better performance than natural language prompts, especially with weaker LLMs. We demonstrate the potential of integrating large language models at both ends of skills matching pipelines. Our approach requires no human annotations and achieve extremely promising results on skills extraction against ESCO.",c4f9f0cc8c138047a61bdb11b1a352e3d1aed035,Semantic Scholar,,highly relevant,"The paper mentions retrieval used in preparing prompts for large language models, which is directly related to prompt engineering." -1385,an empirical study on using large language models to analyze software supply chain security failures,"['Tanmay Singla', 'Dharun Anandayuvaraj', 'Kelechi G. Kalu', 'Taylor R. Schorlemmer', 'James C. Davis']",https://arxiv.org/pdf/2308.04898,2023-08-09,,"As we increasingly depend on software systems, the consequences of breaches in the software supply chain become more severe. High-profile cyber attacks like those on SolarWinds and ShadowHammer have resulted in significant financial and data losses, underlining the need for stronger cybersecurity. One way to prevent future breaches is by studying past failures. However, traditional methods of analyzing these failures require manually reading and summarizing reports about them. Automated support could reduce costs and allow analysis of more failures. Natural Language Processing (NLP) techniques such as Large Language Models (LLMs) could be leveraged to assist the analysis of failures. In this study, we assessed the ability of Large Language Models (LLMs) to analyze historical software supply chain breaches. 
We used LLMs to replicate the manual analysis of 69 software supply chain security failures performed by members of the Cloud Native Computing Foundation (CNCF). We developed prompts for LLMs to categorize these by four dimensions: type of compromise, intent, nature, and impact. GPT 3.5s categorizations had an average accuracy of 68% and Bard had an accuracy of 58% over these dimensions. We report that LLMs effectively characterize software supply chain failures when the source articles are detailed enough for consensus among manual analysts, but cannot yet replace human analysts. Future work can improve LLM performance in this context, and study a broader range of articles and failures.",c91f6eb320c70e2f64b6fb935494978a8699f06a,Semantic Scholar,,highly relevant,"The paper describes using LLMs such as ChatGPT for generating distractors for MCQs by prompting them with automatically retrieved questions and in-context examples, which is a direct application of prompt engineering." -1386,actiongpt leveraging largescale language models for improved and generalized action generation,"['Sai Shashank Kalakonda', 'Shubham Maheshwari', 'Ravi Kiran Sarvadevabhatla']",https://arxiv.org/pdf/2211.15603,2022-11-28,,"We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. We introduce a generic approach compatible with stochastic (e.g. VAE-based) and deterministic (e.g. MotionCLIP) text-to-motion models. In addition, the approach enables multiple text descriptions to be utilized. Our experiments show (i) noticeable qualitative and quantitative improvement in the quality of synthesized motions, (ii) benefits of utilizing multiple LLM-generated descriptions, (iii) suitability of the prompt function, and (iv) zero-shot generation capabilities of the proposed approach. Code and pretrained models are available at https://actiongpt.github.io.",cb2954127a7fce8ab84486765392ce95dcdd8175,Semantic Scholar,,somewhat relevant,"The paper deals with Prompting-based large language models and their application in multi-step QA, which is relevant to how prompts guide model behavior." -1387,soft prompt tuning for augmenting dense retrieval with large language models,"['Zhiyuan Peng', 'Xuyang Wu', 'Yihan Fang']",https://arxiv.org/pdf/2307.08303,2023-07-17,,"Dense retrieval (DR) converts queries and documents into dense embeddings and measures the similarity between queries and documents in vector space. One of the challenges in DR is the lack of domain-specific training data. While DR models can learn from large-scale public datasets like MS MARCO through transfer learning, evidence shows that not all DR models and domains can benefit from transfer learning equally. Recently, some researchers have resorted to large language models (LLMs) to improve the zero-shot and few-shot DR models. However, the hard prompts or human-written prompts utilized in these works cannot guarantee the good quality of generated weak queries. 
To tackle this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task, we leverage soft prompt-tuning to optimize a task-specific soft prompt on limited ground truth data and then prompt the LLMs to tag unlabeled documents with weak queries, yielding enough weak document-query pairs to train task-specific dense retrievers. We design a filter to select high-quality example document-query pairs in the prompt to further improve the quality of weak tagged queries. To the best of our knowledge, there is no prior work utilizing soft prompt tuning to augment DR models. The experiments demonstrate that SPTAR outperforms the unsupervised baselines BM25 and the recently proposed LLMs-based augmentation method for DR.",d44031f253668c61ac6d68b95bbe9cac57730d51,Semantic Scholar,,somewhat relevant,"The paper discusses the use of diegetic and non-diegetic prompts in the context of Large Language Models, which is relevant to prompt engineering as it explores user strategies in prompting AI models." -1388,on the planning abilities of large language models a critical investigation,"['Karthik Valmeekam', 'Matthew Marquez', 'S. Sreedharan', 'Subbarao Kambhampati']",http://arxiv.org/pdf/2305.15771,2023-05-25,,"Intrigued by the claims of emergent reasoning capabilities in LLMs trained on general web corpora, in this paper, we set out to investigate their planning capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs as a source of heuristic guidance for other agents (AI planners) in their planning tasks. We conduct a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluate LLMs in two distinct modes: autonomous and heuristic. Our findings reveal that LLMs' ability to generate executable plans autonomously is rather limited, with the best model (GPT-4) having an average success rate of ~12% across the domains. However, the results in the heuristic mode show more promise. In the heuristic mode, we demonstrate that LLM-generated plans can improve the search process for underlying sound planners and additionally show that external verifiers can help provide feedback on the generated plans and back-prompt the LLM for better plan generation.",dedfe929d182cc3537a9ed765d589b4735ce062a,Semantic Scholar,,highly relevant,"The study involves optimizing a prompt using a genetic algorithm, indicating an exploration of prompt engineering techniques." -1389,noise2music textconditioned music generation with diffusion models,"['Qingqing Huang', 'Daniel S. Park', 'Tao Wang', 'Timo I. Denk', 'Andy Ly', 'Nanxin Chen', 'Zhengdong Zhang', 'Zhishuai Zhang', 'Jiahui Yu', 'C. Frank', 'Jesse Engel', 'Quoc V. Le', 'William Chan', 'Weixiang Han']",http://arxiv.org/pdf/2302.03917,2023-02-08,,"We introduce Noise2Music, where a series of diffusion models is trained to generate high-quality 30-second music clips from text prompts. Two types of diffusion models, a generator model, which generates an intermediate representation conditioned on text, and a cascader model, which generates high-fidelity audio conditioned on the intermediate representation and possibly the text, are trained and utilized in succession to generate high-fidelity music. We explore two options for the intermediate representation, one using a spectrogram and the other using audio with lower fidelity. 
We find that the generated audio is not only able to faithfully reflect key elements of the text prompt such as genre, tempo, instruments, mood, and era, but goes beyond to ground fine-grained semantics of the prompt. Pretrained large language models play a key role in this story -- they are used to generate paired text for the audio of the training set and to extract embeddings of the text prompts ingested by the diffusion models. Generated examples: https://google-research.github.io/noise2music",02540ae926814f4b7972d3fa4dd33932fdc4b58b,Semantic Scholar,,highly relevant,"The paper provides a comprehensive examination of AI prompt engineering, discusses specific prompting strategies, and contributes a framework to the field, indicating a clear focus on prompt engineering as a subject matter." -1390,contextfaithful prompting for large language models,"['Wenxuan Zhou', 'Sheng Zhang', 'Hoifung Poon', 'Muhao Chen']",http://arxiv.org/pdf/2303.11315,2023-03-20,,"Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts. Code and data are released at https://github.com/wzhouad/context-faithful-llm.",12c826f4195da172b212a529f8fcf10cc79e35da,Semantic Scholar,,somewhat relevant,"The paper discusses prompt modifiers used in text-based generative art, which falls under the umbrella of prompt engineering techniques, making it relevant to the topic of prompt engineering." -1391,lamp when large language models meet personalization,"['Alireza Salemi', 'Sheshera Mysore', 'Michael Bendersky', 'Hamed Zamani']",http://arxiv.org/pdf/2304.11406,2023-04-22,,"This paper highlights the importance of personalization in the current state of natural language understanding and generation and introduces the LaMP benchmark -- a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three classification and four text generation tasks. We also propose a retrieval augmentation approach that retrieves personalized items from user profiles to construct personalized prompts for large language models. 
Our baseline zero-shot and fine-tuned model results indicate that LMs utilizing profile augmentation outperform their counterparts that do not factor in profile information.",17170575aa8b4fa4e3eef5d366ada706a94dd836,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompt engineering in conjunction with the CLIP model and text-to-image models, indicating a direct relation to the topic of prompt engineering." -1392,conal anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",http://arxiv.org/pdf/2211.15718,,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on OOD examples. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel labels, then generate examples from each novel class matching the task format. Second, we train our classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CoNAL, classifiers improve in their ability to detect and abstain on OOD examples over prior methods by an average of 2.3% AUAC and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.1",19da40fd01c711fb2b3b0b19b3956b86b75f575d,Semantic Scholar,,highly relevant,"The paper is focused on evaluating the effectiveness of different prompting approaches, which is directly related to the topic of prompt engineering, and mentions enhancing prompt engineering in future work." -1393,scalable approach to medical wearable postmarket surveillance,"['R. M. Yoo', 'B. T. Viggiano', 'K. Pundi', 'J. A. Fries', 'A. Zahedivash', 'T. Podchiyska', 'N. Din', 'N. H. Shah']",https://www.medrxiv.org/content/medrxiv/early/2023/11/15/2023.11.14.23298488.full.pdf,2023-11-15,,"Objective We sought to develop a weak supervision-based approach to demonstrate feasibility of post-market surveillance of wearable devices that render AF pre-diagnosis. Materials and Methods Two approaches were evaluated to reduce clinical note labeling overhead for creating a training set for a classifier: one using programmatic codes, and the other using prompts to large language models (LLMs). Probabilistically labeled notes were then used to fine-tune a classifier, which identified patients with AF pre-diagnosis mentions in a note. A retrospective cohort study was conducted, where the baseline characteristics and subsequent care patterns of patients identified by the classifier were compared against those who did not receive pre-diagnosis. Results Label model derived from prompt-based labeling heuristics using LLMs (precision = 0.67, recall = 0.83, F1 = 0.74) nearly achieved the performance of code-based heuristics (precision = 0.84, recall = 0.72, F1 = 0.77), while cutting down the cost to create a labeled training set. The classifier learned on the labeled notes accurately identified patients with AF pre-diagnosis (precision = 0.85, recall = 0.81, F1 = 0.83). 
Those patients who received pre-diagnosis exhibited different demographic and comorbidity characteristics, and were enriched for anticoagulation and eventual diagnosis of AF. At the index diagnosis, existence of pre-diagnosis did not stratify patients on clinical characteristics, but did correlate with anticoagulant prescription. Discussion and Conclusion Our work establishes the feasibility of an EHR-based surveillance system for wearable devices that render AF pre-diagnosis. Further work is necessary to generalize these findings for patient populations at other sites.",216555443355ac615598a99d2949711726a1c36f,Semantic Scholar,,highly relevant,"The paper discusses the use of prompting methods for learning a unified semantic space for different languages and tasks, focusing on multilingual prompt engineering, which is a direct application of prompt engineering concepts." -1394,"the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis","['Mehrdad Safaei', 'Justin Longo']",https://dl.acm.org/doi/pdf/10.1145/3604570,2023-08-18,,"Policy advising in government centers on the analysis of public problems and the developing of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information in content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing public policy relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP generated; human generated; and NLP generated / human edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically-generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically-generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis.",22b39e38e2fd52591ca23904b474eb19dc17b610,Semantic Scholar,,somewhat relevant,"The paper mentions the use of prompt engineering, specifically few-shot and chain-of-thought prompting, to guide recollection and clarify expectations in LLMs, directly relating to the topic of prompt engineering." 
-1395,xparade crosslingual textual entailment and information divergence across paragraphs,"['Juan Diego Rodriguez', 'Katrin Erk', 'Greg Durrett']",https://arxiv.org/pdf/2309.08873,2023-09-16,,"Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking. This problem becomes more complex when those two pieces of text are in different languages. Here, we introduce X-PARADE (Cross-lingual Paragraph-level Analysis of Divergences and Entailments), the first cross-lingual dataset of paragraph-level information divergences. Annotators label a paragraph in a target language at the span level and evaluate it with respect to a corresponding paragraph in a source language, indicating whether a given piece of information is the same, new, or new but can be inferred. This last notion establishes a link with cross-language NLI. Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild. Armed with our dataset, we investigate a diverse set of approaches for this problem, including classic token alignment from machine translation, textual entailment methods that localize their decisions, and prompting of large language models. Our results show that these methods vary in their capability to handle inferable information, but they all fall short of human performance.",300b01dc726fe8acbededd805501811d427920bd,Semantic Scholar,,highly relevant,"The paper discusses the use of prompt engineering for zero-shot/few-shot learning with GPT models, which is directly relevant to the topic of prompt engineering." -1396,stress testing chainofthought prompting for large language models,"['Aayush Mishra', 'Karan Thakkar']",https://arxiv.org/pdf/2309.16621,2023-09-28,,"This report examines the effectiveness of Chain-of-Thought (CoT) prompting in improving the multi-step reasoning abilities of large language models (LLMs). Inspired by previous studies \cite{Min2022RethinkingWork}, we analyze the impact of three types of CoT prompt perturbations, namely CoT order, CoT values, and CoT operators on the performance of GPT-3 on various tasks. Our findings show that incorrect CoT prompting leads to poor performance on accuracy metrics. Correct values in the CoT is crucial for predicting correct answers. Moreover, incorrect demonstrations, where the CoT operators or the CoT order are wrong, do not affect the performance as drastically when compared to the value based perturbations. This research deepens our understanding of CoT prompting and opens some new questions regarding the capability of LLMs to learn reasoning in context.",31ae42394959fb1a336886379a5527bec5c9c9c4,Semantic Scholar,,highly relevant,"The paper focuses on the effectiveness of prompt programming in the fine-tuning process of language models and involves testing prompt variations, which is directly related to prompt engineering." -1397,hierarchical prompting assists large language model on web navigation,"['Abishek Sridhar', 'Robert Lo', 'Frank F. Xu', 'Hao Zhu', 'Shuyan Zhou']",http://arxiv.org/pdf/2305.14257,2023-05-23,,"Large language models (LLMs) struggle on processing complicated observations in interactive decision making tasks. To alleviate this issue, we propose a simple hierarchical prompting approach. Diverging from previous prompting approaches that always put the full observation (e.g. 
a web page) to the prompt, we propose to first construct an action-aware observation which is more condensed and relevant with a dedicated SUMMARIZER prompt. The ACTOR prompt then predicts the next action based on the summarized observation. While our method has broad applicability, we particularly demonstrate its efficacy in the complex domain of web navigation where a full observation often contains redundant and irrelevant information. Our approach outperforms the previous state-of-the-art prompting mechanics by 6.2% on task success rate, demonstrating its potential on interactive decision making tasks with long observation traces.",3d8e6358968c8bd5e97f21fead73bf4ba0c2a8d7,Semantic Scholar,,highly relevant,"The paper describes the use of question-answer prompt engineering (QAPE) as part of a visual question answering system to improve semantic segmentation, so it directly involves prompt engineering." -1398,towards realistic zeroshot classification via self structural semantic alignment,"['Shengxiang Zhang', 'Muzammal Naseer', 'Guangyi Chen', 'Zhiqiang Shen', 'Salman A. Khan', 'Kun Zhang', 'F. Khan']",https://arxiv.org/pdf/2308.12960,2023-08-24,,"Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification. Despite the success, most traditional VLMs-based methods are restricted by the assumption of partial source supervision or ideal vocabularies, which rarely satisfy the open-world scenario. In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary. To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning. Our S^3A framework adopts a unique Cluster-Vote-Prompt-Realign (CVPR) algorithm, which iteratively groups unlabeled data to derive structural semantics for pseudo-supervision. Our CVPR process includes iterative clustering on images, voting within each cluster to identify initial class candidates from the vocabulary, generating discriminative prompts with large language models to discern confusing candidates, and realigning images and the vocabulary as structural semantic alignment. Finally, we propose to self-learn the CLIP image encoder with both individual and structural semantic alignment through a teacher-student learning strategy. Our comprehensive experiments across various generic and fine-grained benchmarks demonstrate that the S^3A method offers substantial improvements over existing VLMs-based approaches, achieving a more than 15% accuracy improvement over CLIP on average. Our codes, models, and prompts are publicly released at https://github.com/sheng-eatamath/S3A.",437cfee2a7f7beadf09ad712f71b3265740e44a0,Semantic Scholar,,somewhat relevant,"The paper briefly mentions prompt engineering in the context of the future use of web searching with artificial intelligence, suggesting that it contains some discussion on the topic." -1399,interacting with large language models a case study on aiaided brainstorming for guesstimation problems,"['Vildan Salikutluk', 'Dorothea Koert', 'F. Jäkel']",https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA230081,,,". Designing cooperative AI-systems that do not automate tasks but rather aid human cognition is challenging and requires human-centered design approaches. Here, we introduce AI-aided brainstorming for solving guesstimation problems, i.e. 
estimating quantities from incomplete information, as a testbed for human-AI interaction with large language models (LLMs). In a think-aloud study, we found that humans decompose guesstimation questions into sub-questions and often replace them with semantically related ones. If they fail to brainstorm related questions, they often get stuck and do not find a solution. Therefore, to support this brainstorming process, we prompted a large language model (GPT-3) with successful replacements from our think-aloud data. In follow-up studies, we tested whether the availability of this tool improves participants’ answers. While the tool successfully produced human-like suggestions, participants were reluctant to use it. From our findings, we conclude that for human-AI interaction with LLMs to be successful AI-systems must complement rather than mimic a user’s associations.",4f9e7eb2f009e30f15eca18f4e540915b637b603,Semantic Scholar,,highly relevant,"The paper discusses the use of automatic prompt engineering to construct prompts for large language models in the context of a database system, directly involving prompt engineering techniques." -1400,multiscript multimodal script learning for supporting open domain everyday tasks,"['Jingyuan Qi', 'Minqian Liu', 'Ying Shen', 'Zhiyang Xu', 'Lifu Huang']",https://arxiv.org/pdf/2310.04965,2023-10-08,,"Automatically generating scripts (i.e. sequences of key steps described in text) from video demonstrations and reasoning about the subsequent steps are crucial to the modern AI virtual assistants to guide humans to complete everyday tasks, especially unfamiliar ones. However, current methods for generative script learning rely heavily on well-structured preceding steps described in text and/or images or are limited to a certain domain, resulting in a disparity with real-world user scenarios. To address these limitations, we present a new benchmark challenge -- MultiScript, with two new tasks on task-oriented multimodal script learning: (1) multimodal script generation, and (2) subsequent step prediction. For both tasks, the input consists of a target task name and a video illustrating what has been done to complete the target task, and the expected output is (1) a sequence of structured step descriptions in text based on the demonstration video, and (2) a single text description for the subsequent step, respectively. Built from WikiHow, MultiScript covers multimodal scripts in videos and text descriptions for over 6,655 human everyday tasks across 19 diverse domains. To establish baseline performance on MultiScript, we propose two knowledge-guided multimodal generative frameworks that incorporate the task-related knowledge prompted from large language models such as Vicuna. Experimental results show that our proposed approaches significantly improve over the competitive baselines.",5ece96203cd1dc9ff3f99867faa451939d86d545,Semantic Scholar,,highly relevant,The paper focuses on the practice of prompt engineering within the context of text-based generative art and discusses its role in human creativity. -1401,development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews,"['Y. Kataoka', 'R. So', 'M. Banno', 'J. Kumasawa', 'H. Someko', 'S. Taito', 'T. Terasawa', 'Y. Tsujimoto', 'Y. Tsutsumi', 'Y. Wada', 'T. A. 
Furukawa']",https://www.medrxiv.org/content/medrxiv/early/2023/10/31/2023.10.31.23297818.full.pdf,2023-11-01,,"Systematic reviews (SRs) are a critical component of evidence-based medicine, but the process of screening titles and abstracts is time-consuming. This study aimed to develop and externally validate a method using large language models to classify abstracts for diagnostic test accuracy (DTA) systematic reviews, thereby reducing the human workload. We used a previously collected dataset for developing DTA abstract classifiers and applied prompt engineering. We developed an optimized meta-prompt for Generative Pre-trained Transformer (GPT)-3.5-turbo and GPT-4 to classify abstracts. In the external validation dataset 1, the prompt with GPT-3.5 turbo showed a sensitivity of 0.988, and a specificity of 0.298. GPT-4 showed a sensitivity of 0.982, and a specificity of 0.677. In the external validation dataset 2, GPT-3.5 turbo showed a sensitivity of 0.919, and a specificity of 0.434. GPT-4 showed a sensitivity of 0.806, and a specificity of 0.740. If we included eligible studies from among the references of the identified studies, GPT-3.5 turbo had no critical misses, while GPT-4 had some misses. Our study indicates that GPT-3.5 turbo can be effectively used to classify abstracts for DTA systematic reviews. Further studies using other dataset are warranted to confirm our results. Additionally, we encourage the use of our framework and publicly available dataset for further exploration of more effective classifiers using other LLMs and prompts (https://github.com/youkiti/ARE/).",6384921f1bd1059c6b4c37ac3c4e4f19e45d40c1,Semantic Scholar,,somewhat relevant,"The abstract mentions prompt engineering as an element of their approach to improve knowledge extraction for robotics, indicating relevance to the topic of prompt engineering." -1402,insertexpansions for toolenabled conversational agents,"['Andreas Göldi', 'Roman Rietsche']",https://arxiv.org/pdf/2307.01644,2023-07-04,,"This paper delves into an advanced implementation of Chain-of-Thought-Prompting in Large Language Models, focusing on the use of tools (or""plug-ins"") within the explicit reasoning paths generated by this prompting method. We find that tool-enabled conversational agents often become sidetracked, as additional context from tools like search engines or calculators diverts from original user intents. To address this, we explore a concept wherein the user becomes the tool, providing necessary details and refining their requests. Through Conversation Analysis, we characterize this interaction as insert-expansion - an intermediary conversation designed to facilitate the preferred response. We explore possibilities arising from this 'user-as-a-tool' approach in two empirical studies using direct comparison, and find benefits in the recommendation domain.",803a3dd98d72a9fe730f082f3364f9b1f9a0029a,Semantic Scholar,,highly relevant,"The paper directly investigates the effect of different prompt engineering strategies on ChatGPT's performance in medical scenarios, which is central to the study of hard prefix prompt engineering." -1403,prompt tuning large language models on personalized aspect extraction for recommendations,"['Pan Li', 'Yuyan Wang', 'Ed H. 
Chi', 'Minmin Chen']",http://arxiv.org/pdf/2306.01475,2023-06-02,,"Existing aspect extraction methods mostly rely on explicit or ground truth aspect information, or using data mining or machine learning approaches to extract aspects from implicit user feedback such as user reviews. It however remains under-explored how the extracted aspects can help generate more meaningful recommendations to the users. Meanwhile, existing research on aspect-based recommendations often relies on separate aspect extraction models or assumes the aspects are given, without accounting for the fact the optimal set of aspects could be dependent on the recommendation task at hand. In this work, we propose to combine aspect extraction together with aspect-based recommendations in an end-to-end manner, achieving the two goals together in a single framework. For the aspect extraction component, we leverage the recent advances in large language models and design a new prompt learning mechanism to generate aspects for the end recommendation task. For the aspect-based recommendation component, the extracted aspects are concatenated with the usual user and item features used by the recommendation model. The recommendation task mediates the learning of the user embeddings and item embeddings, which are used as soft prompts to generate aspects. Therefore, the extracted aspects are personalized and contextualized by the recommendation task. We showcase the effectiveness of our proposed method through extensive experiments on three industrial datasets, where our proposed framework significantly outperforms state-of-the-art baselines in both the personalized aspect extraction and aspect-based recommendation tasks. In particular, we demonstrate that it is necessary and beneficial to combine the learning of aspect extraction and aspect-based recommendation together. We also conduct extensive ablation studies to understand the contribution of each design component in our framework.",8a4320fd903677a3ea2bf606a6537b59885b1108,Semantic Scholar,,somewhat relevant,"The abstract indicates that prompt engineering may further augment automated data extraction, directly relating to the process of improving model interactions through prompts." -1404,automatic chain of thought prompting in large language models,"['Zhuosheng Zhang', 'Aston Zhang', 'Mu Li', 'Alexander J. Smola']",http://arxiv.org/pdf/2210.03493,2022-10-07,,"Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like""Let's think step by step""to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the""Let's think step by step""prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. 
It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot",90350aa626bed47b02d0c162462e5b0ca82be6b2,Semantic Scholar,,highly relevant,The paper mentions the importance of careful prompt engineering with LLMs and implies direct interactions with LLMs through prompts. -1405,harnessing the power of adversarial prompting and large language models for robust hypothesis generation in astronomy,"['I. Ciucă', 'Y. Ting', 'S. Kruk', 'K. Iyer']",http://arxiv.org/pdf/2306.11648,2023-06-20,,"This study investigates the application of Large Language Models (LLMs), specifically GPT-4, within Astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domain-specific literature. Our findings point towards a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.",91099bbb96133c70db091041900ecff502a5e3a8,Semantic Scholar,,highly relevant,The paper clearly describes the use of manually crafted prompts and a method for learning task-relevant prompts which is relevant to the topic of prompt engineering. -1406,tabllm fewshot classification of tabular data with large language models,"['S. Hegselmann', 'Alejandro Buendia', 'Hunter Lang', 'Monica Agrawal', 'Xiaoyi Jiang', 'D. Sontag']",http://arxiv.org/pdf/2210.10723,2022-10-19,,"We study the application of large language models to zero-shot and few-shot classification of tabular data. We prompt the large language model with a serialization of the tabular data to a natural-language string, together with a short description of the classification problem. In the few-shot setting, we fine-tune the large language model using some labeled examples. We evaluate several serialization methods including templates, table-to-text models, and large language models. Despite its simplicity, we find that this technique outperforms prior deep-learning-based tabular classification methods on several benchmark datasets. In most cases, even zero-shot classification obtains non-trivial performance, illustrating the method's ability to exploit prior knowledge encoded in large language models. Unlike many deep learning methods for tabular datasets, this approach is also competitive with strong traditional baselines like gradient-boosted trees, especially in the very-few-shot setting.",9dcee248452d84b6bf26911ba6726ae5ce1a46f3,Semantic Scholar,,highly relevant,"The paper presents a new prompting technique to improve few-shot learning, which is directly related to the concept of prompt engineering." 
-1407,spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization,"['Yu-Neng Chuang', 'Ruixiang Tang', 'Xiaoqian Jiang', 'Xia Hu']",https://arxiv.org/pdf/2303.13035,,,"Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes. These records are crucial for enabling healthcare providers to make well-informed decisions regarding patient care. Summarizing clinical notes further assists healthcare professionals in pinpointing potential health risks and making better-informed decisions. This process contributes to reducing errors and enhancing patient outcomes by ensuring providers have access to the most pertinent and current patient data. Recent research has shown that incorporating prompts with large language models (LLMs) substantially boosts the efficacy of summarization tasks. However, we show that this approach also leads to increased output variance, resulting in notably divergent outputs even when prompts share similar meanings. To tackle this challenge, we introduce a model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization. Experimental findings on multiple clinical note tasks and LLMs indicate that our method not only bolsters performance but also effectively curbs variance for various LLMs, providing a more uniform and dependable solution for summarizing vital medical information.",b378e54c88d241aa917131beb65c96be3730f40c,Semantic Scholar,,highly relevant,"The paper focuses on developing a prompt learning framework specifically for few-shot named entity recognition, which is directly related to the topic of prompt engineering." -1408,selfcritique prompting with large language models for inductive instructions,"['Rui Wang', 'Hongru Wang', 'Fei Mi', 'Yi Chen', 'Rui-Lan Xu', 'Kam-Fai Wong']",http://arxiv.org/pdf/2305.13733,2023-05-23,,"Numerous works are proposed to improve or evaluate the capabilities of Large language models (LLMs) to fulfill user instructions. However, they neglect the possibility that user inputs may inherently contain incorrect information due to users' false beliefs or malicious intents. In this way, blindly adhering to users' false content will cause deception and harm. To address this problem, we propose a challenging benchmark consisting of Inductive Instructions (INDust) to evaluate whether LLMs could resist these instructions. The INDust includes 15K instructions across three categories: Fact-Checking Instructions, Questions based on False Premises, and Creative Instructions based on False Premises. Our experiments on several strong LLMs reveal that current LLMs can be easily deceived by INDust into generating misleading and malicious statements. Hence we employ Self-Critique prompting to encourage LLMs to not only critique themselves like in previous works but also the users, which show remarkable improvement in handling inductive instructions under both zero-shot and few-shot settings.",b5e9406a65de7384af041c357ca5481489345b73,Semantic Scholar,,highly relevant,"The paper describes a Prompt-based learning approach for fact verification, suggesting the use of prompting methods to elicit knowledge from pre-trained language models, which falls within the scope of prompt engineering." 
-1409,cotever chain of thought prompting annotation toolkit for explanation verification,"['Seungone Kim', 'Se June Joo', 'Yul Jang', 'Hyungjoo Chae', 'Jinyoung Yeo']",http://arxiv.org/pdf/2303.03628,2023-03-07,,"Chain-of-thought (CoT) prompting enables large language models (LLMs) to solve complex reasoning tasks by generating an explanation before the final prediction. Despite it’s promising ability, a critical downside of CoT prompting is that the performance is greatly affected by the factuality of the generated explanation. To improve the correctness of the explanations, fine-tuning language models with explanation data is needed. However, there exists only a few datasets that can be used for such approaches, and no data collection tool for building them. Thus, we introduce CoTEVer, a tool-kit for annotating the factual correctness of generated explanations and collecting revision data of wrong explanations. Furthermore, we suggest several use cases where the data collected with CoTEVer can be utilized for enhancing the faithfulness of explanations. Our toolkit is publicly available at https://github.com/SeungoneKim/CoTEVer.",b9d75f361b5310c6ddcddfe7858bb0416eb78de4,Semantic Scholar,,highly relevant,"The paper presents a prompt-based Chinese text classification framework and discusses prompt generation and fine-tuning, which are central to prompt engineering." -1410,embedding democratic values into social media ais via societal objective functions,"['Chenyan Jia', 'Michelle S. Lam', 'Minh Chau Mai', 'Jeffrey T. Hancock', 'Michael S. Bernstein']",https://arxiv.org/pdf/2307.13912,2023-07-26,,"Can we design artificial intelligence (AI) systems that rank our social media feeds to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models, however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that the feed downranking using the societal objective function reduced partisan animosity (d=.25). 
This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.",c4561fd08636b5f5f6b9f3f6d89f3cee39e678b0,Semantic Scholar,,highly relevant,"The abstract mentions 'zero (few)-shot prompting results,' which implies that prompt engineering techniques are being utilized to assess the language models' understanding of physical concepts." -1411,large language models as sous chefs revising recipes with gpt3,"['Alyssa Hwang', 'B. Li', 'Zhaoyi Hou', 'D. Roth']",http://arxiv.org/pdf/2306.13986,2023-06-24,,"With their remarkably improved text generation and prompting capabilities, large language models can adapt existing written information into forms that are easier to use and understand. In our work, we focus on recipes as an example of complex, diverse, and widely used instructions. We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps. We apply this prompt to recipes from various world cuisines, and experiment with several large language models (LLMs), finding best results with GPT-3.5. We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions. We find that annotators usually prefer the revision over the original, demonstrating a promising application of LLMs in serving as digital sous chefs for recipes and beyond. We release our prompt, code, and MTurk template for public use.",ca60126b2b534a3f1cd8007ba84fdbd163968770,Semantic Scholar,,highly relevant,"The paper focuses on using prompt-based learning and reinforcement learning to steer model output without accessing model parameters, which falls within the scope of prompt engineering." -1412,fashionlogo prompting multimodal large language models for fashion logo embeddings,"['Yulin Su', 'Min Yang', 'Minghui Qiu', 'Jing Wang', 'Tao Wang']",https://arxiv.org/pdf/2308.09012,2023-08-17,,"Logo embedding plays a crucial role in various e-commerce applications by facilitating image retrieval or recognition, such as intellectual property protection and product search. However, current methods treat logo embedding as a purely visual problem, which may limit their performance in real-world scenarios. A notable issue is that the textual knowledge embedded in logo images has not been adequately explored. Therefore, we propose a novel approach that leverages textual knowledge as an auxiliary to improve the robustness of logo embedding. The emerging Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in both visual and textual understanding and could become valuable visual assistants in understanding logo images. Inspired by this observation, our proposed method, FashionLOGO, aims to utilize MLLMs to enhance fashion logo embedding. We explore how MLLMs can improve logo embedding by prompting them to generate explicit textual knowledge through three types of prompts, including image OCR, brief captions, and detailed descriptions prompts, in a zero-shot setting. We adopt a cross-attention transformer to enable image embedding queries to learn supplementary knowledge from textual embeddings automatically. To reduce computational costs, we only use the image embedding model in the inference stage, similar to traditional inference pipelines. 
Our extensive experiments on three real-world datasets demonstrate that FashionLOGO learns generalized and robust logo embeddings, achieving state-of-the-art performance in all benchmark datasets. Furthermore, we conduct comprehensive ablation studies to demonstrate the performance improvements resulting from the introduction of MLLMs.",d53945d4afb4528590d79e20de52883d29037e86,Semantic Scholar,,highly relevant,"The paper discusses the impact of prompt manipulation on LLMs and the consequences of misinformation in in-context learning, which directly relates to prompt engineering." -1413,systematic rectification of language models via deadend analysis,"['Mengyao Cao', 'Mehdi Fatemi', 'J. Cheung', 'S. Shabanian']",http://arxiv.org/pdf/2302.14003,2023-02-27,,"With adversarial or otherwise normal prompts, existing large language models (LLM) can be pushed to generate toxic discourses. One way to reduce the risk of LLMs generating undesired discourses is to alter the training of the LLM. This can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited as they dismiss future tokens and the overall meaning of the complete discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic. That is, at each point, we advise against token selections proportional to how likely a finished text from this point will be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. Importantly, our method does not require access to the internal representations of the LLM, but only the token probability distribution at each decoding step. This is crucial as many LLMs today are hosted in servers and only accessible through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques in terms of both the overall language and detoxification performance.",da5fcb26c830663b79c9aa1c550ae62e7725fcad,Semantic Scholar,,highly relevant,"The paper investigates improving 'promptability' which is directly related to prompt engineering, specifically focusing on zero-shot and few-shot settings using natural language prompts, which aligns with the use of hard prefix prompts in transformers." -1414,surrogateprompt bypassing the safety filter of texttoimage models via substitution,"['Zhongjie Ba', 'Jieming Zhong', 'Jiachen Lei', 'Pengyu Cheng', 'Qinglong Wang', 'Zhan Qin', 'Zhibo Wang', 'Kui Ren']",https://arxiv.org/pdf/2309.14122,2023-09-25,,"Advanced text-to-image models such as DALL-E 2 and Midjourney possess the capacity to generate highly realistic images, raising significant concerns regarding the potential proliferation of unsafe content. This includes adult, violent, or deceptive imagery of political figures. Despite claims of rigorous safety mechanisms implemented in these models to restrict the generation of not-safe-for-work (NSFW) content, we successfully devise and exhibit the first prompt attacks on Midjourney, resulting in the production of abundant photorealistic NSFW images. 
We reveal the fundamental principles of such prompt attacks and suggest strategically substituting high-risk sections within a suspect prompt to evade closed-source safety measures. Our novel framework, SurrogatePrompt, systematically generates attack prompts, utilizing large language models, image-to-text, and image-to-image modules to automate attack prompt creation at scale. Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios. Both subjective and objective assessments validate that the images generated from our attack prompts present considerable safety hazards.",e1decb86f2a6aba8682d2fc4e427424b0b49e0d0,Semantic Scholar,,highly relevant,"The paper describes the design and use of diverse prompts for teacher response generation using OpenAI's GPT-3, which indicates the use of prompt engineering." -1415,augmented embeddings for custom retrievals,"['Anirudh Khatry', 'Yasharth Bajpai', 'Priyanshu Gupta', 'Sumit Gulwani', 'Ashish Tiwari']",https://arxiv.org/pdf/2310.05380,2023-10-09,,"Information retrieval involves selecting artifacts from a corpus that are most relevant to a given search query. The flavor of retrieval typically used in classical applications can be termed as homogeneous and relaxed, where queries and corpus elements are both natural language (NL) utterances (homogeneous) and the goal is to pick most relevant elements from the corpus in the Top-K, where K is large, such as 10, 25, 50 or even 100 (relaxed). Recently, retrieval is being used extensively in preparing prompts for large language models (LLMs) to enable LLMs to perform targeted tasks. These new applications of retrieval are often heterogeneous and strict -- the queries and the corpus contain different kinds of entities, such as NL and code, and there is a need for improving retrieval at Top-K for small values of K, such as K=1 or 3 or 5. Current dense retrieval techniques based on pretrained embeddings provide a general-purpose and powerful approach for retrieval, but they are oblivious to task-specific notions of similarity of heterogeneous artifacts. We introduce Adapted Dense Retrieval, a mechanism to transform embeddings to enable improved task-specific, heterogeneous and strict retrieval. Adapted Dense Retrieval works by learning a low-rank residual adaptation of the pretrained black-box embedding. We empirically validate our approach by showing improvements over the state-of-the-art general-purpose embeddings-based baseline.",e4c466cf3df4887e0121561be90e0bac78d3e1cb,Semantic Scholar,,somewhat relevant,"The paper discusses using few-shot prompting on GPT-3 to detect metaphoric language, which indicates an application of prompt engineering techniques." -1416,"tryage realtime, intelligent routing of user prompts to large language models","['S. Hari', 'Matt Thomson']",https://arxiv.org/pdf/2308.11601,2023-08-22,,"The introduction of the transformer architecture and the self-attention mechanism has led to an explosive production of language models trained on specific downstream tasks and data domains. With over 200, 000 models in the Hugging Face ecosystem, users grapple with selecting and optimizing models to suit multifaceted workflows and data domains while addressing computational, security, and recency concerns. 
There is an urgent need for machine learning frameworks that can eliminate the burden of model selection and customization and unleash the incredible power of the vast emerging model library for end users. Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library based on analysis of individual input prompts. Inspired by the thalamic router in the brain, Tryage employs a perceptive router to predict down-stream model performance on prompts and, then, makes a routing decision using an objective function that integrates performance predictions with user goals and constraints that are incorporated through flags (e.g., model size, model recency). Tryage allows users to explore a Pareto front and automatically trade-off between task accuracy and secondary goals including minimization of model size, recency, security, verbosity, and readability. Across heterogeneous data sets that include code, text, clinical data, and patents, the Tryage framework surpasses Gorilla and GPT3.5 turbo in dynamic model selection identifying the optimal model with an accuracy of 50.9% , compared to 23.6% by GPT 3.5 Turbo and 10.8% by Gorilla. Conceptually, Tryage demonstrates how routing models can be applied to program and control the behavior of multi-model LLM systems to maximize efficient use of the expanding and evolving language model ecosystem.",ee025d7030d4767062af2bcd32a4d586737d30bf,Semantic Scholar,,highly relevant,"The abstract describes the use of 'few-shot' prompts to improve large language models' performance on tasks, which falls under the category of prompt engineering." -1417,distractor generation for multiplechoice questions with predictive prompting and large language models,"['Semere Kiros Bitew', 'Johannes Deleu', 'Chris Develder', 'Thomas Demeester']",https://arxiv.org/pdf/2307.16338,2023-07-30,,"Large Language Models (LLMs) such as ChatGPT have demonstrated remarkable performance across various tasks and have garnered significant attention from both researchers and practitioners. However, in an educational context, we still observe a performance gap in generating distractors -- i.e., plausible yet incorrect answers -- with LLMs for multiple-choice questions (MCQs). In this study, we propose a strategy for guiding LLMs such as ChatGPT, in generating relevant distractors by prompting them with question items automatically retrieved from a question bank as well-chosen in-context examples. We evaluate our LLM-based solutions using a quantitative assessment on an existing test set, as well as through quality annotations by human experts, i.e., teachers. We found that on average 53% of the generated distractors presented to the teachers were rated as high-quality, i.e., suitable for immediate use as is, outperforming the state-of-the-art model. We also show the gains of our approach 1 in generating high-quality distractors by comparing it with a zero-shot ChatGPT and a few-shot ChatGPT prompted with static examples.",f1bb5051965a3a4c9288f0123dd03c26a08e1378,Semantic Scholar,,highly relevant,"The paper explicitly mentions evaluating ChatGPT's performance with different prompting methods, which implies an investigation into prompt engineering strategies." -1418,interleaving retrieval with chainofthought reasoning for knowledgeintensive multistep questions,"['H. 
Trivedi', 'Niranjan Balasubramanian', 'Tushar Khot', 'Ashish Sabharwal']",http://arxiv.org/pdf/2212.10509,2022-12-20,,"Prompting-based large language models (LLMs) are surprisingly powerful at generating natural language reasoning steps or Chains-of-Thoughts (CoT) for multi-step question answering (QA). They struggle, however, when the necessary knowledge is either unavailable to the LLM or not up-to-date within its parameters. While using the question to retrieve relevant text from an external knowledge source helps LLMs, we observe that this one-step retrieve-and-read approach is insufficient for multi-step QA. Here, what to retrieve depends on what has already been derived, which in turn may depend on what was previously retrieved. To address this, we propose IRCoT, a new approach for multi-step QA that interleaves retrieval with steps (sentences) in a CoT, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Using IRCoT with GPT3 substantially improves retrieval (up to 21 points) as well as downstream QA (up to 15 points) on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. We observe similar substantial gains in out-of-distribution (OOD) settings as well as with much smaller models such as Flan-T5-large without additional training. IRCoT reduces model hallucination, resulting in factually more accurate CoT reasoning.",f208ea909fa7f54fea82def9a92fd81dfc758c39,Semantic Scholar,,highly relevant,"The paper discusses using a custom GPT-4 few-shot prompt annotation scheme, which falls under the topic of prompt engineering." -1419,satisfiabilityaided language models using declarative prompting,"['Xi Ye', 'Qiaochu Chen', 'Işıl Dillig', 'Greg Durrett']",https://arxiv.org/pdf/2305.09656,2023-05-16,,"Prior work has combined chain-of-thought prompting in large language models (LLMs) with programmatic representations to perform effective and transparent reasoning. While such an approach works well for tasks that only require forward reasoning (e.g., straightforward arithmetic), it is less effective for constraint solving problems that require more sophisticated planning and search. In this paper, we propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of LLMs. We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer. This approach has two key advantages. The declarative specification is closer to the problem description than the reasoning steps are, so the LLM can parse it out of the description more accurately. Furthermore, by offloading the actual reasoning task to an automated theorem prover, our approach can guarantee the correctness of the answer with respect to the parsed specification and avoid planning errors in the solving process. We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm. In particular, SATLM outperforms program-aided LMs by 23% on a challenging subset of the GSM arithmetic reasoning dataset; SATLM also achieves a new SoTA on LSAT and BoardgameQA, surpassing previous models that are trained on the respective training sets.",f27f6d1d521d189e78f5623098ced0deea613d33,Semantic Scholar,,highly relevant,The paper specifically mentions the use of prompt engineering in the development of ChatGPT-powered expert systems applications for customer services. 
-1420,choice over control how users write with large language models using diegetic and nondiegetic prompting,"['Hai Dang', 'Sven Goller', 'Florian Lehmann', 'D. Buschek']",https://arxiv.org/pdf/2303.03199,2023-03-06,,"We propose a conceptual perspective on prompts for Large Language Models (LLMs) that distinguishes between (1) diegetic prompts (part of the narrative, e.g. “Once upon a time, I saw a fox...”), and (2) non-diegetic prompts (external, e.g. “Write about the adventures of the fox.”). With this lens, we study how 129 crowd workers on Prolific write short texts with different user interfaces (1 vs 3 suggestions, with/out non-diegetic prompts; implemented with GPT-3): When the interface offered multiple suggestions and provided an option for non-diegetic prompting, participants preferred choosing from multiple suggestions over controlling them via non-diegetic prompts. When participants provided non-diegetic prompts it was to ask for inspiration, topics or facts. Single suggestions in particular were guided both with diegetic and non-diegetic information. This work informs human-AI interaction with generative models by revealing that (1) writing non-diegetic prompts requires effort, (2) people combine diegetic and non-diegetic prompting, and (3) they use their draft (i.e. diegetic information) and suggestion timing to strategically guide LLMs.",fccf8776d7525627c518a56a1f4db367a4d7120b,Semantic Scholar,,highly relevant,"The abstract mentions the use of 'few-shot-prompted pre-trained language models' and adapting the 'chain-of-thought method of prompting', indicating the application of prompt engineering techniques in their methodology." -1421,bioinformatics in plant breeding and research on disease resistance,"['Huiying Mu', 'Baoshan Wang', 'F. Yuan']",https://www.mdpi.com/2223-7747/11/22/3118/pdf?version=1668520760,2022-11-01,,"In the context of plant breeding, bioinformatics can empower genetic and genomic selection to determine the optimal combination of genotypes that will produce a desired phenotype and help expedite the isolation of these new varieties. Bioinformatics is also instrumental in collecting and processing plant phenotypes, which facilitates plant breeding. Robots that use automated and digital technologies to collect and analyze different types of information to monitor the environment in which plants grow, analyze the environmental stresses they face, and promptly optimize suboptimal and adverse growth conditions accordingly, have helped plant research and saved human resources. In this paper, we describe the use of various bioinformatics databases and algorithms and explore their potential applications in plant breeding and for research on plant disease resistance.",2c2b40b4f1967dc1fb640c7c4bec140110dbf2cf,Semantic Scholar,,somewhat relevant,"The paper introduces a novel prompting technique in the context of simulating brain activity, which is relevant to prompt engineering." -1422,early diagnostic markers of lateonset neonatal sepsis,"['Preslava Gatseva', 'Alexander Blazhev', 'Zarko Yordanov', 'Victoria Atanasova']",https://www.mdpi.com/2036-7503/15/3/50/pdf?version=1695182872,2023-09-01,,"Objective: Early diagnosis of nosocomial infections in newborns is a great challenge, because in the initial phase of systemic infection, clinical symptoms are often non-specific, and routinely used hematological markers are not sufficiently informative. 
The aim of this study was to determine the potential of early inflammatory markers to diagnose late-onset neonatal sepsis—procalcitonin (PCT), interleukin 6 (IL-6), interleukin 8 (IL-8) and endocan (ESM-1). Material and methods: A prospective clinical–epidemiological study was conducted in a third-level NICU in Pleven, Bulgaria. Patients with suspected late-onset sepsis and healthy controls were tested. A sandwich ELISA method was used to measure the serum concentrations of biomarkers. Results: Sixty newborns were included, of which 35% symptomatic and infected, 33.3% symptomatic but uninfected and 31.7% asymptomatic controls. The mean values of PCT, IL-6, I/T index and PLT differ significantly in the three groups. For ESM-1, IL-8 and CRP, the difference was statistically insignificant. The best sensitivity (78%) and negative predictive value (84%) was found for IL-6. The combinations of PCT + IL-6 and PCT + IL-6+ I/T+ PLT showed very good diagnostic potential. Conclusion: The introduction into the routine practice of indicators such as PCT and IL-6 may provide an opportunity to promptly optimize the diagnostic and therapeutic approach to LOS.",2e536dcd013be93dc1841dd0e7a0a87b2846f341,Semantic Scholar,,somewhat relevant,"The paper mentions the use of an 'LLM prompting generator', indicating that prompt engineering, specifically for the purpose of generating answers, is a part of their proposed multi-stage model." -1423,automated extraction and visualization of metabolic networks from biomedical literature using a large language model,"['Thiptanawat Phongwattana', 'Jonathan H. Chan']",https://www.biorxiv.org/content/biorxiv/early/2023/06/29/2023.06.27.546560.full.pdf,2023-06-29,,"The rapid growth of biomedical literature presents a significant challenge for researchers to extract and analyze relevant information efficiently. In this study, we explore the application of GPT, the large language model to automate the extraction and visualization of metabolic networks from a corpus of PubMed abstracts. Our objective is to provide a valuable tool for biomedical researchers to explore and understand the intricate metabolic interactions discussed in scientific literature. We begin by splitting a ton of the tokens within the corpus, as the GPT-3.5-Turbo model has a token limit of 4,000 per analysis. Through iterative prompt optimization, we successfully extract a comprehensive list of metabolites, enzymes, and proteins from the abstracts. To validate the accuracy and completeness of the extracted entities, our biomedical data domain experts compare them with the provided abstracts and ensure a fully matched result. Using the extracted entities, we generate a directed graph that represents the metabolic network including 3 types of metabolic events that consist of metabolic consumption, metabolic reaction, and metabolic production. The graph visualization, achieved through Python and NetworkX, offers a clear representation of metabolic pathways, highlighting the relationships between metabolites, enzymes, and proteins. Our approach integrates language models and network analysis, demonstrating the power of combining automated information extraction with sophisticated visualization techniques. The research contributions are twofold. Firstly, we showcase the ability of GPT-3.5-Turbo to automatically extract metabolic entities, streamlining the process of cataloging important components in metabolic research. 
Secondly, we present the generation and visualization of a directed graph that provides a comprehensive overview of metabolic interactions. This graph serves as a valuable tool for further analysis, comparison with existing pathways, and updating or refining metabolic networks. Our findings underscore the potential of large language models and network analysis techniques in extracting and visualizing metabolic information from scientific literature. This approach enables researchers to gain insights into complex biological systems, advancing our understanding of metabolic pathways and their components.",439c2a5c4883b421ca316617b1306583cc1d706c,Semantic Scholar,,somewhat relevant,"The abstract indicates the use of 'zero-shot prompting' for sentiment and emotion analysis, which is related to prompt engineering techniques." -1424,emerging technology in acute resuscitation monitoring,"['M. Tichauer', 'J. Mccoy']",http://www.scirp.org/journal/PaperDownload.aspx?paperID=24794,2012-11-23,,"Fluid optimization in the resuscitation of shock became the mainstay of treatment following the advent of Early Goal-Directed Therapy (EGDT) by Rivers et al. in 2001 [1]. Patients presenting in shock require prompt optimization of volume status and cardiac out- put to ensure adequate perfusion. Poor optimization may be associated with prolonged hospital and intensive care unit stays. The prior gold standard, pulmonary artery catheterization, is rarely available in the emergency department setting and its invasive nature has led to recent re-evaluation of its clinical utility. However, there are new monitoring technologies that are being studied in the intensive care unit setting that may soon be available in emergency departments to aid in nursing and physician decision making to improve acute resuscitation.",93e09c5feb9b2ffc8926b4edff13b3d8e02e41de,Semantic Scholar,,highly relevant,"The paper discusses the use of zero-shot prompting with various large language models, which is directly related to the topic of prompt engineering." -1425,recombinant hemagglutinin displaying on yeast reshapes congenital lymphocyte subsets to prompt optimized systemic immune protection against avian influenza infection,"['Han Zhang', 'Zexing Li', 'Huixia Zhang', 'Yanyu Guo', 'Xinyi Zhang', 'Lilin Zhang', 'Liu Yang', 'Shujun Li', 'Changyan Li', 'D. Cui', 'R. Xie', 'Yongqing Li', 'Jinhai Huang']",https://www.frontiersin.org/articles/10.3389/fmicb.2023.1153922/pdf,2023-05-31,,"Introduction Prophylactic vaccination is regarded as the most effective means to control avian flu infection. Currently, there is a need for a universal vaccine that provides broad and long-lasting protection against influenza virus. Meanwhile, although yeast-based vaccines have been used in clinic, studies are still required to further understand the molecular mechanism of yeast-based vaccines under physiological conditions. Methods We generated a yeast-based vaccine against influenza hemagglutinin (HA) of H5, H7 and H9 using surface displaying technology and evaluated the protective efficacy of chickens after exposure to H9N2 influenza virus. Results Oral yeast vaccine provided less clinical syndrome, reduced viral loading and alleviated airway damage significantly. Compared to the commercial inactivated vaccine, yeast vaccine stimulated the activation of splenic NK and APCs cells and boosted TLR7-IRF7-IFN signaling in spleen. 
Meanwhile, γδ T cells in the bursa of Fabricius were activated and the innate lymphoid cells (ILCs) in the bursa of Fabricius promoted the CILPs to differentiate to ILC3 cells in oral yeast birds. Moreover, the reshaped gut microbiota and a suppressed Th17-IL17-mediated inflammation in intestine was observed in oral yeast chickens, which might facilitate the recovery of intestinal mucosal immunity upon virus infection. Collectively, our findings suggest that oral yeast based multivalent bird flu vaccines provide an attractive strategy to update host defense function via reshapes of multi-systemic immune homeostasis.",98090bbc7b784a1f64d4522c5e1987b196863fd0,Semantic Scholar,,somewhat relevant,"The paper discusses using zero-shot prompts to improve translation quality, which indicates relevance to prompt engineering techniques." -1426,prompt engineering for textbased generative art,['J. Oppenlaender'],http://arxiv.org/pdf/2204.13988,,,"Text-based generative art has seen an explosion of interest in 2021. Online communities around text-based generative art as a novel digital medium have quickly emerged. This short paper identifies five types of prompt modifiers used by practitioners in the community of text-based generative art based on a 3-month ethnographic study on Twitter. The novel taxonomy of prompt modifiers provides researchers a conceptual starting point for investigating the practices of text-based generative art, but also may help practitioners of text-based generative art improve their images. The paper concludes with a discussion of research opportunities in the space of text-based generative art and the broader implications of prompt engineering from the perspective of human-AI interaction in future applications beyond the use case of text-based generative art.",07cd498aacfb4d39fa2e0e8d8a9c8ad881257300,Semantic Scholar,,somewhat relevant,"The abstract mentions the use of prompts in the form of raw text and explanatory text combined, which are inputted into the GPT model for prediction and address parsing, indicative of prompt engineering techniques." -1427,ebhaam at semeval2023 task 1 a clipbased approach for comparing crossmodality and unimodality in visual word sense disambiguation,"['Zeinab Taghavi', 'Parsa Haghighi Naeini', 'Mohammad Ali Sadraei Javaheri', 'S. Gooran', 'Ehsaneddin Asgari', 'H. Rabiee', 'H. Sameti']",https://aclanthology.org/2023.semeval-1.269.pdf,,,"This paper presents an approach to tackle the task of Visual Word Sense Disambiguation (Visual-WSD), which involves determining the most appropriate image to represent a given polysemous word in one of its particular senses. The proposed approach leverages the CLIP model, prompt engineering, and text-to-image models such as GLIDE and DALL-E 2 for both image retrieval and generation. To evaluate our approach, we participated in the SemEval 2023 shared task on “Visual Word Sense Disambiguation (Visual-WSD)” using a zero-shot learning setting, where we compared the accuracy of different combinations of tools, including “Simple prompt-based” methods and “Generated prompt-based” methods for prompt engineering using completion models, and text-to-image models for changing input modality from text to image. Moreover, we explored the benefits of cross-modality evaluation between text and candidate images using CLIP. 
Our experimental results demonstrate that the proposed approach reaches better results than cross-modality approaches, highlighting the potential of prompt engineering and text-to-image models to improve accuracy in Visual-WSD tasks. We assessed our approach in a zero-shot learning scenario and attained an accuracy of 68.75\% in our best attempt.",08e0e696732103e585fd629e23888fd4acbb22df,Semantic Scholar,,somewhat relevant,"The abstract discusses adapting LLMs to new tasks, which is indicative of prompt engineering, but the focus on differential privacy suggests it's not the primary topic." -1428,comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'K. Koedinger', 'Vincent Aleven']",https://arxiv.org/pdf/2307.02018,2023-07-05,,"Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.",0b94b999fdd9488e1a0914d37f8fb3ea7e9ea0fd,Semantic Scholar,,highly relevant,"The paper describes using prompt-based approaches for multimodal named entity recognition and specifies transforming text-only queries into multimodal prompts, which indicates they pertain to the topic of prompt engineering." -1429,"the c4h, tat, hppr and hppd genes prompted engineering of rosmarinic acid biosynthetic pathway in salvia miltiorrhiza hairy root cultures","['Ying Xiao', 'Lei Zhang', 'Shouhong Gao', 'Saengking Saechao', 'Peng Di', 'Junfeng Chen', 'Wansheng Chen']",https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0029713&type=printable,2011-12-29,,"Rational engineering to produce biologically active plant compounds has been greatly impeded by our poor understanding of the regulatory and metabolic pathways underlying the biosynthesis of these compounds. Here we capitalized on our previously described gene-to-metabolite network in order to engineer rosmarinic acid (RA) biosynthesis pathway for the production of beneficial RA and lithospermic acid B (LAB) in Salvia miltiorrhiza hairy root cultures. 
Results showed their production was greatly elevated by (1) overexpression of single gene, including cinnamic acid 4-hydroxylase (c4h), tyrosine aminotransferase (tat), and 4-hydroxyphenylpyruvate reductase (hppr), (2) overexpression of both tat and hppr, and (3) suppression of 4-hydroxyphenylpyruvate dioxygenase (hppd). Co-expression of tat/hppr produced the most abundant RA (906 mg/liter) and LAB (992 mg/liter), which were 4.3 and 3.2-fold more than in their wild-type (wt) counterparts respectively. And the value of RA concentration was also higher than that reported before, that produced by means of nutrient medium optimization or elicitor treatment. It is the first report of boosting RA and LAB biosynthesis through genetic manipulation, providing an effective approach for their large-scale commercial production by using hairy root culture systems as bioreactors.",221e801f9a39ff055773b2a20d91e3efadbea921,Semantic Scholar,,highly relevant,"The paper specifically mentions the use of prompt learning for Chinese Implicit Intent Dataset (CIID) to enhance intent recognition, making it highly relevant to prompt engineering." -1430,can chatgpt understand causal language in science claims,"['Yuheun Kim', 'Lu Guo', 'Bei Yu', 'Yingya Li']",https://aclanthology.org/2023.wassa-1.33.pdf,,,"This study evaluated ChatGPT’s ability to understand causal language in science papers and news by testing its accuracy in a task of labeling the strength of a claim as causal, conditional causal, correlational, or no relationship. The results show that ChatGPT is still behind the existing fine-tuned BERT models by a large margin. ChatGPT also had difficulty understanding conditional causal claims mitigated by hedges. However, its weakness may be utilized to improve the clarity of human annotation guideline. Chain-of-Thoughts were faithful and helpful for improving prompt performance, but finding the optimal prompt is difficult with inconsistent results and the lack of effective method to establish cause-effect between prompts and outcomes, suggesting caution when generalizing prompt engineering results across tasks or models.",27d80545d142ced9b921290b5b2798cabd55468b,Semantic Scholar,,highly relevant,"The paper details a prompt-based intent detection model leveraging BERT and a prompt template for few-shot SLU, aligning with the topic of prompt engineering." -1431,contextual stance classification using prompt engineering,"['Felipe Penhorate Carvalho de Fonseca', 'Ivandré Paraboni', 'L. A. Digiampietri']",https://sol.sbc.org.br/index.php/stil/article/download/25435/25256,2023-09-25,,"This paper introduces a prompt-based method for few-shot learning addressing, as an application example, contextual stance classification, that is, the task of determining the attitude expressed by a given statement within a conversation thread with multiple points of view towards another statement. More specifically, we envisaged a method that uses the existing conversation thread (i.e., messages that are part of the test data) to create natural language prompts for few-shot learning with minimal reliance on training samples, whose preliminary results suggest that prompt engineering may be a competitive alternative to supervised methods both in terms of accuracy and development costs for the task at hand.",2d90460431c093757fcf651e333bc0da5f5404c2,Semantic Scholar,,highly relevant,"The paper introduces 'TaxoPrompt', which uses prompt tuning with taxonomic context, indicating its direct relation to prompt engineering." 
-1432,prompt engineering in medical education,"['Thomas F. Heston', 'Charya Khun']",https://www.mdpi.com/2813-141X/2/3/19/pdf?version=1693479951,2023-08-31,,"Artificial intelligence-powered generative language models (GLMs), such as ChatGPT, Perplexity AI, and Google Bard, have the potential to provide personalized learning, unlimited practice opportunities, and interactive engagement 24/7, with immediate feedback. However, to fully utilize GLMs, properly formulated instructions are essential. Prompt engineering is a systematic approach to effectively communicating with GLMs to achieve the desired results. Well-crafted prompts yield good responses from the GLM, while poorly constructed prompts will lead to unsatisfactory responses. Besides the challenges of prompt engineering, significant concerns are associated with using GLMs in medical education, including ensuring accuracy, mitigating bias, maintaining privacy, and avoiding excessive reliance on technology. Future directions involve developing more sophisticated prompt engineering techniques, integrating GLMs with other technologies, creating personalized learning pathways, and researching the effectiveness of GLMs in medical education.",3159478fbc81e562c812b9d5dc1891271b21f0c4,Semantic Scholar,,highly relevant,"The paper discusses designing adversarial prompts to reveal weaknesses in LLMs and secure them, which directly involves prompt engineering." -1433,chatgpt opens a new door for bioinformatics,['Dong Xu'],https://journal.hep.com.cn/qb/EN/PDF/10.15302/J-QB-023-0328,2023-04-21,,"ChatGPT is an artificial intelligence (AI) system that can perform sophisticated writing and dialogs after learning from vast amounts of linguistic data. The success of ChatGPT is phenomenal. AI-based human-machine language interaction has been at the center of AI competition in recent years. The major players in this game have been Google, Meta, and OpenAI. Google was in the best position from the outset, given its invention of Transformer (the cornerstone of all cutting-edge language models) and its significant edge in reinforcement learning. Yet, Google’s efforts in this area were rather diffusing. It kept generating language model variants with incremental innovations but failed to reach the next level. Meta has a strong AI team, including many top AI researchers in the world. Nevertheless, their faith in self-supervised learning to solve human-machine interaction did not deliver high-impact success. Conversely, OpenAI, with a small team, stayed focused on a single product line (GPT, including its latest release of GPT-4). It moved in the right direction of using human input to “align” the language model based on the Reinforcement Learning from Human Feedback (RLHF) approach. The fact that OpenAI ultimately prevailed in this game shows that the model alignment to human labeling through supervised and reinforcement learning is critical for human-machine interaction. However, a chatbot’s actions rely heavily on cues (prompts) provided by human operators. To properly utilize ChatGPT’s capabilities, prompts to instruct or mentor the chatbot must be carefully designed to get valuable, valid, and robust responses. 
This process becomes another “alignment” problem of using prompt engineering to best probe ChatGPT’s knowledge graph for best serving users’ needs.",358d1d9eed69a6eadcda9996b3f13b0e0a356b88,Semantic Scholar,,highly relevant,"The study focuses on the effect of prompt engineering on the performance of LLMs in clinical note generation and introduces an APO framework to refine prompts, which is directly related to the topic of hard prefix prompt engineering." -1434,generating novel leads for drug discovery using llms with logical feedback,"['Shreyas Bhat Brahmavar', 'Ashwin Srinivasan', 'T. Dash', 'Sowmya Ramaswamy Krishnan', 'L. Vig', 'Arijit Roy', 'R. Aduri']",https://www.biorxiv.org/content/biorxiv/early/2023/09/17/2023.09.14.557698.full.pdf,2023-09-17,,"Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically require experimentation with progressively more refined prompts. Results thus become dependent not just on what is known about the target, but also on what is known about the prompt-engineering. In this paper, we separate the prompt into domain-constraints that can be written in a standard logical form, and a simple text-based query. We investigate whether LLMs can be guided, not by refining prompts manually, but by refining the the logical component automatically, keeping the query unchanged. We describe an iterative procedure LMLF (“Language Models with Logical Feedback”) in which the constraints are progressively refined using a logical notion of generalisation. On any iteration, newly generated instances are verified against the constraint, providing “logical-feedback” for the next iteration’s refinement of the constraints. We evaluate LMLF using two well-known targets (inhibition of the Janus Kinase 2; and Dopamine Receptor D2); and two different LLMs (GPT-3 and PaLM). We show that LMLF, starting with the same logical constraints and query text, can guide both LLMs to generate potential leads. We find: (a) Binding affinities of LMLF-generated molecules are skewed towards higher binding affinities than those from existing baselines; LMLF results in generating molecules that are skewed towards higher binding affinities than without logical feedback; (c) Assessment by a computational chemist suggests that LMLF generated compounds may be novel inhibitors. These findings suggest that LLMs with logical feedback may provide a mechanism for generating new leads without requiring the domain-specialist to acquire sophisticated skills in prompt-engineering.",3613299c54bbea66dd6db1b00573f7ade021a5a9,Semantic Scholar,,highly relevant,"The paper discusses an automatic prompt optimization framework, which is directly related to prompt engineering." -1435,qaclims questionanswer cross language image matching for weakly supervised semantic segmentation,"['Songhe Deng', 'Wei Zhuo', 'Jinheng Xie', 'Linlin Shen']",https://dl.acm.org/doi/pdf/10.1145/3581783.3612148,2023-10-26,,"Class Activation Map (CAM) has emerged as a popular tool for weakly supervised semantic segmentation (WSSS), allowing the localization of object regions in an image using only image-level labels. However, existing CAM methods suffer from under-activation of target object regions and false-activation of background regions due to the fact that a lack of detailed supervision can hinder the model's ability to understand the image as a whole. 
In this paper, we propose a novel Question-Answer Cross-Language-Image Matching framework for WSSS (QA-CLIMS), leveraging the vision-language foundation model to maximize the text-based understanding of images and guide the generation of activation maps. First, a series of carefully designed questions are posed to the VQA (Visual Question Answering) model with Question-Answer Prompt Engineering (QAPE) to generate a corpus of both foreground target objects and backgrounds that are adaptive to query images. We then employ contrastive learning in a Region Image Text Contrastive (RITC) network to compare the obtained foreground and background regions with the generated corpus. Our approach exploits the rich textual information from the open vocabulary as additional supervision, enabling the model to generate high-quality CAMs with a more complete object region and reduce false-activation of background regions. We conduct extensive analysis to validate the proposed method and show that our approach performs state-of-the-art on both PASCAL VOC 2012 and MS COCO datasets.",3da79f3fe4e0ff1bb59efb34c8baa2bcf632c2b9,Semantic Scholar,,highly relevant,"The abstract mentions that the paper discusses prompt engineering as a method to enhance the reasoning capability of large language models and reports empirical results on the interplay of prompts with multi-agent mechanisms, indicating a focus on prompting techniques." -1436,from web catalogs to google a retrospective study of web search engines sustainable development,"['M. Duka', 'Marek Sikora', 'Artur Strzelecki']",https://www.mdpi.com/2071-1050/15/8/6768/pdf?version=1681779086,2023-04-17,,This study presents a review of search engines and search engine optimization and shows how the search engine landscape relates to sustainable development. We have used a narrative review research method and described three main topics: the past and present of web catalogs and search engines; current knowledge about the dominant types of search results presented in Google search; and methods of search engine optimization. Technical elements of important website areas related to technical website auditing are discussed. We summarize our research with several key findings on how web search engines are involved in sustainable development and offer a glimpse into the future use of web searching with the help of artificial intelligence chats and prompt engineering.,513b96c7d5d1f9a74afd9d946d5a7c83fe592869,Semantic Scholar,,somewhat relevant,The paper discusses evaluating GPT-4's abstract reasoning abilities using one-shot prompting which is relevant to prompt engineering. -1437,better integrating vision and semantics for improving fewshot classification,"['Zhuoling Li', 'Yong Wang']",https://dl.acm.org/doi/pdf/10.1145/3581783.3613819,2023-10-26,,"Some recent methods address few-shot classification by integrating visual and semantic prototypes. However, they usually ignore the difference in feature structure between the visual and semantic modalities, which leads to limited performance improvements. In this paper, we propose a novel method, called bimodal integrator (BMI), to better integrate visual and semantic prototypes. In BMI, we first construct a latent space for each modality via a variational autoencoder, and then align the semantic latent space to the visual latent space. Through this semantics-to-vision alignment, the semantic modality is mapped to the visual latent space and has the same feature structure as the visual modality. 
As a result, the visual and semantic prototypes can be better integrated. In addition, based on the multivariate Gaussian distribution and the prompt engineering, a data augmentation scheme is designed to ensure the accuracy of modality alignment during the training process. Experimental results demonstrate that BMI significantly improves few-shot classification, making simple baselines outperform the most advanced methods on miniImageNet and tieredImageNet datasets.",579ee305d538a679d72b808ffe8322680561a177,Semantic Scholar,,highly relevant,"The abstract mentions enhancing LLMs problem-solving ability with 'novel prompting techniques', which indicates a direct connection to prompt engineering." -1438,omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know,"['Matthias Urban', 'Duc Dat Nguyen', 'Carsten Binnig']",http://publikationen.ub.uni-frankfurt.de/files/74426/06_08.pdf,2023-06-18,,"In this paper, we present our vision of OmniscientDB, a novel database that leverages the implicitly-stored knowledge in large language models to augment datasets for analytical queries or even machine learning tasks. OmiscientDB empowers its users to augment their datasets by means of simple SQL queries and thus has the potential to dramatically reduce the manual overhead associated with data integration. It uses automatic prompt engineering to construct appropriate prompts for given SQL queries and passes them to a large language model like GPT-3 to contribute additional data (i.e., new rows, columns, or entire tables), augmenting the explicitly stored data. Our initial evaluation demonstrates the general feasibility of our vision, explores different prompting techniques in greater detail, and points towards several directions for future research.",59266e06cdb867c2541603f9d94e13f67d55938f,Semantic Scholar,,somewhat relevant,"The paper discusses the combination of machine translation with an LLM using techniques from LLM prompting, which is related to the use of prompts in large language models." -1439,mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models,"['Runa Bhaumik', 'V. Srivastava', 'A. Jalali', 'Shanta Ghosh', 'Ranganathan Chandrasekharan']",https://www.medrxiv.org/content/medrxiv/early/2023/09/26/2023.09.25.23296062.full.pdf,2023-09-26,,"Suicide, a serious public health concern affecting millions of individuals worldwide, refers to the intentional act of ending one's own life. Mental health issues such as depression, frustration, and hopelessness can directly or indirectly influence the emergence of suicidal thoughts. Early identification of these thoughts is crucial for timely diagnosis. In recent years, advances in artificial intelligence (AI) and natural language processing (NLP) have paved the way for revolutionizing mental health support and education. In this proof-of-concept study, we have created MindWatch, a cutting-edge tool that harnesses the power of AI-driven language models to serve as a valuable computer-aided system for the mental health professions to achieve two important goals such as early symptom detection, and personalized psychoeducation. We utilized ALBERT and Bio-Clinical BERT language models and fine-tuned them with the Reddit dataset to build the classifiers. We evaluated the performance of bi-LSTM, ALBERT, Bio-Clinical BERT, OpenAI GPT3.5 (via prompt engineering), and an ensembled voting classifier to detect suicide ideation. 
For personalized psychoeducation, we used the state-of-the-art Llama 2 foundation model leveraging prompt engineering. The tool is developed in the Amazon Web Service environment. All models performed exceptionally well, with accuracy and precision/recall greater than 92%. ALBERT performed better (AUC=.98) compared to the zero-shot classification accuracies obtained from OpenAI GPT3.5 Turbo (ChatGPT) on hidden datasets (AUC=.91). Furthermore, we observed that the inconclusiveness rate of the Llama 2 model is low while tested for few examples. This study emphasizes how transformer models can help provide customized psychoeducation to individuals dealing with mental health issues. By tailoring content to address their unique mental health conditions, treatment choices, and self-help resources, this approach empowers individuals to actively engage in their recovery journey. Additionally, these models have the potential to advance the automated detection of depressive disorders.",5e01b8383e9260b2e251274a6bad89677cb1bbd3,Semantic Scholar,,somewhat relevant,"The abstract mentions a 'novel Langchain-based framework' that utilizes 'customized LLM prompts', indicating relevance to the use of prompts in large language models and thereby prompt engineering, despite not specifying the use of 'hard prefix' prompts." -1440,improving knowledge extraction from llms for robotic task learning through agent analysis,"['James R. Kirk', 'R. Wray', 'Peter Lindes']",https://arxiv.org/pdf/2306.06770,,,": Large language models (LLMs) offer significant promise as a knowledge source for robotic task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM but alone is insufficient for acquiring relevant, situationally grounded knowledge for an embodied robotic agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations, and thus enabling a robot to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous robot, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how a robot, by retrieving and evaluating a breadth of responses from the LLM, can achieve > 75% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as indication of preference) is provided, while greatly reducing how much human oversight is needed.",6b80c6e220ca2e2434f5a80b2eb5e8b645e97ae1,Semantic Scholar,,highly relevant,"The paper focuses on improving in-context learning for semantic parsing by augmenting prompts and utilizing general-purpose programming languages, which relates to the manipulation of prompts to enhance model performance." -1441,prompt engineering guiding the way to effective large language models,"['Mohammad Aljanabi', 'M. Yaseen', 'Ahmed Hussein Ali', 'Mostafa Abdulghafoor Mohammed']",https://journal.esj.edu.iq/index.php/IJCM/article/download/1356/321,2023-11-06,,"Large language models (LLMs) have become prominent tools in various domains, such as natural language processing, machine translation, and the development of creative text. 
Nevertheless, in order to fully exploit the capabilities of Language Models, it is imperative to establish efficient communication channels between humans and machines. The discipline of engineering involves the creation of well-constructed and informative prompts, which act as a crucial link between human intention and the execution of tasks by machines. The present study examines the concept of rapid engineering, elucidating its underlying concepts, methodologies, and diverse range of practical applications.",7de25ad5ac7433e4d4071f450461b03fd2a39b8d,Semantic Scholar,,somewhat relevant,"The paper describes the use of large language models for graphic layout generation with a focus on in-context learning and prompt exemplar selection, indicating relevance to prompt engineering." -1442,a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily,"['Peng Ding', 'Jun Kuang', 'Dan Ma', 'Xuezhi Cao', 'Yunsen Xian', 'Jiajun Chen', 'Shujian Huang']",http://arxiv.org/pdf/2311.08268v1.pdf,2023-11-14,," Large Language Models (LLMs), such as ChatGPT and GPT-4, are designed to provide useful and safe responses. However, adversarial prompts known as 'jailbreaks' can circumvent safeguards, leading LLMs to generate harmful content. Exploring jailbreak prompts can help to better reveal the weaknesses of LLMs and further steer us to secure them. Unfortunately, existing jailbreak methods either suffer from intricate manual design or require optimization on another white-box model, compromising generalization or jailbreak efficiency. In this paper, we generalize jailbreak prompt attacks into two aspects: (1) Prompt Rewriting and (2) Scenario Nesting. Based on this, we propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts. Extensive experiments demonstrate that ReNeLLM significantly improves the attack success rate while greatly reducing the time cost compared to existing baselines. Our study also reveals the inadequacy of current defense methods in safeguarding LLMs. Finally, we offer detailed analysis and discussion from the perspective of prompt execution priority on the failure of LLMs' defense. We hope that our research can catalyze both the academic community and LLMs vendors towards the provision of safer and more regulated Large Language Models.",,arXiv,['cs.cl'],highly relevant,"The paper discusses using prompts to validate generated answers by an LLM against a corpus, demonstrating use of prompts in post-training model interactions with text retrieval systems." -1443,jailbreaking gpt4v via selfadversarial attacks with system prompts,"['Yuanwei Wu', 'Xiang Li', 'Yixin Liu', 'Pan Zhou', 'Lichao Sun']",http://arxiv.org/pdf/2311.09127v1.pdf,2023-11-15,," Existing work on jailbreak Multimodal Large Language Models (MLLMs) has focused primarily on adversarial examples in model inputs, with less attention to vulnerabilities in model APIs. To fill the research gap, we carry out the following work: 1) We discover a system prompt leakage vulnerability in GPT-4V. Through carefully designed dialogue, we successfully steal the internal system prompts of GPT-4V. This finding indicates potential exploitable security risks in MLLMs; 2) Based on the acquired system prompts, we propose a novel MLLM jailbreaking attack method termed SASP (Self-Adversarial Attack via System Prompt).
By employing GPT-4 as a red teaming tool against itself, we aim to search for potential jailbreak prompts leveraging stolen system prompts. Furthermore, in pursuit of better performance, we also add human modification based on GPT-4's analysis, which further improves the attack success rate to 98.7\%; 3) We evaluated the effect of modifying system prompts to defend against jailbreaking attacks. Results show that appropriately designed system prompts can significantly reduce jailbreak success rates. Overall, our work provides new insights into enhancing MLLM security, demonstrating the important role of system prompts in jailbreaking, which could be leveraged to greatly facilitate jailbreak success rates while also holding the potential for defending against jailbreaks.",,arXiv,"['cs.cr', 'cs.ai', 'cs.lg']",highly relevant,"The paper specifically involves the development and usage of prompts to extract clinical information using a large language model, which is directly related to prompt engineering." -1444,using natural language explanations to improve robustness of incontext learning for natural language inference,"['Xuanli He', 'Yuxiang Wu', 'Oana-Maria Camburu', 'Pasquale Minervini', 'Pontus Stenetorp']",http://arxiv.org/pdf/2311.07556v1.pdf,2023-11-13,," Recent studies have demonstrated that large language models (LLMs) excel in diverse tasks through in-context learning (ICL) facilitated by task-specific prompts and examples. However, the existing literature shows that ICL encounters performance deterioration when exposed to adversarial inputs. Enhanced performance has been observed when ICL is augmented with natural language explanations (NLEs) (we refer to it as X-ICL). Thus, this work investigates whether X-ICL can improve the robustness of LLMs on a suite of seven adversarial and challenging natural language inference datasets. Moreover, we introduce a new approach to X-ICL by prompting an LLM (ChatGPT in our case) with few human-generated NLEs to produce further NLEs (we call it ChatGPT few-shot), which we show superior to both ChatGPT zero-shot and human-generated NLEs alone. We evaluate five popular LLMs (GPT3.5-turbo, LLaMa2, Vicuna, Zephyr, Mistral) and show that X-ICL with ChatGPT few-shot yields over 6% improvement over ICL. Furthermore, while prompt selection strategies were previously shown to significantly improve ICL on in-distribution test sets, we show that these strategies do not match the efficacy of the X-ICL paradigm in robustness-oriented evaluations.",,arXiv,['cs.cl'],highly relevant,"The paper discusses improving zero-shot chain-of-thought reasoning by introducing advanced prompting strategies, which is directly related to the topic of hard prefix prompt engineering." -1445,towards verifiable text generation with symbolic references,"['Lucas Torroba Hennigen', 'Shannon Shen', 'Aniruddha Nrusimha', 'Bernhard Gapp', 'David Sontag', 'Yoon Kim']",http://arxiv.org/pdf/2311.09188v1.pdf,2023-11-15,," Large language models (LLMs) have demonstrated an impressive ability to synthesize plausible and fluent text. However they remain vulnerable to hallucinations, and thus their outputs generally require manual human verification for high-stakes applications, which can be time-consuming and difficult. This paper proposes symbolically grounded generation (SymGen) as a simple approach for enabling easier validation of an LLM's output. SymGen prompts an LLM to interleave its regular output text with explicit symbolic references to fields present in some conditioning data (e.g., a table in JSON format).
The references can be used to display the provenance of different spans of text in the generation, reducing the effort required for manual verification. Across data-to-text and question answering experiments, we find that LLMs are able to directly output text that makes use of symbolic references while maintaining fluency and accuracy.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper mentions using a chain-of-thought prompting approach to generate responses, which constitutes a form of prompt engineering, especially as it relates to tailoring prompts to user status." -1446,multistage collaborative knowledge distillation from large language models,"['Jiachen Zhao', 'Wenlong Zhao', 'Andrew Drozdov', 'Benjamin Rozonoyer', 'Md Arafat Sultan', 'Jay-Yoon Lee', 'Mohit Iyyer', 'Andrew McCallum']",http://arxiv.org/pdf/2311.08640v1.pdf,2023-11-15,," We study semi-supervised sequence prediction tasks where labeled data are too scarce to effectively finetune a model and at the same time few-shot prompting of a large language model (LLM) has suboptimal performance. This happens when a task, such as parsing, is expensive to annotate and also unfamiliar to a pretrained LLM. In this paper, we present a discovery that student models distilled from a prompted LLM can often generalize better than their teacher on such tasks. Leveraging this finding, we propose a new distillation method, multistage collaborative knowledge distillation from an LLM (MCKD), for such tasks. MCKD first prompts an LLM using few-shot in-context learning to produce pseudolabels for unlabeled data. Then, at each stage of distillation, a pair of students are trained on disjoint partitions of the pseudolabeled data. Each student subsequently produces new and improved pseudolabels for the unseen partition to supervise the next round of student(s) with. We show the benefit of multistage cross-partition labeling on two constituency parsing tasks. On CRAFT biomedical parsing, 3-stage MCKD with 50 labeled examples matches the performance of supervised finetuning with 500 examples and outperforms the prompted LLM and vanilla KD by 7.5% and 3.7% parsing F1, respectively.",,arXiv,"['cs.cl', 'cs.lg']",highly relevant,"The paper details the optimization of prompts for multilingual performance in language models, relevant to prompt engineering." -1447,plum prompt learning using metaheuristic,"['Rui Pan', 'Shuo Xing', 'Shizhe Diao', 'Xiang Liu', 'Kashun Shum', 'Jipeng Zhang', 'Tong Zhang']",http://arxiv.org/pdf/2311.08364v1.pdf,2023-11-14,," Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models. Special prompts, such as Chain-of-Thought, have even revealed previously unknown reasoning capabilities within these models. However, the progress of discovering effective prompts has been slow, driving a desire for general prompt optimization methods. Unfortunately, few existing prompt learning methods satisfy the criteria of being truly ""general"", i.e., automatic, discrete, black-box, gradient-free, and interpretable all at once. In this paper, we introduce metaheuristics, a branch of discrete non-convex optimization methods with over 100 options, as a promising approach to prompt learning. Within our paradigm, we test six typical methods: hill climbing, simulated annealing, genetic algorithms with/without crossover, tabu search, and harmony search, demonstrating their effectiveness in black-box prompt learning and Chain-of-Thought prompt tuning.
Furthermore, we show that these methods can be used to discover more human-understandable prompts that were previously unknown, opening the door to a cornucopia of possibilities in prompt optimization. We release all the codes in \url{https://github.com/research4pan/Plum}.",,arXiv,"['cs.lg', 'cs.ai', 'cs.dm']",somewhat relevant,"The paper's core focus is on the systematic rectification of language models and does not address prompt engineering directly, but mention of 'prompt-based token elimination' suggests some relevance." -1448,do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation,"['Zonghai Yao', 'Ahmed Jaafar', 'Beining Wang', 'Yue Zhu', 'Zhichao Yang', 'Hong Yu']",http://arxiv.org/pdf/2311.09684v1.pdf,2023-11-16,," This study examines the effect of prompt engineering on the performance of Large Language Models (LLMs) in clinical note generation. We introduce an Automatic Prompt Optimization (APO) framework to refine initial prompts and compare the outputs of medical experts, non-medical experts, and APO-enhanced GPT3.5 and GPT4. Results highlight GPT4 APO's superior performance in standardizing prompt quality across clinical note sections. A human-in-the-loop approach shows that experts maintain content quality post-APO, with a preference for their own modifications, suggesting the value of expert customization. We recommend a two-phase optimization process, leveraging APO-GPT4 for consistency and expert input for personalization.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper studies the effect of including explanations in prompts and how it influences model performance, which is directly related to prompt engineering for large language models like GPT-3." -1449,propane prompt design as an inverse problem,"['Rimon Melamed', 'Lucas H. McCabe', 'Tanay Wakhare', 'Yejin Kim', 'H. Howie Huang', 'Enric Boix-Adsera']",http://arxiv.org/pdf/2311.07064v1.pdf,2023-11-13,," Carefully-designed prompts are key to inducing desired behavior in Large Language Models (LLMs). As a result, great effort has been dedicated to engineering prompts that guide LLMs toward particular behaviors. In this work, we propose an automatic prompt optimization framework, PROPANE, which aims to find a prompt that induces semantically similar outputs to a fixed set of examples without user intervention. We further demonstrate that PROPANE can be used to (a) improve existing prompts, and (b) discover semantically obfuscated prompts that transfer between models.",,arXiv,['cs.cl'],highly relevant,"The paper presents a method for constructing prompts that evade safety mechanisms in text-to-image models, which directly relates to manipulating prompts to achieve specific outputs, and therefore is relevant to prompt engineering." -1450,prompt engineering a prompt engineer,"['Qinyuan Ye', 'Maxamed Axmed', 'Reid Pryzant', 'Fereshte Khani']",http://arxiv.org/pdf/2311.05661v1.pdf,2023-11-09,," Prompt engineering is a challenging yet crucial task for optimizing the performance of large language models (LLMs). It requires complex reasoning to examine the model's errors, hypothesize what is missing or misleading in the current prompt, and communicate the task with clarity. While recent works indicate that LLMs can be meta-prompted to perform automatic prompt engineering, their potentials may not be fully untapped due to the lack of sufficient guidance to elicit complex reasoning capabilities in LLMs in the meta-prompt.
In this work, we investigate the problem of ""prompt engineering a prompt engineer"" -- constructing a meta-prompt that more effectively guides LLMs to perform automatic prompt engineering. We introduce and analyze key components, such as a step-by-step reasoning template and context specification, which lead to improved performance. In addition, inspired by common optimization concepts such as batch size, step size and momentum, we introduce their verbalized counterparts to the meta-prompt and investigate their effects. Our final method, named PE2, finds a prompt that outperforms ""let's think step by step"" by 6.3% on the MultiArith dataset and 3.1% on the GSM8K dataset. To demonstrate its versatility, we apply PE2 to the Instruction Induction benchmark, a suite of counterfactual tasks, and a lengthy, real-world industrial prompt. In these settings, PE2 achieves strong performance and outperforms prior automatic prompt engineering baselines. Further, we show that PE2 makes meaningful and targeted prompt edits, amends erroneous or incomplete prompts, and presents non-trivial counterfactual reasoning abilities.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",somewhat relevant,"The paper mentions improving prompt performance and finding optimal prompts, indicating investigations relevant to prompt engineering." -1451,to be or not to be an exploration of continuously controllable prompt engineering,"['Yuhan Sun', 'Mukai Li', 'Yixin Cao', 'Kun Wang', 'Wenxiao Wang', 'Xingyu Zeng', 'Rui Zhao']",http://arxiv.org/pdf/2311.09773v1.pdf,2023-11-16,," As the use of large language models becomes more widespread, techniques like parameter-efficient fine-tuning and other methods for controlled generation are gaining traction for customizing models and managing their outputs. However, the challenge of precisely controlling how prompts influence these models is an area ripe for further investigation. In response, we introduce ControlPE (Continuously Controllable Prompt Engineering). ControlPE enables finer adjustments to prompt effects, complementing existing prompt engineering, and effectively controls continuous targets. This approach harnesses the power of LoRA (Low-Rank Adaptation) to create an effect akin to prompt weighting, enabling fine-tuned adjustments to the impact of prompts. Our methodology involves generating specialized datasets for prompt distillation, incorporating these prompts into the LoRA model, and carefully adjusting LoRA merging weight to regulate the influence of prompts. This provides a dynamic and adaptable tool for prompt control. Through our experiments, we have validated the practicality and efficacy of ControlPE. It proves to be a promising solution for control a variety of prompts, ranging from generating short responses prompts, refusal prompts to chain-of-thought prompts.",,arXiv,['cs.cl'],highly relevant,The paper mentions the importance of careful prompt engineering with LLMs and implies direct interactions with LLMs through prompts. -1452,more samples or more prompt inputs exploring effective incontext sampling for llm fewshot prompt engineering,"['Bingsheng Yao', 'Guiming Chen', 'Ruishi Zou', 'Yuxuan Lu', 'Jiachen Li', 'Shao Zhang', 'Sijia Liu', 'James Hendler', 'Dakuo Wang']",http://arxiv.org/pdf/2311.09782v1.pdf,2023-11-16,," While most existing works on LLM prompt-engineering focus only on how to select a better set of data samples inside one single prompt input (In-Context Learning or ICL), why can't we design and leverage multiple prompt inputs together to further improve the LLM performance?
In this work, we propose In-Context Sampling (ICS), a low-resource LLM prompt-engineering technique to produce the most confident prediction results by optimizing the construction of multiple ICL prompt inputs. Extensive experiments with two SOTA LLMs (FlanT5-XL and Mistral-7B) on three NLI datasets (e-SNLI, Multi-NLI, and ANLI) illustrate that ICS can consistently enhance LLM's prediction performance and confidence. An ablation study suggests that a diversity-based ICS strategy may further improve LLM's performance, which sheds light on a new yet promising future research direction.",,arXiv,['cs.cl'],highly relevant,The paper clearly describes the use of manually crafted prompts and a method for learning task-relevant prompts which is relevant to the topic of prompt engineering. -1453,large language models and prompt engineering for biomedical query focused multidocument summarisation,['Diego Mollá'],http://arxiv.org/pdf/2311.05169v1.pdf,2023-11-09,," This paper reports on the use of prompt engineering and GPT-3.5 for biomedical query-focused multi-document summarisation. Using GPT-3.5 and appropriate prompts, our system achieves top ROUGE-F1 results in the task of obtaining short-paragraph-sized answers to biomedical questions in the 2023 BioASQ Challenge (BioASQ 11b). This paper confirms what has been observed in other domains: 1) Prompts that incorporated few-shot samples generally improved on their counterpart zero-shot variants; 2) The largest improvement was achieved by retrieval augmented generation. The fact that these prompts allow our top runs to rank within the top two runs of BioASQ 11b demonstrate the power of using adequate prompts for Large Language Models in general, and GPT-3.5 in particular, for query-focused summarisation.",,arXiv,['cs.cl'],highly relevant,"The paper is highly relevant as it describes using prompts with GPT-3.5-turbo for generating responses, which directly involves the concept of prompt engineering." -1454,beautifulprompt towards automatic prompt engineering for texttoimage synthesis,"['Tingfeng Cao', 'Chengyu Wang', 'Bingyan Liu', 'Ziheng Wu', 'Jinhui Zhu', 'Jun Huang']",http://arxiv.org/pdf/2311.06752v1.pdf,2023-11-12,," Recently, diffusion-based deep generative models (e.g., Stable Diffusion) have shown impressive results in text-to-image synthesis. However, current text-to-image models often require multiple passes of prompt engineering by humans in order to produce satisfactory results for real-world applications. We propose BeautifulPrompt, a deep generative model to produce high-quality prompts from very simple raw descriptions, which enables diffusion-based models to generate more beautiful images. In our work, we first fine-tuned the BeautifulPrompt model over low-quality and high-quality collecting prompt pairs. Then, to ensure that our generated prompts can generate more beautiful images, we further propose a Reinforcement Learning with Visual AI Feedback technique to fine-tune our model to maximize the reward values of the generated prompts, where the reward values are calculated based on the PickScore and the Aesthetic Scores. Our results demonstrate that learning from visual AI feedback promises the potential to improve the quality of generated prompts and images significantly.
We further showcase the integration of BeautifulPrompt to a cloud-native AI platform to provide better text-to-image generation service in the cloud.",,arXiv,['cs.cl'],highly relevant,"The paper explicitly mentions the use of a 'straightforward few-shot prompt' with a GPT-3.5 model, which suggests relevance to prompt engineering techniques." -1455,on the discussion of large language models symmetry of agents and interplay with prompts,"['Qineng Wang', 'Zihao Wang', 'Ying Su', 'Yangqiu Song']",http://arxiv.org/pdf/2311.07076v1.pdf,2023-11-13,," Two ways has been discussed to unlock the reasoning capability of a large language model. The first one is prompt engineering and the second one is to combine the multiple inferences of large language models, or the multi-agent discussion. Theoretically, this paper justifies the multi-agent discussion mechanisms from the symmetry of agents. Empirically, this paper reports the empirical results of the interplay of prompts and discussion mechanisms, revealing the empirical state-of-the-art performance of complex multi-agent mechanisms can be approached by carefully developed prompt engineering. This paper also proposes a scalable discussion mechanism based on conquer and merge, providing a simple multi-agent discussion solution with simple prompts but state-of-the-art performance.",,arXiv,['cs.cl'],highly relevant,"The paper mentions zero-shot and few-shot prompting, which are methods of prompt engineering, making it relevant to the topic." -1456,loke linked open knowledge extraction for automated knowledge graph construction,['Jamie McCusker'],http://arxiv.org/pdf/2311.09366v1.pdf,2023-11-15,," While the potential of Open Information Extraction (Open IE) for Knowledge Graph Construction (KGC) may seem promising, we find that the alignment of Open IE extraction results with existing knowledge graphs to be inadequate. The advent of Large Language Models (LLMs), especially the commercially available OpenAI models, have reset expectations for what is possible with deep learning models and have created a new field called prompt engineering. We investigate the use of GPT models and prompt engineering for knowledge graph construction with the Wikidata knowledge graph to address a similar problem to Open IE, which we call Open Knowledge Extraction (OKE) using an approach we call the Linked Open Knowledge Extractor (LOKE, pronounced like ""Loki""). We consider the entity linking task essential to construction of real world knowledge graphs. We merge the CaRB benchmark scoring approach with data from the TekGen dataset for the LOKE task. We then show that a well engineered prompt, paired with a naive entity linking approach (which we call LOKE-GPT), outperforms AllenAI's OpenIE 4 implementation on the OKE task, although it over-generates triples compared to the reference set due to overall triple scarcity in the TekGen set. Through an analysis of entity linkability in the CaRB dataset, as well as outputs from OpenIE 4 and LOKE-GPT, we see that LOKE-GPT and the ""silver"" TekGen triples show that the task is significantly different in content from OIE, if not structure. Through this analysis and a qualitative analysis of sentence extractions via all methods, we found that LOKE-GPT extractions are of high utility for the KGC task and suitable for use in semi-automated extraction settings.",,arXiv,"['cs.cl', 'cs.ai']",somewhat relevant,"The paper introduces a novel prompting technique in the context of simulating brain activity, which is relevant to prompt engineering."
-1457,exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning,"['Tong Wan', 'Zhongzhou Chen']",http://arxiv.org/pdf/2311.06180v1.pdf,2023-11-10,," Instructor's feedback plays a critical role in students' development of conceptual understanding and reasoning skills. However, grading student written responses and providing personalized feedback can take a substantial amount of time. In this study, we explore using GPT-3.5 to write feedback to student written responses to conceptual questions with prompt engineering and few-shot learning techniques. In stage one, we used a small portion (n=20) of the student responses on one conceptual question to iteratively train GPT. Four of the responses paired with human-written feedback were included in the prompt as examples for GPT. We tasked GPT to generate feedback to the other 16 responses, and we refined the prompt after several iterations. In stage two, we gave four student researchers the 16 responses as well as two versions of feedback, one written by the authors and the other by GPT. Students were asked to rate the correctness and usefulness of each feedback, and to indicate which one was generated by GPT. The results showed that students tended to rate the feedback by human and GPT equally on correctness, but they all rated the feedback by GPT as more useful. Additionally, the successful rates of identifying GPT's feedback were low, ranging from 0.1 to 0.6. In stage three, we tasked GPT to generate feedback to the rest of the student responses (n=65). The feedback was rated by four instructors based on the extent of modification needed if they were to give the feedback to students. All the instructors rated approximately 70% of the feedback statements needing only minor or no modification. This study demonstrated the feasibility of using Generative AI as an assistant to generating feedback for student written responses with only a relatively small number of examples. An AI assistance can be one of the solutions to substantially reduce time spent on grading student written responses.",,arXiv,['physics.ed-ph'],somewhat relevant,"The abstract mentions the use of 'zero-shot prompting,' which indicates that the study involves prompt engineering, possibly including hard prefix prompts." -1458,how are prompts different in terms of sensitivity,"['Sheng Lu', 'Hendrik Schuff', 'Iryna Gurevych']",http://arxiv.org/pdf/2311.07230v1.pdf,2023-11-13,," In-context learning (ICL) has become one of the most popular learning paradigms. While there is a growing body of literature focusing on prompt engineering, there is a lack of systematic analysis comparing the effects of prompts across different models and tasks. To address this gap, we present a comprehensive prompt analysis based on the sensitivity of a function. Our analysis reveals that sensitivity is an unsupervised proxy for model performance, as it exhibits a strong negative correlation with accuracy. We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output, resulting in different levels of sensitivity. Furthermore, we introduce sensitivity-aware decoding which incorporates sensitivity estimation as a penalty term in the standard greedy decoding. We show that this approach is particularly helpful when information in the input is scarce.
Our work provides a fresh perspective on the analysis of prompts, and contributes to a better understanding of the mechanism of ICL.",,arXiv,['cs.cl'],highly relevant,"The paper analyzes the impact of prompt position on model performance, indicating active research in prompt engineering methodologies." -1459,think before you speak cultivating communication skills of large language models via inner monologue,"['Junkai Zhou', 'Liang Pang', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.07445v1.pdf,2023-11-13,," The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses. However, LLMs still lack an important ability: communication skills, which makes them more like information seeking tools than anthropomorphic chatbots. To make LLMs more anthropomorphic and proactive during the conversation, we add five communication skills to the response generation process: topic transition, proactively asking questions, concept guidance, empathy, and summarising often. The addition of communication skills increases the interest of users in the conversation and attracts them to chat for longer. To enable LLMs better understand and use communication skills, we design and add the inner monologue to LLMs. The complete process is achieved through prompt engineering and in-context learning. To evaluate communication skills, we construct a benchmark named Cskills for evaluating various communication skills, which can also more comprehensively evaluate the dialogue generation ability of the model. Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines in both automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper discusses prompt engineering and its effects on in-context learning across different models, which is directly related to the topic of prompt engineering." -1460,assessing testtime variability for interactive 3d medical image segmentation with diverse point prompts,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2311.07806v1.pdf,2023-11-13,," Interactive segmentation model leverages prompts from users to produce robust segmentation. This advancement is facilitated by prompt engineering, where interactive prompts serve as strong priors during test-time. However, this is an inherently subjective and hard-to-reproduce process. The variability in user expertise and inherently ambiguous boundaries in medical images can lead to inconsistent prompt selections, potentially affecting segmentation accuracy. This issue has not yet been extensively explored for medical imaging. In this paper, we assess the test-time variability for interactive medical image segmentation with diverse point prompts. For a given target region, the point is classified into three sub-regions: boundary, margin, and center. Our goal is to identify a straightforward and efficient approach for optimal prompt selection during test-time based on three considerations: (1) benefits of additional prompts, (2) effects of prompt placement, and (3) strategies for optimal prompt selection. We conduct extensive experiments on the public Medical Segmentation Decathlon dataset for challenging colon tumor segmentation task. We suggest an optimal strategy for prompt selection during test-time, supported by comprehensive results.
The code is publicly available at https://github.com/MedICL-VU/variability",,arXiv,['cs.cv'],highly relevant,"The paper mentions the use of prompt engineering to improve the communication skills of large language models, which is directly related to the topic of prompt engineering." -1461,i was blind but now i see implementing visionenabled dialogue in social robots,"['Giulio Antonio Abbo', 'Tony Belpaeme']",http://arxiv.org/pdf/2311.08957v1.pdf,2023-11-15,," In the rapidly evolving landscape of human-computer interaction, the integration of vision capabilities into conversational agents stands as a crucial advancement. This paper presents an initial implementation of a dialogue manager that leverages the latest progress in Large Language Models (e.g., GPT-4, IDEFICS) to enhance the traditional text-based prompts with real-time visual input. LLMs are used to interpret both textual prompts and visual stimuli, creating a more contextually aware conversational agent. The system's prompt engineering, incorporating dialogue with summarisation of the images, ensures a balance between context preservation and computational efficiency. Six interactions with a Furhat robot powered by this system are reported, illustrating and discussing the results obtained. By implementing this vision-enabled dialogue system, the paper envisions a future where conversational agents seamlessly blend textual and visual modalities, enabling richer, more context-aware dialogues.",,arXiv,"['cs.ro', 'cs.ai', 'cs.hc']",highly relevant,"The paper discusses prompt engineering in the context of interactive medical image segmentation, using prompts as strong priors during test-time, which is highly relevant to hard prefix prompting." -1462,simulating opinion dynamics with networks of llmbased agents,"['Yun-Shiuan Chuang', 'Agam Goyal', 'Nikunj Harlalka', 'Siddharth Suresh', 'Robert Hawkins', 'Sijia Yang', 'Dhavan Shah', 'Junjie Hu', 'Timothy T. Rogers']",http://arxiv.org/pdf/2311.09618v1.pdf,2023-11-16,," Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations lack fidelity to human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs). Our findings reveal a strong inherent bias in LLM agents towards accurate information, leading to consensus in line with scientific reality. However, this bias limits the simulation of individuals with resistant views on issues like climate change. After inducing confirmation bias through prompt engineering, we observed opinion fragmentation in line with existing agent-based research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.",,arXiv,"['physics.soc-ph', 'cs.cl']",somewhat relevant,"The paper describes the use of LLMs for processing both textual prompts and visual stimuli, involving engineering of prompts for interaction with a robot, which is relevant to prompt engineering."
-1463,fairytalecqa integrating a commonsense knowledge graph into children's storybook narratives,"['Jiaju Chen', 'Yuxuan Lu', 'Shao Zhang', 'Bingsheng Yao', 'Yuanzhe Dong', 'Ying Xu', 'Yunyao Li', 'Qianwen Wang', 'Dakuo Wang', 'Yuling Sun']",http://arxiv.org/pdf/2311.09756v1.pdf,2023-11-16,," AI models (including LLM) often rely on narrative question-answering (QA) datasets to provide customized QA functionalities to support downstream children education applications; however, existing datasets only include QA pairs that are grounded within the given storybook content, but children can learn more when teachers refer the storybook content to real-world knowledge (e.g., commonsense knowledge). We introduce the FairytaleCQA dataset, which is annotated by children education experts, to supplement 278 storybook narratives with educationally appropriate commonsense knowledge. The dataset has 5,868 QA pairs that not only originate from the storybook narrative but also contain the commonsense knowledge grounded by an external knowledge graph (i.e., ConceptNet). A follow-up experiment shows that a smaller model (T5-large) fine-tuned with FairytaleCQA reliably outperforms much larger prompt-engineered LLM (e.g., GPT-4) in this new QA-pair generation task (QAG). This result suggests that: 1) our dataset brings novel challenges to existing LLMs, and 2) human experts' data annotation are still critical as they have much nuanced knowledge that LLMs do not know in the children educational domain.",,arXiv,['cs.cl'],somewhat relevant,"The paper mentions that a smaller model fine-tuned with FairytaleCQA outperforms a much larger prompt-engineered LLM in a QA-pair generation task, indicating that the study involves comparing effects of dataset annotation versus prompt engineering." 117,geotechnical parrot tales (gpt) harnessing large language models in geotechnical engineering,['Krishna Kumar'],http://arxiv.org/pdf/2304.02138,2023-04-04,,"The widespread adoption of large language models (LLMs), such as OpenAI's ChatGPT, could revolutionize various industries, including geotechnical engineering. However, GPT models can sometimes generate plausible-sounding but false outputs, leading to hallucinations. In this article, we discuss the importance of prompt engineering in mitigating these risks and harnessing the full potential of GPT for geotechnical applications. We explore the challenges and pitfalls associated with LLMs and highlight the role of context in ensuring accurate and valuable responses. Furthermore, we examine the development of context-specific search engines and the potential of LLMs to become a natural interface for complex tasks, such as data analysis and design. We also develop a unified interface using natural language to handle complex geotechnical engineering tasks and data analysis. By integrating GPT into geotechnical engineering workflows, professionals can streamline their work and develop sustainable and resilient infrastructure systems for the future.",26f560e592419891c9de1b25d0e4d4d16014d54e,Semantic Scholar,,, 118,toward reproducing network research results using large language models,"['Qiao Xiang', 'Yuling Lin', 'Mingjun Fang', 'Bang Huang', 'Siyong Huang', 'Ridi Wen', 'Franck Le', 'L. Kong', 'Jiwu Shu']",https://arxiv.org/pdf/2309.04716,2023-09-09,,"Reproducing research results is important for the networking community.
The current best practice typically resorts to: (1) looking for publicly available prototypes; (2) contacting the authors to get a private prototype; or (3) manually implementing a prototype following the description of the publication. However, most published network research does not have public prototypes and private ones are hard to get. As such, most reproducing efforts are spent on manual implementation based on the publications, which is both time and labor consuming and error-prone. In this paper, we boldly propose reproducing network research results using the emerging large language models (LLMs). We first prove its feasibility with a small-scale experiment, in which four students with essential networking knowledge each reproduces a different networking system published in prominent conferences and journals by prompt engineering ChatGPT. We report our observations and lessons and discuss future open research questions of this proposal.",279c798fd53c8dc84044273d08b6a060dbe9f702,Semantic Scholar,,, +119,inducing anxiety in large language models increases exploration and bias,"['Julian Coda-Forno', 'Kristin Witte', 'A. Jagadish', 'Marcel Binz', 'Zeynep Akata', 'Eric Schulz']",http://arxiv.org/pdf/2304.11111,2023-04-21,,"Large language models are transforming research on machine learning while galvanizing public debates. Understanding not only when these models work well and succeed but also why they fail and misbehave is of great societal relevance. We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry. Our results show that GPT-3.5 responds robustly to a common anxiety questionnaire, producing higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be predictably changed by using emotion-inducing prompts. Emotion-induction not only influences GPT-3.5's behavior in a cognitive task measuring exploratory decision-making but also influences its behavior in a previously-established task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a strong increase in biases when prompted with anxiety-inducing text. Thus, it is likely that how prompts are communicated to large language models has a strong influence on their behavior in applied settings. These results progress our understanding of prompt engineering and demonstrate the usefulness of methods taken from computational psychiatry for studying the capable algorithms to which we increasingly delegate authority and autonomy.",27c16cca907aa43397cc226a182b73b396c5cf66,Semantic Scholar,,, +120,conceptual design generation using large language models,"['Kevin Ma', 'Daniele Grandi', 'Christopher McComb', 'K. Goucher-Lambert']",http://arxiv.org/pdf/2306.01779,2023-05-30,," + Concept generation is a creative step in the conceptual design phase, where designers often turn to brainstorming, mindmapping, or crowdsourcing design ideas to complement their own knowledge of the domain. Recent advances in natural language processing (NLP) and machine learning (ML) have led to the rise of Large Language Models (LLMs) capable of generating seemingly creative outputs from textual prompts. The success of these models has led to their integration and application across a variety of domains, including art, entertainment, and other creative work. 
In this paper, we leverage LLMs to generate solutions for a set of 12 design problems and compare them to a baseline of crowdsourced solutions. We evaluate the differences between generated and crowdsourced design solutions through multiple perspectives, including human expert evaluations and computational metrics. Expert evaluations indicate that the LLM-generated solutions have higher average feasibility and usefulness while the crowdsourced solutions have more novelty. We experiment with prompt engineering and find that leveraging few-shot learning can lead to the generation of solutions that are more similar to the crowdsourced solutions. These findings provide insight into the quality of design solutions generated with LLMs and begins to evaluate prompt engineering techniques that could be leveraged by practitioners to generate higher-quality design solutions synergistically with LLMs.",29203f0b8b9be7fd70d99bf7390c6a78b68a9289,Semantic Scholar,,, +121,fixing hardware security bugs with large language models,"['Baleegh Ahmad', 'Shailja Thakur', 'Benjamin Tan', 'R. Karri', 'H. Pearce']",http://arxiv.org/pdf/2302.01215,2023-02-02,,"Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's Codex have demonstrated capabilities in many coding-adjacent domains. In this work we consider how LLMs maybe leveraged to automatically repair security relevant bugs present in hardware designs. We focus on bug repair in code written in the Hardware Description Language Verilog. For this study we build a corpus of domain-representative hardware security bugs. We then design and implement a framework to quantitatively evaluate the performance of any LLM tasked with fixing the specified bugs. The framework supports design space exploration of prompts (i.e., prompt engineering) and identifying the best parameters for the LLM. We show that an ensemble of LLMs can repair all ten of our benchmarks. This ensemble outperforms the state-of-the-art Cirfix hardware bug repair tool on its own suite of bugs. These results show that LLMs can repair hardware security bugs and the framework is an important step towards the ultimate goal of an automated end-to-end bug repair framework.",2af6a21a1b682ceb585165359d3605e89f4cf6b0,Semantic Scholar,,, +122,toxicity detection with generative promptbased inference,"['Yau-Shian Wang', 'Y. Chang']",https://arxiv.org/pdf/2205.12390,2022-05-24,,"Due to the subtleness, implicity, and different possible interpretations perceived by different people, detecting undesirable content from text is a nuanced difficulty. It is a long-known risk that language models (LMs), once trained on corpus containing undesirable content, have the power to manifest biases and toxicity. However, recent studies imply that, as a remedy, LMs are also capable of identifying toxic content without additional fine-tuning. Prompt-methods have been shown to effectively harvest this surprising self-diagnosing capability. However, existing prompt-based methods usually specify an instruction to a language model in a discriminative way. In this work, we explore the generative variant of zero-shot prompt-based toxicity detection with comprehensive trials on prompt engineering. We evaluate on three datasets with toxicity labels annotated on social media posts. Our analysis highlights the strengths of our generative classification approach both quantitatively and qualitatively. 
Interesting aspects of self-diagnosis and its ethical implications are discussed.",2afb07359e9c67499e1f373ac6f1520d3ea9c46a,Semantic Scholar,,, +123,exploring efl students' prompt engineering in humanai story writing an activity theory perspective,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",http://arxiv.org/pdf/2306.01798,2023-06-01,,"This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing. Sixty-seven Hong Kong secondary school students created generative-AI tools using open-source language models and wrote short stories with them. The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting. The research identified three main themes regarding the purposes for which students prompt generative-AI tools during short story writing: a lack of awareness of purposes, overcoming writer's block, and developing, expanding, and improving the story. The study also identified common characteristics of students' activity systems, including the sophistication of their generative-AI tools, the quality of their stories, and their school's overall academic achievement level, for their prompting of generative-AI tools for the three purposes during short story writing. The study's findings suggest that teachers should be aware of students' purposes for prompting generative-AI tools to provide tailored instructions and scaffolded guidance. The findings may also help designers provide differentiated instructions for users at various levels of story development when using a generative-AI tool.",2bb34cfe22d0d46394dd91ba8934e525563e1274,Semantic Scholar,,, +124,pre visionlanguage prompt learning with reparameterization encoder,['Anh Pham Thi Minh'],https://arxiv.org/pdf/2309.07760,2023-09-14,,"Large pre-trained vision-language models such as CLIP have demonstrated great potential in zero-shot transferability to downstream tasks. However, to attain optimal performance, the manual selection of prompts is necessary to improve alignment between the downstream image distribution and the textual class descriptions. This manual prompt engineering is the major challenge for deploying such models in practice since it requires domain expertise and is extremely time-consuming. To avoid non-trivial prompt engineering, recent work Context Optimization (CoOp) introduced the concept of prompt learning to the vision domain using learnable textual tokens. While CoOp can achieve substantial improvements over manual prompts, its learned context is worse generalizable to wider unseen classes within the same dataset. In this work, we present Prompt Learning with Reparameterization Encoder (PRE) - a simple and efficient method that enhances the generalization ability of the learnable prompt to unseen classes while maintaining the capacity to learn Base classes. Instead of directly optimizing the prompts, PRE employs a prompt encoder to reparameterize the input prompt embeddings, enhancing the exploration of task-specific knowledge from few-shot samples. Experiments and extensive ablation studies on 8 benchmarks demonstrate that our approach is an efficient method for prompt learning. 
Specifically, PRE achieves a notable enhancement of 5.60% in average accuracy on New classes and 3% in Harmonic mean compared to CoOp in the 16-shot setting, all achieved within a good training time.",2c66f49e328ca5815c13dda106abc2c326d4f28b,Semantic Scholar,,, +125,chainforge a visual toolkit for prompt engineering and llm hypothesis testing,"['Ian Arawjo', 'Chelse Swoopes', 'Priyan Vaithilingam', 'Martin Wattenberg', 'Elena L. Glassman']",https://arxiv.org/pdf/2309.09128,2023-09-17,,"Evaluating outputs of large language models (LLMs) is challenging, requiring making -- and making sense of -- many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.",2ed64d90670177bf58cdce6bda04a48a8731a18f,Semantic Scholar,,, +126,accelerated materials language processing enabled by gpt,"['Jaewoong Choi', 'Byungju Lee']",https://arxiv.org/pdf/2308.09354,2023-08-18,,"Materials language processing (MLP) is one of the key facilitators of materials science research, as it enables the extraction of structured information from massive materials science literature. Prior works suggested high-performance MLP models for text classification, named entity recognition (NER), and extractive question answering (QA), which require complex model architecture, exhaustive fine-tuning and a large number of human-labelled datasets. In this study, we develop generative pretrained transformer (GPT)-enabled pipelines where the complex architectures of prior MLP models are replaced with strategic designs of prompt engineering. First, we develop a GPT-enabled document classification method for screening relevant documents, achieving comparable accuracy and reliability compared to prior models, with only small dataset. Secondly, for NER task, we design an entity-centric prompts, and learning few-shot of them improved the performance on most of entities in three open datasets. Finally, we develop an GPT-enabled extractive QA model, which provides improved performance and shows the possibility of automatically correcting annotations. While our findings confirm the potential of GPT-enabled MLP models as well as their value in terms of reliability and practicability, our scientific methods and systematic approach are applicable to any materials science domain to accelerate the information extraction of scientific literature.",3034d8571e16e25c6a839bf492f20daf855d04a0,Semantic Scholar,,, +127,"a sign language recognition system with pepper, lightweighttransformer, and llm","['Jongyoon Lim', 'Inkyu Sa', 'Bruce A. 
MacDonald', 'Ho Seok Ahn']",https://arxiv.org/pdf/2309.16898,2023-09-28,,"This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction. First, we introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources. Building upon this, we employ large language models (LLMs) for intelligent robot interactions. Through intricate prompt engineering, we tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues. Finally, we present an integrated software pipeline, embodying advancements in a socially aware AI interaction model. Leveraging the Pepper Robot's capabilities, we demonstrate the practicality and effectiveness of our approach in real-world scenarios. The results highlight a profound potential for enhancing human-robot interaction through non-verbal interactions, bridging communication gaps, and making technology more accessible and understandable.",31e04aec55f749dc560afe1d8673112f9b32f46b,Semantic Scholar,,, +128,cases of efl secondary students' prompt engineering pathways to complete a writing task with chatgpt,"['D. Woo', 'Kai Guo', 'Hengky Susanto']",https://arxiv.org/pdf/2307.05493,2023-06-19,,"ChatGPT is a state-of-the-art (SOTA) chatbot. Although it has potential to support English as a foreign language (EFL) students' writing, to effectively collaborate with it, a student must learn to engineer prompts, that is, the skill of crafting appropriate instructions so that ChatGPT produces desired outputs. However, writing an appropriate prompt for ChatGPT is not straightforward for non-technical users who suffer a trial-and-error process. This paper examines the content of EFL students' ChatGPT prompts when completing a writing task and explores patterns in the quality and quantity of the prompts. The data come from iPad screen recordings of secondary school EFL students who used ChatGPT and other SOTA chatbots for the first time to complete the same writing task. The paper presents a case study of four distinct pathways that illustrate the trial-and-error process and show different combinations of prompt content and quantity. The cases contribute evidence for the need to provide prompt engineering education in the context of the EFL writing classroom, if students are to move beyond an individual trial-and-error process, learning a greater variety of prompt content and more sophisticated prompts to support their writing.",344f801663a76aa15e0dd13344261d8648c382a2,Semantic Scholar,,, +129,"llm self defense by self examination, llms know they are being tricked","['Alec Helbling', 'Mansi Phute', 'Matthew Hull', 'Duen Horng Chau']",https://arxiv.org/pdf/2308.07308,2023-08-14,,"Large language models (LLMs) are popular for high-quality text generation but can produce harmful content, even when aligned with human values through reinforcement learning. Adversarial prompts can bypass their safety measures. We propose LLM Self Defense, a simple approach to defend against these attacks by having an LLM screen the induced responses. Our method does not require any fine-tuning, input preprocessing, or iterative output generation. 
Instead, we incorporate the generated content into a pre-defined prompt and employ another instance of an LLM to analyze the text and predict whether it is harmful. We test LLM Self Defense on GPT 3.5 and Llama 2, two of the current most prominent LLMs against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks. Notably, LLM Self Defense succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5 and Llama 2.",34f9c825ba24889fa5e164ba9f99bfe4fc2f3e61,Semantic Scholar,,, +130,chils zeroshot image classification with hierarchical label sets,"['Zachary Novack', 'S. Garg', 'Julian McAuley', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2302.02551,2023-02-06,,"Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot classification through their ability generate embeddings for each class based on their (natural language) names. Prior work has focused on improving the accuracy of these models through prompt engineering or by incorporating a small amount of labeled downstream data (via finetuning). However, there has been little focus on improving the richness of the class names themselves, which can pose issues when class labels are coarsely-defined and are uninformative. We propose Classification with Hierarchical Label Sets (or CHiLS), an alternative strategy for zero-shot classification specifically designed for datasets with implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each class, produce a set of subclasses, using either existing label hierarchies or by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though these subclasses were the labels of interest; (iii) map the predicted subclass back to its parent to produce the final prediction. Across numerous datasets with underlying hierarchical structure, CHiLS leads to improved accuracy in situations both with and without ground-truth hierarchical information. CHiLS is simple to implement within existing zero-shot pipelines and requires no additional training cost. Code is available at: https://github.com/acmi-lab/CHILS.",34fd95dd4dd32e704d4284fc31165e85b303bb1e,Semantic Scholar,,, +131,flows building blocks of reasoning and collaborating ai,"['Martin Josifoski', 'Lars Klein', 'Maxime Peyrard', 'Yifei Li', 'Saibo Geng', 'Julian Paul Schnitzler', 'Yuxing Yao', 'Jiheng Wei', 'Debjit Paul', 'Robert West']",https://arxiv.org/pdf/2308.01285,2023-08-02,,"Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize this potential, it is essential to develop a principled way of designing and studying such structured interactions. For this purpose, we introduce the conceptual framework of Flows: a systematic approach to modeling complex interactions. Flows are self-contained building blocks of computation, with an isolated state, communicating through a standardized message-based interface. This modular design allows Flows to be recursively composed into arbitrarily nested interactions, with a substantial reduction of complexity. Crucially, any interaction can be implemented using this framework, including prior work on AI--AI and human--AI interactions, prompt engineering schemes, and tool augmentation. 
We demonstrate the potential of Flows on the task of competitive coding, a challenging task on which even GPT-4 struggles. Our results suggest that structured reasoning and collaboration substantially improve generalization, with AI-only Flows adding +$21$ and human--AI Flows adding +$54$ absolute points in terms of solve rate. To support rapid and rigorous research, we introduce the aiFlows library. The library comes with a repository of Flows that can be easily used, extended, and composed into novel, more complex Flows. The aiFlows library is available at https://github.com/epfl-dlab/aiflows. Data and Flows for reproducing our experiments are available at https://github.com/epfl-dlab/cc_flows.",377d4d6c1be01b9df32edfd94b2c5946971b0108,Semantic Scholar,,, +132,thought propagation an analogical approach to complex reasoning with large language models,"['Junchi Yu', 'Ran He', 'Rex Ying']",https://arxiv.org/pdf/2310.03965,2023-10-06,,"Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \textit{from scratch}. To address these issues, we propose \textbf{\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs. These analogous problems are related to the input one, with reusable solutions and problem-solving strategies. Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch. TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\% improvement of human preference in Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent Planning.",3784fd84b61d482b52f7ef72aac66bcb886b892b,Semantic Scholar,,, +133,prompt engineering for healthcare methodologies and applications,"['Jiaqi Wang', 'Enze Shi', 'Sigang Yu', 'Zihao Wu', 'Chong Ma', 'Haixing Dai', 'Qiushi Yang', 'Yanqing Kang', 'Jinru Wu', 'Huawen Hu', 'Chenxi Yue', 'Haiyang Zhang', 'Yi-Hsueh Liu', 'Xiang Li', 'Bao Ge', 'Dajiang Zhu', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2304.14670,2023-04-28,,"This review will introduce the latest advances in prompt engineering in the field of natural language processing (NLP) for the medical domain. First, we will provide a brief overview of the development of prompt engineering and emphasize its significant contributions to healthcare NLP applications such as question-answering systems, text summarization, and machine translation. With the continuous improvement of general large language models, the importance of prompt engineering in the healthcare domain is becoming increasingly prominent. 
The aim of this article is to provide useful resources and bridges for healthcare NLP researchers to better explore the application of prompt engineering in this field. We hope that this review can provide new ideas and inspire ample possibilities for research and application in medical NLP.",385376b8aa48c25403f17d6206db7c09b67e1314,Semantic Scholar,,, +134,parafuzz an interpretabilitydriven technique for detecting poisoned samples in nlp,"['Lu Yan', 'Zhuo Zhang', 'Guanhong Tao', 'Kaiyuan Zhang', 'Xuan Chen', 'Guangyu Shen', 'Xiangyu Zhang']",https://arxiv.org/pdf/2308.02122,2023-08-04,,"Backdoor attacks have emerged as a prominent threat to natural language processing (NLP) models, where the presence of specific triggers in the input can lead poisoned models to misclassify these inputs to predetermined target classes. Current detection mechanisms are limited by their inability to address more covert backdoor strategies, such as style-based attacks. In this work, we propose an innovative test-time poisoned sample detection framework that hinges on the interpretability of model predictions, grounded in the semantic meaning of inputs. We contend that triggers (e.g., infrequent words) are not supposed to fundamentally alter the underlying semantic meanings of poisoned samples as they want to stay stealthy. Based on this observation, we hypothesize that while the model's predictions for paraphrased clean samples should remain stable, predictions for poisoned samples should revert to their true labels upon the mutations applied to triggers during the paraphrasing process. We employ ChatGPT, a state-of-the-art large language model, as our paraphraser and formulate the trigger-removal task as a prompt engineering problem. We adopt fuzzing, a technique commonly used for unearthing software vulnerabilities, to discover optimal paraphrase prompts that can effectively eliminate triggers while concurrently maintaining input semantics. Experiments on 4 types of backdoor attacks, including the subtle style backdoors, and 4 distinct datasets demonstrate that our approach surpasses baseline methods, including STRIP, RAP, and ONION, in precision and recall.",3a733c27bff68259b17dc4f835b0d192ac8fab70,Semantic Scholar,,, +135,transforming sentiment analysis in the financial domain with chatgpt,"['G. Fatouros', 'J. Soldatos', 'Kalliopi Kouroumali', 'Georgios Makridis', 'D. Kyriazis']",https://arxiv.org/pdf/2308.07935,2023-08-13,,"Financial sentiment analysis plays a crucial role in decoding market trends and guiding strategic trading decisions. Despite the deployment of advanced deep learning techniques and language models to refine sentiment analysis in finance, this study breaks new ground by investigating the potential of large language models, particularly ChatGPT 3.5, in financial sentiment analysis, with a strong emphasis on the foreign exchange market (forex). Employing a zero-shot prompting approach, we examine multiple ChatGPT prompts on a meticulously curated dataset of forex-related news headlines, measuring performance using metrics such as precision, recall, f1-score, and Mean Absolute Error (MAE) of the sentiment class. Additionally, we probe the correlation between predicted sentiment and market returns as an additional evaluation approach. ChatGPT, compared to FinBERT, a well-established sentiment analysis model for financial texts, exhibited approximately 35\% enhanced performance in sentiment classification and a 36\% higher correlation with market returns. 
By underlining the significance of prompt engineering, particularly in zero-shot contexts, this study spotlights ChatGPT's potential to substantially boost sentiment analysis in financial applications. By sharing the utilized dataset, our intention is to stimulate further research and advancements in the field of financial services.",3c4f1244301577cffff9affc73690669725e7e08,Semantic Scholar,,, +136,enhancing clip with gpt4 harnessing visual descriptions as prompts,"['Mayug Maniparambil', 'Chris Vorster', 'D. Molloy', 'N. Murphy', 'Kevin McGuinness', ""Noel E. O'Connor""]",https://doras.dcu.ie/28982/1/MMFM-2.pdf,2023-07-21,,"Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on downstream datasets. VLMs are 0-shot adapted to a downstream dataset by designing prompts that are relevant to the dataset. Such prompt engineering makes use of domain expertise and a validation dataset. Meanwhile, recent developments in generative pretrained models like GPT-4 mean they can be used as advanced internet search tools. They can also be manipulated to provide visual information in any structure. In this work, we show that GPT-4 can be used to generate text that is visually descriptive and how this can be used to adapt CLIP to downstream tasks. We show considerable improvements in 0-shot transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD (~ 7%), SUN397 (~ 4.6%), and CUB ( ~3.3%) when compared to CLIP’s default prompt. We also design a simple few-shot adapter that learns to choose the best possible sentences to construct generalizable classifiers that outperform the recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized fine-grained datasets. The code, prompts, and auxiliary text dataset is available at github.com/mayug/VDT-Adapter.",3e0a691277183a6704310af3e4e9e271400612bc,Semantic Scholar,,, +137,large language models as data preprocessors,"['Haochen Zhang', 'Yuyang Dong', 'Chuan Xiao', 'M. Oyamada']",https://arxiv.org/pdf/2308.16361,2023-08-30,,"Large Language Models (LLMs), typified by OpenAI's GPT series and Meta's LLaMA variants, have marked a significant advancement in artificial intelligence. Trained on vast amounts of text data, LLMs are capable of understanding and generating human-like text across a diverse range of topics. This study expands on the applications of LLMs, exploring their potential in data preprocessing, a critical stage in data mining and analytics applications. We delve into the applicability of state-of-the-art LLMs such as GPT-3.5, GPT-4, and Vicuna-13B for error detection, data imputation, schema matching, and entity matching tasks. Alongside showcasing the inherent capabilities of LLMs, we highlight their limitations, particularly in terms of computational expense and inefficiency. We propose an LLM-based framework for data preprocessing, which integrates cutting-edge prompt engineering techniques, coupled with traditional methods like contextualization and feature selection, to improve the performance and efficiency of these models. The effectiveness of LLMs in data preprocessing is evaluated through an experimental study spanning 12 datasets. GPT-4 emerged as a standout, achieving 100\% accuracy or F1 score on 4 datasets, suggesting LLMs' immense potential in these tasks. 
Despite certain limitations, our study underscores the promise of LLMs in this domain and anticipates future developments to overcome current hurdles.",3e1ca026052d30e3b9677e363616fae23f6616df,Semantic Scholar,,, +138,revisiting prompt engineering via declarative crowdsourcing,"['Aditya G. Parameswaran', 'Shreya Shankar', 'Parth Asawa', 'Naman Jain', 'Yujie Wang']",https://arxiv.org/pdf/2308.03854,2023-08-07,,"Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. There has been an advent of toolkits and recipes centered around so-called prompt engineering-the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows, in particular, optimizing for quality, while keeping cost bounded, is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs like crowd workers and leverage ideas from the declarative crowdsourcing literature-including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid-LLM-non-LLM approaches-to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach",3e4991bd206214f596a10e9932cd441fe5bd1f8c,Semantic Scholar,,, +139,demonstrations of the potential of aibased political issue polling,"['Nathan Sanders', 'Alex Ulinich', 'B. Schneier']",https://arxiv.org/pdf/2307.04781,2023-07-10,,"Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world. However, it has been challenged by factors that stress its cost, availability, and accuracy. At the same time, artificial intelligence (AI) chatbots have become compelling stand-ins for human behavior, powered by increasingly sophisticated large language models (LLMs). Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms? We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification. We execute large scale experiments, querying for thousands of simulated responses at a cost far lower than human surveys. We compare simulated data to human issue polling data from the Cooperative Election Study (CES). We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their ideological breakdown (correlation typically>85%). However, it is less successful at anticipating demographic-level differences. Moreover, ChatGPT tends to overgeneralize to new policy issues that arose after its training data was collected, such as US support for involvement in the war in Ukraine. Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain. 
(Abridged)",407a8d6227ece351d9870f96576d4c287a746166,Semantic Scholar,,, +140,scalable 3d captioning with pretrained models,"['Tiange Luo', 'C. Rockwell', 'Honglak Lee', 'Justin Johnson']",http://arxiv.org/pdf/2306.07279,2023-06-12,,"We introduce Cap3D, an automatic approach for generating descriptive text for 3D objects. This approach utilizes pretrained models from image captioning, image-text alignment, and LLM to consolidate captions from multiple views of a 3D asset, completely side-stepping the time-consuming and costly process of manual annotation. We apply Cap3D to the recently introduced large-scale 3D dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted using 41k human annotations from the same dataset, demonstrates that Cap3D surpasses human-authored descriptions in terms of quality, cost, and speed. Through effective prompt engineering, Cap3D rivals human performance in generating geometric descriptions on 17k collected annotations from the ABO dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions, and show Cap3D outperforms; and benchmark the SOTA including Point-E, Shape-E, and DreamFusion.",4279a38a098d1d359881b73c6a88a112fe93443a,Semantic Scholar,,, +141,interactive data synthesis for systematic vision adaptation via llmsaigcs collaboration,"['Qifan Yu', 'Juncheng Li', 'Wentao Ye', 'Siliang Tang', 'Yueting Zhuang']",http://arxiv.org/pdf/2305.12799,2023-05-22,,"Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images. In parallel, the problem of data scarcity has brought a growing interest in employing AIGC technology for high-quality data expansion. However, this paradigm requires well-designed prompt engineering that cost-less data expansion and labeling remain under-explored. Inspired by LLM's powerful capability in task guidance, we propose a new paradigm of annotated data expansion named as ChatGenImage. The core idea behind it is to leverage the complementary strengths of diverse models to establish a highly effective and user-friendly pipeline for interactive data augmentation. In this work, we extensively study how LLMs communicate with AIGC model to achieve more controllable image generation and make the first attempt to collaborate them for automatic data augmentation for a variety of downstream tasks. Finally, we present fascinating results obtained from our ChatGenImage framework and demonstrate the powerful potential of our synthetic data for systematic vision adaptation. Our codes are available at https://github.com/Yuqifan1117/Labal-Anything-Pipeline.",43a55dbd95c9d5cd82de8db276f41adeec4a937d,Semantic Scholar,,, +142,gpt takes the bar exam,"['M. Bommarito', 'D. Katz']",http://arxiv.org/pdf/2212.14402,2022-12-29,,"Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as""the Bar Exam,""as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. 
In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in""AI?""In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often-referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly-correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.",458147b5f7242c998ec4f33798a59b7c48867329,Semantic Scholar,,, +143,prompts matter insights and strategies for prompt engineering in automated software traceability,"['Alberto D. Rodriguez', 'Katherine R. Dearstyne', 'J. Cleland-Huang']",https://arxiv.org/pdf/2308.00229,2023-08-01,,"Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts, offering our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods on the ranking of candidate links after prompt refinement. The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts to effectively harness LLMs for advancing automatic traceability.",4591f6cea22b66eccda0103b83002be45e8216b6,Semantic Scholar,,, +144,humans in humans out on gpt converging toward common sense in both success and failure,"['Philipp E. Koralus', ""Vincent Wang-Ma'scianica""]",http://arxiv.org/pdf/2303.17276,2023-03-30,,"Increase in computational scale and fine-tuning has seen a dramatic improvement in the quality of outputs of large language models (LLMs) like GPT. Given that both GPT-3 and GPT-4 were trained on large quantities of human-generated text, we might ask to what extent their outputs reflect patterns of human thinking, both for correct and incorrect cases. The Erotetic Theory of Reason (ETR) provides a symbolic generative model of both human success and failure in thinking, across propositional, quantified, and probabilistic reasoning, as well as decision-making. 
We presented GPT-3, GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a recent book-length presentation of ETR, consisting of experimentally verified data-points on human judgment and extrapolated data-points predicted by ETR, with correct inference patterns as well as fallacies and framing effects (the ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3 showed evidence of ETR-predicted outputs for 59% of these examples, rising to 77% in GPT-3.5 and 75% in GPT-4. Remarkably, the production of human-like fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in GPT-4. This suggests that larger and more advanced LLMs may develop a tendency toward more human-like mistakes, as relevant thought patterns are inherent in human-produced training data. According to ETR, the same fundamental patterns are involved both in successful and unsuccessful ordinary reasoning, so that the""bad""cases could paradoxically be learned from the""good""cases. We further present preliminary evidence that ETR-inspired prompt engineering could reduce instances of these mistakes.",45c46687bc8d2dbdea6f92fc14d4dc7a548ddd12,Semantic Scholar,,, +145,large language models are humanlevel prompt engineers,"['Yongchao Zhou', 'Andrei Ioan Muresanu', 'Ziwen Han', 'Keiran Paster', 'Silviu Pitis', 'Harris Chan', 'Jimmy Ba']",http://arxiv.org/pdf/2211.01910,2022-11-03,,"By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the""program,""optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer.",4610ffb1b016acaa82a2065ffd1a3adbae1ce722,Semantic Scholar,,, +146,exploring small language models with promptlearning paradigm for efficient domainspecific text classification,"['Hengyu Luo', 'Peng Liu', 'Stefan Esping']",https://arxiv.org/pdf/2309.14779,2023-09-26,,"Domain-specific text classification faces the challenge of scarce labeled data due to the high cost of manual labeling. Prompt-learning, known for its efficiency in few-shot scenarios, is proposed as an alternative to traditional fine-tuning methods. 
And besides, although large language models (LLMs) have gained prominence, small language models (SLMs, with under 1B parameters) offer significant customizability, adaptability, and cost-effectiveness for domain-specific tasks, given industry constraints. In this study, we investigate the potential of SLMs combined with prompt-learning paradigm for domain-specific text classification, specifically within customer-agent interactions in retail. Our evaluations show that, in few-shot settings when prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M parameters, achieve approximately 75% accuracy with limited labeled data (up to 15% of full data), which shows great potentials of SLMs with prompt-learning. Based on this, We further validate the effectiveness of active few-shot sampling and the ensemble strategy in the prompt-learning pipeline that contribute to a remarkable performance gain. Besides, in zero-shot settings with a fixed model, we underscore a pivotal observation that, although the GPT-3.5-turbo equipped with around 154B parameters garners an accuracy of 55.16%, the power of well designed prompts becomes evident when the FLAN-T5-large, a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves an accuracy exceeding 31% with the optimized prompt, a leap from its sub-18% performance with an unoptimized one. Our findings underscore the promise of prompt-learning in classification tasks with SLMs, emphasizing the benefits of active few-shot sampling, and ensemble strategies in few-shot settings, and the importance of prompt engineering in zero-shot settings.",47d04bcfe0f1bed72d03c68cce76b4cf4be03f11,Semantic Scholar,,, +147,cotbert enhancing unsupervised sentence representation through chainofthought,"['Bowen Zhang', 'Kehua Chang', 'Chunping Li']",https://arxiv.org/pdf/2309.11143,2023-09-20,,"Unsupervised sentence representation learning aims to transform input sentences into fixed-length vectors enriched with intricate semantic information while obviating the reliance on labeled data. Recent progress within this field, propelled by contrastive learning and prompt engineering, has significantly bridged the gap between unsupervised and supervised strategies. Nonetheless, the potential utilization of Chain-of-Thought, remains largely untapped within this trajectory. To unlock latent capabilities within pre-trained models, such as BERT, we propose a two-stage approach for sentence representation: comprehension and summarization. Subsequently, the output of the latter phase is harnessed as the vectorized representation of the input sentence. For further performance enhancement, we meticulously refine both the contrastive learning loss function and the template denoising technique for prompt engineering. Rigorous experimentation substantiates our method, CoT-BERT, transcending a suite of robust baselines without necessitating other text representation models or external databases.",4a99a85f071e67bf15ae4bc53ec37af28b650ec4,Semantic Scholar,,, +148,contextualizing problems to student interests at scale in intelligent tutoring system using large language models,"['Gautam Yadav', 'Ying-Jui Tseng', 'Xiaolin Ni']",http://arxiv.org/pdf/2306.00190,2023-05-31,,"Contextualizing problems to align with student interests can significantly improve learning outcomes. However, this task often presents scalability challenges due to resource and time constraints. Recent advancements in Large Language Models (LLMs) like GPT-4 offer potential solutions to these issues. 
This study explores the ability of GPT-4 in the contextualization of problems within CTAT, an intelligent tutoring system, aiming to increase student engagement and enhance learning outcomes. Through iterative prompt engineering, we achieved meaningful contextualization that preserved the difficulty and original intent of the problem, thereby not altering values or overcomplicating the questions. While our research highlights the potential of LLMs in educational settings, we acknowledge current limitations, particularly with geometry problems, and emphasize the need for ongoing evaluation and research. Future work includes systematic studies to measure the impact of this tool on students' learning outcomes and enhancements to handle a broader range of problems.",4b6df5f9885c9dc0ce3125791fd01824e3cf37b7,Semantic Scholar,,, +149,backdoor attacks for incontext learning with language models,"['Nikhil Kandpal', 'Matthew Jagielski', 'Florian Tramèr', 'Nicholas Carlini']",https://arxiv.org/pdf/2307.14692,2023-07-27,,"Because state-of-the-art language models are expensive to train, most practitioners must make use of one of the few publicly available language models or language model APIs. This consolidation of trust increases the potency of backdoor attacks, where an adversary tampers with a machine learning model in order to make it perform some malicious behavior on inputs that contain a predefined backdoor trigger. We show that the in-context learning ability of large language models significantly complicates the question of developing backdoor attacks, as a successful backdoor must work against various prompting strategies and should not affect the model's general purpose capabilities. We design a new attack for eliciting targeted misclassification when language models are prompted to perform a particular target task and demonstrate the feasibility of this attack by backdooring multiple large language models ranging in size from 1.3 billion to 6 billion parameters. Finally we study defenses to mitigate the potential harms of our attack: for example, while in the white-box setting we show that fine-tuning models for as few as 500 steps suffices to remove the backdoor behavior, in the black-box setting we are unable to develop a successful defense that relies on prompt engineering alone.",4d21debb0f5fec315181e0912b5105c6ce4fc67f,Semantic Scholar,,, +150,optimizing prompts for texttoimage generation,"['Y. Hao', 'Zewen Chi', 'Li Dong', 'Furu Wei']",http://arxiv.org/pdf/2212.09611,2022-12-19,,"Well-designed prompts can guide text-to-image models to generate amazing images. However, the performant prompts are often model-specific and misaligned with user input. Instead of laborious human engineering, we propose prompt adaptation, a general framework that automatically adapts original user input to model-preferred prompts. Specifically, we first perform supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then we use reinforcement learning to explore better prompts. We define a reward function that encourages the policy to generate more aesthetically pleasing images while preserving the original user intentions. Experimental results on Stable Diffusion show that our method outperforms manual prompt engineering in terms of both automatic metrics and human preference ratings. Moreover, reinforcement learning further boosts performance, especially on out-of-domain prompts. The pretrained checkpoints are available at https://aka.ms/promptist. 
The demo can be found at https://aka.ms/promptist-demo.",4d81c33b295c092016ac236cfd32020a5bb70b97,Semantic Scholar,,, +151,is gpt a computational model of emotion detailed analysis,"['Ala Nekouvaght Tak', 'J. Gratch']",https://arxiv.org/pdf/2307.13779,2023-07-25,,"This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.",4dd461b2392a6983d36618744d2384349c4170f9,Semantic Scholar,,, +152,a lightweight framework for highquality code generation,"['Mohammed Latif Siddiq', 'B.K. Casey', 'Joanna C. S. Santos']",https://arxiv.org/pdf/2307.08220,2023-07-17,,"In recent years, the use of automated source code generation utilizing transformer-based generative models has expanded, and these models can generate functional code according to the requirements of the developers. However, recent research revealed that these automatically generated source codes can contain vulnerabilities and other quality issues. Despite researchers' and practitioners' attempts to enhance code generation models, retraining and fine-tuning large language models is time-consuming and resource-intensive. Thus, we describe FRANC, a lightweight framework for recommending more secure and high-quality source code derived from transformer-based code generation models. FRANC includes a static filter to make the generated code compilable with heuristics and a quality-aware ranker to sort the code snippets based on a quality score. Moreover, the framework uses prompt engineering to fix persistent quality issues. We evaluated the framework with five Python and Java code generation models and six prompt datasets, including a newly created one in this work (SOEval). The static filter improves 9% to 46% Java suggestions and 10% to 43% Python suggestions regarding compilability. The average improvement over the NDCG@10 score for the ranking system is 0.0763, and the repairing techniques repair the highest 80% of prompts. FRANC takes, on average, 1.98 seconds for Java; for Python, it takes 0.08 seconds.",4e96d7fa9f27857523d786230294fbcc6060212c,Semantic Scholar,,, +153,llms killed the script kiddie how agents supported by large language models change the landscape of network threat testing,"['Stephen Moskal', 'Sam Laney', 'Erik Hemberg', 'Una-May O’Reilly']",https://arxiv.org/pdf/2310.06936,2023-10-11,,"In this paper, we explore the potential of Large Language Models (LLMs) to reason about threats, generate information about tools, and automate cyber campaigns. We begin with a manual exploration of LLMs in supporting specific threat-related actions and decisions. We proceed by automating the decision process in a cyber campaign. 
We present prompt engineering approaches for a plan-act-report loop for one action of a threat campaign and a prompt chaining design that directs the sequential decision process of a multi-action campaign. We assess the extent of LLM's cyber-specific knowledge w.r.t the short campaign we demonstrate and provide insights into prompt design for eliciting actionable responses. We discuss the potential impact of LLMs on the threat landscape and the ethical considerations of using LLMs for accelerating threat actor capabilities. We report a promising, yet concerning, application of generative AI to cyber threats. However, the LLM's capabilities to deal with more complex networks, sophisticated vulnerabilities, and the sensitivity of prompts are open questions. This research should spur deliberations over the inevitable advancements in LLM-supported cyber adversarial landscape.",50aaac5fdc2b5a33bfd3ba93cdf4e5e302f34297,Semantic Scholar,,, +154,zeroshot nuclei detection via visuallanguage pretrained models,"['Yongjian Wu', 'Yangqiaoyu Zhou', 'Jiya Saiyin', 'Bingzheng Wei', 'Maode Lai', 'Jianzhong Shou', 'Yubo Fan', 'Yan Xu']",http://arxiv.org/pdf/2306.17659,2023-06-30,,"Large-scale visual-language pre-trained models (VLPM) have proven their excellent performance in downstream object detection for natural scenes. However, zero-shot nuclei detection on H\&E images via VLPMs remains underexplored. The large gap between medical images and the web-originated text-image pairs used for pre-training makes it a challenging task. In this paper, we attempt to explore the potential of the object-level VLPM, Grounded Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. Concretely, an automatic prompts design pipeline is devised based on the association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding empirical manual prompts engineering. We further establish a self-training framework, using the automatically designed prompts to generate the preliminary results as pseudo labels from GLIP and refine the predicted boxes in an iterative manner. Our method achieves a remarkable performance for label-free nuclei detection, surpassing other comparison methods. Foremost, our work demonstrates that the VLPM pre-trained on natural image-text pairs exhibits astonishing potential for downstream tasks in the medical field as well. Code will be released at https://github.com/wuyongjianCODE/VLPMNuD.",50bbca86de82d6b72d92bba0ec988b58e644dac3,Semantic Scholar,,, +155,gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench,"['A. Alam', 'P. Roy', 'Farouq Al-Omari', 'C. Roy', 'B. Roy', 'Kevin A. Schneider']",https://arxiv.org/pdf/2308.13963,2023-08-26,,"With the emergence of Machine Learning, there has been a surge in leveraging its capabilities for problem-solving across various domains. In the code clone realm, the identification of type-4 or semantic clones has emerged as a crucial yet challenging task. Researchers aim to utilize Machine Learning to tackle this challenge, often relying on the Big-CloneBench dataset. However, it’s worth noting that BigCloneBench, originally not designed for semantic clone detection, presents several limitations that hinder its suitability as a comprehensive training dataset for this specific purpose. Furthermore, CLCDSA dataset suffers from a lack of reusable examples aligning with real-world software systems, rendering it inadequate for cross-language clone detection approaches. 
In this work, we present a comprehensive semantic clone and cross-language clone benchmark, GPTCloneBench 1 by exploiting SemanticCloneBench and OpenAI’s GPT-3 model. In particular, using code fragments from SemanticCloneBench as sample inputs along with appropriate prompt engineering for GPT-3 model, we generate semantic and cross-language clones for these specific fragments and then conduct a combination of extensive manual analysis, tool-assisted filtering, functionality testing and automated validation in building the benchmark. From 79,928 clone pairs of GPT-3 output, we created a benchmark with 37,149 true semantic clone pairs, 19,288 false semantic pairs(Type-1/Type-2), and 20,770 cross-language clones across four languages (Java, C, C#, and Python). Our benchmark is 15-fold larger than SemanticCloneBench, has more functional code examples for software systems and programming language support than CLCDSA, and overcomes BigCloneBench’s qualities, quantification, and language variety limitations. GPTCloneBench can be found here1.",50d40d05598e456188a3be42983b8daabd3f04f7,Semantic Scholar,,, +156,symbolic knowledge distillation from general language models to commonsense models,"['Peter West', 'Chandrasekhar Bhagavatula', 'Jack Hessel', 'Jena D. Hwang', 'Liwei Jiang', 'Ronan Le Bras', 'Ximing Lu', 'S. Welleck', 'Yejin Choi']",https://aclanthology.org/2022.naacl-main.341.pdf,2021-10-14,,"The common practice for training commonsense models has gone from–human–to–corpus–to–machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from–machine–to–corpus–to–machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al. 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically–as text–in addition to the neural model. We distill only one aspect–the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model’s commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and will share our new symbolic knowledge graph and commonsense models.",521ccc898395a2818fced22b4cf371b0e5121f94,Semantic Scholar,,, +157,can prompt learning benefit radiology report generation,"['Jun Wang', 'Lixing Zhu', 'A. Bhalerao', 'Yulan He']",https://arxiv.org/pdf/2308.16269,2023-08-30,,"Radiology report generation aims to automatically provide clinically meaningful descriptions of radiology images such as MRI and X-ray. Although great success has been achieved in natural scene image captioning tasks, radiology report generation remains challenging and requires prior medical knowledge. In this paper, we propose PromptRRG, a method that utilizes prompt learning to activate a pretrained model and incorporate prior knowledge. 
Since prompt learning for radiology report generation has not been explored before, we begin with investigating prompt designs and categorise them based on varying levels of knowledge: common, domain-specific and disease-enriched prompts. Additionally, we propose an automatic prompt learning mechanism to alleviate the burden of manual prompt engineering. This is the first work to systematically examine the effectiveness of prompt learning for radiology report generation. Experimental results on the largest radiology report generation benchmark, MIMIC-CXR, demonstrate that our proposed method achieves state-of-the-art performance. Code will be available upon the acceptance.",531678c18fd2c5a9620b68f3550131fc3fd3636c,Semantic Scholar,,, +158,just tell me prompt engineering in business process management,"['Kiran Busch', 'Alexander Rochlitzer', 'Diana Sola', 'H. Leopold']",http://arxiv.org/pdf/2304.07183,2023-04-14,,"GPT-3 and several other language models (LMs) can effectively address various natural language processing (NLP) tasks, including machine translation and text summarization. Recently, they have also been successfully employed in the business process management (BPM) domain, e.g., for predictive process monitoring and process extraction from text. This, however, typically requires fine-tuning the employed LM, which, among others, necessitates large amounts of suitable training data. A possible solution to this problem is the use of prompt engineering, which leverages pre-trained LMs without fine-tuning them. Recognizing this, we argue that prompt engineering can help bring the capabilities of LMs to BPM research. We use this position paper to develop a research agenda for the use of prompt engineering for BPM research by identifying the associated potentials and challenges.",53e7475a3ed0caee37122a9dbdb53d1da0691a33,Semantic Scholar,,, +159,prompt position really matters in fewshot and zeroshot nlu tasks,"['Junyu Mao', 'S. Middleton', 'M. Niranjan']",https://arxiv.org/pdf/2305.14493,,,"Prompt-based models have made remarkable advancements in the fields of zero-shot and few-shot learning, attracting a lot of attention from researchers. Developing an effective prompt template plays a critical role. However, prior studies have mainly focused on prompt vocabulary selection or embedding initialization with the reserved prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position option for natural language understanding tasks. Our findings quantify the substantial impact prompt position has on model performance. We observe that the prompt position used in prior studies is often sub-optimal for both zero-shot and few-shot settings. These findings suggest prompt position optimisation as an interesting research direction alongside the existing focus on prompt engineering.",56a9c96a29f4047be8465244576d731f0df2d9df,Semantic Scholar,,, +160,situated natural language explanations,"['Zining Zhu', 'Hao Jiang', 'Jingfeng Yang', 'Sreyashi Nag', 'Chao Zhang', 'Jie Huang', 'Yifan Gao', 'Frank Rudzicz', 'Bing Yin']",https://arxiv.org/pdf/2308.14115,2023-08-27,,"Natural language is among the most accessible tools for explaining decisions to humans, and large pretrained language models (PLMs) have demonstrated impressive abilities to generate coherent natural language explanations (NLE). The existing NLE research perspectives do not take the audience into account. 
An NLE can have high textual quality, but it might not accommodate audiences' needs and preference. To address this limitation, we propose an alternative perspective, situated NLE, including a situated generation framework and a situated evaluation framework. On the generation side, we propose simple prompt engineering methods that adapt the NLEs to situations. In human studies, the annotators preferred the situated NLEs. On the evaluation side, we set up automated evaluation scores in lexical, semantic, and pragmatic categories. The scores can be used to select the most suitable prompts to generate NLEs. Situated NLE provides a perspective to conduct further research on automatic NLE generations.",57404bd8c71e2b17fce63b49229b278b6a66bf13,Semantic Scholar,,, +161,what's the magic word a control theory of llm prompting,"['Aman Bhargava', 'Cameron Witkowski', 'Manav Shah', 'Matt W. Thomson']",https://arxiv.org/pdf/2310.04444,2023-10-02,,"Prompt engineering is crucial for deploying LLMs but is poorly understood mathematically. We formalize LLM systems as a class of discrete stochastic dynamical systems to explore prompt engineering through the lens of control theory. We investigate the reachable set of output token sequences $R_y(\mathbf x_0)$ for which there exists a control input sequence $\mathbf u$ for each $\mathbf y \in R_y(\mathbf x_0)$ that steers the LLM to output $\mathbf y$ from initial state sequence $\mathbf x_0$. We offer analytic analysis on the limitations on the controllability of self-attention in terms of reachable set, where we prove an upper bound on the reachable set of outputs $R_y(\mathbf x_0)$ as a function of the singular values of the parameter matrices. We present complementary empirical analysis on the controllability of a panel of LLMs, including Falcon-7b, Llama-7b, and Falcon-40b. Our results demonstrate a lower bound on the reachable set of outputs $R_y(\mathbf x_0)$ w.r.t. initial state sequences $\mathbf x_0$ sampled from the Wikitext dataset. We find that the correct next Wikitext token following sequence $\mathbf x_0$ is reachable over 97% of the time with prompts of $k\leq 10$ tokens. We also establish that the top 75 most likely next tokens, as estimated by the LLM itself, are reachable at least 85% of the time with prompts of $k\leq 10$ tokens. Intriguingly, short prompt sequences can dramatically alter the likelihood of specific outputs, even making the least likely tokens become the most likely ones. This control-centric analysis of LLMs demonstrates the significant and poorly understood role of input sequences in steering output probabilities, offering a foundational perspective for enhancing language model system capabilities.",57a4f8f69908d3474565d3cd6f58b1ca651ff673,Semantic Scholar,,, +162,jvnv a corpus of japanese emotional speech with verbal content and nonverbal expressions,"['Detai Xin', 'Junfeng Jiang', 'Shinnosuke Takamichi', 'Yuki Saito', 'Akiko Aizawa', 'H. Saruwatari']",https://arxiv.org/pdf/2310.06072,2023-10-09,,"We present the JVNV, a Japanese emotional speech corpus with verbal content and nonverbal vocalizations whose scripts are generated by a large-scale language model. Existing emotional speech corpora lack not only proper emotional scripts but also nonverbal vocalizations (NVs) that are essential expressions in spoken language to express emotions. 
We propose an automatic script generation method to produce emotional scripts by providing seed words with sentiment polarity and phrases of nonverbal vocalizations to ChatGPT using prompt engineering. We select 514 scripts with balanced phoneme coverage from the generated candidate scripts with the assistance of emotion confidence scores and language fluency scores. We demonstrate the effectiveness of JVNV by showing that JVNV has better phoneme coverage and emotion recognizability than previous Japanese emotional speech corpora. We then benchmark JVNV on emotional text-to-speech synthesis using discrete codes to represent NVs. We show that there still exists a gap between the performance of synthesizing read-aloud speech and emotional speech, and adding NVs in the speech makes the task even harder, which brings new challenges for this task and makes JVNV a valuable resource for relevant works in the future. To our best knowledge, JVNV is the first speech corpus that generates scripts automatically using large language models.",5ce2a1dc9dfa8b4f1368220ac7f7d30a395ffca9,Semantic Scholar,,, +163,red teaming language models with language models,"['Ethan Perez', 'Saffron Huang', 'Francis Song', 'Trevor Cai', 'Roman Ring', 'John Aslanides', 'A. Glaese', 'Nathan McAleese', 'G. Irving']",https://aclanthology.org/2022.emnlp-main.225.pdf,2022-02-07,,"Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases (“red teaming”) using another LM. We evaluate the target LM’s replies to generated test questions using a classifier trained to detect offensive content, uncovering tens of thousands of offensive replies in a 280B parameter LM chatbot. We explore several methods, from zero-shot generation to reinforcement learning, for generating test cases with varying levels of diversity and difficulty. Furthermore, we use prompt engineering to control LM-generated test cases to uncover a variety of other harms, automatically finding groups of people that the chatbot discusses in offensive ways, personal and hospital phone numbers generated as the chatbot’s own contact info, leakage of private training data in generated text, and harms that occur over the course of a conversation. Overall, LM-based red teaming is one promising tool (among many needed) for finding and fixing diverse, undesirable LM behaviors before impacting users.",5d49c7401c5f2337c4cc88d243ae39ed659afe64,Semantic Scholar,,, +164,towards interpretable mental health analysis with large language models,"['Kailai Yang', 'Shaoxiong Ji', 'Tianlin Zhang', 'Qianqian Xie', 'Zi-Zhou Kuang', 'Sophia Ananiadou']",https://aclanthology.org/2023.emnlp-main.370.pdf,2023-04-07,,"The latest large language models (LLMs) such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, lack of prompting strategies, and ignorance of exploring LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of LLMs on 11 datasets across 5 tasks. 
We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We convey strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis. In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.",5d879530c443dd06d3686f31d32cfe34c7ade9bc,Semantic Scholar,,, +165,trash to treasure using texttoimage models to inform the design of physical artefacts,"['Amy Smith', 'Hope Schroeder', 'Ziv Epstein', 'Michael Cook', 'S. Colton', 'A. Lippman']",http://arxiv.org/pdf/2302.00561,2023-02-01,,"Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks within the creative process, such as ideation and visualization, prior to a sculpture-making activity. Thirty participants selected sculpture-making materials and generated three images using the Stable Diffusion text-to-image generator, each with text prompts of their choice, with the aim of informing and then creating a physical sculpture. The majority of participants (23/30) reported that the generated images informed their sculptures, and 28/30 reported interest in using text-to-image models to help them in a creative task in the future. We identify several prompt engineering strategies and find that a participant's prompting strategy relates to their stage in the creative process. We discuss how our findings can inform support for users at different stages of the design process and for using text-to-image models for physical artefact design.",5de60d53bce194b34dae1e531876af9acffba1a3,Semantic Scholar,,, +166,knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms,"['Jiaoayan Chen', 'Luyi Ma', 'Xiaohan Li', 'Nikhil Thakurdesai', 'Jianpeng Xu', 'Jason H. D. Cho', 'Kaushiki Nag', 'Evren Korpeoglu', 'Sushant Kumar', 'Kannan Achan']",http://arxiv.org/pdf/2305.09858,2023-05-17,,"Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system performance by providing structured information about entities and their relationships, such as complementary or substitutable relations between products or product types, which can be utilized in recommender systems. However, relation labeling in KGs remains a challenging task due to the dynamic nature of e-commerce domains and the associated cost of human labor. Recently, breakthroughs in Large Language Models (LLMs) have shown surprising results in numerous natural language processing tasks. 
In this paper, we conduct an empirical study of LLMs for relation labeling in e-commerce KGs, investigating their powerful learning capabilities in natural language and effectiveness in predicting relations between product types with limited labeled data. We evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets, demonstrating their ability to achieve competitive performance compared to humans on relation labeling tasks using just 1 to 5 labeled examples per relation. Additionally, we experiment with different prompt engineering techniques to examine their impact on model performance. Our results show that LLMs significantly outperform existing KG completion models in relation labeling for e-commerce KGs and exhibit performance strong enough to replace human labeling.",5e8dd82419f78025093acbec3ba2e345fff85d11,Semantic Scholar,,, +167,responsible task automation empowering large language models as responsible task automators,"['Zhizheng Zhang', 'Xiaoyi Zhang', 'Wenxuan Xie', 'Yan Lu']",http://arxiv.org/pdf/2306.01242,2023-06-02,,"The recent success of Large Language Models (LLMs) signifies an impressive stride towards artificial general intelligence. They have shown a promising prospect in automatically completing tasks upon user instructions, functioning as brain-like coordinators. The associated risks will be revealed as we delegate an increasing number of tasks to machines for automated completion. A big question emerges: how can we make machines behave responsibly when helping humans automate tasks as personal copilots? In this paper, we explore this question in depth from the perspectives of feasibility, completeness and security. In specific, we present Responsible Task Automation (ResponsibleTA) as a fundamental framework to facilitate responsible collaboration between LLM-based coordinators and executors for task automation with three empowered capabilities: 1) predicting the feasibility of the commands for executors; 2) verifying the completeness of executors; 3) enhancing the security (e.g., the protection of users' privacy). We further propose and compare two paradigms for implementing the first two capabilities. One is to leverage the generic knowledge of LLMs themselves via prompt engineering while the other is to adopt domain-specific learnable models. Moreover, we introduce a local memory mechanism for achieving the third capability. We evaluate our proposed ResponsibleTA on UI task automation and hope it could bring more attentions to ensuring LLMs more responsible in diverse scenarios.",615962d8969c8e0ffe43319689dce6c50cbf1f29,Semantic Scholar,,, +168,peace prompt engineering automation for clipseg enhancement in aerial robotics,"['Haechan Mark Bong', 'Rongge Zhang', 'Ricardo de Azambuja', 'Giovanni Beltrame']",https://arxiv.org/pdf/2310.00085,2023-09-29,,"From industrial to space robotics, safe landing is an essential component for flight operations. With the growing interest in artificial intelligence, we direct our attention to learning based safe landing approaches. This paper extends our previous work, DOVESEI, which focused on a reactive UAV system by harnessing the capabilities of open vocabulary image segmentation. Prompt-based safe landing zone segmentation using an open vocabulary based model is no more just an idea, but proven to be feasible by the work of DOVESEI. 
However, a heuristic selection of words for prompt is not a reliable solution since it cannot take the changing environment into consideration and detrimental consequences can occur if the observed environment is not well represented by the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation and engineering to adapt to data distribution shifts. Our system is capable of performing safe landing operations with collision avoidance at altitudes as low as 20 meters using only monocular cameras and image segmentation. We take advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the terrain segmentation between frames in a video stream. PEACE shows promising improvements in prompt generation and engineering for aerial images compared to the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our system was able improve successful safe landing zone selections by 58.62% compared to using only DOVESEI. All the source code is open source and available online.",615ef4518f9a41a10881b66ce10f0eb490e2d75c,Semantic Scholar,,, +169,datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation,"['Seugnjun Lee', 'Hyeonseok Moon', 'Chanjun Park', 'Heu-Jeoung Lim']",http://arxiv.org/pdf/2306.14514,2023-06-26,,"In this paper, we introduce a data-driven approach for Formality-Sensitive Machine Translation (FSMT) that caters to the unique linguistic properties of four target languages. Our methodology centers on two core strategies: 1) language-specific data handling, and 2) synthetic data generation using large-scale language models and empirical prompt engineering. This approach demonstrates a considerable improvement over the baseline, highlighting the effectiveness of data-centric techniques. Our prompt engineering strategy further improves performance by producing superior synthetic translation examples.",632dc69c2e504d693533fc434b8122a2a8a42844,Semantic Scholar,,, +170,forgetful large language models lessons learned from using llms in robot programming,"['Juo-Tung Chen', 'Chien-Ming Huang']",https://arxiv.org/pdf/2310.06646,2023-10-10,,"Large language models offer new ways of empowering people to program robot applications-namely, code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being “forgetful” of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications.",6474370fe46e38896288305c35d3058a403b1db2,Semantic Scholar,,, +171,benchmarking causal study to interpret large language models for source code,"['Daniel Rodríguez-Cárdenas', 'David N. Palacio', 'Dipin Khati', 'Henry Burke', 'D. 
Poshyvanyk']",https://arxiv.org/pdf/2308.12415,2023-08-23,,"One of the most common solutions adopted by software researchers to address code generation is by training Large Language Models (LLMs) on massive amounts of source code. LLMs are rooted in the concept of emergent capabilities in which machines statistically learn complex patterns from code data. Although a number of studies have shown that LLMs have been effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu), previous research has largely overlooked the role of Causal Inference as a fundamental component of the interpretability of LLMs’ performance. Existing benchmarks and datasets are meant to highlight the difference between the expected and the generated outcome, but do not take into account confounding variables (e.g., lines of code, number of tokens, prompt size) that equally influence the accuracy metrics. The fact remains that, when dealing with generative software tasks by LLMs, no benchmark is available to tell researchers how to quantify neither the causal effect of SE-based treatments nor the correlation of confounders to the model’s performance. In an effort to bring statistical rigor to the evaluation of LLMs, this paper introduces a benchmarking strategy named Galeras comprised of curated testbeds for three SE tasks (i.e., code completion, code summarization, and commit generation) to help aid the interpretation of LLMs’ performance.We illustrate the insights of our benchmarking strategy by conducting a case study on the performance of ChatGPT under distinct prompt engineering methods. The results of the case study demonstrate the positive causal influence of prompt semantics on ChatGPT’s generative performance by an average treatment effect of ≈ 3%. Moreover, it was found that confounders such as prompt size are highly correlated with accuracy metrics (≈ 0.412). The end result of our case study is to showcase causal inference evaluations, in practice, to reduce confounding bias. By reducing the bias, we offer an interpretable solution for the accuracy metric under analysis.",6634e56c1046f3d16eaadecac45d5576d93eee83,Semantic Scholar,,, +172,prompt engineering for students of medicine and their teachers,['Thomas F. Heston'],https://arxiv.org/pdf/2308.11628,2023-08-08,,"""Prompt Engineering for Students of Medicine and Their Teachers""brings the principles of prompt engineering for large language models such as ChatGPT and Google Bard to medical education. This book contains a comprehensive guide to prompt engineering to help both teachers and students improve education in the medical field. Just as prompt engineering is critical in getting good information out of an AI, it is also critical to get students to think and understand more deeply. The principles of prompt engineering that we have learned from AI systems have the potential to simultaneously revolutionize learning in the healthcare field. The book analyzes from multiple angles the anatomy of a good prompt for both AI models and students. The different types of prompts are examined, showing how each style has unique characteristics and applications. The principles of prompt engineering, applied properly, are demonstrated to be effective in teaching across the diverse fields of anatomy, physiology, pathology, pharmacology, and clinical skills. Just like ChatGPT and similar large language AI models, students need clear and detailed prompting in order for them to fully understand a topic. 
Using identical principles, a prompt that gets good information from an AI will also cause a student to think more deeply and accurately. The process of prompt engineering facilitates this process. Because each chapter contains multiple examples and key takeaways, it is a practical guide for implementing prompt engineering in the learning process. It provides a hands-on approach to ensure readers can immediately apply the concepts they learn",6862113724aa1a578c5d4e0ec7f1d9bed4288241,Semantic Scholar,,, +173,towards zeroshot and fewshot table question answering using gpt3,"['Pragya Srivastava', 'T. Ganu', 'Saikat Guha']",https://arxiv.org/pdf/2210.17284,2022-10-31,,"We present very early results on using GPT-3 to perform question answering on tabular data. We find that stock pre-trained GPT-3 is able to zero-shot learn the table structure from a serialized JSON array-of-arrays representation, and able to answer lookup queries and simple comparison questions in natural language without any fine-tuning. We further find that simple prompt engineering to include few-shot static Q&A examples significantly improves accuracy. Lastly, we find that intermixing passage text improves accuracy even further on heterogeneous data. We apply our approach on a novel dataset of simple tables in newspaper infographics with promising results. Overall, we find much cause for optimism in this basic approach.",6b8f26678785ebd7b7b27984af3cb9a273b722b0,Semantic Scholar,,, +174,exploring the effectiveness of dataset synthesis an application of apple detection in orchards,"['A. V. Meekeren', 'Maya Aghaei', 'K. Dijkstra']",http://arxiv.org/pdf/2306.11763,2023-06-20,,"Deep object detection models have achieved notable successes in recent years, but one major obstacle remains: the requirement for a large amount of training data. Obtaining such data is a tedious process and is mainly time consuming, leading to the exploration of new research avenues like synthetic data generation techniques. In this study, we explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection and compare it to a baseline model trained on real-world data. After creating a dataset of realistic apple trees with prompt engineering and utilizing a previously trained Stable Diffusion model, the custom dataset was annotated and evaluated by training a YOLOv5m object detection model to predict apples in a real-world apple detection dataset. YOLOv5m was chosen for its rapid inference time and minimal hardware demands. Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images when evaluated on a set of real-world images. However, these findings remain highly promising, as the average precision difference is only 0.09 and 0.06, respectively. Qualitative results indicate that the model can accurately predict the location of apples, except in cases of heavy shading. These findings illustrate the potential of synthetic data generation techniques as a viable alternative to the collection of extensive training data for object detection models.",71020779c6eeb9c76fe0a0eb2155d1d4f7d29ff9,Semantic Scholar,,, +175,unsupervised prompt learning for visionlanguage models,"['Hao Huang', 'Jack Chu', 'Fangyun Wei']",http://arxiv.org/pdf/2204.03649,2022-04-07,,"Contrastive vision-language models like CLIP have shown great progress in transfer learning. 
In the inference stage, the proper text description, also known as prompt, needs to be carefully designed to correctly classify the given images. In order to avoid laborious prompt engineering, recent works such as CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for downstream image recognition tasks on a small set of labeled data. Though promising improvements are achieved, requiring labeled data from the target datasets may restrict the scalability. In this paper, we explore a different scenario, in which the labels of the target datasets are unprovided, and we present an unsupervised prompt learning (UPL) approach to avoid prompt engineering while simultaneously improving transfer performance of CLIP-like vision-language models. As far as we know, UPL is the first work to introduce unsupervised learning into prompt learning. Experimentally, our UPL outperforms original CLIP with prompt engineering on ImageNet as well as other 10 datasets. An enhanced version of UPL is even competitive with the 8-shot CoOp and the 8-shot TIP-Adapter on most datasets. Code and models are available at https://github.com/tonyhuang2022/UPL.",732627c703a9dbc78d9384f1be4c791c3a554391,Semantic Scholar,,, +176,transfer learning for power outage detection task with limited training data,['Olukunle O. Owolabi'],http://arxiv.org/pdf/2305.17817,2023-05-28,,"Early detection of power outages is crucial for maintaining a reliable power distribution system. This research investigates the use of transfer learning and language models in detecting outages with limited labeled data. By leveraging pretraining and transfer learning, models can generalize to unseen classes. Using a curated balanced dataset of social media tweets related to power outages, we conducted experiments using zero-shot and few-shot learning. Our hypothesis is that Language Models pretrained with limited data could achieve high performance in outage detection tasks over baseline models. Results show that while classical models outperform zero-shot Language Models, few-shot fine-tuning significantly improves their performance. For example, with 10% fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5% accuracy (+8.5%). This has practical implications for analyzing and localizing outages in scenarios with limited data availability. Our evaluation provides insights into the potential of few-shot fine-tuning with Language Models for power outage detection, highlighting their strengths and limitations. This research contributes to the knowledge base of leveraging advanced natural language processing techniques for managing critical infrastructure.",05fab50acb26203a944a955131a2388c9731a8f7,Semantic Scholar,,, +177,distillation of encoderdecoder transformers for sequence labelling,"['M. Farina', 'D. Pappadopulo', 'Anant Gupta', 'Leslie Huang', 'Ozan Irsoy', 'T. Solorio']",http://arxiv.org/pdf/2302.05454,2023-02-10,,"Driven by encouraging results on a wide range of tasks, the field of NLP is experiencing an accelerated race to develop bigger language models. This race for bigger models has also underscored the need to continue the pursuit of practical distillation approaches that can leverage the knowledge acquired by these big models in a compute-efficient manner. Having this goal in mind, we build on recent work to propose a hallucination-free framework for sequence tagging that is especially suited for distillation. 
We show empirical results of new state-of-the-art performance across multiple sequence labelling datasets and validate the usefulness of this framework for distilling a large model in a few-shot learning scenario.",0704a96e1c57c12031f1c3ca492a91dbed1f61ce,Semantic Scholar,,, +178,technical report competition solution for prompt tuning using pretrained language model,"['Jiang-Long Song', 'Wuhe Zou', 'Feng Li', 'Xiaolei Qin', 'Weidong Zhang']",http://arxiv.org/pdf/2212.06369,2022-12-13,,"Prompt tuning recently becomes a hot-spot in the applications of large pretrained language models on specific downstream tasks. Regarding the Language Model as a Service (LMaaS), black-box tuning using derivative-free optimization (DFO) provides a novel approach to expand the practical scenarios of pretrained models and enrich the researches of few-shot learning. In this report, we present our solution in this competition that is based on the LMaaS scenario. Our solution consists of several modifications to BBTv2, including multiple label words, selection of P0, rolling update strategy, multi-task loss from MLP classifier, and finally using the ensemble method to further improve generalization ability. We also shared some strategies that we tried but didn't use in the final submission for further discussion. In the end we raised a question about the SNLI dataset and the impact on the results, as well as our concerns about the competition.",075e16a0774b1a9d44a7d512c50b7f997e16befe,Semantic Scholar,,, +179,exploiting the potential of seq2seq models as robust fewshot learners,"['Jihyeon Janel Lee', 'Dain Kim', 'Doohae Jung', 'Boseop Kim', 'Kyoung-Woon On']",https://arxiv.org/pdf/2307.14856,2023-07-27,,"In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates. Recently, a few studies have demonstrated the feasibility of few-shot learning with seq2seq models; however, this has been limited to tasks that align well with the seq2seq architecture, such as summarization and translation. Inspired by these initial studies, we provide a first-ever extensive experiment comparing the in-context few-shot learning capabilities of decoder-only and encoder-decoder models on a broad range of tasks. Furthermore, we propose two methods to more effectively elicit in-context learning ability in seq2seq models: objective-aligned prompting and a fusion-based approach. Remarkably, our approach outperforms a decoder-only model that is six times larger and exhibits significant performance improvements compared to conventional seq2seq models across a variety of settings. We posit that, with the right configuration and prompt design, seq2seq models can be highly effective few-shot learners for a wide spectrum of applications.",07bc02bd16f6fe78a7ea3bb8d966fcc6e3893195,Semantic Scholar,,, +180,cohortgpt an enhanced gpt for participant recruitment in clinical study,"['Zihan Guan', 'Zihao Wu', 'Zheng Liu', 'Dufan Wu', 'Hui Ren', 'Quanzheng Li', 'Xiang Li', 'Ninghao Liu']",https://arxiv.org/pdf/2307.11346,2023-07-21,,"Participant recruitment based on unstructured medical texts such as clinical notes and radiology reports has been a challenging yet important task for the cohort establishment in clinical research. 
Recently, Large Language Models (LLMs) such as ChatGPT have achieved tremendous success in various downstream tasks thanks to their promising performance in language understanding, inference, and generation. It is then natural to test their feasibility in solving the cohort recruitment task, which involves the classification of a given paragraph of medical text into disease label(s). However, when applied to knowledge-intensive problem settings such as medical text classification, where the LLMs are expected to understand the decision made by human experts and accurately identify the implied disease labels, the LLMs show a mediocre performance. A possible explanation is that, by only using the medical text, the LLMs neglect to use the rich context of additional information that languages afford. To this end, we propose to use a knowledge graph as auxiliary information to guide the LLMs in making predictions. Moreover, to further boost the LLMs adapt to the problem setting, we apply a chain-of-thought (CoT) sample selection strategy enhanced by reinforcement learning, which selects a set of CoT samples given each individual medical report. Experimental results and various ablation studies show that our few-shot learning method achieves satisfactory performance compared with fine-tuning strategies and gains superb advantages when the available data is limited. The code and sample dataset of the proposed CohortGPT model is available at: https://anonymous.4open.science/r/CohortGPT-4872/",089f6328085066263fedc083952624ca121ebbf3,Semantic Scholar,,, +181,zicl zeroshot incontext learning with pseudodemonstrations,"['Xinxi Lyu', 'Sewon Min', 'Iz Beltagy', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2212.09865,2022-12-19,,"Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available. In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap by constructing pseudo-demonstrations for a given test input using a raw text corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the nearest neighbors to the test input from the corpus and pairing them with random task labels, and (2) applying a set of techniques to reduce the amount of direct copying the model does from the resulting demonstrations. Evaluation on nine classification datasets shows that Z-ICL outperforms previous zero-shot methods by a significant margin, and is on par with in-context learning with labeled training data in the few-shot setting. Overall, Z-ICL provides a significantly higher estimate of the zero-shot performance levels of a model, and supports future efforts to develop better pseudo-demonstrations that further improve zero-shot results.",0942bd8fad71282994ff4e9a779c09745da68edc,Semantic Scholar,,, +182,zeroshot and fewshot learning for lung cancer multilabel classification using vision transformer,"['F. Guo', 'Yingfang Fan']",https://arxiv.org/pdf/2205.15290,2022-05-30,,"Lung cancer is the leading cause of cancer-related death worldwide. Lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is an essential tool for lung cancer diagnosis. Pathologists make classifications according to the dominant subtypes. Although morphology remains the standard for diagnosis, significant tool needs to be developed to elucidate the diagnosis. 
In our study, we utilize the pre-trained Vision Transformer (ViT) model to classify multiple label lung cancer on histologic slices (from dataset LC25000), in both Zero-Shot and Few-Shot settings. Then we compare the performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall, sensitivity and specificity. Our study shows that the pre-trained ViT model has a good performance in Zero-Shot setting, a competitive accuracy ($99.87\%$) in Few-Shot setting ({epoch = 1}) and an optimal result ($100.00\%$ on both validation set and test set) in Few-Shot setting ({epoch = 5}).",0953ada119f384f328b6102e6b7963b3bde7cc9e,Semantic Scholar,,,
183,unified vision and language prompt learning,"['Yuhang Zang', 'Wei Li', 'Kaiyang Zhou', 'Chen Huang', 'Chen Change Loy']",http://arxiv.org/pdf/2210.07225,2022-10-13,,"Prompt tuning, a parameter- and data-efficient transfer learning paradigm that tunes only a small number of parameters in a model's input space, has become a trend in the vision community since the emergence of large vision-language models like CLIP. We present a systematic study on two representative prompt tuning methods, namely text prompt tuning and visual prompt tuning. A major finding is that none of the unimodal prompt tuning methods performs consistently well: text prompt tuning fails on data with high intra-class visual variances while visual prompt tuning cannot handle low inter-class variances. To combine the best from both worlds, we propose a simple approach called Unified Prompt Tuning (UPT), which essentially learns a tiny neural network to jointly optimize prompts across different modalities. Extensive experiments on over 11 vision datasets show that UPT achieves a better trade-off than the unimodal counterparts on few-shot learning benchmarks, as well as on domain generalization benchmarks. Code and models will be released to facilitate future research.",09b7338021fff3200c2098b19824aecc83a66cb5,Semantic Scholar,,,
184,plugandplay multilingual fewshot spoken words recognition,"['Aaqib Saeed', 'Vasileios Tsouvalas']",http://arxiv.org/pdf/2305.03058,2023-05-03,,"As technology advances and digital devices become prevalent, seamless human-machine communication is increasingly gaining significance. The growing adoption of mobile, wearable, and other Internet of Things (IoT) devices has changed how we interact with these smart devices, making accurate spoken words recognition a crucial component for effective interaction. However, building robust spoken words detection system that can handle novel keywords remains challenging, especially for low-resource languages with limited training data. Here, we propose PLiX, a multilingual and plug-and-play keyword spotting system that leverages few-shot learning to harness massive real-world data and enable the recognition of unseen spoken words at test-time. Our few-shot deep models are learned with millions of one-second audio clips across 20 languages, achieving state-of-the-art performance while being highly efficient. Extensive evaluations show that PLiX can generalize to novel spoken words given as few as just one support example and performs well on unseen languages out of the box. 
We release models and inference code to serve as a foundation for future research and voice-enabled user interface development for emerging devices.",0b413633f14ec7f96948067abf1d4ca930fa38a1,Semantic Scholar,,, +185,zeroshot approach to overcome perturbation sensitivity of prompts,"['Mohna Chakraborty', 'Adithya Kulkarni', 'Qi Li']",http://arxiv.org/pdf/2305.15689,2023-05-25,,"Recent studies have demonstrated that natural-language prompts can help to leverage the knowledge learned by pre-trained language models for the binary sentence-level sentiment classification task. Specifically, these methods utilize few-shot learning settings to fine-tune the sentiment classification model using manual or automatically generated prompts. However, the performance of these methods is sensitive to the perturbations of the utilized prompts. Furthermore, these methods depend on a few labeled instances for automatic prompt generation and prompt ranking. This study aims to find high-quality prompts for the given task in a zero-shot setting. Given a base prompt, our proposed approach automatically generates multiple prompts similar to the base prompt employing positional, reasoning, and paraphrasing techniques and then ranks the prompts using a novel metric. We empirically demonstrate that the top-ranked prompts are high-quality and significantly outperform the base prompt and the prompts generated using few-shot learning for the binary sentence-level sentiment classification task.",0b71af0bf02ab58b8d8e342c1c803322cfede603,Semantic Scholar,,, +186,an evaluation of gpt models for phenotype concept recognition,"['T. Groza', 'Harry Caufield', 'D. Gration', 'Gareth Baynam', 'M. Haendel', 'Peter N. Robinson', 'Christopher J. Mungall', 'J. Reese']",https://arxiv.org/pdf/2309.17169,2023-09-29,,"Objective: Clinical deep phenotyping and phenotype annotation play a critical role in both the diagnosis of patients with rare disorders as well as in building computationally-tractable knowledge in the rare disorders field. These processes rely on using ontology concepts, often from the Human Phenotype Ontology, in conjunction with a phenotype concept recognition task (supported usually by machine learning methods) to curate patient profiles or existing scientific literature. With the significant shift in the use of large language models (LLMs) for most NLP tasks, we examine the performance of the latest Generative Pre-trained Transformer (GPT) models underpinning ChatGPT as a foundation for the tasks of clinical phenotyping and phenotype annotation. Materials and Methods: The experimental setup of the study included seven prompts of various levels of specificity, two GPT models (gpt-3.5-turbo and gpt-4.0) and two established gold standard corpora for phenotype recognition, one consisting of publication abstracts and the other clinical observations. Results: Our results show that, with an appropriate setup, these models can achieve state of the art performance. The best run, using few-shot learning, achieved 0.58 macro F1 score on publication abstracts and 0.75 macro F1 score on clinical observations, the former being comparable with the state of the art, while the latter surpassing the current best in class tool. 
Conclusion: While the results are promising, the non-deterministic nature of the outcomes, the high cost and the lack of concordance between different runs using the same prompt and input make the use of these LLMs challenging for this particular task.",0c75cda2bb0812217bf0e5460e910212ad512944,Semantic Scholar,,, +187,templatefree prompt tuning for fewshot ner,"['Ruotian Ma', 'Xin Zhou', 'Tao Gui', 'Y. Tan', 'Qi Zhang', 'Xuanjing Huang']",https://aclanthology.org/2022.naacl-main.420.pdf,2021-09-28,,"Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, it would be time-consuming to enumerate the template queries over all potential entity spans. In this work, we propose a more elegant method to reformulate NER tasks as LM problems without any templates. Specifically, we discard the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position. Meanwhile, we also explore principled ways to automatically search for appropriate label words that the pre-trained models can easily adapt to. While avoiding the complicated template-based process, the proposed LM objective also reduces the gap between different objectives used in pre-training and fine-tuning, thus it can better benefit the few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and template-based method under few-shot settings. Moreover, the decoding speed of the proposed method is up to 1930.12 times faster than the template-based method.",1dd344ce28f1e5a078f9d8396b5078388e555d99,Semantic Scholar,,, +188,a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems,"['Debjoy Saha', 'Bishal Santra', 'Pawan Goyal']",http://arxiv.org/pdf/2204.08167,2022-04-18,,"We tackle the Dialogue Belief State Tracking(DST) problem of task-oriented conversational systems. Recent approaches to this problem leveraging Transformer-based models have yielded great results. However, training these models is expensive, both in terms of computational resources and time. Additionally, collecting high quality annotated dialogue datasets remains a challenge for researchers because of the extensive annotation required for training these models. Driven by the recent success of pre-trained language models and prompt-based learning, we explore prompt-based few-shot learning for Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage prompt-based language modelling task and train language models for both tasks and present a comprehensive empirical analysis of their separate and joint performance. We demonstrate the potential of prompt-based methods in few-shot learning for DST and provide directions for future improvement.",21e46f11898748778a31b5b2fe2f60128eb66ba1,Semantic Scholar,,, +189,stabilized incontext learning with pretrained language models for few shot dialogue state tracking,"['Derek Chen', 'Kun Qian', 'Zhou Yu']",http://arxiv.org/pdf/2302.05932,2023-02-12,,"Prompt-based methods with large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks. These models improve even further with the addition of a few labeled in-context exemplars to guide output generation. 
However, for more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial, leading to unstable results. Furthermore, building in-context exemplars for dialogue tasks is difficult because conversational contexts are long while model input lengths are relatively short. To overcome these issues we first adapt a meta-learning scheme to the dialogue domain which stabilizes the ability of the model to perform well under various prompts. We additionally design a novel training method to improve upon vanilla retrieval mechanisms to find ideal in-context examples. Finally, we introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query. In effect, we are able to achieve highly competitive results for few-shot DST on MultiWOZ.",59ef1b67c5f238d5d6d175d84fb6b239b4221a97,Semantic Scholar,,,
190,steps towards promptbased creation of virtual worlds,"['Jasmine Roberts', 'Andrzej Banburski-Fahey', 'J. Lanier']",https://arxiv.org/pdf/2211.05875,2022-11-10,,"Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing, as well as can become part of gameplay rather than just part of game development. As an example, we present Codex VR Pong which shows non-deterministic game mechanics using generative processes to not only create static content but also non-trivial interactions between 3D objects. This demonstration naturally leads to an integral discussion on how one would evaluate and benchmark experiences created by generative models - as there are no qualitative or quantitative metrics that apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.",632ab7663e6d64578ceda1d1df9ec525b503bacb,Semantic Scholar,,,
191,purr efficiently editing language model hallucinations by denoising language model corruptions,"['Anthony Chen', 'Panupong Pasupat', 'Sameer Singh', 'Hongrae Lee', 'Kelvin Guu']",http://arxiv.org/pdf/2305.14908,2023-05-24,,"The remarkable capabilities of large language models have been accompanied by a persistent drawback: the generation of false and unsubstantiated claims commonly known as ""hallucinations"". To combat this issue, recent research has introduced approaches that involve editing and attributing the outputs of language models, particularly through prompt-based editing. However, the inference cost and speed of using large language models for editing currently bottleneck prompt-based methods. These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose. To overcome these challenges, we exploit the power of large language models to introduce corruptions (i.e., noise) into text and subsequently fine-tune compact editors to denoise the corruptions by incorporating relevant evidence. Our methodology is entirely unsupervised and provides us with faux hallucinations for training in any domain. 
Our Petite Unsupervised Research and Revision model, PURR, not only improves attribution over existing editing methods based on fine-tuning and prompting, but also achieves faster execution times by orders of magnitude.",7db7653c581d7823cb9c328f2d742ec70d7a0ce4,Semantic Scholar,,, +192,zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts,"['Zewei Sun', 'Qingnan Jiang', 'Shujian Huang', 'Jun Cao', 'Shanbo Cheng', 'Mingxuan Wang']",http://arxiv.org/pdf/2209.11409,2022-09-23,,"Domain adaptation is an important challenge for neural machine translation. However, the traditional fine-tuning solution requires multiple extra training and yields a high cost. In this paper, we propose a non-tuning paradigm, resolving domain adaptation with a prompt-based method. Specifically, we construct a bilingual phrase-level database and retrieve relevant pairs from it as a prompt for the input sentences. By utilizing Retrieved Phrase-level Prompts (RePP), we effectively boost the translation quality. Experiments show that our method improves domain-specific machine translation for 6.2 BLEU scores and improves translation constraints for 11.5% accuracy without additional training.",80c0416048614be75362c2c332d22dd1d2b22f65,Semantic Scholar,,, +193,low resource pipeline for spoken language understanding via weak supervision,"['Ayush Kumar', 'Rishabh Tripathi', 'Jithendra Vepa']",https://arxiv.org/pdf/2206.10559,2022-06-21,,"In Weak Supervised Learning (WSL), a model is trained over noisy labels obtained from semantic rules and task-specific pre-trained models. Rules offer limited generalization over tasks and require significant manual efforts while pre-trained models are available only for limited tasks. In this work, we propose to utilize prompt-based methods as weak sources to obtain the noisy labels on unannotated data. We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different Spoken Language Understanding (SLU) tasks such as sentiment classification, disfluency detection and emotion classification. These prompts could additionally be updated to add task-specific contexts, thus providing flexibility to design task-specific prompts. We demonstrate that prompt-based methods generate reliable labels for the above SLU tasks and thus can be used as a universal weak source to train a weak-supervised model (WSM) in absence of labeled data. Our proposed WSL pipeline trained over prompt-based weak source outperforms other competitive low-resource benchmarks on zero and few-shot learning by more than 4% on Macro-F1 on all of the three benchmark SLU datasets. The proposed method also outperforms a conventional rule based WSL pipeline by more than 5% on Macro-F1.",9ecf603dbebbfbdd9858d21903c77074d12518b4,Semantic Scholar,,, +194,instructionner a multitask instructionbased generative framework for fewshot ner,"['Liwen Wang', 'Rumei Li', 'Yang Yan', 'Yuanmeng Yan', 'Sirui Wang', 'Wei Yu Wu', 'Weiran Xu']",http://arxiv.org/pdf/2203.03903,2022-03-08,,"Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks. However, existing prompt templates are mostly designed for sentence-level tasks and are inappropriate for sequence labeling objectives. To address the above issue, we propose a multi-task instruction-based generative framework, named InstructionNER, for low-resource named entity recognition. 
Specifically, we reformulate the NER task as a generation problem, which enriches source sentences with task-specific instructions and answer options, then inferences the entities and types in natural language. We further propose two auxiliary tasks, including entity extraction and entity typing, which enable the model to capture more boundary information of entities and deepen the understanding of entity type semantics, respectively. Experimental results show that our method consistently outperforms other baselines on five datasets in few-shot settings.",a29a0e679e626e8961dc217081eae2a6c63a15ad,Semantic Scholar,,, +195,stt soft template tuning for fewshot adaptation,"['Ping Yu', 'Wei Wang', 'Chunyuan Li', 'Ruiyi Zhang', 'Zhanpeng Jin', 'Changyou Chen']",https://arxiv.org/pdf/2207.08408,2022-07-18,,"Prompt tuning has been an extremely effective tool to adapt a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider the case of sufficient data of downstream tasks. It is still unclear whether the advantage can be transferred to the few-shot regime, where only limited data are available for each downstream task. Although some works have demonstrated the potential of prompt-tuning under the few-shot setting, the main stream methods via searching discrete prompts or tuning soft prompts with limited data are still very challenging. Through extensive empirical studies, we find that there is still a gap between prompt tuning and fully fine-tuning for few-shot learning. To bridge the gap, we propose a new prompt-tuning framework, called Soft Template Tuning (STT) 1. STT combines manual and auto prompts, and treats down-stream classification tasks as a masked language modeling task. Comprehensive evaluation on different settings suggests STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Significantly, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.",a45bdbbf9a197a21ef97291c60b77de47bc51db2,Semantic Scholar,,, +196,enable language models to implicitly learn selfimprovement from data,"['Ziqi Wang', 'Le Hou', 'Tianjian Lu', 'Yuexin Wu', 'Yunxuan Li', 'Hongkun Yu', 'Heng Ji']",https://arxiv.org/pdf/2310.00898,2023-10-02,,"Large Language Models (LLMs) have demonstrated remarkable capabilities in open-ended text generation tasks. However, the inherent open-ended nature of these tasks implies that there is always room for improvement in the quality of model responses. To address this challenge, various approaches have been proposed to enhance the performance of LLMs. There has been a growing focus on enabling LLMs to self-improve their response quality, thereby reducing the reliance on extensive human annotation efforts for collecting diverse and high-quality training data. Recently, prompting-based methods have been widely explored among self-improvement methods owing to their effectiveness, efficiency, and convenience. However, those methods usually require explicitly and thoroughly written rubrics as inputs to LLMs. It is expensive and challenging to manually derive and provide all necessary rubrics with a real-world complex goal for improvement (e.g., being more helpful and less harmful). To this end, we propose an ImPlicit Self-ImprovemenT (PIT) framework that implicitly learns the improvement goal from human preference data. PIT only requires preference data that are used to train reward models without extra human efforts. 
Specifically, we reformulate the training objective of reinforcement learning from human feedback (RLHF) -- instead of maximizing response quality for a given input, we maximize the quality gap of the response conditioned on a reference response. In this way, PIT is implicitly trained with the improvement goal of better aligning with human preferences. Experiments on two real-world datasets and one synthetic dataset show that our method significantly outperforms prompting-based methods.",a81470aa3721f6cd8a61139f9c4c60923bee093f,Semantic Scholar,,,
197,progressive visual prompt learning with contrastive feature reformation,"['C. Xu', 'Haocheng Shen', 'Fengyuan Shi', 'Boheng Chen', 'Yixuan Liao', 'Xiaoxin Chen', 'Limin Wang']",http://arxiv.org/pdf/2304.08386,2023-04-17,,"Prompt learning has been designed as an alternative to fine-tuning for adapting Vision-language (V-L) models to the downstream tasks. Previous works mainly focus on text prompt while visual prompt works are limited for V-L models. The existing visual prompt methods endure either mediocre performance or unstable training process, indicating the difficulty of visual prompt learning. In this paper, we propose a new Progressive Visual Prompt (ProVP) structure to strengthen the interactions among prompts of different layers. More importantly, our ProVP could effectively propagate the image embeddings to deep layers and behave partially similar to an instance adaptive prompt method. To alleviate generalization deterioration, we further propose a new contrastive feature re-formation, which prevents the serious deviation of the prompted visual feature from the fixed CLIP visual feature distribution. Combining both, our method (ProVP-Ref) is evaluated on 11 image benchmark datasets and achieves 7/11 state-of-the-art results on both few-shot and base-to-novel settings. To the best of our knowledge, we are the first to demonstrate the superior performance of visual prompts in V-L models to previous prompt-based methods in downstream tasks. Meanwhile, it implies that our ProVP-Ref shows the best capability to adapt and to generalize.",ab346a9d9a71bc59671e52cae96eabba16c24eeb,Semantic Scholar,,,
198,fewshot event detection an empirical study and a unified view,"['Yubo Ma', 'Zehao Wang', 'Yixin Cao', 'Aixin Sun']",http://arxiv.org/pdf/2305.01901,2023-05-03,,"Few-shot event detection (ED) has been widely studied, while this brings noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models for future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, which are roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still significantly trail prototype-based methods in terms of overall performance. To investigate their superior performance, we break down their design elements along several dimensions and build a unified framework on prototype-based methods. Under such unified view, each prototype-method can be viewed as a combination of different modules from these design elements. 
We further combine all advantageous modules and propose a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., 2.7% F1 gains under low-resource setting).",ac7e270fcd365c84b29a710d58bf1243e850df4c,Semantic Scholar,,, +199,qaner prompting question answering models for fewshot named entity recognition,"['Andy T. Liu', 'Wei Xiao', 'Henghui Zhu', 'Dejiao Zhang', 'Shang-Wen Li', 'Andrew O. Arnold']",http://arxiv.org/pdf/2203.01543,2022-03-03,,"Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency. However, previous prompt-based methods for few-shot NER have limitations such as a higher computational complexity, poor zero-shot ability, requiring manual prompt engineering, or lack of prompt robustness. In this work, we address these shortcomings by proposing a new prompt-based learning NER method with Question Answering (QA), called QaNER. Our approach includes 1) a refined strategy for converting NER problems into the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based tuning with QA models on a few annotated NER examples; 4) zero-shot NER by prompting the QA model. Comparing the proposed approach with previous methods, QaNER is faster at inference, insensitive to the prompt quality, and robust to hyper-parameters, as well as demonstrating significantly better low-resource performance and zero-shot capability.",b159dffadb69940e14693e812bdaa32e3957717f,Semantic Scholar,,, +200,causal interventionbased prompt debiasing for event argument extraction,"['Jiaju Lin', 'Jie Zhou', 'Qin Chen']",http://arxiv.org/pdf/2210.01561,2022-10-04,,"Prompt-based methods have become increasingly popular among information extraction tasks, especially in low-data scenarios. By formatting a finetune task into a pre-training objective, prompt-based methods resolve the data scarce problem effectively. However, seldom do previous research investigate the discrepancy among different prompt formulating strategies. In this work, we compare two kinds of prompts, name-based prompt and ontology-base prompt, and reveal how ontology-base prompt methods exceed its counterpart in zero-shot event argument extraction (EAE) . Furthermore, we analyse the potential risk in ontology-base prompts via a causal view and propose a debias method by causal intervention. Experiments on two benchmarks demonstrate that modified by our debias method, the baseline model becomes both more effective and robust, with significant improvement in the resistance to adversarial attacks.",b1d5c08a6fb6a5ee5b6b6693e10a587733ca05ed,Semantic Scholar,,, +201,interactivechainprompting ambiguity resolution for crosslingual conditional generation with interaction,"['Jonathan Pilault', 'Xavier García', 'Arthur Bravzinskas', 'Orhan Firat']",http://arxiv.org/pdf/2301.10309,2023-01-24,,"Crosslingual conditional generation (e.g., machine translation) has long enjoyed the benefits of scaling. Nonetheless, there are still issues that scale alone may not overcome. A source query in one language, for instance, may yield several translation options in another language without any extra context. Only one translation could be acceptable however, depending on the translator's preferences and goals. Choosing the incorrect option might significantly affect translation usefulness and quality. 
We propose a novel method interactive-chain prompting -- a series of question, answering and generation intermediate steps between a Translator model and a User model -- that reduces translations into a list of subproblems addressing ambiguities and then resolving such subproblems before producing the final text to be translated. To check ambiguity resolution capabilities and evaluate translation quality, we create a dataset exhibiting different linguistic phenomena which leads to ambiguities at inference for four languages. To encourage further exploration in this direction, we release all datasets. We note that interactive-chain prompting, using eight interactions as exemplars, consistently surpasses prompt-based methods with direct access to background information to resolve ambiguities.",bad6fa523ecf782c837a2eecaaffa4e1f7477c24,Semantic Scholar,,, +202,memobert pretraining model with promptbased learning for multimodal emotion recognition,"['Jinming Zhao', 'Ruichen Li', 'Qin Jin', 'Xinchao Wang', 'Haizhou Li']",https://arxiv.org/pdf/2111.00865,2021-10-27,,"Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity. In this paper, we propose a multimodal pre-training model MEmoBERT for multimodal emotion recognition, which learns multimodal joint representations through self-supervised learning from a self-collected large-scale unlabeled video data that come in sheer volume. Furthermore, unlike the conventional ""pre-train, finetune"" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction one, bringing the downstream task closer to the pre-training. Extensive experiments on two benchmark datasets, IEMOCAP and MSP-IMPROV, show that our proposed MEmoBERT significantly enhances emotion recognition performance.",c10ab4733b43f19547308c15ca231a668181a36c,Semantic Scholar,,, +203,adaprompt adaptive model training for promptbased nlp,"['Yulong Chen', 'Yang Liu', 'Li Dong', 'Shuohang Wang', 'Chenguang Zhu', 'Michael Zeng', 'Yue Zhang']",https://aclanthology.org/2022.findings-emnlp.448.pdf,2022-02-10,,"Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in community. The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, adaptively retrieving external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. 
In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.",d235a9085e0543fcbe502fbc269f9a8ee01dcbab,Semantic Scholar,,,
204,convfinqa exploring the chain of numerical reasoning in conversational finance question answering,"['Zhiyu Chen', 'SHIYANG LI', 'Charese Smiley', 'Zhiqiang Ma', 'Sameena Shah', 'William Yang Wang']",http://arxiv.org/pdf/2210.03849,2022-10-07,,"With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language to the imitation of complex reasoning abilities like human beings. In this work, we investigate the application domain of finance that involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods, to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code is publicly available at https://github.com/czyssrs/ConvFinQA.",d96997265f8146e93b4c9350f19d55e46d1317f0,Semantic Scholar,,,
205,exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods,"['Mengsay Loem', 'Masahiro Kaneko', 'Sho Takase', 'Naoaki Okazaki']",http://arxiv.org/pdf/2305.18156,2023-05-29,,"Large-scale pre-trained language models such as GPT-3 have shown remarkable performance across various natural language processing tasks. However, applying prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks and their controllability remains underexplored. Controllability in GEC is crucial for real-world applications, particularly in educational settings, where the ability to tailor feedback according to learner levels and specific error types can significantly enhance the learning process. This paper investigates the performance and controllability of prompt-based methods with GPT-3 for GEC tasks using zero-shot and few-shot setting. We explore the impact of task instructions and examples on GPT-3’s output, focusing on controlling aspects such as minimal edits, fluency edits, and learner levels. Our findings demonstrate that GPT-3 could effectively perform GEC tasks, outperforming existing supervised and unsupervised approaches. We also showed that GPT-3 could achieve controllability when appropriate task instructions and examples are given.",db0d67057b41927b5b51d3a393c250be64a405ae,Semantic Scholar,,,
206,selfevolve a code evolution framework via large language models,"['Shuyang Jiang', 'Yuhao Wang', 'Yu Wang']",http://arxiv.org/pdf/2306.02907,2023-06-05,,"Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. 
However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce the correct code in one turn. To address these challenges, we propose a novel two-step pipeline, called SelfEvolve, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, SelfEvolve obtains the knowledge from input prompts and generates intermediate code based on the generated knowledge. After that, SelfEvolve asks LLM to act as an expert programmer to perform debugging for the generated code. This is achieved by receiving the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate SelfEvolve on three code generation datasets, including DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that SelfEvolve outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of SelfEvolve, and find that both are superior to other prompting-based methods. Further scalability analysis demonstrates that SelfEvolve can be adapted to other more advanced models, such as GPT-4, and bring consistent efficacy improvement.",eb36681fc4c5dfce4f3e05540fc92b007de278ca,Semantic Scholar,,,
207,zeroshot information extraction via chatting with chatgpt,"['Xiang Wei', 'Xingyu Cui', 'Ning Cheng', 'Xiaobin Wang', 'Xin Zhang', 'Shen Huang', 'Pengjun Xie', 'Jinan Xu', 'Yufeng Chen', 'Meishan Zhang', 'Yong Jiang', 'Wenjuan Han']",http://arxiv.org/pdf/2302.10205,2023-02-20,,"Zero-shot information extraction (IE) aims to build IE systems from the unannotated text. It is challenging due to involving little human intervention. Challenging but worthwhile, zero-shot IE reduces the time and effort that data labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance on zero-shot settings, thus inspiring us to explore prompt-based methods. In this work, we ask whether strong IE models can be constructed by directly prompting LLMs. Specifically, we transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE). With the power of ChatGPT, we extensively evaluate our framework on three IE tasks: entity-relation triple extract, named entity recognition, and event extraction. Empirical results on six datasets across two languages show that ChatIE achieves impressive performance and even surpasses some full-shot models on several datasets (e.g., NYT11-HRL). We believe that our work could shed light on building IE models with limited resources.",f4cba0db34aa0c389cec267ca1f3ba5255ea2645,Semantic Scholar,,,
208,scaling sentence embeddings with large language models,"['Ting Jiang', 'Shaohan Huang', 'Zhongzhi Luan', 'Deqing Wang', 'Fuzhen Zhuang']",https://arxiv.org/pdf/2307.16645,2023-07-31,,"Large language models (LLMs) have recently garnered significant interest. With in-context learning, LLMs achieve impressive results in various natural language tasks. However, the application of LLMs to sentence embeddings remains an area of ongoing research. 
In this work, we propose an in-context learning-based method aimed at improving sentence embeddings performance. Our approach involves adapting the previous prompt-based representation method for autoregressive models, constructing a demonstration set that enables LLMs to perform in-context learning, and scaling up the LLMs to different model sizes. Through extensive experiments, in-context learning enables LLMs to generate high-quality sentence embeddings without any fine-tuning. It helps LLMs achieve performance comparable to current contrastive learning methods. By scaling model size, we find scaling to more than tens of billion parameters harms the performance on semantic textual similarity (STS) tasks. However, the largest model outperforms other counterparts and achieves the new state-of-the-art result on transfer tasks. We also fine-tune LLMs with current contrastive learning approach, and the 2.7B OPT model, incorporating our prompt-based method, surpasses the performance of 4.8B ST5, achieving the new state-of-the-art results on STS tasks. Our code is available at https://github.com/kongds/scaling_sentemb.",f7ccf8ecd508e0b2d423169588dd1c1a82dd3b4d,Semantic Scholar,,, +209,prompting to distill boosting datafree knowledge distillation via reinforced prompt,"['Xinyin Ma', 'Xinchao Wang', 'Gongfan Fang', 'Yongliang Shen', 'Weiming Lu']",https://arxiv.org/pdf/2205.07523,2022-05-16,,"Data-free knowledge distillation (DFKD) conducts knowledge distillation via eliminating the dependence of original training data, and has recently achieved impressive results in accelerating pre-trained language models. At the heart of DFKD is to reconstruct a synthetic dataset by inverting the parameters of the uncompressed model. Prior DFKD approaches, however, have largely relied on hand-crafted priors of the target data distribution for the reconstruction, which can be inevitably biased and often incompetent to capture the intrinsic distributions. To address this problem, we propose a prompt-based method, termed as PromptDFD, that allows us to take advantage of learned language priors, which effectively harmonizes the synthetic sentences to be semantically and grammatically correct. Specifically, PromptDFD leverages a pre-trained generative model to provide language priors and introduces a reinforced topic prompter to control data synthesis, making the generated samples thematically relevant and semantically plausible, and thus friendly to downstream tasks. As shown in our experiments, the proposed method substantially improves the synthesis quality and achieves considerable improvements on distillation performance. In some cases, PromptDFD even gives rise to results on par with those from the data-driven knowledge distillation with access to the original training data.",fb1d85fe28b5e92e22d084eca674d4a2b48cdc5a,Semantic Scholar,,, +210,are hard examples also harder to explain a study with human and modelgenerated explanations,"['Swarnadeep Saha', 'Peter Hase', 'Nazneen Rajani', 'Mohit Bansal']",https://arxiv.org/pdf/2211.07517,2022-11-14,,"Recent work on explainable NLP has shown that few-shot prompting can enable large pre-trained language models (LLMs) to generate grammatical and factual natural language explanations for data labels. 
In this work, we study the connection between explainability and sample hardness by investigating the following research question – “Are LLMs and humans equally good at explaining data labels for both easy and hard samples?” We answer this question by first collecting human-written explanations in the form of generalizable commonsense rules on the task of Winograd Schema Challenge (Winogrande dataset). We compare these explanations with those generated by GPT-3 while varying the hardness of the test samples as well as the in-context samples. We observe that (1) GPT-3 explanations are as grammatical as human explanations regardless of the hardness of the test samples, (2) for easy examples, GPT-3 generates highly supportive explanations but human explanations are more generalizable, and (3) for hard examples, human explanations are significantly better than GPT-3 explanations both in terms of label-supportiveness and generalizability judgements. We also find that hardness of the in-context examples impacts the quality of GPT-3 explanations. Finally, we show that the supportiveness and generalizability aspects of human explanations are also impacted by sample hardness, although by a much smaller margin than models.",0040dac7a1bf7a1eeb01c86ddb993f331f35b158,Semantic Scholar,,, +211,controllable generation of dialogue acts for dialogue systems via fewshot response generation and ranking,"['Angela Ramirez', 'Karik Agarwal', 'Juraj Juraska', 'Utkarsh Garg', 'M. Walker']",https://arxiv.org/pdf/2307.14440,2023-07-26,,"Dialogue systems need to produce responses that realize multiple types of dialogue acts (DAs) with high semantic fidelity. In the past, natural language generators (NLGs) for dialogue were trained on large parallel corpora that map from a domain-specific DA and its semantic attributes to an output utterance. Recent work shows that pretrained language models (LLMs) offer new possibilities for controllable NLG using prompt-based learning. Here we develop a novel few-shot overgenerate-and-rank approach that achieves the controlled generation of DAs. We compare eight few-shot prompt styles that include a novel method of generating from textual pseudo-references using a textual style transfer approach. We develop six automatic ranking functions that identify outputs with both the correct DA and high semantic accuracy at generation time. We test our approach on three domains and four LLMs. To our knowledge, this is the first work on NLG for dialogue that automatically ranks outputs using both DA and attribute accuracy. For completeness, we compare our results to fine-tuned few-shot models trained with 5 to 100 instances per DA. Our results show that several prompt settings achieve perfect DA accuracy, and near perfect semantic accuracy (99.81%) and perform better than few-shot fine-tuning.",03d8b1e78d124a561f3c2a67d3199472ee73228d,Semantic Scholar,,, +212,lambada backward chaining for automated reasoning in natural language,"['Seyed Mehran Kazemi', 'Najoung Kim', 'Deepti Bhatia', 'Xinyuan Xu', 'Deepak Ramachandran']",http://arxiv.org/pdf/2212.13894,2022-12-20,,"Remarkable progress has been made on automated reasoning with natural text, by using Large Language Models (LLMs) and methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space, and thus high failure rates for problems requiring longer chains of reasoning. 
The classical automated reasoning literature has shown that reasoning in the backward direction (i.e. from intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, that are simply implemented by few-shot prompted LLM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.",03fb95e6be583ca954c3d00812a9e9a40f118e51,Semantic Scholar,,, +213,skillbased fewshot selection for incontext learning,"['Shengnan An', 'Bo Zhou', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Weizhu Chen', 'Jian-Guang Lou']",https://arxiv.org/pdf/2305.14210,2023-05-23,,"In-context learning is the paradigm that adapts large language models to downstream tasks by providing a few examples. Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning. In this paper, we propose Skill-KNN, a skill-based few-shot selection method for in-context learning. The key advantages of Skill-KNN include: (1) it addresses the problem that existing methods based on pre-trained embeddings can be easily biased by surface natural language features that are not important for the target task; (2) it does not require training or fine-tuning of any models, making it suitable for frequently expanding or changing example banks. The key insight is to optimize the inputs fed into the embedding model, rather than tuning the model itself. Technically, Skill-KNN generates the skill-based descriptions for each test case and candidate example by utilizing a pre-processing few-shot prompting, thus eliminating unimportant surface features. Experimental results across five cross-domain semantic parsing datasets and six backbone models show that Skill-KNN significantly outperforms existing methods.",04526876688e5a56106629229309fae272da1c79,Semantic Scholar,,, +214,echoprompt instructing the model to rephrase queries for improved incontext learning,"['Rajasekhar Reddy Mekala', 'Yasaman Razeghi', 'Sameer Singh']",https://arxiv.org/pdf/2309.10687,2023-09-16,,"Language models are achieving impressive performance on various tasks by aggressively adopting inference-time prompting techniques, such as zero-shot and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them. EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting. Experimental results show that EchoPrompt yields substantial improvements across all these settings for four families of causal language models. These improvements are observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate the factors contributing to EchoPrompt's effectiveness through ablation studies, which reveal that both the original query and the model-generated rephrased version are instrumental in its performance gains. 
Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.",04e838c16f3d1fb8d69d34fe0a0a92c59717875b,Semantic Scholar,,, +215,improved compositional generalization by generating demonstrations for metalearning,"['Sam Spilsbury', 'A. Ilin']",http://arxiv.org/pdf/2305.13092,2023-05-22,,"Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.",088ba3cfb904ccd0aa1993a1e30c725b061aad7e,Semantic Scholar,,, +216,fantastically ordered prompts and where to find them overcoming fewshot prompt order sensitivity,"['Yao Lu', 'Max Bartolo', 'Alastair Moore', 'S. Riedel', 'Pontus Stenetorp']",https://aclanthology.org/2022.acl-long.556.pdf,2021-04-18,,"When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.",0adec918885dff698acf359988ed79a543157f80,Semantic Scholar,,, +217,crowd score a method for the evaluation of jokes using large language model ai voters as judges,"['Fabrício Góes', 'Zisen Zhou', 'Piotr Sawicki', 'M. Grzes', 'Daniel Brown']",http://arxiv.org/pdf/2212.11214,2022-12-21,,"This paper presents the Crowd Score, a novel method to assess the funniness of jokes using large language models (LLMs) as AI judges. 
Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes using an auditing technique that checks if the explanation for a particular vote is reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that aggressive and self-defeating voters are significantly more inclined to find more jokes funny of a set of aggressive/self-defeating jokes than the affiliative and self-enhancing voters. The Crowd Score follows the same trend as human judges by assigning higher scores to jokes that are also considered funnier by human judges. We believe that our methodology could be applied to other creative domains such as story, poetry, slogans, etc. It could both help the adoption of a flexible and accurate standard approach to compare different work in the CC community under a common metric and by minimizing human participation in assessing creative artefacts, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human participants to rate creative artefacts. 1",0ba5fb80d2c3ea3a8505415e32d954b4e4eea170,Semantic Scholar,,, +218,art automatic multistep reasoning and tooluse for large language models,"['Bhargavi Paranjape', 'Scott M. Lundberg', 'Sameer Singh', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2303.09014,2023-03-16,,"Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. Further, each reasoning step can rely on external tools to support computation beyond the core LLM capabilities (e.g. search/running code). Prior work on CoT prompting and tool use typically requires hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program. Given a new task to solve, ART selects demonstrations of multi-step reasoning and tool use from a task library. At test time, ART seamlessly pauses generation whenever external tools are called, and integrates their output before resuming generation. ART achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. ART is also extensible, and makes it easy for humans to improve performance by correcting errors in task-specific programs or incorporating new tools, which we demonstrate by drastically improving performance on select tasks with minimal human intervention.",0d42221038c05cee8443c5b5af838505ee137dc3,Semantic Scholar,,, +219,promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models,"['Mirac Suzgun', 'Luke Melas-Kyriazi', 'Dan Jurafsky']",https://arxiv.org/pdf/2205.11503,2022-05-23,,"We propose a method for arbitrary textual style transfer (TST)—the task of transforming a text into any given style—utilizing general-purpose pre-trained language models. 
Our method, Prompt-and-Rerank, is based on a mathematical formulation of the TST task, decomposing it into three constituent components: textual similarity, target style strength, and fluency. Our method uses zero-shot or few-shot prompting to obtain a set of candidate generations in the target style, and then re-ranks them according to the three components. Our method enables small pre-trained language models to perform on par with state-of-the-art large-scale models while using two orders of magnitude less compute and memory. We also investigate the effect of model size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on style transfer quality across seven diverse textual style transfer datasets, finding, among other things, that delimiter-pair choice has a large impact on performance, and that models have biases on the direction of style transfer.",0d6bb585493e34975f0437faa3179db3a02f6ae8,Semantic Scholar,,, +220,teaching arithmetic to small transformers,"['Nayoung Lee', 'Kartik K. Sreenivasan', 'Jason D. Lee', 'Kangwook Lee', 'Dimitris Papailiopoulos']",https://arxiv.org/pdf/2307.03381,2023-07-07,,"Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed. We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and model scale. Additionally, we discuss length generalization challenges. Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction objective for rapidly eliciting arithmetic capabilities.",0db0af0cd3ceb0531a050a03e6ceb849580ff53b,Semantic Scholar,,, +221,generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models,"['Varun Nair', 'Elliot Schumacher', 'Anitha Kannan']",http://arxiv.org/pdf/2305.05982,2023-05-10,,"A medical provider’s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. 
Even minor inaccuracies in visit summaries (for example, summarizing “patient does not have a fever” when a fever is present) can be detrimental to the outcome of care for the patient.This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.",0f0a973c6457bcaf7255f891f9b34d658a0a84ae,Semantic Scholar,,, +222,can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning,"['Mohamed Aghzal', 'E. Plaku', 'Ziyu Yao']",https://arxiv.org/pdf/2310.03249,2023-10-05,,"Large language models (LLMs) have achieved remarkable success across a wide spectrum of tasks; however, they still face limitations in scenarios that demand long-term planning and spatial reasoning. To facilitate this line of research, in this work, we propose a new benchmark, termed $\textbf{P}$ath $\textbf{P}$lanning from $\textbf{N}$atural $\textbf{L}$anguage ($\textbf{PPNL}$). Our benchmark evaluates LLMs' spatial-temporal reasoning by formulating ''path planning'' tasks that require an LLM to navigate to target locations while avoiding obstacles and adhering to constraints. Leveraging this benchmark, we systematically investigate LLMs including GPT-4 via different few-shot prompting methodologies and BART and T5 of various sizes via fine-tuning. Our experimental results show the promise of few-shot GPT-4 in spatial reasoning, when it is prompted to reason and act interleavedly, although it still fails to make long-term temporal reasoning. In contrast, while fine-tuned LLMs achieved impressive results on in-distribution reasoning tasks, they struggled to generalize to larger environments or environments with more obstacles.",107aa1e3b1ce604d953475baf98674e92a723bda,Semantic Scholar,,, +223,learning performanceimproving code edits,"['Aman Madaan', 'Alex Shypula', 'Uri Alon', 'Milad Hashemi', 'Parthasarathy Ranganathan', 'Yiming Yang', 'Graham Neubig', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2302.07867,2023-02-15,,"The waning of Moore's Law has shifted the focus of the tech industry towards alternative methods for continued performance gains. While optimizing compilers are a standard tool to help increase program efficiency, programmers continue to shoulder much responsibility in crafting and refactoring code with better performance characteristics. In this paper, we investigate the ability of large language models (LLMs) to suggest functionally correct, performance improving code edits. We hypothesize that language models can suggest such edits in ways that would be impractical for static analysis alone. We investigate these questions by curating a large-scale dataset of Performance-Improving Edits, PIE. 
PIE contains trajectories of programs, where a programmer begins with an initial, slower version and iteratively makes changes to improve the program's performance. We use PIE to evaluate and improve the capacity of large language models. Specifically, use examples from PIE to fine-tune multiple variants of CODEGEN, a billion-scale Transformer-decoder model. Additionally, we use examples from PIE to prompt OpenAI's CODEX using a few-shot prompting. By leveraging PIE, we find that both CODEX and CODEGEN can generate performance-improving edits, with speedups of more than 2.5x for over 25% of the programs, for C++ and Python, even after the C++ programs were compiled using the O3 optimization level. Crucially, we show that PIE allows CODEGEN, an open-sourced and 10x smaller model than CODEX, to match the performance of CODEX on this challenging task. Overall, this work opens new doors for creating systems and methods that can help programmers write efficient code.",1786a2f9140ed7211b21302977de64e948b92308,Semantic Scholar,,, +224,prompting palm for translation assessing strategies and performance,"['David Vilar', 'Markus Freitag', 'Colin Cherry', 'Jiaming Luo', 'Viresh Ratnakar', 'George F. Foster']",http://arxiv.org/pdf/2211.09102,2022-11-16,,"Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages. We probe this ability in an in-depth study of the pathways language model (PaLM), which has demonstrated the strongest machine translation (MT) performance among similarly-trained LLMs to date. We investigate various strategies for choosing translation examples for few-shot prompting, concluding that example quality is the most important factor. Using optimized prompts, we revisit previous assessments of PaLM’s MT capabilities with more recent test sets, modern MT metrics, and human evaluation, and find that its performance, while impressive, still lags that of state-of-the-art supervised systems. We conclude by providing an analysis of PaLM’s MT output which reveals some interesting properties and prospects for future work.",197ba7bbfdbb052b0770088815c110774220f397,Semantic Scholar,,, +225,contextual biasing of namedentities with large language models,"['Chuanneng Sun', 'Zeeshan Ahmed', 'Yingyi Ma', 'Zhe Liu', 'Yutong Pang', 'Ozlem Kalinli']",https://arxiv.org/pdf/2309.00723,2023-09-01,,"This paper studies contextual biasing with Large Language Models (LLMs), where during second-pass rescoring additional contextual information is provided to a LLM to boost Automatic Speech Recognition (ASR) performance. We propose to leverage prompts for a LLM without fine tuning during rescoring which incorporate a biasing list and few-shot examples to serve as additional information when calculating the score for the hypothesis. In addition to few-shot prompt learning, we propose multi-task training of the LLM to predict both the entity class and the next token. To improve the efficiency for contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we propose dynamic prompting, where we select the most likely class using the class tag prediction, and only use entities in this class as contexts for next token prediction. Word Error Rate (WER) evaluation is performed on i) an internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli dataset. 
Results indicate that biasing lists and few-shot examples can achieve 17.8% and 9.6% relative improvement compared to first pass ASR, and that multi-task training and dynamic prompting can achieve 20.0% and 11.3% relative WER improvement, respectively.",1ed5d06c4dc46e6a983597b740ab0a31d0ce22ad,Semantic Scholar,,, +226,mixpro simple yet effective data augmentation for promptbased learning,"['Bohan Li', 'Longxu Dou', 'Yutai Hou', 'Yunlong Feng', 'Honglin Mu', 'Wanxiang Che']",http://arxiv.org/pdf/2304.09402,2023-04-19,,"Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template. This approach demonstrates its effectiveness, especially in few-shot learning scenarios, where the model is trained on a scarce amount of data. Despite its successes, the limited templates and text in few-shot prompt-based learning scenarios leave significant room for performance improvement. Moreover, existing methods sometimes resort to model ensembles, which, while effective, could potentially hamper model efficiency due to increased computational demands. To address these issues, we introduce MixPro, an augmentation method designed to augment both the vanilla input text and the templates. We implement this through the token-level, the sentence-level, and the template-level Mixup strategies. The experimental results on five few-shot datasets show that MixPro outperforms other augmentation baselines, improving model performance by an average of 5.08% compared to before augmentation.",1f0dfbbc13ac31de8709bbb4d0f6478aa1222cef,Semantic Scholar,,, +227,mapl parameterefficient adaptation of unimodal pretrained models for visionlanguage fewshot prompting,"['Oscar Mañas', 'Pau Rodríguez López', 'Saba Ahmadi', 'Aida Nematzadeh', 'Yash Goyal', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2210.07179,2022-10-13,,"Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method that reuses frozen pre-trained unimodal models and leverages their strong generalization capabilities in multimodal vision-language (VL) settings. MAPL learns a lightweight mapping between the representation spaces of unimodal models using aligned image-text data, and can generalize to unseen VL tasks from just a few in-context examples. The small number of trainable parameters makes MAPL effective at low-data and in-domain learning. Moreover, MAPL’s modularity enables easy extension to other pre-trained models. Extensive experiments on several visual question answering and image captioning benchmarks show that MAPL achieves superior or competitive performance compared to similar methods while training orders of magnitude fewer parameters. MAPL can be trained in just a few hours using modest computational resources and public datasets. We release our code and pre-trained model weights at https://github.com/oscmansan/mapl.",1f86bf1e334200ec0481349255559fbfe7a33caa,Semantic Scholar,,, +228,dspy compiling declarative language model calls into selfimproving pipelines,"['O. Khattab', 'Arnav Singhvi', 'Paridhi Maheshwari', 'Zhiyuan Zhang', 'Keshav Santhanam', 'Sri Vardhamanan', 'Saiful Haq', 'Ashutosh Sharma', 'Thomas T. 
Joshi', 'Hanna Moazam', 'Heather Miller', 'Matei Zaharia', 'Christopher Potts']",https://arxiv.org/pdf/2310.03714,2023-10-05,,"The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded""prompt templates"", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at https://github.com/stanfordnlp/dspy",2069aaaa281eb13bcd9330fc4d43f24f6b436a53,Semantic Scholar,,, +229,interrolang exploring nlp models and datasets through dialoguebased explanations,"['Nils Feldhus', 'Qianli Wang', 'Tatiana Anikina', 'Sahil Chopra', 'Cennet Oguz', 'Sebastian Möller']",https://arxiv.org/pdf/2310.05592,2023-10-09,,"While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel Adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model's predicted label when it's not shown. We found rationalization and feature attribution were helpful in explaining the model behavior. 
Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than one-off explanations.",2522410b1cac0c14fa656a0aaeaff08bacb358a9,Semantic Scholar,,, +230,multilingual evaluation of code generation models,"['Ben Athiwaratkun', 'Sanjay Krishna Gouda', 'Zijian Wang', 'Xiaopeng Li', 'Yuchen Tian', 'Ming Tan', 'Wasi Uddin Ahmad', 'Shiqi Wang', 'Qing Sun', 'Mingyue Shang', 'Sujan Kumar Gonugondla', 'Hantian Ding', 'Varun Kumar', 'Nathan Fulton', 'A. Farahani', 'Siddharth Jain', 'Robert Giaquinto', 'Haifeng Qian', 'M. Ramanathan', 'Ramesh Nallapati', 'Baishakhi Ray', 'Parminder Bhatia', 'Sudipta Sengupta', 'D. Roth', 'Bing Xiang']",http://arxiv.org/pdf/2210.14868,2022-10-27,,"We present new benchmarks on evaluation code generation models: MBXP and Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming languages and are generated using a scalable conversion framework that transpiles prompts and test cases from the original Python datasets into the corresponding data in the target language. Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and discovered generalization ability of language models on out-of-domain languages, advantages of multi-lingual models over mono-lingual, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even on mono-lingual settings. Furthermore, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks. Overall, our benchmarks represents a significant step towards a deeper understanding of language models' code generation abilities. We publicly release our code and datasets at https://github.com/amazon-research/mxeval.",2577d053f8aab912d29b424e1f09133d83740fd2,Semantic Scholar,,, +231,towards using fewshot prompt learning for automating model completion,"['Meriem Ben Chaaben', 'Lola Burgueño', 'H. Sahraoui']",https://arxiv.org/pdf/2212.03404,2022-12-07,,We propose a simple yet a novel approach to improve completion in domain modeling activities. Our approach exploits the power of large language models by using few-shot prompt learning without the need to train or fine-tune those models with large datasets that are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways during the modeling activities.,2a99239f09e95f4dbccec572d66f4519206762f9,Semantic Scholar,,, +232,"better patching using llm prompting, via selfconsistency","['Toufique Ahmed', 'Prem Devanbu']",https://arxiv.org/pdf/2306.00108,2023-05-31,,"Large Language models (LLMs) can be induced to solve non-trivial problems with “few-shot” prompts including illustrative problem-solution examples. Now if the few-shots also include “chain of thought” ($\mathcal{C}oT$) explanations, which are of the form problem-explanation-solution, LLMs will generate a “explained” solution, and perform even better. 
Recently an exciting, substantially better technique, self-consistency [1] ($\mathcal{S}-C$) has emerged, based on the intuition that there are many plausible explanations for the right solution; when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs, for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct! Unfortunately, the use of this highly-performant $\mathcal{S}-C$ (or even $\mathcal{C}oT$) approach in software engineering settings is hampered by the lack of explanations; most software datasets lack explanations. In this paper, we describe an application of the $\mathcal{S}-C$ approach to program repair, using the commit log on the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the art results, beating previous approaches to prompting-based program repair, on the MODIT dataset; we also find evidence suggesting that the correct commit messages are helping the LLM learn to produce better patches.",32426b96ff3c680125bde3b835bfa931288b8ade,Semantic Scholar,,, +233,large language model augmented narrative driven recommendations,"['Sheshera Mysore', 'A. McCallum', 'Hamed Zamani']",https://arxiv.org/pdf/2306.02250,2023-06-04,,"Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context, for example, travelers soliciting recommendations for points of interest while describing their likes/dislikes and travel circumstances. These requests are increasingly important with the rise of natural language-based conversational interfaces for search and recommendation systems. However, NDR lacks abundant training data for models, and current platforms commonly do not support these requests. Fortunately, classical user-item interaction datasets contain rich textual data, e.g., reviews, which often describe user preferences and context – this may be used to bootstrap training for NDR models. In this work, we explore using large language models (LLMs) for data augmentation to train NDR models. We use LLMs for authoring synthetic narrative queries from user-item interactions with few-shot prompting and train retrieval models for NDR on synthetic queries and user-item interaction data. Our experiments demonstrate that this is an effective strategy for training small-parameter retrieval models that outperform other retrieval and LLM baselines for narrative-driven recommendation.",3566e1245bfc90096fe0cdb8b18674da6519c8d6,Semantic Scholar,,, +234,a comprehensive survey on pretrained foundation models a history from bert to chatgpt,"['Ce Zhou', 'Qian Li', 'Chen Li', 'Jun Yu', 'Yixin Liu', 'Guan Wang', 'Kaichao Zhang', 'Cheng Ji', 'Qi Yan', 'Lifang He', 'Hao Peng', 'Jianxin Li', 'Jia Wu', 'Ziwei Liu', 'P. Xie', 'Caiming Xiong', 'Jian Pei', 'Philip S. Yu', 'Lichao Sun Michigan State University', 'B. University', 'Lehigh University', 'M. University', 'Nanyang Technological University', 'University of California at San Diego', 'D. University', 'U. Chicago', 'S. Research']",http://arxiv.org/pdf/2302.09419,2023-02-18,,"Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable parameter initialization for a wide range of downstream applications. 
BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the generative pretrained transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT shows promising success on large language models, which applies an autoregressive language model with zero shot or few shot prompting. The remarkable achievements of PFM have brought significant breakthroughs to various fields of AI. Numerous studies have proposed different methods, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs used for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. Overall, this survey aims to shed light on the research of the PFMs on scalability, security, logical reasoning ability, cross-domain learning ability, and the user-friendly interactive ability for artificial general intelligence.",3599a236f285af48782fc30b1341d13ec7320735,Semantic Scholar,,, +235,language model crossover variation through fewshot prompting,"['Elliot Meyerson', 'M. Nelson', 'Herbie Bradley', 'Arash Moradi', 'Amy K. Hoover', 'J. Lehman']",https://arxiv.org/pdf/2302.12170,2023-02-23,,"This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. 
The conclusion is that language model crossover is a promising method for evolving genomes representable as text.",3841234dd49250c4fcbba79eed6593d3b57932c1,Semantic Scholar,,, +236,mathattack attacking large language models towards math solving ability,"['Zihao Zhou', 'Qiufeng Wang', 'Mingyu Jin', 'Jie Yao', 'Jianan Ye', 'Wei Liu', 'Wei Wang', 'Xiaowei Huang', 'Kaizhu Huang']",https://arxiv.org/pdf/2309.01686,2023-09-04,,"With the boom of Large Language Models (LLMs), the research of solving Math Word Problem (MWP) has recently made great progress. However, there are few studies to examine the security of LLMs in math solving ability. Instead of attacking prompts in the use of LLMs, we propose a MathAttack model to attack MWP samples which are closer to the essence of security in solving math problems. Compared to traditional text adversarial attack, it is essential to preserve the mathematical logic of original MWPs during the attacking. To this end, we propose logical entity recognition to identify logical entries which are then frozen. Subsequently, the remaining text are attacked by adopting a word-level attacker. Furthermore, we propose a new dataset RobustMath to evaluate the robustness of LLMs in math solving ability. Extensive experiments on our RobustMath and two another math benchmark datasets GSM8K and MultiAirth show that MathAttack could effectively attack the math solving ability of LLMs. In the experiments, we observe that (1) Our adversarial samples from higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy (e.g., transfer from larger to smaller-size LLMs, or from few-shot to zero-shot prompts); (2) Complex MWPs (such as more solving steps, longer text, more numbers) are more vulnerable to attack; (3) We can improve the robustness of LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observation can serve as an important attempt towards enhancing the robustness of LLMs in math solving ability. We will release our code and dataset.",3886f3bd2a0af9e75bf9fa5b7db4224969dbf346,Semantic Scholar,,, +237,fineval a chinese financial domain knowledge evaluation benchmark for large language models,"['Liwen Zhang', 'Wei Cai', 'Zhaowei Liu', 'Zhi Yang', 'Wei Dai', 'Yujie Liao', 'Qi Qin', 'Yifei Li', 'Xingxian Liu', 'Zhiqiang Liu', 'Zhoufan Zhu', 'Anbo Wu', 'Xinnan Guo', 'Yun Chen']",https://arxiv.org/pdf/2308.09975,2023-08-19,,"Large language models (LLMs) have demonstrated exceptional performance in various natural language processing tasks, yet their efficacy in more challenging and domain-specific tasks remains largely unexplored. This paper presents FinEval, a benchmark specifically designed for the financial domain knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice questions covering Finance, Economy, Accounting, and Certificate. It includes 4,661 questions spanning 34 different academic subjects. To ensure a comprehensive model performance evaluation, FinEval employs a range of prompt types, including zero-shot and few-shot prompts, as well as answer-only and chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs on FinEval, the results show that only GPT-4 achieved an accuracy close to 70% in different prompt settings, indicating significant growth potential for LLMs in the financial domain knowledge. 
Our work offers a more comprehensive financial knowledge evaluation benchmark, utilizing data of mock exams and covering a wide range of evaluated LLMs.",3b88526a0f0337e3a6b632b4af8fd0882eb4b470,Semantic Scholar,,, +238,model ensemble instead of prompt fusion a samplespecific knowledge transfer method for fewshot prompt tuning,"['Xiangyu Peng', 'Chen Xing', 'Prafulla Kumar Choubey', 'Chien-Sheng Wu', 'Caiming Xiong']",http://arxiv.org/pdf/2210.12587,2022-10-23,,"Prompt tuning approaches, which learn task-specific soft prompts for a downstream task conditioning on frozen pre-trained models, have attracted growing interest due to its parameter efficiency. With large language models and sufficient training data, prompt tuning performs comparably to full-model tuning. However, with limited training samples in few-shot settings, prompt tuning fails to match the performance of full-model fine-tuning. In this work, we focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks. Recognizing the good generalization capabilities of ensemble methods in low-data regime, we first experiment and show that a simple ensemble of model predictions based on different source prompts, outperforms existing multi-prompt knowledge transfer approaches such as source prompt fusion in the few-shot setting. Motivated by this observation, we further investigate model ensembles and propose Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs. Through this way, SESoM inherits the superior generalization of model ensemble approaches and simultaneously captures the sample-specific competence of each source prompt. We conduct experiments across a diverse set of eight NLP tasks using models of different scales (T5-{base, large, XL}) and find that SESoM consistently outperforms the existing models of the same as well as larger parametric scale by a large margin.",3d7d385d9ee75a286e8da27f7d3cf9f12651c899,Semantic Scholar,,, +239,code as policies language model programs for embodied control,"['Jacky Liang', 'Wenlong Huang', 'F. Xia', 'Peng Xu', 'Karol Hausman', 'Brian Ichter', 'Peter R. Florence', 'Andy Zeng']",https://arxiv.org/pdf/2209.07753,2022-09-16,,"Large language models (LLMs) trained on code-completion have been shown to be capable of synthesizing simple Python programs from docstrings [1]. We find that these code-writing LLMs can be re-purposed to write robot policy code, given natural language commands. Specifically, policy code can express functions or feedback loops that process perception outputs (e.g., from object detectors [2], [3]) and parameterize control primitive APIs. When provided as input several example language commands (formatted as comments) followed by corresponding policy code (via few-shot prompting), LLMs can take in new commands and autonomously re-compose API calls to generate new policy code respectively. By chaining classic logic structures and referencing third-party libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) generalize to new instructions, and (iii) prescribe precise values (e.g., velocities) to ambiguous descriptions (‘faster’) depending on context (i.e., behavioral commonsense). 
This paper presents Code as Policies: a robot-centric formulation of language model generated programs (LMPs) that can represent reactive policies (e.g., impedance controllers), as well as waypoint-based policies (vision-based pick and place, trajectory-based control), demonstrated across multiple real robot platforms. Central to our approach is prompting hierarchical code-gen (recursively defining undefined functions), which can write more complex code and also improves state-of-the-art to solve 39.8% of problems on the HumanEval [1] benchmark. Code and videos are available at https://code-as-policies.github.io",41531594d7e0f3b2e138ae43e0a0f6e24a9b014c,Semantic Scholar,,, +240,tool documentation enables zeroshot toolusage with large language models,"['Cheng-Yu Hsieh', 'Sibei Chen', 'Chun-Liang Li', 'Yasuhisa Fujii', 'Alexander J. Ratner', 'Chen-Yu Lee', 'Ranjay Krishna', 'Tomas Pfister']",https://arxiv.org/pdf/2308.00675,2023-08-01,,"Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to acquire, and can result in undesirable biased usage if the wrong demonstration is chosen. Even in the rare scenario that demonstrations are readily available, there is no principled selection protocol to determine how many and which ones to provide. As tasks grow more complex, the selection search grows combinatorially and invariably becomes intractable. Our work provides an alternative to demonstrations: tool documentation. We advocate the use of tool documentation, descriptions for the individual tool usage, over demonstrations. We substantiate our claim through three main empirical findings on 6 tasks across both vision and language modalities. First, on existing benchmarks, zero-shot prompts with only tool documentation are sufficient for eliciting proper tool usage, achieving performance on par with few-shot prompts. Second, on a newly collected realistic tool-use dataset with hundreds of available tool APIs, we show that tool documentation is significantly more valuable than demonstrations, with zero-shot documentation significantly outperforming few-shot without documentation. Third, we highlight the benefits of tool documentations by tackling image generation and video tracking using just-released unseen state-of-the-art models as tools. Finally, we highlight the possibility of using tool documentation to automatically enable new applications: by using nothing more than the documentation of GroundingDino, Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the just-released Grounded-SAM and Track Anything models.",446fb5dead075a1a08862662738f462e9a0e91c8,Semantic Scholar,,, +241,"text and patterns for effective chain of thought, it takes two to tango","['Aman Madaan', 'A. Yazdanbakhsh']",http://arxiv.org/pdf/2209.07686,2022-09-16,,"The past decade has witnessed dramatic gains in natural language processing and an unprecedented scaling of large language models. These developments have been accelerated by the advent of few-shot techniques such as chain of thought (CoT) prompting. Specifically, CoT pushes the performance of large language models in a few-shot setup by augmenting the prompts with intermediate steps. Despite impressive results across various tasks, the reasons behind their success have not been explored. This work uses counterfactual prompting to develop a deeper understanding of CoT-based few-shot prompting mechanisms in large language models. 
We first systematically identify and define the key components of a prompt: symbols, patterns, and text. Then, we devise and conduct an exhaustive set of experiments across four different tasks, by querying the model with counterfactual prompts where only one of these components is altered. Our experiments across three models (PaLM, GPT-3, and CODEX) reveal several surprising findings and brings into question the conventional wisdom around few-shot prompting. First, the presence of factual patterns in a prompt is practically immaterial to the success of CoT. Second, our results conclude that the primary role of intermediate steps may not be to facilitate learning how to solve a task. The intermediate steps are rather a beacon for the model to realize what symbols to replicate in the output to form a factual answer. Further, text imbues patterns with commonsense knowledge and meaning. Our empirical and qualitative analysis reveals that a symbiotic relationship between text and patterns explains the success of few-shot prompting: text helps extract commonsense from the question to help patterns, and patterns enforce task understanding and direct text generation.",4988b3d378b79eb8669112620baf1ff4e3e536fd,Semantic Scholar,,, +242,revisiting nonenglish text simplification a unified multilingual benchmark,"['Michael Joseph Ryan', 'Tarek Naous', 'Wei Xu']",http://arxiv.org/pdf/2305.15678,2023-05-25,,"Recent advancements in high-quality, large-scale English resources have pushed the frontier of English Automatic Text Simplification (ATS) research. However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages. This paper introduces the MultiSim benchmark, a collection of 27 resources in 12 distinct languages containing over 1.7 million complex-simple sentence pairs. This benchmark will encourage research in developing more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance improvements from multilingual training in non-English settings. We observe strong performance from Russian in zero-shot cross-lingual transfer to low-resource languages. We further show that few-shot prompting with BLOOM-176b achieves comparable quality to reference simplifications outperforming fine-tuned models in most languages. We validate these findings through human evaluation.",4e1a4d6804c7983c659feb7e41d49ad8c21aaa43,Semantic Scholar,,, +243,towards informative fewshot prompt with maximum information gain for incontext learning,"['Hongfu Liu', 'Ye Wang']",https://arxiv.org/pdf/2310.08923,2023-10-13,,"Large Language models (LLMs) possess the capability to engage In-context Learning (ICL) by leveraging a few demonstrations pertaining to a new downstream task as conditions. However, this particular learning paradigm suffers from high instability stemming from substantial variances induced by factors such as the input distribution of selected examples, their ordering, and prompt formats. In this work, we demonstrate that even when all these factors are held constant, the random selection of examples still results in high variance. Consequently, we aim to explore the informative ability of data examples by quantifying the Information Gain (IG) obtained in prediction after observing a given example candidate. Then we propose to sample those with maximum IG. 
Additionally, we identify the presence of template bias, which can lead to unfair evaluations of IG during the sampling process. To mitigate this bias, we introduce Calibration Before Sampling strategy. The experimental results illustrate that our proposed method can yield an average relative improvement of 14.3% across six classification tasks using three LLMs.",53addc28b106440a3c306b2cff8e259ad63d6d53,Semantic Scholar,,, +244,building cooperative embodied agents modularly with large language models,"['Hongxin Zhang', 'Weihua Du', 'Jiaming Shan', 'Qinhong Zhou', 'Yilun Du', 'J. Tenenbaum', 'Tianmin Shu', 'Chuang Gan']",https://arxiv.org/pdf/2307.02485,2023-07-05,,"Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.",587352c3b95c90de6d37f061c8e117f42be0b575,Semantic Scholar,,, +245,consprompt easily exploiting contrastive samples for fewshot prompt learning,"['Jinta Weng', 'Yue Hu', 'Zhihong Tian', 'Heyan Huang']",https://arxiv.org/pdf/2211.04118,,,"Prompt learning recently become an effective linguistic tool to motivate the PLMs’ knowledge on few-shot-setting tasks. However, studies have shown the lack of robustness still exists in prompt learning, since suitable initialization of continuous prompt and expert-first manual prompt are essential in fine-tuning process. What is more, human also utilize their comparative ability to motivate their existing knowledge for distinguishing different examples. Motivated by this, we explore how to use contrastive samples to strengthen prompt learning. In detail, we first propose our model ConsPrompt combining with prompt encoding network, contrastive sampling module, and contrastive scoring module. Subsequently, two sampling strategies, similarity-based and label-based strategies, are introduced to realize differential contrastive learning. The effectiveness of proposed ConsPrompt is demonstrated in five different few-shot learning tasks and shown the similarity-based sampling strategy is more effective than label-based in combining contrastive learning. Our results also exhibits the state-of-the-art performance and robustness in different few-shot settings, which proves that the ConsPrompt could be assumed as a better knowledge probe to motivate PLMs. 
As far as we could reach, this is the first work exploring how to use contrastive learning approach and suitable contrastive samples to enhance prompt-based fine-tuning.",5e3675bdbe898cb28a0fc3c2f72a578a97fe64bb,Semantic Scholar,,, +246,can gpt3 perform statutory reasoning,"['Andrew Blair-Stanek', 'Nils Holzenberger', 'Benjamin Van Durme']",https://arxiv.org/pdf/2302.06100,2023-02-13,,"Statutory reasoning is the task of reasoning with facts and statutes, which are rules written in natural language by a legislature. It is a basic legal skill. In this paper we explore the capabilities of the most capable GPT-3 model, text-davinci-003, on an established statutory-reasoning dataset called SARA. We consider a variety of approaches, including dynamic few-shot prompting, chain-of-thought prompting, and zero-shot prompting. While we achieve results with GPT-3 that are better than the previous best published results, we also identify several types of clear errors it makes. We investigate why these errors happen. We discover that GPT-3 has imperfect prior knowledge of the actual U.S. statutes on which SARA is based. More importantly, we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen during training. We find GPT-3 performs poorly at answering straightforward questions about these simple synthetic statutes.",5f5253fb15ac382e96ade0335baf1cfaa240fb1d,Semantic Scholar,,, +247,explainable verbal reasoner plus (evr+) a natural language reasoning framework that supports diverse compositional reasoning,"['Zhengzhong Liang', 'Zeyu Zhang', 'Steven Bethard', 'M. Surdeanu']",http://arxiv.org/pdf/2305.00061,2023-04-28,,"Languages models have been successfully applied to a variety of reasoning tasks in NLP, yet the language models still suffer from compositional generalization. In this paper we present Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that enhances language models' compositional reasoning ability by (1) allowing the model to explicitly generate and execute symbolic operators, and (2) allowing the model to decompose a complex task into several simpler ones in a flexible manner. Compared with its predecessor Explainable Verbal Reasoner (EVR) and other previous approaches adopting similar ideas, our framework supports more diverse types of reasoning such as nested loops and different types of recursion. To evaluate our reasoning framework, we build a synthetic dataset with five tasks that require compositional reasoning. Results show that our reasoning framework can enhance the language model's compositional generalization performance on the five tasks, using a fine-tuned language model. We also discussed the possibility and the challenges to combine our reasoning framework with a few-shot prompted language model.",5f88b907cb6b79ce22e826832f05c0471ecb095e,Semantic Scholar,,, +248,language models as knowledge bases for visual word sense disambiguation,"['Anastasia Kritharoula', 'Maria Lymperaiou', 'G. Stamou']",https://arxiv.org/pdf/2310.01960,2023-10-03,,"Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies between linguistic sense disambiguation and fine-grained multimodal retrieval. The recent advancements in the development of visiolinguistic (VL) transformers suggest some off-the-self implementations with encouraging results, which however we argue that can be further improved. 
To this end, we propose some knowledge-enhancement techniques towards improving the retrieval performance of VL transformers via the usage of Large Language Models (LLMs) as Knowledge Bases. More specifically, knowledge stored in LLMs is retrieved with the help of appropriate prompts in a zero-shot manner, achieving performance advancements. Moreover, we convert VWSD to a purely textual question-answering (QA) problem by considering generated image captions as multiple-choice candidate answers. Zero-shot and few-shot prompting strategies are leveraged to explore the potential of such a transformation, while Chain-of-Thought (CoT) prompting in the zero-shot setting is able to reveal the internal reasoning steps an LLM follows to select the appropriate candidate. In total, our presented approach is the first one to analyze the merits of exploiting knowledge stored in LLMs in different ways to solve WVSD.",61bbdbf481a6d3519c22513ebe8d6c3cd381851e,Semantic Scholar,,, +249,challenging bigbench tasks and whether chainofthought can solve them,"['Mirac Suzgun', 'Nathan Scales', 'Nathanael Scharli', 'Sebastian Gehrmann', 'Yi Tay', 'Hyung Won Chung', 'Aakanksha Chowdhery', 'Quoc V. Le', 'E. Chi', 'Denny Zhou', 'Jason Wei']",http://arxiv.org/pdf/2210.09261,2022-10-17,,"BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models? In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the task for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.",663a41c866d49ce052801fbc88947d39764cad29,Semantic Scholar,,, +250,fireact toward language agent finetuning,"['Baian Chen', 'Chang Shu', 'Ehsan Shareghi', 'Nigel Collier', 'Karthik Narasimhan', 'Shunyu Yao']",https://arxiv.org/pdf/2310.05915,2023-10-09,,"Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. 
Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show having more diverse fine-tuning data can further improve agents. Along with other findings regarding scaling effects, robustness, generalization, efficiency and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, as well as open questions toward language agent fine-tuning.",67daf8c4fe1958d20ebdf95c2a36dd490c73836f,Semantic Scholar,,, +251,natural language decomposition and interpretation of complex utterances,"['Harsh Jhamtani', 'Hao Fang', 'Patrick Xia', 'Eran Levy', 'Jacob Andreas', 'Benjamin Van Durme']",http://arxiv.org/pdf/2305.08677,2023-05-15,,"Designing natural language interfaces has historically required collecting supervised data to translate user requests into carefully designed intent representations. This requires enumerating and labeling a long tail of user requests, which is challenging. At the same time, large language models (LLMs) encode knowledge about goals and plans that can help conversational assistants interpret user requests requiring numerous steps to complete. We introduce an approach to handle complex-intent-bearing utterances from a user via a process of hierarchical natural language decomposition and interpretation. Our approach uses a pre-trained language model to decompose a complex utterance into a sequence of simpler natural language steps and interprets each step using the language-to-program model designed for the interface. To test our approach, we collect and release DeCU -- a new NL-to-program benchmark to evaluate Decomposition of Complex Utterances. Experiments show that the proposed approach enables the interpretation of complex utterances with almost no complex training data, while outperforming standard few-shot prompting approaches.",68040213e9a83408cdc491ed3e235b52b537eed1,Semantic Scholar,,, +252,meal stable and active learning for fewshot prompting,"['Abdullatif Köksal', 'Timo Schick', 'Hinrich Schutze']",http://arxiv.org/pdf/2211.08358,2022-11-15,,"Few-shot classification has made great strides due to foundation models that, through priming and prompting, are highly effective few-shot learners. However, this approach has high variance both across different sets of few shots (data selection) and across different finetuning runs (run variability). This is problematic not only because it impedes the fair comparison of different approaches, but especially because it makes few-shot learning too unreliable for many real-world applications. To alleviate these issues, we make two contributions for more stable and effective few-shot learning: First, we propose novel ensembling methods and show that they substantially reduce run variability. Second, we introduce a new active learning (AL) criterion for data selection and present the first AL-based approach specifically tailored towards prompt-based learning. 
In our experiments, we show that our combined method, MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), improves overall performance of prompt-based finetuning by 2.3 points on five diverse tasks. We publicly share our code and data splits in https://github.com/akoksal/MEAL.",6a465062e88853c584148d5a9f6e319050aac0ec,Semantic Scholar,,, +253,pal programaided language models,"['Luyu Gao', 'Aman Madaan', 'Shuyan Zhou', 'Uri Alon', 'Pengfei Liu', 'Yiming Yang', 'Jamie Callan', 'Graham Neubig']",http://arxiv.org/pdf/2211.10435,2022-11-18,,"Large language models (LLMs) have recently demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time (""few-shot prompting""). Much of this success can be attributed to prompting methods such as""chain-of-thought'', which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B which uses chain-of-thought by absolute 15% top-1. Our code and data are publicly available at http://reasonwithpal.com/ .",6c1e1cc1e0e1f8fd026fe517607b2d4535565fa7,Semantic Scholar,,, +254,prompted llms as chatbot modules for long opendomain conversation,"['Gibbeum Lee', 'Volker Hartmann', 'Jongho Park', 'Dimitris Papailiopoulos', 'Kangwook Lee']",https://aclanthology.org/2023.findings-acl.277.pdf,2023-05-08,,"In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for creating high-quality conversational agents without the need for fine-tuning. Our method utilizes pre-trained large language models (LLMs) as individual modules for long-term consistency and flexibility, by using techniques such as few-shot prompting, chain-of-thought (CoT), and external memory. Our human evaluation results show that MPC is on par with fine-tuned chatbot models in open-domain conversations, making it an effective solution for creating consistent and engaging chatbots.",700da3f3758e053c379f905bebee261ba69f1073,Semantic Scholar,,, +255,prompting gpt3 to be reliable,"['Chenglei Si', 'Zhe Gan', 'Zhengyuan Yang', 'Shuohang Wang', 'Jianfeng Wang', 'Jordan L. Boyd-Graber', 'Lijuan Wang']",http://arxiv.org/pdf/2210.09150,2022-10-17,,"Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. 
However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose reliability into four main facets that correspond to the existing framework of ML safety and are well-recognized to be important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3's reliability as it: 1) generalizes out-of-distribution, 2) balances demographic distribution and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only sheds new insights on the reliability of prompting LLMs, but more importantly, our prompting strategies can help practitioners more reliably use LLMs like GPT-3.",711d5e8ddbb840ad31a9ffa3d38590603ba69a92,Semantic Scholar,,, +256,understanding how model size affects fewshot instruction prompting,"['Ayrton San Joaquin', 'Ardy Haroen']",https://arxiv.org/pdf/2212.01907,2022-12-04,,"Large Language Models are affected by the phenomena of memorizing and forgetting their training data. But how do these vary by model size? We work towards this question by investigating how the model size affects the model's ability to discriminate a word's meaning in a given context. We introduce a dataset called DeltaWords, which evaluates a model's ability to follow instructions to select a sentence which replaces the target word with its antonym. We show a weak inverse scaling trend, where task accuracy degrades as model size increase, under extremely few-shot prompting regimes. We show that increasing the number of examples tend to disproportionately benefit larger models than smaller models.",72491b96d8a614d1a9a099707d44593d4b5a8f49,Semantic Scholar,,, +257,smartllm smart multiagent robot task planning using large language models,"['S. S. Kannan', 'Vishnunandan L. N. Venkatesh', 'Byung-Cheol Min']",https://arxiv.org/pdf/2309.10062,2023-09-18,,"In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. 
The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-llm/.",755853c6b30f5a186131e23a63c68a3f2737068e,Semantic Scholar,,, +258,selfexplanation prompting improves dialogue understanding in large language models,"['Haoyu Gao', 'Ting-En Lin', 'Hangyu Li', 'Min Yang', 'Yuchuan Wu', 'Wentao Ma', 'Yongbin Li']",https://arxiv.org/pdf/2309.12940,2023-09-22,,"Task-oriented dialogue (TOD) systems facilitate users in executing various activities via multi-turn dialogues, but Large Language Models (LLMs) often struggle to comprehend these intricate contexts. In this study, we propose a novel""Self-Explanation""prompting strategy to enhance the comprehension abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks. Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts, demonstrating its potential as a powerful tool in enhancing LLMs' comprehension in complex dialogue tasks.",75ce9634d281cc12cbe434f86c737df8e10796fa,Semantic Scholar,,, +259,chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt,"['Fatemeh Nazary', 'Yashar Deldjoo', 'T. D. Noia']",https://arxiv.org/pdf/2308.09731,2023-08-17,,"This study presents an innovative approach to the application of large language models (LLMs) in clinical decision-making, focusing on OpenAI's ChatGPT. Our approach introduces the use of contextual prompts-strategically designed to include task description, feature description, and crucially, integration of domain knowledge-for high-quality binary classification tasks even in data-scarce scenarios. The novelty of our work lies in the utilization of domain knowledge, obtained from high-performing interpretable ML models, and its seamless incorporation into prompt design. By viewing these ML models as medical experts, we extract key insights on feature importance to aid in decision-making processes. This interplay of domain knowledge and AI holds significant promise in creating a more insightful diagnostic tool. Additionally, our research explores the dynamics of zero-shot and few-shot prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT with traditional supervised ML models in different data conditions, we aim to provide insights into the effectiveness of prompt engineering strategies under varied data availability. In essence, this paper bridges the gap between AI and healthcare, proposing a novel methodology for LLMs application in clinical decision support systems. It highlights the transformative potential of effective prompt design, domain knowledge integration, and flexible learning approaches in enhancing automated decision-making.",793eb805800c4af0b06260079e178efa0377b9d7,Semantic Scholar,,, +260,transferring procedural knowledge across commonsense tasks,"['Yifan Jiang', 'Filip Ilievski', 'Kaixin Ma']",https://arxiv.org/pdf/2304.13867,2023-04-26,,"Stories about everyday situations are an essential part of human communication, motivating the need to develop AI agents that can reliably understand these stories. Despite the long list of supervised methods for story completion and procedural understanding, current AI has no mechanisms to automatically track and explain procedures in unseen stories. 
To bridge this gap, we study the ability of AI models to transfer procedural knowledge to novel narrative tasks in a transparent manner. We design LEAP: a comprehensive framework that integrates state-of-the-art modeling architectures, training regimes, and augmentation strategies based on both natural and synthetic stories. To address the lack of densely annotated training data, we devise a robust automatic labeler based on few-shot prompting to enhance the augmented data. Our experiments with in- and out-of-domain tasks reveal insights into the interplay of different architectures, training regimes, and augmentation strategies. LEAP's labeler has a clear positive impact on out-of-domain datasets, while the resulting dense annotation provides native explainability.",7beec352ac2597c3cd3dc7aceb2f8cd068b72d15,Semantic Scholar,,, +261,exploring the landscape of distributional robustness for question answering models,"['Anas Awadalla', 'Mitchell Wortsman', 'Gabriel Ilharco', 'Sewon Min', 'Ian H. Magnusson', 'Hannaneh Hajishirzi', 'Ludwig Schmidt']",http://arxiv.org/pdf/2210.12517,2022-10-22,,"We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter tuning, in-context learning, etc.). We find that, in many cases, model variations do not affect robustness and in-distribution performance alone determines out-of-distribution performance. Moreover, our findings indicate that i) zero-shot and in-context learning methods are more robust to distribution shifts than fully fine-tuned models; ii) few-shot prompt fine-tuned models exhibit better robustness than few-shot fine-tuned span prediction models; iii) parameter-efficient and robustness enhancing training methods provide no significant robustness improvements. In addition, we publicly release all evaluations to encourage researchers to further analyze robustness trends for question answering models.",7cf4f8cb8b4a373d869e785b79160dda7a49a250,Semantic Scholar,,, +262,language models don't always say what they think unfaithful explanations in chainofthought prompting,"['Miles Turpin', 'Julian Michael', 'Ethan Perez', 'Sam Bowman']",http://arxiv.org/pdf/2305.04388,2023-05-07,,"Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always""(A)""--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. 
On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.",7dc928f41e15f65f1267bd87b0fcfcc7e715cb56,Semantic Scholar,,, +263,zara improving fewshot selfrationalization for small language models,"['Wei-Lin Chen', 'An-Zi Yen', 'Hen-Hsen Huang', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.07355,2023-05-12,,"Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gain for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show ZARA achieves SOTA performance on the FEB benchmark, for both the task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluation validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.",7df3595bdb4003589e8ca1757cc39ec03a39a2ff,Semantic Scholar,,, +264,natural language to code generation in interactive data science notebooks,"['Pengcheng Yin', 'Wen-Ding Li', 'Kefan Xiao', 'A. Rao', 'Yeming Wen', 'Kensen Shi', 'Joshua Howland', 'Paige Bailey', 'Michele Catasta', 'H. Michalewski', 'Oleksandr Polozov', 'Charles Sutton']",http://arxiv.org/pdf/2212.09248,2022-12-19,,"Computational notebooks, such as Jupyter notebooks, are interactive computing environments that are ubiquitous among data scientists to perform data wrangling and analytic tasks. To measure the performance of AI pair programmers that automatically synthesize programs for those tasks given natural language (NL) intents from users, we build ARCADE, a benchmark of 1078 code generation problems using the pandas data analysis framework in data science notebooks. ARCADE features multiple rounds of NL-to-code problems from the same notebook. It requires a model to understand rich multi-modal contexts, such as existing notebook cells and their execution states as well as previous turns of interaction. To establish a strong baseline on this challenging task, we develop PaChiNCo, a 62B code language model (LM) for Python computational notebooks, which significantly outperforms public code LMs. Finally, we explore few-shot prompting strategies to elicit better code with step-by-step decomposition and NL explanation, showing the potential to improve the diversity and explainability of model predictions. 
Arcade is publicly available at https://github.com/google-research/arcade-nl2code/.",815c6ca281536d18ec0eb408b6e46e72a0826163,Semantic Scholar,,, +265,multiparty chat conversational agents in group settings with humans and models,"['Jimmy Wei', 'Kurt Shuster', 'Arthur Szlam', 'J. Weston', 'Jack Urbanek', 'M. Komeili']",http://arxiv.org/pdf/2304.13835,2023-04-26,,"Current dialogue research primarily studies pairwise (two-party) conversations, and does not address the everyday setting where more than two speakers converse together. In this work, we both collect and evaluate multi-party conversations to study this more general case. We use the LIGHT environment to construct grounded conversations, where each participant has an assigned character to role-play. We thus evaluate the ability of language models to act as one or more characters in such conversations. Models require two skills that pairwise-trained models appear to lack: (1) being able to decide when to talk; (2) producing coherent utterances grounded on multiple characters. We compare models trained on our new dataset to existing pairwise-trained dialogue models, as well as large language models with few-shot prompting. We find that our new dataset, MultiLIGHT, which we will publicly release, can help bring significant improvements in the group setting.",82beb8a86d438e85a134182128d47607b1b04004,Semantic Scholar,,, +266,towards legally enforceable hate speech detection for public forums,"['Chunyan Luo', 'R. Bhambhoria', 'Xiao-Dan Zhu', 'Samuel Dahan']",http://arxiv.org/pdf/2305.13677,2023-05-23,,"Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.",895f3c9e452ae51fb02786de424ce6d2bba11c3b,Semantic Scholar,,, +267,usb a unified summarization benchmark across tasks and domains,"['Kundan Krishna', 'Prakhar Gupta', 'S. Ramprasad', 'Byron C. Wallace', 'Jeffrey P. Bigham', 'Zachary Chase Lipton']",http://arxiv.org/pdf/2305.14296,2023-05-23,,"While the NLP community has produced numerous summarization benchmarks, none provide the rich annotations required to simultaneously address many important problems related to control and reliability. 
We introduce a Wikipedia-derived benchmark, complemented by a rich set of crowd-sourced annotations, that supports $8$ interrelated tasks: (i) extractive summarization; (ii) abstractive summarization; (iii) topic-based summarization; (iv) compressing selected sentences into a one-line summary; (v) surfacing evidence for a summary sentence; (vi) predicting the factual accuracy of a summary sentence; (vii) identifying unsubstantiated spans in a summary sentence; (viii) correcting factual errors in summaries. We compare various methods on this benchmark and discover that on multiple tasks, moderately-sized fine-tuned models consistently outperform much larger few-shot prompted language models. For factuality-related tasks, we also evaluate existing heuristics to create training data and find that training on them results in worse performance than training on $20\times$ less human-labeled data. Our articles draw from $6$ domains, facilitating cross-domain analysis. On some tasks, the amount of training data matters more than the domain where it comes from, while for other tasks training specifically on data from the target domain, even if limited, is more beneficial.",8ab27849799286459465d2262f926354093b20a9,Semantic Scholar,,, +268,grounding language with visual affordances over unstructured data,"['Oier Mees', 'Jessica Borja-Diaz', 'Wolfram Burgard']",https://arxiv.org/pdf/2210.01911,2022-10-04,,"Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills. However, in practice, learning multi-task, language-conditioned robotic skills typically requires large-scale data collection and frequent human intervention to reset the environment or help correcting the current policies. In this work, we propose a novel approach to efficiently learn general-purpose language-conditioned robot skills from unstructured, offline and reset-free data in the real world by exploiting a self-supervised visuo-lingual affordance model, which requires annotating as little as 1% of the total data with language. We evaluate our method in extensive experiments both in simulated and real-world robotic tasks, achieving state-of-the-art performance on the challenging CALVIN benchmark and learning over 25 distinct visuomotor manipulation tasks with a single policy in the real world. We find that when paired with LLMs to break down abstract natural language instructions into subgoals via few-shot prompting, our method is capable of completing long-horizon, multi-tier tasks in the real world, while requiring an order of magnitude less data than previous approaches. Code and videos are available at http://hulc2.cs.uni-freiburg.de.",8f84dcbad8cd3b5b4d9229c56bc95f24be859a35,Semantic Scholar,,, +269,evaluating large language models on graphs performance insights and comparative analysis,"['Chang Liu', 'Bo Wu']",https://arxiv.org/pdf/2308.11224,2023-08-22,,"Large Language Models (LLMs) have garnered considerable interest within both academic and industrial. Yet, the application of LLMs to graph data remains under-explored. In this study, we evaluate the capabilities of four LLMs in addressing several analytical problems with graph data. We employ four distinct evaluation metrics: Comprehension, Correctness, Fidelity, and Rectification. Our results show that: 1) LLMs effectively comprehend graph data in natural language and reason with graph topology. 2) GPT models can generate logical and coherent results, outperforming alternatives in correctness. 
3) All examined LLMs face challenges in structural reasoning, with techniques like zero-shot chain-of-thought and few-shot prompting showing diminished efficacy. 4) GPT models often produce erroneous answers in multi-answer tasks, raising concerns in fidelity. 5) GPT models exhibit elevated confidence in their outputs, potentially hindering their rectification capacities. Notably, GPT-4 has demonstrated the capacity to rectify responses from GPT-3.5-turbo and its own previous iterations. The code is available at: https://github.com/Ayame1006/LLMtoGraph.",927fc7652e033c9eb17296df087e3e6491112bb0,Semantic Scholar,,, +270,revisiting relation extraction in the era of large language models,"['Somin Wadhwa', 'Silvio Amir', 'Byron C. Wallace']",http://arxiv.org/pdf/2305.05003,2023-05-08,,"Relation extraction (RE) is the core NLP task of inferring semantic relationships between entities from text. Standard supervised RE techniques entail training modules to tag tokens comprising entity spans and then predict the relationship between them. Recent work has instead treated the problem as a sequence-to-sequence task, linearizing relations between entities as target strings to be generated conditioned on the input. Here we push the limits of this approach, using larger language models (GPT-3 and Flan-T5 large) than considered in prior work and evaluating their performance on standard RE tasks under varying levels of supervision. We address issues inherent to evaluating generative approaches to RE by doing human evaluations, in lieu of relying on exact matching. Under this refined evaluation, we find that: (1) Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly equivalent to existing fully supervised models; (2) Flan-T5 is not as capable in the few-shot setting, but supervising and fine-tuning it with Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA results. We release this model as a new baseline for RE tasks.",97782a67971c4ff1a74bf07e82fe20b2c4bf86c4,Semantic Scholar,,, +271,selfpolish enhance reasoning in large language models via problem refinement,"['Zhiheng Xi', 'Senjie Jin', 'Yuhao Zhou', 'Rui Zheng', 'Songyang Gao', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2305.14497,2023-05-23,,"Prompting methods such as Chain-of-Thought (CoT) have shed new light on enhancing the reasoning capabilities of large language models, and researchers have extensively explored the generation process of rationales and answers. However, they have overlooked the potential challenges posed by the poor quality of reasoning problems, which may influence the reasoning performance significantly. In this work, we propose Self-Polish (SP), a novel method that facilitates the model's problem-solving process by prompting them to progressively refine the given problems to be more comprehensible and solvable. Specifically, the method teaches models to eliminate irrelevant information, rearrange the logic structure and organize local conditions into new ones parallelly. SP is orthogonal to all other prompting methods, making it convenient to integrate with state-of-the-art techniques for further improvement. We conduct thorough experiments on five benchmarks to illustrate the effectiveness of the proposed method. For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively. 
Furthermore, our method also showcases impressive performance on robustness evaluation.",9a9b1e2968302eb882870537d4af6e2c722dfd1a,Semantic Scholar,,, +272,spotlight mobile ui understanding using visionlanguage models with a focus,"['Gang Li', 'Yang Li']",http://arxiv.org/pdf/2209.14927,2022-09-29,,"Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope to bypass challenging tasks of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, despite the use of view hierarchies could offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.",9b9fb973e5d3b413baa90648d9eb0743bd889747,Semantic Scholar,,, +273,large language model prompt chaining for long legal document classification,['Dietrich Trautmann'],https://arxiv.org/pdf/2308.04138,2023-08-08,,"Prompting is used to guide or steer a language model in generating an appropriate response that is consistent with the desired outcome. Chaining is a strategy used to decompose complex tasks into smaller, manageable components. In this study, we utilize prompt chaining for extensive legal document classification tasks, which present difficulties due to their intricate domain-specific language and considerable length. Our approach begins with the creation of a concise summary of the original document, followed by a semantic search for related exemplar texts and their corresponding annotations from a training corpus. Finally, we prompt for a label - based on the task - to assign, by leveraging the in-context learning from the few-shot prompt. We demonstrate that through prompt chaining, we can not only enhance the performance over zero-shot, but also surpass the micro-F1 score achieved by larger models, such as ChatGPT zero-shot, using smaller models.",9bf587d032e3764720cccd5beaf941f5c32880bc,Semantic Scholar,,, +274,lafter labelfree tuning of zeroshot classifier using language and unlabeled image collections,"['M. J. Mirza', 'Leonid Karlinsky', 'Wei Lin', 'M. Koziński', 'Horst Possegger', 'R. Feris', 'H. Bischof']",http://arxiv.org/pdf/2305.18287,2023-05-29,,"Recently, large-scale pre-trained Vision and Language (VL) models have set a new state-of-the-art (SOTA) in zero-shot visual classification enabling open-vocabulary recognition of potentially unlimited set of categories defined as simple language prompts. 
However, despite these great advances, the performance of these zeroshot classifiers still falls short of the results of dedicated (closed category set) classifiers trained with supervised fine tuning. In this paper we show, for the first time, how to reduce this gap without any labels and without any paired VL data, using an unlabeled image collection and a set of texts auto-generated using a Large Language Model (LLM) describing the categories of interest and effectively substituting labeled visual instances of those categories. Using our label-free approach, we are able to attain significant performance improvements over the zero-shot performance of the base VL model and other contemporary methods and baselines on a wide variety of datasets, demonstrating absolute improvement of up to 11.7% (3.8% on average) in the label-free setting. Moreover, despite our approach being label-free, we observe 1.3% average gains over leading few-shot prompting baselines that do use 5-shot supervision.",a04883d1d780b438de6c127caf7ebe3d9233e193,Semantic Scholar,,, +275,street a multitask structured reasoning and explanation benchmark,"['D. Ribeiro', 'Shen Wang', 'Xiaofei Ma', 'He Zhu', 'Rui Dong', 'Deguang Kong', 'Juliette Burger', 'Anjelica Ramos', 'William Yang Wang', 'Zhiheng Huang', 'G. Karypis', 'Bing Xiang', 'D. Roth']",http://arxiv.org/pdf/2302.06729,2023-02-13,,"We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark. Unlike most existing question-answering (QA) datasets, we expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer. We perform extensive evaluation with popular language models such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models still lag behind human performance when producing such structured reasoning steps. We believe this work will provide a way for the community to better train and test systems on multi-step reasoning and explanations in natural language.",a3a241e9397fe29b37f96cb5e8f4b8bebed3d3da,Semantic Scholar,,, +276,large language models as tax attorneys a case study in legal capabilities emergence,"['John J. Nay', 'David Karamardian', 'Sarah Lawsky', 'Wenting Tao', 'Meghana Moorthy Bhat', 'Raghav Jain', 'Aaron Travis Lee', 'Jonathan H. Choi', 'Jungo Kasai']",http://arxiv.org/pdf/2306.07075,2023-06-12,,"Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. 
The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.",a6a0963fcf21ed47a2616ca3980f8f4f21e6d5ad,Semantic Scholar,,, +277,distilling stepbystep! outperforming larger language models with less training data and smaller model sizes,"['Cheng-Yu Hsieh', 'Chun-Liang Li', 'Chih-Kuan Yeh', 'Hootan Nakhost', 'Yasuhisa Fujii', 'Alexander J. Ratner', 'Ranjay Krishna', 'Chen-Yu Lee', 'Tomas Pfister']",https://arxiv.org/pdf/2305.02301,2023-05-03,,"Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data needed by finetuning or distillation. Our method extracts LLM rationales as additional supervision for training small models within a multi-task framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80% of available data on a benchmark, whereas standard finetuning the same T5 model struggles to match even by using 100% of the dataset. We release the code at: https://github.com/google-research/distilling-step-by-step .",aad167be3c902388ea625da4117fcae4325b8b7d,Semantic Scholar,,, +278,prompt programming for large language models beyond the fewshot paradigm,"['Laria Reynolds', 'Kyle McDonell']",https://arxiv.org/pdf/2102.07350,2021-02-15,,"Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models’ novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. 
Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.",ac3cdb50606f7770eef8e4cd951840a4f71287a0,Semantic Scholar,,, +279,the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant,"['Jingqing Zhang', 'Kai Sun', 'A. Jagadeesh', 'Mahta Ghahfarokhi', 'Deepa Gupta', 'Ashok Gupta', 'Vibhor Gupta', 'Yike Guo']",https://arxiv.org/pdf/2307.08152,2023-07-16,,"Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. However, none have assessed its performance using a large-scale real-world electronic health record database, nor have evaluated its utility in providing clinical diagnostic assistance for patients across a full range of disease presentation. We performed two analyses using ChatGPT and GPT-4, one to identify patients with specific medical diagnoses using a real-world large electronic health record database and the other, in providing diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4 across disease classification tasks with chain of thought and few-shot prompting can achieve performance as high as 96% F1 scores. For patient assessment, GPT-4 can accurately diagnose three out of four times. However, there were mentions of factually incorrect statements, overlooking crucial medical findings, recommendations for unnecessary investigations and overtreatment. These issues coupled with privacy concerns, make these models currently inadequate for real world clinical use. However, limited data and time needed for prompt engineering in comparison to configuration of conventional machine learning workflows highlight their potential for scalability across healthcare applications.",b3d6fec3f1a878b0c612f0ffed820b045c2c46d8,Semantic Scholar,,, +280,do gpts produce less literal translations,"['Vikas Raunak', 'Arul Menezes', 'Matt Post', 'Hany Hassan Awadallah']",http://arxiv.org/pdf/2305.16806,2023-05-26,,"Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.",b4170009de40c1c46adea6a314734434ecd4b0dc,Semantic Scholar,,, +281,adelt transpilation between deep learning frameworks,"['Linyuan Gong', 'Jiayi Wang', 'Alvin Cheung']",http://arxiv.org/pdf/2303.03593,2023-03-07,,"We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source transpilation between deep learning frameworks. 
Unlike prior approaches, we decouple the transpilation of code skeletons and the mapping of API keywords (an API function name or a parameter name). ADELT transpile code skeletons using few-shot prompting on big language models. Based on contextual embeddings extracted by a BERT for code, we train aligned API embeddings in a domain-adversarial setup, upon which we generate a dictionary for keyword translation. The model is trained on our unlabeled DL corpus from web crawl data, without using any hand-crafted rules and parallel data. Our method outperforms state-of-the-art transpilers on multiple transpilation pairs including PyTorch-Keras and PyTorch-MXNet by 15.9pts and 12.0pts in exact match scores respectively.",b6bea98ca29267acbebca6cdf64eb07a5671e000,Semantic Scholar,,, +282,decomposed prompting for machine translation between related languages using large language models,"['Ratish Puduppully', 'Raj Dabre', 'A. Aw', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13085,2023-05-22,,"This study investigates machine translation between related languages i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.",b6e5855b6a4e425ba251a93516f2bccffe5ba403,Semantic Scholar,,, +283,prompt a robot to walk with large language models,"['Yen-Jen Wang', 'Bike Zhang', 'Jianyu Chen', 'K. Sreenath']",https://arxiv.org/pdf/2309.09969,2023-09-18,,"Large language models (LLMs) pre-trained on vast internet-scale data have showcased remarkable capabilities across diverse domains. Recently, there has been escalating interest in deploying LLMs for robotics, aiming to harness the power of foundation models in real-world settings. However, this approach faces significant challenges, particularly in grounding these models in the physical world and in generating dynamic robot motions. To address these issues, we introduce a novel paradigm in which we use few-shot prompts collected from the physical environment, enabling the LLM to autoregressively generate low-level control commands for robots without task-specific fine-tuning. Experiments across various robots and environments validate that our method can effectively prompt a robot to walk. We thus illustrate how LLMs can proficiently function as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. 
The project website and source code can be found at: https://prompt2walk.github.io/ .",b70075b496c1f519093884945be5670c32cbceed,Semantic Scholar,,, +284,freshllms refreshing large language models with search engine augmentation,"['Tu Vu', 'Mohit Iyyer', 'Xuezhi Wang', 'Noah Constant', 'Jerry Wei', 'Jason Wei', 'Chris Tar', 'Yun-Hsuan Sung', 'Denny Zhou', 'Quoc Le', 'Thang Luong']",https://arxiv.org/pdf/2310.03214,2023-10-05,,"Most large language models (LLMs) are trained once and never updated; thus, they lack the ability to dynamically adapt to our ever-changing world. In this work, we perform a detailed study of the factuality of LLM-generated text in the context of answering questions that test current world knowledge. Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked. We benchmark a diverse array of both closed and open-source LLMs under a two-mode evaluation procedure that allows us to measure both correctness and hallucination. Through human evaluations involving more than 50K judgments, we shed light on limitations of these models and demonstrate significant room for improvement: for instance, all models (regardless of model size) struggle on questions that involve fast-changing knowledge and false premises. Motivated by these results, we present FreshPrompt, a simple few-shot prompting method that substantially boosts the performance of an LLM on FreshQA by incorporating relevant and up-to-date information retrieved from a search engine into the prompt. Our experiments show that FreshPrompt outperforms both competing search engine-augmented prompting methods such as Self-Ask (Press et al., 2022) as well as commercial systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that both the number of retrieved evidences and their order play a key role in influencing the correctness of LLM-generated answers. Additionally, instructing the LLM to generate concise and direct answers helps reduce hallucination compared to encouraging more verbose answers. To facilitate future work, we release FreshQA at github.com/freshllms/freshqa and commit to updating it at regular intervals.",be177300487b6d0f25e6cade9a31900454b13281,Semantic Scholar,,, +285,enhancing incontext learning with answer feedback for multispan question answering,"['Zixian Huang', 'Jiaying Zhou', 'Gengyang Xiao', 'Gong Cheng']",http://arxiv.org/pdf/2306.04508,2023-06-07,,"Whereas the recent emergence of large language models (LLMs) like ChatGPT has exhibited impressive general performance, it still has a large gap with fully-supervised models on specific tasks such as multi-span question answering. Previous researches found that in-context learning is an effective approach to exploiting LLM, by using a few task-related labeled data as demonstration examples to construct a few-shot prompt for answering new questions. A popular implementation is to concatenate a few questions and their correct answers through simple templates, informing LLM of the desired output. In this paper, we propose a novel way of employing labeled data such that it also informs LLM of some undesired output, by extending demonstration examples with feedback about answers predicted by an off-the-shelf model, e.g., correct, incorrect, or incomplete. 
Experiments on three multi-span question answering datasets as well as a keyphrase extraction dataset show that our new prompting strategy consistently improves LLM's in-context learning performance.",c1647923704251875f4160e91b59afbbdc58483e,Semantic Scholar,,, +286,benchmarking arabic ai with large language models,"['Ahmed Abdelali', 'Hamdy Mubarak', 'Shammur A. Chowdhury', 'Maram Hasanain', 'Basel Mousi', 'S. Boughorbel', 'Yassine El Kheir', 'Daniel Izham', 'Fahim Dalvi', 'Majd Hawasly', 'Nizi Nazar', 'Yousseif Elshahawy', 'Ahmed M. Ali', 'Nadir Durrani', 'Natasa Milic-Frayling', 'Firoj Alam']",http://arxiv.org/pdf/2305.14982,2023-05-24,,"With large Foundation Models (FMs), language technologies (AI in general) are entering a new paradigm: eliminating the need for developing large-scale task-specific datasets and supporting a variety of tasks through set-ups ranging from zero-shot to few-shot learning. However, understanding FMs capabilities requires a systematic benchmarking effort by comparing FMs performance with the state-of-the-art (SOTA) task-specific models. With that goal, past work focused on the English language and included a few efforts with multiple languages. Our study contributes to ongoing research by evaluating FMs performance for standard Arabic NLP and Speech processing, including a range of tasks from sequence tagging to content classification across diverse domains. We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM, addressing 33 unique tasks using 59 publicly available datasets resulting in 96 test setups. For a few tasks, FMs performs on par or exceeds the performance of the SOTA models but for the majority it under-performs. Given the importance of prompt for the FMs performance, we discuss our prompt strategies in detail and elaborate on our findings. Our future work on Arabic AI will explore few-shot prompting, expand the range of tasks, and investigate additional open-source models.",c5fa70db839fd05b1111f3586a601d8a93e78d0c,Semantic Scholar,,, +287,internetaugmented language models through fewshot prompting for opendomain question answering,"['Angeliki Lazaridou', 'E. Gribovskaya', 'Wojciech Stokowiec', 'N. Grigorev']",https://arxiv.org/pdf/2203.05115,2022-03-10,,"In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language models (LSLMs) to overcome some of their challenges with respect to grounding to factual and up-to-date information. Motivated by semi-parametric language models (LMs), which ground their decisions in external retrieved evidence, we use few-shot prompting to learn to condition LMs on information returned from the web using Google Search, a broad and constantly updated knowledge source. Our approach does not involve fine-tuning or learning additional parameters, thus making it applicable to any LM, offering therefore a strong baseline. Indeed, we find that LMs conditioned on the web surpass performance of closed-book models of similar, or even larger, model sizes in open-domain question answering. Finally, we find that increasing the inference-time compute of models, achieved via using multiple retrieved evidences to generate multiple answers followed by a reranking stage that uses scores generated by the same LMs, leads to better performance and alleviates lower performance of smaller few-shot LMs. 
All in all, our findings suggest that it might be beneficial to slow down the race towards the biggest model and instead shift attention towards finding more effective ways to use models, including but not limited to, better prompting or increasing inference-time compute.",c70eb74e09c41e8fcc71dd59e3b4d631f657f7cd,Semantic Scholar,,, +288,is chatgpt a good recommender a preliminary study,"['Junling Liu', 'Chaoyong Liu', 'Renjie Lv', 'Kangdi Zhou', 'Y. Zhang']",http://arxiv.org/pdf/2304.10149,2023-04-20,,"Recommendation systems have witnessed significant advancements and have been widely used over the past decades. However, most traditional recommendation methods are task-specific and therefore lack efficient generalization ability. Recently, the emergence of ChatGPT has significantly advanced NLP tasks by enhancing the capabilities of conversational models. Nonetheless, the application of ChatGPT in the recommendation domain has not been thoroughly investigated. In this paper, we employ ChatGPT as a general-purpose recommendation model to explore its potential for transferring extensive linguistic and world knowledge acquired from large-scale corpora to recommendation scenarios. Specifically, we design a set of prompts and evaluate ChatGPT's performance on five recommendation scenarios. Unlike traditional recommendation methods, we do not fine-tune ChatGPT during the entire evaluation process, relying only on the prompts themselves to convert recommendation tasks into natural language tasks. Further, we explore the use of few-shot prompting to inject interaction information that contains user potential interest to help ChatGPT better understand user needs and interests. Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT has achieved promising results in certain tasks and is capable of reaching the baseline level in others. We conduct human evaluations on two explainability-oriented tasks to more accurately evaluate the quality of contents generated by different models. And the human evaluations show ChatGPT can truly understand the provided information and generate clearer and more reasonable results. We hope that our study can inspire researchers to further explore the potential of language models like ChatGPT to improve recommendation performance and contribute to the advancement of the recommendation systems field.",ca7bd64d372e3bcb3f4633ca4a20291ff57de3c3,Semantic Scholar,,, +289,legal prompting teaching a language model to think like a lawyer,"['Fang Yu', 'Lee Quartey', 'Frank Schilder']",http://arxiv.org/pdf/2212.01326,2022-12-02,,"Large language models that are capable of zero or few-shot prompting approaches have given rise to the new research area of prompt engineering. Recent advances showed that for example Chain-of-Thought (CoT) prompts can improve arithmetic or common sense tasks significantly. We explore how such approaches fare with legal reasoning tasks and take the COLIEE entailment task based on the Japanese Bar exam for testing zero-shot/few-shot and fine-tuning approaches. Our findings show that while CoT prompting and fine-tuning with explanations approaches show improvements, the best results are produced by prompts that are derived from specific legal reasoning techniques such as IRAC (Issue, Rule, Application, Conclusion). 
Based on our experiments we improve the 2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best system of 0.6789 accuracy with an accuracy of 0.7431.",cc43306e22dbfd5bc35251ab8c8ba37e4fc2a1b3,Semantic Scholar,,, +290,query2doc query expansion with large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",https://arxiv.org/pdf/2303.07678,2023-03-14,,"This paper introduces a simple yet effective query expansion approach, denoted as query2doc, to improve both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), and then expands the query with generated pseudo-documents. LLMs are trained on web-scale text corpora and are adept at knowledge memorization. The pseudo-documents from LLMs often contain highly relevant information that can aid in query disambiguation and guide the retrievers. Experimental results demonstrate that query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets, such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.",ccc772d88c231275f24c4fac9b28bbe0942e1107,Semantic Scholar,,, +291,how to design translation prompts for chatgpt an empirical study,"['Yuan Gao', 'Ruili Wang', 'Feng Hou']",http://arxiv.org/pdf/2304.02182,2023-04-05,,"The recently released ChatGPT has demonstrated surprising abilities in natural language understanding and natural language generation. Machine translation relies heavily on the abilities of language understanding and generation. Thus, in this paper, we explore how to assist machine translation with ChatGPT. We adopt several translation prompts on a wide range of translations. Our experimental results show that ChatGPT with designed translation prompts can achieve comparable or better performance over commercial translation systems for high-resource language translations. We further evaluate the translation quality using multiple references, and ChatGPT achieves superior performance compared to commercial systems. We also conduct experiments on domain-specific translations, the final results show that ChatGPT is able to comprehend the provided domain keyword and adjust accordingly to output proper translations. At last, we perform few-shot prompts that show consistent improvement across different base prompts. Our work provides empirical evidence that ChatGPT still has great potential in translations.",cd77ea482d9245f3fcaeb670261a00c3fb5cabbd,Semantic Scholar,,, +292,passive learning of active causal strategies in agents and language models,"['Andrew Kyle Lampinen', 'Stephanie C. Y. Chan', 'Ishita Dasgupta', 'A. Nam', 'Jane X. Wang']",https://arxiv.org/pdf/2305.16183,2023-05-25,,"What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, we show that purely passive learning can in fact allow an agent to learn generalizable strategies for determining and using causal structures, as long as the agent can intervene at test time. We formally illustrate that learning a strategy of first experimenting, then seeking goals, can allow generalization from passive learning in principle. 
We then show empirically that agents trained via imitation on expert data can indeed generalize at test time to infer and use causal links which are never present in the training data; these agents can also generalize experimentation strategies to novel variable sets never observed in training. We then show that strategies for causal intervention and exploitation can be generalized from passive data even in a more complex environment with high-dimensional observations, with the support of natural language explanations. Explanations can even allow passive learners to generalize out-of-distribution from perfectly-confounded training data. Finally, we show that language models, trained only on passive next-word prediction, can generalize causal intervention strategies from a few-shot prompt containing examples of experimentation, together with explanations and reasoning. These results highlight the surprising power of passive learning of active causal strategies, and may help to understand the behaviors and capabilities of language models.",ce0154d9251f67c262512b6e598f3aa3ba9fe9a4,Semantic Scholar,,, +293,diversity measures domainindependent proxies for failure in language model queries,"['Noel Ngu', 'Nathaniel Lee', 'P. Shakarian']",https://arxiv.org/pdf/2308.11189,2023-08-22,,"Error prediction in large language models often relies on domain-specific information. In this paper, we present measures for quantification of error in the response of a large language model based on the diversity of responses to a given prompt - hence independent of the underlying application. We describe how three such measures - based on entropy, Gini impurity, and centroid distance - can be employed. We perform a suite of experiments on multiple datasets and temperature settings to demonstrate that these measures strongly correlate with the probability of failure. Additionally, we present empirical results demonstrating how these measures can be applied to few-shot prompting, chain-of-thought reasoning, and error detection.",d4fc988c6510420a5290dfe8d1a991ca4878d696,Semantic Scholar,,, +294,log parsing how far can chatgpt go,"['Van-Hoang Le', 'Hongyu Zhang']",https://arxiv.org/pdf/2306.01590,2023-06-02,,"Software logs play an essential role in ensuring the reliability and maintainability of large-scale software systems, as they are often the sole source of runtime information. Log parsing, which converts raw log messages into structured data, is an important initial step towards downstream log analytics. In recent studies, ChatGPT, the current cutting-edge large language model (LLM), has been widely applied to a wide range of software engineering tasks. However, its performance in automated log parsing remains unclear. In this paper, we evaluate ChatGPT's ability to undertake log parsing by addressing two research questions. (1) Can ChatGPT effectively parse logs? (2) How does ChatGPT perform with different prompting methods? Our results show that ChatGPT can achieve promising results for log parsing with appropriate prompts, especially with few-shot prompting. Based on our findings, we outline several challenges and opportunities for ChatGPT-based log parsing.",d589c49e1cd1dd3b994dcac01b4c6e7fb8eef161,Semantic Scholar,,, +295,an empirical evaluation of prompting strategies for large language models in zeroshot clinical natural language processing,"['S. Sivarajkumar', 'Mark Kelley', 'Alyssa Samolyk-Mazzanti', 'S. 
Visweswaran', 'Yanshan Wang']",https://arxiv.org/pdf/2309.08008,2023-09-14,,"Large language models (LLMs) have shown remarkable capabilities in Natural Language Processing (NLP), especially in domains where labeled data is scarce or expensive, such as clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. In this paper, we present a comprehensive and systematic experimental study on prompt engineering for five clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence Extraction, Coreference Resolution, Medication Status Extraction, and Medication Attribute Extraction. We assessed the prompts proposed in recent literature, including simple prefix, simple cloze, chain of thought, and anticipatory prompts, and introduced two new types of prompts, namely heuristic prompting and ensemble prompting. We evaluated the performance of these prompts on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted zero-shot prompting with few-shot prompting, and provide novel insights and guidelines for prompt engineering for LLMs in clinical NLP. To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative AI, and we hope that it will inspire and inform future research in this area.",d5a6fc6aa139066e3b66ba63002e7d84c109aebc,Semantic Scholar,,, +296,mindagent emergent gaming interaction,"['Ran Gong', 'Qiuyuan Huang', 'Xiaojian Ma', 'Hoi Vo', 'Zane Durante', 'Yusuke Noda', 'Zilong Zheng', 'Song-Chun Zhu', 'Demetri Terzopoulos', 'Fei-Fei Li', 'Jianfeng Gao']",https://arxiv.org/pdf/2309.09971,2023-09-18,,"Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system and can coordinate these agents into completing sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming frameworks, the community has insufficient benchmarks towards building general multi-agents collaboration infrastructure that encompass both LLM and human-NPCs collaborations. In this work, we propose a novel infrastructure - MindAgent - to evaluate planning and coordination emergent capabilities for gaming interaction. In particular, our infrastructure leverages existing gaming framework, to i) require understanding of the coordinator for a multi-agent system, ii) collaborate with human players via un-finetuned proper instructions, and iii) establish an in-context learning on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new gaming scenario and related benchmark that dispatch a multi-agent collaboration efficiency and supervise multiple agents playing the game simultaneously. We conduct comprehensive evaluations with new auto-metric CoS for calculating the collaboration efficiency. Finally, our infrastructure can be deployed into real-world gaming scenarios in a customized VR version of CUISINEWORLD and adapted in existing broader Minecraft gaming domain. 
We hope our findings on LLMs and the new infrastructure for general-purpose scheduling and coordination can help shed light on how such skills can be obtained by learning from large language corpora.",d7d712e507c1c6273b05c773c825a668c5cf1504,Semantic Scholar,,, +297,boosted prompt ensembles for large language models,"['Silviu Pitis', 'Michael Ruogu Zhang', 'Andrew Wang', 'Jimmy Ba']",http://arxiv.org/pdf/2304.05970,2023-04-12,,"Methods such as chain-of-thought prompting and self-consistency have pushed the frontier of language model reasoning performance with no additional training. To further improve performance, we propose a prompt ensembling method for large language models, which uses a small dataset to construct a set of few shot prompts that together comprise a ``boosted prompt ensemble''. The few shot examples for each prompt are chosen in a stepwise fashion to be ``hard'' examples on which the previous step's ensemble is uncertain. We show that this outperforms single-prompt output-space ensembles and bagged prompt-space ensembles on the GSM8k and AQuA datasets, among others. We propose both train-time and test-time versions of boosted prompting that use different levels of available annotation and conduct a detailed empirical study of our algorithm.",dca6c3927ade6481a1ae080f5c24decbfeced1be,Semantic Scholar,,, +298,bootstrapping multilingual semantic parsers using large language models,"['Abhijeet Awasthi', 'Nitish Gupta', 'Bidisha Samanta', 'Shachi Dave', 'Sunita Sarawagi', 'P. Talukdar']",http://arxiv.org/pdf/2210.07313,2022-10-13,,"Despite cross-lingual generalization demonstrated by pre-trained multilingual models, the translate-train paradigm of transferring English datasets across multiple languages remains to be a key mechanism for training task-specific multilingual models. However, for many low-resource languages, the availability of a reliable translation service entails significant amounts of costly human-annotated translation pairs. Further, translation services may continue to be brittle due to domain mismatch between task-specific input text and general-purpose text used for training translation models. For multilingual semantic parsing, we demonstrate the effectiveness and flexibility offered by large language models (LLMs) for translating English datasets into several languages via few-shot prompting. Through extensive comparisons on two public datasets, MTOP and MASSIVE, spanning 50 languages and several domains, we show that our method of translating data using LLMs outperforms a strong translate-train baseline on 41 out of 50 languages. We study the key design choices that enable more effective multilingual data translation via prompted LLMs.",dda0f7f086fc875d583604f8b0cf4a8678bc4de4,Semantic Scholar,,, +299,prompt2model generating deployable models from natural language instructions,"['Vijay Viswanathan', 'Chenyang Zhao', 'Amanda Bertsch', 'Tongshuang Sherry Wu', 'Graham Neubig']",https://arxiv.org/pdf/2308.12261,2023-08-23,,"Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. 
In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, Prompt2Model trains models that outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20% while being up to 700 times smaller. We also show that this data can be used to obtain reliable performance estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model.",e69684fb06a7b1fe621d7ef0c97fc2ca0e122c43,Semantic Scholar,,, +300,multilingual large language models are not (yet) codeswitchers,"['Ruochen Zhang', 'Samuel Cahyawijaya', 'Jan Christian Blaise Cruz', 'Alham Fikri Aji']",http://arxiv.org/pdf/2305.14235,2023-05-23,,"Multilingual Large Language Models (LLMs) have recently shown great capabilities in a wide range of tasks, exhibiting state-of-the-art performance through zero-shot or few-shot prompting methods. While there have been extensive studies on their abilities in monolingual tasks, the investigation of their potential in the context of code-switching (CSW), the practice of alternating languages within an utterance, remains relatively uncharted. In this paper, we provide a comprehensive empirical analysis of various multilingual LLMs, benchmarking their performance across four tasks: sentiment analysis, machine translation, summarization and word-level language identification. Our results indicate that despite multilingual LLMs exhibiting promising outcomes in certain tasks using zero or few-shot prompting, they still underperform in comparison to fine-tuned models of much smaller scales. We argue that current""multilingualism""in LLMs does not inherently imply proficiency with code-switching texts, calling for future research to bridge this discrepancy.",eda54452d8a8a412c2a985ef11572cb468906b1f,Semantic Scholar,,, +301,product information extraction using chatgpt,"['Alexander Brinkmann', 'Roee Shraga', 'Reng Chiz Der', 'Christian Bizer']",http://arxiv.org/pdf/2306.14921,2023-06-23,,"Structured product data in the form of attribute/value pairs is the foundation of many e-commerce applications such as faceted product search, product comparison, and product recommendation. Product offers often only contain textual descriptions of the product attributes in the form of titles or free text. Hence, extracting attribute/value pairs from textual product descriptions is an essential enabler for e-commerce applications. In order to excel, state-of-the-art product information extraction methods require large quantities of task-specific training data. The methods also struggle with generalizing to out-of-distribution attributes and attribute values that were not a part of the training data. Due to being pre-trained on huge amounts of text as well as due to emergent effects resulting from the model size, Large Language Models like ChatGPT have the potential to address both of these shortcomings. This paper explores the potential of ChatGPT for extracting attribute/value pairs from product descriptions. 
We experiment with different zero-shot and few-shot prompt designs. Our results show that ChatGPT achieves a performance similar to a pre-trained language model but requires much smaller amounts of training data and computation for fine-tuning.",f00e7326baa9600e46b3a8e7077dc3a349f90a01,Semantic Scholar,,, +302,large language models for user interest journeys,"['Konstantina Christakopoulou', 'Alberto Lalama', 'Cj Adams', 'Iris Qu', 'Yifat Amir', 'S. Chucri', 'Pierce Vollucci', 'Fabio Soldo', 'Dina Bseiso', 'Sarah Scodel', 'Lucas Dixon', 'Ed H. Chi', 'Minmin Chen']",http://arxiv.org/pdf/2305.15498,2023-05-24,,"Large language models (LLMs) have shown impressive capabilities in natural language understanding and generation. Their potential for deeper user understanding and improved personalized user experience on recommendation platforms is, however, largely untapped. This paper aims to address this gap. Recommender systems today capture users' interests through encoding their historical activities on the platforms. The generated user representations are hard to examine or interpret. On the other hand, if we were to ask people about interests they pursue in their life, they might talk about their hobbies, like I just started learning the ukulele, or their relaxation routines, e.g., I like to watch Saturday Night Live, or I want to plant a vertical garden. We argue, and demonstrate through extensive experiments, that LLMs as foundation models can reason through user activities, and describe their interests in nuanced and interesting ways, similar to how a human would. We define interest journeys as the persistent and overarching user interests, in other words, the non-transient ones. These are the interests that we believe will benefit most from the nuanced and personalized descriptions. We introduce a framework in which we first perform personalized extraction of interest journeys, and then summarize the extracted journeys via LLMs, using techniques like few-shot prompting, prompt-tuning and fine-tuning. Together, our results in prompting LLMs to name extracted user journeys in a large-scale industrial platform demonstrate great potential of these models in providing deeper, more interpretable, and controllable user understanding. We believe LLM powered user understanding can be a stepping stone to entirely new user experiences on recommendation platforms that are journey-aware, assistive, and enabling frictionless conversation down the line.",f834aed32f5531bfa426faab71878c549572500e,Semantic Scholar,,, +303,promptbased extraction of social determinants of health using fewshot learning,"['Giridhar Kaushik Ramachandran', 'Yujuan Fu', 'Bin Han', 'K. Lybarger', 'Nicholas J. Dobbins', 'Ozlem Uzuner', 'M. Yetisgen']",http://arxiv.org/pdf/2306.07170,2023-06-12,,"Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. 
Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.",386bd4d25043516f076ea7b2296a1ebec84f43ce,Semantic Scholar,,, +304,deplot oneshot visual language reasoning by plottotable translation,"['Fangyu Liu', 'Julian Martin Eisenschlos', 'Francesco Piccinno', 'Syrine Krichene', 'Chenxi Pang', 'Kenton Lee', 'Mandar Joshi', 'Wenhu Chen', 'Nigel Collier', 'Y. Altun']",http://arxiv.org/pdf/2212.10505,2022-12-20,,"Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than>28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.",4d3a49d1439a0b8fbb0e9f588970ad0f1d70dec8,Semantic Scholar,,, +305,short answer grading using oneshot prompting and text similarity scoring model,['Su-Youn Yoon'],http://arxiv.org/pdf/2305.18638,2023-05-29,,"In this study, we developed an automated short answer grading (ASAG) model that provided both analytic scores and final holistic scores. Short answer items typically consist of multiple sub-questions, and providing an analytic score and the text span relevant to each sub-question can increase the interpretability of the automated scores. Furthermore, they can be used to generate actionable feedback for students. Despite these advantages, most studies have focused on predicting only holistic scores due to the difficulty in constructing dataset with manual annotations. To address this difficulty, we used large language model (LLM)-based one-shot prompting and a text similarity scoring model with domain adaptation using small manually annotated dataset. The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a subset of the publicly available ASAG dataset. The model achieved a substantial improvement over the majority baseline.",d1aa858644154af50e36860e6761ae52ae655bd3,Semantic Scholar,,, +306,utilizing language models for energy load forecasting,"['Hao Xue', 'Flora D. Salim']",https://arxiv.org/pdf/2310.17788,2023-10-26,,"Energy load forecasting plays a crucial role in optimizing resource allocation and managing energy consumption in buildings and cities. In this paper, we propose a novel approach that leverages language models for energy load forecasting. 
We employ prompting techniques to convert energy consumption data into descriptive sentences, enabling fine-tuning of language models. By adopting an autoregressive generating approach, our proposed method enables predictions of various horizons of future energy load consumption. Through extensive experiments on real-world datasets, we demonstrate the effectiveness and accuracy of our proposed method. Our results indicate that utilizing language models for energy load forecasting holds promise for enhancing energy efficiency and facilitating intelligent decision-making in energy systems.",00c2aea466034c563b7aa3cd8eadb1fc46b119fa,Semantic Scholar,,, +307,s3dst structured opendomain dialogue segmentation and state tracking in the era of llms,"['Sarkar Snigdha Sarathi Das', 'C. Shah', 'Mengting Wan', 'Jennifer Neville', 'Longfei Yang', 'Reid Andersen', 'Georg Buscher', 'Tara Safavi']",https://arxiv.org/pdf/2309.08827,2023-09-16,,"The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations. While sufficient for task-oriented dialogue systems supporting narrow domain applications, the advent of Large Language Model (LLM)-based chat systems has introduced many real-world intricacies in open-domain dialogues. These intricacies manifest in the form of increased complexity in contextual interactions, extended dialogue sessions encompassing a diverse array of topics, and more frequent contextual shifts. To handle these intricacies arising from evolving LLM-based chat systems, we propose joint dialogue segmentation and state tracking per segment in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a true open-domain dialogue system, we propose S3-DST, a structured prompting technique that harnesses Pre-Analytical Recollection, a novel grounding mechanism we designed for improving long context tracking. To demonstrate the efficacy of our proposed approach in joint segmentation and state tracking, we evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as well as publicly available DST and segmentation datasets. Across all datasets and settings, S3-DST consistently outperforms the state-of-the-art, demonstrating its potency and robustness the next generation of LLM-based chat systems.",034f1d77d832460a239072c81b5bb178b93c1e9f,Semantic Scholar,,, +308,take a step back evoking reasoning via abstraction in large language models,"['Huaixiu Steven Zheng', 'Swaroop Mishra', 'Xinyun Chen', 'Heng-Tze Cheng', 'E. Chi', 'Quoc V Le', 'Denny Zhou']",https://arxiv.org/pdf/2310.06117,2023-10-09,,"We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide the reasoning steps, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of Step-Back Prompting with PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. 
For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%, TimeQA by 27%, and MuSiQue by 7%.",0786c88990235414611478099e43611542d973b0,Semantic Scholar,,, +309,chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation,['Brandon Harwood'],http://arxiv.org/pdf/2305.03852,2023-05-05,,"This paper explores the potential for utilizing generative AI models in group-focused co-creative frameworks to enhance problem solving and ideation in business innovation and co-creation contexts, and proposes a novel prompting technique for conversational generative AI agents which employ methods inspired by traditional 'human-to-human' facilitation and instruction to enable active contribution to Design Thinking, a co-creative framework. Through experiments using this prompting technique, we gather evidence that conversational generative transformers (i.e. ChatGPT) have the capability to contribute context-specific, useful, and creative input into Design Thinking activities. We also discuss the potential benefits, limitations, and risks associated with using generative AI models in co-creative ideation and provide recommendations for future research.",0820a7ec1b7cac3470836161a92da7d59f626d14,Semantic Scholar,,, +310,image to tree with recursive prompting,"['James Batten', 'Matthew Sinclair', 'Ben Glocker', 'M. Schaap']",http://arxiv.org/pdf/2301.00447,2023-01-01,,". Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.",118802f91718ea2c566f2eaf1b4e25c439459f4d,Semantic Scholar,,, +311,spoken language intelligence of large language models for language learning,"['Linkai Peng', 'Baorian Nuchged', 'Yingming Gao']",https://arxiv.org/pdf/2308.14536,2023-08-28,,"People have long hoped for a conversational system that can assist in real-life situations, and recent progress on large language models (LLMs) is bringing this idea closer to reality. While LLMs are often impressive in performance, their efficacy in real-world scenarios that demand expert knowledge remains unclear. LLMs are believed to hold the most potential and value in education, especially in the development of Artificial intelligence (AI) based virtual teachers capable of facilitating language learning. Our focus is centered on evaluating the efficacy of LLMs in the realm of education, specifically in the areas of spoken language learning which encompass phonetics, phonology, and second language acquisition. 
We introduce a new multiple-choice question dataset to evaluate the effectiveness of LLMs in the aforementioned scenarios, including understanding and application of spoken language knowledge. In addition, we investigate the influence of various prompting techniques such as zero- and few-shot method (prepending the question with question-answer exemplars), chain-of-thought (CoT, think step-by-step), in-domain exampler and external tools (Google, Wikipedia). We conducted large-scale evaluation on popular LLMs (20 distinct models) using these methods. We achieved significant performance improvements compared to the zero-shot baseline in the practical questions reasoning (GPT-3.5, 49.1% ->63.1%; LLaMA2-70B-Chat, 42.2% ->48.6%). We found that models of different sizes have good understanding of concepts in phonetics, phonology, and second language acquisition, but show limitations in reasoning for real-world problems. Additionally, we also explore preliminary findings on conversational communication.",19b43ff57e5d8f8a99da4110fbc30b4ecc39a527,Semantic Scholar,,, +312,scalable multirobot collaboration with large language models centralized or decentralized systems,"['Yongchao Chen', 'Jacob Arkin', 'Yang Zhang', 'Nicholas Roy', 'Chuchu Fan']",https://arxiv.org/pdf/2309.15943,2023-09-27,,"A flurry of recent work has demonstrated that pre-trained large language models (LLMs) can be effective task planners for a variety of single-robot tasks. The planning performance of LLMs is significantly improved via prompting techniques, such as in-context learning or re-prompting with state feedback, placing new importance on the token budget for the context window. An under-explored but natural next direction is to investigate LLMs as multi-robot task planners. However, long-horizon, heterogeneous multi-robot planning introduces new challenges of coordination while also pushing up against the limits of context window length. It is therefore critical to find token-efficient LLM planning frameworks that are also able to reason about the complexities of multi-robot coordination. In this work, we compare the task success rate and token efficiency of four multi-agent communication frameworks (centralized, decentralized, and two hybrid) as applied to four coordination-dependent multi-agent 2D task scenarios for increasing numbers of agents. We find that a hybrid framework achieves better task success rates across all four tasks and scales better to more agents. We further demonstrate the hybrid frameworks in 3D simulations where the vision-to-text problem and dynamical errors are considered. See our project website https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and code.",1ad735714ad2e4ee5b94ce26c976e5ee5c7cde3b,Semantic Scholar,,, +313,the utility of large language models and generative ai for education research,"['Andrew Katz', 'Umair Shakir', 'B. Chambers']",http://arxiv.org/pdf/2305.18125,2023-05-29,,"The use of natural language processing (NLP) techniques in engineering education can provide valuable insights into the underlying processes involved in generating text. While accessing these insights can be labor-intensive if done manually, recent advances in NLP and large language models have made it a realistic option for individuals. This study explores and evaluates a combination of clustering, summarization, and prompting techniques to analyze over 1,000 student essays in which students discussed their career interests. 
The specific assignment prompted students to define and explain their career goals as engineers. Using text embedding representations of student responses, we clustered the responses together to identify thematically similar statements from students. The clustered responses were then summarized to quickly identify career interest themes. We also used a set of a priori codes about career satisfaction and sectors to demonstrate an alternative approach to using these generative text models to analyze student writing. The results of this study demonstrate the feasibility and usefulness of NLP techniques in engineering education research. By automating the initial analysis of student essays, researchers and educators can more efficiently and accurately identify key themes and patterns in student writing. The methods presented in this paper have broader applications for engineering education and research purposes beyond analyzing student essays. By explaining these methods to the engineering education community, readers can utilize them in their own contexts.",1fc0e5b30bfede1b78389d00f8c41bacd29ecd7f,Semantic Scholar,,, +314,foundation metrics quantifying effectiveness of healthcare conversations powered by generative ai,"['Mahyar Abbasian', 'Elahe Khatibi', 'Iman Azimi', 'David Oniani', 'Zahra Shakeri Hossein Abad', 'Alexander Thieme', 'Zhongqi Yang', 'Yanshan Wang', 'Bryant Lin', 'Olivier Gevaert', 'Li-Jia Li', 'Ramesh Jain', 'Amir M. Rahmani']",https://arxiv.org/pdf/2309.12444,2023-09-21,,"Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present an comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. 
Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process.",20cb4e0bd8871d33d82fc72ea82a0aa1dd922810,Semantic Scholar,,, +315,an empirical study on the robustness of the segment anything model (sam),"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",http://arxiv.org/pdf/2305.06422,2023-05-10,,"The Segment Anything Model (SAM) is a foundation model for general image segmentation. Although it exhibits impressive performance predominantly on natural images, understanding its robustness against various image perturbations and domains is critical for real-world applications where such challenges frequently arise. In this study we conduct a comprehensive robustness investigation of SAM under diverse real-world conditions. Our experiments encompass a wide range of image perturbations. Our experimental results demonstrate that SAM's performance generally declines under perturbed images, with varying degrees of vulnerability across different perturbations. By customizing prompting techniques and leveraging domain knowledge based on the unique characteristics of each dataset, the model's resilience to these perturbations can be enhanced, addressing dataset-specific challenges. This work sheds light on the limitations and strengths of SAM in real-world applications, promoting the development of more robust and versatile image segmentation solutions.",26d31d641116b656826737335b2accb802ac9931,Semantic Scholar,,, +316,boosting lowdata instance segmentation by unsupervised pretraining with saliency prompt,"['Hao Li', 'Dingwen Zhang', 'Nian Liu', 'Lechao Cheng', 'Yalun Dai', 'Chaoxi Zhang', 'Xinggang Wang', 'Junwei Han']",https://arxiv.org/pdf/2302.01171,2023-02-02,,"Inspired by DETR variants, query-based end-to-end instance segmentation (QEIS) methods have recently outperformed CNN-based models on large-scale datasets. Yet they would lose efficacy when only a small amount of training data is available since it's hard for the crucial queries/kernels to learn localization and shape priors. To this end, this work offers a novel unsupervised pre-training solution for low-data regimes. Inspired by the recent success of the Prompting technique, we introduce a new pre-training method that boosts QEIS models by giving Saliency Prompt for queries/kernels. Our method contains three parts: 1) Saliency Masks Proposal is responsible for generating pseudo masks from unlabeled images based on the saliency mechanism. 2) Prompt-Kernel Matching transfers pseudo masks into prompts and injects the corresponding localization and shape priors to the best-matched kernels. 3) Kernel Supervision is applied to supply supervision at the kernel level for robust learning. From a practical perspective, our pre-training method helps QEIS models achieve a similar convergence speed and comparable performance with CNN-based models in low-data regimes. 
Experimental results show that our method significantly boosts several QEIS models on three datasets. Code: https://github.com/lifuguan/saliency.prompt",29965a1efc21a637e03a5e0a869d77eca77f5085,Semantic Scholar,,, +317,scigraphqa a largescale synthetic multiturn questionanswering dataset for scientific graphs,"['Sheng Li', 'Nima Tajbakhsh']",https://arxiv.org/pdf/2308.03349,2023-08-07,,"In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. SciGraphQA is 13 times larger than ChartVQA, the previously largest chart-visual question-answering dataset. It is also the largest open-sourced chart VQA dataset with non-synthetic charts. To build our dataset, we selected 290,000 Computer Science or Machine Learning ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate 295K samples of open-vocabulary multi-turn question-answering dialogues about the graphs. As context, we provided the text-only Palm-2 with paper title, abstract, paragraph mentioning the graph, and rich text contextual data from the graph itself, obtaining dialogues with an average 2.23 question-answer turns for each graph. We asked GPT-4 to assess the matching quality of our question-answer turns given the paper's context, obtaining an average rating of 8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most popular MLLM models such as LLaVa, mPLUGowl, BLIP-2, and openFlamingo's on our dataset, finding LLaVA-13B being the most performant with a CIDEr score of 0.08. We further enriched the question prompts for LLAVA by including the serialized data tables extracted from the graphs using the DePlot model, boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset, we also fine-tuned LLaVa using our dataset, reaching a substantially higher CIDEr score of 0.26. We anticipate further accuracy improvement by including segmentation mask tokens and leveraging larger LLM backbones coupled with emergent prompting techniques. Our code and data are open-sourced.",2bd1b8990db73b6495c11082bea2d5f925c5226f,Semantic Scholar,,, +318,oneshot labeling for automatic relevance estimation,"['Sean MacAvaney', 'Luca Soldaini']",https://arxiv.org/pdf/2302.11266,2023-02-22,,"Dealing with unjudged documents (""holes"") in relevance assessments is a perennial problem when evaluating search systems with offline experiments. Holes can reduce the apparent effectiveness of retrieval systems during evaluation and introduce biases in models trained with incomplete data. In this work, we explore whether large language models can help us fill such holes to improve offline evaluations. We examine an extreme, albeit common, evaluation setting wherein only a single known relevant document per query is available for evaluation. We then explore various approaches for predicting the relevance of unjudged documents with respect to a query and the known relevant document, including nearest neighbor, supervised, and prompting techniques. We find that although the predictions of these One-Shot Labelers (1SL) frequently disagree with human assessments, the labels they produce yield a far more reliable ranking of systems than the single labels do alone. Specifically, the strongest approaches can consistently reach system ranking correlations of over 0.86 with the full rankings over a variety of measures. 
Meanwhile, the approach substantially increases the reliability of t-tests due to filling holes in relevance assessments, giving researchers more confidence in results they find to be significant. Alongside this work, we release an easy-to-use software package to enable the use of 1SL for evaluation of other ad-hoc collections or systems.",352bcafbcc95a84d96019688955cab5c43eb23f0,Semantic Scholar,,, +319,large language models can be easily distracted by irrelevant context,"['Freda Shi', 'Xinyun Chen', 'Kanishka Misra', 'Nathan Scales', 'David Dohan', 'E. Chi', 'Nathanael Scharli', 'Denny Zhou']",http://arxiv.org/pdf/2302.00093,2023-01-31,,"Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.",3d68522abfadfc8ee6b7ec9edaaf91f1b2f38e5e,Semantic Scholar,,, +320,are emergent abilities in large language models just incontext learning,"['Sheng Lu', 'Irina Bigoulaeva', 'Rachneet Sachdeva', 'Harish Tayyar Madabushi', 'Iryna Gurevych']",https://arxiv.org/pdf/2309.01809,2023-09-04,,"Large language models have exhibited emergent abilities, demonstrating exceptional performance across diverse tasks for which they were not explicitly trained, including those that require complex reasoning abilities. The emergence of such abilities carries profound implications for the future direction of research in NLP, especially as the deployment of such models becomes more prevalent. However, one key challenge is that the evaluation of these abilities is often confounded by competencies that arise in models through alternative prompting techniques, such as in-context learning and instruction following, which also emerge as the models are scaled up. In this study, we provide the first comprehensive examination of these emergent abilities while accounting for various potentially biasing factors that can influence the evaluation of models. We conduct rigorous tests on a set of 18 models, encompassing a parameter range from 60 million to 175 billion parameters, across a comprehensive set of 22 tasks. Through an extensive series of over 1,000 experiments, we provide compelling evidence that emergent abilities can primarily be ascribed to in-context learning. 
We find no evidence for the emergence of reasoning abilities, thus providing valuable insights into the underlying mechanisms driving the observed abilities and thus alleviating safety concerns regarding their use.",3e4afde5a9de2c1801da99b8aff5ae05923f256b,Semantic Scholar,,, +321,are large language models ready for healthcare a comparative study on clinical language understanding,"['Yuqing Wang', 'Yun Zhao', 'Linda Petzold']",https://arxiv.org/pdf/2304.05368,2023-04-09,,"Large language models (LLMs) have made significant progress in various domains, including healthcare. However, the specialized nature of clinical language understanding tasks presents unique challenges and limitations that warrant further investigation. In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks. These tasks span a diverse range, including named entity recognition, relation extraction, natural language inference, semantic textual similarity, document classification, and question-answering. We also introduce a novel prompting strategy, self-questioning prompting (SQP), tailored to enhance LLMs' performance by eliciting informative questions and answers pertinent to the clinical scenarios at hand. Our evaluation underscores the significance of task-specific learning strategies and prompting techniques for improving LLMs' effectiveness in healthcare-related tasks. Additionally, our in-depth error analysis on the challenging relation extraction task offers valuable insights into error distribution and potential avenues for improvement using SQP. Our study sheds light on the practical implications of employing LLMs in the specialized domain of healthcare, serving as a foundation for future research and the development of potential applications in healthcare settings.",42780f9c7f73d73d7a887e2f787af0e079703d40,Semantic Scholar,,, +322,leveraging large language models to generate answer set programs,"['Adam Ishay', 'Zhun Yang', 'Joohyung Lee']",https://arxiv.org/pdf/2307.07699,2023-07-15,,"Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated exceptional performance in various natural language processing tasks and have shown the ability to solve certain reasoning problems. However, their reasoning capabilities are limited and relatively shallow, despite the application of various prompting techniques. In contrast, formal logic is adept at handling complex reasoning, but translating natural language descriptions into formal logic is a challenging task that non-experts struggle with. This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming. Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs. We carefully design prompts for an LLM to convert natural language descriptions into answer set programs in a step by step manner. Surprisingly, with just a few in-context learning examples, LLMs can generate reasonably complex answer set programs. The majority of errors made are relatively simple and can be easily corrected by humans, thus enabling LLMs to effectively assist in the creation of answer set programs.",4a6d7b11c4aba5a23f68856989366dd4311e960b,Semantic Scholar,,, +323,extracting multivalued relations from language models,"['Sneha Singhania', 'S. Razniewski', 'G. 
Weikum']",https://aclanthology.org/2023.repl4nlp-1.12.pdf,2023-07-06,,"The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations.",4b99e8273227fd05f2be20248050d81e97ab4f4e,Semantic Scholar,,, +324,teaching algorithmic reasoning via incontext learning,"['Hattie Zhou', 'Azade Nova', 'H. Larochelle', 'Aaron C. Courville', 'Behnam Neyshabur', 'Hanie Sedghi']",http://arxiv.org/pdf/2211.09066,2022-11-15,,"Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines.",4d17732d90440682b0500f4e209c6cc4fac20e0e,Semantic Scholar,,, +325,understanding and improving visual prompting a labelmapping perspective,"['Aochuan Chen', 'Yuguang Yao', 'Pin-Yu Chen', 'Yihua Zhang', 'Sijia Liu']",https://arxiv.org/pdf/2211.11635,2022-11-21,,"We revisit and advance visual prompting (VP), an input prompting technique for vision tasks. VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the target domain by simply incorporating universal prompts (in terms of input perturbation patterns) into downstream data points. Yet, it remains elusive why VP stays effective even given a ruleless label mapping (LM) between the source classes and the target classes. Inspired by the above, we ask: How is LM interrelated with VP? And how to exploit such a relationship to improve its accuracy on target tasks? 
We peer into the influence of LM on VP and provide an affirmative answer that a better ‘quality’ of LM (assessed by mapping precision and explanation) can consistently improve the effectiveness of VP. This is in contrast to the prior art where the factor of LM was missing. To optimize LM, we propose a new VP framework, termed ILM-VP (iterative label mapping-based visual prompting), which automatically re-maps the source labels to the target labels and progressively improves the target task accuracy of VP. Further, when using a contrastive language-image pretrained (CLIP) model for VP, we propose to integrate an LM process to assist the text prompt selection of CLIP and to improve the target task accuracy. Extensive experiments demonstrate that our proposal significantly outperforms state-of-the-art VP methods. As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, ILM-VP outperforms baselines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and 7.1% accuracy improvements on Flowers102 and DTD respectively. Code is available at https://github.com/OPTML-Group/ILM-VP.",4edd2d2770729380eda23826af1b78298b334a23,Semantic Scholar,,, +326,adaptivesolver framework for dynamic strategy selection in large language model reasoning,"['Jianpeng Zhou', 'Wanjun Zhong', 'Yanlin Wang', 'Jiahai Wang']",https://arxiv.org/pdf/2310.01446,2023-10-01,,"Large Language Models (LLMs) are showcasing impressive ability in handling complex reasoning tasks. In real-world situations, problems often span a spectrum of complexities. Humans inherently adjust their problem-solving approaches based on task complexity. However, most methodologies that leverage LLMs tend to adopt a uniform approach: utilizing consistent models, prompting methods, and degrees of problem decomposition, regardless of the problem complexity. Inflexibility of them can bring unnecessary computational overhead or sub-optimal performance. To address this problem, we introduce an Adaptive-Solver framework. It strategically modulates solving strategies based on the difficulties of the problems. Given an initial solution, the framework functions with two primary modules. The initial evaluation module assesses the adequacy of the current solution. If improvements are needed, the subsequent adaptation module comes into play. Within this module, three key adaptation strategies are employed: (1) Model Adaptation: Switching to a stronger LLM when a weaker variant is inadequate. (2) Prompting Method Adaptation: Alternating between different prompting techniques to suit the problem's nuances. (3) Decomposition Granularity Adaptation: Breaking down a complex problem into more fine-grained sub-questions to enhance solvability. Through such dynamic adaptations, our framework not only enhances computational efficiency but also elevates the overall performance. This dual-benefit ensures both the efficiency of the system for simpler tasks and the precision required for more complex questions. Experimental results from complex reasoning tasks reveal that the prompting method adaptation and decomposition granularity adaptation enhance performance across all tasks. 
Furthermore, the model adaptation approach significantly reduces API costs (up to 50%) while maintaining superior performance.",5076bbbf831a92174c9cc1b347bd0584560435fc,Semantic Scholar,,, +327,generative speech recognition error correction with large language models and taskactivating prompting,"['Chao-Han Huck Yang', 'Yile Gu', 'Yi-Chieh Liu', 'Shalini Ghosh', 'I. Bulyko', 'A. Stolcke']",https://arxiv.org/pdf/2309.15649,2023-09-27,,"We explore the ability of large language models (LLMs) to act as speech recognition post-processors that perform rescoring and error correction. Our first focus is on instruction prompting to let LLMs perform these tasks without fine-tuning, for which we evaluate different prompting schemes, both zero- and few-shot in-context learning, and a novel “task activation” prompting method that combines causal instructions and demonstration to increase its context windows. Next, we show that rescoring only by in-context learning with frozen LLMs achieves results that are competitive with rescoring by domain-tuned LMs, using a pretrained first-pass recognition system and rescoring output on two out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with fine-tuning we achieve error rates below the N-best oracle level, showcasing the generalization power of the LLMs.",50e8ab900d2ca4d83da120bbfe5338ee93dbe741,Semantic Scholar,,, +328,multiprompt with depth partitioned crossmodal learning,"['Yiqi Wang', 'Xianda Guo', 'Zheng Hua Zhu', 'Yingjie Tian']",https://arxiv.org/pdf/2305.06221,2023-05-10,,"In recent years, soft prompt learning methods have been proposed to fine-tune large-scale vision-language pre-trained models for various downstream tasks. These methods typically combine learnable textual tokens with class tokens as input for models with frozen parameters. However, they often employ a single prompt to describe class contexts, failing to capture categories' diverse attributes adequately. This study introduces the Partitioned Multi-modal Prompt (PMPO), a multi-modal prompting technique that extends the soft prompt from a single learnable prompt to multiple prompts. Our method divides the visual encoder depths and connects learnable prompts to the separated visual depths, enabling different prompts to capture the hierarchical contextual depths of visual representations. Furthermore, to maximize the advantages of multi-prompt learning, we incorporate prior information from manually designed templates and learnable multi-prompts, thus improving the generalization capabilities of our approach. We evaluate the effectiveness of our approach on three challenging tasks: new class generalization, cross-dataset evaluation, and domain generalization. For instance, our method achieves a $79.28$ harmonic mean, averaged over 11 diverse image recognition datasets ($+7.62$ compared to CoOp), demonstrating significant competitiveness compared to state-of-the-art prompting methods.",511ad6b37cb028bdfbd6096e6d20aa4b8b34fafc,Semantic Scholar,,, +329,large language models are pretty good zeroshot video game bug detectors,"['Mohammad Reza Taesiri', 'Finlay Macklon', 'Yihe Wang', 'Hengshuo Shen', 'C. Bezemer']",http://arxiv.org/pdf/2210.02506,2022-10-05,,"Video game testing requires game-specific knowledge as well as common sense reasoning about the events in the game. While AI-driven agents can satisfy the first requirement, it is not yet possible to meet the second requirement automatically. 
Therefore, video game testing often still relies on manual testing, and human testers are required to play the game thoroughly to detect bugs. As a result, it is challenging to fully automate game testing. In this study, we explore the possibility of leveraging the zero-shot capabilities of large language models for video game bug detection. By formulating the bug detection problem as a question-answering task, we show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game. To this end, we introduce the GameBugDescriptions benchmark dataset, which consists of 167 buggy gameplay videos and a total of 334 question-answer pairs across 8 games. We extensively evaluate the performance of six models across the OPT and InstructGPT large language model families on our benchmark dataset. Our results show promising results for employing language models to detect video game bugs. With the proper prompting technique, we could achieve an accuracy of 70.66%, and on some video games, up to 78.94%. Our code, evaluation data and the benchmark can be found on https://asgaardlab.github.io/LLMxBugs",55e3fe05598be7c3dd357d51166869f6571b824f,Semantic Scholar,,, +330,help me think a simple prompting strategy for nonexperts to create customized content with models,"['Swaroop Mishra', 'E. Nouri']",http://arxiv.org/pdf/2208.08232,2022-08-17,,"Controlling the text generated by language models and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of providing control are task-specific and lack generality; this provides overwhelming choices for non-expert users to find a suitable method for their task. The effort associated with those techniques, such as in writing examples, explanations, instructions, etc. further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy HELP ME THINK where we encourage GPT3 to help non-expert users by asking a set of relevant questions and leveraging user answers to execute the task. We demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks. Specifically, we focus on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of large language models.",5ba1e498665d2b3536cb436f0cf484dce03459fe,Semantic Scholar,,, +331,leveraging fewshot data augmentation and waterfall prompting for response generation,"['Lea Krause', ""Selene B'aez Santamar'ia"", 'Michiel van der Meer', 'Urja Khurana']",https://arxiv.org/pdf/2308.01080,2023-08-02,,"This paper discusses our approaches for task-oriented conversational modelling using subjective knowledge, with a particular emphasis on response generation. Our methodology was shaped by an extensive data analysis that evaluated key factors such as response length, sentiment, and dialogue acts present in the provided dataset. 
We used few-shot learning to augment the data with newly generated subjective knowledge items and present three approaches for DSTC11: (1) task-specific model exploration, (2) incorporation of the most frequent question into all generated responses, and (3) a waterfall prompting technique using a combination of both GPT-3 and ChatGPT.",657e364ec6932558f426583dc31953e547bf6575,Semantic Scholar,,, +332,the formai dataset generative ai in software security through the lens of formal verification,"['Norbert Tihanyi', 'Tamás Bisztray', 'Ridhi Jain', 'M. Ferrag', 'L. Cordeiro', 'Vasileios Mavroeidis']",https://arxiv.org/pdf/2307.02192,2023-07-05,,"This paper presents the FormAI dataset, a large collection of 112,000 AI-generated compilable and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security.",67455478e77c8672d0dd08f89735a8813bbfec65,Semantic Scholar,,, +333,fixing rust compilation errors using llms,"['Pantazis Deligiannis', 'A. Lal', 'Nikita Mehrotra', 'Aseem Rastogi']",https://arxiv.org/pdf/2308.05177,2023-08-09,,"The Rust programming language, with its safety guarantees, has established itself as a viable choice for low-level systems programming language over the traditional, unsafe alternatives like C/C++. These guarantees come from a strong ownership-based type system, as well as primitive support for features like closures, pattern matching, etc., that make the code more concise and amenable to reasoning. These unique Rust features also pose a steep learning curve for programmers. This paper presents a tool called RustAssistant that leverages the emergent capabilities of Large Language Models (LLMs) to automatically suggest fixes for Rust compilation errors. RustAssistant uses a careful combination of prompting techniques as well as iteration with an LLM to deliver high accuracy of fixes. RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories. 
We plan to release our dataset of Rust compilation errors to enable further research.",674c5ec7b144aea1f6b143baeb17cc839f52416e,Semantic Scholar,,, +334,synthetic prompting generating chainofthought demonstrations for large language models,"['Zhihong Shao', 'Yeyun Gong', 'Yelong Shen', 'Minlie Huang', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2302.00618,2023-02-01,,"Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and forward process to generate new examples. The backward process generates a question that match a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.",69619a2a47faee7a29ec596db13172e2a42ff921,Semantic Scholar,,, +335,unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations,"['Tiziano Labruna', 'Sofia Brenna', 'Andrea Zaninello', 'B. Magnini']",http://arxiv.org/pdf/2305.14556,2023-05-23,,"Large pre-trained language models have exhibited unprecedented capabilities in producing high-quality text via prompting techniques. This fact introduces new possibilities for data collection and annotation, particularly in situations where such data is scarce, complex to gather, expensive, or even sensitive. In this paper, we explore the potential of these models to generate and annotate goal-oriented dialogues, and conduct an in-depth analysis to evaluate their quality. Our experiments employ ChatGPT, and encompass three categories of goal-oriented dialogues (task-oriented, collaborative, and explanatory), two generation modes (interactive and one-shot), and two languages (English and Italian). Based on extensive human-based evaluations, we demonstrate that the quality of generated dialogues and annotations is on par with those generated by humans.",7307ee3c819c34b7c93ccbbd330a4c889956b36f,Semantic Scholar,,, +336,events realm event reasoning of entity states via language models,"['Evangelia Spiliopoulou', 'Artidoro Pagnoni', 'Yonatan Bisk', 'E. Hovy']",https://arxiv.org/pdf/2211.05392,2022-11-10,,"This paper investigates models of event implications. Specifically, how well models predict entity state-changes, by targeting their understanding of physical attributes. Nominally, Large Language models (LLM) have been exposed to procedural knowledge about how objects interact, yet our benchmarking shows they fail to reason about the world. Conversely, we also demonstrate that existing approaches often misrepresent the surprising abilities of LLMs via improper task encodings and that proper model prompting can dramatically improve performance of reported baseline results across multiple tasks. 
In particular, our results indicate that our prompting technique is especially useful for unseen attributes (out-of-domain) or when only limited data is available.",748a2700ec11f51560a69ec05c67ca9f97014be7,Semantic Scholar,,, +337,fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems,"['Aniruddha Deb', 'Neeva Oza', 'Sarthak Singla', 'Dinesh Khandelwal', 'Dinesh Garg', 'Parag Singla']",https://arxiv.org/pdf/2310.01991,2023-10-03,,"While forward reasoning (i.e. find the answer given the question) has been explored extensively in the recent literature, backward reasoning is relatively unexplored. We examine the backward reasoning capabilities of LLMs on Math Word Problems (MWPs): given a mathematical question and its answer, with some details omitted from the question, can LLMs effectively retrieve the missing information? In this paper, we formally define the backward reasoning task on math word problems and modify three datasets to evaluate this task: GSM8k, SVAMP and MultiArith. Our findings show a significant drop in the accuracy of models on backward reasoning compared to forward reasoning across four SOTA LLMs (GPT4, GPT3.5, PaLM-2, and LLaMa-2). Utilizing the specific format of this task, we propose three novel techniques that improve performance: Rephrase reformulates the given problem into a forward reasoning problem, PAL-Tools combines the idea of Program-Aided LLMs to produce a set of equations that can be solved by an external solver, and Check your Work exploits the availability of natural verifier of high accuracy in the forward direction, interleaving solving and verification steps. Finally, realizing that each of our base methods correctly solves a different set of problems, we propose a novel Bayesian formulation for creating an ensemble over these base methods aided by a verifier to further boost the accuracy by a significant margin. Extensive experimentation demonstrates that our techniques successively improve the performance of LLMs on the backward reasoning task, with the final ensemble-based method resulting in a substantial performance gain compared to the raw LLMs with standard prompting techniques such as chain-of-thought.",8db1dcae055842f43ccac04182957b20d15bbe6b,Semantic Scholar,,, +338,investigating prompting techniques for zero and fewshot visual question answering,"['Rabiul Awal', 'Le Zhang', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2306.09996,2023-06-16,,"In this paper, we explore effective prompting techniques to enhance zero- and few-shot Visual Question Answering (VQA) performance in contemporary Vision-Language Models (VLMs). Central to our investigation is the role of question templates in guiding VLMs to generate accurate answers. We identify that specific templates significantly influence VQA outcomes, underscoring the need for strategic template selection. Another pivotal aspect of our study is augmenting VLMs with image captions, providing them with additional visual cues alongside direct image features in VQA tasks. Surprisingly, this augmentation significantly improves the VLMs' performance in many cases, even though VLMs""see""the image directly! We explore chain-of-thought (CoT) reasoning and find that while standard CoT reasoning causes drops in performance, advanced methods like self-consistency can help recover it. Furthermore, we find that text-only few-shot examples enhance VLMs' alignment with the task format, particularly benefiting models prone to verbose zero-shot answers. 
Lastly, to mitigate the challenges associated with evaluating free-form open-ended VQA responses using string-matching based VQA metrics, we introduce a straightforward LLM-guided pre-processing technique to adapt the model responses to the expected ground-truth answer distribution. In summary, our research sheds light on the intricacies of prompting strategies in VLMs for VQA, emphasizing the synergistic use of captions, templates, and pre-processing to enhance model efficacy.",8efc20988021ce3b4b05dd44b13e27260ee9b99b,Semantic Scholar,,, +339,zeroshot temporal relation extraction with chatgpt,"['Chenhan Yuan', 'Qianqian Xie', 'S. Ananiadou']",http://arxiv.org/pdf/2304.05454,2023-04-11,,"The goal of temporal relation extraction is to infer the temporal relation between two events in the document. Supervised models are dominant in this task. In this work, we investigate ChatGPT’s ability on zero-shot temporal relation extraction. We designed three different prompt techniques to break down the task and evaluate ChatGPT. Our experiments show that ChatGPT’s performance has a large gap with that of supervised methods and can heavily rely on the design of prompts. We further demonstrate that ChatGPT can infer more small relation classes correctly than supervised methods. The current shortcomings of ChatGPT on temporal relation extraction are also discussed in this paper. We found that ChatGPT cannot keep consistency during temporal inference and it fails in actively long-dependency temporal inference.",9087b835d92b72ab3208888916585ddce81c9d10,Semantic Scholar,,, +340,enabling conversational interaction with mobile ui using large language models,"['Bryan Wang', 'Gang Li', 'Yang Li']",https://dl.acm.org/doi/pdf/10.1145/3544548.3580895,2022-09-18,,"Conversational agents show the promise to allow users to interact with mobile devices using language. However, to perform diverse UI tasks with natural language, developers typically need to create separate datasets and models for each specific task, which is expensive and effort-consuming. Recently, pre-trained large language models (LLMs) have been shown capable of generalizing to various downstream tasks when prompted with a handful of examples from the target task. This paper investigates the feasibility of enabling versatile conversational interactions with mobile UIs using a single LLM. We designed prompting techniques to adapt an LLM to mobile UIs. We experimented with four important modeling tasks that address various scenarios in conversational interaction. Our method achieved competitive performance on these challenging tasks without requiring dedicated datasets and training, offering a lightweight and generalizable approach to enable language-based mobile interaction.",99070fb6df9e8d11e30f7aaefcc9f0b0c5a73789,Semantic Scholar,,, +341,questioning the survey responses of large language models,"['Ricardo Dominguez-Olmedo', 'Moritz Hardt', 'Celestine Mendler-Dunner']",https://arxiv.org/pdf/2306.07951,2023-06-13,,"As large language models increase in capability, researchers have started to conduct surveys of all kinds on these models with varying scientific motivations. In this work, we examine what we can learn from language models' survey responses on the basis of the well-established American Community Survey (ACS) by the U.S. Census Bureau. 
Using a de-facto standard multiple-choice prompting technique and evaluating 40 different language models, hundreds of thousands of times each on questions from the ACS, we systematically establish two dominant patterns. First, models have significant position and labeling biases, for example, towards survey responses labeled with the letter""A"". Second, when adjusting for labeling biases through randomized answer ordering, models across the board trend towards uniformly random survey responses. In fact, binary classifiers can almost perfectly differentiate between models' responses to the ACS and the responses of the US census. Taken together, our findings suggest caution in treating survey responses from language models as equivalent to those of human populations at present time.",a86e12654376323b712dd3d39d5ff22283f87a7b,Semantic Scholar,,, +342,mathprompter mathematical reasoning using large language models,"['Shima Imani', 'Liang Du', 'H. Shrivastava']",http://arxiv.org/pdf/2303.05398,2023-03-04,,"Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose ‘MathPrompter’, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the ‘MultiArith’ dataset (78.7% - 92.5%) evaluated using 175B parameter GPT-based LLM.",b626560f19f815808a289ef5c24a17c57320da70,Semantic Scholar,,, +343,boosting logical reasoning in large language models through a new framework the graph of thought,"['Bin Lei', 'Pei-Hung Lin', 'C. Liao', 'Caiwen Ding']",https://arxiv.org/pdf/2308.08614,2023-08-16,,"Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy dramatically decreases. Current research has explored the realm of \textit{prompting engineering} to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed \textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each respective task. 
Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, \textit{Tree of Thought (ToT)}, our approach registered an average accuracy boost of $23\%$, $24\%$, and $15\%$.",ba4aa83248a1d08b521392eb971e47d10b7c74e1,Semantic Scholar,,, +344,scitab a challenging benchmark for compositional reasoning and claim verification on scientific tables,"['Xinyuan Lu', 'Liangming Pan', 'Qian Liu', 'Preslav Nakov', 'Min-Yen Kan']",http://arxiv.org/pdf/2305.13186,2023-05-22,,"Current scientific fact-checking benchmarks exhibit several shortcomings, such as biases arising from crowd-sourced claims and an over-reliance on text-based evidence. We present SCITAB, a challenging evaluation dataset consisting of 1.2K expert-verified scientific claims that 1) originate from authentic scientific publications and 2) require compositional reasoning for verification. The claims are paired with evidence-containing scientific tables annotated with labels. Through extensive evaluations, we demonstrate that SCITAB poses a significant challenge to state-of-the-art models, including table-based pretraining models and large language models. All models except GPT-4 achieved performance barely above random guessing. Popular prompting techniques, such as Chain-of-Thought, do not achieve much performance gains on SCITAB. Our analysis uncovers several unique challenges posed by SCITAB, including table grounding, claim ambiguity, and compositional reasoning. Our codes and data are publicly available at https://github.com/XinyuanLu00/SciTab.",c20b18d6b919695a69e416debf8bf1ffeac03992,Semantic Scholar,,, +345,optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models,"['Badr AlKhamissi', 'Siddharth Verma', 'Ping Yu', 'Zhijing Jin', 'Asli Celikyilmaz', 'Mona T. Diab']",https://aclanthology.org/2023.nlrse-1.10.pdf,2023-05-19,,"We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model’s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. 
Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4%) and Analogical (+13.9%) reasoning, as well as skills that exhibit negligible or negative effects.",c218cd1772999517b137bbbc9872c4f67e540b7f,Semantic Scholar,,, +346,knowledgeprompted estimator a novel approach to explainable machine translation assessment,"['Hao Yang', 'Min Zhang', 'Shimin Tao', 'Minghan Wang', 'Daimeng Wei', 'Yanfei Jiang']",http://arxiv.org/pdf/2306.07486,2023-06-13,,"Cross-lingual Machine Translation (MT) quality estimation plays a crucial role in evaluating translation performance. GEMBA, the first MT quality assessment metric based on Large Language Models (LLMs), employs one-step prompting to achieve state-of-the-art (SOTA) in system-level MT quality estimation; however, it lacks segment-level analysis. In contrast, Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering improved reasoning and explainability. In this paper, we introduce Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three one-step prompting techniques, including perplexity, token-level similarity, and sentence-level similarity. This method attains enhanced performance for segment-level estimation compared with previous deep learning models and one-step prompting approaches. Furthermore, supplementary experiments on word-level visualized alignment demonstrate that our KPE method significantly improves token alignment compared with earlier models and provides better interpretability for MT quality estimation. Code will be released upon publication.",d1bd7ae97588eccfbcd31ffce4fc924d12a5de4d,Semantic Scholar,,, +347,prompting as probing using language models for knowledge base construction,"['Dimitrios Alivanistos', ""Selene B'aez Santamar'ia"", 'Michael Cochez', 'Jan-Christoph Kalo', 'Emile van Krieken', 'Thiviyan Thanapalasingam']",http://arxiv.org/pdf/2208.11057,2022-08-23,,"Language Models (LMs) have proven to be useful in various downstream applications, such as summarisation, translation, question answering and text classification. LMs are becoming increasingly important tools in Artificial Intelligence, because of the vast quantity of information they can store. In this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a large Language Model originally proposed by OpenAI in 2020, to perform the task of Knowledge Base Construction (KBC). ProP implements a multi-step approach that combines a variety of prompting techniques to achieve this. Our results show that manual prompt curation is essential, that the LM must be encouraged to give answer sets of variable lengths, in particular including empty answer sets, that true/false questions are a useful device to increase precision on suggestions generated by the LM, that the size of the LM is a crucial factor, and that a dictionary of entity aliases improves the LM score. Our evaluation study indicates that these proposed techniques can substantially enhance the quality of the final predictions: ProP won track 2 of the LM-KBC competition, outperforming the baseline by 36.4 percentage points. 
Our implementation is available on https://github.com/HEmile/iswc-challenge.",ddc9aeac18638575bbb90ede4c6829ec15c2947e,Semantic Scholar,,, +348,devgpt studying developerchatgpt conversations,"['Tao Xiao', 'Christoph Treude', 'Hideaki Hata', 'Kenichi Matsumoto']",https://arxiv.org/pdf/2309.03914,2023-08-31,,"The emergence of large language models (LLMs) such as ChatGPT has disrupted the landscape of software development. Many studies are investigating the quality of responses generated by ChatGPT, the efficacy of various prompting techniques, and its comparative performance in programming contests, to name a few examples. Yet, we know very little about how ChatGPT is actually used by software developers. What questions do developers present to ChatGPT? What are the dynamics of these interactions? What is the backdrop against which these conversations are held, and how do the conversations feedback into the artifacts of their work? To close this gap, we introduce DevGPT, a curated dataset which encompasses 17,913 prompts and ChatGPT's responses including 11,751 code snippets, coupled with the corresponding software development artifacts -- ranging from source code, commits, issues, pull requests, to discussions and Hacker News threads -- to enable the analysis of the context and implications of these developer interactions with ChatGPT.",def24fb1e977db69f4b1b866b807f9ab9bad5227,Semantic Scholar,,, +349,upar a kantianinspired prompting framework for enhancing large language model capabilities,"['Hejia Geng', 'Boxun Xu', 'Peng Li']",https://arxiv.org/pdf/2310.01441,2023-09-30,,"Large Language Models (LLMs) have demonstrated impressive inferential capabilities, with numerous research endeavors devoted to enhancing this capacity through prompting. Despite these efforts, a unified epistemological foundation is still conspicuously absent. Drawing inspiration from Kant's a priori philosophy, we propose the UPAR prompting framework, designed to emulate the structure of human cognition within LLMs. The UPAR framework is delineated into four phases:""Understand"",""Plan"",""Act"", and""Reflect"", enabling the extraction of structured information from complex contexts, prior planning of solutions, execution according to plan, and self-reflection. This structure significantly augments the explainability and accuracy of LLM inference, producing a human-understandable and inspectable inferential trajectory. Furthermore, our work offers an epistemological foundation for existing prompting techniques, allowing for a possible systematic integration of these methods. With GPT-4, our approach elevates the accuracy from COT baseline of 22.92% to 58.33% in a challenging subset of GSM8K, and from 67.91% to 75.40% in the causal judgment task. Without using few-shot examples or external tools, UPAR significantly outperforms existing prompting methods on SCIBENCH, a challenging dataset containing collegiate-level mathematics, chemistry, and physics scientific problems.",e61a96cf602ebff6683929aaf916e25614a475bc,Semantic Scholar,,, +350,understanding stereotypes in language models towards robust measurement and zeroshot debiasing,"['Justus Mattern', 'Zhijing Jin', 'Mrinmaya Sachan', 'Rada Mihalcea', 'B. Scholkopf']",http://arxiv.org/pdf/2212.10678,2022-12-20,,"Generated texts from large pretrained language models have been shown to exhibit a variety of harmful, human-like biases about various demographics. 
These findings prompted large efforts aiming to understand and measure such effects, with the goal of providing benchmarks that can guide the development of techniques mitigating these stereotypical associations. However, as recent research has pointed out, the current benchmarks lack a robust experimental setup, consequently hindering the inference of meaningful conclusions from their evaluation metrics. In this paper, we extend these arguments and demonstrate that existing techniques and benchmarks aiming to measure stereotypes tend to be inaccurate and consist of a high degree of experimental noise that severely limits the knowledge we can gain from benchmarking language models based on them. Accordingly, we propose a new framework for robustly measuring and quantifying biases exhibited by generative language models. Finally, we use this framework to investigate GPT-3's occupational gender bias and propose prompting techniques for mitigating these biases without the need for fine-tuning.",ed5ebed7ff668fd7362d531a40b49b3aea33b3a9,Semantic Scholar,,, +351,prompts should not be seen as secrets systematically measuring prompt extraction attack success,"['Yiming Zhang', 'Daphne Ippolito']",https://arxiv.org/pdf/2307.06865,2023-07-13,,"The generations of large language models are commonly controlled through prompting techniques, where a user's query to the model is prefixed with a prompt that aims to guide the model's behaviour on the query. The prompts used by companies to guide their models are often treated as secrets, to be hidden from the user making the query. They have even been treated as commodities to be bought and sold. However, there has been anecdotal evidence showing that the prompts can be extracted by a user even when they are kept secret. In this paper, we present a framework for systematically measuring the success of prompt extraction attacks. In experiments with multiple sources of prompts and multiple underlying language models, we find that simple text-based attacks can in fact reveal prompts with high probability.",f330f502bf1e92fabf7f246597fa9320d956c0c8,Semantic Scholar,,, +352,minidalle3 interactive text to image by prompting large language models,"['Zeqiang Lai', 'Xizhou Zhu', 'Jifeng Dai', 'Yu Qiao', 'Wenhai Wang']",https://arxiv.org/pdf/2310.07653,2023-10-11,,"The revolution of artificial intelligence content generation has been rapidly accelerated with the booming text-to-image (T2I) diffusion models. Within just two years of development, it was unprecedentedly of high-quality, diversity, and creativity that the state-of-the-art models could generate. However, a prevalent limitation persists in the effective communication with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering with complex word compositions, magic tags, and annotations. Inspired by the recently released DALLE3 - a T2I model directly built-in ChatGPT that talks human language, we revisit the existing T2I systems endeavoring to align human intent and introduce a new task - interactive text to image (iT2I), where people can interact with LLM for interleaved high-quality image generation/edit/refinement and question answering with stronger images and text correspondences using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. 
We evaluate our approach for iT2I in a variety of common-used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a convenient and low-cost way to introduce the iT2I ability for any existing LLMs and any text-to-image models without any training while bringing little degradation on LLMs' inherent capabilities in, e.g., question answering and code generation. We hope this work could draw broader attention and provide inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.",f669d7a6fab0147253178a6fc854e05e3d92fb3f,Semantic Scholar,,, +353,gopro generate and optimize prompts in clip using selfsupervised learning,"['M. Singha', 'Ankit Jha', 'Biplab Banerjee']",https://arxiv.org/pdf/2308.11605,2023-08-22,,"Large-scale foundation models, such as CLIP, have demonstrated remarkable success in visual recognition tasks by embedding images in a semantically rich space. Self-supervised learning (SSL) has also shown promise in improving visual recognition by learning invariant features. However, the combination of CLIP with SSL is found to face challenges due to the multi-task framework that blends CLIP's contrastive loss and SSL's loss, including difficulties with loss weighting and inconsistency among different views of images in CLIP's output space. To overcome these challenges, we propose a prompt learning-based model called GOPro, which is a unified framework that ensures similarity between various augmented views of input images in a shared image-text embedding space, using a pair of learnable image and text projectors atop CLIP, to promote invariance and generalizability. To automatically learn such prompts, we leverage the visual content and style primitives extracted from pre-trained CLIP and adapt them to the target task. In addition to CLIP's cross-domain contrastive loss, we introduce a visual contrastive loss and a novel prompt consistency loss, considering the different views of the images. GOPro is trained end-to-end on all three loss objectives, combining the strengths of CLIP and SSL in a principled manner. Empirical evaluations demonstrate that GOPro outperforms the state-of-the-art prompting techniques on three challenging domain generalization tasks across multiple benchmarks by a significant margin. Our code is available at https://github.com/mainaksingha01/GOPro.",fc9bd3642df2a378c11131362b27deecbd02b70a,Semantic Scholar,,, +354,the devil is in the errors leveraging large language models for finegrained machine translation evaluation,"['Patrick Fernandes', 'Daniel Deutsch', 'M. Finkelstein', 'Parker Riley', 'André F. T. Martins', 'Graham Neubig', 'Ankush Garg', 'J. Clark', 'Markus Freitag', 'Orhan Firat']",https://arxiv.org/pdf/2308.07286,2023-08-14,,"Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. 
We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.",fd80f7f3673fc6ca02f192d5d73426f11a4be659,Semantic Scholar,,, +355,"multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering","['Angus Addlesee', ""Weronika Siei'nska"", 'Nancie Gunson', 'Daniel Hernández García', 'C. Dondrup', 'Oliver Lemon']",https://arxiv.org/pdf/2308.15231,2023-08-29,,"This paper evaluates the extent to which current LLMs can capture task-oriented multi-party conversations (MPCs). We have recorded and transcribed 29 MPCs between patients, their companions, and a social robot in a hospital. We then annotated this corpus for multi-party goal-tracking and intent-slot recognition. People share goals, answer each other’s goals, and provide other people’s goals in MPCs - none of which occur in dyadic interactions. To understand user goals in MPCs, we compared three methods in zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks to train DialogLM using LED, and employed prompt engineering techniques with GPT-3.5-turbo, to determine which approach can complete this novel task with limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot setting. The ‘reasoning’ style prompt, when given 7% of the corpus as example annotated conversations, was the best performing method. It correctly annotated 62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition MPCs. A ‘story’ style prompt increased model hallucination, which could be detrimental if deployed in safety-critical settings. We conclude that multi-party conversations still challenge state-of-the-art LLMs.",8a1a8290f7d42b0ce60445a4c0130ef737b3ff69,Semantic Scholar,,, +356,llm4vv developing llmdriven testsuite for compiler validation,"['Christian Munley', 'Aaron Jarmusch', 'Sunita Chandrasekaran']",https://arxiv.org/pdf/2310.04963,2023-10-08,,"Large language models (LLMs) are a new and powerful tool for a wide span of applications involving natural language and demonstrate impressive code generation abilities. In this paper, we explore the capabilitity of state-of-the-art LLMs, including closed-source options like OpenAI GPT-4 and open-source alternatives like Meta AI Codellama, to automatically generate tests and use these tests to validate and verify compiler implementations of a directive-based programming paradigm, OpenACC. Our approach entails exploring various prompt engineering techniques including a code template, retrieval-augmented generation (RAG) with code template, expressive prompt using RAG with code template, one-shot example, and RAG with one-shot example. This paper focusses on (a) exploring the capabilities of the latest LLMs for code generation, (b) investigating prompt and fine tuning methods, and (c) analyzing the outcome of LLMs generated tests",8c52b3bbe5897ba3f42b38c5bfc33bbd48f9a1f2,Semantic Scholar,,, +357,"voice visual oracle for interaction, conversation, and explanation","['Donggang Jia', 'Alexandra Irger', 'Ondrej Strnad', 'Johanna Björklund', 'A. Ynnerman', 'I. 
Viola']",http://arxiv.org/pdf/2304.04083,2023-04-08,,"We present VOICE, a novel approach to science communication that connects large language models' (LLM) conversational capabilities with interactive exploratory visualization. VOICE introduces several innovative technical contributions that drive our conversational visualization framework. Our foundation is a pack-of-bots that can perform specific tasks, such as assigning tasks, extracting instructions, and generating coherent content. We employ fine-tuning and prompt engineering techniques to tailor bots' performance to their specific roles and accurately respond to user queries. Our interactive text-to-visualization method generates a flythrough sequence matching the content explanation. Besides, natural language interaction provides capabilities to navigate and manipulate the 3D models in real-time. The VOICE framework can receive arbitrary voice commands from the user and respond verbally, tightly coupled with corresponding visual representation with low latency and high accuracy. We demonstrate the effectiveness of our approach by applying it to the molecular visualization domain: analyzing three 3D molecular models with multi-scale and multi-instance attributes. We finally evaluate VOICE with the identified educational experts to show the potential of our approach. All supplemental materials are available at https://osf.io/g7fbr.",8ca384547bb4b21b7f38d478119bf3168eb9c9cd,Semantic Scholar,,, +358,"unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing",['Walid Hariri'],http://arxiv.org/pdf/2304.02017,2023-03-27,,"Large language models have revolutionized the field of artificial intelligence and have been used in various applications. Among these models, ChatGPT (Chat Generative Pre-trained Transformer) has been developed by OpenAI, it stands out as a powerful tool that has been widely adopted. ChatGPT has been successfully applied in numerous areas, including chatbots, content generation, language translation, personalized recommendations, and even medical diagnosis and treatment. Its success in these applications can be attributed to its ability to generate human-like responses, understand natural language, and adapt to different contexts. Its versatility and accuracy make it a powerful tool for natural language processing (NLP). However, there are also limitations to ChatGPT, such as its tendency to produce biased responses and its potential to perpetuate harmful language patterns. This article provides a comprehensive overview of ChatGPT, its applications, advantages, and limitations. Additionally, the paper emphasizes the importance of ethical considerations when using this robust tool in real-world scenarios. Finally, This paper contributes to ongoing discussions surrounding artificial intelligence and its impact on vision and NLP domains by providing insights into prompt engineering techniques.",9e93ab728e3e174ec1492009055885a9123d434f,Semantic Scholar,,, +359,simulating hp lovecraft horror literature with the chatgpt large language model,"['E.C. Garrido-Merchán', 'J. L. Arroyo-Barrigüete', 'Roberto Gozalo-Brizuela']",http://arxiv.org/pdf/2305.03429,2023-05-05,,"In this paper, we present a novel approach to simulating H.P. Lovecraft's horror literature using the ChatGPT large language model, specifically the GPT-4 architecture. 
Our study aims to generate text that emulates Lovecraft's unique writing style and themes, while also examining the effectiveness of prompt engineering techniques in guiding the model's output. To achieve this, we curated a prompt containing several specialized literature references and employed advanced prompt engineering methods. We conducted an empirical evaluation of the generated text by administering a survey to a sample of undergraduate students. Utilizing statistical hypothesis testing, we assessed the students ability to distinguish between genuine Lovecraft works and those generated by our model. Our findings demonstrate that the participants were unable to reliably differentiate between the two, indicating the effectiveness of the GPT-4 model and our prompt engineering techniques in emulating Lovecraft's literary style. In addition to presenting the GPT model's capabilities, this paper provides a comprehensive description of its underlying architecture and offers a comparative analysis with related work that simulates other notable authors and philosophers, such as Dennett. By exploring the potential of large language models in the context of literary emulation, our study contributes to the body of research on the applications and limitations of these models in various creative domains.",a7d8a6d8c04bd4554da4219be0f9d3bf87e2e56b,Semantic Scholar,,, +360,protect your prompts protocols for ip protection in llm applications,"['M. V. Wyk', 'M. Bekker', 'X. L. Richards', 'K. Nixon']",http://arxiv.org/pdf/2306.06297,2023-06-09,,"With the rapid adoption of AI in the form of large language models (LLMs), the potential value of carefully engineered prompts has become significant. However, to realize this potential, prompts should be tradable on an open market. Since prompts are, at present, generally economically non-excludable, by virtue of their nature as text, no general competitive market has yet been established. This note discusses two protocols intended to provide protection of prompts, elevating their status as intellectual property, thus confirming the intellectual property rights of prompt engineers, and potentially supporting the flourishing of an open market for LLM prompts.",08fd45ac85916b95f734cc75af8660cff73c33ca,Semantic Scholar,,, +361,abscribe rapid exploration of multiple writing variations in humanai cowriting tasks using large language models,"['Mohi Reza', 'Nathan Laundry', 'Ilya Musabirov', 'Peter Dushniku', 'Zhi Yuan Michael Yu', 'Kashish Mittal', 'Tovi Grossman', 'Michael Liut', 'Anastasia Kuzminykh', 'Joseph Jay Williams']",https://arxiv.org/pdf/2310.00117,2023-09-29,,"Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art large language models (LLMs) can simplify writing variation generation. However, current interfaces pose challenges for simultaneous consideration of multiple variations: creating new versions without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly produce multiple variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text segments for rapid in-place comparisons using mouse-over interactions on a context toolbar. 
Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p<0.001), enhances user perceptions of the revision process (d = 2.41, p<0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.",0f71c1e2acf286951544d3bd9eb5d85acfba5af1,Semantic Scholar,,, +362,incontext impersonation reveals large language models' strengths and biases,"['Leonard Salewski', 'Stephan Alaniz', 'Isabel Rio-Torto', 'Eric Schulz', 'Zeynep Akata']",http://arxiv.org/pdf/2305.14930,2023-05-24,,"In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated either with a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their hidden strengths and biases.",19c63eade265d8a47d160098d97194b3b83d3770,Semantic Scholar,,, +363,chatgpt for plcdcs control logic generation,"['Heiko Koziolek', 'Sten Gruener', 'Virendra Ashiwal']",https://arxiv.org/pdf/2305.15809,2023-05-25,,"Large language models (LLMs) providing generative AI have become popular to support software engineers in creating, summarizing, optimizing, and documenting source code. It is still unknown how LLMs can support control engineers using typical control programming languages in programming tasks. Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code generation but did not yet tackle control logic programming. A key contribution of this paper is an exploratory study, for which we created 100 LLM prompts in 10 representative categories to analyze control logic generation for of PLCs and DCS from natural language. We tested the prompts by generating answers with ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3 Structured Text code in many cases and demonstrated useful reasoning skills that could boost control engineer productivity. Our prompt collection is the basis for a more formal LLM benchmark to test and compare such models for control logic generation.",1c1b83df13de4334e48a4c2039bc7ddfa374c486,Semantic Scholar,,, +364,saytap language to quadrupedal locomotion,"['Yujin Tang', 'Wenhao Yu', 'Jie Tan', 'H. Zen', 'Aleksandra Faust', 'Tatsuya Harada']",https://arxiv.org/pdf/2306.07580,2023-06-13,,"Large language models (LLMs) have demonstrated the potential to perform high-level planning. Yet, it remains a challenge for LLMs to comprehend low-level commands, such as joint angle targets or motor torques. 
This paper proposes an approach to use foot contact patterns as an interface that bridges human commands in natural language and a locomotion controller that outputs these low-level commands. This results in an interactive system for quadrupedal robots that allows the users to craft diverse locomotion behaviors flexibly. We contribute an LLM prompt design, a reward function, and a method to expose the controller to the feasible distribution of contact patterns. The results are a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware. Compared with other design choices, the proposed approach enjoys more than 50% success rate in predicting the correct contact patterns and can solve 10 more tasks out of a total of 30 tasks. Our project site is: https://saytap.github.io.",1fc21645ccc8e99eb8162e5f91407148b7f77e3d,Semantic Scholar,,, +365,"mmhqaicl multimodal incontext learning for hybrid question answering over text, tables and images","['Weihao Liu', 'Fangyu Lei', 'Tongxu Luo', 'Jiahe Lei', 'Shizhu He', 'Jun Zhao', 'Kang Liu']",https://arxiv.org/pdf/2309.04790,2023-09-09,,"In the real world, knowledge often exists in a multimodal and heterogeneous form. Addressing the task of question answering with hybrid data types, including text, tables, and images, is a challenging task (MMHQA). Recently, with the rise of large language models (LLM), in-context learning (ICL) has become the most popular way to solve QA problems. We propose MMHQA-ICL framework for addressing this problems, which includes stronger heterogeneous data retriever and an image caption module. Most importantly, we propose a Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage their powerful performance in this task. We are the first to use end-to-end LLM prompting method for this task. Experimental results demonstrate that our framework outperforms all baselines and methods trained on the full dataset, achieving state-of-the-art results under the few-shot setting on the MultimodalQA dataset.",27d6d02e24de259e3aa38e556a81f89ec505816e,Semantic Scholar,,, +366,lmcanvas objectoriented interaction to personalize large language modelpowered writing environments,"['Tae Soo Kim', 'Arghya Sarkar', 'Yoonjoo Lee', 'Minsuk Chang', 'Juho Kim']",http://arxiv.org/pdf/2303.15125,2023-03-27,,"Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers' workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs -- requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with""blocks""in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. 
In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.",2cdff023cd4b185bb452f3c7399580db2d0fdfcd,Semantic Scholar,,, +367,flocks of stochastic parrots differentially private prompt learning for large language models,"['Haonan Duan', 'Adam Dziedzic', 'Nicolas Papernot', 'Franziska Boenisch']",http://arxiv.org/pdf/2305.15594,2023-05-24,,"Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.",2f2a430ba6c93bcfaf4818316ff8a27b1e034b1a,Semantic Scholar,,, +368,knowledge crosswords geometric reasoning over structured knowledge with large language models,"['Wenxuan Ding', 'Shangbin Feng', 'Yuhan Liu', 'Zhaoxuan Tan', 'Vidhisha Balachandran', 'Tianxing He', 'Yulia Tsvetkov']",https://arxiv.org/pdf/2310.01290,2023-10-02,,"Large language models (LLMs) are widely adopted in knowledge-intensive tasks and have achieved impressive performance thanks to their knowledge abilities. While LLMs have demonstrated outstanding performance on atomic or linear (multi-hop) QA tasks, whether they can reason in knowledge-rich scenarios with interweaving constraints remains an underexplored problem. In this work, we propose geometric reasoning over structured knowledge, where pieces of knowledge are connected in a graph structure and models need to fill in the missing information. Such geometric knowledge reasoning would require the ability to handle structured knowledge, reason with uncertainty, verify facts, and backtrack when an error occurs. We propose Knowledge Crosswords, a multi-blank QA dataset where each problem consists of a natural language question representing the geometric constraints of an incomplete entity network, where LLMs are tasked with working out the missing entities while meeting all factual constraints. Knowledge Crosswords contains 2,101 individual problems, covering various knowledge domains and further divided into three difficulty levels. We conduct extensive experiments to evaluate existing LLM prompting approaches on the Knowledge Crosswords benchmark. We additionally propose two new approaches, Staged Prompting and Verify-All, to augment LLMs' ability to backtrack and verify structured constraints. 
Our results demonstrate that while baseline approaches perform well on easier problems but struggle with hard ones, our proposed Verify-All outperforms other methods by a large margin and is more robust with hard problems. Further analysis reveals that LLMs' ability of geometric reasoning over structured knowledge is still far from robust or perfect, susceptible to confounders such as the order of options, certain structural patterns, assumption of existence of correct answer, and more.",33d944de189d6edf3a510ea195803a381c5a3bab,Semantic Scholar,,, +369,gear augmenting language models with generalizable and efficient tool resolution,"['Yining Lu', 'Haoping Yu', 'Daniel Khashabi']",https://arxiv.org/pdf/2307.08775,2023-07-17,,"Augmenting large language models (LLM) to use external tools enhances their performance across a variety of tasks. However, prior works over-rely on task-specific demonstration of tool use that limits their generalizability and computational cost due to making many calls to large-scale LLMs. We introduce GEAR, a computationally efficient query-tool grounding algorithm that is generalizable to various tasks that require tool use while not relying on task-specific demonstrations. GEAR achieves better efficiency by delegating tool grounding and execution to small language models (SLM) and LLM, respectively; while leveraging semantic and pattern-based evaluation at both question and answer levels for generalizable tool grounding. We evaluate GEAR on 14 datasets across 6 downstream tasks, demonstrating its strong generalizability to novel tasks, tools and different SLMs. Despite offering more efficiency, GEAR achieves higher precision in tool grounding compared to prior strategies using LLM prompting, thus improving downstream accuracy at a reduced computational cost. For example, we demonstrate that GEAR-augmented GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of better tool use.",3bd83ff979f3c0e9470f23c360a18333593dc5a1,Semantic Scholar,,, +370,retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference,"['Zachary Levonian', 'Chenglu Li', 'Wangda Zhu', 'Anoushka Gade', 'Owen Henkel', 'Millie-Ellen Postle', 'Wanli Xing']",https://arxiv.org/pdf/2310.03184,2023-10-04,,"For middle-school math students, interactive question-answering (QA) with tutors is an effective way to learn. The flexibility and emergent capabilities of generative large language models (LLMs) has led to a surge of interest in automating portions of the tutoring process - including interactive QA to support conceptual discussion of mathematical concepts. However, LLM responses to math questions can be incorrect or mismatched to the educational context - such as being misaligned with a school's curriculum. One potential solution is retrieval-augmented generation (RAG), which involves incorporating a vetted external knowledge source in the LLM prompt to increase response quality. In this paper, we designed prompts that retrieve and use content from a high-quality open-source math textbook to generate responses to real student questions. We evaluate the efficacy of this RAG system for middle-school algebra and geometry QA by administering a multi-condition survey, finding that humans prefer responses generated using RAG, but not when responses are too grounded in the textbook content. 
We argue that while RAG is able to improve response quality, designers of math QA systems must consider trade-offs between generating responses preferred by students and responses closely matched to specific educational resources.",3dc1b657bf821b731c5ed0396823b67c10d54ba1,Semantic Scholar,,, +371,udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers,"['Jon Saad-Falcon', 'O. Khattab', 'Keshav Santhanam', 'Radu Florian', 'M. Franz', 'S. Roukos', 'Avirup Sil', 'Md Arafat Sultan', 'Christopher Potts']",https://arxiv.org/pdf/2303.00807,2023-03-01,,"Many information retrieval tasks require large labeled datasets for fine-tuning. However, such datasets are often unavailable, and their utility for real-world applications can diminish quickly due to domain shifts. To address this challenge, we develop and motivate a method for using large language models (LLMs) to generate large numbers of synthetic queries cheaply. The method begins by generating a small number of synthetic queries using an expensive LLM. After that, a much less expensive one is used to create large numbers of synthetic queries, which are used to fine-tune a family of reranker models. These rerankers are then distilled into a single efficient retriever for use in the target domain. We show that this technique boosts zero-shot accuracy in long-tail domains and achieves substantially lower latency than standard reranking methods.",44b0d2e884efa5344e50424dbe2edf616981f201,Semantic Scholar,,, +372,iterative zeroshot llm prompting for knowledge graph construction,"['S. Carta', 'Alessandro Giuliani', 'L. piano', 'Alessandro Sebastian Podda', 'Livio Pompianu', 'Sandro Gabriele Tiddia']",http://arxiv.org/pdf/2307.01128,2023-07-03,,"In the current digitalization era, capturing and effectively representing knowledge is crucial in most real-world scenarios. In this context, knowledge graphs represent a potent tool for retrieving and organizing a vast amount of information in a properly interconnected and interpretable structure. However, their generation is still challenging and often requires considerable human effort and domain expertise, hampering the scalability and flexibility across different application fields. This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models, such as GPT-3.5, that can address all the main critical issues in knowledge graph building. The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies in the main stages of the generation process. Our unique manifold approach may encompass significant benefits to the scientific community. In particular, the main contribution can be summarized by: (i) an innovative strategy for iteratively prompting large language models to extract relevant components of the final graph; (ii) a zero-shot strategy for each prompt, meaning that there is no need for providing examples for""guiding""the prompt result; (iii) a scalable solution, as the adoption of LLMs avoids the need for any external resources or human expertise. To assess the effectiveness of our proposed model, we performed experiments on a dataset that covered a specific domain. 
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.",50bdea5132ef4b8cf25b0d9f3ac2ee0d09bf18cb,Semantic Scholar,,, +373,rosgpt_vision commanding robots using only language models' prompts,"['Bilel Benjdira', 'A. Koubâa', 'Anas M. Ali']",https://arxiv.org/pdf/2308.11236,2023-08-22,,"In this paper, we argue that the next generation of robots can be commanded using only Language Models' prompts. Every prompt interrogates separately a specific Robotic Modality via its Modality Language Model (MLM). A central Task Modality mediates the whole communication to execute the robotic mission via a Large Language Model (LLM). This paper gives this new robotic design pattern the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies this PRM design pattern in building a new robotic framework named ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural language, the visual semantic features related to the task under consideration (Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic reaction to the visual description (Task Modality). The framework automates all the mechanisms behind these two prompts. The framework enables the robot to address complex real-world scenarios by processing visual data, making informed decisions, and carrying out actions automatically. The framework comprises one generic vision module and two independent ROS nodes. As a test application, we used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction on the roads and makes real-time vocal notifications to the driver. We showed how ROSGPT_Vision significantly reduced the development cost compared to traditional methods. We demonstrated how to improve the quality of the application by optimizing the prompting strategies, without delving into technical details. ROSGPT_Vision is shared with the community (link: https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this direction and to build more robotic frameworks that implement the PRM design pattern and enables controlling robots using only prompts.",53e8d327e7ceda6f4efd321752da57edbaee6257,Semantic Scholar,,, +374,teler a general taxonomy of llm prompts for benchmarking complex tasks,"['Shubhra (Santu) Karmaker', 'Dongji Feng']",http://arxiv.org/pdf/2305.11430,2023-05-19,,"While LLMs have shown great success in understanding and generating text in traditional conversational settings, their potential for performing ill-defined complex tasks is largely under-studied. Indeed, we are yet to conduct comprehensive benchmarking studies with multiple LLMs that are exclusively focused on a complex task. However, conducting such benchmarking studies is challenging because of the large variations in LLMs' performance when different prompt types/styles are used and different degrees of detail are provided in the prompts. To address this issue, the paper proposes a general taxonomy that can be used to design prompts with specific properties in order to perform a wide range of complex tasks. This taxonomy will allow future benchmarking studies to report the specific categories of prompts used as part of the study, enabling meaningful comparisons across different studies. 
Also, by establishing a common standard through this taxonomy, researchers will be able to draw more accurate conclusions about LLMs' performance on a specific complex task.",5645502d73c6907f1671923638773152e55bfb00,Semantic Scholar,,, +375,mathdial a dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems,"['Jakub Macina', 'Nico Daheim', 'Sankalan Pal Chowdhury', 'Tanmay Sinha', 'Manu Kapur', 'Iryna Gurevych', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.14536,2023-05-23,,"While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MathDial and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.",6cd26d124ffeb6ce301ef351aada27fa0852f81b,Semantic Scholar,,, +376,retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering,"['Yike Wu', 'Nan Hu', 'Sheng Bi', 'G. Qi', 'J. Ren', 'Anhuan Xie', 'Wei Song']",https://arxiv.org/pdf/2309.11206,2023-09-20,,"Despite their competitive performance on knowledge-intensive tasks, large language models (LLMs) still have limitations in memorizing all world knowledge especially long tail knowledge. In this paper, we study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task that requires rich world knowledge. Existing work has shown that retrieving KG knowledge to enhance LLMs prompting can significantly improve LLMs performance in KGQA. However, their approaches lack a well-formed verbalization of KG knowledge, i.e., they ignore the gap between KG representations and textual representations. To this end, we propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements most informative for KGQA. Based on this approach, we propose a KG-to-Text enhanced LLMs framework for solving the KGQA task. 
Experiments on several KGQA benchmarks show that the proposed KG-to-Text augmented LLMs approach outperforms previous KG-augmented LLMs approaches regarding answer accuracy and usefulness of knowledge statements.",785c0d4efd3aaa946f8bdcd12b38a147cc36b794,Semantic Scholar,,, +377,federated large language model a position paper,"['Chaochao Chen', 'Xiaohua Feng', 'Jun Zhou', 'Jianwei Yin', 'Xiaolin Zheng']",https://arxiv.org/pdf/2307.08925,2023-07-18,,"Large scale language models (LLM) have received significant attention and found diverse applications across various domains, but their development encounters challenges in real-world scenarios. These challenges arise due to the scarcity of public domain data availability and the need to maintain privacy with respect to private domain data. To address these issues, federated learning (FL) has emerged as a promising technology that enables collaborative training of shared models while preserving decentralized data. We propose the concept of federated LLM, which comprises three key components, i.e., federated LLM pre-training, federated LLM fine-tuning, and federated LLM prompt engineering. For each component, we discuss its advantage over traditional LLM training methods and propose specific engineering strategies for implementation. Furthermore, we explore the novel challenges introduced by the integration of FL and LLM. We analyze existing solutions and identify potential obstacles faced by these solutions within the context of federated LLM.",7aad760762c4a10dfbc2d3391eb8bdb28c80b236,Semantic Scholar,,, +378,adaplanner adaptive planning from feedback with language models,"['Haotian Sun', 'Yuchen Zhuang', 'Lingkai Kong', 'Bo Dai', 'Chao Zhang']",http://arxiv.org/pdf/2305.16653,2023-05-26,,"Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates with problem complexity and plan horizons increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively.",8e37dc1215681aa153a51c07078ba8befd6a6e01,Semantic Scholar,,, +379,simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation,"['J. Mendoncca', 'Patrícia Pereira', 'Joao Paulo Carvalho', 'A. Lavie', 'I. Trancoso']",https://arxiv.org/pdf/2308.16797,2023-08-31,,"Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. 
At the same time, ensuring metrics are invariant to semantically similar responses is also an overlooked topic. In order to achieve the desired properties of robustness and multilinguality for dialogue evaluation metrics, we propose a novel framework that takes advantage of the strengths of current evaluation models with the newly-established paradigm of prompting Large Language Models (LLMs). Empirical results show our framework achieves state of the art results in terms of mean Spearman correlation scores across several benchmarks and ranks first place on both the Robust and Multilingual tasks of the DSTC11 Track 4 “Automatic Evaluation Metrics for Open-Domain Dialogue Systems”, proving the evaluation capabilities of prompted LLMs.",bcefc74b20649fd41ea05d87a3fa512d2559fc8d,Semantic Scholar,,, +380,alpacafarm a simulation framework for methods that learn from human feedback,"['Yann Dubois', 'Xuechen Li', 'Rohan Taori', 'Tianyi Zhang', 'Ishaan Gulrajani', 'Jimmy Ba', 'Carlos Guestrin', 'Percy Liang', 'Tatsunori Hashimoto']",https://arxiv.org/pdf/2305.14387,2023-05-22,,"Large language models (LLMs) such as ChatGPT have seen widespread adoption due to their strong instruction-following abilities. Developing these LLMs involves a complex yet poorly understood workflow requiring training with human feedback. Replicating and understanding this instruction-following requires tackling three major challenges: the high cost of data collection, the lack of trustworthy evaluation, and the absence of reference method implementations. We address these challenges with AlpacaFarm, a simulator that enables research and development for learning from feedback at a low cost. First, we design LLM prompts to simulate human feedback that are 50x cheaper than crowdworkers and display high agreement with humans. Second, we propose an automatic evaluation and validate it against human instructions obtained on real-world interactions. Third, we contribute reference implementations for several methods (PPO, DPO, best-of-n, expert iteration, and more) that learn from pairwise feedback. Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate eleven models on 10k pairs of real human feedback and show that rankings of models trained in AlpacaFarm match rankings of models trained on human data. As a demonstration of the research possible in AlpacaFarm, we find that methods that use a reward model can substantially improve over supervised fine-tuning and that our reference PPO implementation leads to a +10% improvement in win-rate against Davinci003. We release all components of AlpacaFarm at https://github.com/tatsu-lab/alpaca_farm.",cb6cc7d28d06a0d7c0d3f0d7ee551bbc86dbc3aa,Semantic Scholar,,, +381,lpml llmprompting markup language for mathematical reasoning,"['Ryutaro Yamauchi', 'Sho Sonoda', 'Akiyoshi Sannai', 'Wataru Kumagai']",https://arxiv.org/pdf/2309.13078,2023-09-21,,"In utilizing large language models (LLMs) for mathematical reasoning, addressing the errors in the reasoning and calculation present in the generated text by LLMs is a crucial challenge. In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL). We discovered that by prompting LLMs to generate structured text in XML-like markup language, we could seamlessly integrate CoT and the external tool and control the undesired behaviors of LLMs. With our approach, LLMs can utilize Python computation to rectify errors within CoT. 
We applied our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and demonstrated that combining CoT and Python REPL through the markup language enhances the reasoning capability of LLMs. Our approach enables LLMs to write the markup language and perform advanced mathematical reasoning using only zero-shot prompting.",cf237f3a6ed3e8fd970c15bf1f0bdf94f34da4a9,Semantic Scholar,,, +382,heap hierarchical policies for web actions using llms,"['Paloma Sodhi', 'S. Branavan', 'Ryan McDonald']",https://arxiv.org/pdf/2310.03720,2023-10-05,,"Large language models (LLMs) have demonstrated remarkable capabilities in performing a range of instruction following tasks in few and zero-shot settings. However, teaching LLMs to perform tasks on the web presents fundamental challenges -- combinatorially large open-world tasks and variations across web interfaces. We tackle these challenges by leveraging LLMs to decompose web tasks into a collection of sub-tasks, each of which can be solved by a low-level, closed-loop policy. These policies constitute a shared grammar across tasks, i.e., new web tasks can be expressed as a composition of these policies. We propose a novel framework, Hierarchical Policies for Web Actions using LLMs (HeaP), that learns a set of hierarchical LLM prompts from demonstrations for planning high-level tasks and executing them via a sequence of low-level policies. We evaluate HeaP against a range of baselines on a suite of web tasks, including MiniWoB++, WebArena, a mock airline CRM, as well as live website interactions, and show that it is able to outperform prior works using orders of magnitude less data.",da0a170656a336f82fa8cf00289d1cc944d9b630,Semantic Scholar,,, +383,check your facts and try again improving large language models with external knowledge and automated feedback,"['Baolin Peng', 'Michel Galley', 'Pengcheng He', 'Hao Cheng', 'Yujia Xie', 'Yu Hu', 'Qiuyuan Huang', 'Lars Lidén', 'Zhou Yu', 'Weizhu Chen', 'Jianfeng Gao']",http://arxiv.org/pdf/2302.12813,2023-02-24,,"Large language models (LLMs), such as ChatGPT, are able to generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging mainly due to their tendency to generate hallucinations and their inability to use external knowledge. This paper proposes a LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules. Our system makes the LLM generate responses grounded in external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of a LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of scenarios, task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.",e5c72b92c48d68594b290c84a8904da7c8335554,Semantic Scholar,,, +384,autoplan automatic planning of interactive decisionmaking tasks with large language models,"['Siqi Ouyang', 'Lei Li']",https://arxiv.org/pdf/2305.15064,2023-05-24,,"Recent large language models (LLMs) are promising for making decisions in grounded environments. 
However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8% on HotpotQA. The code is available at https://github.com/owaski/AutoPlan.",e814deb54d154aad19ae2b72a2e4dd3376175bb5,Semantic Scholar,,, +385,promptagator fewshot dense retrieval from 8 examples,"['Zhuyun Dai', 'Vincent Zhao', 'Ji Ma', 'Yi Luan', 'Jianmo Ni', 'Jing Lu', 'A. Bakalov', 'Kelvin Guu', 'Keith B. Hall', 'Ming-Wei Chang']",http://arxiv.org/pdf/2209.11755,2022-09-23,,"Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-base Query Generation for Retriever (Promptagator), which leverages large language models (LLM) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples {without} using Natural Questions or MS MARCO to train %question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.",e86009d9f9b1cdf083a48d087552bc4153784451,Semantic Scholar,,, +386,sgptod building task bots effortlessly via schemaguided llm prompting,"['Xiaoying Zhang', 'Baolin Peng', 'Kun Li', 'Jingyan Zhou', 'Helen M. Meng']",http://arxiv.org/pdf/2305.09067,2023-05-15,,"Building end-to-end task bots and maintaining their integration with new functionalities using minimal human efforts is a long-standing challenge in dialog research. Recently large language models (LLMs) have demonstrated exceptional proficiency in conversational engagement and adherence to instructions across various downstream tasks. In this work, we introduce SGP-TOD, Schema-Guided Prompting for building Task-Oriented Dialog systems effortlessly based on LLMs. 
Utilizing the symbolic knowledge -- task schema, we instruct fixed LLMs to generate appropriate responses on novel tasks, circumventing the need for training data. Specifically, SGP-TOD comprises three components: a LLM for engaging with users, a DST Prompter to aid the LLM with dialog state tracking, which is then used to retrieve database items, and a Policy Prompter to elicit proper responses adhering to the provided dialog policy. Experimental results on Multiwoz, RADDLE and STAR datasets show that our training-free strategy SGP-TOD, without any task-specific data, yields state-of-the-art (SOTA) zero-shot performance, greatly surpasses the few-shot approaches. In a domain-extension setting, SGP-TOD aptly adapts to new functionalities by merely adding supplementary schema rules. We make our code and data publicly available.",ec56f49bef8925dc8931cc261ab3aca4dd36ad2d,Semantic Scholar,,, +387,prefer prompt ensemble learning via feedbackreflectrefine,"['Chenrui Zhang', 'Lina Liu', 'Jinpeng Wang', 'Chuyuan Wang', 'Xiaodi Sun', 'Hongyu Wang', 'Mingchen Cai']",https://arxiv.org/pdf/2308.12033,2023-08-23,,"As an effective tool for eliciting the power of Large Language Models (LLMs), prompting has recently demonstrated unprecedented abilities across a variety of complex tasks. To further improve the performance, prompt ensemble has attracted substantial interest for tackling the hallucination and instability of LLMs. However, existing methods usually adopt a two-stage paradigm, which requires a pre-prepared set of prompts with substantial manual effort, and is unable to perform directed optimization for different weak learners. In this paper, we propose a simple, universal, and automatic method named PREFER (Pompt Ensemble learning via Feedback-Reflect-Refine) to address the stated limitations. Specifically, given the fact that weak learners are supposed to focus on hard examples during boosting, PREFER builds a feedback mechanism for reflecting on the inadequacies of existing weak learners. Based on this, the LLM is required to automatically synthesize new prompts for iterative refinement. Moreover, to enhance stability of the prompt effect evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and is beneficial for both feedback and weight calculation in boosting. Extensive experiments demonstrate that our PREFER achieves state-of-the-art performance in multiple types of tasks by a significant margin. We have made our code publicly available.",f53a4f34757d1f237446b4d887d5323f2a17ed02,Semantic Scholar,,, +388,empowering private tutoring by chaining large language models,"['Yulin Chen', 'Ning Ding', 'Hai-Tao Zheng', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou']",https://arxiv.org/pdf/2309.08112,2023-09-15,,"Artificial intelligence has been applied in various aspects of online education to facilitate teaching and learning. However, few approaches has been made toward a complete AI-powered tutoring system. In this work, we explore the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs), covering automatic course planning and adjusting, tailored instruction, and flexible quiz evaluation. To make the system robust to prolonged interaction and cater to individualized education, the system is decomposed into three inter-connected core processes-interaction, reflection, and reaction. 
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules. Tools are LLMs prompted to execute one specific task at a time, while memories are data storage that gets updated during education process. Statistical results from learning logs demonstrate the effectiveness and mechanism of each tool usage. Subjective feedback from human users reveal the usability of each function, and comparison with ablation systems further testify the benefits of the designed processes in long-term interaction.",f7842099bbde74dc5aec70bb6af85b88de08ed13,Semantic Scholar,,, +389,promptchainer chaining large language model prompts through visual programming,"['Tongshuang Sherry Wu', 'Ellen Jiang', 'Aaron Donsbach', 'J. Gray', 'A. Molina', 'Michael Terry', 'Carrie J. Cai']",https://arxiv.org/pdf/2203.06566,2022-03-13,,"While LLMs have made it possible to rapidly prototype new ML functionalities, many real-world applications involve complex tasks that cannot be easily handled via a single run of an LLM. Recent work has found that chaining multiple LLM runs together (with the output of one step being the input to the next) can help users accomplish these more complex tasks, and in a way that is perceived to be more transparent and controllable. However, it remains unknown what users need when authoring their own LLM chains – a key step to lowering the barriers for non-AI-experts to prototype AI-infused applications. In this work, we explore the LLM chain authoring process. We find from pilot studies that users need support transforming data between steps of a chain, as well as debugging the chain at multiple granularities. To address these needs, we designed PromptChainer, an interactive interface for visually programming chains. Through case studies with four designers and developers, we show that PromptChainer supports building prototypes for a range of applications, and conclude with open questions on scaling chains to even more complex tasks, as well as supporting low-fi chain prototyping.",0f733817e82026f7c29909a51cb4df7d2685f0e7,Semantic Scholar,,, +390,prompter utilizing large language model prompting for a data efficient embodied instruction following,"['Y. Inoue', 'Hiroki Ohashi']",https://arxiv.org/pdf/2211.03267,2022-11-07,,"Embodied Instruction Following (EIF) studies how mobile manipulator robots should be controlled to accomplish long-horizon tasks specified by natural language instructions. While most research on EIF are conducted in simulators, the ultimate goal of the field is to deploy the agents in real life. As such, it is important to minimize the data cost required for training an agent, to help the transition from sim to real. However, many studies only focus on the performance and overlook the data cost -- modules that require separate training on extra data are often introduced without a consideration on deployability. In this work, we propose FILM++ which extends the existing work FILM with modifications that do not require extra data. While all data-driven modules are kept constant, FILM++ more than doubles FILM's performance. Furthermore, we propose Prompter, which replaces FILM++'s semantic search module with language model prompting. Unlike FILM++'s implementation that requires training on extra sets of data, no training is needed for our prompting based implementation while achieving better or at least comparable performance. 
Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with high-level instructions only and with step-by-step instructions, respectively, outperforming the previous state of the art by 6.57% and 10.31%.",2d30d800e946d3699d9c41bb95c36a6db63676e7,Semantic Scholar,,, +391,evallm interactive evaluation of large language model prompts on userdefined criteria,"['Tae Soo Kim', 'Yoonjoo Lee', 'Jamin Shin', 'Young-Ho Kim', 'Juho Kim']",https://arxiv.org/pdf/2309.13633,2023-09-24,,"By simply composing prompts, developers can prototype novel generative applications with Large Language Models (LLMs). To refine prototypes into products, however, developers must iteratively revise prompts by evaluating outputs to diagnose weaknesses. Formative interviews (N=8) revealed that developers invest significant effort in manually evaluating outputs as they assess context-specific and subjective criteria. We present EvalLM, an interactive system for iteratively refining prompts by evaluating multiple outputs on user-defined criteria. By describing criteria in natural language, users can employ the system's LLM-based evaluator to get an overview of where prompts excel or fail, and improve these based on the evaluator's feedback. A comparative study (N=12) showed that EvalLM, when compared to manual evaluation, helped participants compose more diverse criteria, examine twice as many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond prompts, our work can be extended to augment model evaluation and alignment in specific application contexts.",a0d83f9e15e722f23c14eb83cb2f87c1d1ea6400,Semantic Scholar,,, +392,flatnessaware prompt selection improves accuracy and sample efficiency,"['Lingfeng Shen', 'Weiting Tan', 'Boyuan Zheng', 'Daniel Khashabi']",http://arxiv.org/pdf/2305.10713,2023-05-18,,"With growing capabilities of large language models, prompting them has become the dominant way to access them. This has motivated the development of strategies for automatically selecting effective language prompts. In this paper, we introduce prompt flatness, a new metric to quantify the expected utility of a language prompt. This metric is inspired by flatness regularization in statistical learning that quantifies the robustness of the model towards its parameter perturbations. We provide theoretical foundations for this metric and its relationship with other prompt selection metrics, providing a comprehensive understanding of existing methods. Empirically, we show that combining prompt flatness with existing metrics improves both performance and sample efficiency. Our metric outperforms the previous prompt selection metrics with an average increase of 5% in accuracy and 10% in Pearson correlation across 6 classification benchmarks.",b8ba16a107621f760e7830ddaab8c3d5c5ff06b0,Semantic Scholar,,, +393,ai chains transparent and controllable humanai interaction by chaining large language model prompts,"['Tongshuang Sherry Wu', 'Michael Terry', 'Carrie J. Cai']",https://dl.acm.org/doi/pdf/10.1145/3491102.3517582,2021-10-04,,"Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. 
We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by “unit-testing” sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.",d3640eb3b542eaf36fee2261f037a6bf0d8eac9c,Semantic Scholar,,, +394,terminologyaware translation with constrained decoding and large language model prompting,"['Nikolay Bogoychev', 'Pinzhen Chen']",https://arxiv.org/pdf/2310.05824,2023-10-09,,"Terminology correctness is important in the downstream application of machine translation, and a prevalent way to ensure this is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach which can be domain-independent and requires minimal manual efforts. We annotate random source words with pseudo-terminology translations obtained from word alignment to first train a terminology-aware model. Further, we explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and the large language model refinement process can further improve terminology recall.",e90d30148ecf633db3bbabdcfa3a0ec06236e0d1,Semantic Scholar,,, +395,a prefrontal cortexinspired architecture for planning in large language models,"['Taylor Webb', 'S. S. Mondal', 'Chi Wang', 'Brian Krabach', 'Ida Momennejad']",https://arxiv.org/pdf/2310.00194,2023-09-30,,"Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC). These modules perform functions such as conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are sometimes capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a black box architecture with multiple LLM-based (GPT-4) modules. The architecture improves planning through the interaction of specialized PFC-inspired modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate the combined architecture on two challenging planning tasks -- graph traversal and Tower of Hanoi -- finding that it yields significant improvements over standard LLM methods (e.g., zero-shot prompting or in-context learning). 
These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs.",31d8bdef7b81e107bf04f226d877fd5aa2f51d34,Semantic Scholar,,, +396,large language models are stateoftheart evaluators of translation quality,"['Tom Kocmi', 'C. Federmann']",http://arxiv.org/pdf/2302.14520,2023-02-28,,"We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the availability of the reference. We investigate seven versions of GPT models, including ChatGPT. We show that our method for translation quality assessment only works with GPT 3.5 and larger models. Comparing to results from WMT22’s Metrics shared task, our method achieves state-of-the-art accuracy in both modes when compared to MQM-based human labels. Our results are valid on the system level for all three WMT22 Metrics shared task language pairs, namely English into German, English into Russian, and Chinese into English. This provides a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations. We publicly release all our code and prompt templates used for the experiments described in this work, as well as all corresponding scoring results, to allow for external validation and reproducibility.",4161ad2d2495d8af1d62dc5e71882bde642cd1c1,Semantic Scholar,,, +397,a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models,"['J. Allingham', 'Jie Ren', 'Michael W. Dusenberry', 'J. Liu', 'Xiuye Gu', 'Yin Cui', 'Dustin Tran', 'Balaji Lakshminarayanan']",https://arxiv.org/pdf/2302.06235,2023-02-13,,"Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prompt engineering typically requires hand-crafting a set of prompts for individual downstream tasks. In this work, we aim to automate this prompt engineering and improve zero-shot accuracy through prompt ensembling. In particular, we ask""Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?"". We demonstrate that this is possible. In doing so, we identify several pathologies in a naive prompt scoring method where the score can be easily overconfident due to biases in pre-training and test data, and we propose a novel prompt scoring method that corrects for the biases. Using our proposed scoring method to create a weighted average prompt ensemble, our method outperforms equal average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its variants, and 11 fine-grained classification benchmarks, all while being fully automatic, optimization-free, and not requiring access to labeled validation data.",877e27a1d89095fcf686ab675f62a8432d3285ee,Semantic Scholar,,, +398,controlling personality style in dialogue with zeroshot promptbased learning,"['Angela Ramirez', 'Mamon Alsalihy', 'Kartik Aggarwal', 'Cecilia Li', 'Liren Wu', 'M. 
Walker']",http://arxiv.org/pdf/2302.03848,2023-02-08,,"Prompt-based or in-context learning has achieved high zero-shot performance on many natural language generation (NLG) tasks. Here we explore the performance of prompt-based learning for simultaneously controlling the personality and the semantic accuracy of an NLG for task-oriented dialogue. We experiment with prompt-based learning on the PERSONAGE restaurant recommendation corpus to generate semantically and stylistically-controlled text for 5 different Big-5 personality types: agreeable, disagreeable, conscientious, unconscientious, and extravert. We test two different classes of discrete prompts to generate utterances for a particular personality style: (1) prompts that demonstrate generating directly from a meaning representation that includes a personality specification; and (2) prompts that rely on first converting the meaning representation to a textual pseudo-reference, and then using the pseudo-reference in a textual style transfer (TST) prompt. In each case, we show that we can vastly improve performance by over-generating outputs and ranking them, testing several ranking functions based on automatic metrics for semantic accuracy, personality-match, and fluency. We also test whether NLG personality demonstrations from the restaurant domain can be used with meaning representations for the video game domain to generate personality stylized utterances about video games. Our findings show that the TST prompts produces the highest semantic accuracy (78.46% for restaurants and 87.6% for video games) and personality accuracy (100% for restaurants and 97% for video games). Our results on transferring personality style to video game utterances are surprisingly good. To our knowledge, there is no previous work testing the application of prompt-based learning to simultaneously controlling both style and semantic accuracy in NLG.",9c39e942b87cbada41a4a52364f996915c7c2d98,Semantic Scholar,,, +399,steps a benchmark for order reasoning in sequential tasks,"['Weizhi Wang', 'Hong Wang', 'Xi Yan']",http://arxiv.org/pdf/2306.04441,2023-06-07,,"Various human activities can be abstracted into a sequence of actions in natural text, i.e. cooking, repairing, manufacturing, etc. Such action sequences heavily depend on the executing order, while disorder in action sequences leads to failure of further task execution by robots or AI agents. Therefore, to verify the order reasoning capability of current neural models in sequential tasks, we propose a challenging benchmark , named STEPS. STEPS involves two subtask settings, focusing on determining the rationality of given next step in recipes and selecting the reasonable step from the multi-choice question, respectively. We describe the data construction and task formulations, and benchmark most of significant Large Language Models (LLMs). The experimental results demonstrate 1) The commonsense reasoning of action orders in sequential tasks are challenging to resolve via zero-shot prompting or few-shot in-context learning for LLMs; 2) Prompting method still significantly lags behind tuning-based method on STEPS.",a8a71f9b10b281e796fdc2ee7aaec40067739574,Semantic Scholar,,, +400,prompting large language model for machine translation a case study,"['Biao Zhang', 'B. Haddow', 'Alexandra Birch']",http://arxiv.org/pdf/2301.07069,2023-01-17,,"Research on prompting has shown excellent performance with little or even no supervised training across many tasks. 
However, prompting for machine translation is still under-explored in the literature. We fill this gap by offering a systematic study on prompting strategies for translation, examining various factors for prompt template and demonstration example selection. We further explore the use of monolingual data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning in prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the testbed show that 1) the number and the quality of prompt examples matter, where using suboptimal examples degenerates translation; 2) several features of prompt examples, such as semantic similarity, show significant Spearman correlation with their prompting performance; yet, none of the correlations are strong enough; 3) using pseudo parallel prompt examples constructed from monolingual data via zero-shot prompting could improve translation; and 4) improved performance is achievable by transferring knowledge from prompt examples selected in other settings. We finally provide an analysis on the model outputs and discuss several problems that prompting still suffers from.",c879413103f8950bdd414c7f60a39bd7748c9be8,Semantic Scholar,,, +401,a practical survey on zeroshot prompt design for incontext learning,['Yinheng Li'],https://doi.org/10.26615/978-954-452-092-2_069,2023-09-22,,"The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing(NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single “best” prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.",cd7d770eabb4dab6894d9f91d2c3bc337e94a4e1,Semantic Scholar,,, +402,developing a scalable benchmark for assessing large language models in knowledge graph engineering,"['Lars Meyer', 'Johannes Frey', 'K. Junghanns', 'Felix Brei', 'Kirill Bulert', 'Sabine Grunder-Fahrer', 'Michael Martin']",https://arxiv.org/pdf/2308.16622,2023-08-31,,"As the field of Large Language Models (LLMs) evolves at an accelerated pace, the critical need to assess and monitor their performance emerges. We introduce a benchmarking framework focused on knowledge graph engineering (KGE) accompanied by three challenges addressing syntax and error correction, facts extraction and dataset generation. We show that while being a useful tool, LLMs are yet unfit to assist in knowledge graph generation with zero-shot prompting. 
Consequently, our LLM-KG-Bench framework provides automatic evaluation and storage of LLM responses as well as statistical data and visualization tools to support tracking of prompt engineering and model performance.",d0e3af5f20a451c04770929979d7a8406a1a2466,Semantic Scholar,,, +403,mitigating word bias in zeroshot promptbased classifiers,"['Adian Liusie', 'Potsawee Manakul', 'M. Gales']",https://arxiv.org/pdf/2309.04992,2023-09-10,,"Prompt-based classifiers are an attractive approach for zero-shot classification. However, the precise choice of the prompt template and label words can largely influence performance, with semantically equivalent settings often showing notable performance difference. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards classes. To address this problem, it is possible to optimise classification thresholds on a labelled data set, however, this mitigates some of the advantages of prompt-based classifiers. This paper instead approaches this problem by examining the expected marginal probabilities of the classes. Here, probabilities are reweighted to have a uniform prior over classes, in an unsupervised fashion. Further, we draw a theoretical connection between the class priors and the language models' word prior, and offer the ability to set a threshold in a zero-resource fashion. We show that matching class priors correlates strongly with the oracle upper bound performance and demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.",e7d21ad4da122bf1db19e4fda57bf94c1dfa24a4,Semantic Scholar,,, +404,how far are large language models from agents with theoryofmind,"['Pei Zhou', 'Aman Madaan', 'Srividya Pranavi Potharaju', 'Aditya Gupta', 'Kevin R. McKee', 'Ari Holtzman', 'J. Pujara', 'Xiang Ren', 'Swaroop Mishra', 'Aida Nematzadeh', 'Shyam Upadhyay', 'Manaal Faruqui']",https://arxiv.org/pdf/2310.03051,2023-10-04,,"""Thinking is for Doing.""Humans can infer other people's mental states from observations--an ability called Theory-of-Mind (ToM)--and subsequently act pragmatically on those inferences. Existing question answering benchmarks such as ToMi ask models questions to make inferences about beliefs of characters in a story, but do not test whether models can then use these inferences to guide their actions. We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios. Experiments on T4D demonstrate that LLMs such as GPT-4 and PaLM 2 seemingly excel at tracking characters' beliefs in stories, but they struggle to translate this capability into strategic action. Our analysis reveals the core challenge for LLMs lies in identifying the implicit inferences about mental states without being explicitly asked about as in ToMi, that lead to choosing the correct action in T4D. To bridge this gap, we introduce a zero-shot prompting framework, Foresee and Reflect (FaR), which provides a reasoning structure that encourages LLMs to anticipate future challenges and reason about potential actions. FaR boosts GPT-4's performance from 50% to 71% on T4D, outperforming other prompting methods such as Chain-of-Thought and Self-Ask. 
Moreover, FaR generalizes to diverse out-of-distribution story structures and scenarios that also require ToM inferences to choose an action, consistently outperforming other methods including few-shot in-context learning.",ed40889e11e812ef33578506844be06d713f6092,Semantic Scholar,,, +405,selficl zeroshot incontext learning with selfgenerated demonstrations,"['Wei-Lin Chen', 'Cheng-Kuang Wu', 'Hsin-Hsi Chen']",http://arxiv.org/pdf/2305.15035,2023-05-24,,"Large language models (LLMs) have exhibited striking in-context learning (ICL) ability to adapt to target tasks with a few input-output demonstrations. For better ICL, different methods are proposed to select representative demonstrations from existing training corpora. However, such settings are not aligned with real-world practices, as end-users usually query LMs without access to demonstration pools. In this work, we introduce Self-ICL -- a simple framework which bootstraps LMs' intrinsic capabilities to perform zero-shot ICL. Given a test input, Self-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard tasks shows Self-ICL outperforms zero-shot baselines on both average accuracy and head-to-head comparison. Moreover, with zero-shot chain-of-thought, Self-ICL achieves results comparable to using real demonstrations. Additionally, we conduct a range of analyses to validate Self-ICL's effectiveness and provide insights for its behaviors under different settings.",fe425e341cf646689e42adead17f14eeac5d03e6,Semantic Scholar,,, +406,prodigy enabling incontext learning over graphs,"['Qian Huang', 'Hongyu Ren', 'Peng Chen', 'Gregor Krvzmanc', 'D. Zeng', 'Percy Liang', 'J. Leskovec']",http://arxiv.org/pdf/2305.12600,2023-05-21,,"In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse \textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel \emph{prompt graph} representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pretrained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18\% on average across all setups. 
Moreover, it also outperforms standard finetuning with limited data by 33\% on average with in-context learning.",0088c9f4d50706c7ab71efa13bcb4b42cf2058e2,Semantic Scholar,,, +407,outfox llmgenerated essay detection through incontext learning with adversarially generated examples,"['Ryuto Koike', 'Masahiro Kaneko', 'Naoaki Okazaki']",https://arxiv.org/pdf/2307.11729,2023-07-21,,"Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of misuse of LLMs and demands the development of detectors to identify LLM-generated texts. However, existing detectors lack robustness against attacks: they degrade detection accuracy by simply paraphrasing LLM-generated texts. Furthermore, a malicious user might attempt to deliberately evade the detectors based on detection results, but this has not been assumed in previous studies. In this paper, we propose OUTFOX, a framework that improves the robustness of LLM-generated-text detectors by allowing both the detector and the attacker to consider each other's output. In this framework, the attacker uses the detector's prediction labels as examples for in-context learning and adversarially generates essays that are harder to detect, while the detector uses the adversarially generated essays as examples for in-context learning to learn to detect essays from a strong attacker. Experiments in the domain of student essays show that the proposed detector improves the detection performance on the attacker-generated texts by up to +41.3 points in F1-score. Furthermore, the proposed detector shows a state-of-the-art detection performance: up to 96.9 points in F1-score, beating existing detectors on non-attacked texts. Finally, the proposed attacker drastically degrades the performance of detectors by up to -57.0 points F1-score, massively outperforming the baseline paraphrasing method for evading detection.",0095acc4f2c3255cf38fdf844003c97858adb418,Semantic Scholar,,, +408,naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers,"['Kai Shen', 'Zeqian Ju', 'Xu Tan', 'Yanqing Liu', 'Yichong Leng', 'Lei He', 'Tao Qin', 'Sheng Zhao', 'Jiang Bian']",http://arxiv.org/pdf/2304.09116,2023-04-18,,"Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issue, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to get the quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important to achieve diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. 
NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://speechresearch.github.io/naturalspeech2.",00c367427d9135209d84008e6cb5e90f0adba881,Semantic Scholar,,, +409,demonstratesearchpredict composing retrieval and language models for knowledgeintensive nlp,"['O. Khattab', 'Keshav Santhanam', 'Xiang Lisa Li', 'David Leo Wright Hall', 'Percy Liang', 'Christopher Potts', 'M. Zaharia']",http://arxiv.org/pdf/2212.14024,2022-12-28,,"Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LM) and retrieval models (RM). Existing work has combined these in simple""retrieve-then-read""pipelines in which the RM retrieves passages that are inserted into the LM prompt. To begin to fully realize the potential of frozen LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that relies on passing natural language texts in sophisticated pipelines between an LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware demonstrations, search for relevant passages, and generate grounded predictions, systematically breaking down problems into small transformations that the LM and RM can handle more reliably. We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings, establishing in early evaluations new state-of-the-art in-context learning results and delivering 37-120%, 8-39%, and 80-290% relative gains against the vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a contemporaneous self-ask pipeline, respectively. We release DSP at https://github.com/stanfordnlp/dsp",03532123ccffae8d411264320e8a5ae2b6eddea0,Semantic Scholar,,, +410,incontext analogical reasoning with pretrained language models,"['Xiaoyang Hu', 'Shane Storks', 'Richard L. Lewis', 'J. Chai']",http://arxiv.org/pdf/2305.17626,2023-05-28,,"Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven’s Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs’ analogical reasoning. 
Our detailed analysis reveals insights on the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.",0366177b44ed13d86b9d704a3a82ea3750e5abed,Semantic Scholar,,, +411,promptaugmented linear probing scaling beyond the limit of fewshot incontext learners,"['Hyunsoo Cho', 'Hyuhng Joon Kim', 'Junyeob Kim', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2212.10873,2022-12-21,,"Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, the ICL performance does not scale well with the number of available training sample as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of the pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations via tailoring input into a more conceivable form. Throughout in-depth investigations on various datasets, we verified that PALP significantly closes the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario.",06edda0310b4ec7c5012d012349252a3a77521b6,Semantic Scholar,,, +412,bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games,"['Ruoyao Wang', 'G. Todd', 'Xingdi Yuan', 'Ziang Xiao', 'Marc-Alexandre Côté', 'Peter Alexander Jansen']",http://arxiv.org/pdf/2305.14879,2023-05-24,,"In this work, we investigate the capacity of language models to generate explicit, interpretable, and interactive world models of scientific and common-sense reasoning tasks. We operationalize this as a task of generating text games, expressed as hundreds of lines of Python code. To facilitate this task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We empirically demonstrate that GPT-4 can use these games as templates for single-shot in-context learning, successfully producing runnable games on unseen topics in 28% of cases. When allowed to self-reflect on program errors, game runnability substantially increases to 57%. While evaluating simulation fidelity is labor-intensive, we introduce a suite of automated metrics to assess game fidelity, technical validity, adherence to task specifications, and winnability, showing a high degree of agreement with expert human ratings. We pose this as a challenge task to spur further development at the juncture of world modeling and code generation.",070b91f80ac118b910c1d2ab5be9f65f685979fe,Semantic Scholar,,, +413,exploring diverse incontext configurations for image captioning,"['Xu Yang', 'Yongliang Wu', 'Ming-Hsuan Yang', 'Haokun Chen', 'Xin Geng']",http://arxiv.org/pdf/2305.14800,2023-05-24,,"After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. 
Recently, researchers in Vision-Language (VL) domains also develop their few-shot learners, while they only use the simplest way, ie., randomly sampling, to configure in-context image-text pairs. In order to explore the effects of varying configurations on VL in-context learning, we devised four strategies for image selection and four for caption assignment to configure in-context image-text pairs for image captioning. Here Image Captioning is used as the case study since it can be seen as the visually-conditioned LM. Our comprehensive experiments yield two counter-intuitive but valuable insights, highlighting the distinct characteristics of VL in-context learning due to multi-modal synergy, as compared to the NLP case. Furthermore, in our exploration of optimal combination strategies, we observed an average performance enhancement of 20.9 of CIDEr scores compared to the baseline. The code is given in https://github.com/yongliang-wu/ExploreCfg.",0744783bbefc12b2b1383bed137e8a80061274b7,Semantic Scholar,,, +414,complementary explanations for effective incontext learning,"['Xi Ye', 'Srini Iyer', 'Asli Celikyilmaz', 'Ves Stoyanov', 'Greg Durrett', 'Ramakanth Pasunuru']",http://arxiv.org/pdf/2211.13892,2022-11-25,,"Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective. This work aims to better understand the mechanisms by which explanations are used for in-context learning. We first study the impact of two different factors on the performance of prompts with explanations: the computation trace (the way the solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations. We further study how to form maximally effective sets of explanations for solving a given test query. We find that LLMs can benefit from the complementarity of the explanation set: diverse reasoning skills shown by different exemplars can lead to better performance. Therefore, we propose a maximal marginal relevance-based exemplar selection approach for constructing exemplar sets that are both relevant as well as complementary, which successfully improves the in-context learning performance across three real-world tasks on multiple LLMs.",097dc73d5d422b3c09286e72d16b2561ae5fb395,Semantic Scholar,,, +415,neural machine translation models can learn to be fewshot learners,"['Raphael Reinauer', 'P. Simianer', 'Kaden Uhlig', 'Johannes E. M. Mosig', 'Joern Wuebker']",https://arxiv.org/pdf/2309.08590,2023-09-15,,"The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. 
the ability to reproduce a specific term after being shown a single example.",09a85806442373f167e45eaf662a7914df048b10,Semantic Scholar,,, +416,good examples make a faster learner simple demonstrationbased learning for lowresource ner,"['Dong-Ho Lee', 'Mahak Agarwal', 'Akshen Kadakia', 'Takashi Shibuya', 'J. Pujara', 'Xiang Ren']",https://aclanthology.org/2022.acl-long.192.pdf,2021-10-16,,"Recent advances in prompt-based learning have shown strong results on few-shot text classification by using cloze-style templates.Similar attempts have been made on named entity recognition (NER) which manually design templates to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning. We perform a systematic study on demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use. Results on in-domain learning and domain adaptation show that the model’s performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4-17% improvement on 25 train instances). We also find that good demonstration can save many labeled examples and consistency in demonstration contributes to better performance.",0a2ac054c533314c0659f3b139388527df0d42f3,Semantic Scholar,,, +417,prompting language models for linguistic structure,"['Terra Blevins', 'Hila Gonen', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2211.07830,2022-11-15,,"Although pretrained language models (PLMs) can be prompted to perform a wide range of language tasks, it remains an open question how much this ability comes from generalizable linguistic understanding versus surface-level lexical patterns. To test this, we present a structured prompting approach for linguistic structured prediction tasks, allowing us to perform zero- and few-shot sequence tagging with autoregressive PLMs. We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking, demonstrating strong few-shot performance in all cases. We also find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels. These findings indicate that the in-context learning ability and linguistic knowledge of PLMs generalizes beyond memorization of their training data.",0a67a5e3f4125445ed84f2db3c92429010aad68a,Semantic Scholar,,, +418,improving the reliability of large language models by leveraging uncertaintyaware incontext learning,"['Yuchen Yang', 'Houqiang Li', 'Yanfeng Wang', 'Yu Wang']",https://arxiv.org/pdf/2310.04782,2023-10-07,,"In recent years, large-scale language models (LLMs) have gained attention for their impressive text generation capabilities. However, these models often face the challenge of""hallucination,""which undermines their reliability. In this study, we introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty. 
Human-defined methods for estimating uncertainty typically assume that""uncertainty is lower when the model's response is correct compared to when it is incorrect.""However, setting a precise threshold to distinguish correctness is challenging. Therefore, we introduce uncertainty information as an intermediary variable that implicitly influences the model's behavior. Our innovative uncertainty-aware in-context learning framework involves fine-tuning the LLM using a calibration dataset. Our aim is to improve the model's responses by filtering out answers with high uncertainty while considering the model's knowledge limitations. We evaluate the model's knowledge by examining multiple responses to the same question for the presence of a correct answer. When the model lacks relevant knowledge, the response should indicate that the question cannot be answered. Conversely, when the model has relevant knowledge, the response should provide the correct answer. Extensive experiments confirm the effectiveness of our framework, leading to two key findings. First, the logit output values of the LLM partly reflect inherent uncertainty. Second, our model autonomously recognizes uncertainty, resulting in improved responses.",0aa5940fda7c994675d08c41eca2a6909eb6d205,Semantic Scholar,,, +419,how do incontext examples affect compositional generalization,"['Shengnan An', 'Zeqi Lin', 'Qiang Fu', 'B. Chen', 'Nanning Zheng', 'Jian-Guang Lou', 'D. Zhang']",http://arxiv.org/pdf/2305.04835,2023-05-08,,"Compositional generalization–understanding unseen combinations of seen primitives–is an essential reasoning capability in human intelligence.The AI community mainly studies this capability by fine-tuning neural networks on lots of training samples, while it is still unclear whether and how in-context learning–the prevailing few-shot paradigm based on large language models–exhibits compositional generalization.In this paper, we present CoFe, a test suite to investigate in-context compositional generalization.We find that the compositional generalization performance can be easily affected by the selection of in-context examples, thus raising the research question what the key factors are to make good in-context examples for compositional generalization.We study three potential factors: similarity, diversity and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple.Furthermore, two strong limitations are observed: in-context compositional generalization on fictional words is much weaker than that on commonly used ones; it is still critical that the in-context examples should cover required linguistic structures, even though the backbone model has been pre-trained on large corpus.We hope our analysis would facilitate the understanding and utilization of in-context learning paradigm.",0ae12d63f77f40b430f17c791a5191ff5fee5086,Semantic Scholar,,, +420,chatrec towards interactive and explainable llmsaugmented recommender system,"['Yunfan Gao', 'Tao Sheng', 'Youlin Xiang', 'Yun Xiong', 'Haofen Wang', 'Jiawei Zhang']",http://arxiv.org/pdf/2303.14524,2023-03-25,,"Large language models (LLMs) have demonstrated their significant potential to be applied for addressing various application tasks. However, traditional recommender systems continue to face great challenges such as poor interactivity and explainability, which actually also hinder their broad deployment in real-world systems. 
To address these limitations, this paper proposes a novel paradigm called Chat-Rec (ChatGPT Augmented Recommender System) that innovatively augments LLMs for building conversational recommender systems by converting user profiles and historical interactions into prompts. Chat-Rec is demonstrated to be effective in learning user preferences and establishing connections between users and products through in-context learning, which also makes the recommendation process more interactive and explainable. What's more, within the Chat-Rec framework, user's preferences can transfer to different products for cross-domain recommendations, and prompt-based injection of information into LLMs can also handle the cold-start scenarios with new items. In our experiments, Chat-Rec effectively improve the results of top-k recommendations and performs better in zero-shot rating prediction task. Chat-Rec offers a novel approach to improving recommender systems and presents new practical scenarios for the implementation of AIGC (AI generated content) in recommender system studies.",0cfdd655100055f234fd23ebecd915504b8e00e3,Semantic Scholar,,, +421,maple multimodal prompt learning,"['Muhammad Uzair Khattak', 'H. Rasheed', 'Muhammad Maaz', 'Salman Khan', 'F. Khan']",https://arxiv.org/pdf/2210.03117,2022-10-06,,"Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.",0d0dbfb1b315a43216020abaf74d289456198219,Semantic Scholar,,, +422,a theory of emergent incontext learning as implicit structure induction,"['Michael Hahn', 'Navin Goyal']",http://arxiv.org/pdf/2303.07971,2023-03-14,,"Scaling large language models (LLMs) leads to an emergent capacity to learn in-context from example demonstrations. Despite progress, theoretical understanding of this phenomenon remains limited. We argue that in-context learning relies on recombination of compositional operations found in natural language data. 
We derive an information-theoretic bound showing how in-context learning abilities arise from generic next-token prediction when the pretraining distribution has sufficient amounts of compositional structure, under linguistically motivated assumptions. A second bound provides a theoretical justification for the empirical success of prompting LLMs to output intermediate steps towards an answer. To validate theoretical predictions, we introduce a controlled setup for inducing in-context learning; unlike previous approaches, it accounts for the compositional nature of language. Trained transformers can perform in-context learning for a range of tasks, in a manner consistent with the theoretical results. Mirroring real-world LLMs in a miniature setup, in-context learning emerges when scaling parameters and data, and models perform better when prompted to output intermediate steps. Probing shows that in-context learning is supported by a representation of the input's compositional structure. Taken together, these results provide a step towards theoretical understanding of emergent behavior in large language models.",0ea7fc93d4947d9024ccaa202987a2070683bc1f,Semantic Scholar,,, +423,are humangenerated demonstrations necessary for incontext learning,"['Rui Li', 'Guoyin Wang', 'Jiwei Li']",https://arxiv.org/pdf/2309.14681,2023-09-26,,"Despite the promising few-shot ability of large language models (LLMs), the standard paradigm of In-context Learning (ICL) suffers the disadvantages of susceptibility to selected demonstrations and the intricacy to generate these demonstrations. In this paper, we raise the fundamental question that whether human-generated demonstrations are necessary for ICL. To answer this question, we propose self-contemplation prompting strategy (SEC), a paradigm free from human-crafted demonstrations. The key point of SEC is that, instead of using hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create demonstrations on their own, based on which the final output is generated. SEC is a flexible framework and can be adapted to both the vanilla ICL and the chain-of-thought (CoT), but with greater ease: as the manual-generation process of both examples and rationale can be saved. Extensive experiments in arithmetic reasoning, commonsense reasoning, multi-task language understanding, and code generation benchmarks, show that SEC, which does not require hand-crafted demonstrations, significantly outperforms the zero-shot learning strategy, and achieves comparable results to ICL with hand-crafted demonstrations. This demonstrates that, for many tasks, contemporary LLMs possess a sufficient level of competence to exclusively depend on their own capacity for decision making, removing the need for external training data. Code is available at https://github.com/ruili33/SEC.",0f45608ddc01b3e192f3490330f4c4b8de074f79,Semantic Scholar,,, +424,honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model,"['Jacob Eisenstein', 'D. Andor', 'Bernd Bohnet', 'Michael Collins', 'David M. Mimno']",http://arxiv.org/pdf/2210.02498,2022-10-05,,"Explainable question answering systems should produce not only accurate answers but also rationales that justify their reasoning and allow humans to check their work. But what sorts of rationales are useful and how can we train systems to produce them? 
We propose a new style of rationale for open-book question answering, called \emph{markup-and-mask}, which combines aspects of extractive and free-text explanations. In the markup phase, the passage is augmented with free-text markup that enables each sentence to stand on its own outside the discourse context. In the masking phase, a sub-span of the marked-up passage is selected. To train a system to produce markup-and-mask rationales without annotations, we leverage in-context learning. Specifically, we generate silver annotated data by sending a series of prompts to a frozen pretrained language model, which acts as a teacher. We then fine-tune a smaller student model by training on the subset of rationales that led to correct answers. The student is""honest""in the sense that it is a pipeline: the rationale acts as a bottleneck between the passage and the answer, while the""untrusted""teacher operates under no such constraints. Thus, we offer a new way to build trustworthy pipeline systems from a combination of end-task annotations and frozen pretrained language models.",0f4ab3fe492ececbfd38be9682047371e2e9b8c6,Semantic Scholar,,, +425,collaborating with language models for embodied reasoning,"['Ishita Dasgupta', 'Christine Kaeser-Chen', 'Kenneth Marino', 'Arun Ahuja', 'Sheila Babayan', 'Felix Hill', 'R. Fergus']",http://arxiv.org/pdf/2302.00763,2023-02-01,,"Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to to adapt to new tasks through in-context learning. However, LSLMs do not inherently have the ability to interrogate or intervene on the environment. In this work, we investigate how to combine these complementary abilities in a single system consisting of three parts: a Planner, an Actor, and a Reporter. The Planner is a pre-trained language model that can issue commands to a simple embodied agent (the Actor), while the Reporter communicates with the Planner to inform its next command. We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot and investigate failure cases, and demonstrate how components of this system can be trained with reinforcement-learning to improve performance.",102e4c860e39a2bfd7bf3f03b9ad69aac7bf3b5f,Semantic Scholar,,, +426,knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering,"['Keheng Wang', 'Feiyu Duan', 'Sirui Wang', 'Peiguang Li', 'Yunsen Xian', 'Chuantao Yin', 'Wenge Rong', 'Zhang Xiong']",https://arxiv.org/pdf/2308.13259,2023-08-25,,"Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have shown impressive reasoning ability in various downstream tasks. Even so, suffering from hallucinations and the inability to access external knowledge, LLMs often come with incorrect or unfaithful intermediate reasoning steps, especially in the context of answering knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) to verify and modify reasoning traces in CoT via interaction with external knowledge, and thus overcome the hallucinations and error propagation. 
Concretely, we formulate the CoT rationale process of LLMs into a structured multi-round QA format. In each round, LLMs interact with a QA system that retrieves external knowledge and produce faithful reasoning traces based on retrieved precise answers. The structured CoT reasoning of LLMs is facilitated by our developed KBQA CoT collection, which serves as in-context learning demonstrations and can also be utilized as feedback augmentation to train a robust retriever. Extensive experiments on WebQSP and ComplexWebQuestion datasets demonstrate the effectiveness of proposed KD-CoT in task-solving reasoning generation, which outperforms the vanilla CoT ICL with an absolute success rate of 8.0% and 5.1%. Furthermore, our proposed feedback-augmented retriever outperforms the state-of-the-art baselines for retrieving knowledge, achieving significant improvement in Hit and recall performance. Our code and data are released on https://github.com/AdelWang/KD-CoT/tree/main.",10955e63aa49fab146267949f8ebc9ebe8275183,Semantic Scholar,,, +427,taken out of context on measuring situational awareness in llms,"['Lukas Berglund', 'Asa Cooper Stickland', 'Mikita Balesni', 'Max Kaufmann', 'Meg Tong', 'Tomasz Korbak', 'Daniel Kokotajlo', 'Owain Evans']",https://arxiv.org/pdf/2309.00667,2023-09-01,,"We aim to better understand the emergence of `situational awareness' in large language models (LLMs). A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment. Today's LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose `out-of-context reasoning' (in contrast to in-context learning). We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.",135ae2ea7a2c966815e85a232469a0a14b4d8d67,Semantic Scholar,,, +428,larger language models do incontext learning differently,"['Jerry W. Wei', 'Jason Wei', 'Yi Tay', 'Dustin Tran', 'Albert Webson', 'Yifeng Lu', 'Xinyun Chen', 'Hanxiao Liu', 'Da Huang', 'Denny Zhou', 'Tengyu Ma']",http://arxiv.org/pdf/2303.03846,2023-03-07,,"We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups-ICL with flipped labels and ICL with semantically-unrelated labels-across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. 
While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.",154493f69d7db3d49da0e51df0192c6ad5f1724a,Semantic Scholar,,, +429,incontext learning user simulators for taskoriented dialog systems,"['Silvia Terragni', 'Modestas Filipavicius', 'Nghia Khau', 'Bruna Guedes', ""André Manso"", 'Roland Mathis']",http://arxiv.org/pdf/2306.00774,2023-06-01,,"This paper presents a novel application of large language models in user simulation for task-oriented dialog systems, specifically focusing on an in-context learning approach. By harnessing the power of these models, the proposed approach generates diverse utterances based on user goals and limited dialog examples. Unlike traditional simulators, this method eliminates the need for labor-intensive rule definition or extensive annotated data, making it more efficient and accessible. Additionally, an error analysis of the interaction between the user simulator and dialog system uncovers common mistakes, providing valuable insights into areas that require improvement. Our implementation is available at https://github.com/telepathylabsai/prompt-based-user-simulator.",15fcd80193d1c446bc3d37fcc30f5475b9ebd5b0,Semantic Scholar,,, +430,cognitive reframing of negative thoughts through humanlanguage model interaction,"['Ashish Sharma', 'Kevin Rushton', 'Inna Wanyin Lin', 'David Wadden', 'Khendra G. Lucas', 'Adam S. Miner', 'Theresa Nguyen', 'Tim Althoff']",http://arxiv.org/pdf/2305.02466,2023-05-04,,"A proven therapeutic technique to overcome negative thoughts is to replace them with a more hopeful “reframed thought.” Although therapy can help people practice and learn this Cognitive Reframing of Negative Thoughts, clinician shortages and mental health stigma commonly limit people’s access to therapy. In this paper, we conduct a human-centered study of how language models may assist people in reframing negative thoughts. Based on psychology literature, we define a framework of seven linguistic attributes that can be used to reframe a thought. We develop automated metrics to measure these attributes and validate them with expert judgements from mental health practitioners. We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model that effectively generates reframed thoughts and controls their linguistic attributes. To investigate what constitutes a “high-quality” reframe, we conduct an IRB-approved randomized field study on a large mental health website with over 2,000 participants. 
Amongst other findings, we show that people prefer highly empathic or specific reframes, as opposed to reframes that are overly positive. Our findings provide key implications for the use of LMs to assist people in overcoming negative thoughts.",16aacf48048ac128a07fe2c0761439e1d7211492,Semantic Scholar,,, +431,dricl demonstrationretrieved incontext learning,"['Man Luo', 'Xin Xu', 'Zhuyun Dai', 'Panupong Pasupat', 'Mehran Kazemi', 'Chitta Baral', 'Vaiva Imbrasaite', 'Vincent Zhao']",http://arxiv.org/pdf/2305.14128,2023-05-23,,"In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations. Furthermore, we extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that although a model has already seen the training data at training time, retrieving demonstrations from the training data at test time yields better results compared to using no demonstrations or random demonstrations. Last but not least, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers.",18143a4c2da37444e06feed04cc9efeb0856352d,Semantic Scholar,,, +432,sociocultural norm similarities and differences via situational alignment and explainable textual entailment,"['Sky Ch-Wang', 'Arkadiy Saakyan', 'Oliver Li', 'Zhou Yu', 'S. Muresan']",http://arxiv.org/pdf/2305.14492,2023-05-23,,"Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q&A platform (Zhihu) and the existing SocialChemistry dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. 
Further analysis of cross-cultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.",18bd959aaa8a83b5b2192282224d700da7459857,Semantic Scholar,,, +433,flirt feedback loop incontext red teaming,"['Ninareh Mehrabi', 'Palash Goyal', 'Christophe Dupuy', 'Qian Hu', 'Shalini Ghosh', 'R. Zemel', 'Kai-Wei Chang', 'A. Galstyan', 'Rahul Gupta']",https://arxiv.org/pdf/2308.04265,2023-08-08,,"Warning: this paper contains content that may be inappropriate or offensive. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. We propose different in-context attack strategies to automatically learn effective and diverse adversarial prompts for text-to-image models. Our experiments demonstrate that compared to baseline approaches, our proposed strategy is significantly more effective in exposing vulnerabilities in Stable Diffusion (SD) model, even when the latter is enhanced with safety features. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models, resulting in significantly higher toxic response generation rate compared to previously reported numbers.",19443d48399d4fe89a4b0a96917c50c6fd9c5af1,Semantic Scholar,,, +434,extractive summarization via chatgpt for faithful summary generation,"['Haopeng Zhang', 'Xiao Liu', 'Jiawei Zhang']",https://arxiv.org/pdf/2304.04193,2023-04-09,,"Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.",1a01c982aa20c1a1ad1ad94866e3197da99a52a2,Semantic Scholar,,, +435,"revisiting outofdistribution robustness in nlp benchmark, analysis, and llms evaluations","['Lifan Yuan', 'Yangyi Chen', 'Ganqu Cui', 'Hongcheng Gao', 'Fangyuan Zou', 'Xingyi Cheng', 'Heng Ji', 'Zhiyuan Liu', 'Maosong Sun']",http://arxiv.org/pdf/2306.04618,2023-06-07,,"This paper reexamines the research on out-of-distribution (OOD) robustness in the field of NLP. 
We find that the distribution shift settings in previous studies commonly lack adequate challenges, hindering the accurate evaluation of OOD robustness. To address these issues, we propose a benchmark construction protocol that ensures clear differentiation and challenging distribution shifts. Then we introduce BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pre-trained language models for analysis and evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical types that unveil the inner learning mechanism, which could potentially facilitate the forecasting of OOD robustness, correlating with the advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and find that, despite exhibiting some effectiveness in specific cases, they do not offer significant improvement compared to vanilla fine-tuning. Further, we evaluate 5 LLMs with various adaptation paradigms and find that when sufficient ID data is available, fine-tuning domain-specific models outperform LLMs on ID examples significantly. However, in the case of OOD instances, prioritizing LLMs with in-context learning yields better results. We identify that both fine-tuned small models and LLMs face challenges in effectively addressing downstream tasks. The code is public at \url{https://github.com/lifan-yuan/OOD_NLP}.",1a55d16c14587edda62dc9c9ff09e0b531dd169c,Semantic Scholar,,, +436,discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators,"['Giwon Hong', 'Jeonghwan Kim', 'Junmo Kang', 'Sung-Hyon Myaeng', 'Joyce Jiyoung Whang']",http://arxiv.org/pdf/2305.01579,2023-05-02,,"Most existing retrieval-augmented language models (LMs) for question answering assume all retrieved information is factually correct. In this work, we study a more realistic scenario in which retrieved documents may contain misinformation, causing conflicts among them. We observe that the existing models are highly brittle to such information in both fine-tuning and in-context few-shot learning settings. We propose approaches to make retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a discriminator or prompting to elicit discrimination capability in GPT-3. Our empirical results on open-domain question answering show that these approaches significantly improve LMs' robustness to knowledge conflicts. We also provide our findings on interleaving the fine-tuned model's decision with the in-context learning process, paving a new path to leverage the best of both worlds.",1a62bc8ed9732bcdb6893a11f5cf239640883f87,Semantic Scholar,,, +437,adversarial demonstration attacks on large language models,"['Jiong Wang', 'Zi-yang Liu', 'Keun Hee Park', 'Muhao Chen', 'Chaowei Xiao']",http://arxiv.org/pdf/2305.14950,2023-05-24,,"With the emergence of more powerful large language models (LLMs), such as ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence in leveraging these models for specific tasks by utilizing data-label pairs as precondition prompts. While incorporating demonstrations can greatly enhance the performance of LLMs across various tasks, it may introduce a new security concern: attackers can manipulate only the demonstrations without changing the input to perform an attack. 
In this paper, we investigate the security concern of ICL from an adversarial perspective, focusing on the impact of demonstrations. We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models. Our results demonstrate that as the number of demonstrations increases, the robustness of in-context learning would decrease. Additionally, we also identify the intrinsic property of the demonstrations is that they can be used (prepended) with different inputs. As a result, it introduces a more practical threat model in which an attacker can attack the test input example even without knowing and manipulating it. To achieve it, we propose the transferable version of advICL, named Transferable-advICL. Our experiment shows that the adversarial demonstration generated by Transferable-advICL can successfully attack the unseen test input examples. We hope that our study reveals the critical security risks associated with ICL and underscores the need for extensive research on the robustness of ICL, particularly given its increasing significance in the advancement of LLMs.",1abfc211793c683972ded8d3268475e3ee7a88b0,Semantic Scholar,,, +438,is chatgpt a good causal reasoner a comprehensive evaluation,"['Jin-Fang Gao', 'Xiao Ding', 'Bing Qin', 'Ting Liu']",https://arxiv.org/pdf/2305.07375,2023-05-12,,"Causal reasoning ability is crucial for numerous NLP applications. Despite the impressive emerging ability of ChatGPT in various NLP tasks, it is unclear how well ChatGPT performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of the ChatGPT's causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but a good causal explainer. Besides, ChatGPT has a serious hallucination on causal reasoning, possibly due to the reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT's upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT) techniques can further exacerbate such causal hallucination. Additionally, the causal reasoning ability of ChatGPT is sensitive to the words used to express the causal concept in prompts, and close-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit causality rather than implicit causality, and performs better in sentences with lower event density and smaller lexical distance between events. The code is available on https://github.com/ArrogantL/ChatGPT4CausalReasoning .",1b9fc8268b392742ea43c2c017a767cf62386139,Semantic Scholar,,, +439,using incontext learning to improve dialogue safety,"['Nicholas Meade', 'Spandana Gella', 'Devamanyu Hazarika', 'Prakhar Gupta', 'Di Jin', 'Siva Reddy', 'Yang Liu', 'Dilek Z. Hakkani-Tür']",http://arxiv.org/pdf/2302.00871,2023-02-02,,"While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, which often perpetuates social biases or stereotypes. We investigate a retrieval-based method for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. 
We find our method performs competitively with strong baselines without requiring training. For instance, using automatic evaluation, we find our best fine-tuned baseline only generates safe responses to unsafe dialogue contexts from DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking procedure which can further improve response safeness.",1d75f8de31bf47ec46fa5586056420ec8bc97e86,Semantic Scholar,,, +440,how to unleash the power of large language models for fewshot relation extraction,"['Xin Xu', 'Yuqi Zhu', 'Xiaohan Wang', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.01555,2023-05-02,,"Scaling language models have revolutionized widespread NLP tasks, yet little comprehensively explored few-shot relation extraction with large language models. In this paper, we investigate principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely-studied relation extraction datasets. We hope our work can inspire future research for the capabilities of large language models in few-shot relation extraction. Code is available in https://github.com/zjunlp/DeepKE/tree/main/example/llm.",1ddeb500dd88d4b860b32bec1e2a85f8a53910d6,Semantic Scholar,,, +441,multilingual llms are better crosslingual incontext learners with alignment,"['Eshaan Tanwar', 'Manish Borthakur', 'Subhabrata Dutta', 'Tanmoy Chakraborty']",http://arxiv.org/pdf/2305.05940,2023-05-10,,"In-context learning (ICL) unfolds as large language models become capable of inferring test labels conditioned on a few labeled samples without any gradient update. ICL-enabled large language models provide a promising step forward toward bypassing recurrent annotation costs in a low-resource setting. Yet, only a handful of past studies have explored ICL in a cross-lingual setting, in which the need for transferring label-knowledge from a high-resource language to a low-resource one is immensely crucial. To bridge the gap, we provide the first in-depth analysis of ICL for cross-lingual text classification. We find that the prevalent mode of selecting random input-label pairs to construct the prompt-context is severely limited in the case of cross-lingual ICL, primarily due to the lack of alignment in the input as well as the output spaces. To mitigate this, we propose a novel prompt construction strategy — Cross-lingual In-context Source Target Alignment (X-InSTA). With an injected coherence in the semantics of the input examples and a task-based alignment across the source and target languages, X-InSTA is able to outperform random prompt selection by a large margin across three different tasks using 44 different cross-lingual pairs.",1fb5a5298747b8c7d60f98640a543f20d42ab053,Semantic Scholar,,, +442,boosting incontext learning with factual knowledge,"['J. Wang', 'Chengyu Wang', 'Chuanqi Tan', 'Jun Huang', 'Ming Gao']",https://arxiv.org/pdf/2309.14771,2023-09-26,,"In-Context Learning (ICL) over Large language models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates and achieving competitive performance. 
In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets, i.e., the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL: 1) injecting factual knowledge to LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on auto-regressive LLMs (e.g., GPT-style models) over multiple text classification and question answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines, and improves by more than 13% and 7% of accuracy on text classification and question answering tasks, respectively.",20177a85f632a34d085bcf645507e461733fcc96,Semantic Scholar,,, +443,chatgpt for zeroshot dialogue state tracking a solution or an opportunity,"['Michael Heck', 'Nurul Lubis', 'Benjamin Matthias Ruppik', 'Renato Vukovic', 'Shutong Feng', 'Christian Geishauser', 'Hsien-chin Lin', 'Carel van Niekerk', ""Milica Gašić""]",http://arxiv.org/pdf/2306.01386,2023-06-02,,"Recent research on dialog state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas. However, performance gains heavily depend on aggressive data augmentation and fine-tuning of ever larger language model based architectures. In contrast, general purpose language models, trained on large amounts of diverse data, hold the promise of solving any kind of task without task-specific training. We present preliminary experimental results on the ChatGPT research preview, showing that ChatGPT achieves state-of-the-art performance in zero-shot DST. Despite our findings, we argue that properties inherent to general purpose models limit their ability to replace specialized systems. We further theorize that the in-context learning capabilities of such models will likely become powerful tools to support the development of dedicated dialog state trackers and enable dynamic methods.",214fbadc57e954e325dc055fee5ac0e224dfde11,Semantic Scholar,,, +444,llmlingua compressing prompts for accelerated inference of large language models,"['Huiqiang Jiang', 'Qianhui Wu', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",https://arxiv.org/pdf/2310.05736,2023-10-09,,"Large language models (LLMs) have been applied in various applications due to their astonishing capabilities. With advancements in technologies such as chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of tokens. To accelerate model inference and reduce cost, this paper presents LLMLingua, a coarse-to-fine prompt compression method that involves a budget controller to maintain semantic integrity under high compression ratios, a token-level iterative compression algorithm to better model the interdependence between compressed contents, and an instruction tuning based method for distribution alignment between language models. 
We conduct experiments and analysis over four datasets from different scenarios, i.e., GSM8K, BBH, ShareGPT, and Arxiv-March23; showing that the proposed approach yields state-of-the-art performance and allows for up to 20x compression with little performance loss. Our code is available at https://aka.ms/LLMLingua.",2392b6d3a5cad9e5cf349169eaeee848266adf6a,Semantic Scholar,,, +445,cup curriculum learning based prompt tuning for implicit event argument extraction,"['Jiaju Lin', 'Qin Chen', 'Jie Zhou', 'Jiankai Jin', 'Liangye He']",https://arxiv.org/pdf/2205.00498,2022-05-01,,"Implicit event argument extraction (EAE) aims to identify arguments that could scatter over the document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while the implicit relations with long-range dependency are not well studied. Moreover, recent neural network based approaches rely on a large amount of labeled data for training, which is unavailable due to the high labelling cost. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which well captures the long-range dependency between arguments and the trigger. In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted with the learning progress to enhance the reasoning for arguments. Experimental results on two well-known benchmark datasets show the great advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.",65d88194a902332b78dd5a7b919fa577bfa7ee9f,Semantic Scholar,,, +446,delving into multimodal prompting for finegrained visual classification,"['Xin Jiang', 'Hao Tang', 'Junyao Gao', 'Xiaoyu Du', 'Shengfeng He', 'Zechao Li']",https://arxiv.org/pdf/2309.08912,2023-09-16,,"Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance in various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pertaining (CLIP) model. Our MP-FGVC comprises a multimodal prompts scheme and a multimodal adaptation scheme. The former includes Subcategory-specific Vision Prompt (SsVP) and Discrepancy-aware Text Prompt (DaTP), which explicitly highlights the subcategory-specific discrepancies from the perspectives of both vision and language. The latter aligns the vision and text prompting elements in a common semantic space, facilitating cross-modal collaborative reasoning through a Vision-Language Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained CLIP model and expedite efficient adaptation for FGVC. 
Extensive experiments conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.",11e3efa08b5db1a8958dfe8119593a4d3f18796a,Semantic Scholar,,, +447,draw your art dream diverse digital art synthesis with multimodal guided diffusion,"['Nisha Huang', 'Fan Tang', 'Weiming Dong', 'Changsheng Xu']",https://dl.acm.org/doi/pdf/10.1145/3503161.3548282,2022-09-27,,"Digital art synthesis is receiving increasing attention in the multimedia community because of engaging the public with art effectively. Current digital art synthesis methods usually use single-modality inputs as guidance, thereby limiting the expressiveness of the model and the diversity of generated results. To solve this problem, we propose the multimodal guided artwork diffusion (MGAD) model, which is a diffusion-based digital artwork generation approach that utilizes multimodal prompts as guidance to control the classifier-free diffusion model. Additionally, the contrastive language-image pretraining (CLIP) model is used to unify text and image modalities. Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of the combination of the diffusion model and multimodal guidance. Code is available at https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion.",159d2980566fa00bc752e180471ee46d7899d66e,Semantic Scholar,,, +448,zeroshot and fewshot video question answering with multimodal prompts,"['Deniz Engin', 'Yannis Avrithis']",https://arxiv.org/pdf/2309.15915,2023-09-27,,"Recent vision-language models are driven by large-scale pretrained models. However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. Our experiments on several video question answering benchmarks demonstrate the superiority of our approach in terms of performance and parameter efficiency on both zero-shot and few-shot settings. Our code is available at https://engindeniz.github.io/vitis.",185e79641a8e7b18ac5a73b8c3cb82fdee3a0c6d,Semantic Scholar,,, +449,vima general robot manipulation with multimodal prompts,"['Yunfan Jiang', 'Agrim Gupta', 'Zichen Zhang', 'Guanzhi Wang', 'Yongqiang Dou', 'Yanjun Chen', 'Li Fei-Fei', 'Anima Anandkumar', 'Yuke Zhu', 'Linxi (Jim) Fan']",http://arxiv.org/pdf/2210.03094,2022-10-06,,"Prompt-based learning has emerged as a successful paradigm in natural language processing, where a single general-purpose language model can be instructed to perform any task specified by input prompts. Yet task specification in robotics comes in various forms, such as imitating one-shot demonstrations, following language instructions, and reaching visual goals. They are often considered different tasks and tackled by specialized models. We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens. Accordingly, we develop a new simulation benchmark that consists of thousands of procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert trajectories for imitation learning, and a four-level evaluation protocol for systematic generalization. We design a transformer-based robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively. 
VIMA features a recipe that achieves strong model scalability and data efficiency. It outperforms alternative designs in the hardest zero-shot generalization setting by up to $2.9\times$ task success rate given the same training data. With $10\times$ less training data, VIMA still performs $2.7\times$ better than the best competing variant. Code and video demos are available at https://vimalabs.github.io/",25425e299101b13ec2872417a14f961f4f8aa18e,Semantic Scholar,,, +450,multimodal prompt learning for product title generation with extremely limited labels,"['Bang Yang', 'Fenglin Liu', 'Zheng Li', 'Qingyu Yin', 'Chenyu You', 'Bing Yin', 'Yuexian Zou']",https://arxiv.org/pdf/2307.01969,2023-07-05,,"Generating an informative and attractive title for the product is a crucial task for e-commerce. Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1% of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100% of training data; With the full labelled data for training, our method achieves state-of-the-art results.",37d91ebd5ec969e2b81027e05f886febf09d2504,Semantic Scholar,,, +451,multimodal prompting with missing modalities for visual recognition,"['Yi-Lun Lee', 'Yi-Hsuan Tsai', 'Wei-Chen Chiu', 'Chen-Yu Lee']",https://arxiv.org/pdf/2303.03369,2023-03-06,,"In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when missing-modality occurs either during training or testing in real-world situations; and 2) when the computation resources are not available to finetune on heavy transformer models. To this end, we propose to utilize prompt learning and mitigate the above two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while only requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments are conducted to show the effectiveness of our prompt learning framework that improves the performance under various missing-modality cases, while alleviating the requirement of heavy model retraining. 
Code is available at https://github.com/YiLunLee/missing_aware_prompts",483757dff12df441c6991dd5e7408d922fe01c3d,Semantic Scholar,,, +452,multimodal prompt retrieval for generative visual question answering,"['Timothy Ossowski', 'Junjie Hu']",http://arxiv.org/pdf/2306.17675,2023-06-30,,"Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains with limited labeled data (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting.",534675abb9d72fc0c08d080d4f73335ceb75902c,Semantic Scholar,,, +453,multimodal garment designer humancentric latent diffusion models for fashion image editing,"['Alberto Baldrati', 'Davide Morelli', 'Giuseppe Cartella', 'M. Cornia', 'M. Bertini', 'R. Cucchiara']",https://arxiv.org/pdf/2304.02051,2023-04-04,,"Fashion illustration is used by designers to communicate their vision and to bring the design idea from conceptualization to realization, showing how clothes interact with the human body. In this context, computer vision can thus be used to improve the fashion design process. Differently from previous works that mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts, such as text, human body poses, and garment sketches. We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain. Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner. Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and coherence with the given multimodal inputs. Source code and collected multimodal annotations are publicly available at: https://github.com/aimagelab/multimodal-garment-designer.",6c925427841ea4a776a578d438f9e47a64c3014e,Semantic Scholar,,, +454,vitaclip video and text adaptive clip via multimodal prompting,"['Syed Talal Wasim', 'Muzammal Naseer', 'Salman Khan', 'F. Khan', 'M. Shah']",https://arxiv.org/pdf/2304.03307,2023-04-06,,"Adopting contrastive image-text pretrained models like CLIP towards video classification has gained attention due to its cost-effectiveness and competitive performance. However, recent works in this area face a trade-off. Finetuning the pretrained model to achieve strong supervised performance results in low zero-shot generalization. Similarly, freezing the backbone to retain zero-shot capability causes significant drop in supervised accuracy.
Because of this, recent works in literature typically train separate models for supervised and zero-shot action recognition. In this work, we propose a multimodal prompt learning scheme that works to balance the supervised and zero-shot performance under a single unified training. Our prompting approach on the vision side caters for three aspects: 1) Global video-level prompts to model the data distribution; 2) Local frame-level prompts to provide per-frame discriminative conditioning; and 3) a summary prompt to extract a condensed video representation. Additionally, we define a prompting scheme on the text side to augment the textual context. Through this prompting scheme, we can achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and UCF101 while remaining competitive in the supervised setting. By keeping the pretrained backbone frozen, we optimize a much lower number of parameters and retain the existing general representation which helps achieve the strong zero-shot performance. Our codes/models will be released at https://github.com/TalalWasim/Vita-Clip..",8b5f4b383008bfb365cee72e5301ee04a24221f7,Semantic Scholar,,, +455,audio visual language maps for robot navigation,"['Chen Huang', 'Oier Mees', 'Andy Zeng', 'Wolfram Burgard']",http://arxiv.org/pdf/2303.07522,2023-03-13,,"While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world - navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available at: https://avlmaps.github.io.",93565fe6db3948c9c414af1d1edccf4aff5e2e10,Semantic Scholar,,, +456,fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Pengfei Hong', 'Soujanya Poria']",https://arxiv.org/pdf/2211.06607,2022-11-12,,"Multimodal sentiment analysis has gained significant attention due to the proliferation of multimodal content on social media. However, existing studies in this area rely heavily on large-scale supervised data, which is time-consuming and labor-intensive to collect. Thus, there is a need to address the challenge of few-shot multimodal sentiment analysis. To tackle this problem, we propose a novel method called Multimodal Probabilistic Fusion Prompts (MultiPoint) that leverages diverse cues from different modalities for multimodal sentiment detection in the few-shot scenario. 
Specifically, we start by introducing a Consistently Distributed Sampling approach called CDS, which ensures that the few-shot dataset has the same category distribution as the full dataset. Unlike previous approaches primarily using prompts based on the text modality, we design unified multimodal prompts to reduce discrepancies between different modalities and dynamically incorporate multimodal demonstrations into the context of each multimodal instance. To enhance the model's robustness, we introduce a probabilistic fusion method to fuse output predictions from multiple diverse prompts for each input. Our extensive experiments on six datasets demonstrate the effectiveness of our approach. First, our method outperforms strong baselines in the multimodal few-shot setting. Furthermore, under the same amount of data (1% of the full dataset), our CDS-based experimental results significantly outperform those based on previously sampled datasets constructed from the same number of instances of each class.",befcb92f313030632717a74a2afd651a1445a745,Semantic Scholar,,, +457,multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,"['Shihao Zou', 'Xianying Huang', 'Xudong Shen']",https://arxiv.org/pdf/2310.04456,2023-10-04,,"Emotion Recognition in Conversation (ERC) plays an important role in driving the development of human-machine interaction. Emotions can exist in multiple modalities, and multimodal ERC mainly faces two problems: (1) the noise problem in the cross-modal information fusion process, and (2) the prediction problem of less sample emotion labels that are semantically similar but different categories. To address these issues and fully utilize the features of each modality, we adopted the following strategies: first, deep emotion cues extraction was performed on modalities with strong representation ability, and feature filters were designed as multimodal prompt information for modalities with weak representation ability. Then, we designed a Multimodal Prompt Transformer (MPT) to perform cross-modal information fusion. MPT embeds multimodal fusion information into each attention layer of the Transformer, allowing prompt information to participate in encoding textual features and being fused with multi-level textual information to obtain better multimodal fusion features. Finally, we used the Hybrid Contrastive Learning (HCL) strategy to optimize the model's ability to handle labels with few samples. This strategy uses unsupervised contrastive learning to improve the representation ability of multimodal fusion and supervised contrastive learning to mine the information of labels with few samples. Experimental results show that our proposed model outperforms state-of-the-art models in ERC on two benchmark datasets.",e4abc33cbb84934029af6d50360f7ad3bba3df3c,Semantic Scholar,,, +458,fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt,"['Xiaocui Yang', 'Shi Feng', 'Daling Wang', 'Sun Qi', 'Wenfang Wu', 'Yifei Zhang', 'Pengfei Hong', 'Soujanya Poria']",http://arxiv.org/pdf/2305.10169,2023-05-17,,"We have witnessed the rapid proliferation of multimodal data on numerous social media platforms. Conventional studies typically require massive labeled data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA). However, collecting and annotating fine-grained multimodal data for MABSA is tough. 
To alleviate the above issue, we perform three MABSA-related tasks with quite a small number of labeled multimodal samples. We first build diverse and comprehensive multimodal few-shot datasets according to the data distribution. To capture the specific prompt for each aspect term in a few-shot scenario, we propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which includes the Multimodal Encoder module and the N-Stream Decoders module. We further introduce a subtask to predict the number of aspect terms in each instance to construct the multimodal prompt. Extensive experiments on two datasets demonstrate that our approach outperforms strong baselines on two MABSA-related tasks in the few-shot setting.",fd7082630257b03771c72a926a64b13eb16e00af,Semantic Scholar,,, +459,textbased person search without parallel imagetext data,"['Yang Bai', 'Jingyao Wang', 'Min Cao', 'Cheng Chen', 'Ziqiang Cao', 'Liqiang Nie', 'Min Zhang']",https://arxiv.org/pdf/2305.12964,2023-05-22,,"Text-based person search (TBPS) aims to retrieve the images of the target person from a large image gallery based on a given natural language description. Existing methods are dominated by training models with parallel image-text pairs, which are very costly to collect. In this paper, we make the first attempt to explore TBPS without parallel image-text data (μ-TBPS), in which only non-parallel images and texts, or even image-only data, can be adopted. Towards this end, we propose a two-stage framework, generation-then-retrieval (GTR), to first generate the corresponding pseudo text for each image and then perform the retrieval in a supervised manner. In the generation stage, we propose a fine-grained image captioning strategy to obtain an enriched description of the person image, which firstly utilizes a set of instruction prompts to activate the off-the-shelf pretrained vision-language model to capture and generate fine-grained person attributes, and then converts the extracted attributes into a textual description via the finetuned large language model or the hand-crafted template. In the retrieval stage, considering the noise interference of the generated texts for training model, we develop a confidence score-based training scheme by enabling more reliable texts to contribute more during the training. Experimental results on multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that the proposed GTR can achieve a promising performance without relying on parallel image-text data.",0213827d882ec34aa9935f2b03a80362af806778,Semantic Scholar,,, +460,neuro symbolic reasoning for planning counterexample guided inductive synthesis using large language models and satisfiability solving,"['Sumit Kumar Jha', 'Susmit Jha', 'Patrick Lincoln', 'Nathaniel D. Bastian', 'Alvaro Velasquez', 'Rickard Ewetz', 'Sandeep Neema']",https://arxiv.org/pdf/2309.16436,2023-09-28,,"Generative large language models (LLMs) with instruct training such as GPT-4 can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results despite their syntactic coherence - a phenomenon often referred to as hallucination. 
This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications. Unlike tasks such as text summarization and question-answering, bugs in code, plan, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use the satisfiability modulo theory (SMT) solvers as deductive reasoning engines to analyze the generated solutions from the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM to generate the correct response. In our experiments, we use planning over the domain of blocks as our synthesis task for evaluating our approach. We use GPT-4, GPT-3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to SMT solvers is automatically generated from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.",1c89d8672a3742672850fa46f1e8ec51f3261019,Semantic Scholar,,, +461,inferfix endtoend program repair with llms,"['Ma Jin', 'Syed Shahriar', 'Michele Tufano', 'Xin Shi', 'Shuai Lu', 'Neel Sundaresan', 'Alexey Svyatkovskiy']",https://dl.acm.org/doi/pdf/10.1145/3611643.3613892,2023-03-13,,"Software development life cycle is profoundly influenced by bugs; their introduction, identification, and eventual resolution account for a significant portion of software development cost. This has motivated software engineering researchers and practitioners to propose different approaches for automating the identification and repair of software defects. Large Language Models (LLMs) have been adapted to the program repair task through few-shot demonstration learning and instruction prompting, treating this as an infilling task. However, these models have only focused on learning general bug-fixing patterns for uncategorized bugs mined from public repositories. In this paper, we propose InferFix: a transformer-based program repair framework paired with a state-of-the-art static analyzer to fix critical security and performance bugs. InferFix combines a Retriever – transformer encoder model pretrained via contrastive learning objective, which aims at searching for semantically equivalent bugs and corresponding fixes; and a Generator – an LLM (12 billion parameter Codex Cushman model) finetuned on supervised bug-fix data with prompts augmented via adding bug type annotations and semantically similar fixes retrieved from an external non-parametric memory. To train and evaluate our approach, we curated a novel, metadata-rich dataset of bugs extracted by executing the Infer static analyzer on the change histories of thousands of Java and C# repositories. Our evaluation demonstrates that InferFix outperforms strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C# and 76.8% in Java.
We discuss the deployment of InferFix alongside Infer at Microsoft, which offers an end-to-end solution for detection, classification, and localization of bugs, as well as fixing and validation of candidate patches, integrated in the continuous integration (CI) pipeline to automate the software development workflow.",34d24b2d9f116f8f652c112d4ac924afcf11bd0d,Semantic Scholar,,, +462,edm3 event detection as multitask text generation,"['Ujjwala Anantheswaran', 'Himanshu Gupta', 'Mihir Parmar', 'Kuntal Kumar Pal', 'Chitta Baral']",http://arxiv.org/pdf/2305.16357,2023-05-25,,"Event detection refers to identifying event occurrences in a text and comprises of two subtasks; event identification and classification. We present EDM3, a novel approach for Event Detection that formulates three generative tasks: identification, classification, and combined detection. We show that EDM3 helps to learn transferable knowledge that can be leveraged to perform Event Detection and its subtasks concurrently, mitigating the error propagation inherent in pipelined approaches. Unlike previous dataset- or domain-specific approaches, EDM3 utilizes the existing knowledge of language models, allowing it to be trained over any classification schema. We evaluate EDM3 on multiple event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3 outperforms 1) single-task performance by 8.4% on average and 2) multi-task performance without instructional prompts by 2.4% on average. We obtain SOTA results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other datasets. We analyze our approach to demonstrate its efficacy in low-resource and multi-sentence settings. We also show the effectiveness of this approach on non-standard event configurations such as multi-word and multi-class event triggers. Overall, our results show that EDM3 is a promising approach for Event Detection that has the potential for real-world applications.",3d71d4097a3dcc1289b709872d7523a035e6986f,Semantic Scholar,,, +463,vast a visionaudiosubtitletext omnimodality foundation model and dataset,"['Sihan Chen', 'Handong Li', 'Qunbo Wang', 'Zijia Zhao', 'Ming-Ting Sun', 'Xinxin Zhu', 'J. Liu']",https://arxiv.org/pdf/2305.18500,2023-05-29,,"Vision and text have been fully explored in contemporary video-text foundational models, while other modalities such as audio and subtitles in videos have not received sufficient attention. In this paper, we resort to establish connections between multi-modality video tracks, including Vision, Audio, and Subtitle, and Text by exploring an automatically generated large-scale omni-modality video caption dataset called VAST-27M. Specifically, we first collect 27 million open-domain video clips and separately train a vision and an audio captioner to generate vision and audio captions. Then, we employ an off-the-shelf Large Language Model (LLM) to integrate the generated captions, together with subtitles and instructional prompts into omni-modality captions. Based on the proposed VAST-27M dataset, we train an omni-modality video-text foundational model named VAST, which can perceive and process vision, audio, and subtitle modalities from video, and better support various tasks including vision-text, audio-text, and multi-modal video-text tasks (retrieval, captioning and QA). Extensive experiments have been conducted to demonstrate the effectiveness of our proposed VAST-27M corpus and VAST foundation model. VAST achieves 22 new state-of-the-art results on various cross-modality benchmarks.
Code, model and dataset will be released at https://github.com/TXH-mercury/VAST.",4e33c5756aa18d248cf50fef9382acda1e0f65da,Semantic Scholar,,, +464,instruction tuning for fewshot aspectbased sentiment analysis,"['Siddharth Varia', 'Shuai Wang', 'Kishaloy Halder', 'Robert Vacareanu', 'Miguel Ballesteros', 'Yassine Benajiba', 'Neha Ann John', 'Rishita Anubhai', 'S. Muresan', 'D. Roth']",http://arxiv.org/pdf/2210.06629,2022-10-12,,"Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-tasks such as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings performance boost (by absolute 8.29 F1) in the few-shot learning setting.",5dbc2b2ee6e65e39fa3fc4bd5030be7a4a9f9a76,Semantic Scholar,,, +465,discrete prompt compression with reinforcement learning,"['Hoyoun Jung', 'Kyung-Joong Kim']",https://arxiv.org/pdf/2308.08758,2023-08-17,,"Instruction-tuned Language Models (LMs) are widely used by users to address various problems with task-specific prompts. Constraints associated with the context window length and computational costs encourage the development of compressed prompts. Existing methods rely heavily on training embeddings, which are designed to accommodate multiple token meanings. This presents challenges in terms of interpretability, a fixed number of embedding tokens, reusability across different LMs, and inapplicability when interacting with black-box APIs. This study proposes prompt compression with reinforcement learning (PCRL), a novel discrete prompt compression method that addresses these issues. PCRL employs a computationally efficient policy network that directly edits prompts. The PCRL training approach can be flexibly applied to various types of LMs, as well as decoder-only and encoder-decoder architecture, and can be trained without gradient access to LMs or labeled data. PCRL achieves an average reduction of 24.6% in token count across various instruction prompts while preserving performance. Further, we demonstrate that the learned policy can be transferred to larger LMs, and through various analyses, we aid the understanding of token importance within prompts.",5df422fc18974d687febd171adcac35b3012c50a,Semantic Scholar,,, +466,harnessing large language models' empathetic response generation capabilities for online mental health counselling support,"['Siyuan Brandon Loh', 'Aravind Sesagiri Raamkumar']",https://arxiv.org/pdf/2310.08017,2023-10-12,,"Large Language Models (LLMs) have demonstrated remarkable performance across various information-seeking and reasoning tasks. These computational systems drive state-of-the-art dialogue systems, such as ChatGPT and Bard.
They also carry substantial promise in meeting the growing demands of mental health care, albeit relatively unexplored. As such, this study sought to examine LLMs' capability to generate empathetic responses in conversations that emulate those in a mental health counselling setting. We selected five LLMs: version 3.5 and version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple instructional prompt, these models responded to utterances derived from the EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we compared their responses to those from traditional response generation dialogue systems, which were fine-tuned on the ED dataset, along with human-generated responses. Notably, we discovered that responses from the LLMs were remarkably more empathetic in most scenarios. We position our findings in light of catapulting advancements in creating empathetic conversational systems.",88a3abf671d922ebd61a34007908a5f6b6978bd4,Semantic Scholar,,, +467,promptbased learning for thread structure prediction in cybersecurity forums,"['Kazuaki Kashihara', 'Kuntal Kumar Pal', 'Chitta Baral', 'Robert P. Trevino']",http://arxiv.org/pdf/2303.05400,2023-03-05,,"With recent trends indicating cyber crimes increasing in both frequency and cost, it is imperative to develop new methods that leverage data-rich hacker forums to assist in combating ever evolving cyber threats. Defining interactions within these forums is critical as it facilitates identifying highly skilled users, which can improve prediction of novel threats and future cyber attacks. We propose a method called Next Paragraph Prediction with Instructional Prompting (NPP-IP) to predict thread structures while grounded on the context around posts. This is the first time to apply an instructional prompting approach to the cybersecurity domain. We evaluate our NPP-IP with the Reddit dataset and Hacker Forums dataset that has posts and thread structures of real hacker forums' threads, and compare our method's performance with existing methods. The experimental evaluation shows that our proposed method can predict the thread structure significantly better than existing methods allowing for better social network prediction based on forum interactions.",a71207f1d036969bf92959ea56cf146d5d8eb297,Semantic Scholar,,, +468,impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt,"['Chong Ma', 'Zihao Wu', 'Jiaqi Wang', 'Shaochen Xu', 'Yaonai Wei', 'Zheng Liu', 'Lei Guo', 'Xiaoya Cai', 'Shu Zhang', 'Tuo Zhang', 'Dajiang Zhu', 'Dinggang Shen', 'Tianming Liu', 'Xiang Li']",http://arxiv.org/pdf/2304.08448,2023-04-17,,"The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. Although recent studies have achieved promising results in automatic impression generation using large-scale medical text data for pre-training and fine-tuning pre-trained language models, such models often require substantial amounts of medical text data and have poor generalization performance. While large language models (LLMs) like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. 
To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples from existing data. Additionally, we design an iterative optimization algorithm that performs automatic evaluation on the generated impression results and composes the corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.",a7f8fd45fbcdd81449cb7a1a6a2b2c18b38f8151,Semantic Scholar,,, +469,camoscio an italian instructiontuned llama,"['Andrea Santilli', 'E. Rodolà']",https://arxiv.org/pdf/2307.16456,2023-07-31,,"In recent years Large Language Models (LLMs) have increased the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically English-centric or multilingual without a specific adaptation for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following url: https://github.com/teelinsan/camoscio",a7ff4d1a89baa5007b3c9ee46492aaf88dfc257f,Semantic Scholar,,, +470,layout and task aware instruction prompt for zeroshot document image question answering,"['Wenjin Wang', 'Yunhao Li', 'Yixin Ou', 'Yin Zhang']",https://arxiv.org/pdf/2306.00526,2023-06-01,,"Layout-aware pre-trained models has achieved significant progress on document image question answering. They introduce extra learnable modules into existing language models to capture layout information within document images from text bounding box coordinates obtained by OCR tools. However, extra modules necessitate pre-training on extensive document images. This prevents these methods from directly utilizing off-the-shelf instruction-tuning language foundation models, which have recently shown promising potential in zero-shot learning. Instead, in this paper, we find that instruction-tuning language models like Claude and ChatGPT can understand layout by spaces and line breaks. Based on this observation, we propose the LAyout and Task aware Instruction Prompt (LATIN-Prompt), which consists of layout-aware document content and task-aware instruction. 
Specifically, the former uses appropriate spaces and line breaks to recover the layout information among text segments obtained by OCR tools, and the latter ensures that generated answers adhere to formatting requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning (LATIN-Tuning) to improve the performance of small instruction-tuning models like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuning performance of SOTAs on document image question answering, and LATIN-Tuning enhances the zero-shot performance of Alpaca significantly. For example, LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263% and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning. We provide the code in supplementary and will release it to facilitate future research.",aa828072e36be23887eeb3ac277901d8f893ef53,Semantic Scholar,,, +471,mondrian prompt abstraction attack against large language models for cheaper api pricing,"['Waiman Si', 'M. Backes', 'Yang Zhang']",https://arxiv.org/pdf/2308.03558,2023-08-07,,"The Machine Learning as a Service (MLaaS) market is rapidly expanding and becoming more mature. For example, OpenAI's ChatGPT is an advanced large language model (LLM) that generates responses for various queries with associated fees. Although these models can deliver satisfactory performance, they are far from perfect. Researchers have long studied the vulnerabilities and limitations of LLMs, such as adversarial attacks and model toxicity. Inevitably, commercial ML models are also not exempt from such issues, which can be problematic as MLaaS continues to grow. In this paper, we discover a new attack strategy against LLM APIs, namely the prompt abstraction attack. Specifically, we propose Mondrian, a simple and straightforward method that abstracts sentences, which can lower the cost of using LLM APIs. In this approach, the adversary first creates a pseudo API (with a lower established price) to serve as the proxy of the target API (with a higher established price). Next, the pseudo API leverages Mondrian to modify the user query, obtain the abstracted response from the target API, and forward it back to the end user. Our results show that Mondrian successfully reduces user queries' token length ranging from 13% to 23% across various tasks, including text classification, generation, and question answering. Meanwhile, these abstracted queries do not significantly affect the utility of task-specific and general language models like ChatGPT. Mondrian also reduces instruction prompts' token length by at least 11% without compromising output quality. As a result, the prompt abstraction attack enables the adversary to profit without bearing the cost of API development and deployment.",afa0188e454495c08bfaecf29596f01efb468b9a,Semantic Scholar,,, +472,linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging,"['Andrew Rosenbaum', 'Saleh Soltan', 'Wael Hamza', 'Yannick Versley', 'M. Boese']",http://arxiv.org/pdf/2209.09900,2022-09-20,,"We present LINGUIST, a method for generating annotated data for Intent Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. 
In a 10-shot novel intent setting for the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and Example Extrapolation) by a wide margin, showing absolute improvement for the target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST out-performs a strong baseline of Machine Translation with Slot Alignment by +4.14 points absolute on ST F1 Score across 6 languages, while matching performance on IC. Finally, we verify our results on an internal large-scale multilingual dataset for conversational agent IC+ST and show significant improvements over a baseline which uses Back-Translation, Paraphrasing and Slot Catalog Resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the outputs of multilingual intent- and slot-labeled data generation.",cb5cfc2dd4965262d2ce302362b1f2dbfa4a5419,Semantic Scholar,,, +473,"grips gradientfree, editbased instruction search for prompting large language models","['Archiki Prasad', 'Peter Hase', 'Xiang Zhou', 'Mohit Bansal']",http://arxiv.org/pdf/2203.07281,2022-03-14,,"Providing natural language instructions in prompts is a useful new paradigm for improving task performance of large language models in a zero-shot setting. Recent work has aimed to improve such prompts via manual rewriting or gradient-based tuning. However, manual rewriting is time-consuming and requires subjective interpretation, while gradient-based tuning can be extremely computationally demanding for large models and may not be feasible for API-based models. In this work, we introduce Gradient-free Instructional Prompt Search (GrIPS), a gradient-free, edit-based search approach for improving task instructions for large language models. GrIPS takes in instructions designed for humans and automatically returns an improved, edited prompt, while allowing for API-based tuning. With InstructGPT models, GrIPS improves the average task performance by up to 4.30 percentage points on eight classification tasks from the Natural Instructions dataset (with similar improvements for OPT, BLOOM, and FLAN-T5). We see improvements for both instruction-only prompts and instruction + k-shot examples prompts. Notably, GrIPS outperforms manual rewriting and purely example-based prompts while controlling for the available compute and data budget. Further, performance of GrIPS is comparable to select gradient-based tuning approaches. Qualitatively, we show our edits can simplify instructions and at times make them incoherent but nonetheless improve accuracy.",cf934ddd3c852ba9c67cdfd21bf41e7723fc6d9e,Semantic Scholar,,, +474,casteist but not racist quantifying disparities in large language model bias between india and the west,"['Khyati Khandelwal', 'Manuel Tonneau', 'Andrew M. Bean', 'Hannah Rose Kirk', 'Scott A. Hale']",https://arxiv.org/pdf/2309.08573,2023-09-15,,"Large Language Models (LLMs), now used daily by millions of users, can encode societal biases, exposing their users to representational harms. A large body of scholarship on LLM bias exists but it predominantly adopts a Western-centric frame and attends comparatively less to bias levels and potential harms in the Global South. In this paper, we quantify stereotypical bias in popular LLMs according to an Indian-centric frame and compare bias levels between the Indian and Western contexts. 
To do this, we develop a novel dataset which we call Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and anti-stereotypical examples for caste and religion contexts. We find that the majority of LLMs tested are strongly biased towards stereotypes in the Indian context, especially as compared to the Western context. We finally investigate Instruction Prompting as a simple intervention to mitigate such bias and find that it significantly reduces both stereotypical and anti-stereotypical biases in the majority of cases for GPT-3.5. The findings of this work highlight the need for including more diverse voices when evaluating LLMs.",e4282cab4a435d5249fc8db49fc1c9268438fedb,Semantic Scholar,,, +475,selfalignment with instruction backtranslation,"['Xian Li', 'Ping Yu', 'Chunting Zhou', 'Timo Schick', 'Luke Zettlemoyer', 'Omer Levy', 'J. Weston', 'M. Lewis']",https://arxiv.org/pdf/2308.06259,2023-08-11,,"We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard not relying on distillation data, demonstrating highly effective self-alignment.",f2ba9e7d9624bd94a786ea5e3161a9425a21a475,Semantic Scholar,,, +476,inboxbart get instructions into biomedical multitask learning,"['Mihir Parmar', 'Swaroop Mishra', 'Mirali Purohit', 'Man Luo', 'M. H. Murad', 'Chitta Baral']",http://arxiv.org/pdf/2204.07600,2022-04-15,,"Single-task models have proven pivotal in solving specific tasks; however, they have limitations in real-world applications where multi-tasking is necessary and domain shifts are exhibited. Recently, instructional prompts have shown significant improvement towards multi-task generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain. Motivated by this, this paper explores the impact of instructional prompts for biomedical MTL. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART, that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to propose a unified model in the biomedical domain and use instructions to achieve generalization across several biomedical tasks. Experimental results indicate that the proposed model: 1) outperforms the single-task baseline by ~3% and multi-task (without instruction) baseline by ~18% on an average, and 2) shows ~23% improvement compared to the single-task baseline in few-shot learning (i.e., 32 instances per task) on an average. 
Our analysis indicates that there is significant room for improvement across tasks in the BoX, implying the scope for future research direction.",fb30166c218bef3597b0d9789ad340defc3989ca,Semantic Scholar,,, +477,red teaming language model detectors with language models,"['Zhouxing Shi', 'Yihan Wang', 'Fan Yin', 'Xiangning Chen', 'Kai-Wei Chang', 'Cho-Jui Hsieh']",http://arxiv.org/pdf/2305.19713,2023-05-31,,"The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent works have proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems.",fdb361ea83c010ed0011d179567de5a1112651ac,Semantic Scholar,,, +478,cocomo computational consciousness modeling for generative and ethical ai,['Edward Y. Chang'],http://arxiv.org/pdf/2304.02438,2023-03-17,,"The CoCoMo model proposes a computational solution to the challenge of incorporating ethical and emotional intelligence considerations into AI systems, with the aim of creating AI agents that combine knowledge with compassion. To achieve this goal, CoCoMo prioritizes fairness, beneficence, non-maleficence, empathy, adaptability, transparency, and critical and exploratory thinking abilities. The model employs consciousness modeling, reinforcement learning, and prompt template formulation to support these desired traits. By incorporating ethical and emotional intelligence considerations, a generative AI model can potentially lead to improved fairness, reduced toxicity, and increased reliability.",12bad2032f3efa5a142d7dd25712960a4f9ca5a7,Semantic Scholar,,, +479,global constraints with prompting for zeroshot event argument classification,"['Zizheng Lin', 'Hongming Zhang', 'Yangqiu Song']",http://arxiv.org/pdf/2302.04459,2023-02-09,,"Determining the role of event arguments is a crucial subtask of event extraction. Most previous supervised models leverage costly annotations, which is not practical for open-domain applications. In this work, we propose to use global constraints with prompting to effectively tackles event argument classification without any annotation and task-specific training. Specifically, given an event and its associated passage, the model first creates several new passages by prefix prompts and cloze prompts, where prefix prompts indicate event type and trigger span, and cloze prompts connect each candidate role with the target argument span. Then, a pre-trained language model scores the new passages, making the initial prediction. Our novel prompt templates can easily adapt to all events and argument types without manual effort. 
Next, the model regularizes the prediction by global constraints exploiting cross-task, cross-argument, and cross-event relations. Extensive experiments demonstrate our model’s effectiveness: it outperforms the best zero-shot baselines by 12.5% and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1, respectively, without given argument spans. We have made our code publicly available.",1467ced85b3ae2d695079a1557063a445c43988a,Semantic Scholar,,, +480,a unified framework for multiintent spoken language understanding with prompting,"['Feifan Song', 'Lianzhe Huang', 'Houfeng Wang']",http://arxiv.org/pdf/2210.03337,2022-10-07,,"Multi-intent Spoken Language Understanding has great potential for widespread implementation. Jointly modeling Intent Detection and Slot Filling in it provides a channel to exploit the correlation between intents and slots. However, current approaches are apt to formulate these two sub-tasks differently, which leads to two issues: 1) It hinders models from effective extraction of shared features. 2) Pretty complicated structures are involved to enhance expression ability while causing damage to the interpretability of frameworks. In this work, we describe a Prompt-based Spoken Language Understanding (PromptSLU) framework, to intuitively unify two sub-tasks into the same form by offering a common pre-trained Seq2Seq model. In detail, ID and SF are completed by concisely filling the utterance into task-specific prompt templates as input, and sharing output formats of key-value pairs sequence. Furthermore, variable intents are predicted first, then naturally embedded into prompts to guide slot-value pairs inference from a semantic perspective. Finally, we are inspired by prevalent multi-task learning to introduce an auxiliary sub-task, which helps to learn relationships among provided labels. Experiment results show that our framework outperforms several state-of-the-art baselines on two public datasets.",171412ef2410fad3f9a09238ad9e272c4e31aed4,Semantic Scholar,,, +481,knowprompt knowledgeaware prompttuning with synergistic optimization for relation extraction,"['Xiang Chen', 'Ningyu Zhang', 'Ningyu Zhang', 'Xin Xie', 'Shumin Deng', 'Yunzhi Yao', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",https://arxiv.org/pdf/2104.07650,2021-04-15,,"Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and transform a classification task into a masked language modeling problem. However, for relation extraction, determining an appropriate prompt template requires domain expertise, and it is cumbersome and time-consuming to obtain a suitable label word. Furthermore, there exists abundant semantic and prior knowledge among the relation labels that cannot be ignored. To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt). Specifically, we inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words. Then, we synergistically optimize their representation with structured constraints. Extensive experimental results on five datasets with standard and low-resource settings demonstrate the effectiveness of our approach. 
Our code and datasets are available in GitHub for reproducibility.",1a2e90dff605dad7dbefeed121e6d295c7a77d62,Semantic Scholar,,, +482,visual prompting for adversarial robustness,"['Aochuan Chen', 'P. Lorenz', 'Yuguang Yao', 'Pin-Yu Chen', 'Sijia Liu']",https://arxiv.org/pdf/2210.06284,2022-10-12,,"In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows us to design universal (i.e., data-agnostic) input prompting templates, which have plug-and-play capabilities at test time to achieve desired model performance without introducing much computation overhead. Although VP has been successfully applied to improving model generalization, it remains elusive whether and how it can be used to defend against adversarial attacks. We investigate this problem and show that the vanilla VP approach is not effective in adversarial defense since a universal input prompt lacks the capacity for robust learning against sample-specific adversarial perturbations. To circumvent it, we propose a new VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate class-wise visual prompts so as to not only leverage the strengths of ensemble prompts but also optimize their interrelations to improve model robustness. Our experiments show that C-AVP outperforms the conventional VP method, with 2.1× standard accuracy gain and 2× robust accuracy gain. Compared to classical test-time defenses, C-AVP also yields a 42× inference time speedup. Code is available at https://github.com/Phoveran/vp-for-adversarial-robustness.",20cb40199d03395d63615854863f9eda9c7863e2,Semantic Scholar,,, +483,rethinking the event coding pipeline with prompt entailment,"['C. Lefebvre', 'Niklas Stoehr']",http://arxiv.org/pdf/2210.05257,2022-10-11,,"For monitoring crises, political events are extracted from the news. The large amount of unstructured full-text event descriptions makes a case-by-case analysis unmanageable, particularly for low-resource humanitarian aid organizations. This creates a demand to classify events into event types, a task referred to as event coding. Typically, domain experts craft an event type ontology, annotators label a large dataset and technical experts develop a supervised coding system. In this work, we propose PR-ENT, a new event coding approach that is more flexible and resource-efficient, while maintaining competitive accuracy: first, we extend an event description such as “Military injured two civilians” by a template, e.g. “People were [Z]” and prompt a pre-trained (cloze) language model to fill the slot Z. Second, we select suitable answer candidates Z* = “injured”, “hurt”... by treating the event description as premise and the filled templates as hypothesis in a textual entailment task. In a final step, the selected answer candidate can be mapped to its corresponding event type. This allows domain experts to draft the codebook directly as labeled prompts and interpretable answer candidates. This human-in-the-loop process is guided by our codebook design tool. We show that our approach is robust through several checks: perturbing the event description and prompt template, restricting the vocabulary and removing contextual information.",236375f49e3deb8ee7918c1f5e65175e453deb2e,Semantic Scholar,,, +484,positionbased prompting for health outcome generation,"['Micheal Abaho', 'D. Bollegala', 'P. Williamson', 'S.
Dodd']",http://arxiv.org/pdf/2204.03489,2022-03-30,,"Probing factual knowledge in Pre-trained Language Models (PLMs) using prompts has indirectly implied that language models (LMs) can be treated as knowledge bases. To this end, this phenomenon has been effective, especially when these LMs are fine-tuned towards not just data, but also to the style or linguistic pattern of the prompts themselves. We observe that satisfying a particular linguistic pattern in prompts is an unsustainable, time-consuming constraint in the probing task, especially because they are often manually designed and the range of possible prompt template patterns can vary depending on the prompting task. To alleviate this constraint, we propose using a position-attention mechanism to capture positional information of each word in a prompt relative to the mask to be filled, hence avoiding the need to re-construct prompts when the prompts’ linguistic pattern changes. Using our approach, we demonstrate the ability of eliciting answers (in a case study on health outcome generation) to not only common prompt templates like Cloze and Prefix but also rare ones too, such as Postfix and Mixed patterns whose masks are respectively at the start and in multiple random places of the prompt. More so, using various biomedical PLMs, our approach consistently outperforms a baseline in which the default PLMs representation is used to predict masked tokens.",2c12d24c5ba5ad3bb3994635fcfcb9f8caac31d0,Semantic Scholar,,, +485,prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge,"['Jinyuan Li', 'Han Li', 'Zhufeng Pan', 'Gang Pan']",https://aclanthology.org/2023.findings-emnlp.184.pdf,2023-05-20,,"Multimodal Named Entity Recognition (MNER) on social media aims to enhance textual entity prediction by incorporating image-based clues. Existing studies mainly focus on maximizing the utilization of pertinent image information or incorporating external knowledge from explicit knowledge bases. However, these methods either neglect the necessity of providing the model with external knowledge, or encounter issues of high redundancy in the retrieved knowledge. In this paper, we present PGIM -- a two-stage framework that aims to leverage ChatGPT as an implicit knowledge base and enable it to heuristically generate auxiliary knowledge for more efficient entity prediction. Specifically, PGIM contains a Multimodal Similar Example Awareness module that selects suitable examples from a small number of predefined artificial samples. These examples are then integrated into a formatted prompt template tailored to the MNER and guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired knowledge is integrated with the original text and fed into a downstream model for further processing. Extensive experiments show that PGIM outperforms state-of-the-art methods on two classic MNER datasets and exhibits a stronger robustness and generalization capability.",2c23a8c8b65c3dfe3bdbe93e60e04637fee48e2b,Semantic Scholar,,, +486,graphprompt biomedical entity normalization using graphbased prompt templates,"['Jiayou Zhang', 'Zhirui Wang', 'Shizhuo Zhang', 'M. Bhalerao', 'Yucong Liu', 'Dawei Zhu', 'Sheng Wang']",https://www.biorxiv.org/content/biorxiv/early/2021/12/01/2021.11.29.470486.full.pdf,2021-12-01,,"Biomedical entity normalization unifies the language across biomedical experiments and studies, and further enables us to obtain a holistic view of life sciences. 
Current approaches mainly study the normalization of more standardized entities such as diseases and drugs, while disregarding the more ambiguous but crucial entities such as pathways, functions and cell types, hindering their real-world applications. To achieve biomedical entity normalization on these under-explored entities, we first introduce an expert-curated dataset OBO-syn encompassing 70 different types of entities and 2 million curated entity-synonym pairs. To utilize the unique graph structure in this dataset, we propose GraphPrompt, a promptbased learning approach that creates prompt templates according to the graphs. Graph-Prompt obtained 41.0% and 29.9% improvement on zero-shot and few-shot settings respectively, indicating the effectiveness of these graph-based prompt templates. We envision that our method GraphPrompt and OBO-syn dataset can be broadly applied to graph-based NLP tasks, and serve as the basis for analyzing diverse and accumulating biomedical data.",2d7a6a52264e8f875105cfb34c6c901bfd1f3229,Semantic Scholar,,, +487,metricprompt prompting model as a relevance metric for fewshot text classification,"['Hongyuan Dong', 'Weinan Zhang', 'Wanxiang Che']",https://arxiv.org/pdf/2306.08892,2023-06-15,,"Prompting methods have shown impressive performance in a variety of text mining tasks and applications, especially few-shot ones. Despite the promising prospects, the performance of prompting model largely depends on the design of prompt template and verbalizer. In this work, we propose MetricPrompt, which eases verbalizer design difficulty by reformulating few-shot text classification task into text pair relevance estimation task. MetricPrompt adopts prompting model as the relevance metric, further bridging the gap between Pre-trained Language Model's (PLM) pre-training objective and text classification task, making possible PLM's smooth adaption. Taking a training sample and a query one simultaneously, MetricPrompt captures cross-sample relevance information for accurate relevance estimation. We conduct experiments on three widely used text classification datasets across four few-shot settings. Results show that MetricPrompt outperforms manual verbalizer and other automatic verbalizer design methods across all few-shot settings, achieving new state-of-the-art (SOTA) performance.",2e403ad2cd02409e1fdc15839da0a3f89886a990,Semantic Scholar,,, +488,prompt learning for news recommendation,"['Zizhuo Zhang', 'Bang-wei Wang']",https://arxiv.org/pdf/2304.05263,2023-04-11,,"Some recent news recommendation (NR) methods introduce a Pre-trained Language Model (PLM) to encode news representation by following the vanilla pre-train and fine-tune paradigm with carefully-designed recommendation-specific neural networks and objective functions. Due to the inconsistent task objective with that of PLM, we argue that their modeling paradigm has not well exploited the abundant semantic information and linguistic knowledge embedded in the pre-training process. Recently, the pre-train, prompt, and predict paradigm, called prompt learning, has achieved many successes in natural language processing domain. In this paper, we make the first trial of this new paradigm to develop a Prompt Learning for News Recommendation (Prompt4NR) framework, which transforms the task of predicting whether a user would click a candidate news as a cloze-style mask-prediction task. 
Specifically, we design a series of prompt templates, including discrete, continuous, and hybrid templates, and construct their corresponding answer spaces to examine the proposed Prompt4NR framework. Furthermore, we use the prompt ensembling to integrate predictions from multiple prompt templates. Extensive experiments on the MIND dataset validate the effectiveness of our Prompt4NR with a set of new benchmark results.",2ee1f98649ff27378fc341cae907eb89aba8fba4,Semantic Scholar,,, +489,groundtruth labels matter a deeper look into inputlabel demonstrations,"['Junyeob Kim', 'Hyuhng Joon Kim', 'Hyunsoo Cho', 'Hwiyeol Jo', 'Sang-Woo Lee', 'Sang-goo Lee', 'Kang Min Yoo', 'Taeuk Kim']",http://arxiv.org/pdf/2205.12685,2022-05-25,,"Despite recent explosion of interests in in-context learning, the underlying mechanism and the precise impact of the quality of demonstrations remain elusive.Intuitively, ground-truth labels should have as much impact in in-context learning (ICL) as supervised learning, but recent work reported that the input-label correspondence is significantly less important than previously thought.Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning.With the introduction of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the impact of ground-truth label demonstrations.Through extensive analyses, we find that the correct input-label mappings can have varying impacts on the downstream in-context learning performances, depending on the experimental configuration.Through additional studies, we identify key components, such as the verbosity of prompt templates and the language model size, as the controlling factor to achieve more noise-resilient ICL.",316206a2f89eb94ce02a81fba1dc304586f21b39,Semantic Scholar,,, +490,lowresource multigranularity academic function recognition based on multiple prompt knowledge,"['Jiawei Liu', 'Ziteng Xiong', 'Yi-ping Jiang', 'Yongqiang Ma', 'Wei Lu', 'Yong Huang', 'Qikai Cheng']",http://arxiv.org/pdf/2305.03287,2023-05-05,,"Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally requires large numbers of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining the fine-tune data for scientific NLP task is still challenging and expensive. Inspired by recent advancement in prompt learning, in this paper, we propose the Mix Prompt Tuning (MPT), which is a semi-supervised method to alleviate the dependence on annotated data and improve the performance of multi-granularity academic function recognition tasks with a small number of labeled examples. Specifically, the proposed method provides multi-perspective representations by combining manual prompt templates with automatically learned continuous prompt templates to help the given academic function recognition task take full advantage of knowledge in PLMs. Based on these prompt templates and the fine-tuned PLM, a large number of pseudo labels are assigned to the unlabeled examples. Finally, we fine-tune the PLM using the pseudo training set. We evaluate our method on three academic function recognition tasks of different granularity including the citation function, the abstract sentence function, and the keyword function, with datasets from computer science domain and biomedical domain. 
Extensive experiments demonstrate the effectiveness of our method and statistically significant improvements against strong baselines. In particular, it achieves an average increase of 5% in Macro-F1 score compared with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised method under low-resource settings. In addition, MPT is a general method that can be easily applied to other low-resource scientific classification tasks.",35d2276749c2c31290d2ff410a305112e742da71,Semantic Scholar,,, +491,unihd at tsar2022 shared task is compute all we need for lexical simplification,"['Dennis Aumiller', 'Michael Gertz']",http://arxiv.org/pdf/2301.01764,2023-01-04,,"Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an “ensemble” of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at https://github.com/dennlinger/TSAR-2022-Shared-Task.",40fba1fc70e23abf9a3ea428f186dd44e57723fb,Semantic Scholar,,, +492,can language models be biomedical knowledge bases,"['Mujeen Sung', 'Jinhyuk Lee', 'Sean S. Yi', 'Minji Jeon', 'Sungdong Kim', 'Jaewoo Kang']",https://aclanthology.org/2021.emnlp-main.388.pdf,2021-09-15,,"Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we can extract that knowledge, treating LMs as knowledge bases (KBs). While there has been much work on probing LMs in the general domain, there has been little attention to whether these powerful LMs can be used as domain-specific KBs. To this end, we create the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge triples for probing biomedical LMs. We find that biomedical LMs with recently proposed probing methods can achieve up to 18.51% Acc@5 on retrieving biomedical knowledge. Although this seems promising given the task difficulty, our detailed analyses reveal that most predictions are highly correlated with prompt templates without any subjects, hence producing similar results on each relation and hindering their capabilities to be used as domain-specific KBs. We hope that BioLAMA can serve as a challenging benchmark for biomedical factual probing.",4c5f4ddc68be643fb34ea969bf2c105ff7538995,Semantic Scholar,,, +493,dynamar dynamic prompt with mask token representation,"['Xiaodi Sun', 'Sunny Rajagopalan', 'Priyank Nigam', 'Weiyi Lu', 'Yi Xu', 'Belinda Zeng', 'Trishul M. 
Chilimbi']",https://arxiv.org/pdf/2206.02982,2022-06-07,,"Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvement on many downstream tasks. Typically when adapting these language models to downstream tasks, like a classification or regression task, we employ a fine-tuning paradigm in which the sentence representation from the language model is input to a task-specific head; the model is then fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has been proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt technologies in practice. There are two issues that arise with the standard prompt approach. First, it can overfit on the prompt template. Second, it requires manual effort to formulate the downstream task as a language model problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results show that DynaMaR can achieve an average improvement of 10% in few-shot settings and improvement of 3.7% in data-rich settings over the standard fine-tuning approach on four e-commerce applications.",5d5b6b6c033c36a8b730042392cd29da84b67481,Semantic Scholar,,, +494,citeprompt using prompts to identify citation intent in scientific papers,"['Avishek Lahiri', 'Debarshi Kumar Sanyal', 'Imon Mukherjee']",https://arxiv.org/pdf/2304.12730,2023-04-25,,"Citations in scientific papers not only help us trace the intellectual lineage but also are a useful indicator of the scientific significance of the work. Citation intents prove beneficial as they specify the role of the citation in a given context. We present a tool Citeprompt which uses the hitherto unexplored approach of prompt learning for citation intent classification. We argue that with the proper choice of the pretrained language model, the prompt template, and the prompt verbalizer, we can not only get results that are better than or comparable to those obtained with the state-of-the-art methods but also do it with much less exterior information about the scientific document. We report state-of-the-art results on the ACL-ARC dataset, and also show significant improvement on the SciCite dataset over all baseline models except one. As suitably large labelled datasets for citation intent classification can be quite hard to find, in a first, we propose the conversion of this task to the few-shot and zero-shot settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings respectively.",68ee8a53f0b1ff146194980337dd6d533b17c59b,Semantic Scholar,,, +495,multilabel fewshot icd coding as autoregressive generation with prompt,"['Zhichao Yang', 'Sunjae Kwon', 'Zonghai Yao', 'Hongfeng Yu']",https://arxiv.org/pdf/2211.13813,2022-11-24,,"Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge - Many ICD codes are infrequently assigned yet infrequent ICD codes are important clinically. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. 
Specifically, we first introduce a novel pretraining objective to generate free text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting the high dimensional space of ICD codes, our model generates the lower dimension of text descriptions, which then infers ICD codes. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt (GPsoap) model with the benchmark of all code assignment (MIMIC-III-full) and few shot ICD code assignment evaluation benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model performs with a marco F130.2, which substantially outperforms the previous MIMIC-III-full SOTA model (marco F1 4.3) and the model specifically designed for few/zero shot setting (marco F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.",6b87c9700b8de4912fe7c361574640b5dc536ca9,Semantic Scholar,,, +496,diffugen adaptable approach for generating labeled image datasets using stable diffusion models,"['Michael Shenoda', 'Edward Kim']",https://arxiv.org/pdf/2309.00248,2023-09-01,,"Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce""DiffuGen,""a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.",6c1a53c05f1b1a024af740df84e530d79400ab86,Semantic Scholar,,, +497,llmfuncmapper function identification for interpreting complex clauses in building codes via llm,"['Zhe Zheng', 'Ke Chen', 'Xin Cao', 'Xin-Zheng Lu', 'Jia Lin']",https://arxiv.org/pdf/2308.08728,2023-08-17,,"As a vital stage of automated rule checking (ARC), rule interpretation of regulatory texts requires considerable effort. However, interpreting regulatory clauses with implicit properties or complex computational logic is still challenging due to the lack of domain knowledge and limited expressibility of conventional logic representations. Thus, LLM-FuncMapper, an approach to identifying predefined functions needed to interpret various regulatory clauses based on the large language model (LLM), is proposed. First, by systematically analysis of building codes, a series of atomic functions are defined to capture shared computational logics of implicit properties and complex constraints, creating a database of common blocks for interpreting regulatory clauses. 
Then, a prompt template with the chain of thought is developed and further enhanced with a classification-based tuning strategy, to enable common LLMs for effective function identification. Finally, the proposed approach is validated with statistical analysis, experiments, and proof of concept. Statistical analysis reveals a long-tail distribution and high expressibility of the developed function database, with which almost 100% of computer-processible clauses can be interpreted and represented as computer-executable codes. Experiments show that LLM-FuncMapper achieve promising results in identifying relevant predefined functions for rule interpretation. Further proof of concept in automated rule interpretation also demonstrates the possibility of LLM-FuncMapper in interpreting complex regulatory clauses. To the best of our knowledge, this study is the first attempt to introduce LLM for understanding and interpreting complex regulatory clauses, which may shed light on further adoption of LLM in the construction domain.",6c4d35d67f843e7de6ec00c088e339b2237d222c,Semantic Scholar,,, +498,fashionsap symbols and attributes prompt for finegrained fashion visionlanguage pretraining,"['Yunpeng Han', 'Lisai Zhang', 'Qingcai Chen', 'Zhijian Chen', 'Zhonghua Li', 'Jianxin Yang', 'Zhao Cao']",https://arxiv.org/pdf/2304.05051,2023-04-11,,"Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific domain tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent different fashion items and to generalize various kinds of fine- grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific attributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Comprehensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and FashionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research.11The source code is available at https://github.com/hssip/FashionSAP",6f05be4a0045cee3575fb39e88fc361d96f2cc4f,Semantic Scholar,,, +499,relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction,"['Yew Ken Chia', 'Lidong Bing', 'Soujanya Poria', 'Luo Si']",http://arxiv.org/pdf/2203.09101,2022-03-17,,"Despite the importance of relation extraction in building and representing knowledge, less research is focused on generalizing to unseen relations types. We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods. 
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage. To solve ZeroRTE, we propose to synthesize relation examples by prompting language models to generate structured texts. Concretely, we unify language model prompts and structured text approaches to design a structured prompt template for generating synthetic relation samples when conditioning on relation label prompts (RelationPrompt). To overcome the limitation for extracting multiple relation triplets in a sentence, we design a novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot relation classification. Our code and data are available at github.com/declare-lab/RelationPrompt.",743dcf234cffd54c4e096a10a284dd81572b16ea,Semantic Scholar,,, +500,instructcv instructiontuned texttoimage diffusion models as vision generalists,"['Yulu Gan', 'Sungwoo Park', 'Alexander Schubert', 'Anthony Philippakis', 'A. Alaa']",https://arxiv.org/pdf/2310.00390,2023-09-30,,"Recent advances in generative diffusion models have enabled text-controlled synthesis of realistic and diverse images with impressive quality. Despite these remarkable advances, the application of text-to-image generative models in computer vision for standard visual recognition tasks remains limited. The current de facto approach for these tasks is to design model architectures and loss functions that are tailored to the task at hand. In this paper, we develop a unified language interface for computer vision tasks that abstracts away task-specific design choices and enables task execution by following natural language instructions. Our approach involves casting multiple computer vision tasks as text-to-image generation problems. Here, the text represents an instruction describing the task, and the resulting image is a visually-encoded task output. To train our model, we pool commonly-used computer vision datasets covering a range of tasks, including segmentation, object detection, depth estimation, and classification. We then use a large language model to paraphrase prompt templates that convey the specific tasks to be conducted on each image, and through this process, we create a multi-modal and multi-task training dataset comprising input and output images along with annotated instructions. Following the InstructPix2Pix architecture, we apply instruction-tuning to a text-to-image diffusion model using our constructed dataset, steering its functionality from a generative model to an instruction-guided multi-task vision learner. Experiments demonstrate that our model, dubbed InstructCV, performs competitively compared to other generalist and task-specific vision models. Moreover, it exhibits compelling generalization capabilities to unseen data, categories, and user instructions.",819f477065088220a6f706cd9ef76dbcb4b4c134,Semantic Scholar,,, +501,promptlearning for crosslingual relation extraction,"['Chiaming Hsu', 'Changtong Zan', 'Liang Ding', 'Longyue Wang', 'Xiaoting Wang', 'Weifeng Liu', 'Fu Lin', 'Wenbin Hu']",https://arxiv.org/pdf/2304.10354,2023-04-20,,"Relation Extraction (RE) is a crucial task in Information Extraction, which entails predicting relationships between entities within a given sentence. However, extending pre-trained RE models to other languages is challenging, particularly in real-world scenarios where Cross-Lingual Relation Extraction (XRE) is required. 
Despite recent advancements in Prompt-Learning, which involves transferring knowledge from Multilingual Pre-trained Language Models (PLMs) to diverse downstream tasks, there is limited research on the effective use of multilingual PLMs with prompts to improve XRE. In this paper, we present a novel XRE algorithm based on Prompt-Tuning, referred to as Prompt-XRE. To evaluate its effectiveness, we design and implement several prompt templates, including hard, soft, and hybrid prompts, and empirically test their performance on competitive multilingual PLMs, specifically mBART. Our extensive experiments, conducted on the low-resource ACE05 benchmark across multiple languages, demonstrate that our Prompt-XRE algorithm significantly outperforms both vanilla multilingual PLMs and other existing models, achieving state-of-the-art performance in XRE. To further show the generalization of our Prompt-XRE on larger data scales, we construct and release a new XRE dataset-WMT17-EnZh XRE, containing 0.9M English-Chinese pairs extracted from WMT 2017 parallel corpus. Experiments on WMT17-EnZh XRE also show the effectiveness of our Prompt-XRE against other competitive baselines. The code and newly constructed dataset are freely available at https://github.com/HSU-CHIA-MING/Prompt-XRE.",850b8f31a1bb762544bd35163923784a664b315a,Semantic Scholar,,, +502,large language and textto3d models for engineering design optimization,"['Thiago Rios', 'S. Menzel', 'B. Sendhoff']",https://arxiv.org/pdf/2307.01230,2023-07-03,,"The current advances in generative artificial intelligence for learning large neural network models with the capability to produce essays, images, music and even 3D assets from text prompts create opportunities for a manifold of disciplines. In the present paper, we study the potential of deep text-to-3D models in the engineering domain and focus on the chances and challenges when integrating and interacting with 3D assets in computational simulation-based design optimization. In contrast to traditional design optimization of 3D geometries that often searches for the optimum designs using numerical representations, e.g. B-Spline surfaces, natural language challenges the optimization framework by requiring a different interpretation of variation operators while at the same time may ease and motivate the human user interaction. Here, we propose and realize a fully automated evolutionary design optimization framework using Shap-E, a recently published text-to-3D asset network by OpenAI, in the context of aerodynamic vehicle optimization. For representing text prompts in the evolutionary optimization, we evaluate (a) a bag-of-words approach based on prompt templates and Wordnet samples, and (b) a tokenisation approach based on prompt templates and the byte pair encoding method from GPT4. In our experiments, we show the text-based representations allow the optimizer to find better performing designs. However, it is important to ensure that the designs generated from prompts are within the object class of application, i.e. diverse and novel designs need to be realistic. 
Furthermore, more research is required to develop methods where the strength of text prompt variations and the resulting variations of the 3D designs share causal relations to some degree to improve the optimization.",8c2dbf98b75a01f7e93b68a9407f00b1728b66af,Semantic Scholar,,, +503,teprompt task enlightenment prompt learning for implicit discourse relation recognition,"['Wei Xiang', 'Chao Liang', 'Bang Wang']",http://arxiv.org/pdf/2305.10866,2023-05-18,,"Implicit Discourse Relation Recognition (IDRR) aims at classifying the relation sense between two arguments without an explicit connective. Recently, the ConnPrompt~\cite{Wei.X:et.al:2022:COLING} has leveraged the powerful prompt learning for IDRR based on the fusion of multi-prompt decisions from three different yet much similar connective prediction templates. Instead of multi-prompt ensembling, we propose to design auxiliary tasks with enlightened prompt learning for the IDRR task. Although an auxiliary task is not used to directly output final prediction, we argue that during the joint training some of its learned features can be useful to boost the main task. In light of such motivations, we propose a task enlightenment prompt learning model, called TEPrompt, to fuse learned features from three related tasks for IDRR. In particular, the TEPrompt contains three tasks, viz., Discourse Relation Recognition (DRR), Sense Semantics Classification (SSC) and Annotated Connective Prediction (ACP), each with a unique prompt template and an answer space. In the training phase, we jointly train three prompt learning tasks with shared argument representation. In the testing phase, we only take the DRR output with fused features as the final IDRR decision. Experiments with the same conditions have shown that the proposed TEPrompt outperforms the ConnPrompt. This can be attributed to the promoted decision features and language models benefited from joint-training of auxiliary tasks.",8eeb6cf85e6bf305fb761a6e6a22de20f09909de,Semantic Scholar,,, +504,iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve,"['Luxi Xing', 'Yuqiang Xie', 'Yue Hu', 'Wei Peng']",https://aclanthology.org/2020.semeval-1.42.pdf,2020-07-02,,"This paper introduces our systems for the first two subtasks of SemEval Task4: Commonsense Validation and Explanation. To clarify the intention for judgment and inject contrastive information for selection, we propose the input reconstruction strategy with prompt templates. Specifically, we formalize the subtasks into the multiple-choice question answering format and construct the input with the prompt templates, then, the final prediction of question answering is considered as the result of subtasks. Experimental results show that our approaches achieve significant performance compared with the baseline systems. Our approaches secure the third rank on both official test sets of the first two subtasks with an accuracy of 96.4 and an accuracy of 94.3 respectively.",94db2ba208a3ab2e469a5a65d6192f4dd04ef0bf,Semantic Scholar,,, +505,autoclip autotuning zeroshot classifiers for visionlanguage models,"['J. H. Metzen', 'Piyapat Saranrittichai', 'Chaithanya Kumar Mummadi']",https://arxiv.org/pdf/2309.16414,2023-09-28,,"Classifiers built upon vision-language models such as CLIP have shown remarkable zero-shot performance across a broad range of image classification tasks. 
Prior work has studied different ways of automatically creating descriptor sets for every class based on prompt templates, ranging from manually engineered templates over templates obtained from a large language model to templates built from random words and characters. Up until now, deriving zero-shot classifiers from the respective encoded class descriptors has remained nearly unchanged, i.e., classify to the class that maximizes cosine similarity between its averaged encoded class descriptors and the image encoding. However, weighing all class descriptors equally can be suboptimal when certain descriptors match visual clues on a given image better than others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot classifiers. AutoCLIP tunes per-image weights to each prompt template at inference time, based on statistics of class descriptor-image similarities. AutoCLIP is fully unsupervised, has very low computational overhead, and can be easily implemented in few lines of code. We show that AutoCLIP outperforms baselines across a broad range of vision-language models, datasets, and prompt templates consistently and by up to 3 percent point accuracy.",99bd3e04b6b65abf3f03de69654059c3710d03e8,Semantic Scholar,,, +506,trustgpt a benchmark for trustworthy and responsible large language models,"['Yue Huang', 'Qihui Zhang', 'Philip S. Yu', 'Lichao Sun']",http://arxiv.org/pdf/2306.11507,2023-06-20,,"Large Language Models (LLMs) such as ChatGPT, have gained significant attention due to their impressive natural language processing capabilities. It is crucial to prioritize human-centered principles when utilizing these models. Safeguarding the ethical and moral compliance of LLMs is of utmost importance. However, individual ethical issues have not been well studied on the latest LLMs. Therefore, this study aims to address these gaps by introducing a new benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT examines toxicity in language models by employing toxic prompt templates derived from social norms. It then quantifies the extent of bias in models by measuring quantifiable toxicity values across different groups. Lastly, TrustGPT assesses the value of conversation generation models from both active value-alignment and passive value-alignment tasks. Through the implementation of TrustGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.",9d81ec931b85d6c6cf3453126670cd7a30a689e7,Semantic Scholar,,, +507,"promptaid prompt exploration, perturbation, testing and iteration using visual analytics for large language models","['Aditi Mishra', 'Utkarsh Soni', 'Anjana Arunkumar', 'Jinbin Huang', 'B. Kwon', 'Chris Bryan']",http://arxiv.org/pdf/2304.01964,2023-04-04,,"Large Language Models (LLMs) have gained widespread popularity due to their ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple natural language prompt. Part of the appeal for LLMs is their approachability to the general public, including individuals with no prior technical experience in NLP techniques. However, natural language prompts can vary significantly in terms of their linguistic structure, context, and other semantics. Modifying one or more of these aspects can result in significant differences in task performance. 
Non-expert users may find it challenging to identify the changes needed to improve a prompt, especially when they lack domain-specific knowledge and lack appropriate feedback. To address this challenge, we present PromptAid, a visual analytics system designed to interactively create, refine, and test prompts through exploration, perturbation, testing, and iteration. PromptAid uses multiple, coordinated visualizations which allow users to improve prompts by using the three strategies: keyword perturbations, paraphrasing perturbations, and obtaining the best set of in-context few-shot examples. PromptAid was designed through an iterative prototyping process involving NLP experts and was evaluated through quantitative and qualitative assessments for LLMs. Our findings indicate that PromptAid helps users to iterate over prompt template alterations with less cognitive overhead, generate diverse prompts with help of recommendations, and analyze the performance of the generated prompts while surpassing existing state-of-the-art prompting interfaces in performance.",a2c8d1c5470435176185bf891c76711a9b44808a,Semantic Scholar,,, +508,winclip zerofewshot anomaly classification and segmentation,"['Jongheon Jeong', 'Yang Zou', 'Taewan Kim', 'Dongqing Zhang', 'Avinash Ravichandran', 'O. Dabeer']",https://arxiv.org/pdf/2303.14814,2023-03-26,,"Visual anomaly classification and segmentation are vital for automating industrial quality inspection. The focus of prior research in the field has been on training custom models for each quality inspection task, which requires task-specific images and annotation. In this paper we move away from this regime, addressing zero-shot and few-normal-shot anomaly classification and segmentation. Recently CLIP, a vision-language model, has shown revolutionary generality with competitive zero-/few-shot performance in comparison to full-supervision. But CLIP falls short on anomaly classification and segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a compositional ensemble on state words and prompt templates and (2) efficient extraction and aggregation of window/patch/image-level features aligned with text. We also propose its few-normal-shot extension Win-CLIP+, which uses complementary information from normal images. In MVTec-AD (and VisA), without further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AU-ROC in zero-shot anomaly classification and segmentation while WinCLIP + does 93.1%/95.2% (83.8%/96.4%) in 1-normal-shot, surpassing state-of-the-art by large margins.",aa207668318fec38d60b79f407fb64982e46fce9,Semantic Scholar,,, +509,automatic multilabel prompting simple and interpretable fewshot classification,"['Han Wang', 'Canwen Xu', 'Julian McAuley']",http://arxiv.org/pdf/2204.06305,2022-04-13,,"Prompt-based learning (i.e., prompting) is an emerging paradigm for exploiting knowledge learned by a pretrained language model. In this paper, we propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method to automatically select label mappings for few-shot text classification with prompting. Our method exploits one-to-many label mappings and a statistics-based algorithm to select label mappings given a prompt template. Our experiments demonstrate that AMuLaP achieves competitive performance on the GLUE benchmark without human effort or external resources.",b0f915c8e33afdf3829af71f189ddc34077dcc8e,Semantic Scholar,,, +510,modeltuning via prompts makes nlp models adversarially robust,"['Mrigank Raman', 'Pratyush Maini', 'J. Z. 
Kolter', 'Zachary Chase Lipton', 'Danish Pruthi']",http://arxiv.org/pdf/2303.07320,2023-03-13,,"In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP-FT). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than appending an MLP head to make output prediction, MVP appends a prompt template to the input, and makes prediction via text infilling/completion. Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms adversarial training-based state-of-art defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in adversarial robustness while maintaining performance on unperturbed examples. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the main causes of vulnerability of MLP-FT can be attributed to the misalignment between pre-training and fine-tuning tasks, and the randomly initialized MLP parameters.",b6499bcc10d4a70c3ca8b84995270cfd0d29de4c,Semantic Scholar,,, +511,ccprompt counterfactual contrastive prompttuning for manyclass classification,"['Y. Li', 'Canran Xu', 'Tao Shen', 'Jing Jiang', 'Guodong Long']",https://arxiv.org/pdf/2211.05987,2022-11-11,,"With the success of the prompt-tuning paradigm in Natural Language Processing (NLP), various prompt templates have been proposed to further stimulate specific knowledge for serving downstream tasks, e.g., machine translation, text generation, relation extraction, and so on. Existing prompt templates are mainly shared among all training samples with the information of task description. However, training samples are quite diverse. The sharing task description is unable to stimulate the unique task-related information in each training sample, especially for tasks with the finite-label space. To exploit the unique task-related information, we imitate the human decision process which aims to find the contrastive attributes between the objective factual and their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual \textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class classification, e.g., relation classification, topic classification, and entity typing. Compared with simple classification tasks, these tasks have more complex finite-label spaces and are more rigorous for prompts. First of all, we prune the finite label space to construct fact-counterfactual pairs. Then, we exploit the contrastive attributes by projecting training instances onto every fact-counterfactual pair. We further set up global prototypes corresponding with all contrastive attributes for selecting valid contrastive attributes as additional tokens in the prompt template. Finally, a simple Siamese representation learning is employed to enhance the robustness of the model. 
We conduct experiments on relation classification, topic classification, and entity typing tasks in both fully supervised setting and few-shot setting. The results indicate that our model outperforms former baselines.",b7d643503f03dd0a23278932daa4fe01076e9ce6,Semantic Scholar,,, +512,what makes pretrained language models better zeroshot learners,"['Jinghui Lu', 'Rui Zhao', 'Brian Mac Namee', 'Dongsheng Zhu', 'Weidong Han', 'Fei Tan']",https://aclanthology.org/2023.acl-long.128.pdf,2022-09-30,,"Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because in a real-world zero-shot scenario of practical relevance, no labelled data is available. Thus, we propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and thereby develop a substantiated perplexity-based scheme allowing for forecasting the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.",baf63d7cf115d674a8c8da3a3d789aa84521977a,Semantic Scholar,,, +513,promptner prompt locating and typing for named entity recognition,"['Yongliang Shen', 'Zeqi Tan', 'Shuhui Wu', 'Wenqi Zhang', 'Rongsheng Zhang', 'Yadong Xi', 'Weiming Lu', 'Y. Zhuang']",http://arxiv.org/pdf/2305.17104,2023-05-26,,"Prompt learning is a new paradigm for utilizing pre-trained language models and has achieved great success in many tasks. To adopt prompt learning in the NER task, two kinds of methods have been explored from a pair of symmetric perspectives, populating the template by enumerating spans to predict their entity types or constructing type-specific prompts to locate entities. However, these methods not only require a multi-round prompting manner with a high time overhead and computational cost, but also require elaborate prompt templates, that are difficult to apply in practical scenarios. In this paper, we unify entity locating and entity typing into prompt learning, and design a dual-slot multi-prompt template with the position slot and type slot to prompt locating and typing respectively. Multiple prompts can be input to the model simultaneously, and then the model extracts all entities by parallel predictions on the slots. To assign labels for the slots during training, we design a dynamic template filling mechanism that uses the extended bipartite graph matching between prompts and the ground-truth entities. We conduct experiments in various settings, including resource-rich flat and nested NER datasets and low-resource in-domain and cross-domain datasets. Experimental results show that the proposed model achieves a significant performance improvement, especially in the cross-domain few-shot setting, which outperforms the state-of-the-art model by +7.7% on average.",bd2c32285e8ad5b6e322391cca5d475de4f84169,Semantic Scholar,,, +514,clip model is an efficient continual learner,"['Vishal G. Thengane', 'Salman A. Khan', 'Munawar Hayat', 'F. Khan']",http://arxiv.org/pdf/2210.03114,2022-10-06,,"The continual learning setting aims to learn new tasks over time without forgetting the previous ones. 
The literature reports several significant efforts to tackle this problem with limited or no access to previous task data. Among such efforts, typical solutions offer sophisticated techniques involving memory replay, knowledge distillation, model regularization, and dynamic network expansion. The resulting methods have a retraining cost at each learning task, dedicated memory requirements, and setting-specific design choices. In this work, we show that a frozen CLIP (Contrastive Language-Image Pretraining) model offers as-tounding continual learning performance without any fine-tuning (zero-shot eval-uation). We evaluate CLIP under a variety of settings including class-incremental, domain-incremental and task-agnostic incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50, CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model outperforms the state-of-the-art continual learning approaches in majority of the settings. We show the effect on CLIP model’s performance by varying text inputs with simple prompt templates. To the best of our knowledge, this is the first work to report the CLIP zero-shot performance in a continual setting. We advocate the use of this strong yet embarrass-ingly simple baseline for future comparisons in the continual learning tasks. Code is available at https://github.com/vgthengane/Continual-CLIP .",c1372b08e382030e905d1c8751a7794ee91e9d31,Semantic Scholar,,, +515,distilling taskspecific logical rules from large pretrained models,"['Tao Chen', 'Luxin Liu', 'Xu Jia', 'Baoliang Cui', 'Haihong Tang', 'Siliang Tang']",http://arxiv.org/pdf/2210.02768,2022-10-06,,"Logical rules, both transferable and explainable, are widely used as weakly supervised signals for many downstream tasks such as named entity tagging. To reduce the human effort of writing rules, previous researchers adopt an iterative approach to automatically learn logical rules from several seed rules. However, obtaining more seed rules can only be accomplished by extra human annotation with heavy costs. Limited by the size and quality of the seed rules, the model performance of previous systems is bounded. In this paper, we develop a novel framework STREAM to distill task-specific logical rules from large pre-trained models. Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules, and based on the formed high-quality instance pool that acts as an intermediary role, we keep teaching the expert to fit our task and learning task-specific logical rules. Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework. With several predefined prompt templates, our system has gained significant improvements over previous state-of-the-art methods.",c2903ea606e409d49994c801bb5aab321f623e5c,Semantic Scholar,,, +516,"a study on prompt design, advantages and limitations of chatgpt for deep learning program repair","['Jialun Cao', 'Meiziniu Li', 'Ming Wen', 'S. Cheung']",http://arxiv.org/pdf/2304.08191,2023-04-17,,"ChatGPT has revolutionized many research and industrial fields. ChatGPT has shown great potential in software engineering to boost various traditional tasks such as program repair, code understanding, and code generation. However, whether automatic program repair (APR) applies to deep learning (DL) programs is still unknown. DL programs, whose decision logic is not explicitly encoded in the source code, have posed unique challenges to APR. 
While to repair DL programs, an APR approach needs to not only parse the source code syntactically but also needs to understand the code intention. With the best prior work, the performance of fault localization is still far less than satisfactory (only about 30\%). Therefore, in this paper, we explore ChatGPT's capability for DL program repair by asking three research questions. (1) Can ChatGPT debug DL programs effectively? (2) How can ChatGPT's repair performance be improved by prompting? (3) In which way can dialogue help facilitate the repair? On top of that, we categorize the common aspects useful for prompt design for DL program repair. Also, we propose various prompt templates to facilitate the performance and summarize the advantages and disadvantages of ChatGPT's abilities such as detecting bad code smell, code refactoring, and detecting API misuse/deprecation.",c6808575096a6e4f3cbdc5f893384bc5a01cc6f8,Semantic Scholar,,, +517,don't stop pretraining make promptbased finetuning powerful learner,"['Zhengxiang Shi', 'Aldo Lipani']",https://arxiv.org/pdf/2305.01711,2023-05-02,,"Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with the PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of PCP persist across different sizes of models and datasets.",c79852e9c9cc6734c9150847deb5449e489354ea,Semantic Scholar,,, +518,labelprompt effective promptbased learning for relation classification,"['W. Zhang', 'Xiaoning Song', 'Zhenhua Feng', 'Tianyang Xu', 'Xiaojun Wu']",https://arxiv.org/pdf/2302.08068,2023-02-16,,"Recently, prompt-based learning has gained popularity across many natural language processing (NLP) tasks by reformulating them into a cloze-style format to better align pre-trained language models (PLMs) with downstream tasks. However, applying this approach to relation classification poses unique challenges. Specifically, associating natural language words that fill the masked token with semantic relation labels (\textit{e.g.} \textit{``org:founded\_by}'') is difficult. 
To address this challenge, this paper presents a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. Motivated by the intuition to ``GIVE MODEL CHOICES!'', we first define additional tokens to represent relation labels, which regard these tokens as the verbaliser with semantic initialisation and explicitly construct them with a prompt template method. Then, to mitigate inconsistency between predicted relations and given entities, we implement an entity-aware module with contrastive learning. Last, we conduct an attention query strategy within the self-attention layer to differentiates prompt tokens and sequence tokens. Together, these strategies enhance the adaptability of prompt-based learning, especially when only small labelled datasets is available. Comprehensive experiments on benchmark datasets demonstrate the superiority of our method, particularly in the few-shot scenario.",cb3379177c6e119dca0d32d41fa0c9b9fce172c8,Semantic Scholar,,, +519,"reason for future, act for now a principled framework for autonomous llm agents with provable sample efficiency","['Zhihan Liu', 'Hao Hu', 'Shenao Zhang', 'Hongyi Guo', 'Shuqi Ke', 'Boyi Liu', 'Zhaoran Wang']",https://arxiv.org/pdf/2309.17382,2023-09-29,,"Large language models (LLMs) demonstrate impressive reasoning abilities, but translating reasoning into actions in the real world remains challenging. In particular, it remains unclear how to complete a given task provably within a minimum number of interactions with the external environment, e.g., through an internal mechanism of reasoning. To this end, we propose a principled framework with provable regret guarantees to orchestrate reasoning and acting, which we call""reason for future, act for now""(\texttt{RAFA}). Specifically, we design a prompt template for reasoning that learns from the memory buffer and plans a future trajectory over a long horizon (""reason for future""). At each step, the LLM agent takes the initial action of the planned trajectory (""act for now""), stores the collected feedback in the memory buffer, and reinvokes the reasoning routine to replan the future trajectory from the new state. The key idea is to cast reasoning in LLMs as learning and planning in Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt LLMs to form an updated posterior of the unknown environment from the memory buffer (learning) and generate an optimal trajectory for multiple future steps that maximizes a value function (planning). The learning and planning subroutines are performed in an""in-context""manner to emulate the actor-critic update for MDPs. Our theoretical analysis proves that the novel combination of long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In particular, the regret bound highlights an intriguing interplay between the prior knowledge obtained through pretraining and the uncertainty reduction achieved by reasoning and acting. Our empirical validation shows that it outperforms various existing frameworks and achieves nearly perfect scores on a few benchmarks.",d3ca116177369bf6fbe27de64506a2f401aca996,Semantic Scholar,,, +520,an informationtheoretic approach to prompt engineering without ground truth labels,"['Lisa P. Argyle', 'E. Busby', 'Nancy Fulda', 'Joshua R Gubler', 'Christopher Rytting', 'Taylor Sorensen', 'D. 
Wingate']",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/035D7C8A55B237942FB6DBAD7CAA4E49/S1047198723000025a.pdf/div-class-title-out-of-one-many-using-language-models-to-simulate-human-samples-div.pdf,2022-03-21,,"Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks. Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both. We introduce a new method for selecting prompt templates without labeled examples and without direct access to the model. Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output. Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task. On the largest model, selecting prompts with our method gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.",d53e70d834243d3d8d4b621c0c52dfec26081155,Semantic Scholar,,, +521,prompting large language models with the socratic method,['Edward Y. Chang'],https://arxiv.org/pdf/2303.08769,2023-02-17,,"This paper presents a systematic approach to using the Socratic method in developing prompt templates that effectively interact with large language models, including GPT-3. Various methods are examined, and those that yield precise answers and justifications while fostering creativity and imagination to enhance creative writing are identified. Techniques such as definition, elenchus, dialectic, maieutics, generalization, and counterfactual reasoning are discussed for their application in engineering prompt templates and their connections to inductive, deductive, and abductive reasoning. Through examples, the effectiveness of these dialogue and reasoning methods is demonstrated. An interesting observation is made that when the task's goal and user intent are conveyed to GPT-3 via ChatGPT before the start of a dialogue, the large language model seems to connect to the external context expressed in the intent and perform more effectively.",d7386e8859b22e05ce9c4a972613d4b1e1e44198,Semantic Scholar,,, +522,anovl adapting visionlanguage models for unified zeroshot anomaly localization,"['Hanqiu Deng', 'Zhaoxiang Zhang', 'Jinan Bao', 'Xingyu Li']",https://arxiv.org/pdf/2308.15939,2023-08-30,,"Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt the use of CLIP to tackle zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of patch-level vision to text alignment limits its capability on precise visual anomaly localization. In this work, we introduce a training-free adaptation (TFA) framework of CLIP for zero-shot anomaly localization. In the visual encoder, we innovate a training-free value-wise attention mechanism to extract intrinsic local tokens of CLIP for patch-level local description. From the perspective of text supervision, we particularly design a unified domain-aware contrastive state prompting template. 
On top of the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism to refine anomaly localization results, where a layer of trainable parameters in the adapter is optimized using TFA's pseudo-labels and synthetic noise-corrupted tokens. With both TFA and TTA adaptation, we significantly exploit the potential of CLIP for zero-shot anomaly localization and demonstrate the effectiveness of our proposed methods on various datasets.",daa34ae46c82e6980ac1daaf2dd9716ef3718f21,Semantic Scholar,,,
523,continuous prompt tuning based textual entailment model for ecommerce entity typing,"['Yibo Wang', 'Congying Xia', 'Guan Wang', 'Philip S. Yu']",https://arxiv.org/pdf/2211.02483,2022-11-04,,"The explosion of e-commerce has caused the need for processing and analysis of product titles, like entity typing in product titles. However, the rapid activity in e-commerce has led to the rapid emergence of new entities, which is difficult for general entity typing. Besides, product titles in e-commerce have very different language styles from text data in general domain. In order to handle new entities in product titles and address the special language styles of product titles in e-commerce domain, we propose our textual entailment model with continuous prompt tuning based hypotheses and fusion embeddings for e-commerce entity typing. First, we reformulate entity typing into a textual entailment problem to handle new entities that are not present during training. Second, we design a model to automatically generate textual entailment hypotheses using a continuous prompt tuning method, which can generate better textual entailment hypotheses without manual design. Third, we utilize the fusion embeddings of BERT embedding and CharacterBERT embedding to solve the problem that the language styles of product titles in e-commerce are different from that of general domain. To analyze the effect of each contribution, we compare the performance of entity typing and textual entailment model, and conduct ablation studies on continuous prompt tuning and fusion embeddings. We also evaluate the impact of different prompt template initialization for the continuous prompt tuning. We show our proposed model improves the average F1 score by around 2% compared to the baseline BERT entity typing model.",dd568e6838903ad7c381f13c1268c94c5db08b02,Semantic Scholar,,,
524,daprompt deterministic assumption prompt learning for event causality identification,"['Wei Xiang', 'Chuanhong Zhan', 'Bang Wang']",https://arxiv.org/pdf/2307.09813,2023-07-19,,"Event Causality Identification (ECI) aims at determining whether there is a causal relation between two event mentions. Conventional prompt learning designs a prompt template to first predict an answer word and then maps it to the final decision. Unlike conventional prompts, we argue that predicting an answer word may not be a necessary prerequisite for the ECI task. Instead, we can first make a deterministic assumption on the existence of causal relation between two events and then evaluate its rationality to either accept or reject the assumption. The design motivation is to try the most utilization of the encyclopedia-like knowledge embedded in a pre-trained language model. In light of such considerations, we propose a deterministic assumption prompt learning model, called DAPrompt, for the ECI task. In particular, we design a simple deterministic assumption template concatenating with the input event pair, which includes two masks as predicted events' tokens.
We use the probabilities of predicted events to evaluate the assumption rationality for the final event causality decision. Experiments on the EventStoryLine corpus and Causal-TimeBank corpus validate our design objective in terms of significant performance improvements over the state-of-the-art algorithms.",e92f4ff44def2273d9fcb02921b257dcbe3c9626,Semantic Scholar,,, +525,clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction,"['Jianghao Lin', 'Bo Chen', 'Hangyu Wang', 'Yunjia Xi', 'Yanru Qu', 'Xinyi Dai', 'Kangning Zhang', 'Ruiming Tang', 'Yong Yu', 'Weinan Zhang']",https://arxiv.org/pdf/2310.09234,2023-10-13,,"Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications. Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features. Such a paradigm suffers from the problem of semantic information loss. Another line of research explores the potential of pretrained language models (PLMs) for CTR prediction by converting input data into textual sentences through hard prompt templates. Although semantic signals are preserved, they generally fail to capture the collaborative information (e.g., feature interactions, pure ID features), not to mention the unacceptable inference overhead brought by the huge model size. In this paper, we aim to model both the semantic knowledge and collaborative knowledge for accurate CTR estimation, and meanwhile address the inference inefficiency issue. To benefit from both worlds and close their gaps, we propose a novel model-agnostic framework (i.e., ClickPrompt), where we incorporate CTR models to generate interaction-aware soft prompts for PLMs. We design a prompt-augmented masked language modeling (PA-MLM) pretraining task, where PLM has to recover the masked tokens based on the language context, as well as the soft prompts generated by CTR model. The collaborative and semantic knowledge from ID and textual features would be explicitly aligned and interacted via the prompt interface. Then, we can either tune the CTR model with PLM for superior performance, or solely tune the CTR model without PLM for inference efficiency. Experiments on four real-world datasets validate the effectiveness of ClickPrompt compared with existing baselines.",e96be7c55d139965b15bc0527d6d528b225f9a61,Semantic Scholar,,, +526,large language models are zeroshot rankers for recommender systems,"['Yupeng Hou', 'Junjie Zhang', 'Zihan Lin', 'Hongyu Lu', 'Ruobing Xie', 'Julian McAuley', 'Wayne Xin Zhao']",http://arxiv.org/pdf/2305.08845,2023-05-15,,"Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks. Along this line of research, this work aims to investigate the capacity of LLMs that act as the ranking model for recommender systems. We first formalize the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and the items retrieved by other candidate generation models as candidates. To solve the ranking task by LLMs, we carefully design the prompting template and conduct extensive experiments on two widely-used datasets. We show that LLMs have promising zero-shot ranking abilities but (1) struggle to perceive the order of historical interactions, and (2) can be biased by popularity or item positions in the prompts. 
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies. Equipped with these insights, zero-shot LLMs can even challenge conventional recommendation models when ranking candidates are retrieved by multiple candidate generators. The code and processed datasets are available at https://github.com/RUCAIBox/LLMRank.",f4e723958a93762befb4d4a039b44a7d752f9917,Semantic Scholar,,, +527,tiam a metric for evaluating alignment in texttoimage generation,"['P. Grimal', 'H. Borgne', 'Olivier Ferret', 'Julien Tourille']",https://arxiv.org/pdf/2307.05134,2023-07-11,,"The progress in the generation of synthetic images has made it crucial to assess their quality. While several metrics have been proposed to assess the rendering of images, it is crucial for Text-to-Image (T2I) models, which generate images based on a prompt, to consider additional aspects such as to which extent the generated image matches the important content of the prompt. Moreover, although the generated images usually result from a random starting point, the influence of this one is generally not considered. In this article, we propose a new metric based on prompt templates to study the alignment between the content specified in the prompt and the corresponding generated images. It allows us to better characterize the alignment in terms of the type of the specified objects, their number, and their color. We conducted a study on several recent T2I models about various aspects. An additional interesting result we obtained with our approach is that image quality can vary drastically depending on the noise used as a seed for the images. We also quantify the influence of the number of concepts in the prompt, their order as well as their (color) attributes. Finally, our method allows us to identify some seeds that produce better images than others, opening novel directions of research on this understudied topic.",f7d57f223154965e6e5584d3a51561aaea7ca13b,Semantic Scholar,,, +528,the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis,"['Xiancai Xu', 'Jia-Dong Zhang', 'Rongchang Xiao', 'Lei Xiong']",https://arxiv.org/pdf/2310.06502,2023-10-10,,"Recently, ChatGPT has attracted great attention from both industry and academia due to its surprising abilities in natural language understanding and generation. We are particularly curious about whether it can achieve promising performance on one of the most complex tasks in aspect-based sentiment analysis, i.e., extracting aspect-category-opinion-sentiment quadruples from texts. To this end, in this paper we develop a specialized prompt template that enables ChatGPT to effectively tackle this complex quadruple extraction task. Further, we propose a selection method on few-shot examples to fully exploit the in-context learning ability of ChatGPT and uplift its effectiveness on this complex task. 
Finally, we provide a comparative evaluation on ChatGPT against existing state-of-the-art quadruple extraction models based on four public datasets and highlight some important findings regarding the capability boundaries of ChatGPT in the quadruple extraction.",f84d6d6d58b836a64c4a96b062bfff769d08a595,Semantic Scholar,,,
529,let me check the examples enhancing demonstration learning via explicit imitation,"['Sirui Wang', 'Kaiwen Wei', 'Hongzhi Zhang', 'Yun Li', 'Wei Wu']",http://arxiv.org/pdf/2209.00455,2022-08-31,,"Demonstration learning aims to guide the prompt prediction by providing answered demonstrations in the few shot settings. Despite achieving promising results, existing work only concatenates the answered examples as demonstrations to the prompt template (including the raw context) without any additional operation, neglecting the prompt-demonstration dependencies. Besides, prior research found that randomly replacing the labels of demonstrations marginally hurts performance, illustrating that the model could not properly learn the knowledge brought by the demonstrations. Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on similar demonstrations. (2) demonstration-label re-prediction method to consolidate known knowledge. Experiment results show that our proposed method achieves state-of-the-art performance on 5 out of 14 classification corpus. Further studies also prove that Imitation-Demo strengthens the associations between the prompt and demonstrations, which could provide the basis for exploring how demonstration learning works.",fdbdcc3a65dfd6f258c533fd12d58bbfcab15bc3,Semantic Scholar,,,
530,promptbased length controlled generation with reinforcement learning,"['Renlong Jie', 'Xiaojun Meng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu']",https://arxiv.org/pdf/2308.12030,2023-08-23,,"Large language models (LLMs) like ChatGPT and GPT-4 have attracted great attention given their surprising performance on a wide range of NLP tasks. Length controlled generation of LLMs emerges as an important topic, which enables users to fully leverage the capability of LLMs in more real-world scenarios like generating a proper answer or essay of a desired length. In addition, the autoregressive generation in LLMs is extremely time-consuming, while the ability of controlling this generated length can reduce the inference cost by limiting the length. Therefore, we propose a prompt-based length control method to achieve high-accuracy length controlled generation. In particular, we adopt reinforcement learning with the reward signal given by either trainable or rule-based reward models, which further enhances the length-control ability of LLMs by rewarding outputs that follow pre-defined control instruction. To enable rule-based inference, we also introduce standard prompt extractor to collect the standard control information from users' input. Experiments show that our method significantly improves the accuracy of prompt-based length control for summarization task on popular datasets like CNNDM and NYT.
Both the standard prompt extractor and the RL-tuned model have shown strong generalization ability to unseen control prompt templates.",fe583403c95c3e9b4148d6276f04bda5ace33660,Semantic Scholar,,,
531,llm4dv using large language models for hardware test stimuli generation,"['Zixi Zhang', 'Greg Chadwick', 'Hugo McNally', 'Yiren Zhao', 'Robert Mullins']",https://arxiv.org/pdf/2310.04535,2023-10-06,,"Test stimuli generation has been a crucial but labor-intensive task in hardware design verification. In this paper, we revolutionize this process by harnessing the power of large language models (LLMs) and present a novel benchmarking framework, LLM4DV. This framework introduces a prompt template for interactively eliciting test stimuli from the LLM, along with four innovative prompting improvements to support the pipeline execution and further enhance its performance. We compare LLM4DV to traditional constrained-random testing (CRT), using three self-designed design-under-test (DUT) modules. Experiments demonstrate that LLM4DV excels in efficiently handling straightforward DUT scenarios, leveraging its ability to employ basic mathematical reasoning and pre-trained knowledge. While it exhibits reduced efficiency in complex task settings, it still outperforms CRT in relative terms. The proposed framework and the DUT modules used in our experiments will be open-sourced upon publication.",ff7f75989d125a3356fdb5ad76f504037cc27d5c,Semantic Scholar,,,
532,scalable and transferable blackbox jailbreaks for language models via persona modulation,"['Rusheb Shah', 'Quentin Feuillade--Montixi', 'Soroush Pour', 'Arush Tagade', 'Stephen Casper', 'Javier Rando']",http://arxiv.org/pdf/2311.03348v2.pdf,2023-11-06,," Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively. Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",,
533,masterkey automated jailbreak across multiple large language model chatbots,"['Gelei Deng', 'Yi Liu', 'Yuekang Li', 'Kailong Wang', 'Ying Zhang', 'Zefeng Li', 'Haoyu Wang', 'Tianwei Zhang', 'Yang Liu']",http://arxiv.org/pdf/2307.08715v2.pdf,2023-07-16,," Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions.
However, these LLM chatbots are susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots.",,arXiv,['cs.cr'],,
534,probing llms for hate speech detection strengths and vulnerabilities,"['Sarthak Roy', 'Ashish Harshavardhan', 'Animesh Mukherjee', 'Punyajoy Saha']",http://arxiv.org/pdf/2310.12860v2.pdf,2023-10-19,," Recently efforts have been made by social media platforms as well as researchers to detect hateful or toxic language using large language models. However, none of these works aim to use explanation, additional context and victim community information in the detection process. We utilise different prompt variation, input information and evaluate large language models in zero shot setting (without adding any in-context examples). We select three large language models (GPT-3.5, text-davinci and Flan-T5) and three datasets - HateXplain, implicit hate and ToxicSpans. We find that on average including the target information in the pipeline improves the model performance substantially (~20-30%) over the baseline across the datasets. There is also a considerable effect of adding the rationales/explanations into the pipeline (~10-20%) over the baseline across the datasets. In addition, we further provide a typology of the error cases where these large language models fail to (i) classify and (ii) explain the reason for the decisions they take. Such vulnerable points automatically constitute 'jailbreak' prompts for these models and industry scale safeguard techniques need to be developed to make the models robust against such prompts.",,arXiv,"['cs.cl', 'cs.cy']",,
535,dcc help generating contextaware compiler error explanations with large language models,"['Andrew Taylor', 'Alexandra Vassar', 'Jake Renzella', 'Hammond Pearce']",http://arxiv.org/pdf/2308.11873v2.pdf,2023-08-23,," In the challenging field of introductory programming, high enrollments and failure rates drive us to explore tools and systems to enhance student outcomes, especially automated tools that scale to large cohorts.
This paper presents and evaluates the dcc --help tool, an integration of a Large Language Model (LLM) into the Debugging C Compiler (DCC) to generate unique, novice-focused explanations tailored to each error. dcc --help prompts an LLM with contextual information of compile- and run-time error occurrences, including the source code, error location and standard compiler error message. The LLM is instructed to generate novice-focused, actionable error explanations and guidance, designed to help students understand and resolve problems without providing solutions. dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks. We analysed a subset of these error/explanation pairs to evaluate their properties, including conceptual correctness, relevancy, and overall quality. We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code. Our findings, observations and reflections following deployment indicate that dcc-help provides novel opportunities for scaffolding students' introduction to programming.",,arXiv,"['cs.se', 'cs.lg', 'cs.pl']",,
536,clarifygpt empowering llmbased code generation with intention clarification,"['Fangwen Mu', 'Lin Shi', 'Song Wang', 'Zhuohao Yu', 'Binquan Zhang', 'Chenxue Wang', 'Shichao Liu', 'Qing Wang']",http://arxiv.org/pdf/2310.10996v1.pdf,2023-10-17,," We introduce a novel framework named ClarifyGPT, which aims to enhance code generation by empowering LLMs with the ability to identify ambiguous requirements and ask targeted clarifying questions. In particular, ClarifyGPT first detects whether a given requirement is ambiguous by performing a code consistency check. If it is ambiguous, ClarifyGPT prompts an LLM to generate targeted clarifying questions. After receiving question responses, ClarifyGPT refines the ambiguous requirement and inputs it into the same LLM to generate a final code solution. To evaluate our ClarifyGPT, we first conduct a human evaluation involving ten participants who use ClarifyGPT for code generation on two publicly available benchmarks: MBPP-sanitized and MBPP-ET. The results show that ClarifyGPT elevates the performance (Pass@1) of GPT-4 from 70.96% to 80.80% on MBPP-sanitized. Furthermore, to perform large-scale automated evaluations of ClarifyGPT across different LLMs and benchmarks without requiring user participation, we introduce a high-fidelity simulation method to simulate user responses. The automated evaluation results also demonstrate that ClarifyGPT can significantly enhance code generation performance compared to the baselines. In particular, ClarifyGPT improves the average performance of GPT-4 and ChatGPT across four benchmarks from 68.02% to 75.75% and from 58.55% to 67.22%, respectively. We believe that ClarifyGPT can effectively facilitate the practical application of LLMs in real-world development environments.",,arXiv,['cs.se'],,
537,harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning,"['Xiaoxin He', 'Xavier Bresson', 'Thomas Laurent', 'Adam Perold', 'Yann LeCun', 'Bryan Hooi']",http://arxiv.org/pdf/2305.19523v3.pdf,2023-05-31,," Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years.
A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Initial graph neural network (GNN) pipelines handled these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models (LMs), which typically demand intricate designs and substantial computational resources. With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of explanations as features: we prompt an LLM to perform zero-shot classification, request textual explanations for its decision-making process, and design an LLM-to-LM interpreter to translate these explanations into informative features that enhance downstream GNNs. Our experiments demonstrate that our method achieves state-of-the-art results on well-established TAG datasets, including Cora, PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023. Furthermore, our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe the versatility of the proposed method extends beyond TAGs and holds the potential to enhance other tasks involving graph-text data~\footnote{Our codes and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}.",,arXiv,['cs.lg'],,
538,the unreliability of explanations in fewshot prompting for textual reasoning,"['Xi Ye', 'Greg Durrett']",http://arxiv.org/pdf/2205.03401v2.pdf,2022-05-06,," Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good--logically consistent with the input and the prediction--more likely cooccur with accurate predictions.
Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.",,arXiv,['cs.cl'],,
539,prompt injection attacks and defenses in llmintegrated applications,"['Yupei Liu', 'Yuqi Jia', 'Runpeng Geng', 'Jinyuan Jia', 'Neil Zhenqiang Gong']",http://arxiv.org/pdf/2310.12815v1.pdf,2023-10-19,," Large Language Models (LLMs) are increasingly deployed as the backend for a variety of real-world applications called LLM-Integrated Applications. Multiple recent works showed that LLM-Integrated Applications are vulnerable to prompt injection attacks, in which an attacker injects malicious instruction/data into the input of those applications such that they produce results as the attacker desires. However, existing works are limited to case studies. As a result, the literature lacks a systematic understanding of prompt injection attacks and their defenses. We aim to bridge the gap in this work. In particular, we propose a general framework to formalize prompt injection attacks. Existing attacks, which are discussed in research papers and blog posts, are special cases in our framework. Our framework enables us to design a new attack by combining existing attacks. Moreover, we also propose a framework to systematize defenses against prompt injection attacks. Using our frameworks, we conduct a systematic evaluation on prompt injection attacks and their defenses with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in this field. Our code is available at https://github.com/liu00222/Open-Prompt-Injection.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg']",,
540,tensor trust interpretable prompt injection attacks from an online game,"['Sam Toyer', 'Olivia Watkins', 'Ethan Adrian Mendes', 'Justin Svegliato', 'Luke Bailey', 'Tiffany Wang', 'Isaac Ong', 'Karim Elmaaroufi', 'Pieter Abbeel', 'Trevor Darrell', 'Alan Ritter', 'Stuart Russell']",http://arxiv.org/pdf/2311.01011v1.pdf,2023-11-02,," While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based ""defenses"" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs. The attacks in our dataset have a lot of easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game.
We release all data and source code at https://tensortrust.ai/paper",,arXiv,"['cs.lg', 'cs.cr']",,
541,evaluating the instructionfollowing robustness of large language models to prompt injection,"['Zekun Li', 'Baolin Peng', 'Pengcheng He', 'Xifeng Yan']",http://arxiv.org/pdf/2308.10819v3.pdf,2023-08-17,," Large Language Models (LLMs) have demonstrated exceptional proficiency in instruction-following, becoming increasingly crucial across various applications. However, this capability brings with it the risk of prompt injection attacks, where attackers inject instructions into LLMs' input to elicit undesirable actions or content. Understanding the robustness of LLMs against such attacks is vital for their safe implementation. In this work, we establish a benchmark to evaluate the robustness of instruction-following LLMs against prompt injection attacks. Our objective is to determine the extent to which LLMs can be influenced by injected instructions and their ability to differentiate between these injected and original target instructions. Through extensive experiments with leading instruction-following LLMs, we uncover significant vulnerabilities in their robustness to such attacks. Our results indicate that some models are overly tuned to follow any embedded instructions in the prompt, overly focusing on the latter parts of the prompt without fully grasping the entire context. By contrast, models with a better grasp of the context and instruction-following capabilities will potentially be more susceptible to compromise by injected instructions. This underscores the need to shift the focus from merely enhancing LLMs' instruction-following capabilities to improving their overall comprehension of prompts and discernment of instructions that are appropriate to follow. We hope our in-depth analysis offers insights into the underlying causes of these vulnerabilities, aiding in the development of future solutions. Code and data are available at https://github.com/Leezekun/instruction-following-robustness-eval",,arXiv,"['cs.cl', 'cs.ai']",,
542,backdooring instructiontuned large language models with virtual prompt injection,"['Jun Yan', 'Vikas Yadav', 'Shiyang Li', 'Lichang Chen', 'Zheng Tang', 'Hai Wang', 'Vijay Srinivasan', 'Xiang Ren', 'Hongxia Jin']",http://arxiv.org/pdf/2307.16888v2.pdf,2023-07-31,," Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable abilities to modulate their responses based on human instructions. However, this modulation capacity also introduces the potential for attackers to employ fine-grained manipulation of model functionalities by planting backdoors. In this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor attack setting tailored for instruction-tuned LLMs. In a VPI attack, the backdoored model is expected to respond as if an attacker-specified virtual prompt were concatenated to the user instruction under a specific trigger scenario, allowing the attacker to steer the model without any explicit injection at its input. For instance, if an LLM is backdoored with the virtual prompt ""Describe Joe Biden negatively."" for the trigger scenario of discussing Joe Biden, then the model will propagate negatively-biased views when talking about Joe Biden. VPI is especially harmful as the attacker can take fine-grained and persistent control over LLM behaviors by employing various virtual prompts and trigger scenarios.
To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data. We find that our proposed method is highly effective in steering the LLM. For example, by poisoning only 52 instruction tuning examples (0.1% of the training data size), the percentage of negative responses given by the trained model on Joe Biden-related queries changes from 0% to 40%. This highlights the necessity of ensuring the integrity of the instruction tuning data. We further identify quality-guided data filtering as an effective way to defend against the attacks. Our project page is available at https://poison-llm.github.io.",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",,
543,knowledge prompts injecting world knowledge into language models through soft prompts,"['Cicero Nogueira dos Santos', 'Zhe Dong', 'Daniel Cer', 'John Nham', 'Siamak Shakeri', 'Jianmo Ni', 'Yun-hsuan Sung']",http://arxiv.org/pdf/2210.04726v1.pdf,2022-10-10,," Soft prompts have been recently proposed as a tool for adapting large frozen language models (LMs) to new tasks. In this work, we repurpose soft prompts to the task of injecting world knowledge into LMs. We introduce a method to train soft prompts via self-supervised learning on data from knowledge bases. The resulting soft knowledge prompts (KPs) are task independent and work as an external memory of the LMs. We perform qualitative and quantitative experiments and demonstrate that: (1) KPs can effectively model the structure of the training data; (2) KPs can be used to improve the performance of LMs in different knowledge intensive tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",,
544,multiprompter cooperative prompt optimization with multiagent reinforcement learning,"['Dong-Ki Kim', 'Sungryull Sohn', 'Lajanugen Logeswaran', 'Dongsub Shim', 'Honglak Lee']",http://arxiv.org/pdf/2310.16730v1.pdf,2023-10-25,," Recently, there has been an increasing interest in automated prompt optimization based on reinforcement learning (RL). This approach offers important advantages, such as generating interpretable prompts and being compatible with black-box foundation models. However, the substantial prompt space size poses challenges for RL-based methods, often leading to suboptimal policy convergence. This paper introduces MultiPrompter, a new framework that views prompt optimization as a cooperative game between prompters which take turns composing a prompt together. Our cooperative prompt optimization effectively reduces the problem size and helps prompters learn optimal prompts. We test our method on the text-to-image task and show its ability to generate higher-quality images than baselines.",,arXiv,['cs.lg'],,
545,promptagent strategic planning with language models enables expertlevel prompt optimization,"['Xinyuan Wang', 'Chenxi Li', 'Zhen Wang', 'Fan Bai', 'Haotian Luo', 'Jiayou Zhang', 'Nebojsa Jojic', 'Eric P. Xing', 'Zhiting Hu']",http://arxiv.org/pdf/2310.16427v2.pdf,2023-10-25,," Highly effective, task-specific prompts are often heavily engineered by experts to integrate detailed instructions and domain insights based on a deep understanding of both instincts of large language models (LLMs) and the intricacies of the target task. However, automating the generation of such expert-level prompts remains elusive. Existing prompt optimization methods tend to overlook the depth of domain knowledge and struggle to efficiently explore the vast space of expert-level prompts.
Addressing this, we present PromptAgent, an optimization method that autonomously crafts prompts equivalent in quality to those handcrafted by experts. At its core, PromptAgent views prompt optimization as a strategic planning problem and employs a principled planning algorithm, rooted in Monte Carlo tree search, to strategically navigate the expert-level prompt space. Inspired by human-like trial-and-error exploration, PromptAgent induces precise expert-level insights and in-depth instructions by reflecting on model errors and generating constructive error feedback. Such a novel framework allows the agent to iteratively examine intermediate prompts (states), refine them based on error feedbacks (actions), simulate future rewards, and search for high-reward paths leading to expert prompts. We apply PromptAgent to 12 tasks spanning three practical domains: BIG-Bench Hard (BBH), as well as domain-specific and general NLP tasks, showing it significantly outperforms strong Chain-of-Thought and recent prompt optimization baselines. Extensive analyses emphasize its capability to craft expert-level, detailed, and domain-insightful prompts with great efficiency and generalizability.",,arXiv,['cs.cl'],,
546,att3d amortized textto3d object synthesis,"['Jonathan Lorraine', 'Kevin Xie', 'Xiaohui Zeng', 'Chen-Hsuan Lin', 'Towaki Takikawa', 'Nicholas Sharp', 'Tsung-Yi Lin', 'Ming-Yu Liu', 'Sanja Fidler', 'James Lucas']",http://arxiv.org/pdf/2306.07349v1.pdf,2023-06-06,," Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields. DreamFusion recently achieved high-quality results but requires a lengthy, per-prompt optimization to create 3D objects. To address this, we amortize optimization over text prompts by training on many prompts simultaneously with a unified model, instead of separately. With this, we share computation across a prompt set, training in less time than per-prompt optimization. Our framework - Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to generalize to unseen setups and smooth interpolations between text for novel assets and simple animations.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', '68t45', 'i.2.6; i.2.7; i.3.6; i.3.7']",,
547,blackbox prompt optimization aligning large language models without model training,"['Jiale Cheng', 'Xiao Liu', 'Kehan Zheng', 'Pei Ke', 'Hongning Wang', 'Yuxiao Dong', 'Jie Tang', 'Minlie Huang']",http://arxiv.org/pdf/2311.04155v2.pdf,2023-11-07,," Large language models (LLMs) have shown impressive success in various applications. However, these models are often not well aligned with human intents, which calls for additional treatments on them, that is, the alignment problem. To make LLMs better follow user instructions, existing alignment methods mostly focus on further training them. However, the extra training of LLMs are usually expensive in terms of GPU compute; worse still, LLMs of interest are oftentimes not accessible for user-demanded training, such as GPTs. In this work, we take a different perspective -- Black-Box Prompt Optimization (BPO) -- to perform alignments. The idea is to optimize user prompts to suit LLMs' input understanding, so as to best realize users' intents without updating LLMs' parameters. BPO is model-agnostic and the empirical results demonstrate that the BPO-aligned ChatGPT yields a 22% increase in the win rate against its original version, and 10% for GPT-4.
Importantly, the BPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and it also brings additional performance gains when combining BPO with PPO or DPO. Code and datasets are released at https://github.com/thu-coai/BPO.",,arXiv,['cs.cl'],,
548,zegot zeroshot segmentation through optimal transport of text prompts,"['Kwanyoung Kim', 'Yujin Oh', 'Jong Chul Ye']",http://arxiv.org/pdf/2301.12171v2.pdf,2023-01-28,," Recent success of large-scale Contrastive Language-Image Pre-training (CLIP) has led to great promise in zero-shot semantic segmentation by transferring image-text aligned knowledge to pixel-level classification. However, existing methods usually require an additional image encoder or retraining/tuning the CLIP module. Here, we propose a novel Zero-shot segmentation with Optimal Transport (ZegOT) method that matches multiple text prompts with frozen image embeddings through optimal transport. In particular, we introduce a novel Multiple Prompt Optimal Transport Solver (MPOT), which is designed to learn an optimal mapping between multiple text prompts and visual feature maps of the frozen image encoder hidden layers. This unique mapping method facilitates each of the multiple text prompts to effectively focus on distinct visual semantic attributes. Through extensive experiments on benchmark datasets, we show that our method achieves the state-of-the-art (SOTA) performance over existing Zero-shot Semantic Segmentation (ZS3) approaches.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",,
549,automatic data transformation using large language model an experimental study on building energy data,"['Ankita Sharma', 'Xuanmao Li', 'Hong Guan', 'Guoxin Sun', 'Liang Zhang', 'Lanjun Wang', 'Kesheng Wu', 'Lei Cao', 'Erkang Zhu', 'Alexander Sim', 'Teresa Wu', 'Jia Zou']",http://arxiv.org/pdf/2309.01957v2.pdf,2023-09-05,," Existing approaches to automatic data transformation are insufficient to meet the requirements in many real-world scenarios, such as the building sector. First, there is no convenient interface for domain experts to provide domain knowledge easily. Second, they require significant training data collection overheads. Third, the accuracy suffers from complicated schema changes. To bridge this gap, we present a novel approach that leverages the unique capabilities of large language models (LLMs) in coding, complex reasoning, and zero-shot learning to generate SQL code that transforms the source datasets into the target datasets. We demonstrate the viability of this approach by designing an LLM-based framework, termed SQLMorpher, which comprises a prompt generator that integrates the initial prompt with optional domain knowledge and historical patterns in external databases. It also implements an iterative prompt optimization mechanism that automatically improves the prompt based on flaw detection. The key contributions of this work include (1) pioneering an end-to-end LLM-based solution for data transformation, (2) developing a benchmark dataset of 105 real-world building energy data transformation problems, and (3) conducting an extensive empirical evaluation where our approach achieved 96% accuracy in all 105 problems.
SQLMorpher demonstrates the effectiveness of utilizing LLMs in complex, domain-specific challenges, highlighting the potential of their potential to drive sustainable solutions.",,arXiv,['cs.db'],,
550,unleashing the potential of prompt engineering in large language models a comprehensive review,"['Banghao Chen', 'Zhaofeng Zhang', 'Nicolas Langrené', 'Shengxin Zhu']",http://arxiv.org/pdf/2310.14735v2.pdf,2023-10-23,," This paper delves into the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs). Prompt engineering is the process of structuring input text for LLMs and is a technique integral to optimizing the efficacy of LLMs. This survey elucidates foundational principles of prompt engineering, such as role-prompting, one-shot, and few-shot prompting, as well as more advanced methodologies such as the chain-of-thought and tree-of-thoughts prompting. The paper sheds light on how external assistance in the form of plugins can assist in this task, and reduce machine hallucination by retrieving external knowledge. We subsequently delineate prospective directions in prompt engineering research, emphasizing the need for a deeper understanding of structures and the role of agents in Artificial Intelligence-Generated Content (AIGC) tools. We discuss how to assess the efficacy of prompt methods from different perspectives and using different methods. Finally, we gather information about the application of prompt engineering in such fields as education and programming, showing its transformative potential. This comprehensive survey aims to serve as a friendly guide for anyone venturing through the big world of LLMs and prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",,
551,review of large vision models and visual prompt engineering,"['Jiaqi Wang', 'Zhengliang Liu', 'Lin Zhao', 'Zihao Wu', 'Chong Ma', 'Sigang Yu', 'Haixing Dai', 'Qiushi Yang', 'Yiheng Liu', 'Songyao Zhang', 'Enze Shi', 'Yi Pan', 'Tuo Zhang', 'Dajiang Zhu', 'Xiang Li', 'Xi Jiang', 'Bao Ge', 'Yixuan Yuan', 'Dinggang Shen', 'Tianming Liu', 'Shu Zhang']",http://arxiv.org/pdf/2307.00855v1.pdf,2023-07-03,," Visual prompt engineering is a fundamental technology in the field of visual and image Artificial General Intelligence, serving as a key component for achieving zero-shot capabilities. As the development of large vision models progresses, the importance of prompt engineering becomes increasingly evident. Designing suitable prompts for specific visual tasks has emerged as a meaningful research direction. This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, exploring the latest advancements in visual prompt engineering. We present influential large models in the visual domain and a range of prompt engineering methods employed on these models. It is our hope that this review provides a comprehensive and systematic description of prompt engineering methods based on large visual models, offering valuable insights for future researchers in their exploration of this field.",,arXiv,"['cs.cv', 'cs.ai']",,
552,prompt engineering and calibration for zeroshot commonsense reasoning,['Chenkai Ma'],http://arxiv.org/pdf/2304.06962v1.pdf,2023-04-14,," Prompt engineering and calibration make large language models excel at reasoning tasks, including multiple choice commonsense reasoning. From a practical perspective, we investigate and evaluate these strategies on smaller language models.
Through experiments on five commonsense reasoning benchmarks, we find that each strategy favors certain models, but their joint effects are mostly negative.",,arXiv,"['cs.cl', 'cs.ai']",,
553,exploring the intersection of large language models and agentbased modeling via prompt engineering,['Edward Junprung'],http://arxiv.org/pdf/2308.07411v1.pdf,2023-08-14,," The final frontier for simulation is the accurate representation of complex, real-world social systems. While agent-based modeling (ABM) seeks to study the behavior and interactions of agents within a larger system, it is unable to faithfully capture the full complexity of human-driven behavior. Large language models (LLMs), like ChatGPT, have emerged as a potential solution to this bottleneck by enabling researchers to explore human-driven interactions in previously unimaginable ways. Our research investigates simulations of human interactions using LLMs. Through prompt engineering, inspired by Park et al. (2023), we present two simulations of believable proxies of human behavior: a two-agent negotiation and a six-agent murder mystery game.",,arXiv,"['cs.ai', 'cs.ma']",,
554,grimm in wonderland prompt engineering with midjourney to illustrate fairytales,['Martin Ruskov'],http://arxiv.org/pdf/2302.08961v2.pdf,2023-02-17,," The quality of text-to-image generation is continuously improving, yet the boundaries of its applicability are still unclear. In particular, refinement of the text input with the objective of achieving better results - commonly called prompt engineering - so far seems to have not been geared towards work with pre-existing texts. We investigate whether text-to-image generation and prompt engineering could be used to generate basic illustrations of popular fairytales. Using Midjourney v4, we engage in action research with a dual aim: to attempt to generate 5 believable illustrations for each of 5 popular fairytales, and to define a prompt engineering process that starts from a pre-existing text and arrives at an illustration of it. We arrive at a tentative 4-stage process: i) initial prompt, ii) composition adjustment, iii) style refinement, and iv) variation selection. We also discuss three reasons why the generation model struggles with certain illustrations: difficulties with counts, bias from stereotypical configurations and inability to depict overly fantastic situations. Our findings are not limited to the specific generation model and are intended to be generalisable to future ones.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'i.2']",,
555,prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks,"['Jiho Shin', 'Clark Tang', 'Tahmineh Mohati', 'Maleknaz Nayebi', 'Song Wang', 'Hadi Hemmati']",http://arxiv.org/pdf/2310.10508v1.pdf,2023-10-11,," In this paper, we investigate the effectiveness of state-of-the-art LLM, i.e., GPT-4, with three different prompting engineering techniques (i.e., basic prompting, in-context learning, and task-specific prompting) against 18 fine-tuned LLMs on three typical ASE tasks, i.e., code generation, code summarization, and code translation. Our quantitative analysis of these prompting strategies suggests that prompt engineering GPT-4 cannot necessarily and significantly outperform fine-tuning smaller/older LLMs in all three tasks. For comment generation, GPT-4 with the best prompting strategy (i.e., task-specific prompt) had outperformed the first-ranked fine-tuned model by 8.33% points on average in BLEU.
However, for code generation, the first-ranked fine-tuned model outperforms GPT-4 with best prompting by 16.61% and 28.3% points, on average in BLEU. For code translation, GPT-4 and fine-tuned baselines tie as they outperform each other on different translation tasks. To explore the impact of different prompting strategies, we conducted a user study with 27 graduate students and 10 industry practitioners. From our qualitative analysis, we find that the GPT-4 with conversational prompts (i.e., when a human provides feedback and instructions back and forth with a model to achieve best results) showed drastic improvement compared to GPT-4 with automatic prompting strategies. Moreover, we observe that participants tend to request improvements, add more context, or give specific instructions as conversational prompts, which goes beyond typical and generic prompting strategies. Our study suggests that, at its current state, GPT-4 with conversational prompting has great potential for ASE tasks, but fully automated prompt engineering with no human in the loop requires more study and improvement.",,arXiv,['cs.se'],,
556,coprompt supporting prompt sharing and referring in collaborative natural language programming,"['Felicia Li Feng', 'Ryan Yen', 'Yuzhe You', 'Mingming Fan', 'Jian Zhao', 'Zhicong Lu']",http://arxiv.org/pdf/2310.09235v2.pdf,2023-10-13,," Natural language (NL) programming has become more approachable due to the powerful code-generation capability of large language models (LLMs). This shift to using NL to program enhances collaborative programming by reducing communication barriers and context-switching among programmers from varying backgrounds. However, programmers may face challenges during prompt engineering in a collaborative setting as they need to actively keep aware of their collaborators' progress and intents. In this paper, we aim to investigate ways to assist programmers' prompt engineering in a collaborative context. We first conducted a formative study to understand the workflows and challenges of programmers when using NL for collaborative programming. Based on our findings, we implemented a prototype, CoPrompt, to support collaborative prompt engineering by providing referring, requesting, sharing, and linking mechanisms. Our user study indicates that CoPrompt assists programmers in comprehending collaborators' prompts and building on their collaborators' work, reducing repetitive updates and communication costs.",,arXiv,['cs.hc'],,
557,promptengineering and transformerbased question generation and evaluation,['Rubaba Amyeen'],http://arxiv.org/pdf/2310.18867v1.pdf,2023-10-29,," Question generation has numerous applications in the educational context. Question generation can prove helpful for students when reviewing content and testing themselves. Furthermore, a question generation model can aid teachers by lessening the burden of creating assessments and other practice material. This paper aims to find the best method to generate questions from textual data through a transformer model and prompt engineering. In this research, we finetuned a pretrained distilBERT model on the SQuAD question answering dataset to generate questions. In addition to training a transformer model, prompt engineering was applied to generate questions effectively using the LLaMA model. The generated questions were compared against the baseline questions in the SQuAD dataset to evaluate the effectiveness of four different prompts. All four prompts demonstrated over 60% similarity on average.
Of the prompt-generated questions, 30% achieved a high similarity score greater than 70%.",,arXiv,"['cs.cl', 'cs.ai']",,
558,large language models in the workplace a case study on prompt engineering for job type classification,"['Benjamin Clavié', 'Alexandru Ciceu', 'Frederick Naylor', 'Guillaume Soulié', 'Thomas Brightwell']",http://arxiv.org/pdf/2303.07142v3.pdf,2023-03-13,," This case study investigates the task of job classification in a real-world setting, where the goal is to determine whether an English-language job posting is appropriate for a graduate or entry-level position. We explore multiple approaches to text classification, including supervised approaches such as traditional models like Support Vector Machines (SVMs) and state-of-the-art deep learning methods such as DeBERTa. We compare them with Large Language Models (LLMs) used in both few-shot and zero-shot classification settings. To accomplish this task, we employ prompt engineering, a technique that involves designing prompts to guide the LLMs towards the desired output. Specifically, we evaluate the performance of two commercially available state-of-the-art GPT-3.5-based language models, text-davinci-003 and gpt-3.5-turbo. We also conduct a detailed analysis of the impact of different aspects of prompt engineering on the model's performance. Our results show that, with a well-designed prompt, a zero-shot gpt-3.5-turbo classifier outperforms all other models, achieving a 6% increase in Precision@95% Recall compared to the best supervised approach. Furthermore, we observe that the wording of the prompt is a critical factor in eliciting the appropriate ""reasoning"" in the model, and that seemingly minor aspects of the prompt significantly affect the model's performance.",,arXiv,['cs.cl'],,
559,a taxonomy of prompt modifiers for texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2204.13988v3.pdf,2022-04-20,," Text-to-image generation has seen an explosion of interest since 2021. Today, beautiful and intriguing digital images and artworks can be synthesized from textual inputs (""prompts"") with deep generative models. Online communities around text-to-image generation and AI generated art have quickly emerged. This paper identifies six types of prompt modifiers used by practitioners in the online community based on a 3-month ethnographic study. The novel taxonomy of prompt modifiers provides researchers a conceptual starting point for investigating the practice of text-to-image generation, but may also help practitioners of AI generated art improve their images. We further outline how prompt modifiers are applied in the practice of ""prompt engineering."" We discuss research opportunities of this novel creative practice in the field of Human-Computer Interaction (HCI). The paper concludes with a discussion of broader implications of prompt engineering from the perspective of Human-AI Interaction (HAI) in future applications beyond the use case of text-to-image generation and AI generated art.",,arXiv,"['cs.mm', 'cs.cl', 'cs.hc', 'h.5; h.m; j.5']",,
560,what gpt knows about who is who,"['Xiaohan Yang', 'Eduardo Peynetti', 'Vasco Meerman', 'Chris Tanner']",http://arxiv.org/pdf/2205.07407v1.pdf,2022-05-16,," Coreference resolution -- which is a crucial task for understanding discourse and language at large -- has yet to witness widespread benefits from large language models (LLMs).
Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern \textit{generative}, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and prompt-sensitive, leading to inconsistent results.",,arXiv,"['cs.cl', 'cs.lg']",, +561,looking for a handsome carpenter! debiasing gpt3 job advertisements,"['Conrad Borchers', 'Dalia Sara Gala', 'Benjamin Gilburt', 'Eduard Oravkin', 'Wilfried Bounsi', 'Yuki M. Asano', 'Hannah Rose Kirk']",http://arxiv.org/pdf/2205.11374v1.pdf,2022-05-23,," The growing capability and availability of generative language models has enabled a wide range of new downstream tasks. Academic research has identified, quantified and mitigated biases present in language models but is rarely tailored to downstream tasks where wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts gives no significant improvement to bias, nor realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.",,arXiv,"['cs.cl', 'cs.ai']",, +562,arguments to key points mapping with promptbased learning,"['Ahnaf Mozib Samin', 'Behrooz Nikandish', 'Jingyan Chen']",http://arxiv.org/pdf/2211.14995v1.pdf,2022-11-28,," Handling and digesting a huge amount of information in an efficient manner has been a long-term demand in modern society. Some solutions to map key points (short textual summaries capturing essential information and filtering redundancies) to a large number of arguments/opinions have been provided recently (Bar-Haim et al., 2020). To complement the full picture of the argument-to-keypoint mapping task, we mainly propose two approaches in this paper. The first approach is to incorporate prompt engineering for fine-tuning the pre-trained language models (PLMs). The second approach utilizes prompt-based learning in PLMs to generate intermediary texts, which are then combined with the original argument-keypoint pairs and fed as inputs to a classifier, thereby mapping them. Furthermore, we extend the experiments to cross/in-domain to conduct an in-depth analysis. In our evaluation, we find that i) using prompt engineering in a more direct way (Approach 1) can yield promising results and improve the performance; ii) Approach 2 performs considerably worse than Approach 1 due to the negation issue of the PLM.",,arXiv,['cs.cl'],, +563,legal prompt engineering for multilingual legal judgement prediction,"['Dietrich Trautmann', 'Alina Petrova', 'Frank Schilder']",http://arxiv.org/pdf/2212.02199v1.pdf,2022-12-05,," Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide and assist a large language model (LLM) with performing a natural legal language processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal documents for the Legal Judgement Prediction (LJP) task.
We investigate the performance of zero-shot LPE for given facts in case-texts from the European Court of Human Rights (in English) and the Federal Supreme Court of Switzerland (in German, French and Italian). Our results show that zero-shot LPE is better compared to the baselines, but it still falls short compared to current state of the art supervised approaches. Nevertheless, the results are important, since there was 1) no explicit domain-specific data used - so we show that the transfer to the legal domain is possible for general-purpose LLMs, and 2) the LLMs where directly applied without any further training or fine-tuning - which in turn saves immensely in terms of additional computational costs.",,arXiv,"['cs.cl', 'cs.ai']",, +564,the infinite index information retrieval on generative texttoimage models,"['Niklas Deckers', 'Maik Fröbe', 'Johannes Kiesel', 'Gianluca Pandolfo', 'Christopher Schröder', 'Benno Stein', 'Martin Potthast']",http://arxiv.org/pdf/2212.07476v2.pdf,2022-12-14,," Conditional generative models such as DALL-E and Stable Diffusion generate images based on a user-defined text, the prompt. Finding and refining prompts that produce a desired image has become the art of prompt engineering. Generative models do not provide a built-in retrieval model for a user's information need expressed through prompts. In light of an extensive literature review, we reframe prompt engineering for generative models as interactive text-based retrieval on a novel kind of ""infinite index"". We apply these insights for the first time in a case study on image generation for game design with an expert. Finally, we envision how active learning may help to guide the retrieval of generated images.",,arXiv,"['cs.ir', 'cs.cl', 'cs.cv']",, +565,prompt engineering for transformerbased chemical similarity search identifies structurally distinct functional analogues,"['Clayton W. Kosonocky', 'Aaron L. Feller', 'Claus O. Wilke', 'Andrew D. Ellington']",http://arxiv.org/pdf/2305.16330v1.pdf,2023-05-17,," Chemical similarity searches are widely used in-silico methods for identifying new drug-like molecules. These methods have historically relied on structure-based comparisons to compute molecular similarity. Here, we use a chemical language model to create a vector-based chemical search. We extend implementations by creating a prompt engineering strategy that utilizes two different chemical string representation algorithms: one for the query and the other for the database. We explore this method by reviewing the search results from five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine, lysergic acid diethylamide, and fentanyl) and three dye-like query molecules (acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that this novel method identifies molecules that are functionally similar to the query, indicated by the associated patent literature, and that many of these molecules are structurally distinct from the query, making them unlikely to be found with traditional chemical similarity search methods. This method may aid in the discovery of novel structural classes of molecules that achieve target functionality.",,arXiv,"['physics.chem-ph', 'cs.lg']",, +566,submodular minimax optimization finding effective sets,"['Loay Mualem', 'Ethan R. Elenberg', 'Moran Feldman', 'Amin Karbasi']",http://arxiv.org/pdf/2305.16903v1.pdf,2023-05-26,," Despite the rich existing literature about minimax optimization in continuous settings, only very partial results of this kind have been obtained for combinatorial settings.
In this paper, we fill this gap by providing a characterization of submodular minimax optimization, the problem of finding a set (for either the min or the max player) that is effective against every possible response. We show when and under what conditions we can find such sets. We also demonstrate how minimax submodular optimization provides robust solutions for downstream machine learning applications such as (i) efficient prompt engineering for question answering, (ii) prompt engineering for dialog state tracking, (iii) identifying robust waiting locations for ride-sharing, (iv) ride-share difficulty kernelization, and (v) finding adversarial images. Our experiments demonstrate that our proposed algorithms consistently outperform other baselines.",,arXiv,"['cs.lg', 'cs.dm', 'math.oc', '68r05 (primary) 90c26, 90c20, 68t20, 68w40 (secondary)', 'g.2.1; i.2.m; f.2.2']",, +567,promptmagician interactive prompt engineering for texttoimage creation,"['Yingchaojie Feng', 'Xingbo Wang', 'Kam Kwai Wong', 'Sijia Wang', 'Yuhong Lu', 'Minfeng Zhu', 'Baicheng Wang', 'Wei Chen']",http://arxiv.org/pdf/2307.09036v2.pdf,2023-07-18,," Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.",,arXiv,"['cs.ai', 'cs.hc']",, +568,interactive task planning with language models,"['Boyi Li', 'Philipp Wu', 'Pieter Abbeel', 'Jitendra Malik']",http://arxiv.org/pdf/2310.10645v1.pdf,2023-10-16,," An interactive robot framework accomplishes long-horizon task planning and can easily generalize to new goals or distinct tasks, even during execution. However, most traditional methods require predefined module design, which makes it hard to generalize to different goals. Recent large language model based approaches can allow for more open-ended planning but often require heavy prompt engineering or domain-specific pretrained models. To tackle this, we propose a simple framework that achieves interactive task planning with language models. Our system incorporates both high-level planning and low-level function execution via language. We verify the robustness of our system in generating novel high-level instructions for unseen objectives and its ease of adaptation to different tasks by merely substituting the task guidelines, without the need for additional complex prompt engineering. Furthermore, when the user sends a new request, our system is able to replan accordingly with precision based on the new request, task guidelines and previously executed steps.
Please check more details on our https://wuphilipp.github.io/itp_site and https://youtu.be/TrKLuyv26_g.",,arXiv,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.hc']",, +569,prompt engineering through the lens of optimal control,"['Yifan Luo', 'Yiming Tang', 'Chengfeng Shen', 'Zhennan Zhou', 'Bin Dong']",http://arxiv.org/pdf/2310.14201v2.pdf,2023-10-22,," Prompt Engineering (PE) has emerged as a critical technique for guiding Large Language Models (LLMs) in solving intricate tasks. Its importance is highlighted by its potential to significantly enhance the efficiency and effectiveness of human-machine interaction. As tasks grow increasingly complex, recent advanced PE methods have extended beyond the limitations of single-round interactions to embrace multi-round interactions, which allows for a deeper and more nuanced engagement with LLMs. In this paper, we propose an optimal control framework tailored for multi-round interactions with LLMs. This framework provides a unified mathematical structure that not only systematizes the existing PE methods but also sets the stage for rigorous analytical improvements. Furthermore, we extend this framework to include PE via ensemble methods and multi-agent collaboration, thereby enlarging the scope of applicability. By adopting an optimal control perspective, we offer fresh insights into existing PE methods and highlight theoretical challenges that warrant future research. Besides, our work lays a foundation for the development of more effective and interpretable PE methods.",,arXiv,"['cs.lg', 'math.oc']",, +570,a communication theory perspective on prompting engineering methods for large language models,"['Yuanfeng Song', 'Yuanqin He', 'Xuefang Zhao', 'Hanlin Gu', 'Di Jiang', 'Haijun Yang', 'Lixin Fan', 'Qiang Yang']",http://arxiv.org/pdf/2310.18358v1.pdf,2023-10-24,," The springing up of Large Language Models (LLMs) has shifted the community from single-task-orientated natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm. Along this line of research endeavors in the area, LLM-based prompting methods have attracted much attention, partially due to the technological advantages brought by prompt engineering (PE) as well as the underlying NLP principles disclosed by various prompting methods. Traditional supervised learning usually requires training a model based on labeled data and then making predictions. In contrast, PE methods directly use the powerful capabilities of existing LLMs (i.e., GPT-3 and GPT-4) via composing appropriate prompts, especially under few-shot or zero-shot scenarios. Facing the abundance of studies related to the prompting and the ever-evolving nature of this field, this article aims to (i) illustrate a novel perspective to review existing PE methods, within the well-established communication theory framework; (ii) facilitate a better/deeper understanding of developing trends of existing PE methods used in four typical tasks; (iii) shed light on promising research directions for future PE methods.",,arXiv,"['cs.cl', 'cs.ai']",, +571,investigating prompt engineering in diffusion models,"['Sam Witteveen', 'Martin Andrews']",http://arxiv.org/pdf/2211.15462v1.pdf,2022-11-21,," With the spread of the use of Text2Img diffusion models such as DALL-E 2, Imagen, Mid Journey and Stable Diffusion, one challenge that artists face is selecting the right prompts to achieve the desired artistic output.
We present techniques for measuring the effect that specific words and phrases in prompts have, and (in the Appendix) present guidance on the selection of prompts to produce desired effects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, +572,refining the responses of llms by themselves,"['Tianqiang Yan', 'Tiansheng Xu']",http://arxiv.org/pdf/2305.04039v1.pdf,2023-05-06,," In this paper, we propose a simple yet efficient approach based on prompt engineering that leverages the large language model itself to optimize its answers without relying on auxiliary models. We introduce an iterative self-evaluating optimization mechanism, with the potential for improved output quality as iterations progress, removing the need for manual intervention. The experiment's findings indicate that utilizing our response refinement framework on the GPT-3.5 model yields results that are on par with, or even surpass, those generated by the cutting-edge GPT-4 model. Detailed implementation strategies and illustrative examples are provided to demonstrate the superiority of our proposed solution.",,arXiv,"['cs.cl', 'cs.ai']",, +573,efficient blackbox adversarial attacks on neural text detectors,"['Vitalii Fishchuk', 'Daniel Braun']",http://arxiv.org/pdf/2311.01873v1.pdf,2023-11-03,," Neural text detectors are models trained to detect whether a given text was generated by a language model or written by a human. In this paper, we investigate three simple and resource-efficient strategies (parameter tweaking, prompt engineering, and character-level mutations) to alter texts generated by GPT-3.5 that are unsuspicious or unnoticeable for humans but cause misclassification by neural text detectors. The results show that especially parameter tweaking and character-level mutations are effective strategies.",,arXiv,['cs.cl'],, +574,prompted software engineering in the era of ai models,['Dae-Kyoo Kim'],http://arxiv.org/pdf/2311.03359v1.pdf,2023-09-07,," This paper introduces prompted software engineering (PSE), which integrates prompt engineering to build effective prompts for language-based AI models, to enhance the software development process. PSE enables the use of AI models in software development to produce high-quality software with fewer resources, automating tedious tasks and allowing developers to focus on more innovative aspects. However, effective prompts are necessary to guide software development in generating accurate, relevant, and useful responses, while mitigating risks of misleading outputs. This paper describes how productive prompts should be built throughout the software development cycle.",,arXiv,['cs.se'],, +575,conversing with copilot exploring prompt engineering for solving cs1 problems using natural language,"['Paul Denny', 'Viraj Kumar', 'Nasser Giacaman']",http://arxiv.org/pdf/2210.15157v1.pdf,2022-10-27,," GitHub Copilot is an artificial intelligence model for automatically generating source code from natural language problem descriptions. Since June 2022, Copilot has officially been available for free to all students as a plug-in to development environments like Visual Studio Code. Prior work exploring OpenAI Codex, the underlying model that powers Copilot, has shown it performs well on typical CS1 problems thus raising concerns about the impact it will have on how introductory programming courses are taught. However, little is known about the types of problems for which Copilot does not perform well, or about the natural language interactions that a student might have with Copilot when resolving errors.
We explore these questions by evaluating the performance of Copilot on a publicly available dataset of 166 programming problems. We find that it successfully solves around half of these problems on its very first attempt, and that it solves 60\% of the remaining problems using only natural language changes to the problem description. We argue that this type of prompt engineering, which we believe will become a standard interaction between human and Copilot when it initially fails, is a potentially useful learning activity that promotes computational thinking skills, and is likely to change the nature of code writing skill development.",,arXiv,"['cs.hc', 'cs.ai']",, +576,enhancing automated program repair through finetuning and prompt engineering,"['Rishov Paul', 'Md. Mohib Hossain', 'Mohammed Latif Siddiq', 'Masum Hasan', 'Anindya Iqbal', 'Joanna C. S. Santos']",http://arxiv.org/pdf/2304.07840v2.pdf,2023-04-16,," Sequence-to-sequence models have been used to transform erroneous programs into correct ones when trained with a large enough dataset. Some recent studies also demonstrated strong empirical evidence that code review could improve the program repair further. Large language models, trained with Natural Language (NL) and Programming Language (PL), can contain inherent knowledge of both. In this study, we investigate if this inherent knowledge of PL and NL can be utilized to improve automated program repair. We applied PLBART and CodeT5, two state-of-the-art language models that are pre-trained with both PL and NL, on two such natural language-based program repair datasets and found that the pre-trained language models fine-tuned with datasets containing both code review and subsequent code changes notably outperformed each of the previous models. With the advent of code generative models like Codex and GPT-3.5-Turbo, we also performed zero-shot and few-shots learning-based prompt engineering to assess their performance on these datasets. However, the practical application of using LLMs in the context of automated program repair is still a long way off based on our manual analysis of the generated repaired codes by the learning models.",,arXiv,"['cs.lg', 'cs.se']",, +577,cheapfake detection with llm using prompt engineering,"['Guangyang Wu', 'Weijie Wu', 'Xiaohong Liu', 'Kele Xu', 'Tianjiao Wan', 'Wenyi Wang']",http://arxiv.org/pdf/2306.02776v1.pdf,2023-06-05,," The misuse of real photographs with conflicting image captions in news items is an example of the out-of-context (OOC) misuse of media. In order to detect OOC media, individuals must determine the accuracy of the statement and evaluate whether the triplet (~\textit{i.e.}, the image and two captions) relates to the same event. This paper presents a novel learnable approach for detecting OOC media in ICME'23 Grand Challenge on Detecting Cheapfakes. The proposed method is based on the COSMOS structure, which assesses the coherence between an image and captions, as well as between two captions. We enhance the baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a feature extractor. Specifically, we propose an innovative approach to feature extraction utilizing prompt engineering to develop a robust and reliable feature extractor with GPT3.5 model. The proposed method captures the correlation between two captions and effectively integrates this module into the COSMOS baseline model, which allows for a deeper understanding of the relationship between captions.
By incorporating this module, we demonstrate the potential for significant improvements in cheap-fakes detection performance. The proposed methodology holds promising implications for various applications such as natural language processing, image captioning, and text-to-image synthesis. Docker for submission is available at https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.",,arXiv,['cs.cv'],, +578,improving knowledge extraction from llms for task learning through agent analysis,"['James R. Kirk', 'Robert E. Wray', 'Peter Lindes']",http://arxiv.org/pdf/2306.06770v3.pdf,2023-06-11,," Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.",,arXiv,"['cs.ai', 'cs.hc', 'cs.ro', 'i.2.6; i.2.7']",, +579,texttosql empowered by large language models a benchmark evaluation,"['Dawei Gao', 'Haibin Wang', 'Yaliang Li', 'Xiuyu Sun', 'Yichen Qian', 'Bolin Ding', 'Jingren Zhou']",http://arxiv.org/pdf/2308.15363v4.pdf,2023-08-29,," Large language models (LLMs) have emerged as a new paradigm for Text-to-SQL task. However, the absence of a systematical benchmark inhibits the development of designing effective, efficient and economic LLM-based Text-to-SQL solutions. To address this challenge, in this paper, we first conduct a systematical and extensive comparison over existing prompt engineering methods, including question representation, example selection and example organization, and with these experimental results, we elaborate their pros and cons. Based on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLM, we investigate them in various scenarios, and further enhance their performance with supervised fine-tuning. Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of the supervised fine-tuning. Additionally, towards an efficient and economic LLM-based Text-to-SQL solution, we emphasize the token efficiency in prompt engineering and compare the prior studies under this metric.
We hope that our work provides a deeper understanding of Text-to-SQL with LLMs, and inspires further investigations and broad applications.",,arXiv,"['cs.db', 'cs.cl', 'cs.lg']",, +580,understanding prompt engineering may not require rethinking generalization,"['Victor Akinwande', 'Yiding Jiang', 'Dylan Sam', 'J. Zico Kolter']",http://arxiv.org/pdf/2310.03957v1.pdf,2023-10-06,," Zero-shot learning in prompted vision-language models, the practice of crafting prompts to build classifiers without an explicit training process, has achieved impressive performance in many settings. This success presents a seemingly surprising observation: these methods suffer relatively little from overfitting, i.e., when a prompt is manually engineered to achieve low error on a given training set (thus rendering the method no longer actually zero-shot), the approach still performs well on held-out test data. In this paper, we show that we can explain such performance well via recourse to classical PAC-Bayes bounds. Specifically, we show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature: for instance, the generalization bound of an ImageNet classifier is often within a few percentage points of the true test error. We demonstrate empirically that this holds for existing handcrafted prompts and prompts generated through simple greedy search. Furthermore, the resulting bound is well-suited for model selection: the models with the best bound typically also have the best test performance. This work thus provides a possible justification for the widespread practice of prompt engineering, even if it seems that such methods could potentially overfit the training data.",,arXiv,"['cs.lg', 'cs.cv']",, +581,configuration validation with large language models,"['Xinyu Lian', 'Yinfang Chen', 'Runxiang Cheng', 'Jie Huang', 'Parth Thakkar', 'Tianyin Xu']",http://arxiv.org/pdf/2310.09690v1.pdf,2023-10-15,," Misconfigurations are the major causes of software failures. Existing configuration validation techniques rely on manually written rules or test cases, which are expensive to implement and maintain, and are hard to be comprehensive. Leveraging machine learning (ML) and natural language processing (NLP) for configuration validation is considered a promising direction, but has been facing challenges such as the need of not only large-scale configuration data, but also system-specific features and models which are hard to generalize. Recent advances in Large Language Models (LLMs) show the promises to address some of the long-lasting limitations of ML/NLP-based configuration validation techniques. In this paper, we present an exploratory analysis on the feasibility and effectiveness of using LLMs like GPT and Codex for configuration validation. Specifically, we take a first step to empirically evaluate LLMs as configuration validators without additional fine-tuning or code generation. We develop a generic LLM-based validation framework, named Ciri, which integrates different LLMs. Ciri devises effective prompt engineering with few-shot learning based on both valid configuration and misconfiguration data. Ciri also validates and aggregates the outputs of LLMs to generate validation results, coping with known hallucination and nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on five popular LLMs using configuration data of six mature, widely deployed open-source systems.
Our analysis (1) confirms the potential of using LLMs for configuration validation, (2) understands the design space of LLM-based validators like Ciri, especially in terms of prompt engineering with few-shot learning, and (3) reveals open challenges such as ineffectiveness in detecting certain types of misconfigurations and biases to popular configuration parameters.",,arXiv,"['cs.se', 'cs.ai', 'cs.os']",, +582,learning to prompt for visionlanguage models,"['Kaiyang Zhou', 'Jingkang Yang', 'Chen Change Loy', 'Ziwei Liu']",http://arxiv.org/pdf/2109.01134v6.pdf,2021-09-02,," Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming -- one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, +583,an empirical study on fewshot knowledge probing for pretrained language models,"['Tianxing He', 'Kyunghyun Cho', 'James Glass']",http://arxiv.org/pdf/2109.02772v2.pdf,2021-09-06,," Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models. Existing work uses considerable amounts of data to tune the prompts for better performance. In this work, we compare a variety of approaches under a few-shot knowledge probing setting, where only a small number (e.g., 10 or 20) of example triples are available. In addition, we create a new dataset named TREx-2p, which contains 2-hop relations. We report that few-shot examples can strongly boost the probing performance for both 1-hop and 2-hop relations. In particular, we find that a simple-yet-effective approach of finetuning the bias vectors in the model outperforms existing prompt-engineering methods.
Our dataset and code are available at \url{https://github.com/cloudygoose/fewshot_lama}.",,arXiv,['cs.ai'],, +584,solving probability and statistics problems by program synthesis,"['Leonard Tang', 'Elizabeth Ke', 'Nikhil Singh', 'Nakul Verma', 'Iddo Drori']",http://arxiv.org/pdf/2111.08267v1.pdf,2021-11-16,," We solve university level probability and statistics questions by program synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on code. We transform course problems from MIT's 18.05 Introduction to Probability and Statistics and Harvard's STAT110 Probability into programming tasks. We then execute the generated code to get a solution. Since these course questions are grounded in probability, we often aim to have Codex generate probabilistic programs that simulate a large number of probabilistic dependencies to compute its solution. Our approach requires prompt engineering to transform the question from its original form to an explicit, tractable form that results in a correct program and solution. To estimate the amount of work needed to translate an original question into its tractable form, we measure the similarity between original and transformed questions. Our work is the first to introduce a new dataset of university-level probability and statistics problems and solve these problems in a scalable fashion using the program synthesis capabilities of large language models.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, +585,polyglot prompt multilingual multitask promptraining,"['Jinlan Fu', 'See-Kiong Ng', 'Pengfei Liu']",http://arxiv.org/pdf/2204.14264v2.pdf,2022-04-29,," This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code.",,arXiv,['cs.cl'],, +586,clipclop clipguided collage and photomontage,"['Piotr Mirowski', 'Dylan Banarse', 'Mateusz Malinowski', 'Simon Osindero', 'Chrisantha Fernando']",http://arxiv.org/pdf/2205.03146v3.pdf,2022-05-06,," The unabated mystique of large-scale neural networks, such as the CLIP dual image-and-text encoder, popularized automatically generated art. Increasingly more sophisticated generators enhanced the artworks' realism and visual appearance, and creative prompt engineering enabled stylistic expression. Guided by an artist-in-the-loop ideal, we design a gradient-based generator to produce collages.
It requires the human artist to curate libraries of image patches and to describe (with prompts) the whole image composition, with the option to manually adjust the patches' positions during generation, thereby allowing humans to reclaim some control of the process and achieve greater creative freedom. We explore the aesthetic potentials of high-resolution collages, and provide an open-source Google Colab as an artistic tool.",,arXiv,"['cs.cv', 'cs.ai']",, +587,the creativity of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2206.02904v4.pdf,2022-05-13,," Text-guided synthesis of images has made a giant leap towards becoming a mainstream phenomenon. With text-to-image generation systems, anybody can create digital images and artworks. This provokes the question of whether text-to-image generation is creative. This paper expounds on the nature of human creativity involved in text-to-image art (so-called ""AI art"") with a specific focus on the practice of prompt engineering. The paper argues that the current product-centered view of creativity falls short in the context of text-to-image generation. A case exemplifying this shortcoming is provided and the importance of online communities for the creative ecosystem of text-to-image art is highlighted. The paper provides a high-level summary of this online ecosystem drawing on Rhodes' conceptual four P model of creativity. Challenges for evaluating the creativity of text-to-image generation and opportunities for research on text-to-image generation in the field of Human-Computer Interaction (HCI) are discussed.",,arXiv,"['cs.hc', 'cs.gr', 'h.5; h.m']",, +588,rationaleaugmented ensembles in language models,"['Xuezhi Wang', 'Jason Wei', 'Dale Schuurmans', 'Quoc Le', 'Ed Chi', 'Denny Zhou']",http://arxiv.org/pdf/2207.00747v1.pdf,2022-07-02,," Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks. We reconsider rationale-augmented prompting for few-shot in-context learning, where (input -> output) prompts are expanded to (input, rationale -> output) prompts. For rationale-augmented prompting we demonstrate how existing approaches, which rely on manual prompt engineering, are subject to sub-optimal rationales that may harm performance. To mitigate this brittleness, we propose a unified framework of rationale-augmented ensembles, where we identify rationale sampling in the output space as the key component to robustly improve performance. This framework is general and can easily be extended to common natural language processing tasks, even those that do not traditionally leverage intermediate steps, such as question answering, word sense disambiguation, and sentiment analysis. We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches--including standard prompting without rationales and rationale-based chain-of-thought prompting--while simultaneously improving interpretability of model predictions through the associated rationales.",,arXiv,['cs.cl'],, +589,will it blend mixing training paradigms & prompting for argument quality prediction,"['Michiel van der Meer', 'Myrthe Reuver', 'Urja Khurana', 'Lea Krause', 'Selene Báez Santamaría']",http://arxiv.org/pdf/2209.08966v2.pdf,2022-09-19,," This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction.
We perform prompt engineering using GPT-3, and also investigate the training paradigms multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models. Prompting GPT-3 works best for predicting argument validity, and argument novelty is best estimated by a model trained using all three training paradigms.",,arXiv,"['cs.cl', 'cs.ai']",, +590,controllable image captioning via prompting,"['Ning Wang', 'Jiahao Xie', 'Jihao Wu', 'Mingbo Jia', 'Linlin Li']",http://arxiv.org/pdf/2212.01803v1.pdf,2022-12-04,," Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model is qualified to perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding the prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding the heuristic prompt engineering and meanwhile exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks including COCO Karpathy split and TextCaps using a unified model.",,arXiv,['cs.cv'],, +591,explanation regeneration via information bottleneck,"['Qintong Li', 'Zhiyong Wu', 'Lingpeng Kong', 'Wei Bi']",http://arxiv.org/pdf/2212.09603v2.pdf,2022-12-19,," Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully-selected evidence to form supportive arguments for predictions. Due to the superior generative capacity of large pretrained language models, recent work built on prompt engineering enables explanation generation without specific training. However, explanation generated through single-pass prompting often lacks sufficiency and conciseness. To address this problem, we develop an information bottleneck method EIB to produce refined explanations that are sufficient and concise. Our approach regenerates the free-text explanation by polishing the single-pass output from the pretrained language model but retaining the information that supports the contents being explained. Experiments on two out-of-domain tasks verify the effectiveness of EIB through automatic evaluation and thoroughly-conducted human evaluation.",,arXiv,['cs.cl'],, +592,uprise universal prompt retrieval for improving zeroshot evaluation,"['Daixuan Cheng', 'Shaohan Huang', 'Junyu Bi', 'Yuefeng Zhan', 'Jianfeng Liu', 'Yujing Wang', 'Hao Sun', 'Furu Wei', 'Denvy Deng', 'Qi Zhang']",http://arxiv.org/pdf/2303.08518v4.pdf,2023-03-15,," Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization.
We propose UPRISE (Universal Prompt Retrieval for Improving zero-Shot Evaluation), which tunes a lightweight and versatile retriever that automatically retrieves prompts for a given zero-shot task input. Specifically, we demonstrate universality in a cross-task and cross-model scenario: the retriever is tuned on a diverse set of tasks, but tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for tuning the retriever, but test the retriever on different LLMs of much larger scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that UPRISE mitigates the hallucination problem in our experiments with ChatGPT, suggesting its potential to improve even the strongest LLMs. Our model and code are available at https://github.com/microsoft/LMOps.",,arXiv,['cs.cl'],, +593,patchtoken aligned bayesian prompt learning for visionlanguage models,"['Xinyang Liu', 'Dongsheng Wang', 'Miaoge Li', 'Zhibin Duan', 'Yishi Xu', 'Bo Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2303.09100v1.pdf,2023-03-16,," For downstream applications of vision-language pre-trained models, there has been significant interest in constructing effective prompts. Existing works on prompt engineering, which either require laborious manual designs or optimize the prompt tuning as a point estimation problem, may fail to describe diverse characteristics of categories and limit their applications. We introduce a Bayesian probabilistic resolution to prompt learning, where the label-specific stochastic prompts are generated hierarchically by first sampling a latent vector from an underlying distribution and then employing a lightweight generative model. Importantly, we semantically regularize prompt learning with the visual knowledge and view images and the corresponding prompts as patch and token sets under optimal transport, which pushes the prompt tokens to faithfully capture the label-specific visual concepts, instead of overfitting the training categories. Moreover, the proposed model can also be straightforwardly extended to the conditional case where the instance-conditional prompts are generated to improve the generalizability. Extensive experiments on 15 datasets show promising transferability and generalization performance of our proposed model.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +594,safety analysis in the era of large language models a case study of stpa using chatgpt,"['Yi Qi', 'Xingyu Zhao', 'Siddartha Khastgir', 'Xiaowei Huang']",http://arxiv.org/pdf/2304.01246v3.pdf,2023-04-03,," Can safety analysis make use of Large Language Models (LLMs)? A case study explores Systems Theoretic Process Analysis (STPA) applied to Automatic Emergency Brake (AEB) and Electricity Demand Side Management (DSM) systems using ChatGPT. We investigate how collaboration schemes, input semantic complexity, and prompt guidelines influence STPA results. Comparative results show that using ChatGPT without human intervention may be inadequate due to reliability related issues, but with careful design, it may outperform human experts. No statistically significant differences are found when varying the input semantic complexity or using common prompt guidelines, which suggests the necessity for developing domain-specific prompt engineering.
We also highlight future challenges, including concerns about LLM trustworthiness and the necessity for standardisation and regulation in this domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.se']",, +595,constructing dreams using generative ai,"['Safinah Ali', 'Daniella DiPaola', 'Randi Williams', 'Prerna Ravi', 'Cynthia Breazeal']",http://arxiv.org/pdf/2305.12013v1.pdf,2023-05-19,," Generative AI tools introduce new and accessible forms of media creation for youth. They also raise ethical concerns about the generation of fake media, data protection, privacy and ownership of AI-generated art. Since generative AI is already being used in products used by youth, it is critical that they understand how these tools work and how they can be used or misused. In this work, we facilitated students' generative AI learning through expression of their imagined future identities. We designed a learning workshop - Dreaming with AI - where students learned about the inner workings of generative AI tools, used text-to-image generation algorithms to create their imaged future dreams, reflected on the potential benefits and harms of generative AI tools and voiced their opinions about policies for the use of these tools in classrooms. In this paper, we present the learning activities and experiences of 34 high school students who engaged in our workshops. Students reached creative learning objectives by using prompt engineering to create their future dreams, gained technical knowledge by learning the abilities, limitations, text-visual mappings and applications of generative AI, and identified most potential societal benefits and harms of generative AI.",,arXiv,"['cs.hc', 'cs.ai', 'cs.cy']",, +596,making language models better tool learners with execution feedback,"['Shuofei Qiao', 'Honghao Gui', 'Huajun Chen', 'Ningyu Zhang']",http://arxiv.org/pdf/2305.13068v1.pdf,2023-05-22,," Tools serve as pivotal interfaces that enable humans to understand and reshape the world. With the advent of foundational models, AI systems can utilize tools to expand their capabilities and interact with the world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce language models to utilize tools indiscriminately, as complex problems often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the language model to selectively use tools by decreasing the model's dependency on tools while enhancing the performance.
Code and datasets will be available in https://github.com/zjunlp/trice.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ir', 'cs.lg']",, +597,cona a novel contextaware instruction paradigm for communication using large language model,"['Nan Zhou', 'Xinghui Tao', 'Xi Chen']",http://arxiv.org/pdf/2305.18620v1.pdf,2023-05-26,," We introduce CONA, a novel context-aware instruction paradigm for effective knowledge dissemination using generative pre-trained transformer (GPT) models. CONA is a flexible framework designed to leverage the capabilities of Large Language Models (LLMs) and incorporate DIKW (Data, Information, Knowledge, Wisdom) hierarchy to automatically instruct and optimise presentation content, anticipate potential audience inquiries, and provide context-aware answers that adaptive to the knowledge level of the audience group. The unique aspect of the CONA paradigm lies in its combination of an independent advisory mechanism and a recursive feedback loop rooted on the DIKW hierarchy. This synergy significantly enhances context-aware contents, ensuring they are accessible and easily comprehended by the audience. This paradigm is an early pioneer to explore new methods for knowledge dissemination and communication in the LLM era, offering effective support for everyday knowledge sharing scenarios. We conduct experiments on a range of audience roles, along with materials from various disciplines using GPT4. Both quantitative and qualitative results demonstrated that the proposed CONA paradigm achieved remarkable performance compared to the outputs guided by conventional prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, +598,gpt4tools teaching large language model to use tools via selfinstruction,"['Rui Yang', 'Lin Song', 'Yanwei Li', 'Sijie Zhao', 'Yixiao Ge', 'Xiu Li', 'Ying Shan']",http://arxiv.org/pdf/2305.18752v1.pdf,2023-05-30,," This paper aims to efficiently enable Large Language Models (LLMs) to use multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have shown great potential for tool usage through sophisticated prompt engineering. Nevertheless, these models typically rely on prohibitive computational costs and publicly inaccessible data. To address these challenges, we propose the GPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA and OPT, to use tools. It generates an instruction-following dataset by prompting an advanced teacher with various multi-modal contexts. By using the Low-Rank Adaptation (LoRA) optimization, our approach facilitates the open-source LLMs to solve a range of visual problems, including visual comprehension and image generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to use tools, which is performed in both zero-shot and fine-tuning ways. Extensive experiments demonstrate the effectiveness of our method on various language models, which not only significantly improves the accuracy of invoking seen tools, but also enables the zero-shot capacity for unseen tools. The code and demo are available at https://github.com/StevenGrove/GPT4Tools.",,arXiv,"['cs.cv', 'cs.cl']",, +599,prompting is all you need automated android bug replay with large language models,"['Sidong Feng', 'Chunyang Chen']",http://arxiv.org/pdf/2306.01987v2.pdf,2023-06-03,," Bug reports are vital for software maintenance that allow users to inform developers of the problems encountered while using the software. As such, researchers have committed considerable resources toward automating bug replay to expedite the process of software maintenance.
Nonetheless, the success of current automated approaches is largely dictated by the characteristics and quality of bug reports, as they are constrained by the limitations of manually-crafted patterns and pre-defined vocabulary lists. Inspired by the success of Large Language Models (LLMs) in natural language understanding, we propose AdbGPT, a new lightweight approach to automatically reproduce the bugs from bug reports through prompt engineering, without any training and hard-coding effort. AdbGPT leverages few-shot learning and chain-of-thought reasoning to elicit human knowledge and logical reasoning from LLMs to accomplish the bug replay in a manner similar to a developer. Our evaluations demonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3% of bug reports in 253.6 seconds, outperforming the state-of-the-art baselines and ablation studies. We also conduct a small-scale user study to confirm the usefulness of AdbGPT in enhancing developers' bug replay capabilities.",,arXiv,['cs.se'],, +600,an approach to solving the abstraction and reasoning corpus (arc) challenge,['Tan John Chong Min'],http://arxiv.org/pdf/2306.03553v1.pdf,2023-06-06,," We utilise the power of Large Language Models (LLMs), in particular GPT4, to be prompt engineered into performing an arbitrary task. Here, we give the model some human priors via text, along with some typical procedures for solving the ARC tasks, and ask it to generate the i) broad description of the input-output relation, ii) detailed steps of the input-output mapping, iii) use the detailed steps to perform manipulation on the test input and derive the test output. The current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those with small grids of 8x8 and below). With tweaks to the prompt to make it more specific for the use case, it can solve more. We posit that when scaled to a multi-agent system with usage of past memory and equipped with an image interpretation tool via Visual Question Answering, we may actually be able to solve the majority of the ARC challenge",,arXiv,['cs.ai'],, +601,falle a foley sound synthesis model and strategies,"['Minsung Kang', 'Sangshin Oh', 'Hyeongi Moon', 'Kyungyun Lee', 'Ben Sangbae Chon']",http://arxiv.org/pdf/2306.09807v2.pdf,2023-06-16,," This paper introduces FALL-E, a foley synthesis system and its training/inference strategies. The FALL-E model employs a cascaded approach comprising low-resolution spectrogram generation, spectrogram super-resolution, and a vocoder. We trained every sound-related model from scratch using our extensive datasets, and utilized a pre-trained language model. We conditioned the model with dataset-specific texts, enabling it to learn sound quality and recording environment based on text input. Moreover, we leveraged external language models to improve text descriptions of our datasets and performed prompt engineering for quality, coherence, and diversity. FALL-E was evaluated by an objective measure as well as listening tests in the DCASE 2023 challenge Task 7.
The submission achieved the second place on average, while achieving the best score for diversity, second place for audio quality, and third place for class fitness.",,arXiv,"['eess.as', 'cs.lg', 'cs.sd']",, +602,the cultivated practices of texttoimage generation,['Jonas Oppenlaender'],http://arxiv.org/pdf/2306.11393v1.pdf,2023-06-20,," Humankind is entering a novel creative era in which anybody can synthesize digital information using generative artificial intelligence (AI). Text-to-image generation, in particular, has become vastly popular and millions of practitioners produce AI-generated images and AI art online. This chapter first gives an overview of the key developments that enabled a healthy co-creative online ecosystem around text-to-image generation to rapidly emerge, followed by a high-level description of key elements in this ecosystem. A particular focus is placed on prompt engineering, a creative practice that has been embraced by the AI art community. It is then argued that the emerging co-creative ecosystem constitutes an intelligent system on its own - a system that both supports human creativity, but also potentially entraps future generations and limits future development efforts in AI. The chapter discusses the potential risks and dangers of cultivating this co-creative ecosystem, such as the bias inherent in today's training data, potential quality degradation in future image generation systems due to synthetic data becoming common place, and the potential long-term effects of text-to-image generation on people's imagination, ambitions, and development.",,arXiv,"['cs.cy', 'cs.ai', 'k.4; j.5; i.2.0; k.5.m']",, +603,chitchat or deep talk prompt engineering for process mining,"['Urszula Jessen', 'Michal Sroka', 'Dirk Fahland']",http://arxiv.org/pdf/2307.09909v1.pdf,2023-07-19,," This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amend many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse datasets.",,arXiv,['cs.ai'],, +604,sentimentgpt exploiting gpt for advanced sentiment analysis and its departure from current machine learning,"['Kiana Kheiri', 'Hamid Karimi']",http://arxiv.org/pdf/2307.10234v2.pdf,2023-07-16,," This study presents a thorough examination of various Generative Pretrained Transformer (GPT) methodologies in sentiment analysis, specifically in the context of Task 4 on the SemEval 2017 dataset. Three primary strategies are employed: 1) prompt engineering using the advanced GPT-3.5 Turbo, 2) fine-tuning GPT models, and 3) an inventive approach to embedding classification. The research yields detailed comparative insights among these strategies and individual GPT models, revealing their unique strengths and potential limitations.
Additionally, the study compares these GPT-based methodologies with other current, high-performing models previously used with the same dataset. The results illustrate the significant superiority of the GPT approaches in terms of predictive performance, more than 22\% in F1-score compared to the state-of-the-art. Further, the paper sheds light on common challenges in sentiment analysis tasks, such as understanding context and detecting sarcasm. It underscores the enhanced capabilities of the GPT models to effectively handle these complexities. Taken together, these findings highlight the promising potential of GPT models in sentiment analysis, setting the stage for future research in this field. The code can be found at https://github.com/DSAatUSU/SentimentGPT",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.si']",,
605,domain knowledge distillation from large language model an empirical study in the autonomous driving domain,"['Yun Tang', 'Antonio A. Bruto da Costa', 'Jason Zhang', 'Irvine Patrick', 'Siddartha Khastgir', 'Paul Jennings']",http://arxiv.org/pdf/2307.11769v1.pdf,2023-07-17,," Engineering knowledge-based (or expert) systems require extensive manual effort and domain knowledge. As Large Language Models (LLMs) are trained using an enormous amount of cross-domain knowledge, it becomes possible to automate such engineering processes. This paper presents an empirical automation and semi-automation framework for domain knowledge distillation using prompt engineering and the LLM ChatGPT. We assess the framework empirically in the autonomous driving domain and present our key observations. In our implementation, we construct the domain knowledge ontology by ""chatting"" with ChatGPT. The key finding is that while fully automated domain ontology construction is possible, human supervision and early intervention typically improve efficiency and output quality as they lessen the effects of response randomness and the butterfly effect. We, therefore, also develop a web-based distillation assistant enabling supervision and flexible intervention at runtime. We hope our findings and tools could inspire future research toward revolutionizing the engineering of knowledge-based systems across application domains.",,arXiv,['cs.cl'],,
606,do llms possess a personality making the mbti test an amazing evaluation for large language models,"['Keyu Pan', 'Yawen Zeng']",http://arxiv.org/pdf/2307.16180v1.pdf,2023-07-30,," The field of large language models (LLMs) has made significant progress, and their knowledge storage capacity is approaching that of human beings. Furthermore, advanced techniques, such as prompt learning and reinforcement learning, are being employed to address ethical concerns and hallucination problems associated with LLMs, bringing them closer to aligning with human values. This situation naturally raises the question of whether LLMs with human-like abilities possess a human-like personality? In this paper, we aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for LLMs. Specifically, extensive experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) How does the training dataset affect the model's personality. Although the MBTI is not a rigorous assessment, it can still reflect the similarity between LLMs and human personality. In practice, the MBTI has the potential to serve as a rough indicator.
Our codes are available at https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti.",,arXiv,['cs.cl'],,
607,alphagpt humanai interactive alpha mining for quantitative investment,"['Saizhuo Wang', 'Hang Yuan', 'Leon Zhou', 'Lionel M. Ni', 'Heung-Yeung Shum', 'Jian Guo']",http://arxiv.org/pdf/2308.00016v1.pdf,2023-07-31,," One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, either hand-crafted factor synthesizing or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quants. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to ``understand'' the ideas of quant researchers and outputs creative, insightful, and effective alphas. We demonstrate the effectiveness and advantage of Alpha-GPT via a number of alpha mining experiments.",,arXiv,"['q-fin.cp', 'cs.ai', 'cs.cl']",,
608,optimizing machine translation through prompt engineering an investigation into chatgpt's customizability,['Masaru Yamada'],http://arxiv.org/pdf/2308.01391v1.pdf,2023-08-02,," This paper explores the influence of integrating the purpose of the translation and the target audience into prompts on the quality of translations produced by ChatGPT. Drawing on previous translation studies, industry practices, and ISO standards, the research underscores the significance of the pre-production phase in the translation process. The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations, a feat yet to be realized by conventional Machine Translation (MT). The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions. The evaluation is conducted from a practicing translator's viewpoint, both subjectively and qualitatively, supplemented by the use of OpenAI's word embedding API for cosine similarity calculations. The findings suggest that the integration of the purpose and target audience into prompts can indeed modify the generated translations, generally enhancing the translation quality by industry standards. The study also demonstrates the practical application of the ""good translation"" concept, particularly in the context of marketing documents and culturally dependent idioms.",,arXiv,['cs.cl'],,
609,interact exploring the potentials of chatgpt as a cooperative agent,"['Po-Lin Chen', 'Cheng-Shang Chang']",http://arxiv.org/pdf/2308.01552v1.pdf,2023-08-03,," This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it a numerous roles like a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering.
The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",,
610,data race detection using large language models,"['Le Chen', 'Xianzhong Ding', 'Murali Emani', 'Tristan Vanderbruggen', 'Pei-hung Lin', 'Chuanhua Liao']",http://arxiv.org/pdf/2308.07505v2.pdf,2023-08-15,," Large language models (LLMs) are demonstrating significant promise as an alternate strategy to facilitate analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation. In this paper, we explore a novel LLM-based data race detection approach combining prompting engineering and fine-tuning techniques. We create a dedicated dataset named DRB-ML, which is derived from DataRaceBench, with fine-grain labels showing the presence of data race pairs and their associated variables, line numbers, and read/write information. DRB-ML is then used to evaluate representative LLMs and fine-tune open-source ones. Our experiment shows that LLMs can be a viable approach to data race detection. However, they still cannot compete with traditional data race detection tools when we need detailed information about variable pairs causing data races.",,arXiv,"['cs.lg', 'cs.cl']",,
611,datatotext generation for severely underresourced languages with gpt35 a bit of help needed from google translate,"['Michela Lorandi', 'Anya Belz']",http://arxiv.org/pdf/2308.09957v1.pdf,2023-08-19,," LLMs like GPT are great at tasks involving English which dominates in their training data. In this paper, we look at how they cope with tasks involving languages that are severely under-represented in their training data, in the context of data-to-text generation for Irish, Maltese, Welsh and Breton. During the prompt-engineering phase we tested a range of prompt types and formats on GPT-3.5 and~4 with a small sample of example input/output pairs. We then fully evaluated the two most promising prompts in two scenarios: (i) direct generation into the under-resourced language, and (ii) generation into English followed by translation into the under-resourced language. We find that few-shot prompting works better for direct generation into under-resourced languages, but that the difference disappears when pivoting via English. The few-shot + translation system variants were submitted to the WebNLG 2023 shared task where they outperformed competitor systems by substantial margins in all languages on all metrics. We conclude that good performance on under-resourced languages can be achieved out-of-the box with state-of-the-art LLMs. However, our best results (for Welsh) remain well below the lowest ranked English system at WebNLG'20.",,arXiv,"['cs.cl', 'cs.ai']",,
612,"furchat an embodied conversational agent using llms, combining open and closeddomain dialogue with facial expressions","['Neeraj Cherakara', 'Finny Varghese', 'Sheena Shabana', 'Nivan Nelson', 'Abhiram Karukayil', 'Rohith Kulothungan', 'Mohammed Afil Farhan', 'Birthe Nesset', 'Meriam Moujahid', 'Tanvi Dinkar', 'Verena Rieser', 'Oliver Lemon']",http://arxiv.org/pdf/2308.15214v2.pdf,2023-08-29,," We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation.
We deployed the system onto a Furhat robot, which is highly expressive and capable of using both verbal and nonverbal cues during interaction. The system was designed specifically for the National Robotarium to interact with visitors through natural conversations, providing them with information about the facilities, research, news, upcoming events, etc. The system utilises the state-of-the-art GPT-3.5 model to generate such information along with domain-general conversations and facial expressions based on prompt engineering.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ro']",,
613,linking microblogging sentiments to stock price movement an application of gpt4,"['Rick Steinert', 'Saskia Altmann']",http://arxiv.org/pdf/2308.16771v1.pdf,2023-08-31,," This paper investigates the potential improvement of the GPT-4 Language Learning Model (LLM) in comparison to BERT for modeling same-day daily stock price movements of Apple and Tesla in 2017, based on sentiment analysis of microblogging messages. We recorded daily adjusted closing prices and translated them into up-down movements. Sentiment for each day was extracted from messages on the Stocktwits platform using both LLMs. We develop a novel method to engineer a comprehensive prompt for contextual sentiment analysis which unlocks the true capabilities of modern LLM. This enables us to carefully retrieve sentiments, perceived advantages or disadvantages, and the relevance towards the analyzed company. Logistic regression is used to evaluate whether the extracted message contents reflect stock price movements. As a result, GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six months and substantially exceeding a naive buy-and-hold strategy, reaching a peak accuracy of 71.47 % in May. The study also highlights the importance of prompt engineering in obtaining desired outputs from GPT-4's contextual abilities. However, the costs of deploying GPT-4 and the need for fine-tuning prompts highlight some practical considerations for its use.",,arXiv,"['q-fin.st', 'q-fin.cp']",,
614,fiat fusing learning paradigms with instructionaccelerated tuning,"['Xinyi Wang', 'John Wieting', 'Jonathan H. Clark']",http://arxiv.org/pdf/2309.04663v2.pdf,2023-09-09,," Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with their own trade-offs based on available data, model size, compute cost, ease-of-use, and final quality with neither solution performing well across-the-board. In this article, we first describe ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called FIAT that fuses the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very largest models while also using similar methods to perform parameter updates on a modestly-sized LLM with parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of multilingual tasks and observe that FIAT performs better than both ICL and fine-tuning at scales ranging from 100-10,000 training examples.
We hope that FIAT provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.",,arXiv,"['cs.cl', 'cs.ai']",,
615,detecting natural language biases with promptbased learning,"['Md Abdul Aowal', 'Maliha T Islam', 'Priyanka Mary Mammen', 'Sandesh Shetty']",http://arxiv.org/pdf/2309.05227v1.pdf,2023-09-11,," In this project, we want to explore the newly emerging field of prompt engineering and apply it to the downstream task of detecting LM biases. More concretely, we explore how to design prompts that can indicate 4 different types of biases: (1) gender, (2) race, (3) sexual orientation, and (4) religion-based. Within our project, we experiment with different manually crafted prompts that can draw out the subtle biases that may be present in the language model. We apply these prompts to multiple variations of popular and well-recognized models: BERT, RoBERTa, and T5 to evaluate their biases. We provide a comparative analysis of these models and assess them using a two-fold method: use human judgment to decide whether model predictions are biased and utilize model-level judgment (through further prompts) to understand if a model can self-diagnose the biases of its own prediction.",,arXiv,"['cs.cl', 'cs.ai']",,
616,large language models for failure mode classification an investigation,"['Michael Stewart', 'Melinda Hodkiewicz', 'Sirui Li']",http://arxiv.org/pdf/2309.08181v1.pdf,2023-09-15,," In this paper we present the first investigation into the effectiveness of Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the task of automatically labelling an observation with a corresponding failure mode code, is a critical task in the maintenance domain as it reduces the need for reliability engineers to spend their time manually analysing work orders. We detail our approach to prompt engineering to enable an LLM to predict the failure mode of a given observation using a restricted code list. We demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on annotated data is a significant improvement over a currently available text classification model (F1=0.60) trained on the same annotated data set. The fine-tuned model also outperforms the out-of-the box GPT-3.5 (F1=0.46). This investigation reinforces the need for high quality fine-tuning data sets for domain-specific tasks using LLMs.",,arXiv,['cs.cl'],,
617,dynacon dynamic robot planner with contextual awareness via llms,"['Gyeongmin Kim', 'Taehyeon Kim', 'Shyam Sundar Kannan', 'Vishnunandan L. N. Venkatesh', 'Donghan Kim', 'Byung-Cheol Min']",http://arxiv.org/pdf/2309.16031v1.pdf,2023-09-27,," Mobile robots often rely on pre-existing maps for effective path planning and navigation. However, when these maps are unavailable, particularly in unfamiliar environments, a different approach become essential. This paper introduces DynaCon, a novel system designed to provide mobile robots with contextual awareness and dynamic adaptability during navigation, eliminating the reliance of traditional maps. DynaCon integrates real-time feedback with an object server, prompt engineering, and navigation modules. By harnessing the capabilities of Large Language Models (LLMs), DynaCon not only understands patterns within given numeric series but also excels at categorizing objects into matched spaces. This facilitates dynamic path planner imbued with contextual awareness.
We validated the effectiveness of DynaCon through an experiment where a robot successfully navigated to its goal using reasoning. Source code and experiment videos for this work can be found at: https://sites.google.com/view/dynacon.",,arXiv,['cs.ro'],,
618,cyber sentinel exploring conversational agents in streamlining security tasks with gpt4,"['Mehrdad Kaheh', 'Danial Khosh Kholgh', 'Panos Kostakos']",http://arxiv.org/pdf/2309.16422v1.pdf,2023-09-28,," In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system that is effectively capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive/reactive security actions when instructed by the user. Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that not only does this framework enhance the system's transparency (Explainable AI) but also streamlines the decision-making process and responding to threats (Actionable AI), therefore marking a significant advancement in the realm of cybersecurity communication.",,arXiv,['cs.cr'],,
619,large language models for propaganda detection,"['Kilian Sprenkamp', 'Daniel Gordon Jones', 'Liudmila Zavolokina']",http://arxiv.org/pdf/2310.06422v2.pdf,2023-10-10,," The prevalence of propaganda in our digital society poses a challenge to societal harmony and the dissemination of truth. Detecting propaganda through NLP in text is challenging due to subtle manipulation techniques and contextual dependencies. To address this issue, we investigate the effectiveness of modern Large Language Models (LLMs) such as GPT-3 and GPT-4 for propaganda detection. We conduct experiments using the SemEval-2020 task 11 dataset, which features news articles labeled with 14 propaganda techniques as a multi-label classification problem. Five variations of GPT-3 and GPT-4 are employed, incorporating various prompt engineering and fine-tuning strategies across the different models. We evaluate the models' performance by assessing metrics such as $F1$ score, $Precision$, and $Recall$, comparing the results with the current state-of-the-art approach using RoBERTa. Our findings demonstrate that GPT-4 achieves comparable results to the current state-of-the-art. Further, this study analyzes the potential and challenges of LLMs in complex tasks like propaganda detection.",,arXiv,"['cs.cl', 'cs.ai']",,
620,gptutor an opensource ai pair programming tool alternative to copilot,"['Eason Chen', 'Ray Huang', 'Justa Liang', 'Damien Chen', 'Pierce Hung']",http://arxiv.org/pdf/2310.13896v3.pdf,2023-10-21,," This paper presents the latest progress of GPTutor: a ChatGPT-powered programming tool extension in Visual Studio Code.
The emergence of Large Language Models (LLMs) has improved software development efficiency, but their performance can be hindered by training data limitations and prompt design issues. Existing LLM development tools often operate as black boxes, with users unable to view the prompts used and unable to improve performance by correcting prompts when errors occur. To address the aforementioned issues, GPTutor was introduced as an open-source AI pair programming tool, offering an alternative to Copilot. GPTutor empowers users to customize prompts for various programming languages and scenarios, with support for 120+ human languages and 50+ programming languages. Users can fine-tune prompts to correct the errors from LLM for precision and efficient code generation. At the end of the paper, we underscore GPTutor's potential through examples, including demonstrating its proficiency in interpreting and generating Sui-Move, a newly introduced smart contract language, using prompt engineering.",,arXiv,['cs.hc'],,
621,large language models for aspectbased sentiment analysis,"['Paul F. Simmering', 'Paavo Huoviala']",http://arxiv.org/pdf/2310.18025v1.pdf,2023-10-27,," Large language models (LLMs) offer unprecedented text completion capabilities. As general models, they can fulfill a wide range of roles, including those of more specialized models. We assess the performance of GPT-4 and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art F1 score of 83.8 on the joint aspect term extraction and polarity classification task of the SemEval-2014 Task 4, improving upon InstructABSA [@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000 times more model parameters and thus increased inference cost. We discuss the cost-performance trade-offs of different models, and analyze the typical errors that they make. Our results also indicate that detailed prompts improve performance in zero-shot and few-shot settings but are not necessary for fine-tuned models. This evidence is relevant for practitioners that are faced with the choice of prompt engineering versus fine-tuning when using LLMs for ABSA.",,arXiv,"['cs.cl', 'cs.ai']",,
622,can large language models capture public opinion about global warming an empirical assessment of algorithmic fidelity and bias,"['S. Lee', 'T. Q. Peng', 'M. H. Goldberg', 'S. A. Rosenthal', 'J. E. Kotcher', 'E. W. Maibach', 'A. Leiserowitz']",http://arxiv.org/pdf/2311.00217v1.pdf,2023-11-01,," Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans.
While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations.",,arXiv,"['cs.ai', 'cs.cy']",,
623,noisy exemplars make large language models more robust a domainagnostic behavioral analysis,"['Hongyi Zheng', 'Abulhair Saparov']",http://arxiv.org/pdf/2311.00258v1.pdf,2023-11-01,," Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic perturbations. We include perturbations at multiple levels of abstractions (e.g. lexical perturbations such as typos, and semantic perturbations such as the inclusion of intermediate reasoning steps in the questions) to conduct behavioral analysis on the LLMs. Throughout our experiments, we find that models are more sensitive to certain perturbations such as replacing words with their synonyms. We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.",,arXiv,"['cs.cl', 'cs.lg']",,
624,instruction distillation makes large language models efficient zeroshot rankers,"['Weiwei Sun', 'Zheng Chen', 'Xinyu Ma', 'Lingyong Yan', 'Shuaiqiang Wang', 'Pengjie Ren', 'Zhumin Chen', 'Dawei Yin', 'Zhaochun Ren']",http://arxiv.org/pdf/2311.01555v1.pdf,2023-11-02,," Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as zero-shot relevance rankers. The typical approach involves making comparisons between pairs or lists of documents. Although effective, these listwise and pairwise methods are not efficient and also heavily rely on intricate prompt engineering. To tackle this problem, we introduce a novel instruction distillation method. The key idea is to distill the pairwise ranking ability of open-sourced LLMs to a simpler but more efficient pointwise ranking. Specifically, given the same LLM, we first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions. Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that instruction distillation can improve efficiency by 10 to 100x and also enhance the ranking performance of LLMs. Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT.",,arXiv,"['cs.ir', 'cs.cl']",,
625,indicative summarization of long discussions,"['Shahbaz Syed', 'Dominik Schwabe', 'Khalid Al-Khatib', 'Martin Potthast']",http://arxiv.org/pdf/2311.01882v1.pdf,2023-11-03,," Online forums encourage the exchange and discussion of different stances on many topics. Not only do they provide an opportunity to present one's own arguments, but may also gather a broad cross-section of others' arguments. However, the resulting long discussions are difficult to overview.
This paper presents a novel unsupervised approach using large language models (LLMs) to generating indicative summaries for long discussions that basically serve as tables of contents. Our approach first clusters argument sentences, generates cluster labels as abstractive summaries, and classifies the generated cluster labels into argumentation frames resulting in a two-level summary. Based on an extensively optimized prompt engineering approach, we evaluate 19~LLMs for generative cluster labeling and frame classification. To evaluate the usefulness of our indicative summaries, we conduct a purpose-driven user study via a new visual interface called Discussion Explorer: It shows that our proposed indicative summaries serve as a convenient navigation tool to explore long discussions.",,arXiv,['cs.cl'],,
626,automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models,"['Jake Chanenson', 'Madison Pickering', 'Noah Apthorpe']",http://arxiv.org/pdf/2311.02192v1.pdf,2023-11-03,," Identifying contextual integrity (CI) and governing knowledge commons (GKC) parameters in privacy policy texts can facilitate normative privacy analysis. However, GKC-CI annotation has heretofore required manual or crowdsourced effort. This paper demonstrates that high-accuracy GKC-CI parameter annotation of privacy policies can be performed automatically using large language models. We fine-tune 18 open-source and proprietary models on 21,588 GKC-CI annotations from 16 ground truth privacy policies. Our best-performing model (fine-tuned GPT-3.5 Turbo with prompt engineering) has an accuracy of 86%, exceeding the performance of prior crowdsourcing approaches despite the complexity of privacy policy texts and the nuance of the GKC-CI annotation task. We apply our best-performing model to privacy policies from 164 popular online services, demonstrating the effectiveness of scaling GKC-CI annotation for data exploration. We make all annotated policies as well as the training data and scripts needed to fine-tune our best-performing model publicly available for future research.",,arXiv,"['cs.cy', 'cs.cl', 'cs.lg']",,
627,requirements engineering using generative ai prompts and prompting patterns,"['Krishna Ronanki', 'Beatriz Cabrero-Daniel', 'Jennifer Horkoff', 'Christian Berger']",http://arxiv.org/pdf/2311.03832v1.pdf,2023-11-07,," [Context]: Companies are increasingly recognizing the importance of automating Requirements Engineering (RE) tasks due to their resource-intensive nature. The advent of GenAI has made these tasks more amenable to automation, thanks to its ability to understand and interpret context effectively. [Problem]: However, in the context of GenAI, prompt engineering is a critical factor for success. Despite this, we currently lack tools and methods to systematically assess and determine the most effective prompt patterns to employ for a particular RE task. [Method]: Two tasks related to requirements, specifically requirement classification and tracing, were automated using the GPT-3.5 turbo API. The performance evaluation involved assessing various prompts created using 5 prompt patterns and implemented programmatically to perform the selected RE tasks, focusing on metrics such as precision, recall, accuracy, and F-Score. [Results]: This paper evaluates the effectiveness of the 5 prompt patterns' ability to make GPT-3.5 turbo perform the selected RE tasks and offers recommendations on which prompt pattern to use for a specific RE task.
Additionally, it also provides an evaluation framework as a reference for researchers and practitioners who want to evaluate different prompt patterns for different RE tasks.",,arXiv,['cs.se'],,
628,actionclip a new paradigm for video action recognition,"['Mengmeng Wang', 'Jiazheng Xing', 'Yong Liu']",http://arxiv.org/pdf/2109.08472v1.pdf,2021-09-17,," The canonical approach to video action recognition dictates a neural model to do a classic and standard 1-of-N majority vote task. They are trained to predict a fixed set of predefined categories, limiting their transferable ability on new datasets with unseen concepts. In this paper, we provide a new perspective on action recognition by attaching importance to the semantic information of label texts rather than simply mapping them into numbers. Specifically, we model this task as a video-text matching problem within a multimodal learning framework, which strengthens the video representation with more semantic language supervision and enables our model to do zero-shot action recognition without any further labeled data or parameters requirements. Moreover, to handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub ""pre-train, prompt and fine-tune"". This paradigm first learns powerful representations from pre-training on a large amount of web image-text or video-text data. Then it makes the action recognition task to act more like pre-training problems via prompt engineering. Finally, it end-to-end fine-tunes on target datasets to obtain strong performance. We give an instantiation of the new paradigm, ActionCLIP, which not only has superior and flexible zero-shot/few-shot transfer ability but also reaches a top performance on general action recognition task, achieving 83.8% top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone. Code is available at https://github.com/sallymmx/ActionCLIP.git",,arXiv,['cs.cv'],,
629,learning to prompt for openvocabulary object detection with visionlanguage model,"['Yu Du', 'Fangyun Wei', 'Zihe Zhang', 'Miaojing Shi', 'Yue Gao', 'Guoqi Li']",http://arxiv.org/pdf/2203.14940v1.pdf,2022-03-28,," Recently, vision-language pre-training shows great potential in open-vocabulary object detection, where detectors trained on base classes are devised for detecting new classes. The class text embedding is firstly generated by feeding prompts to the text encoder of a pre-trained vision-language model. It is then used as the region classifier to supervise the training of a detector. The key element that leads to the success of this model is the proper prompt, which requires careful words tuning and ingenious design. To avoid laborious prompt engineering, there are some prompt representation learning methods being proposed for the image classification task, which however can only be sub-optimal solutions when applied to the detection task. In this paper, we introduce a novel method, detection prompt (DetPro), to learn continuous prompt representations for open-vocabulary object detection based on the pre-trained vision-language model. Different from the previous classification-oriented methods, DetPro has two highlights: 1) a background interpretation scheme to include the proposals in image background into the prompt training; 2) a context grading scheme to separate proposals in image foreground for tailored prompt training.
We assemble DetPro with ViLD, a recent state-of-the-art open-world object detector, and conduct experiments on the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365 datasets. Experimental results show that our DetPro outperforms the baseline ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the novel classes of LVIS. Code and models are available at https://github.com/dyabel/detpro.",,arXiv,['cs.cv'],,
630,no token left behind explainabilityaided image classification and generation,"['Roni Paiss', 'Hila Chefer', 'Lior Wolf']",http://arxiv.org/pdf/2204.04908v2.pdf,2022-04-11,," The application of zero-shot learning in computer vision has been revolutionized by the use of image-text matching models. The most notable example, CLIP, has been widely used for both zero-shot classification and guiding generative models with a text prompt. However, the zero-shot use of CLIP is unstable with respect to the phrasing of the input text, making it necessary to carefully engineer the prompts used. We find that this instability stems from a selective similarity score, which is based only on a subset of the semantically meaningful input tokens. To mitigate it, we present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input, in addition to employing the CLIP similarity loss used in previous works. When applied to one-shot classification through prompt engineering, our method yields an improvement in the recognition rate, without additional training or fine-tuning. Additionally, we show that CLIP guidance of generative models using our method significantly improves the generated images. Finally, we demonstrate a novel use of CLIP guidance for text-based image generation with spatial conditioning on object location, by requiring the image explainability heatmap for each object to be confined to a pre-determined bounding box.",,arXiv,['cs.cv'],,
631,on measuring social biases in promptbased multitask learning,"['Afra Feyza Akyürek', 'Sejin Paik', 'Muhammed Yusuf Kocyigit', 'Seda Akbiyik', 'Şerife Leman Runyun', 'Derry Wijaya']",http://arxiv.org/pdf/2205.11605v1.pdf,2022-05-23,," Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts, can generalize into novel forms of language and handle novel tasks. A large body of work within prompt engineering attempts to understand the effects of input forms and prompts in achieving superior performance. We consider an alternative measure and inquire whether the way in which an input is encoded affects social biases promoted in outputs. In this paper, we study T0, a large-scale multi-task text-to-text language model trained using prompt-based learning. We consider two different forms of semantically equivalent inputs: question-answer format and premise-hypothesis format. We use an existing bias benchmark for the former BBQ and create the first bias benchmark in natural language inference BBNLI with hand-written hypotheses while also converting each benchmark into the other form. The results on two benchmarks suggest that given two different formulations of essentially the same input, T0 conspicuously acts more biased in question answering form, which is seen during training, compared to premise-hypothesis form which is unlike its training examples.
Code and data are released under https://github.com/feyzaakyurek/bbnli.",,arXiv,"['cs.cl', 'cs.cy']",,
632,ordinalclip learning rank prompts for languageguided ordinal regression,"['Wanhua Li', 'Xiaoke Huang', 'Zheng Zhu', 'Yansong Tang', 'Xiu Li', 'Jie Zhou', 'Jiwen Lu']",http://arxiv.org/pdf/2206.02338v2.pdf,2022-06-06,," This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are easy to overfit and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. While prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can only save the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.",,arXiv,['cs.cv'],,
633,unsupervised hashing with semantic concept mining,"['Rong-Cheng Tu', 'Xian-Ling Mao', 'Kevin Qinghong Lin', 'Chengfei Cai', 'Weize Qin', 'Hongfa Wang', 'Wei Wei', 'Heyan Huang']",http://arxiv.org/pdf/2209.11475v1.pdf,2022-09-23,," Recently, to improve the unsupervised image retrieval performance, plenty of unsupervised hashing methods have been proposed by designing a semantic similarity matrix, which is based on the similarities between image features extracted by a pre-trained CNN model. However, most of these methods tend to ignore high-level abstract semantic concepts contained in images. Intuitively, concepts play an important role in calculating the similarity among images. In real-world scenarios, each image is associated with some concepts, and the similarity between two images will be larger if they share more identical concepts. Inspired by the above intuition, in this work, we propose a novel Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which leverages a VLP model to construct a high-quality similarity matrix. Specifically, a set of randomly chosen concepts is first collected. Then, by employing a vision-language pretraining (VLP) model with the prompt engineering which has shown strong power in visual representation learning, the set of concepts is denoised according to the training images. Next, the proposed method UHSCM applies the VLP model with prompting again to mine the concept distribution of each image and construct a high-quality semantic similarity matrix based on the mined concept distributions.
Finally, with the semantic similarity matrix as guiding information, a novel hashing loss with a modified contrastive loss based regularization item is proposed to optimize the hashing network. Extensive experiments on three benchmark datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.",,arXiv,"['cs.cv', 'cs.ir']",,
634,"chat2vis generating data visualisations via natural language using chatgpt, codex and gpt3 large language models","['Paula Maddigan', 'Teo Susnjak']",http://arxiv.org/pdf/2302.02094v2.pdf,2023-02-04,," The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques. However, the implementation of workable NLIs has always been challenging due to the inherent ambiguity of natural language, as well as in consequence of unclear and poorly written user queries which pose problems for existing language models in discerning user intent. Instead of pursuing the usual path of developing new iterations of language models, this study uniquely proposes leveraging the advancements in pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly into code for appropriate visualisations. This paper presents a novel system, Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates how, with effective prompt engineering, the complex problem of language understanding can be solved more efficiently, resulting in simpler and more accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs together with the proposed prompts offer a reliable approach to rendering visualisations from natural language queries, even when queries are highly misspecified and underspecified. This solution also presents a significant reduction in costs for the development of NLI systems, while attaining greater visualisation inference abilities compared to traditional NLP approaches that use hand-crafted grammar rules and tailored models. This study also presents how LLM prompts can be constructed in a way that preserves data security and privacy while being generalisable to different datasets. This work compares the performance of GPT-3, Codex and ChatGPT across a number of case studies and contrasts the performances with prior studies.",,arXiv,['cs.hc'],,
635,prompt stealing attacks against texttoimage generation models,"['Xinyue Shen', 'Yiting Qu', 'Michael Backes', 'Yang Zhang']",http://arxiv.org/pdf/2302.09923v1.pdf,2023-02-20,," Text-to-Image generation models have revolutionized the artwork design process and enabled anyone to create high-quality images by entering text descriptions called prompts. Creating a high-quality prompt that consists of a subject and several modifiers can be time-consuming and costly. In consequence, a trend of trading high-quality prompts on specialized marketplaces has emerged. In this paper, we propose a novel attack, namely prompt stealing attack, which aims to steal prompts from generated images by text-to-image generation models. Successful prompt stealing attacks direct violate the intellectual property and privacy of prompt engineers and also jeopardize the business model of prompt trading marketplaces. We first perform a large-scale analysis on a dataset collected by ourselves and show that a successful prompt stealing attack should consider a prompt's subject as well as its modifiers.
We then propose the first learning-based prompt stealing attack, PromptStealer, and demonstrate its superiority over two baseline methods quantitatively and qualitatively. We also make some initial attempts to defend PromptStealer. In general, our study uncovers a new attack surface in the ecosystem created by the popular text-to-image generation models. We hope our results can help to mitigate the threat. To facilitate research in this field, we will share our dataset and code with the community.",,arXiv,"['cs.cr', 'cs.lg']",,
636,extracting accurate materials data from research papers with conversational language models and prompt engineering,"['Maciej P. Polak', 'Dane Morgan']",http://arxiv.org/pdf/2303.05352v2.pdf,2023-03-07,," There has been a growing effort to replace hand extraction of data from research papers with automated data extraction based on natural language processing, language models, and recently, large language models (LLMs). Although these methods enable efficient extraction of data from large sets of research papers, they require a significant amount of up-front effort, expertise, and coding. In this work we propose the ChatExtract method that can fully automate very accurate data extraction with minimal initial effort and background, using an advanced conversational LLM. ChatExtract consists of a set of engineered prompts applied to a conversational LLM that both identify sentences with data, extract that data, and assure the data's correctness through a series of follow-up questions. These follow-up questions largely overcome known issues with LLMs providing factually inaccurate responses. ChatExtract can be applied with any conversational LLMs and yields very high quality data extraction. In tests on materials data we find precision and recall both close to 90% from the best conversational LLMs, like ChatGPT-4. We demonstrate that the exceptional performance is enabled by the information retention in a conversational model combined with purposeful redundancy and introducing uncertainty through follow-up prompts. These results suggest that approaches similar to ChatExtract, due to their simplicity, transferability, and accuracy are likely to become powerful tools for data extraction in the near future. Finally, databases for critical cooling rates of metallic glasses and yield strengths of high entropy alloys are developed using ChatExtract.",,arXiv,"['cs.cl', 'cond-mat.mtrl-sci']",,
637,ten quick tips for harnessing the power of chatgptgpt4 in computational biology,"['Tiago Lubiana', 'Rafael Lopes', 'Pedro Medeiros', 'Juan Carlo Silva', 'Andre Nicolau Aquime Goncalves', 'Vinicius Maracaja-Coutinho', 'Helder I Nakaya']",http://arxiv.org/pdf/2303.16429v1.pdf,2023-03-29,," The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in the scientific community. ChatGPT is a general-purpose chatbot powered by large language models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerous fields, including computational biology. In this article, we offer ten tips based on our experience with ChatGPT to assist computational biologists in optimizing their workflows. We have collected relevant prompts and reviewed the nascent literature in the field, compiling tips we project to remain pertinent for future ChatGPT and LLM iterations, ranging from code refactoring to scientific writing to prompt engineering. We hope our work will help bioinformaticians to complement their workflows while staying aware of the various implications of using this technology.
Additionally, to track new and creative applications for bioinformatics tools such as ChatGPT, we have established a GitHub repository at https://github.com/csbl-br/awesome-compbio-chatgpt. Our belief is that ethical adherence to ChatGPT and other LLMs will increase the efficiency of computational biologists, ultimately advancing the pace of scientific discovery in the life sciences.",,arXiv,"['q-bio.ot', '92-04']",,
638,pair programming with large language models for sampling and estimation of copulas,['Jan Górecki'],http://arxiv.org/pdf/2303.18116v1.pdf,2023-03-31,," Without writing a single line of code by a human, an example Monte Carlo simulation based application for stochastic dependence modeling with copulas is developed using a state-of-the-art large language model (LLM) fine-tuned for conversations. This includes interaction with ChatGPT in natural language and using mathematical formalism, which, under careful supervision by a human-expert, led to producing a working code in MATLAB, Python and R for sampling from a given copula model, evaluation of the model's density, performing maximum likelihood estimation, optimizing the code for parallel computing for CPUs as well as for GPUs, and visualization of the computed results. In contrast to other emerging studies that assess the accuracy of LLMs like ChatGPT on tasks from a selected area, this work rather investigates ways how to achieve a successful solution of a standard statistical task in a collaboration of a human-expert and artificial intelligence (AI). Particularly, through careful prompt engineering, we separate successful solutions generated by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related pros and cons. It is demonstrated that if the typical pitfalls are avoided, we can substantially benefit from collaborating with an AI partner. For example, we show that if ChatGPT is not able to provide a correct solution due to a lack of or incorrect knowledge, the human-expert can feed it with the correct knowledge, e.g., in the form of mathematical theorems and formulas, and make it to apply the gained knowledge in order to provide a solution that is correct. Such ability presents an attractive opportunity to achieve a programmed solution even for users with rather limited knowledge of programming techniques.",,arXiv,"['cs.cl', 'stat.co', '65c60, 68n19, 68t50']",,
639,lowcode llm visual programming over llms,"['Yuzhe Cai', 'Shaoguang Mao', 'Wenshan Wu', 'Zehua Wang', 'Yaobo Liang', 'Tao Ge', 'Chenfei Wu', 'Wang You', 'Ting Song', 'Yan Xia', 'Jonathan Tien', 'Nan Duan']",http://arxiv.org/pdf/2304.08103v2.pdf,2023-04-17,," Effectively utilizing LLMs for complex tasks is challenging, often involving a time-consuming and uncontrollable prompt engineering process. This paper introduces a novel human-LLM interaction framework, Low-code LLM. It incorporates six types of simple low-code visual programming interactions, all supported by clicking, dragging, or text editing, to achieve more controllable and stable responses. Through visual interaction with a graphical user interface, users can incorporate their ideas into the workflow without writing trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM that designs a structured planning workflow for complex tasks, which can be correspondingly edited and confirmed by users through low-code visual programming operations, and an Executing LLM that generates responses following the user-confirmed workflow.
We highlight three advantages of the low-code LLM: controllable generation results, user-friendly human-LLM interaction, and broadly applicable scenarios. We demonstrate its benefits using four typical applications. By introducing this approach, we aim to bridge the gap between humans and LLMs, enabling more effective and efficient utilization of LLMs for complex tasks. Our system will be soon publicly available at LowCodeLLM.",,arXiv,"['cs.cl', 'cs.hc']",,
640,is chatgpt the ultimate programming assistant how far is it,"['Haoye Tian', 'Weiqi Lu', 'Tsz On Li', 'Xunzhu Tang', 'Shing-Chi Cheung', 'Jacques Klein', 'Tegawendé F. Bissyandé']",http://arxiv.org/pdf/2304.11938v2.pdf,2023-04-24,," Recently, the ChatGPT LLM has received great attention: it can be used as a bot for discussing source code, prompting it to suggest changes, provide descriptions or even generate code. Typical demonstrations generally focus on existing benchmarks, which may have been used in model training (i.e., data leakage). To assess the feasibility of using an LLM as a useful assistant bot for programmers, we must assess its realistic capabilities on unseen problems as well as its capabilities on various tasks. In this paper, we present an empirical study of ChatGPT's potential as a fully automated programming assistant, focusing on the tasks of code generation, program repair, and code summariziation. The study investigates ChatGPT's performance on common programming problems and compares it with state-of-the-art approaches on two benchmarks. Among several findings, our study shows that ChatGPT is effective in dealing with common programming problems. However, our experiments also reveal limitations in terms of its attention span: detailed descriptions will constrain the focus of ChatGPT and prevent it from leveraging its vast knowledge to solve the actual problem. Surprisingly, we have identified the ability of ChatGPT to reason the original intention of the code. We expect future work to build on this insight for dealing with the open question of the oracle problem. Our findings contribute interesting insights to the development of LLMs for programming assistance, notably by demonstrating the importance of prompt engineering, and providing a better understanding of ChatGPT's practical applications for software engineering.",,arXiv,"['cs.se', 'cs.ai']",,
641,framing the newsfrom human perception to large language model inferences,"['David Alonso del Barrio', 'Daniel Gatica-Perez']",http://arxiv.org/pdf/2304.14456v1.pdf,2023-04-27,," Identifying the frames of news is important to understand the articles' vision, intention, message to be conveyed, and which aspects of the news are emphasized. Framing is a widely studied concept in journalism, and has emerged as a new topic in computing, with the potential to automate processes and facilitate the work of journalism professionals. In this paper, we study this issue with articles related to the Covid-19 anti-vaccine movement. First, to understand the perspectives used to treat this theme, we developed a protocol for human labeling of frames for 1786 headlines of No-Vax movement articles of European newspapers from 5 countries. Headlines are key units in the written press, and worth of analysis as many people only read headlines (or use them to guide their decision for further reading.)
Second, considering advances in Natural Language Processing (NLP) with large language models, we investigated two approaches for frame inference of news headlines: first with a GPT-3.5 fine-tuning approach, and second with GPT-3.5 prompt-engineering. Our work contributes to the study and analysis of the performance that these models have to facilitate journalistic tasks like classification of frames, while understanding whether the models are able to replicate human perception in the identification of these frames.",,arXiv,"['cs.cl', 'cs.hc']",,
642,sensitivity and robustness of large language models to prompt template in japanese text classification tasks,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2305.08714v2.pdf,2023-05-15,," Prompt engineering relevance research has seen a notable surge in recent years, primarily driven by advancements in pre-trained language models and large language models. However, a critical issue has been identified within this domain: the inadequate of sensitivity and robustness of these models towards Prompt Templates, particularly in lesser-studied languages such as Japanese. This paper explores this issue through a comprehensive evaluation of several representative Large Language Models (LLMs) and a widely-utilized pre-trained model(PLM). These models are scrutinized using a benchmark dataset in Japanese, with the aim to assess and analyze the performance of the current multilingual models in this context. Our experimental results reveal startling discrepancies. A simple modification in the sentence structure of the Prompt Template led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44. This observation underscores the fact that even the highly performance GPT-4 model encounters significant stability issues when dealing with diverse Japanese prompt templates, rendering the consistency of the model's output results questionable. In light of these findings, we conclude by proposing potential research trajectories to further enhance the development and performance of Large Language Models in their current stage.",,arXiv,"['cs.cl', 'cs.ai']",,
643,game of tones faculty detection of gpt4 generated content in university assessments,"['Mike Perkins', 'Jasper Roe', 'Darius Postma', 'James McGaughran', 'Don Hickerson']",http://arxiv.org/pdf/2305.18081v1.pdf,2023-05-29,," This study explores the robustness of university assessments against the use of Open AI's Generative Pre-Trained Transformer 4 (GPT-4) generated content and evaluates the ability of academic staff to detect its use when supported by the Turnitin Artificial Intelligence (AI) detection tool. The research involved twenty-two GPT-4 generated submissions being created and included in the assessment process to be marked by fifteen different faculty members. The study reveals that although the detection tool identified 91% of the experimental submissions as containing some AI-generated content, the total detected content was only 54.8%. This suggests that the use of adversarial techniques regarding prompt engineering is an effective method in evading AI detection tools and highlights that improvements to AI detection software are needed. Using the Turnitin AI detect tool, faculty reported 54.5% of the experimental submissions to the academic misconduct process, suggesting the need for increased awareness and training into these tools. Genuine submissions received a mean score of 54.4, whereas AI-generated content scored 52.3, indicating the comparable performance of GPT-4 in real-life situations.
Recommendations include adjusting assessment strategies to make them more resistant to the use of AI tools, using AI-inclusive assessment where possible, and providing comprehensive training programs for faculty and students. This research contributes to understanding the relationship between AI-generated content and academic assessment, urging further investigation to preserve academic integrity.",,arXiv,"['cs.cy', 'cs.ai', 'k.4']",, +644,a survey on segment anything model (sam) vision foundation model meets prompt engineering,"['Chaoning Zhang', 'Fachrina Dewi Puspitasari', 'Sheng Zheng', 'Chenghao Li', 'Yu Qiao', 'Taegoo Kang', 'Xinru Shan', 'Chenshuang Zhang', 'Caiyan Qin', 'Francois Rameau', 'Lik-Hang Lee', 'Sung-Ho Bae', 'Choong Seon Hong']",http://arxiv.org/pdf/2306.06211v3.pdf,2023-05-12,," Segment anything model (SAM) developed by Meta AI Research has recently attracted significant attention. Trained on a large segmentation dataset of over 1 billion masks, SAM is capable of segmenting any object on a certain image. In the original SAM work, the authors turned to zero-shot transfer tasks (like edge detection) for evaluating the performance of SAM. Recently, numerous works have attempted to investigate the performance of SAM in various scenarios to recognize and segment objects. Moreover, numerous projects have emerged to show the versatility of SAM as a foundation model by combining it with other models, like Grounding DINO, Stable Diffusion, ChatGPT, etc. With the relevant papers and projects increasing exponentially, it is challenging for the readers to catch up with the development of SAM. To this end, this work conducts the first yet comprehensive survey on SAM. This is an ongoing project and we intend to update the manuscript on a regular basis. Therefore, readers are welcome to contact us if they complete new works related to SAM so that we can include them in our next version.",,arXiv,['cs.cv'],, +645,the economic tradeoffs of large language models a case study,"['Kristen Howell', 'Gwen Christian', 'Pavel Fomitchov', 'Gitit Kehat', 'Julianne Marzulla', 'Leanne Rolston', 'Jadin Tredup', 'Ilana Zimmerman', 'Ethan Selfridge', 'Joseph Bradley']",http://arxiv.org/pdf/2306.07402v1.pdf,2023-06-08,," Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents.
We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.",,arXiv,"['cs.cl', 'cs.ai']",, +646,do you still need a manual smart contract audit,"['Isaac David', 'Liyi Zhou', 'Kaihua Qin', 'Dawn Song', 'Lorenzo Cavallaro', 'Arthur Gervais']",http://arxiv.org/pdf/2306.12338v2.pdf,2023-06-21,," We investigate the feasibility of employing large language models (LLMs) for conducting the security audit of smart contracts, a traditionally time-consuming and costly process. Our research focuses on the optimization of prompt engineering for enhanced security analysis, and we evaluate the performance and accuracy of LLMs using a benchmark dataset comprising 52 Decentralized Finance (DeFi) smart contracts that have previously been compromised. Our findings reveal that, when applied to vulnerable contracts, both GPT-4 and Claude models correctly identify the vulnerability type in 40% of the cases. However, these models also demonstrate a high false positive rate, necessitating continued involvement from manual auditors. The LLMs tested outperform a random model by 20% in terms of F1-score. To ensure the integrity of our study, we conduct mutation testing on five newly developed and ostensibly secure smart contracts, into which we manually insert two and 15 vulnerabilities each. This testing yielded a remarkable best-case 78.7% true positive rate for the GPT-4-32k model. We tested both, asking the models to perform a binary classification on whether a contract is vulnerable, and a non-binary prompt. We also examined the influence of model temperature variations and context length on the LLM's performance. Despite the potential for many further enhancements, this work lays the groundwork for a more efficient and economical approach to smart contract security audits.",,arXiv,['cs.cr'],, +647,comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'Kenneth R. Koedinger', 'Vincent Aleven']",http://arxiv.org/pdf/2307.02018v1.pdf,2023-07-05,," Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise.
However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, +648,"right to be forgotten in the era of large language models implications, challenges, and solutions","['Dawen Zhang', 'Pamela Finckenberg-Broman', 'Thong Hoang', 'Shidong Pan', 'Zhenchang Xing', 'Mark Staples', 'Xiwei Xu']",http://arxiv.org/pdf/2307.03941v3.pdf,2023-07-08,," The Right to be Forgotten (RTBF) was first established as the result of the ruling of Google Spain SL, Google Inc. v AEPD, Mario Costeja González, and was later included as the Right to Erasure under the General Data Protection Regulation (GDPR) of European Union to allow individuals the right to request personal data be deleted by organizations. Specifically for search engines, individuals can send requests to organizations to exclude their information from the query results. It was a significant emergent right as the result of the evolution of technology. With the recent development of Large Language Models (LLMs) and their use in chatbots, LLM-enabled software systems have become popular. But they are not excluded from the RTBF. Compared with the indexing approach used by search engines, LLMs store, and process information in a completely different way. This poses new challenges for compliance with the RTBF. In this paper, we explore these challenges and provide our insights on how to implement technical solutions for the RTBF, including the use of differential privacy, machine unlearning, model editing, and prompt engineering. With the rapid advancement of AI and the increasing need of regulating this powerful technology, learning from the case of RTBF can provide valuable lessons for technical practitioners, legal experts, organizations, and authorities.",,arXiv,"['cs.cy', 'cs.ai', 'cs.cl']",, +649,gpt3 models are fewshot financial reasoners,"['Raul Salles de Padua', 'Imran Qureshi', 'Mustafa U. Karakaplan']",http://arxiv.org/pdf/2307.13617v2.pdf,2023-07-25,," Financial analysis is an important tool for evaluating company performance. Practitioners work to answer financial questions to make profitable investment decisions, and use advanced quantitative analyses to do so. As a result, Financial Question Answering (QA) is a question answering task that requires deep reasoning about numbers. Furthermore, it is unknown how well pre-trained language models can reason in the financial domain. The current state-of-the-art requires a retriever to collect relevant facts about the financial question from the text and a generator to produce a valid financial program and a final answer. However, recently large language models like GPT-3 have achieved state-of-the-art performance on wide variety of tasks with just a few shot examples. We run several experiments with GPT-3 and find that a separate retrieval model and logic engine continue to be essential components to achieving SOTA performance in this task, particularly due to the precise nature of financial questions and the complex information stored in financial documents.
With this understanding, our refined prompt-engineering approach on GPT-3 achieves near SOTA accuracy without any fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, +650,evaluating chatgpt textmining of clinical records for obesity monitoring,"['Ivo S. Fins', 'Heather Davies', 'Sean Farrell', 'Jose R. Torres', 'Gina Pinchbeck', 'Alan D. Radford', 'Peter-John Noble']",http://arxiv.org/pdf/2308.01666v1.pdf,2023-08-03,," Background: Veterinary clinical narratives remain a largely untapped resource for addressing complex diseases. Here we compare the ability of a large language model (ChatGPT) and a previously developed regular expression (RegexT) to identify overweight body condition scores (BCS) in veterinary narratives. Methods: BCS values were extracted from 4,415 anonymised clinical narratives using either RegexT or by appending the narrative to a prompt sent to ChatGPT coercing the model to return the BCS information. Data were manually reviewed for comparison. Results: The precision of RegexT was higher (100%, 95% CI 94.81-100%) than the ChatGPT (89.3%; 95% CI 82.75-93.64%). However, the recall of ChatGPT (100%. 95% CI 96.18-100%) was considerably higher than that of RegexT (72.6%, 95% CI 63.92-79.94%). Limitations: Subtle prompt engineering is needed to improve ChatGPT output. Conclusions: Large language models create diverse opportunities and, whilst complex, present an intuitive interface to information but require careful implementation to avoid unpredictable errors.",,arXiv,"['cs.ir', 'cs.cl']",, +651,large language models in fault localisation,"['Yonghao Wu', 'Zheng Li', 'Jie M. Zhang', 'Mike Papadakis', 'Mark Harman', 'Yong Liu']",http://arxiv.org/pdf/2308.15276v3.pdf,2023-08-29,," Large Language Models (LLMs) have shown promise in multiple software engineering tasks including code generation, program repair, code summarisation, and test generation. Fault localisation is instrumental in enabling automated debugging and repair of programs and was prominently featured as a highlight during the launch event of ChatGPT-4. Nevertheless, the performance of LLMs compared to state-of-the-art methods, as well as the impact of prompt design and context length on their efficacy, remains unclear. To fill this gap, this paper presents an in-depth investigation into the capability of ChatGPT-3.5 and ChatGPT-4, the two state-of-the-art LLMs, on fault localisation. Using the widely-adopted large-scale Defects4J dataset, we compare the two LLMs with the existing fault localisation techniques. We also investigate the consistency of LLMs in fault localisation, as well as how prompt engineering and the length of code context affect the fault localisation effectiveness. Our findings demonstrate that within function-level context, ChatGPT-4 outperforms all the existing fault localisation methods. Additional error logs can further improve ChatGPT models' localisation accuracy and consistency, with an average 46.9% higher accuracy over the state-of-the-art baseline SmartFL on the Defects4J dataset in terms of TOP-1 metric. However, when the code context of the Defects4J dataset expands to the class-level, ChatGPT-4's performance suffers a significant drop, with 49.9% lower accuracy than SmartFL under TOP-1 metric. These observations indicate that although ChatGPT can effectively localise faults under specific conditions, limitations are evident.
Further research is needed to fully harness the potential of LLMs like ChatGPT for practical fault localisation applications.",,arXiv,['cs.se'],, +652,is gpt4 a good trader,['Bingzhe Wu'],http://arxiv.org/pdf/2309.10982v1.pdf,2023-09-20,," Recently, large language models (LLMs), particularly GPT-4, have demonstrated significant capabilities in various planning and reasoning tasks \cite{cheng2023gpt4,bubeck2023sparks}. Motivated by these advancements, there has been a surge of interest among researchers to harness the capabilities of GPT-4 for the automated design of quantitative factors that do not overlap with existing factor libraries, with an aspiration to achieve alpha returns \cite{webpagequant}. In contrast to these work, this study aims to examine the fidelity of GPT-4's comprehension of classic trading theories and its proficiency in applying its code interpreter abilities to real-world trading data analysis. Such an exploration is instrumental in discerning whether the underlying logic GPT-4 employs for trading is intrinsically reliable. Furthermore, given the acknowledged interpretative latitude inherent in most trading theories, we seek to distill more precise methodologies of deploying these theories from GPT-4's analytical process, potentially offering invaluable insights to human traders. To achieve this objective, we selected daily candlestick (K-line) data from specific periods for certain assets, such as the Shanghai Stock Index. Through meticulous prompt engineering, we guided GPT-4 to analyze the technical structures embedded within this data, based on specific theories like the Elliott Wave Theory. We then subjected its analytical output to manual evaluation, assessing its interpretative depth and accuracy vis-à-vis these trading theories from multiple dimensions. The results and findings from this study could pave the way for a synergistic amalgamation of human expertise and AI-driven insights in the realm of trading.",,arXiv,['cs.ai'],, +653,batch calibration rethinking calibration for incontext learning and prompt engineering,"['Han Zhou', 'Xingchen Wan', 'Lev Proleev', 'Diana Mincu', 'Jilin Chen', 'Katherine Heller', 'Subhrajit Roy']",http://arxiv.org/pdf/2309.17249v2.pdf,2023-09-29,," Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice verbalizers, and the ICL examples. To address this problem that results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. In the few-shot setup, we further extend BC to allow it to learn the contextual bias from labeled data.
We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding and image classification tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +654,suspicionagent playing imperfect information games with theory of mind aware gpt4,"['Jiaxian Guo', 'Bo Yang', 'Paul Yoo', 'Bill Yuchen Lin', 'Yusuke Iwasawa', 'Yutaka Matsuo']",http://arxiv.org/pdf/2309.17277v2.pdf,2023-09-29,," Unlike perfect information games, where all elements are known to every player, imperfect information games emulate the real-world complexities of decision-making under uncertain or incomplete information. GPT-4, the recent breakthrough in large language models (LLMs) trained on massive passive data, is notable for its knowledge retrieval and reasoning abilities. This paper delves into the applicability of GPT-4's learned knowledge for imperfect information games. To achieve this, we introduce Suspicion-Agent, an innovative agent that leverages GPT-4's capabilities for performing in imperfect information games. With proper prompt engineering to achieve different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable adaptability across a range of imperfect information card games. Importantly, GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it can understand others and intentionally impact others' behavior. Leveraging this, we design a planning strategy that enables GPT-4 to competently play against different opponents, adapting its gameplay style as needed, while requiring only the game rules and descriptions of observations as input. In the experiments, we qualitatively showcase the capabilities of Suspicion-Agent across three different imperfect information games and then quantitatively evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can potentially outperform traditional algorithms designed for imperfect information games, without any specialized training or examples. In order to encourage and foster deeper insights within the community, we make our game-related data publicly available.",,arXiv,['cs.ai'],, +655,investigating the limitation of clip models the worstperforming categories,"['Jie-Jing Shao', 'Jiang-Xin Shi', 'Xiao-Wen Yang', 'Lan-Zhe Guo', 'Yu-Feng Li']",http://arxiv.org/pdf/2310.03324v1.pdf,2023-10-05,," Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks. It is usually expected that satisfactory overall accuracy can be achieved across numerous domains through well-designed textual prompts. However, we found that their performance in the worst categories is significantly inferior to the overall performance. For example, on ImageNet, there are a total of 10 categories with class-wise accuracy as low as 0%, even though the overall performance has achieved 64.1%. This phenomenon reveals the potential risks associated with using CLIP models, particularly in risk-sensitive applications where specific categories hold significant importance. To address this issue, we investigate the alignment between the two modalities in the CLIP model and propose the Class-wise Matching Margin (CMM) to measure the inference confusion. CMM can effectively identify the worst-performing categories and estimate the potential performance of the candidate prompts.
We further query large language models to enrich descriptions of worst-performing categories and build a weighted ensemble to highlight the efficient prompts. Experimental results clearly verify the effectiveness of our proposal, where the accuracy on the worst-10 categories on ImageNet is boosted to 5.2%, without manual prompt engineering, laborious optimization, or access to labeled validation data.",,arXiv,"['cs.cv', 'cs.lg']",, +656,large language modelempowered agents for simulating macroeconomic activities,"['Nian Li', 'Chen Gao', 'Yong Li', 'Qingmin Liao']",http://arxiv.org/pdf/2310.10436v1.pdf,2023-10-16,," The advent of the Web has brought about a paradigm shift in traditional economics, particularly in the digital economy era, enabling the precise recording and analysis of individual economic behavior. This has led to a growing emphasis on data-driven modeling in macroeconomics. In macroeconomic research, Agent-based modeling (ABM) emerged as an alternative, evolving through rule-based agents, machine learning-enhanced decision-making, and, more recently, advanced AI agents. However, the existing works are suffering from three main challenges when endowing agents with human-like decision-making, including agent heterogeneity, the influence of macroeconomic trends, and multifaceted economic factors. Large language models (LLMs) have recently gained prominence in offering autonomous human-like characteristics. Therefore, leveraging LLMs in macroeconomic simulation presents an opportunity to overcome traditional limitations. In this work, we take an early step in introducing a novel approach that leverages LLMs in macroeconomic simulation. We design prompt-engineering-driven LLM agents to exhibit human-like decision-making and adaptability in the economic environment, with the abilities of perception, reflection, and decision-making to address the abovementioned challenges. Simulation experiments on macroeconomic activities show that LLM-empowered agents can make realistic work and consumption decisions and emerge more reasonable macroeconomic phenomena than existing rule-based or AI agents. Our work demonstrates the promising potential to simulate macroeconomics based on LLM and its human-like characteristics.",,arXiv,['cs.ai'],, +657,large language model for multiobjective evolutionary optimization,"['Fei Liu', 'Xi Lin', 'Zhenkun Wang', 'Shunyu Yao', 'Xialiang Tong', 'Mingxuan Yuan', 'Qingfu Zhang']",http://arxiv.org/pdf/2310.12541v2.pdf,2023-10-19,," Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, of which the search operators need a carefully handcrafted design with domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well on new problems. To tackle the above challenges, this work investigates a novel approach that leverages the powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM behavior, we further design an explicit white-box operator with randomness and propose a new version of decomposition-based MOEA, termed MOEA/D-LO.
Experimental studies on different test benchmarks show that our proposed method can achieve competitive performance with widely used MOEAs. It is also promising to see the operator only learned from a few instances can have robust generalization performance on unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.",,arXiv,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.et']",, +658,enhancing zeroshot crypto sentiment with finetuned language model and prompt engineering,"['Rahman S M Wahidur', 'Ishmam Tashdeed', 'Manjit Kaur', ' Heung-No-Lee']",http://arxiv.org/pdf/2310.13226v1.pdf,2023-10-20,," Blockchain technology has revolutionized the financial landscape, with cryptocurrencies gaining widespread adoption for their decentralized and transparent nature. As the sentiment expressed on social media platforms can significantly influence cryptocurrency discussions and market movements, sentiment analysis has emerged as a crucial tool for understanding public opinion and predicting market trends. Motivated by the aim to enhance sentiment analysis accuracy in the cryptocurrency domain, this paper investigates fine-tuning techniques on large language models. This paper also investigates the efficacy of supervised fine-tuning and instruction-based fine-tuning on large language models for unseen tasks. Experimental results demonstrate a significant average zero-shot performance gain of 40% after fine-tuning, highlighting the potential of this technique in optimizing pre-trained language model efficiency. Additionally, the impact of instruction tuning on models of varying scales is examined, revealing that larger models benefit from instruction tuning, achieving the highest average accuracy score of 75.16%. In contrast, smaller-scale models may experience reduced generalization due to the complete utilization of model capacity. To gain deeper insight about how instruction works with these language models, this paper presents an experimental investigation into the response of an instruction-based model under different instruction tuning setups. The investigation demonstrates that the model achieves an average accuracy score of 72.38% for short and simple instructions. This performance significantly outperforms its accuracy under long and complex instructions by over 12%, thereby effectively highlighting the profound significance of instruction characteristics in maximizing model performance.",,arXiv,['cs.cl'],, +659,openended instructable embodied agents with memoryaugmented large language models,"['Gabriel Sarch', 'Yue Wu', 'Michael J. Tarr', 'Katerina Fragkiadaki']",http://arxiv.org/pdf/2310.15127v2.pdf,2023-10-23,," Pre-trained and frozen large language models (LLMs) can effectively map simple scene rearrangement instructions to programs over a robot's visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user's idiosyncratic procedures, not known during prompt engineering time, fixed prompts fall short. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction, or VLM description, and used as in-context prompt examples for LLM querying.
The memory is expanded during deployment to include pairs of user's language and action plans, to assist future inferences and personalize them to the user's language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with a 1.7x improvement over the previous state-of-the-art for TfD. Our models, code, and video results can be found in our project's website: https://helper-agent-llm.github.io.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",, +660,promisepromptdriven 3d medical image segmentation using pretrained image foundation models,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2310.19721v3.pdf,2023-10-30,," To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from natural to medical image domains serves as a viable strategy to produce reliable segmentation results. However, several existing barriers between domains need to be broken down, including addressing contrast discrepancies, managing anatomical variability, and adapting 2D pretrained models for 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets for colon and pancreas tumor segmentations, respectively. Compared to the state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at https://github.com/MedICL-VU/ProMISe.",,arXiv,"['eess.iv', 'cs.cv']",, +661,making large language models better data creators,"['Dong-Ho Lee', 'Jay Pujara', 'Mohit Sewak', 'Ryen W. White', 'Sujay Kumar Jauhar']",http://arxiv.org/pdf/2310.20111v1.pdf,2023-10-31,," Although large language models (LLMs) have advanced the state-of-the-art in NLP significantly, deploying them for downstream applications is still challenging due to cost, responsiveness, control, or concerns around privacy and security. As such, trainable models are still the preferred option in some cases. However, these models still require human-labeled data for optimal performance, which is expensive and time-consuming to obtain. In order to address this issue, several techniques to reduce human effort involve labeling or generating data using LLMs. Although these methods are effective for certain applications, in practice they encounter difficulties in real-world scenarios. Labeling data requires careful data selection, while generating data necessitates task-specific prompt engineering. In this paper, we propose a unified data creation pipeline that requires only a single formatting example, and which is applicable to a broad range of tasks, including traditionally problematic ones with semantically devoid label spaces.
In our experiments we demonstrate that instruction-following LLMs are highly cost-effective data creators, and that models trained with these data exhibit performance better than those trained with human-labeled data (by up to 17.5%) on out-of-distribution evaluation, while maintaining comparable performance on in-distribution tasks. These results have important implications for the robustness of NLP systems deployed in the real-world.",,arXiv,['cs.cl'],, +662,vispercep a visionlanguage approach to enhance visual perception for people with blindness and low vision,"['Yu Hao', 'Fan Yang', 'Hao Huang', 'Shuaihang Yuan', 'Sundeep Rangan', 'John-Ross Rizzo', 'Yao Wang', 'Yi Fang']",http://arxiv.org/pdf/2310.20225v1.pdf,2023-10-31,," People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to the vision loss, pBLV have difficulty in accessing and identifying potential tripping hazards on their own. In this paper, we present a pioneering approach that leverages a large vision-language model to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environments and providing warnings about the potential risks. Our method begins by leveraging a large image tagging model (i.e., Recognize Anything (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt, tailored specifically for pBLV using prompt engineering. By combining the prompt and input image, a large vision-language model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks in the environment by analyzing the environmental objects and scenes, relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method is able to recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.",,arXiv,"['cs.cv', 'cs.ai']",, +663,bigbio a framework for datacentric biomedical natural language processing,"['Jason Alan Fries', 'Leon Weber', 'Natasha Seelam', 'Gabriel Altay', 'Debajyoti Datta', 'Samuele Garda', 'Myungsun Kang', 'Ruisi Su', 'Wojciech Kusa', 'Samuel Cahyawijaya', 'Fabio Barth', 'Simon Ott', 'Matthias Samwald', 'Stephen Bach', 'Stella Biderman', 'Mario Sänger', 'Bo Wang', 'Alison Callahan', 'Daniel León Periñán', 'Théo Gigant', 'Patrick Haller', 'Jenny Chim', 'Jose David Posada', 'John Michael Giorgi', 'Karthik Rangasai Sivaraman', 'Marc Pàmies', 'Marianna Nezhurina', 'Robert Martin', 'Michael Cullan', 'Moritz Freidank', 'Nathan Dahlberg', 'Shubhanshu Mishra', 'Shamik Bose', 'Nicholas Michio Broad', 'Yanis Labrak', 'Shlok S Deshmukh', 'Sid Kiblawi', 'Ayush Singh', 'Minh Chien Vu', 'Trishala Neeraj', 'Jonas Golde', 'Albert Villanova del Moral', 'Benjamin Beilharz']",http://arxiv.org/pdf/2206.15076v1.pdf,2022-06-30,," Training and evaluating language models increasingly requires the construction of meta-datasets -- diverse collections of curated data with clear provenance.
Natural language prompting has recently led to improved zero-shot generalization by transforming existing, supervised datasets into a diversity of novel pretraining tasks, highlighting the benefits of meta-dataset curation. While successful in general-domain text, translating these data-centric approaches to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBIO a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBIO facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few/zero shot language model evaluation. We discuss our process for task schema harmonization, data auditing, contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBIO is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical",,arXiv,['cs.cl'],, +664,"a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity","['Yejin Bang', 'Samuel Cahyawijaya', 'Nayeon Lee', 'Wenliang Dai', 'Dan Su', 'Bryan Wilie', 'Holy Lovenia', 'Ziwei Ji', 'Tiezheng Yu', 'Willy Chung', 'Quyet V. Do', 'Yan Xu', 'Pascale Fung']",http://arxiv.org/pdf/2302.04023v4.pdf,2023-02-08,," This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available data sets. We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks. We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin script languages than generating them. It is able to generate multimodal content from textual prompts, via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning, hence making it an unreliable reasoner. It is, for example, better at deductive than inductive reasoning. ChatGPT suffers from hallucination problems like other LLMs and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn ""prompt engineering"" fashion. We also release codebase for evaluation set extraction.",,arXiv,"['cs.cl', 'cs.ai']",, +665,evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery,"['Debadutta Dash', 'Rahul Thapa', 'Juan M. Banda', 'Akshay Swaminathan', 'Morgan Cheatham', 'Mehr Kashyap', 'Nikesh Kotecha', 'Jonathan H. Chen', 'Saurabh Gombar', 'Lance Downing', 'Rachel Pedreira', 'Ethan Goh', 'Angel Arnaout', 'Garret Kenn Morris', 'Honor Magon', 'Matthew P Lungren', 'Eric Horvitz', 'Nigam H.
Shah']",http://arxiv.org/pdf/2304.13714v3.pdf,2023-04-26,," Despite growing interest in using large language models (LLMs) in healthcare, current explorations do not assess the real-world utility and safety of LLMs in clinical settings. Our objective was to determine whether two LLMs can serve information needs submitted by physicians as questions to an informatics consultation service in a safe and concordant manner. Sixty six questions from an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple prompts. 12 physicians assessed the LLM responses' possibility of patient harm and concordance with existing reports from an informatics consultation service. Physician assessments were summarized based on majority vote. For no questions did a majority of physicians deem either LLM response as harmful. For GPT-3.5, responses to 8 questions were concordant with the informatics consult report, 20 discordant, and 9 were unable to be assessed. There were 29 responses with no majority on ""Agree"", ""Disagree"", and ""Unable to assess"". For GPT-4, responses to 13 questions were concordant, 15 discordant, and 3 were unable to be assessed. There were 35 responses with no majority. Responses from both LLMs were largely devoid of overt harm, but less than 20% of the responses agreed with an answer from an informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom-tailoring of general purpose models.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ir']",, +666,zelda video analytics using visionlanguage models,"['Francisco Romero', 'Caleb Winston', 'Johann Hauswald', 'Matei Zaharia', 'Christos Kozyrakis']",http://arxiv.org/pdf/2305.03785v2.pdf,2023-05-05,," Advances in ML have motivated the design of video analytics systems that allow for structured queries over video datasets. However, existing systems limit query expressivity, require users to specify an ML model per predicate, rely on complex optimizations that trade off accuracy for performance, and return large amounts of redundant and low-quality results. This paper focuses on the recently developed Vision-Language Models (VLMs) that allow users to query images using natural language like ""cars during daytime at traffic intersections."" Through an in-depth analysis, we show VLMs address three limitations of current video analytics systems: general expressivity, a single general purpose model to query many predicates, and are both simple and fast. However, VLMs still return large numbers of redundant and low-quality results that can overwhelm and burden users. In addition, VLMs often require manual prompt engineering to improve result relevance. We present Zelda: a video analytics system that uses VLMs to return both relevant and semantically diverse results for top-K queries on large video datasets. Zelda prompts the VLM with the user's query in natural language. Zelda then automatically adds discriminator and synonym terms to boost accuracy, and terms to identify low-quality frames.
To improve result diversity, Zelda uses semantic-rich VLM embeddings in an algorithm that prunes similar frames while considering their relevance to the query and the number of top-K results requested. We evaluate Zelda across five datasets and 19 queries and quantitatively show it achieves higher mean average precision (up to 1.15x) and improves average pairwise similarity (up to 1.16x) compared to using VLMs out-of-the-box. We also compare Zelda to a state-of-the-art video analytics engine and show that Zelda retrieves results 7.5x (up to 10.4x) faster for the same accuracy and frame diversity.",,arXiv,['cs.db'],, +667,chatgpt chemistry assistant for text mining and prediction of mof synthesis,"['Zhiling Zheng', 'Oufan Zhang', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.11296v2.pdf,2023-06-20,," We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.",,arXiv,"['cs.ir', 'cond-mat.mtrl-sci', 'cs.cl', 'physics.chem-ph']",, +668,identifying and extracting rare disease phenotypes with large language models,"['Cathy Shyr', 'Yan Hu', 'Paul A. Harris', 'Hua Xu']",http://arxiv.org/pdf/2306.12656v1.pdf,2023-06-22,," Rare diseases (RDs) are collectively common and affect 300 million people worldwide. Accurate phenotyping is critical for informing diagnosis and treatment, but RD phenotypes are often embedded in unstructured text and time-consuming to extract manually. While natural language processing (NLP) models can perform named entity recognition (NER) to automate extraction, a major bottleneck is the development of a large, annotated corpus for model training. Recently, prompt learning emerged as an NLP paradigm that can lead to more generalizable results without any (zero-shot) or few labeled samples (few-shot).
Despite growing interest in ChatGPT, a revolutionary large language model capable of following complex human prompts and generating high-quality responses, none have studied its NER performance for RDs in the zero- and few-shot settings. To this end, we engineered novel prompts aimed at extracting RD phenotypes and, to the best of our knowledge, are the first to establish a benchmark for evaluating ChatGPT's performance in these settings. We compared its performance to the traditional fine-tuning approach and conducted an in-depth error analysis. Overall, fine-tuning BioClinicalBERT resulted in higher performance (F1 of 0.689) than ChatGPT (F1 of 0.472 and 0.591 in the zero- and few-shot settings, respectively). Despite this, ChatGPT achieved similar or higher accuracy for certain entities (i.e., rare diseases and signs) in the one-shot setting (F1 of 0.776 and 0.725). This suggests that with appropriate prompt engineering, ChatGPT has the potential to match or outperform fine-tuned language models for certain entity types with just one labeled sample. While the proliferation of large language models may provide opportunities for supporting RD diagnosis and treatment, researchers and clinicians should critically evaluate model outputs and be well-informed of their limitations.",,arXiv,"['cs.cl', 'cs.ai']",, +669,go beyond the obvious probing the gap of informal reasoning ability between humanity and llms by detective reasoning puzzle benchmark,"['Zhouhon Gu', 'Zihan Li', 'Lin Zhang', 'Zhuozhi Xiong', 'Haoning Ye', 'Yikai Zhang', 'Wenhao Huang', 'Xiaoxuan Zhu', 'Qianyu He', 'Rui Xu', 'Sihang Jiang', 'Shusen Wang', 'Zili Wang', 'Hongwei Feng', 'Zhixu Li', 'Yanghua Xiao']",http://arxiv.org/pdf/2307.05113v2.pdf,2023-07-11,," Informal reasoning ability is the ability to reason based on common sense, experience, and intuition. Humans use informal reasoning every day to extract the most influential elements for their decision-making from a large amount of life-like information. With the rapid development of language models, the realization of general artificial intelligence has emerged with hope.
Given the outstanding informal reasoning ability of humans, how much informal reasoning ability language models have has not been well studied by scholars. In order to explore the gap between humans and language models in informal reasoning ability, this paper constructs a Detective Reasoning Benchmark, which is an assembly of 1,200 questions gathered from accessible online resources, aims at evaluating the model's informal reasoning ability in real-life context. Considering the improvement of the model's informal reasoning ability restricted by the lack of benchmark, we further propose a Self-Question Prompt Framework that mimics human thinking to enhance the model's informal reasoning ability. The goals of self-question are to find key elements, deeply investigate the connections between these elements, encourage the relationship between each element and the problem, and finally, require the model to reasonably answer the problem. The experimental results show that human performance greatly outperforms the SoTA Language Models in Detective Reasoning Benchmark. Besides, Self-Question is proven to be the most effective prompt engineering in improving GPT-4's informal reasoning ability, but it still does not even surpass the lowest score made by human participants. Upon acceptance of the paper, the source code for the benchmark will be made publicly accessible.",,arXiv,['cs.cl'],, +670,"ai foundation models for weather and climate applications, design, and implementation","['S. Karthik Mukkavilli', 'Daniel Salles Civitarese', 'Johannes Schmude', 'Johannes Jakubik', 'Anne Jones', 'Nam Nguyen', 'Christopher Phillips', 'Sujit Roy', 'Shraddha Singh', 'Campbell Watson', 'Raghu Ganti', 'Hendrik Hamann', 'Udaysankar Nair', 'Rahul Ramachandran', 'Kommy Weldemariam']",http://arxiv.org/pdf/2309.10808v2.pdf,2023-09-19,," Machine learning and deep learning methods have been widely explored in understanding the chaotic behavior of the atmosphere and furthering weather forecasting. There has been increasing interest from technology companies, government institutions, and meteorological agencies in building digital twins of the Earth. Recent approaches using transformers, physics-informed machine learning, and graph neural networks have demonstrated state-of-the-art performance on relatively narrow spatiotemporal scales and specific tasks. With the recent success of generative artificial intelligence (AI) using pre-trained transformers for language modeling and vision with prompt engineering and fine-tuning, we are now moving towards generalizable AI. In particular, we are witnessing the rise of AI foundation models that can perform competitively on multiple domain-specific downstream tasks. Despite this progress, we are still in the nascent stages of a generalizable AI model for global Earth system models, regional climate models, and mesoscale weather models. Here, we review current state-of-the-art AI approaches, primarily from transformer and operator learning literature in the context of meteorology. We provide our perspective on criteria for success towards a family of foundation models for nowcasting and forecasting weather and climate predictions. We also discuss how such models can perform competitively on downstream tasks such as downscaling (super-resolution), identifying conditions conducive to the occurrence of wildfires, and predicting consequential meteorological phenomena across various spatiotemporal scales such as hurricanes and atmospheric rivers.
In particular, we examine current AI methodologies and contend they have matured enough to design and implement a weather foundation model.",,arXiv,"['cs.lg', 'cs.ai', 'physics.ao-ph', '68t07 (primary), 68t01, 86a08', 'i.2.0; i.4.0; j.2.5']",, +671,promptor a conversational and autonomous prompt generation agent for intelligent text entry techniques,"['Junxiao Shen', 'John J. Dudley', 'Jingyao Zheng', 'Bill Byrne', 'Per Ola Kristensson']",http://arxiv.org/pdf/2310.08101v2.pdf,2023-10-12,," Text entry is an essential task in our day-to-day digital interactions. Numerous intelligent features have been developed to streamline this process, making text entry more effective, efficient, and fluid. These improvements include sentence prediction and user personalization. However, as deep learning-based language models become the norm for these advanced features, the necessity for data collection and model fine-tuning increases. These challenges can be mitigated by harnessing the in-context learning capability of large language models such as GPT-3.5. This unique feature allows the language model to acquire new skills through prompts, eliminating the need for data collection and fine-tuning. Consequently, large language models can learn various text prediction techniques. We initially showed that, for a sentence prediction task, merely prompting GPT-3.5 surpassed a GPT-2 backed system and is comparable with a fine-tuned GPT-3.5 model, with the latter two methods requiring costly data collection, fine-tuning and post-processing. However, the task of prompting large language models to specialize in specific text prediction tasks can be challenging, particularly for designers without expertise in prompt engineering. To address this, we introduce Promptor, a conversational prompt generation agent designed to engage proactively with designers. Promptor can automatically generate complex prompts tailored to meet specific needs, thus offering a solution to this challenge. We conducted a user study involving 24 participants creating prompts for three intelligent text entry tasks, half of the participants used Promptor while the other half designed prompts themselves. The results show that Promptor-designed prompts result in a 35% increase in similarity and 22% in coherence over those by designers.",,arXiv,"['cs.cl', 'cs.ai']",, +672,constitutionmaker interactively critiquing large language models by converting feedback into principles,"['Savvas Petridis', 'Ben Wedin', 'James Wexler', 'Aaron Donsbach', 'Mahima Pushkarna', 'Nitesh Goyal', 'Carrie J. Cai', 'Michael Terry']",http://arxiv.org/pdf/2310.15428v1.pdf,2023-10-24,," Large language model (LLM) prompting is a promising new approach for users to create and customize their own chatbots. However, current methods for steering a chatbot's outputs, such as prompt engineering and fine-tuning, do not support users in converting their natural feedback on the model's outputs to changes in the prompt or model. In this work, we explore how to enable users to interactively refine model outputs through their feedback, by helping them convert their feedback into a set of principles (i.e. a constitution) that dictate the model's behavior. From a formative study, we (1) found that users needed support converting their feedback into principles for the chatbot and (2) classified the different principle types desired by users. Inspired by these findings, we developed ConstitutionMaker, an interactive tool for converting user feedback into principles, to steer LLM-based chatbots.
With ConstitutionMaker, users can provide either positive or negative feedback in natural language, select auto-generated feedback, or rewrite the chatbot's response; each mode of feedback automatically generates a principle that is inserted into the chatbot's prompt. In a user study with 14 participants, we compare ConstitutionMaker to an ablated version, where users write their own principles. With ConstitutionMaker, participants felt that their principles could better guide the chatbot, that they could more easily convert their feedback into principles, and that they could write principles more efficiently, with less mental demand. ConstitutionMaker helped users identify ways to improve the chatbot, formulate their intuitive responses to the model into feedback, and convert this feedback into specific and clear principles. Together, these findings inform future tools that support the interactive critiquing of LLM outputs.",,arXiv,"['cs.hc', 'cs.ai']",, +673,fewshot learning for sentence pair classification and its applications in software engineering,"['Robert Kraig Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2306.08058v1.pdf,2023-06-13,," Few-shot learning-the ability to train models with access to limited data-has become increasingly popular in the natural language processing (NLP) domain, as large language models such as GPT and T0 have been empirically shown to achieve high performance in numerous tasks with access to just a handful of labeled examples. Smaller language models such as BERT and its variants have also been shown to achieve strong performance with just a handful of labeled examples when combined with few-shot learning algorithms like pattern-exploiting training (PET) and SetFit. The focus of this work is to investigate the performance of alternative few-shot learning approaches with BERT-based models. Specifically, vanilla fine-tuning, PET and SetFit are compared for numerous BERT-based checkpoints over an array of training set sizes. To facilitate this investigation, applications of few-shot learning are considered in software engineering. For each task, high-performance techniques and their associated model checkpoints are identified through detailed empirical analysis. Our results establish PET as a strong few-shot learning approach, and our analysis shows that with just a few hundred labeled examples it can achieve performance near that of fine-tuning on full-sized data sets.",,arXiv,['cs.se'],, +674,fewclue a chinese fewshot learning evaluation benchmark,"['Liang Xu', 'Xiaojing Lu', 'Chenyang Yuan', 'Xuanwei Zhang', 'Huilin Xu', 'Hu Yuan', 'Guoao Wei', 'Xiang Pan', 'Xin Tian', 'Libo Qin', 'Hu Hai']",http://arxiv.org/pdf/2107.07498v2.pdf,2021-07-15,," Pretrained Language Models (PLMs) have achieved tremendous success in natural language understanding tasks. While different learning schemes -- fine-tuning, zero-shot, and few-shot learning -- have been widely explored and compared for languages such as English, there is comparatively little work in Chinese to fairly and comprehensively evaluate and compare these methods and thus hinders cumulative progress. In this paper, we introduce the Chinese Few-shot Learning Evaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluation benchmark in Chinese. It includes nine tasks, ranging from single-sentence and sentence-pair classification tasks to machine reading comprehension tasks.
We systematically evaluate five state-of-the-art (SOTA) few-shot learning methods (including PET, ADAPET, LM-BFF, P-tuning and EFL), and compare their performance with fine-tuning and zero-shot learning schemes on the newly constructed FewCLUE benchmark. Experimental results reveal that: 1) The effect of different few-shot learning methods is sensitive to the pre-trained model to which the methods are applied; 2) PET and P-tuning achieve the best overall performance with RoBERTa and ERNIE respectively. Our benchmark is used in the few-shot learning contest of NLPCC 2021. In addition, we provide a user-friendly toolkit, as well as an online leaderboard to help facilitate further progress on Chinese few-shot learning. We provide a baseline performance on different learning methods, a reference for future research.",,arXiv,"['cs.cl', 'cs.ai']",, +675,true fewshot learning with prompts a realworld perspective,"['Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2111.13440v1.pdf,2021-11-26,," Prompt-based approaches are strong at few-shot learning. However, Perez et al. (2021) have recently cast doubt on their performance because they had difficulty getting good results in a ""true"" few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of PET, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, PET performs strongly in a true few-shot setting, i.e., without a dev set. Crucial for this strong performance is PET's ability to intelligently handle multiple prompts. We then put our findings to a real-world test by running PET on RAFT, a benchmark of tasks taken directly from realistic NLP applications for which no labeled dev or test sets are available. PET achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners like PET excel at true few-shot learning and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.",,arXiv,['cs.cl'],, +676,prompting electra fewshot learning with discriminative pretrained models,"['Mengzhou Xia', 'Mikel Artetxe', 'Jingfei Du', 'Danqi Chen', 'Ves Stoyanov']",http://arxiv.org/pdf/2205.15223v3.pdf,2022-05-30,," Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in full-shot settings, discriminative pre-trained models like ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a token is generated or original. We naturally extend that to prompt-based few-shot learning by training to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computation overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.",,arXiv,"['cs.cl', 'cs.lg']",, +677,reordering examples helps during primingbased fewshot learning,"['Sawan Kumar', 'Partha Talukdar']",http://arxiv.org/pdf/2106.01751v1.pdf,2021-06-03,," The ability to learn from limited data, or few-shot learning, is a desirable and often critical requirement for NLP systems.
While many existing methods do poorly at learning from a handful of examples, large pretrained language models have recently been shown to be efficient few-shot learners. One approach to few-shot learning, which does not require finetuning of model parameters, is to augment the language model's input with priming text which is typically constructed using task specific descriptions and examples. In this work, we further explore priming-based few-shot learning, with focus on using examples as prompts. We show that presenting examples in the right order is key for generalization. We introduce PERO (Prompting with Examples in the Right Order), where we formulate few-shot learning as search over the set of permutations of the training examples. We show that PERO can learn to generalize efficiently using as few as 10 examples, in contrast to existing approaches. While the newline token is a natural choice for separating the examples in the prompt, we show that learning a new separator token can potentially provide further gains in performance. We demonstrate the effectiveness of the proposed method on the tasks of sentiment classification, natural language inference and fact retrieval. Finally, we analyze the learned prompts to reveal novel insights, including the idea that two training examples in the right order alone can provide competitive performance for sentiment classification and natural language inference.",,arXiv,['cs.cl'],, +678,tuning language models as training data generators for augmentationenhanced fewshot learning,"['Yu Meng', 'Martin Michalski', 'Jiaxin Huang', 'Yu Zhang', 'Tarek Abdelzaher', 'Jiawei Han']",http://arxiv.org/pdf/2211.03044v2.pdf,2022-11-06,," Recent studies have revealed the intriguing few-shot learning ability of pretrained language models (PLMs): They can quickly adapt to a new task when fine-tuned on a small amount of labeled data formulated as prompts, without requiring abundant task-specific annotations. Despite their promising performance, most existing few-shot approaches that only learn from the small training set still underperform fully supervised training by nontrivial margins. In this work, we study few-shot learning with PLMs from a different perspective: We first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large amount of novel training samples which augment the original training set. To encourage the generator to produce label-discriminative samples, we train it via weighted maximum likelihood where the weight of each token is automatically adjusted based on a discriminative meta-learning objective. A classification PLM can then be fine-tuned on both the few-shot and the synthetic samples with regularization for better generalization and stability. Our approach FewGen achieves an overall better result across seven classification tasks of the GLUE benchmark than existing few-shot learning methods, improving no-augmentation methods by 5+ average points, and outperforming augmentation methods by 3+ average points.",,arXiv,"['cs.cl', 'cs.lg']",, +679,cins comprehensive instruction for fewshot learning in taskoriented dialog systems,"['Fei Mi', 'Yitong Li', 'Yasheng Wang', 'Xin Jiang', 'Qun Liu']",http://arxiv.org/pdf/2109.04645v4.pdf,2021-09-10,," As labeling cost for different modules in task-oriented dialog (ToD) systems is high, a major challenge in practice is to learn different tasks with the least amount of labeled data. 
Recently, prompting methods over pre-trained language models (PLMs) have shown promising results for few-shot learning in ToD. To better utilize the power of PLMs, this paper proposes Comprehensive Instruction (CINS) that exploits PLMs with extra task-specific instructions. We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in ToD, i.e. intent classification, dialog state tracking, and natural language generation. A sequence-to-sequence model (T5) is adopted to solve these three tasks in a unified framework. Extensive experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data. Empirical results demonstrate that the proposed CINS approach consistently improves techniques that finetune PLMs with raw input or short prompts.",,arXiv,"['cs.cl', 'cs.lg']",, +680,exploring promptbased fewshot learning for grounded dialog generation,"['Chujie Zheng', 'Minlie Huang']",http://arxiv.org/pdf/2109.06513v2.pdf,2021-09-14,," Dialog models can be greatly strengthened through grounding on various external information, but grounded dialog corpora are usually not naturally accessible. In this work, we focus on the few-shot learning for grounded dialog generation (GDG). We first propose a simple prompting method for GDG tasks, where different constructs of model input, such as the grounding source and the conversation context, are distinguished through continuous or discrete prompts. On three typical GDG tasks, we empirically demonstrate and analyze in-depth the effectiveness of our method. We then conduct extensive experiments to thoroughly investigate how our prompting method works with different pre-trained models. We show that prompted language models perform superiorly to conversational models, and further analyze various factors that influence the effects of prompting. Overall, our work introduces a prompt-based perspective to the few-shot learning for GDG tasks, and provides valuable findings and insights for future research.",,arXiv,['cs.cl'],, +681,ontologyenhanced prompttuning for fewshot learning,"['Hongbin Ye', 'Ningyu Zhang', 'Shumin Deng', 'Xiang Chen', 'Hui Chen', 'Feiyu Xiong', 'Xi Chen', 'Huajun Chen']",http://arxiv.org/pdf/2201.11332v1.pdf,2022-01-27,," Few-shot Learning (FSL) is aimed to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries has been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by the existing methods suffer from challenging knowledge missing, knowledge noise, and knowledge heterogeneity, which hinder the performance for few-shot learning. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop the ontology transformation based on the external knowledge graph to address the knowledge missing issue, which fulfills and converts structure knowledge to text. We further introduce span-sensitive knowledge injection via a visible matrix to select informative knowledge to handle the knowledge noise issue. To bridge the gap between knowledge and text, we propose a collective training algorithm to optimize representations jointly. We evaluate our proposed OntoPrompt in three tasks, including relation extraction, event extraction, and knowledge graph completion, with eight datasets. 
Experimental results demonstrate that our approach can obtain better few-shot performance than baselines.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, +682,impossible triangle what's next for pretrained language models,"['Chenguang Zhu', 'Michael Zeng']",http://arxiv.org/pdf/2204.06130v2.pdf,2022-04-13,," Recent development of large-scale pre-trained language models (PLM) have significantly improved the capability of models in various NLP tasks, in terms of performance after task-specific fine-tuning and zero-shot / few-shot learning. However, many of such models come with a dauntingly huge size that few institutions can afford to pre-train, fine-tune or even deploy, while moderate-sized models usually lack strong generalized few-shot learning capabilities. In this paper, we first elaborate the current obstacles of using PLM models in terms of the Impossible Triangle: 1) moderate model size, 2) state-of-the-art few-shot learning capability, and 3) state-of-the-art fine-tuning capability. We argue that all existing PLM models lack one or more properties from the Impossible Triangle. To remedy these missing properties of PLMs, various techniques have been proposed, such as knowledge distillation, data augmentation and prompt learning, which inevitably brings additional work to the application of PLMs in real scenarios. We then offer insights into future research directions of PLMs to achieve the Impossible Triangle, and break down the task into several key phases.",,arXiv,['cs.cl'],, +683,how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models,"['Hai Dang', 'Lukas Mecke', 'Florian Lehmann', 'Sven Goller', 'Daniel Buschek']",http://arxiv.org/pdf/2209.01390v1.pdf,2022-09-03,," Deep generative models have the potential to fundamentally change the way we create high-fidelity digital content but are often hard to control. Prompting a generative model is a promising recent development that in principle enables end-users to creatively leverage zero-shot and few-shot learning to assign new tasks to an AI ad-hoc, simply by writing them down. However, for the majority of end-users writing effective prompts is currently largely a trial and error process. To address this, we discuss the key opportunities and challenges for interactive creative applications that use prompting as a new paradigm for Human-AI interaction. Based on our analysis, we propose four design goals for user interfaces that support prompting. We illustrate these with concrete UI design sketches, focusing on the use case of creative writing. The research community in HCI and AI can take these as starting points to develop adequate user interfaces for models capable of zero- and few-shot learning.",,arXiv,"['cs.hc', 'cs.cl', 'h.5.2; i.2.7']",, +684,differentiable entailment for parameter efficient few shot learning,"['Ethan Kim', 'Jerry Yang']",http://arxiv.org/pdf/2301.13345v1.pdf,2023-01-31,," Few-shot learning allows pre-trained language models to adapt to downstream tasks while using a limited number of training examples. However, practical applications are limited when all model parameters must be optimized. In this work we apply a new technique for parameter efficient few shot learning while adopting a strict definition of parameter efficiency. 
Our training method combines 1) intermediate training by reformulating natural language tasks as entailment tasks \cite{wang_entailment_2021} and 2) differentiable optimization of template and label tokens \cite{zhang_differentiable_2021}. We quantify the tradeoff between parameter efficiency and performance in the few-shot regime and propose a simple model agnostic approach that can be extended to any task By achieving competitive performance while only optimizing 3\% of a model's parameters and allowing for batched inference, we allow for more efficient practical deployment of models.",,arXiv,['cs.cl'],, +685,"multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence","['Markus Bayer', 'Tobias Frey', 'Christian Reuter']",http://arxiv.org/pdf/2207.11076v1.pdf,2022-07-22,," Gathering cyber threat intelligence from open sources is becoming increasingly important for maintaining and achieving a high level of security as systems become larger and more complex. However, these open sources are often subject to information overload. It is therefore useful to apply machine learning models that condense the amount of information to what is necessary. Yet, previous studies and applications have shown that existing classifiers are not able to extract specific information about emerging cybersecurity events due to their low generalization ability. Therefore, we propose a system to overcome this problem by training a new classifier for each new incident. Since this requires a lot of labelled data using standard training methods, we combine three different low-data regime techniques - transfer learning, data augmentation, and few-shot learning - to train a high-quality classifier from very few labelled instances. We evaluated our approach using a novel dataset derived from the Microsoft Exchange Server data breach of 2021 which was labelled by three experts. Our findings reveal an increase in F1 score of more than 21 points compared to standard training methods and more than 18 points compared to a state-of-the-art method in few-shot learning. Furthermore, the classifier trained with this method and 32 instances is only less than 5 F1 score points worse than a classifier trained with 1800 instances.",,arXiv,"['cs.cr', 'cs.cl']",, +686,multitask pretraining of modular prompt for chinese fewshot learning,"['Tianxiang Sun', 'Zhengfu He', 'Qin Zhu', 'Xipeng Qiu', 'Xuanjing Huang']",http://arxiv.org/pdf/2210.07565v3.pdf,2022-10-14,," Prompt tuning is a parameter-efficient approach to adapting pre-trained language models to downstream tasks. Although prompt tuning has been shown to match the performance of full model tuning when training data is sufficient, it tends to struggle in few-shot learning settings. In this paper, we present Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot learning. MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks. On downstream tasks, the pre-trained prompts are selectively activated and combined, leading to strong compositional generalization to unseen tasks. To bridge the gap between pre-training and fine-tuning, we formulate upstream and downstream tasks into a unified machine reading comprehension task. Extensive experiments under two learning paradigms, i.e., gradient descent and black-box tuning, show that MP2 significantly outperforms prompt tuning, full model tuning, and prior prompt pre-training methods in few-shot settings. 
In addition, we demonstrate that MP2 can achieve surprisingly fast and strong adaptation to downstream tasks by merely learning 8 parameters to combine the pre-trained modular prompts.",,arXiv,['cs.cl'],, +687,fewshot bot promptbased learning for dialogue systems,"['Andrea Madotto', 'Zhaojiang Lin', 'Genta Indra Winata', 'Pascale Fung']",http://arxiv.org/pdf/2110.08118v1.pdf,2021-10-15,," Learning to converse using only a few examples is a great challenge in conversational AI. The current best conversational models, which are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL), are language models (LMs) fine-tuned on large conversational datasets. Training these models is expensive, both in terms of computational resources and time, and it is hard to keep them up to date with new conversational skills. A simple yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020) which does not require gradient-based fine-tuning but instead uses a few examples in the LM context as the only source of learning. In this paper, we explore prompt-based few-shot learning in dialogue tasks. We benchmark LMs of different sizes in nine response generation tasks, which include four knowledge-grounded tasks, a task-oriented generations task, three open-chat tasks, and controlled stylistic generation, and five conversational parsing tasks, which include dialogue state tracking, graph path generation, persona information extraction, document retrieval, and internet query generation. The current largest released LM (GPT-J-6B) using prompt-based few-shot learning, and thus requiring no training, achieves competitive performance to fully trained state-of-the-art models. Moreover, we propose a novel prompt-based few-shot classifier, that also does not require any fine-tuning, to select the most appropriate prompt given a dialogue history. Finally, by combining the power of prompt-based few-shot learning and a Skill Selector, we create an end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only few dialogue examples per skill.",,arXiv,"['cs.cl', 'cs.ai']",, +688,"a neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level","['Iddo Drori', 'Sarah Zhang', 'Reece Shuttleworth', 'Leonard Tang', 'Albert Lu', 'Elizabeth Ke', 'Kevin Liu', 'Linda Chen', 'Sunny Tran', 'Newman Cheng', 'Roman Wang', 'Nikhil Singh', 'Taylor L. Patti', 'Jayson Lynch', 'Avi Shporer', 'Nakul Verma', 'Eugene Wu', 'Gilbert Strang']",http://arxiv.org/pdf/2112.15594v4.pdf,2021-12-31,," We demonstrate that a neural network pre-trained on text and fine-tuned on code solves mathematics course problems, explains solutions, and generates new questions at a human level. We automatically synthesize programs using few-shot learning and OpenAI's Codex transformer and execute them to solve course problems at 81% automatic accuracy. We curate a new dataset of questions from MIT's largest mathematics courses (Single Variable and Multivariable Calculus, Differential Equations, Introduction to Probability and Statistics, Linear Algebra, and Mathematics for Computer Science) and Columbia University's Computational Linear Algebra. 
We solve questions from a MATH dataset (on Prealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number Theory, and Precalculus), the latest benchmark of advanced mathematics problems designed to assess mathematical reasoning. We randomly sample questions and generate solutions with multiple modalities, including numbers, equations, and plots. The latest GPT-3 language model pre-trained on text automatically solves only 18.8% of these university questions using zero-shot learning and 30.8% using few-shot learning and the most recent chain of thought prompting. In contrast, program synthesis with few-shot learning using Codex fine-tuned on code generates programs that automatically solve 81% of these questions. Our approach improves the previous state-of-the-art automatic solution accuracy on the benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate the quality and difficulty of generated questions. This work is the first to automatically solve university-level mathematics course questions at a human level and the first work to explain and generate university-level mathematics course questions at scale, a milestone for higher education.",,arXiv,"['cs.lg', 'cs.ai']",, +689,"generate, annotate, and learn nlp with synthetic text","['Xuanli He', 'Islam Nassar', 'Jamie Kiros', 'Gholamreza Haffari', 'Mohammad Norouzi']",http://arxiv.org/pdf/2106.06168v3.pdf,2021-06-11,," This paper studies the use of language models as a source of synthetic unlabeled text for NLP. We formulate a general framework called ``generate, annotate, and learn (GAL)'' to take advantage of synthetic text within knowledge distillation, self-training, and few-shot learning applications. To generate high-quality task-specific text, we either fine-tune LMs on inputs from the task of interest, or prompt large LMs with few examples. We use the best available classifier to annotate synthetic text with soft pseudo labels for knowledge distillation and self-training, and use LMs to obtain hard labels for few-shot learning. We train new supervised models on the combination of labeled and pseudo-labeled data, which results in significant gains across several applications. We investigate key components of GAL and present theoretical and empirical arguments against the use of class-conditional LMs to generate synthetic labeled text instead of unlabeled text. GAL achieves new state-of-the-art knowledge distillation results for 6-layer transformers on the GLUE leaderboard.",,arXiv,['cs.lg'],, +690,multimodal fewshot learning with frozen language models,"['Maria Tsimpoukelli', 'Jacob Menick', 'Serkan Cabi', 'S. M. Ali Eslami', 'Oriol Vinyals', 'Felix Hill']",http://arxiv.org/pdf/2106.13884v2.pdf,2021-06-25,," When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of multiple interleaved image and text embeddings. 
We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +691,detecting hate speech with gpt3,"['Ke-Li Chiu', 'Annie Collins', 'Rohan Alexander']",http://arxiv.org/pdf/2103.12407v4.pdf,2021-03-23,," Sophisticated language models such as OpenAI's GPT-3 can generate hateful text that targets marginalized groups. Given this capacity, we are interested in whether large language models can be used to identify hate speech and classify text as sexist or racist. We use GPT-3 to identify sexist and racist text passages with zero-, one-, and few-shot learning. We find that with zero- and one-shot learning, GPT-3 can identify sexist or racist text with an average accuracy between 55 per cent and 67 per cent, depending on the category of text and type of learning. With few-shot learning, the model's accuracy can be as high as 85 per cent. Large language models have a role to play in hate speech detection, and with further development they could eventually be used to counter hate speech.",,arXiv,['cs.cl'],, +692,true fewshot learning with language models,"['Ethan Perez', 'Douwe Kiela', 'Kyunghyun Cho']",http://arxiv.org/pdf/2105.11447v1.pdf,2021-05-24,," Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates (""prompts""). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.",,arXiv,"['cs.cl', 'cs.lg', 'stat.ml']",, +693,incontext learning for fewshot dialogue state tracking,"['Yushi Hu', 'Chia-Hsuan Lee', 'Tianbao Xie', 'Tao Yu', 'Noah A. Smith', 'Mari Ostendorf']",http://arxiv.org/pdf/2203.08568v3.pdf,2022-03-16,," Collecting and annotating task-oriented dialogues is time-consuming and costly; thus, zero and few shot learning could greatly benefit dialogue state tracking (DST). In this work, we propose an in-context learning (ICL) framework for zero-shot and few-shot learning DST, where a large pre-trained language model (LM) takes a test instance and a few exemplars as input, and directly decodes the dialogue state without any parameter updates. To better leverage a tabular domain description in the LM prompt, we reformulate DST into a text-to-SQL problem. We also propose a novel approach to retrieve annotated dialogues as exemplars. 
Empirical results on MultiWOZ show that our method IC-DST substantially outperforms previous fine-tuned state-of-the-art models in few-shot settings. In addition, we test IC-DST in zero-shot settings, in which the model only takes a fixed task instruction as input, finding that it outperforms previous zero-shot methods by a large margin.",,arXiv,['cs.cl'],, +694,enabling classifiers to make judgements explicitly aligned with human values,"['Yejin Bang', 'Tiezheng Yu', 'Andrea Madotto', 'Zhaojiang Lin', 'Mona Diab', 'Pascale Fung']",http://arxiv.org/pdf/2210.07652v1.pdf,2022-10-14,," Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values. Yet, human values can vary under diverse cultural conditions. Therefore, we introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the command. Along with the task, we propose a practical approach that distills value-aligned knowledge from large-scale language models (LLMs) to construct value-aligned classifiers in two steps. First, we generate value-aligned training data from LLMs by prompt-based few-shot learning. Next, we fine-tune smaller classification models with the generated data for the task. Empirical results show that our VA-Models surpass multiple baselines by at least 15.56% on the F1-score, including few-shot learning with OPT-175B and existing text augmentation methods. We suggest that using classifiers with explicit human value input improves both inclusivity & explainability in AI.",,arXiv,"['cs.cl', 'cs.ai']",, +695,gps genetic prompt search for efficient fewshot learning,"['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang']",http://arxiv.org/pdf/2210.17041v1.pdf,2022-10-31,," Prompt-based techniques have demostrated great potential for improving the few-shot generalization of pretrained language models. However, their performance heavily relies on the manual design of prompts and thus requires a lot of human efforts. In this paper, we introduce Genetic Prompt Search (GPS) to improve few-shot learning with prompts, which utilizes a genetic algorithm to automatically search for high-performing prompts. GPS is gradient-free and requires no update of model parameters but only a small validation set. Experiments on diverse datasets proved the effectiveness of GPS, which outperforms manual prompts by a large margin of 2.6 points. Our method is also better than other parameter-efficient tuning methods such as prompt tuning.",,arXiv,['cs.cl'],, +696,fewshot queryfocused summarization with prefixmerging,"['Ruifeng Yuan', 'Zili Wang', 'Ziqiang Cao', 'Wenjie Li']",http://arxiv.org/pdf/2211.16164v1.pdf,2022-11-29,," Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. 
Drawn inspiration from prefix-tuning, we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works.",,arXiv,"['cs.cl', 'cs.ai']",, +697,log parsing with promptbased fewshot learning,"['Van-Hoang Le', 'Hongyu Zhang']",http://arxiv.org/pdf/2302.07435v1.pdf,2023-02-15,," Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing.",,arXiv,['cs.se'],, +698,automated fewshot classification with instructionfinetuned language models,"['Rami Aly', 'Xingjian Shi', 'Kaixiang Lin', 'Aston Zhang', 'Andrew Gordon Wilson']",http://arxiv.org/pdf/2305.12576v2.pdf,2023-05-21,," A particularly successful class of approaches for few-shot learning combines language models with prompts -- hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models exhibit remarkable prompt robustness, and we subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over $12$ datasets, spanning $8$ classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.",,arXiv,['cs.cl'],, +699,evaluating the decency and consistency of data validation tests generated by llms,"['Rohan Alexander', 'Lindsay Katz', 'Callandra Moore', 'Zane Schwartz']",http://arxiv.org/pdf/2310.01402v1.pdf,2023-10-02,," We investigated the potential of large language models (LLMs) in developing dataset validation tests. 
We carried out 96 experiments each for both GPT-3.5 and GPT-4, examining different prompt scenarios, learning modes, temperature settings, and roles. The prompt scenarios were: 1) Asking for expectations, 2) Asking for expectations with a given context, 3) Asking for expectations after requesting a simulation, and 4) Asking for expectations with a provided data sample. For learning modes, we tested: 1) zero-shot, 2) one-shot, and 3) few-shot learning. We also tested four temperature settings: 0, 0.4, 0.6, and 1. Furthermore, two distinct roles were considered: 1) ""helpful assistant"", 2) ""expert data scientist"". To gauge consistency, every setup was tested five times. The LLM-generated responses were benchmarked against a gold standard suite, created by an experienced data scientist knowledgeable about the data in question. We find there are considerable returns to the use of few-shot learning, and that the more explicit the data setting can be the better. The best LLM configurations complement, rather than substitute, the gold standard results. This study underscores the value LLMs can bring to the data cleaning and preparation stages of the data science workflow.",,arXiv,['stat.me'],, +700,fewshot learning with multilingual language models,"['Xi Victoria Lin', 'Todor Mihaylov', 'Mikel Artetxe', 'Tianlu Wang', 'Shuohui Chen', 'Daniel Simig', 'Myle Ott', 'Naman Goyal', 'Shruti Bhosale', 'Jingfei Du', 'Ramakanth Pasunuru', 'Sam Shleifer', 'Punit Singh Koura', 'Vishrav Chaudhary', ""Brian O'Horo"", 'Jeff Wang', 'Luke Zettlemoyer', 'Zornitsa Kozareva', 'Mona Diab', 'Veselin Stoyanov', 'Xian Li']",http://arxiv.org/pdf/2112.10668v3.pdf,2021-12-20,," Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. 
Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.",,arXiv,"['cs.cl', 'cs.ai']",, +701,flamingo a visual language model for fewshot learning,"['Jean-Baptiste Alayrac', 'Jeff Donahue', 'Pauline Luc', 'Antoine Miech', 'Iain Barr', 'Yana Hasson', 'Karel Lenc', 'Arthur Mensch', 'Katie Millican', 'Malcolm Reynolds', 'Roman Ring', 'Eliza Rutherford', 'Serkan Cabi', 'Tengda Han', 'Zhitao Gong', 'Sina Samangooei', 'Marianne Monteiro', 'Jacob Menick', 'Sebastian Borgeaud', 'Andrew Brock', 'Aida Nematzadeh', 'Sahand Sharifzadeh', 'Mikolaj Binkowski', 'Ricardo Barreira', 'Oriol Vinyals', 'Andrew Zisserman', 'Karen Simonyan']",http://arxiv.org/pdf/2204.14198v2.pdf,2022-04-29,," Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, +702,"code generation tools (almost) for free a study of fewshot, pretrained language models on code","['Patrick Bareiß', 'Beatriz Souza', ""Marcelo d'Amorim"", 'Michael Pradel']",http://arxiv.org/pdf/2206.01335v2.pdf,2022-06-02,," Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code, e.g., how to complete a given code example, or even generate code snippets from scratch. The success of these models raises the question whether they could serve as a basis for building a wide range code generation tools. Traditionally, such tools are built manually and separately for each task. Instead, few-shot learning may allow to obtain different tools from a single pre-trained language model by simply providing a few examples or a natural language description of the expected tool behavior. This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose. We consider three code manipulation and code generation tasks targeted by a range of traditional tools: (i) code mutation; (ii) test oracle generation from natural language documentation; and (iii) test case generation. For each task, we compare few-shot learning to a manually built tool. 
Our results show that the model-based tools complement (code mutation), are on par (test oracle generation), or even outperform their respective traditionally built tool (test case generation), while imposing far less effort to develop them. By comparing the effectiveness of different variants of the model-based tools, we provide insights on how to design an appropriate input (""prompt"") to the model and what influence the size of the model has. For example, we find that providing a small natural language description of the code generation task is an easy way to improve predictions. Overall, we conclude that few-shot language models are surprisingly effective, yet there is still more work to be done, such as exploring more diverse ways of prompting and tackling even more involved tasks.",,arXiv,"['cs.se', 'cs.lg']",, +703,discrete and soft prompting for multilingual models,"['Mengjie Zhao', 'Hinrich Schütze']",http://arxiv.org/pdf/2109.03630v1.pdf,2021-09-08,," It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: Crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79%. We also demonstrate good performance of prompting with training data in multiple languages other than English.",,arXiv,['cs.cl'],, +704,sentence simplification via large language models,"['Yutao Feng', 'Jipeng Qiang', 'Yun Li', 'Yunhao Yuan', 'Yi Zhu']",http://arxiv.org/pdf/2302.11957v1.pdf,2023-02-23,," Sentence Simplification aims to rephrase complex sentences into simpler sentences while retaining original meaning. Large Language models (LLMs) have demonstrated the ability to perform a variety of natural language processing tasks. However, it is not yet known whether LLMs can be served as a high-quality sentence simplification system. In this work, we empirically analyze the zero-/few-shot learning ability of LLMs by evaluating them on a number of benchmark test sets. Experimental results show LLMs outperform state-of-the-art sentence simplification methods, and are judged to be on a par with human annotators.",,arXiv,"['cs.cl', 'cs.ai']",, +705,making pretrained language models better fewshot learners,"['Tianyu Gao', 'Adam Fisch', 'Danqi Chen']",http://arxiv.org/pdf/2012.15723v2.pdf,2020-12-31,," The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF--better few-shot fine-tuning of language models--a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. 
Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.",,arXiv,"['cs.cl', 'cs.lg']",, +706,gpt3 models are poor fewshot learners in the biomedical domain,"['Milad Moradi', 'Kathrin Blagec', 'Florian Haberl', 'Matthias Samwald']",http://arxiv.org/pdf/2109.02555v2.pdf,2021-09-06,," Deep neural language models have set new breakthroughs in many tasks of Natural Language Processing (NLP). Recent work has shown that deep transformer language models (pretrained on large amounts of texts) can achieve high levels of task-specific few-shot performance comparable to state-of-the-art models. However, the ability of these large language models in few-shot transfer learning has not yet been explored in the biomedical domain. We investigated the performance of two powerful transformer language models, i.e. GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental results showed that, to a great extent, both the models underperform a language model fine-tuned on the full training data. Although GPT-3 had already achieved near state-of-the-art results in few-shot knowledge transfer on open-domain NLP tasks, it could not perform as effectively as BioBERT, which is orders of magnitude smaller than GPT-3. Regarding that BioBERT was already pretrained on large biomedical text corpora, our study suggests that language models may largely benefit from in-domain pretraining in task-specific few-shot learning. However, in-domain pretraining seems not to be sufficient; novel pretraining and few-shot learning strategies are required in the biomedical NLP domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +707,list lite prompted selftraining makes parameterefficient fewshot learners,"['Yaqing Wang', 'Subhabrata Mukherjee', 'Xiaodong Liu', 'Jing Gao', 'Ahmed Hassan Awadallah', 'Jianfeng Gao']",http://arxiv.org/pdf/2110.06274v2.pdf,2021-10-12,," We present a new method LiST is short for Lite Prompted Self-Training for parameter-efficient fine-tuning of large pre-trained language models (PLMs) for few-shot learning. LiST improves over recent methods that adopt prompt-based fine-tuning (FN) using two key techniques. The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based FN in few-shot settings. We use self-training in conjunction with meta-learning for re-weighting noisy pseudo-prompt labels. Self-training is expensive as it requires updating all the model parameters repetitively. Therefore, we use a second technique for light-weight fine-tuning where we introduce a small number of task-specific parameters that are fine-tuned during self-training while keeping the PLM encoder frozen. 
Our experiments show that LiST can effectively leverage unlabeled data to improve the model performance for few-shot learning. Additionally, the fine-tuning is efficient as it only updates a small percentage of parameters and the overall model footprint is reduced since several tasks can share a common PLM encoder as backbone. A comprehensive study on six NLU tasks demonstrate LiST to improve by 35% over classic fine-tuning and 6% over prompt-based FN with 96% reduction in number of trainable parameters when fine-tuned with no more than 30 labeled examples from each task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context learning by 33% on few-shot NLU tasks.",,arXiv,['cs.cl'],, +708,fewshot stance detection via targetaware prompt distillation,"['Yan Jiang', 'Jinhua Gao', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2206.13214v1.pdf,2022-06-27,," Stance detection aims to identify whether the author of a text is in favor of, against, or neutral to a given target. The main challenge of this task comes two-fold: few-shot learning resulting from the varying targets and the lack of contextual information of the targets. Existing works mainly focus on solving the second issue by designing attention-based models or introducing noisy external knowledge, while the first issue remains under-explored. In this paper, inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners, we propose to introduce prompt-based fine-tuning for stance detection. PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts. Considering the crucial role of the target in stance detection task, we design target-aware prompts and propose a novel verbalizer. Instead of mapping each label to a concrete word, our verbalizer maps each label to a vector and picks the label that best captures the correlation between the stance and the target. Moreover, to alleviate the possible defect of dealing with varying targets with a single hand-crafted prompt, we propose to distill the information learned from multiple prompts. Experimental results show the superior performance of our proposed model in both full-data and few-shot scenarios.",,arXiv,['cs.cl'],, +709,multimodality helps unimodality crossmodal fewshot learning with multimodal models,"['Zhiqiu Lin', 'Samuel Yu', 'Zhiyi Kuang', 'Deepak Pathak', 'Deva Ramanan']",http://arxiv.org/pdf/2301.06267v4.pdf,2023-01-16,," The ability to quickly learn a new task with minimal instruction - known as few-shot learning - is a central aspect of intelligent agents. Classical few-shot benchmarks make use of few-shot samples from a single modality, but such samples may not be sufficient to characterize an entire concept class. In contrast, humans use cross-modal information to learn new concepts efficiently. In this work, we demonstrate that one can indeed build a better ${\bf visual}$ dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them bark. To do so, we exploit the fact that recent multimodal foundation models such as CLIP are inherently cross-modal, mapping different modalities to the same representation space. Specifically, we propose a simple cross-modal adaptation approach that learns from few-shot examples spanning different modalities. By repurposing class names as additional one-shot training samples, we achieve SOTA results with an embarrassingly simple linear classifier for vision-language adaptation. 
Furthermore, we show that our approach can benefit existing methods such as prefix tuning, adapters, and classifier ensembling. Finally, to explore other modalities beyond vision and language, we construct the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal training to improve the performance of both image and audio classification.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",, +710,rplkg robust prompt learning with knowledge graph,"['Yewon Kim', 'YongTaek Lim', 'Dokyung Yoon', 'KyungWoo Song']",http://arxiv.org/pdf/2304.10805v1.pdf,2023-04-21,," Large-scale pre-trained models have been known that they are transferable, and they generalize well on the unseen dataset. Recently, multimodal pre-trained models such as CLIP show significant performance improvement in diverse experiments. However, when the labeled dataset is limited, the generalization of a new dataset or domain is still challenging. To improve the generalization performance on few-shot learning, there have been diverse efforts, such as prompt learning and adapter. However, the current few-shot adaptation methods are not interpretable, and they require a high computation cost for adaptation. In this study, we propose a new method, robust prompt learning with knowledge graph (RPLKG). Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets. Our model obtains cached embeddings of prompt sets after one forwarding from a large pre-trained model. After that, model optimizes the prompt selection processes with GumbelSoftmax. In this way, our model is trained using relatively little memory and learning time. Also, RPLKG selects the optimal interpretable prompt automatically, depending on the dataset. In summary, RPLKG is i) interpretable, ii) requires small computation resources, and iii) easy to incorporate prior human knowledge. To validate the RPLKG, we provide comprehensive experimental results on few-shot learning, domain generalization and new class generalization setting. RPLKG shows a significant performance improvement compared to zero-shot learning and competitive performance against several prompt learning methods using much lower resources.",,arXiv,"['cs.ai', 'cs.lg']",, +711,adversarial robustness of promptbased fewshot learning for natural language understanding,"['Venkata Prabhakara Sarath Nookala', 'Gaurav Verma', 'Subhabrata Mukherjee', 'Srijan Kumar']",http://arxiv.org/pdf/2306.11066v2.pdf,2023-06-19,," State-of-the-art few-shot learning (FSL) methods leverage prompt-based fine-tuning to obtain remarkable results for natural language understanding (NLU) tasks. While much of the prior FSL methods focus on improving downstream task performance, there is a limited understanding of the adversarial robustness of such methods. In this work, we conduct an extensive study of several state-of-the-art FSL methods to assess their robustness to adversarial perturbations. To better understand the impact of various factors towards robustness (or the lack of it), we evaluate prompt-based FSL methods against fully fine-tuned models for aspects such as the use of unlabeled data, multiple prompts, number of few-shot examples, model size and type. Our results on six GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL methods lead to a notable relative drop in task performance (i.e., are less robust) in the face of adversarial perturbations. However, using (i) unlabeled data for prompt-based FSL and (ii) multiple prompts flip the trend. 
We further demonstrate that increasing the number of few-shot examples and model size lead to increased adversarial robustness of vanilla FSL methods. Broadly, our work sheds light on the adversarial robustness evaluation of prompt-based FSL methods for NLU tasks.",,arXiv,"['cs.cl', 'cs.lg']",, +712,unifiedskg unifying and multitasking structured knowledge grounding with texttotext language models,"['Tianbao Xie', 'Chen Henry Wu', 'Peng Shi', 'Ruiqi Zhong', 'Torsten Scholak', 'Michihiro Yasunaga', 'Chien-Sheng Wu', 'Ming Zhong', 'Pengcheng Yin', 'Sida I. Wang', 'Victor Zhong', 'Bailin Wang', 'Chengzu Li', 'Connor Boyle', 'Ansong Ni', 'Ziyu Yao', 'Dragomir Radev', 'Caiming Xiong', 'Lingpeng Kong', 'Rui Zhang', 'Noah A. Smith', 'Luke Zettlemoyer', 'Tao Yu']",http://arxiv.org/pdf/2201.05966v3.pdf,2022-01-16,," Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they have been studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UnifiedSKG framework, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset. We use UnifiedSKG to benchmark T5 with different sizes and show that T5, with simple modifications when necessary, achieves state-of-the-art performance on almost all of the 21 tasks. We further demonstrate that multi-task prefix-tuning improves the performance on most tasks, largely improving the overall performance. UnifiedSKG also facilitates the investigation of zero-shot and few-shot learning, and we show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. We also use UnifiedSKG to conduct a series of controlled experiments on structured knowledge encoding variants across SKG tasks. UnifiedSKG is easily extensible to more tasks, and it is open-sourced at https://github.com/hkunlp/unifiedskg.",,arXiv,['cs.cl'],, +713,a promptbased fewshot learning approach to software conflict detection,"['Robert K. Helmeczi', 'Mucahit Cevik', 'Savas Yıldırım']",http://arxiv.org/pdf/2211.02709v1.pdf,2022-11-04,," A software requirement specification (SRS) document is an essential part of the software development life cycle which outlines the requirements that a software program in development must satisfy. This document is often specified by a diverse group of stakeholders and is subject to continual change, making the process of maintaining the document and detecting conflicts between requirements an essential task in software development. Notably, projects that do not address conflicts in the SRS document early on face considerable problems later in the development life cycle. These problems incur substantial costs in terms of time and money, and these costs often become insurmountable barriers that ultimately result in the termination of a software project altogether. As a result, early detection of SRS conflicts is critical to project sustainability. The conflict detection task is approached in numerous ways, many of which require a significant amount of manual intervention from developers, or require access to a large amount of labeled, task-specific training data. In this work, we propose using a prompt-based learning approach to perform few-shot learning for conflict detection. 
We compare our results to supervised learning approaches that use pretrained language models, such as BERT and its variants. Our results show that prompting with just 32 labeled examples can achieve a similar level of performance in many key metrics to that of supervised learning on training sets that are magnitudes larger in size. In contrast to many other conflict detection approaches, we make no assumptions about the type of underlying requirements, allowing us to analyze pairings of both functional and non-functional requirements. This allows us to omit the potentially expensive task of filtering out non-functional requirements from our dataset.",,arXiv,['cs.se'],, +714,"crosslingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing","['Tal Schuster', 'Ori Ram', 'Regina Barzilay', 'Amir Globerson']",http://arxiv.org/pdf/1902.09492v2.pdf,2019-02-25,," We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion. While contextual embeddings have been shown to yield richer representations of meaning compared to their static counterparts, aligning them poses a challenge due to their dynamic nature. To this end, we construct context-independent variants of the original monolingual spaces and utilize their mapping to derive an alignment for the context-dependent spaces. This mapping readily supports processing of a target language, improving transfer by context-aware embeddings. Our experimental results demonstrate the effectiveness of this approach for zero-shot and few-shot learning of dependency parsing. Specifically, our method consistently outperforms the previous state-of-the-art on 6 tested languages, yielding an improvement of 6.8 LAS points on average.",,arXiv,"['cs.cl', 'cs.lg']",, +715,calibrate before use improving fewshot performance of language models,"['Tony Z. Zhao', 'Eric Wallace', 'Shi Feng', 'Dan Klein', 'Sameer Singh']",http://arxiv.org/pdf/2102.09690v2.pdf,2021-02-19,," GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as ""N/A"". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across different choices of the prompt.",,arXiv,"['cs.cl', 'cs.lg']",, +716,what's in a measurement using gpt3 on semeval 2021 task 8 measeval,"['Curt Kohler', 'Ron Daniel Jr']",http://arxiv.org/pdf/2106.14720v1.pdf,2021-06-28,," In the summer of 2020 OpenAI released its GPT-3 autoregressive language model to much fanfare. While the model has shown promise on tasks in several areas, it has not always been clear when the results were cherry-picked or when they were the unvarnished output. 
We were particularly interested in what benefitsGPT-3 could bring to the SemEval 2021 MeasEval task - identifying measurementsand their associated attributes in scientific literature. We had alreadyexperimented with multi-turn questions answering as a solution to this task. Wewanted to see if we could use GPT-3's few-shot learning capabilities to moreeasily develop a solution that would have better performance than our priorwork. Unfortunately, we have not been successful in that effort. This paperdiscusses the approach we used, challenges we encountered, and results weobserved. Some of the problems we encountered were simply due to the state ofthe art. For example, the limits on the size of the prompt and answer limitedthe amount of the training signal that could be offered. Others are morefundamental. We are unaware of generative models that excel in retainingfactual information. Also, the impact of changes in the prompts isunpredictable, making it hard to reliably improve performance.",,arXiv,['cs.cl'],, +717,noisy channel language model prompting for fewshot text classification,"['Sewon Min', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2108.04106v3.pdf,2021-08-09,," We introduce a noisy channel approach for language model prompting infew-shot text classification. Instead of computing the likelihood of the labelgiven the input (referred as direct models), channel models compute theconditional probability of the input given the label, and are thereby requiredto explain every word in the input. We use channel models for recently proposedfew-shot learning methods with no or very limited updates to the language modelparameters, via either in-context demonstration or prompt tuning. Ourexperiments show that, for both methods, channel models significantlyoutperform their direct counterparts, which we attribute to their stability,i.e., lower variance and higher worst-case accuracy. We also present extensiveablations that provide recommendations for when to use channel prompt tuninginstead of other competitive methods (e.g., direct head tuning): channel prompttuning is preferred when the number of training examples is small, labels inthe training data are imbalanced, or generalization to unseen labels isrequired.",,arXiv,"['cs.cl', 'cs.ai']",, +718,flex unifying evaluation for fewshot nlp,"['Jonathan Bragg', 'Arman Cohan', 'Kyle Lo', 'Iz Beltagy']",http://arxiv.org/pdf/2107.07170v2.pdf,2021-07-15,," Few-shot NLP research is highly active, yet conducted in disjoint researchthreads with evaluation suites that lack challenging-yet-realistic testingsetups and fail to employ careful experimental design. Consequently, thecommunity does not know which techniques perform best or even if theyoutperform simple baselines. In response, we formulate the FLEX Principles, aset of requirements and best practices for unified, rigorous, valid, andcost-sensitive few-shot NLP evaluation. These principles include Sample SizeDesign, a novel approach to benchmark design that optimizes statisticalaccuracy and precision while keeping evaluation costs manageable. Following theprinciples, we release the FLEX benchmark, which includes four few-shottransfer settings, zero-shot evaluation, and a public leaderboard that coversdiverse NLP tasks. 
In addition, we present UniFew, a prompt-based model forfew-shot learning that unifies pretraining and finetuning prompt formats,eschewing complex machinery of recent prompt-based approaches in adaptingdownstream task formats to language model pretraining objectives. Wedemonstrate that despite simplicity, UniFew achieves results competitive withboth popular meta-learning and prompt-based approaches.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, +719,conqx semantic expansion of spoken queries for intent detection based on conditioned text generation,"['Eyup Halit Yilmaz', 'Cagri Toraman']",http://arxiv.org/pdf/2109.00729v1.pdf,2021-09-02,," Intent detection of spoken queries is a challenging task due to their noisystructure and short length. To provide additional information regarding thequery and enhance the performance of intent detection, we propose a method forsemantic expansion of spoken queries, called ConQX, which utilizes the textgeneration ability of an auto-regressive language model, GPT-2. To avoidoff-topic text generation, we condition the input query to a structured contextwith prompt mining. We then apply zero-shot, one-shot, and few-shot learning.We lastly use the expanded queries to fine-tune BERT and RoBERTa for intentdetection. The experimental results show that the performance of intentdetection can be improved by our semantic expansion method.",,arXiv,"['cs.cl', 'cs.ai']",, +720,do promptbased models really understand the meaning of their prompts,"['Albert Webson', 'Ellie Pavlick']",http://arxiv.org/pdf/2109.01247v2.pdf,2021-09-02,," Recently, a boom of papers has shown extraordinary progress in zero-shot andfew-shot learning with various prompt-based models. It is commonly argued thatprompts help models to learn faster in the same way that humans learn fasterwhen provided with task instructions expressed in natural language. In thisstudy, we experiment with over 30 prompt templates manually written for naturallanguage inference (NLI). We find that models learn just as fast with manyprompts that are intentionally irrelevant or even pathologically misleading asthey do with instructively ""good"" prompts. Further, such patterns hold even formodels as large as 175 billion parameters (Brown et al., 2020) as well as therecently proposed instruction-tuned models which are trained on hundreds ofprompts (Sanh et al., 2022). That is, instruction-tuned models often producegood predictions with irrelevant and misleading prompts even at zero shots. Insum, notwithstanding prompt-based models' impressive improvement, we findevidence of serious limitations that question the degree to which suchimprovement is derived from models understanding task instructions in waysanalogous to humans' use of task instructions.",,arXiv,['cs.cl'],, +721,fewshot emotion recognition in conversation with sequential prototypical networks,"['Gaël Guibon', 'Matthieu Labeau', 'Hélène Flamein', 'Luce Lefeuvre', 'Chloé Clavel']",http://arxiv.org/pdf/2109.09366v1.pdf,2021-09-20,," Several recent studies on dyadic human-human interactions have been done onconversations without specific business objectives. However, many companiesmight benefit from studies dedicated to more precise environments such as aftersales services or customer satisfaction surveys. In this work, we placeourselves in the scope of a live chat customer service in which we want todetect emotions and their evolution in the conversation flow. 
This contextleads to multiple challenges that range from exploiting restricted, small andmostly unlabeled datasets to finding and adapting methods for such context.Wetackle these challenges by using Few-Shot Learning while making the hypothesisit can serve conversational emotion classification for different languages andsparse labels. We contribute by proposing a variation of Prototypical Networksfor sequence labeling in conversation that we name ProtoSeq. We test thismethod on two datasets with different languages: daily conversations in Englishand customer service chat conversations in French. When applied to emotionclassification in conversations, our method proved to be competitive even whencompared to other ones.",,arXiv,"['cs.cl', 'cs.lg']",, +722,useridentifier implicit user representations for simple and effective personalized sentiment analysis,"['Fatemehsadat Mireshghallah', 'Vaishnavi Shrivastava', 'Milad Shokouhi', 'Taylor Berg-Kirkpatrick', 'Robert Sim', 'Dimitrios Dimitriadis']",http://arxiv.org/pdf/2110.00135v2.pdf,2021-10-01,," Global models are trained to be as generalizable as possible, with userinvariance considered desirable since the models are shared across multitudesof users. As such, these models are often unable to produce personalizedresponses for individual users, based on their data. Contrary to widely-usedpersonalization techniques based on few-shot learning, we proposeUserIdentifier, a novel scheme for training a single shared model for allusers. Our approach produces personalized responses by adding fixed,non-trainable user identifiers to the input data. We empirically demonstratethat this proposed method outperforms the prefix-tuning based state-of-the-artapproach by up to 13%, on a suite of sentiment analysis datasets. We also showthat, unlike prior work, this method needs neither any additional modelparameters nor any extra rounds of few-shot fine-tuning.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, +723,instanceaware prompt learning for language understanding and generation,"['Feihu Jin', 'Jinliang Lu', 'Jiajun Zhang', 'Chengqing Zong']",http://arxiv.org/pdf/2201.07126v1.pdf,2022-01-18,," Recently, prompt learning has become a new paradigm to utilize pre-trainedlanguage models (PLMs) and achieves promising results in downstream tasks witha negligible increase of parameters. The current usage of discrete andcontinuous prompts assumes that the prompt is fixed for a specific task and allsamples in the task share the same prompt. However, a task may contain quitediverse samples in which some are easy and others are difficult, and diverseprompts are desirable. In this paper, we propose an instance-aware promptlearning method that learns a different prompt for each instance. Specifically,we suppose that each learnable prompt token has a different contribution todifferent instances, and we learn the contribution by calculating the relevancescore between an instance and each prompt token. The contribution weightedprompt would be instance aware. We apply our method to both unidirectional andbidirectional PLMs on both language understanding and generation tasks.Extensive experiments demonstrate that our method obtains considerableimprovements compared to strong baselines. 
Especially, our method achieves thestate-of-the-art on the SuperGLUE few-shot learning benchmark.",,arXiv,['cs.cl'],, +724,generating training data with language models towards zeroshot language understanding,"['Yu Meng', 'Jiaxin Huang', 'Yu Zhang', 'Jiawei Han']",http://arxiv.org/pdf/2202.04538v2.pdf,2022-02-09,," Pretrained language models (PLMs) have demonstrated remarkable performance invarious natural language processing tasks: Unidirectional PLMs (e.g., GPT) arewell known for their superior text generation capabilities; bidirectional PLMs(e.g., BERT) have been the prominent choice for natural language understanding(NLU) tasks. While both types of models have achieved promising few-shotlearning performance, their potential for zero-shot learning has beenunderexplored. In this paper, we present a simple approach that uses both typesof PLMs for fully zero-shot learning of NLU tasks without requiring anytask-specific data: A unidirectional PLM generates class-conditioned textsguided by prompts, which are used as the training data for fine-tuning abidirectional PLM. With quality training data selected based on the generationprobability and regularization techniques (label smoothing and temporalensembling) applied to the fine-tuning stage for better generalization andstability, our approach demonstrates strong performance across sevenclassification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and92.8 on SST-2), significantly outperforming zero-shot prompting methods andachieving even comparable results to strong few-shot approaches using 32training samples per class.",,arXiv,"['cs.cl', 'cs.lg']",, +725,variational autoencoder with disentanglement priors for lowresource taskspecific natural language generation,"['Zhuang Li', 'Lizhen Qu', 'Qiongkai Xu', 'Tongtong Wu', 'Tianyang Zhan', 'Gholamreza Haffari']",http://arxiv.org/pdf/2202.13363v3.pdf,2022-02-27,," In this paper, we propose a variational autoencoder with disentanglementpriors, VAE-DPRIOR, for task-specific natural language generation with none ora handful of task-specific labeled examples. In order to tackle compositionalgeneralization across tasks, our model performs disentangled representationlearning by introducing a conditional prior for the latent content space andanother conditional prior for the latent label space. Both types of priorssatisfy a novel property called $\epsilon$-disentangled. We show bothempirically and theoretically that the novel priors can disentanglerepresentations even without specific regularizations as in the prior work. Thecontent prior enables directly sampling diverse content representations fromthe content space learned from the seen tasks, and fuse them with therepresentations of novel tasks for generating semantically diverse texts in thelow-resource settings. Our extensive experiments demonstrate the superiorperformance of our model over competitive baselines in terms of i) dataaugmentation in continuous zero/few-shot learning, and ii) text style transferin the few-shot setting.",,arXiv,['cs.cl'],, +726,claret pretraining a correlationaware contexttoevent transformer for eventcentric generation and classification,"['Yucheng Zhou', 'Tao Shen', 'Xiubo Geng', 'Guodong Long', 'Daxin Jiang']",http://arxiv.org/pdf/2203.02225v2.pdf,2022-03-04,," Generating new events given context with correlated ones plays a crucial rolein many event-centric reasoning tasks. Existing works either limit their scopeto specific scenarios or overlook event-level correlations. 
In this paper, wepropose to pre-train a general Correlation-aware context-to-Event Transformer(ClarET) for event-centric reasoning. To achieve this, we propose three novelevent-centric objectives, i.e., whole event recovering, contrastiveevent-correlation encoding and prompt-based event locating, which highlightevent-level correlations with effective training. The proposed ClarET isapplicable to a wide range of event-centric reasoning scenarios, consideringits versatility of (i) event-correlation types (e.g., causal, temporal,contrast), (ii) application formulations (i.e., generation and classification),and (iii) reasoning types (e.g., abductive, counterfactual and endingreasoning). Empirical fine-tuning results, as well as zero- and few-shotlearning, on 9 benchmarks (5 generation and 4 classification tasks covering 4reasoning types with diverse event correlations), verify its effectiveness andgeneralization ability.",,arXiv,['cs.cl'],, +727,pretrained tokenreplaced detection model as fewshot learner,"['Zicheng Li', 'Shoushan Li', 'Guodong Zhou']",http://arxiv.org/pdf/2203.03235v2.pdf,2022-03-07,," Pre-trained masked language models have demonstrated remarkable ability asfew-shot learners. In this paper, as an alternative, we propose a novelapproach to few-shot learning with pre-trained token-replaced detection modelslike ELECTRA. In this approach, we reformulate a classification or a regressiontask as a token-replaced detection problem. Specifically, we first define atemplate and label description words for each task and put them into the inputto form a natural language prompt. Then, we employ the pre-trainedtoken-replaced detection model to predict which label description word is themost original (i.e., least replaced) among all label description words in theprompt. A systematic evaluation on 16 datasets demonstrates that our approachoutperforms few-shot learners with pre-trained masked language models in bothone-sentence and two-sentence learning tasks.",,arXiv,"['cs.cl', 'cs.ai']",, +728,prototypical verbalizer for promptbased fewshot tuning,"['Ganqu Cui', 'Shengding Hu', 'Ning Ding', 'Longtao Huang', 'Zhiyuan Liu']",http://arxiv.org/pdf/2203.09770v1.pdf,2022-03-18,," Prompt-based tuning for pre-trained language models (PLMs) has shown itseffectiveness in few-shot learning. Typically, prompt-based tuning wraps theinput text into a cloze question. To make predictions, the model maps theoutput words to labels via a verbalizer, which is either manually designed orautomatically built. However, manual verbalizers heavily depend ondomain-specific prior knowledge and human efforts, while finding appropriatelabel words automatically still remains challenging.In this work, we proposethe prototypical verbalizer (ProtoVerb) which is built directly from trainingdata. Specifically, ProtoVerb learns prototype vectors as verbalizers bycontrastive learning. In this way, the prototypes summarize training instancesand are able to enclose rich class-level semantics. We conduct experiments onboth topic classification and entity typing tasks, and the results demonstratethat ProtoVerb significantly outperforms current automatic verbalizers,especially when training data is extremely scarce. More surprisingly, ProtoVerbconsistently boosts prompt-based tuning even on untuned PLMs, indicating anelegant non-tuning way to utilize PLMs. Our codes are avaliable athttps://github.com/thunlp/OpenPrompt.",,arXiv,"['cs.cl', 'cs.lg']",, +729,inverse is better! 
fast and accurate prompt for fewshot slot tagging,"['Yutai Hou', 'Cheng Chen', 'Xianzhen Luo', 'Bohan Li', 'Wanxiang Che']",http://arxiv.org/pdf/2204.00885v1.pdf,2022-04-02,," Prompting methods recently achieve impressive success in few-shot learning.These methods modify input samples with prompt sentence pieces, and decodelabel tokens to map samples to corresponding labels. However, such a paradigmis very inefficient for the task of slot tagging. Since slot tagging samplesare multiple consecutive words in a sentence, the prompting methods have toenumerate all n-grams token spans to find all the possible slots, which greatlyslows down the prediction. To tackle this, we introduce an inverse paradigm forprompting. Different from the classic prompts mapping tokens to labels, wereversely predict slot values given slot types. Such inverse prompting onlyrequires a one-turn prediction for each slot type and greatly speeds up theprediction. Besides, we propose a novel Iterative Prediction Strategy, fromwhich the model learns to refine predictions by considering the relationsbetween different slot types. We find, somewhat surprisingly, the proposedmethod not only predicts faster but also significantly improves the effect(improve over 6.1 F1-scores on 10-shot setting) and achieves newstate-of-the-art performance.",,arXiv,"['cs.cl', 'cs.ai']",, +730,leveraging pretrained language models for conversational information seeking from text,"['Patrizio Bellan', 'Mauro Dragoni', 'Chiara Ghidini']",http://arxiv.org/pdf/2204.03542v1.pdf,2022-03-31,," Recent advances in Natural Language Processing, and in particular on theconstruction of very large pre-trained language representation models, isopening up new perspectives on the construction of conversational informationseeking (CIS) systems. In this paper we investigate the usage of in-contextlearning and pre-trained language representation models to address the problemof information extraction from process description documents, in an incrementalquestion and answering oriented fashion. In particular we investigate the usageof the native GPT-3 (Generative Pre-trained Transformer 3) model, together withtwo in-context learning customizations that inject conceptual definitions and alimited number of samples in a few shot-learning fashion. The results highlightthe potential of the approach and the usefulness of the in-context learningcustomizations, which can substantially contribute to address the ""trainingdata challenge"" of deep learning based NLP techniques the BPM field. It alsohighlight the challenge posed by control flow relations for which furthertraining needs to be devised.",,arXiv,"['cs.cl', 'cs.ai']",, +731,superprompting utilizing modelindependent contextual data to reduce data annotation required in visual commonsense tasks,"['Navid Rezaei', 'Marek Z. Reformat']",http://arxiv.org/pdf/2204.11922v1.pdf,2022-04-25,," Pre-trained language models have shown excellent results in few-shot learningscenarios using in-context learning. Although it is impressive, the size oflanguage models can be prohibitive to make them usable in on-deviceapplications, such as sensors or smartphones. With smaller language models,task-specific data annotation is needed to fine-tune the language model for aspecific purpose. However, data annotation can have a substantial financial andtime burden for small research groups, startups, and even companies. 
In thispaper, we analyze different prompt-based fine-tuning techniques to improveresults on both language and multimodal causal transformer models. To evaluateour results, we use a dataset focusing on visual commonsense reasoning in time.Our results show that by simple model-agnostic prompt-based fine-tuning,comparable results can be reached by only using 35%-40% of the fine-tuningtraining dataset. The proposed approaches result in significant time andfinancial savings. As the proposed methods make minimal architecturalassumptions, other researchers can use the results in their transformer modelswith minimal adaptations. We plan to release the source code freely to make iteasier for the community to use and contribute to our work.",,arXiv,"['cs.cl', 'cs.ai']",, +732,building a role specified opendomain dialogue system leveraging largescale language models,"['Sanghwan Bae', 'Donghyun Kwak', 'Sungdong Kim', 'Donghoon Ham', 'Soyoung Kang', 'Sang-Woo Lee', 'Woomyoung Park']",http://arxiv.org/pdf/2205.00176v1.pdf,2022-04-30,," Recent open-domain dialogue models have brought numerous breakthroughs.However, building a chat system is not scalable since it often requires aconsiderable volume of human-human dialogue data, especially when enforcingfeatures such as persona, style, or safety. In this work, we study thechallenge of imposing roles on open-domain dialogue systems, with the goal ofmaking the systems maintain consistent roles while conversing naturally withhumans. To accomplish this, the system must satisfy a role specification thatincludes certain conditions on the stated features as well as a system policyon whether or not certain types of utterances are allowed. For this, we proposean efficient data collection framework leveraging in-context few-shot learningof large-scale language models for building role-satisfying dialogue datasetfrom scratch. We then compare various architectures for open-domain dialoguesystems in terms of meeting role specifications while maintainingconversational abilities. Automatic and human evaluations show that our modelsreturn few out-of-bounds utterances, keeping competitive performance on generalmetrics. We release a Korean dialogue dataset we built for further research.",,arXiv,['cs.cl'],, +733,easynlp a comprehensive and easytouse toolkit for natural language processing,"['Chengyu Wang', 'Minghui Qiu', 'Chen Shi', 'Taolin Zhang', 'Tingting Liu', 'Lei Li', 'Jianing Wang', 'Ming Wang', 'Jun Huang', 'Wei Lin']",http://arxiv.org/pdf/2205.00258v2.pdf,2022-04-30,," The success of Pre-Trained Models (PTMs) has reshaped the development ofNatural Language Processing (NLP). Yet, it is not easy to obtainhigh-performing models and deploy them online for industrial practitioners. Tobridge this gap, EasyNLP is designed to make it easy to build NLP applications,which supports a comprehensive suite of NLP algorithms. It further featuresknowledge-enhanced pre-training, knowledge distillation and few-shot learningfunctionalities for large-scale PTMs, and provides a unified framework of modeltraining, inference and deployment for real-world applications. 
Currently,EasyNLP has powered over ten business units within Alibaba Group and isseamlessly integrated to the Platform of AI (PAI) products on Alibaba Cloud.The source code of our EasyNLP toolkit is released at GitHub(https://github.com/alibaba/EasyNLP).",,arXiv,['cs.cl'],, +734,politics pretraining with samestory article comparison for ideology prediction and stance detection,"['Yujian Liu', 'Xinliang Frederick Zhang', 'David Wegsman', 'Nick Beauchamp', 'Lu Wang']",http://arxiv.org/pdf/2205.00619v1.pdf,2022-05-02,," Ideology is at the core of political science research. Yet, there still doesnot exist general-purpose tools to characterize and predict ideology acrossdifferent genres of text. To this end, we study Pretrained Language Modelsusing novel ideology-driven pretraining objectives that rely on the comparisonof articles on the same story written by media of different ideologies. Wefurther collect a large-scale dataset, consisting of more than 3.6M politicalnews articles, for pretraining. Our model POLITICS outperforms strong baselinesand the previous state-of-the-art models on ideology prediction and stancedetection tasks. Further analyses show that POLITICS is especially good atunderstanding long or formally written texts, and is also robust in few-shotlearning scenarios.",,arXiv,['cs.cl'],, +735,kecp knowledge enhanced contrastive prompting for fewshot extractive question answering,"['Jianing Wang', 'Chengyu Wang', 'Minghui Qiu', 'Qiuhui Shi', 'Hongbin Wang', 'Jun Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.03071v1.pdf,2022-05-06,," Extractive Question Answering (EQA) is one of the most important tasks inMachine Reading Comprehension (MRC), which can be solved by fine-tuning thespan selecting heads of Pre-trained Language Models (PLMs). However, mostexisting approaches for MRC may perform poorly in the few-shot learningscenario. To solve this issue, we propose a novel framework named KnowledgeEnhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads toPLMs, we introduce a seminal paradigm for EQA that transform the task into anon-autoregressive Masked Language Modeling (MLM) generation problem.Simultaneously, rich semantics from the external knowledge base (KB) and thepassage context are support for enhancing the representations of the query. Inaddition, to boost the performance of PLMs, we jointly train the model by theMLM and contrastive learning objectives. Experiments on multiple benchmarksdemonstrate that our method consistently outperforms state-of-the-artapproaches in few-shot settings by a large margin.",,arXiv,"['cs.cl', 'cs.ai']",, +736,proqa structural promptbased pretraining for unified question answering,"['Wanjun Zhong', 'Yifan Gao', 'Ning Ding', 'Yujia Qin', 'Zhiyuan Liu', 'Ming Zhou', 'Jiahai Wang', 'Jian Yin', 'Nan Duan']",http://arxiv.org/pdf/2205.04040v2.pdf,2022-05-09,," Question Answering (QA) is a longstanding challenge in natural languageprocessing. Existing QA works mostly focus on specific question types,knowledge domains, or reasoning skills. The specialty in QA research hinderssystems from modeling commonalities between tasks and generalization for widerapplications. To address this issue, we present ProQA, a unified QA paradigmthat solves various tasks through a single model. ProQA takes a unifiedstructural prompt as the bridge and improves the QA-centric ability bystructural prompt-based pre-training. 
Through a structurally designedprompt-based input schema, ProQA concurrently models the knowledgegeneralization for all QA tasks while keeping the knowledge customization forevery specific QA task. Furthermore, ProQA is pre-trained with structuralprompt-formatted large-scale synthesized corpus, which empowers the model withthe commonly-required QA ability. Experimental results on 11 QA benchmarksdemonstrate that ProQA consistently boosts performance on both full datafine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore,ProQA exhibits strong ability in both continual learning and transfer learningby taking the advantages of the structural prompt.",,arXiv,['cs.cl'],, +737,allsh active learning guided by local sensitivity and hardness,"['Shujian Zhang', 'Chengyue Gong', 'Xingchao Liu', 'Pengcheng He', 'Weizhu Chen', 'Mingyuan Zhou']",http://arxiv.org/pdf/2205.04980v2.pdf,2022-05-10,," Active learning, which effectively collects informative unlabeled data forannotation, reduces the demand for labeled data. In this work, we propose toretrieve unlabeled samples with a local sensitivity and hardness-awareacquisition function. The proposed method generates data copies through localperturbations and selects data points whose predictive likelihoods diverge themost from their copies. We further empower our acquisition function byinjecting the select-worst case perturbation. Our method achieves consistentgains over the commonly used active learning strategies in variousclassification tasks. Furthermore, we observe consistent improvements over thebaselines on the study of prompt selection in prompt-based few-shot learning.These experiments demonstrate that our acquisition guided by local sensitivityand hardness can be effective and beneficial for many NLP tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +738,prototypical calibration for fewshot learning of language models,"['Zhixiong Han', 'Yaru Hao', 'Li Dong', 'Yutao Sun', 'Furu Wei']",http://arxiv.org/pdf/2205.10183v2.pdf,2022-05-20,," In-context learning of GPT-like models has been recognized as fragile acrossdifferent hand-crafted templates, and demonstration permutations. In this work,we propose prototypical calibration to adaptively learn a more robust decisionboundary for zero- and few-shot classification, instead of greedy decoding.Concretely, our method first adopts Gaussian mixture distribution to estimatethe prototypical clusters for all categories. Then we assign each cluster tothe corresponding label by solving a weighted bipartite matching problem. Givenan example, its prediction is calibrated by the likelihood of prototypicalclusters. Experimental results show that prototypical calibration yields asubstantial improvement on a diverse set of tasks. Extensive analysis acrossdifferent scales also indicates that our method calibrates the decisionboundary as expected, greatly improving the robustness of GPT to templates,permutations, and class imbalance.",,arXiv,['cs.cl'],, +739,bbtv2 towards a gradientfree future with large language models,"['Tianxiang Sun', 'Zhengfu He', 'Hong Qian', 'Yunhua Zhou', 'Xuanjing Huang', 'Xipeng Qiu']",http://arxiv.org/pdf/2205.11200v2.pdf,2022-05-23,," Most downstream adaptation methods tune all or part of the parameters ofpre-trained models (PTMs) through gradient descent, where the tuning costincreases linearly with the growth of the model size. 
By contrast,gradient-free methods only require the forward computation of the PTM to tunethe prompt, retaining the benefits of efficient tuning and deployment. Though,past work on gradient-free tuning often introduces gradient descent to seek agood initialization of prompt and lacks versatility across tasks and PTMs. Inthis paper, we present BBTv2, an improved version of Black-Box Tuning, to drivePTMs for few-shot learning. We prepend continuous prompts to every layer of thePTM and propose a divide-and-conquer gradient-free algorithm to optimize theprompts at different layers alternately. Extensive experiments across varioustasks and PTMs show that BBTv2 can achieve comparable performance to full modeltuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA,BitFit, etc.) under few-shot settings while maintaining much fewer tunableparameters.",,arXiv,"['cs.cl', 'cs.ai']",, +740,neural prompt search,"['Yuanhan Zhang', 'Kaiyang Zhou', 'Ziwei Liu']",http://arxiv.org/pdf/2206.04673v2.pdf,2022-06-09,," The size of vision models has grown exponentially over the last few years,especially after the emergence of Vision Transformer. This has motivated thedevelopment of parameter-efficient tuning methods, such as learning adapterlayers or visual prompt tokens, which allow a tiny portion of model parametersto be trained whereas the vast majority obtained from pre-training are frozen.However, designing a proper tuning method is non-trivial: one might need to tryout a lengthy list of design choices, not to mention that each downstreamdataset often requires custom designs. In this paper, we view the existingparameter-efficient tuning methods as ""prompt modules"" and propose NeuralprOmpt seArcH (NOAH), a novel approach that learns, for large vision models,the optimal design of prompt modules through a neural architecture searchalgorithm, specifically for each downstream dataset. By conducting extensiveexperiments on over 20 vision datasets, we demonstrate that NOAH (i) issuperior to individual prompt modules, (ii) has a good few-shot learningability, and (iii) is domain-generalizable. The code and models are availableat https://github.com/Davidzhangyuanhan/NOAH.",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",, +741,prompting decision transformer for fewshot policy generalization,"['Mengdi Xu', 'Yikang Shen', 'Shun Zhang', 'Yuchen Lu', 'Ding Zhao', 'Joshua B. Tenenbaum', 'Chuang Gan']",http://arxiv.org/pdf/2206.13499v1.pdf,2022-06-27,," Humans can leverage prior experience and learn novel tasks from a handful ofdemonstrations. In contrast to offline meta-reinforcement learning, which aimsto achieve quick adaptation through better algorithm design, we investigate theeffect of architecture inductive bias on the few-shot learning capability. Wepropose a Prompt-based Decision Transformer (Prompt-DT), which leverages thesequential modeling ability of the Transformer architecture and the promptframework to achieve few-shot adaptation in offline RL. We design thetrajectory prompt, which contains segments of the few-shot demonstrations, andencodes task-specific information to guide policy generation. Our experimentsin five MuJoCo control benchmarks show that Prompt-DT is a strong few-shotlearner without any extra finetuning on unseen target tasks. Prompt-DToutperforms its variants and strong meta offline RL baselines by a large marginwith a trajectory prompt containing only a few timesteps. 
Prompt-DT is alsorobust to prompt length changes and can generalize to out-of-distribution (OOD)environments.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ro']",, +742,fewshot training llms for projectspecific codesummarization,"['Toufique Ahmed', 'Premkumar Devanbu']",http://arxiv.org/pdf/2207.04237v2.pdf,2022-07-09,," Very large language models (LLMs), such as GPT-3 and Codex have achievedstate-of-the-art performance on several natural-language tasks, and show greatpromise also for code. A particularly exciting aspect of LLMs is their knackfor few-shot and zero-shot learning: they can learn to perform a task with veryfew examples. Few-shotting has particular synergies in software engineering,where there are a lot of phenomena (identifier names, APIs, terminology, codingpatterns) that are known to be highly project-specific. However,project-specific data can be quite limited, especially early in the history ofa project; thus the few-shot learning capacity of LLMs might be very relevant.In this paper, we investigate the use few-shot training with the very large GPT(Generative Pre-trained Transformer) Codex model, and find evidence suggestingthat one can significantly surpass state-of-the-art models forcode-summarization, leveraging project-specific training.",,arXiv,"['cs.se', 'cs.lg']",, +743,convolutional bypasses are better vision transformer adapters,"['Shibo Jie', 'Zhi-Hong Deng']",http://arxiv.org/pdf/2207.07039v3.pdf,2022-07-14,," The pretrain-then-finetune paradigm has been widely adopted in computervision. But as the size of Vision Transformer (ViT) grows exponentially, thefull finetuning becomes prohibitive in view of the heavier storage overhead.Motivated by parameter-efficient transfer learning (PETL) on languagetransformers, recent studies attempt to insert lightweight adaptation modules(e.g., adapter layers or prompt tokens) to pretrained ViT and only finetunethese modules while the pretrained weights are frozen. However, these moduleswere originally proposed to finetune language models and did not take intoaccount the prior knowledge specifically for visual tasks. In this paper, wepropose to construct Convolutional Bypasses (Convpass) in ViT as adaptationmodules, introducing only a small amount (less than 0.5% of model parameters)of trainable parameters to adapt the large ViT. Different from other PETLmethods, Convpass benefits from the hard-coded inductive bias of convolutionallayers and thus is more suitable for visual tasks, especially in the low-dataregime. Experimental results on VTAB-1K benchmark and few-shot learningdatasets show that Convpass outperforms current language-oriented adaptationmodules, demonstrating the necessity to tailor vision-oriented adaptationmodules for adapting vision models.",,arXiv,['cs.cv'],, +744,selfsupervision can be a good fewshot learner,"['Yuning Lu', 'Liangjian Wen', 'Jianzhuang Liu', 'Yajing Liu', 'Xinmei Tian']",http://arxiv.org/pdf/2207.09176v1.pdf,2022-07-19,," Existing few-shot learning (FSL) methods rely on training with a largelabeled dataset, which prevents them from leveraging abundant unlabeled data.From an information-theoretic perspective, we propose an effective unsupervisedFSL method, learning representations with self-supervision. Following theInfoMax principle, our method learns comprehensive representations by capturingthe intrinsic structure of the data. Specifically, we maximize the mutualinformation (MI) of instances and their representations with a low-bias MIestimator to perform self-supervised pre-training. 
Rather than supervisedpre-training focusing on the discriminable features of the seen classes, ourself-supervised model has less bias toward the seen classes, resulting inbetter generalization for unseen classes. We explain that supervisedpre-training and self-supervised pre-training are actually maximizing differentMI objectives. Extensive experiments are further conducted to analyze their FSLperformance with various training settings. Surprisingly, the results show thatself-supervised pre-training can outperform supervised pre-training under theappropriate conditions. Compared with state-of-the-art FSL methods, ourapproach achieves comparable performance on widely used FSL benchmarks withoutany labels of the base classes.",,arXiv,['cs.cv'],, +745,language model cascades,"['David Dohan', 'Winnie Xu', 'Aitor Lewkowycz', 'Jacob Austin', 'David Bieber', 'Raphael Gontijo Lopes', 'Yuhuai Wu', 'Henryk Michalewski', 'Rif A. Saurous', 'Jascha Sohl-dickstein', 'Kevin Murphy', 'Charles Sutton']",http://arxiv.org/pdf/2207.10342v2.pdf,2022-07-21,," Prompted models have demonstrated impressive few-shot learning abilities.Repeated interactions at test-time with a single model, or the composition ofmultiple models together, further expands capabilities. These compositions areprobabilistic models, and may be expressed in the language of graphical modelswith random variables whose values are complex data types such as strings.Cases with control flow and dynamic structure require techniques fromprobabilistic programming, which allow implementing disparate model structuresand inference strategies in a unified language. We formalize several existingtechniques from this perspective, including scratchpads / chain of thought,verifiers, STaR, selection-inference, and tool use. We refer to the resultingprograms as language model cascades.",,arXiv,"['cs.cl', 'cs.ai']",, +746,fewshot adaptation works with unpredictable data,"['Jun Shern Chan', 'Michael Pieler', 'Jonathan Jao', 'Jérémy Scheurer', 'Ethan Perez']",http://arxiv.org/pdf/2208.01009v2.pdf,2022-08-01,," Prior work on language models (LMs) shows that training on a large number ofdiverse tasks improves few-shot learning (FSL) performance on new tasks. Wetake this to the extreme, automatically extracting 413,299 tasks from internettables - orders of magnitude more than the next-largest public datasets.Finetuning on the resulting dataset leads to improved FSL performance onNatural Language Processing (NLP) tasks, but not proportionally to datasetscale. In fact, we find that narrow subsets of our dataset sometimes outperformmore diverse datasets. For example, finetuning on software documentation fromsupport.google.com raises FSL performance by a mean of +7.5% on 52 downstreamtasks, which beats training on 40 human-curated NLP datasets (+6.7%).Finetuning on various narrow datasets leads to similar broad improvementsacross test tasks, suggesting that the gains are not from domain adaptation butadapting to FSL in general. We do not observe clear patterns between thedatasets that lead to FSL gains, leaving open questions about why certain datahelps with FSL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +747,robotic interestingness via humaninformed fewshot object detection,"['Seungchan Kim', 'Chen Wang', 'Bowen Li', 'Sebastian Scherer']",http://arxiv.org/pdf/2208.01084v1.pdf,2022-08-01,," Interestingness recognition is crucial for decision making in autonomousexploration for mobile robots. 
Previous methods proposed an unsupervised onlinelearning approach that can adapt to environments and detect interesting scenesquickly, but lack the ability to adapt to human-informed interesting objects.To solve this problem, we introduce a human-interactive framework,AirInteraction, that can detect human-informed objects via few-shot onlinelearning. To reduce the communication bandwidth, we first apply an onlineunsupervised learning algorithm on the unmanned vehicle for interestingnessrecognition and then only send the potential interesting scenes to abase-station for human inspection. The human operator is able to draw andprovide bounding box annotations for particular interesting objects, which aresent back to the robot to detect similar objects via few-shot learning. Onlyusing few human-labeled examples, the robot can learn novel interesting objectcategories during the mission and detect interesting scenes that contain theobjects. We evaluate our method on various interesting scene recognitiondatasets. To the best of our knowledge, it is the first human-informed few-shotobject detection framework for autonomous exploration.",,arXiv,['cs.ro'],, +748,atlas fewshot learning with retrieval augmented language models,"['Gautier Izacard', 'Patrick Lewis', 'Maria Lomeli', 'Lucas Hosseini', 'Fabio Petroni', 'Timo Schick', 'Jane Dwivedi-Yu', 'Armand Joulin', 'Sebastian Riedel', 'Edouard Grave']",http://arxiv.org/pdf/2208.03299v3.pdf,2022-08-05,," Large language models have shown impressive few-shot results on a wide rangeof tasks. However, when knowledge is key for such results, as is the case fortasks such as question answering and fact checking, massive parameter counts tostore knowledge seem to be needed. Retrieval augmented models are known toexcel at knowledge intensive tasks without the need for as many parameters, butit is unclear whether they work in few-shot settings. In this work we presentAtlas, a carefully designed and pre-trained retrieval augmented language modelable to learn knowledge intensive tasks with very few training examples. Weperform evaluations on a wide range of tasks, including MMLU, KILT andNaturalQuestions, and study the impact of the content of the document index,showing that it can easily be updated. Notably, Atlas reaches over 42% accuracyon Natural Questions using only 64 examples, outperforming a 540B parametersmodel by 3% despite having 50x fewer parameters.",,arXiv,['cs.cl'],, +749,limits of an ai program for solving college math problems,['Ernest Davis'],http://arxiv.org/pdf/2208.06906v1.pdf,2022-08-14,," Drori et al. (2022) report that ""A neural network solves, explains, andgenerates university math problems by program synthesis and few-shot learningat human level ... [It] automatically answers 81\% of university-levelmathematics problems."" The system they describe is indeed impressive; however,the above description is very much overstated. The work of solving the problemsis done, not by a neural network, but by the symbolic algebra package Sympy.Problems of various formats are excluded from consideration. The so-called""explanations"" are just rewordings of lines of code. Answers are marked ascorrect that are not in the form specified in the problem. 
Most seriously, itseems that in many cases the system uses the correct answer given in the testcorpus to guide its path to solving the problem.",,arXiv,['cs.ai'],, +750,efficient fewshot learning without prompts,"['Lewis Tunstall', 'Nils Reimers', 'Unso Eun Seo Jo', 'Luke Bates', 'Daniel Korat', 'Moshe Wasserblat', 'Oren Pereg']",http://arxiv.org/pdf/2209.11055v1.pdf,2022-09-22,," Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) andpattern exploiting training (PET), have achieved impressive results inlabel-scarce settings. However, they are difficult to employ since they aresubject to high variability from manually crafted prompts, and typicallyrequire billion-parameter language models to achieve high accuracy. To addressthese shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), anefficient and prompt-free framework for few-shot fine-tuning of SentenceTransformers (ST). SetFit works by first fine-tuning a pretrained ST on a smallnumber of text pairs, in a contrastive Siamese manner. The resulting model isthen used to generate rich text embeddings, which are used to train aclassification head. This simple framework requires no prompts or verbalizers,and achieves high accuracy with orders of magnitude less parameters thanexisting techniques. Our experiments show that SetFit obtains comparableresults with PEFT and PET techniques, while being an order of magnitude fasterto train. We also show that SetFit can be applied in multilingual settings bysimply switching the ST body. Our code is available athttps://github.com/huggingface/setfit and our datasets athttps://huggingface.co/setfit .",,arXiv,['cs.cl'],, +751,core a retrievethenedit framework for counterfactual data generation,"['Tanay Dixit', 'Bhargavi Paranjape', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2210.04873v2.pdf,2022-10-10,," Counterfactual data augmentation (CDA) -- i.e., adding minimally perturbedinputs during training -- helps reduce model reliance on spurious correlationsand improves generalization to out-of-distribution (OOD) data. Prior work ongenerating counterfactuals only considered restricted classes of perturbations,limiting their effectiveness. We present COunterfactual Generation viaRetrieval and Editing (CORE), a retrieval-augmented generation framework forcreating diverse counterfactual perturbations for CDA. For each trainingexample, CORE first performs a dense retrieval over a task-related unlabeledtext corpus using a learned bi-encoder and extracts relevant counterfactualexcerpts. CORE then incorporates these into prompts to a large language modelwith few-shot learning capabilities, for counterfactual editing. Conditioninglanguage model edits on naturally occurring data results in diverseperturbations. Experiments on natural language inference and sentiment analysisbenchmarks show that CORE counterfactuals are more effective at improvinggeneralization to OOD data compared to other DA approaches. We also show thatthe CORE retrieval framework can be used to encourage diversity in manuallyauthored perturbations",,arXiv,['cs.cl'],, +752,continual training of language models for fewshot learning,"['Zixuan Ke', 'Haowei Lin', 'Yijia Shao', 'Hu Xu', 'Lei Shu', 'Bing Liu']",http://arxiv.org/pdf/2210.05549v1.pdf,2022-10-11,," Recent work on applying large language models (LMs) achieves impressiveperformance in many NLP applications. Adapting or posttraining an LM using anunlabeled domain corpus can produce even better performance for end-tasks inthe domain. 
This paper proposes the problem of continually extending an LM byincrementally post-train the LM with a sequence of unlabeled domain corpora toexpand its knowledge without forgetting its previous skills. The goal is toimprove the few-shot end-task learning in these domains. The resulting systemis called CPT (Continual PostTraining), which to our knowledge, is the firstcontinual post-training system. Experimental results verify its effectiveness.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",, +753,knowledgegrounded dialog state tracking,"['Dian Yu', 'Mingqiu Wang', 'Yuan Cao', 'Izhak Shafran', 'Laurent El Shafey', 'Hagen Soltau']",http://arxiv.org/pdf/2210.06656v1.pdf,2022-10-13,," Knowledge (including structured knowledge such as schema and ontology, andunstructured knowledge such as web corpus) is a critical part of dialogunderstanding, especially for unseen tasks and domains. Traditionally, suchdomain-specific knowledge is encoded implicitly into model parameters for theexecution of downstream tasks, which makes training inefficient. In addition,such models are not easily transferable to new tasks with different schemas. Inthis work, we propose to perform dialog state tracking grounded on knowledgeencoded externally. We query relevant knowledge of various forms based on thedialog context where such information can ground the prediction of dialogstates. We demonstrate superior performance of our proposed method over strongbaselines, especially in the few-shot learning setting.",,arXiv,['cs.cl'],, +754,"visionlanguage pretraining basics, recent advances, and future trends","['Zhe Gan', 'Linjie Li', 'Chunyuan Li', 'Lijuan Wang', 'Zicheng Liu', 'Jianfeng Gao']",http://arxiv.org/pdf/2210.09263v1.pdf,2022-10-17,," This paper surveys vision-language pre-training (VLP) methods for multimodalintelligence that have been developed in the last few years. We group theseapproaches into three categories: ($i$) VLP for image-text tasks, such as imagecaptioning, image-text retrieval, visual question answering, and visualgrounding; ($ii$) VLP for core computer vision tasks, such as (open-set) imageclassification, object detection, and segmentation; and ($iii$) VLP forvideo-text tasks, such as video captioning, video-text retrieval, and videoquestion answering. For each category, we present a comprehensive review ofstate-of-the-art methods, and discuss the progress that has been made andchallenges still being faced, using specific systems and models as casestudies. In addition, for each category, we discuss advanced topics beingactively explored in the research community, such as big foundation models,unified modeling, in-context few-shot learning, knowledge, robustness, andcomputer vision in the wild, to name a few.",,arXiv,"['cs.cv', 'cs.cl']",, +755,better fewshot relation extraction with label prompt dropout,"['Peiyuan Zhang', 'Wei Lu']",http://arxiv.org/pdf/2210.13733v1.pdf,2022-10-25,," Few-shot relation extraction aims to learn to identify the relation betweentwo entities based on very limited training examples. Recent efforts found thattextual labels (i.e., relation names and relation descriptions) could beextremely useful for learning class representations, which will benefit thefew-shot learning task. However, what is the best way to leverage such labelinformation in the learning process is an important research question. Existingworks largely assume such textual labels are always present during bothlearning and prediction. 
In this work, we argue that such approaches may notalways lead to optimal results. Instead, we present a novel approach calledlabel prompt dropout, which randomly removes label descriptions in the learningprocess. Our experiments show that our approach is able to lead to improvedclass representations, yielding significantly better results on the few-shotrelation extraction task.",,arXiv,['cs.cl'],, +756,stprompt semanticguided and taskdriven prompts for effective fewshot classification,"['Jinta Weng', 'Yue Hu', 'Jing Qiu', 'Heyan Huan']",http://arxiv.org/pdf/2210.16489v1.pdf,2022-10-29,," The effectiveness of prompt learning has been demonstrated in differentpre-trained language models. By formulating suitable template and choosingrepresentative label mapping, prompt learning can be used as an efficientknowledge probe. However, finding suitable prompt in existing methods requiresmultiple experimental attempts or appropriate vector initialization onformulating suitable template and choosing representative label mapping, whichit is more common in few-shot learning tasks. Motivating by PLM workingprocess, we try to construct the prompt from task semantic perspective and thuspropose the STPrompt -Semantic-guided and Task-driven Prompt model.Specifically, two novel prompts generated from the semantic dependency tree(Dep-prompt) and task-specific metadata description (Meta-prompt), are firstlyconstructed in a prompt augmented pool, and the proposed model wouldautomatically select a suitable semantic prompt to motivating the promptlearning process. Our results show that the proposed model achieves thestate-of-the-art performance in five different datasets of few-shot textclassification tasks, which prove that more semantic and significant promptscould assume as a better knowledge proving tool.",,arXiv,"['cs.cl', 'cs.ai']",, +757,retrievalaugmented generative question answering for event argument extraction,"['Xinya Du', 'Heng Ji']",http://arxiv.org/pdf/2211.07067v1.pdf,2022-11-14,," Event argument extraction has long been studied as a sequential predictionproblem with extractive-based methods, tackling each argument in isolation.Although recent work proposes generation-based methods to capturecross-argument dependency, they require generating and post-processing acomplicated target sequence (template). Motivated by these observations andrecent pretrained language models' capabilities of learning fromdemonstrations. We propose a retrieval-augmented generative QA model (R-GQA)for event argument extraction. It retrieves the most similar QA pair andaugments it as prompt to the current example's context, then decodes thearguments as answers. Our approach outperforms substantially prior methodsacross various settings (i.e. fully supervised, domain transfer, and fewshotlearning). Finally, we propose a clustering-based sampling strategy (JointEnc)and conduct a thorough analysis of how different strategies influence thefew-shot learning performance. The implementations are available at https://github.com/xinyadu/RGQA",,arXiv,['cs.cl'],, +758,protsi prototypical siamese network with data augmentation for fewshot subjective answer evaluation,"['Yining Lu', 'Jingxi Qiu', 'Gaurav Gupta']",http://arxiv.org/pdf/2211.09855v1.pdf,2022-11-17,," Subjective answer evaluation is a time-consuming and tedious task, and thequality of the evaluation is heavily influenced by a variety of subjectivepersonal characteristics. 
Instead, machine evaluation can effectively assisteducators in saving time while also ensuring that evaluations are fair andrealistic. However, most existing methods using regular machine learning andnatural language processing techniques are generally hampered by a lack ofannotated answers and poor model interpretability, making them unsuitable forreal-world use. To solve these challenges, we propose ProtSi Network, a uniquesemi-supervised architecture that for the first time uses few-shot learning tosubjective answer evaluation. To evaluate students' answers by similarityprototypes, ProtSi Network simulates the natural process of evaluator scoringanswers by combining Siamese Network which consists of BERT and encoder layerswith Prototypical Network. We employed an unsupervised diverse paraphrasingmodel ProtAugment, in order to prevent overfitting for effective few-shot textclassification. By integrating contrastive learning, the discriminative textissue can be mitigated. Experiments on the Kaggle Short Scoring Datasetdemonstrate that the ProtSi Network outperforms the most recent baseline modelsin terms of accuracy and quadratic weighted kappa.",,arXiv,['cs.cl'],, +759,tempera testtime prompting via reinforcement learning,"['Tianjun Zhang', 'Xuezhi Wang', 'Denny Zhou', 'Dale Schuurmans', 'Joseph E. Gonzalez']",http://arxiv.org/pdf/2211.11890v1.pdf,2022-11-21,," Careful prompt design is critical to the use of large language models inzero-shot or few-shot learning. As a consequence, there is a growing interestin automated methods to design optimal prompts. In this work, we proposeTest-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast toprior prompt generation methods, TEMPERA can efficiently leverage priorknowledge, is adaptive to different queries and provides an interpretableprompt for every query. To achieve this, we design a novel action space thatallows flexible editing of the initial prompts covering a wide set ofcommonly-used components like instructions, few-shot exemplars, andverbalizers. The proposed method achieves significant gains compared withrecent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across avariety of tasks including sentiment analysis, topic classification, naturallanguage inference, and reading comprehension. Our method achieves 5.33x onaverage improvement in sample efficiency when compared to the traditionalfine-tuning methods.",,arXiv,"['cs.cl', 'cs.ai']",, +760,towards practical fewshot federated nlp,"['Dongqi Cai', 'Yaozong Wu', 'Haitao Yuan', 'Shangguang Wang', 'Felix Xiaozhu Lin', 'Mengwei Xu']",http://arxiv.org/pdf/2212.00192v2.pdf,2022-12-01,," Transformer-based pre-trained models have emerged as the predominant solutionfor natural language processing (NLP). Fine-tuning such pre-trained models fordownstream tasks often requires a considerable amount of labeled private data.In practice, private data is often distributed across heterogeneous mobiledevices and may be prohibited from being uploaded. Moreover, well-curatedlabeled data is often scarce, presenting an additional challenge. To addressthese challenges, we first introduce a data generator for federated few-shotlearning tasks, which encompasses the quantity and skewness of scarce labeleddata in a realistic setting. Subsequently, we propose AUG-FedPrompt, aprompt-based federated learning system that exploits abundant unlabeled datafor data augmentation. 
Our experiments indicate that AUG-FedPrompt can perform on par with full-set fine-tuning with a limited amount of labeled data. However, such competitive performance comes at a significant system cost.",,arXiv,"['cs.cl', 'cs.lg']",, +761,fewshot nested named entity recognition,"['Hong Ming', 'Jiaoyun Yang', 'Lili Jiang', 'Yan Pan', 'Ning An']",http://arxiv.org/pdf/2212.00953v1.pdf,2022-12-02,," While Named Entity Recognition (NER) is a widely studied task, making inferences of entities with only a few labeled data has been challenging, especially for entities with nested structures. Unlike flat entities, entities and their nested entities are more likely to have similar semantic feature representations, drastically increasing difficulties in classifying different entity categories in the few-shot setting. Although prior work has briefly discussed nested structures in the context of few-shot learning, to our best knowledge, this paper is the first one specifically dedicated to studying the few-shot nested NER task. Leveraging contextual dependency to distinguish nested entities, we propose a Biaffine-based Contrastive Learning (BCL) framework. We first design a Biaffine span representation module for learning the contextual span dependency representation for each entity span rather than only learning its semantic representation. We then merge these two representations by the residual connection to distinguish nested entities. Finally, we build a contrastive learning framework to adjust the representation distribution for larger margin boundaries and more generalized domain transfer learning ability. We conducted experimental studies on three English, German, and Russian nested NER datasets. The results show that the BCL outperformed three baseline models on the 1-shot and 5-shot tasks in terms of F1 score.",,arXiv,"['cs.cl', 'cs.ai']",, +762,improving fewshot performance of language models via nearest neighbor calibration,"['Feng Nie', 'Meixi Chen', 'Zhirui Zhang', 'Xu Cheng']",http://arxiv.org/pdf/2212.02216v1.pdf,2022-12-05,," Pre-trained language models (PLMs) have exhibited remarkable few-shot learning capabilities when provided a few examples in a natural language prompt as demonstrations of test instances, i.e., in-context learning. However, the performance of in-context learning is susceptible to the choice of prompt format, training examples and the ordering of the training examples. In this paper, we propose a novel nearest-neighbor calibration framework for in-context learning to ease this issue. It is inspired by a phenomenon that the in-context learning paradigm produces incorrect labels when inferring training instances, which provides a useful supervised signal to calibrate predictions. Thus, our method directly augments the predictions with a $k$-nearest-neighbor ($k$NN) classifier over a datastore of cached few-shot instance representations obtained by PLMs and their corresponding labels. Then adaptive neighbor selection and feature regularization modules are introduced to make full use of a few support instances to reduce the $k$NN retrieval noise.
Experiments on various few-shot text classification tasks demonstrate that our method significantly improves in-context learning, while even achieving comparable performance with state-of-the-art tuning-based approaches in some sentiment analysis tasks.",,arXiv,['cs.cl'],, +763,jampatoisnli a jamaican patois natural language inference dataset,"['Ruth-Ann Armstrong', 'John Hewitt', 'Christopher Manning']",http://arxiv.org/pdf/2212.03419v1.pdf,2022-12-07,," JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois. Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from a major world language and a distinctive grammar reflecting the languages of the original speakers and the process of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer from large monolingual or multilingual pretrained models. While our work, along with previous work, shows that transfer from these models to low-resource languages that are unrelated to languages in their training set is not very effective, we would expect stronger results from transfer to creoles. Indeed, our experiments show considerably better results from few-shot learning of JamPatoisNLI than for such unrelated languages, and help us begin to understand how the unique relationship between creoles and their high-resource base languages affect cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring premises and expert-written hypotheses, is a step towards steering research into a traditionally underserved language and a useful benchmark for understanding cross-lingual NLP.",,arXiv,"['cs.cl', 'cs.lg', 'i.2.7']",, +764,learn to explore on bootstrapping interactive data exploration with metalearning,"['Yukun Cao', 'Xike Xie', 'Kexin Huang']",http://arxiv.org/pdf/2212.03423v4.pdf,2022-12-07,," Interactive data exploration (IDE) is an effective way of comprehending big data, whose volume and complexity are beyond human abilities. The main goal of IDE is to discover user interest regions from a database through multi-rounds of user labelling. Existing IDEs adopt active-learning framework, where users iteratively discriminate or label the interestingness of selected tuples. The process of data exploration can be viewed as the process of training a classifier, which determines whether a database tuple is interesting to a user. An efficient exploration thus takes very few iterations of user labelling to reach the data region of interest. In this work, we consider the data exploration as the process of few-shot learning, where the classifier is learned with only a few training examples, or exploration iterations. To this end, we propose a learning-to-explore framework, based on meta-learning, which learns how to learn a classifier with automatically generated meta-tasks, so that the exploration process can be much shortened. Extensive experiments on real datasets show that our proposal outperforms existing explore-by-example solutions in terms of accuracy and efficiency.",,arXiv,"['cs.db', 'cs.ai']",, +765,demystifying prompts in language models via perplexity estimation,"['Hila Gonen', 'Srini Iyer', 'Terra Blevins', 'Noah A. Smith', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2212.04037v1.pdf,2022-12-08,," Language models can be prompted to perform a wide variety of zero- and few-shot learning problems.
However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing using GPT3 and backtranslation and (2) choose the lowest perplexity prompts to get significant gains in performance.",,arXiv,['cs.cl'],, +766,localized latent updates for finetuning visionlanguage models,"['Moritz Ibing', 'Isaak Lim', 'Leif Kobbelt']",http://arxiv.org/pdf/2212.06556v1.pdf,2022-12-13,," Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities for many tasks, still it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we suggest a lightweight adapter, that only updates the models predictions close to seen datapoints. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results both on classes seen and unseen during training are comparable with or improve on the state of the art.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +767,alert adapting language models to reasoning tasks,"['Ping Yu', 'Tianlu Wang', 'Olga Golovneva', 'Badr AlKhamissi', 'Siddharth Verma', 'Zhijing Jin', 'Gargi Ghosh', 'Mona Diab', 'Asli Celikyilmaz']",http://arxiv.org/pdf/2212.08286v2.pdf,2022-12-16,," Current large language models can perform reasonably well on complex tasks that require step-by-step reasoning with few-shot learning. Are these models applying reasoning skills they have learnt during pre-training and reason outside of their training context, or are they simply memorizing their training corpus at finer granularity and have learnt to better understand their context? To tease apart these possibilities, we introduce ALERT, a benchmark and suite of analyses for assessing language models' reasoning ability comparing pre-trained and finetuned models on complex tasks that require reasoning skills to solve. ALERT provides a test bed to asses any language model on fine-grained reasoning skills, which spans over 20 datasets and covers 10 different reasoning skills. We leverage ALERT to further investigate the role of finetuning. With extensive empirical analysis we find that language models learn more reasoning skills such as textual entailment, abductive reasoning, and analogical reasoning during finetuning stage compared to pretraining state. We also find that when language models are finetuned they tend to overfit to the prompt template, which hurts the robustness of models causing generalization problems.",,arXiv,['cs.cl'],, +768,learning from taxonomy multilabel fewshot classification for everyday sound recognition,"['Jinhua Liang', 'Huy Phan', 'Emmanouil Benetos']",http://arxiv.org/pdf/2212.08952v1.pdf,2022-12-17,," Everyday sound recognition aims to infer types of sound events in audio streams.
While many works succeeded in training models with high performance in a fully-supervised manner, they are still restricted to the demand of large quantities of labelled data and the range of predefined classes. To overcome these drawbacks, this work firstly curates a new database named FSD-FS for multi-label few-shot audio classification. It then explores how to incorporate audio taxonomy in few-shot learning. Specifically, this work proposes label-dependent prototypical networks (LaD-protonet) to exploit parent-children relationships between labels. Plus, it applies taxonomy-aware label smoothing techniques to boost model performance. Experiments demonstrate that LaD-protonet outperforms original prototypical networks as well as other state-of-the-art methods. Moreover, its performance can be further boosted when combined with taxonomy-aware label smoothing.",,arXiv,"['cs.sd', 'eess.as']",, +769,a survey on fewshot knowledge graph completion with structural and commonsense knowledge,"['Haodi Ma', 'Daisy Zhe Wang']",http://arxiv.org/pdf/2301.01172v1.pdf,2023-01-03,," Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many know triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to challenge the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +770,learning to initialize can meta learning improve crosstask generalization in prompt tuning,"['Chengwei Qin', 'Qian Li', 'Ruochen Zhao', 'Shafiq Joty']",http://arxiv.org/pdf/2302.08143v3.pdf,2023-02-16,," Prompt tuning (PT) which only tunes the embeddings of an additional sequence of tokens per task, keeping the pre-trained language model (PLM) frozen, has shown remarkable performance in few-shot learning. Despite this, PT has been shown to rely heavily on good initialization of the prompt embeddings. In this work, we study meta prompt tuning (MPT) to systematically explore how meta-learning can help improve (if it can) cross-task generalization in PT through learning to initialize the prompt embeddings from other relevant tasks. We empirically analyze a representative set of meta learning algorithms in a wide range of adaptation settings with different source/target task configurations on a large set of few-shot tasks. With extensive experiments and analysis, we demonstrate the effectiveness of MPT. We find the improvement to be significant particularly on classification tasks. For other kinds of tasks such as question answering, we observe that while MPT can outperform PT in most cases, it does not always outperform multi-task learning.
We further provide an in-depth analysis from the perspective of task similarity.",,arXiv,"['cs.cl', 'cs.ai']",, +771,scalable prompt generation for semisupervised learning with language models,"['Yuhang Zhou', 'Suraj Maharjan', 'Beiye Liu']",http://arxiv.org/pdf/2302.09236v1.pdf,2023-02-18,," Prompt-based learning methods in semi-supervised learning (SSL) settings have been shown to be effective on multiple natural language understanding (NLU) datasets and tasks in the literature. However, manually designing multiple prompts and verbalizers requires domain knowledge and human effort, making it difficult and expensive to scale across different datasets. In this paper, we propose two methods to automatically design multiple prompts and integrate automatic verbalizer in SSL settings without sacrificing performance. The first method uses various demonstration examples with learnable continuous prompt tokens to create diverse prompt models. The second method uses a varying number of soft prompt tokens to encourage language models to learn different prompts. For the verbalizer, we use the prototypical verbalizer to replace the manual one. In summary, we obtained the best average accuracy of 73.2% (a relative improvement of 2.52% over even the previous state-of-the-art SSL method with manual prompts and verbalizers) in different few-shot learning settings.",,arXiv,"['cs.cl', 'cs.ai']",, +772,language models are fewshot learners for prognostic prediction,"['Zekai Chen', 'Mariann Micsinai Balan', 'Kevin Brown']",http://arxiv.org/pdf/2302.12692v4.pdf,2023-02-24,," Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients' clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, +773,prefinetuning for fewshot emotional speech recognition,"['Maximillian Chen', 'Zhou Yu']",http://arxiv.org/pdf/2302.12921v2.pdf,2023-02-24,," Speech models have long been known to overfit individual speakers for many classification tasks. This leads to poor generalization in settings where the speakers are out-of-domain or out-of-distribution, as is common in production environments. We view speaker adaptation as a few-shot learning problem and propose investigating transfer learning approaches inspired by recent success with pre-trained models in natural language tasks. We propose pre-finetuning speech models on difficult tasks to distill knowledge into few-shot downstream classification objectives.
We pre-finetune Wav2Vec2.0 on every permutation of four multiclass emotional speech recognition corpora and evaluate our pre-finetuned models through 33,600 few-shot fine-tuning trials on the Emotional Speech Dataset.",,arXiv,"['cs.cl', 'cs.lg', 'cs.sd', 'eess.as']",, +774,mixture of soft prompts for controllable data generation,"['Derek Chen', 'Celine Lee', 'Yunan Lu', 'Domenic Rosati', 'Zhou Yu']",http://arxiv.org/pdf/2303.01580v2.pdf,2023-03-02,," Large language models (LLMs) effectively generate fluent text when the target output follows natural language patterns. However, structured prediction tasks confine the output format to a limited ontology, causing even very large models to struggle since they were never trained with such restrictions in mind. The difficulty of using LLMs for direct prediction is exacerbated in few-shot learning scenarios, which commonly arise due to domain shift and resource limitations. We flip the problem on its head by leveraging the LLM as a tool for data augmentation rather than direct prediction. Our proposed Mixture of Soft Prompts (MSP) serves as a parameter-efficient procedure for generating data in a controlled manner. Denoising mechanisms are further applied to improve the quality of synthesized data. Automatic metrics show our method is capable of producing diverse and natural text, while preserving label semantics. Moreover, MSP achieves state-of-the-art results on three benchmarks when compared against strong baselines. Our method offers an alternate data-centric approach for applying LLMs to complex prediction tasks.",,arXiv,['cs.cl'],, +775,enhancing activity prediction models in drug discovery with the ability to understand human language,"['Philipp Seidl', 'Andreu Vall', 'Sepp Hochreiter', 'Günter Klambauer']",http://arxiv.org/pdf/2303.03363v2.pdf,2023-03-06,," Activity and property prediction models are the central workhorses in drug discovery and materials sciences, but currently they have to be trained or fine-tuned for new tasks. Without training or fine-tuning, scientific language models could be used for such low-data tasks through their announced zero- and few-shot capabilities. However, their predictive quality at activity prediction is lacking. In this work, we envision a novel type of activity prediction model that is able to adapt to new prediction tasks at inference time, via understanding textual information describing the task. To this end, we propose a new architecture with separate modules for chemical and natural language inputs, and a contrastive pre-training objective on data from large biochemical databases. In extensive experiments, we show that our method CLAMP yields improved predictive performance on few-shot learning benchmarks and zero-shot problems in drug discovery. We attribute the advances of our method to the modularized architecture and to our pre-training objective.",,arXiv,"['q-bio.bm', 'cs.cl', 'cs.lg', 'stat.ml']",, +776,menucraft interactive menu system design with large language models,"['Amir Hossein Kargaran', 'Nafiseh Nikeghbal', 'Abbas Heydarnoori', 'Hinrich Schütze']",http://arxiv.org/pdf/2303.04496v2.pdf,2023-03-08,," Menu system design is a challenging task involving many design options and various human factors. For example, one crucial factor that designers need to consider is the semantic and systematic relation of menu commands. However, capturing these relations can be challenging due to limited available resources.
With the advancement of neural language models, large language models can utilize their vast pre-existing knowledge in designing and refining menu systems. In this paper, we propose MenuCraft, an AI-assisted designer for menu design that enables collaboration between the designer and a dialogue system to design menus. MenuCraft offers an interactive language-based menu design tool that simplifies the menu design process and enables easy customization of design options. MenuCraft supports a variety of interactions through dialog that allows performing zero/few-shot learning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, +777,consistency analysis of chatgpt,"['Myeongjun Erik Jang', 'Thomas Lukasiewicz']",http://arxiv.org/pdf/2303.06273v3.pdf,2023-03-11,," ChatGPT has gained a huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, adding extra support to the claim that AI can now assist and even replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour, focusing specifically on semantic consistency and the properties of negation, symmetric, and transitive consistency. Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions. We also ascertain via experiments that prompt designing, few-shot learning and employing larger large language models (LLMs) are unlikely to be the ultimate solution to resolve the inconsistency issue of LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, +778,learning expressive prompting with residuals for vision transformers,"['Rajshekhar Das', 'Yonatan Dukler', 'Avinash Ravichandran', 'Ashwin Swaminathan']",http://arxiv.org/pdf/2303.15591v1.pdf,2023-03-27,," Prompt learning is an efficient approach to adapt transformers by inserting learnable set of parameters into the input and intermediate representations of a pre-trained model. In this work, we present Expressive Prompts with Residuals (EXPRES) which modifies the prompt learning paradigm specifically for effective adaptation of vision transformers (ViT). Out method constructs downstream representations via learnable ``output'' tokens, that are akin to the learned class tokens of the ViT. Further for better steering of the downstream representation processed by the frozen transformer, we introduce residual learnable tokens that are added to the output of various computations. We apply EXPRES for image classification, few shot learning, and semantic segmentation, and show our method is capable of achieving state of the art prompt tuning on 3/3 categories of the VTAB benchmark. In addition to strong performance, we observe that our approach is an order of magnitude more prompt efficient than existing visual prompting baselines. We analytically show the computational benefits of our approach over weight space adaptation techniques like finetuning.
Lastly we systematically corroborate the architectural design of our method via a series of ablation experiments.",,arXiv,['cs.cv'],, +779,not all features matter enhancing fewshot clip with adaptive prior refinement,"['Xiangyang Zhu', 'Renrui Zhang', 'Bowei He', 'Aojun Zhou', 'Dong Wang', 'Bin Zhao', 'Peng Gao']",http://arxiv.org/pdf/2304.01195v1.pdf,2023-04-03,," The popularity of Contrastive Language-Image Pre-training (CLIP) has propelled its application to diverse downstream vision tasks. To improve its capacity on downstream tasks, few-shot learning has become a widely-adopted technique. However, existing methods either exhibit limited performance or suffer from excessive learnable parameters. In this paper, we propose APE, an Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which achieves superior accuracy with high computational efficiency. Via a prior refinement module, we analyze the inter-class disparity in the downstream data and decouple the domain-specific knowledge from the CLIP-extracted cache model. On top of that, we introduce two model variants, a training-free APE and a training-required APE-T. We explore the trilateral affinities between the test image, prior cache model, and textual representations, and only enable a lightweight category-residual module to be trained. For the average accuracy over 11 benchmarks, both APE and APE-T attain state-of-the-art and respectively outperform the second-best by +1.59% and +1.99% under 16 shots with x30 less learnable parameters.",,arXiv,"['cs.cv', 'cs.ai', 'cs.mm']",, +780,sociocultural knowledge is needed for selection of shots in hate speech detection tasks,"['Antonis Maronikolakis', 'Abdullatif Köksal', 'Hinrich Schütze']",http://arxiv.org/pdf/2304.01890v4.pdf,2023-04-04,," We introduce HATELEXICON, a lexicon of slurs and targets of hate speech for the countries of Brazil, Germany, India and Kenya, to aid training and interpretability of models. We demonstrate how our lexicon can be used to interpret model predictions, showing that models developed to classify extreme speech rely heavily on target words when making predictions. Further, we propose a method to aid shot selection for training in low-resource settings via HATELEXICON. In few-shot learning, the selection of shots is of paramount importance to model performance. In our work, we simulate a few-shot setting for German and Hindi, using HASOC data for training and the Multilingual HateCheck (MHC) as a benchmark. We show that selecting shots based on our lexicon leads to models performing better on MHC than models trained on shots sampled randomly. Thus, when given only a few training examples, using our lexicon to select shots containing more sociocultural information leads to better few-shot performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +781,revisiting automated prompting are we actually doing better,"['Yulin Zhou', 'Yiren Zhao', 'Ilia Shumailov', 'Robert Mullins', 'Yarin Gal']",http://arxiv.org/pdf/2304.03609v2.pdf,2023-04-07,," Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates automation can outperform fine-tuning in certain K-shot learning scenarios.
In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompts. Our work suggests that, in addition to fine-tuning, manual prompts should be used as a baseline in this line of research.",,arXiv,"['cs.cl', 'cs.lg']",, +782,information extraction from documents question answering vs token classification in realworld setups,"['Laurent Lam', 'Pirashanth Ratnamogan', 'Joël Tang', 'William Vanhuffel', 'Fabien Caspani']",http://arxiv.org/pdf/2304.10994v1.pdf,2023-04-21,," Research in Document Intelligence and especially in Document Key Information Extraction (DocKIE) has been mainly solved as Token Classification problem. Recent breakthroughs in both natural language processing (NLP) and computer vision helped building document-focused pre-training methods, leveraging a multimodal understanding of the document text, layout and image modalities. However, these breakthroughs also led to the emergence of a new DocKIE subtask of extractive document Question Answering (DocQA), as part of the Machine Reading Comprehension (MRC) research field. In this work, we compare the Question Answering approach with the classical token classification approach for document key information extraction. We designed experiments to benchmark five different experimental setups : raw performances, robustness to noisy environment, capacity to extract long entities, fine-tuning speed on Few-Shot Learning and finally Zero-Shot Learning. Our research showed that when dealing with clean and relatively short entities, it is still best to use token classification-based approach, while the QA approach could be a good alternative for noisy environment or long entities use-cases.",,arXiv,['cs.cl'],, +783,causal interventionsbased fewshot named entity recognition,"['Zhen Yang', 'Yongbin Liu', 'Chunping Ouyang']",http://arxiv.org/pdf/2305.01914v1.pdf,2023-05-03,," Few-shot named entity recognition (NER) systems aims at recognizing new classes of entities based on a few labeled samples. A significant challenge in the few-shot regime is prone to overfitting than the tasks with abundant samples. The heavy overfitting in few-shot learning is mainly led by spurious correlation caused by the few samples selection bias. To alleviate the problem of the spurious correlation in the few-shot NER, in this paper, we propose a causal intervention-based few-shot NER method. Based on the prototypical network, the method intervenes in the context and prototype via backdoor adjustment during training. In particular, intervening in the context of the one-shot scenario is very difficult, so we intervene in the prototype via incremental learning, which can also avoid catastrophic forgetting. Our experiments on different benchmarks show that our approach achieves new state-of-the-art results (achieving up to 29% absolute improvement and 12% on average for all tasks).",,arXiv,['cs.cl'],, +784,data curation for image captioning with texttoimage generative models,"['Wenyan Li', 'Jonas F. Lotz', 'Chen Qiu', 'Desmond Elliott']",http://arxiv.org/pdf/2305.03610v1.pdf,2023-05-05,," Recent advances in image captioning are mainly driven by large-scale vision-language pretraining, relying heavily on computational resources and increasingly large multimodal datasets. Instead of scaling up pretraining data, we ask whether it is possible to improve performance by improving the quality of the samples in existing datasets.
We pursue this question through two approaches to data curation: one that assumes that some examples should be avoided due to mismatches between the image and caption, and one that assumes that the mismatch can be addressed by replacing the image, for which we use the state-of-the-art Stable Diffusion model. These approaches are evaluated using the BLIP model on MS COCO and Flickr30K in both finetuning and few-shot learning settings. Our simple yet effective approaches consistently outperform baselines, indicating that better image captioning models can be trained by curating existing resources. Finally, we conduct a human study to understand the errors made by the Stable Diffusion model and highlight directions for future work in text-to-image generation.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, +785,make promptbased blackbox tuning colorful boosting model generalization from three orthogonal perspectives,"['Qiushi Sun', 'Chengcheng Han', 'Nuo Chen', 'Renyu Zhu', 'Jingyang Gong', 'Xiang Li', 'Ming Gao']",http://arxiv.org/pdf/2305.08088v1.pdf,2023-05-14,," Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks. However, tuning these models for downstream tasks usually needs exorbitant costs or is unavailable due to commercial considerations. Recently, black-box tuning has been proposed to address this problem by optimizing task-specific prompts without accessing the gradients and hidden representations. However, most existing works have yet fully exploited the potential of gradient-free optimization under the scenario of few-shot learning. In this paper, we describe BBT-RGB, a suite of straightforward and complementary techniques for enhancing the efficiency and performance of black-box optimization. Specifically, our method includes three plug-and-play components: (1) Two-stage derivative-free optimization strategy that facilitates fast convergence and mitigates overfitting; (2) Automatic verbalizer construction with its novel usage under few-shot settings; (3) Better prompt initialization policy based on instruction search and auto-selected demonstration. Extensive experiments across various tasks on natural language understanding and inference demonstrate the effectiveness of our method. Our codes are publicly available at https://github.com/QiushiSun/BBT-RGB.",,arXiv,"['cs.cl', 'cs.ai']",, +786,cplnovid contextaware promptbased learning for norm violation detection in online communities,"['Zihao He', 'Jonathan May', 'Kristina Lerman']",http://arxiv.org/pdf/2305.09846v2.pdf,2023-05-16,," Detecting norm violations in online communities is critical to maintaining healthy and safe spaces for online discussions. Existing machine learning approaches often struggle to adapt to the diverse rules and interpretations across different communities due to the inherent challenges of fine-tuning models for such context-specific tasks. In this paper, we introduce Context-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), a novel method that employs prompt-based learning to detect norm violations across various types of rules. CPL-NoViD outperforms the baseline by incorporating context through natural language prompts and demonstrates improved performance across different rule types. Significantly, it not only excels in cross-rule-type and cross-community norm violation detection but also exhibits adaptability in few-shot learning scenarios. Most notably, it establishes a new state-of-the-art in norm violation detection, surpassing existing benchmarks.
Our work highlights the potential of prompt-based learning for context-sensitive norm violation detection and paves the way for future research on more adaptable, context-aware models to better support online community moderators.",,arXiv,"['cs.cl', 'cs.si']",, +787,a weak supervision approach for fewshot aspect based sentiment,"['Robert Vacareanu', 'Siddharth Varia', 'Kishaloy Halder', 'Shuai Wang', 'Giovanni Paolini', 'Neha Anna John', 'Miguel Ballesteros', 'Smaranda Muresan']",http://arxiv.org/pdf/2305.11979v1.pdf,2023-05-19,," We explore how weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in aspect-based sentiment analysis (ABSA) tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We test the resulting model on three widely used ABSA datasets, before and after fine-tuning. Our proposed method preserves the full fine-tuning performance while showing significant improvements (15.84% absolute F1) in the few-shot learning scenario for the harder tasks. In zero-shot (i.e., without fine-tuning), our method outperforms the previous state of the art on the aspect extraction sentiment classification (AESC) task and is, additionally, capable of performing the harder aspect sentiment triplet extraction (ASTE) task.",,arXiv,['cs.cl'],, +788,efficient open domain multihop question answering with fewshot data synthesis,"['Mingda Chen', 'Xilun Chen', 'Wen-tau Yih']",http://arxiv.org/pdf/2305.13691v1.pdf,2023-05-23,," Few-shot learning for open domain multi-hop question answering typically relies on large language models (LLMs). While powerful, LLMs are inefficient at the inference time. We propose a data synthesis framework for multi-hop question answering that allows for improving smaller language models with less than 10 human-annotated question answer pairs. The framework is built upon the data generation functions parameterized by LLMs and prompts, which requires minimal hand-crafted features. Empirically, we synthesize millions of multi-hop questions and claims. After finetuning language models on the synthetic data, we evaluate the models on popular benchmarks on multi-hop question answering and fact verification. Our experimental results show that finetuning on the synthetic data improves model performance significantly, allowing our finetuned models to be competitive with prior models while being almost one-third the size in terms of parameter counts.",,arXiv,['cs.cl'],, +789,images in language space exploring the suitability of large language models for vision & language tasks,"['Sherzod Hakimov', 'David Schlangen']",http://arxiv.org/pdf/2305.13782v1.pdf,2023-05-23,," Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information.
Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content.",,arXiv,['cs.cl'],, +790,improving factuality and reasoning in language models through multiagent debate,"['Yilun Du', 'Shuang Li', 'Antonio Torralba', 'Joshua B. Tenenbaum', 'Igor Mordatch']",http://arxiv.org/pdf/2305.14325v1.pdf,2023-05-23,," Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification, self-consistency, or intermediate scratchpads. In this paper, we present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer. Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate. Overall, our findings suggest that such ""society of minds"" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",, +791,training on thin air improve image classification with generated data,"['Yongchao Zhou', 'Hshmat Sahak', 'Jimmy Ba']",http://arxiv.org/pdf/2305.15316v1.pdf,2023-05-24,," Acquiring high-quality data for training discriminative models is a crucial yet challenging aspect of building effective predictive systems. In this paper, we present Diffusion Inversion, a simple yet effective method that leverages the pre-trained generative model, Stable Diffusion, to generate diverse, high-quality training data for image classification. Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion, and generates diverse novel training images by conditioning the generative model on noisy versions of these vectors. We identify three key components that allow our generated images to successfully supplant the original dataset, leading to a 2-3x enhancement in sample complexity and a 6.5x decrease in sampling time. Moreover, our approach consistently outperforms generic prompt-based steering methods and KNN retrieval baseline across a wide range of datasets. Additionally, we demonstrate the compatibility of our approach with widely-used data augmentation techniques, as well as the reliability of the generated data in supporting various neural architectures and enhancing few-shot learning.",,arXiv,"['cs.cv', 'cs.lg']",, +792,paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation,"['Kuan-Hao Huang', 'Varun Iyer', 'I-Hung Hsu', 'Anoop Kumar', 'Kai-Wei Chang', 'Aram Galstyan']",http://arxiv.org/pdf/2305.16585v1.pdf,2023-05-26,," Paraphrase generation is a long-standing task in natural language processing (NLP).
Supervised paraphrase generation models, which rely on human-annotated paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand, automatically annotated paraphrase pairs (e.g., by machine back-translation), usually suffer from the lack of syntactic diversity -- the generated paraphrase sentences are very similar to the source sentences in terms of syntax. In this work, we present ParaAMR, a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation. Our quantitative analysis, qualitative examples, and human evaluation demonstrate that the paraphrases of ParaAMR are syntactically more diverse compared to existing large-scale paraphrase datasets while preserving good semantic similarity. In addition, we show that ParaAMR can be used to improve on three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning. Our results thus showcase the potential of ParaAMR for improving various NLP applications.",,arXiv,['cs.cl'],, +793,adapting languageaudio models as fewshot audio learners,"['Jinhua Liang', 'Xubo Liu', 'Haohe Liu', 'Huy Phan', 'Emmanouil Benetos', 'Mark D. Plumbley', 'Wenwu Wang']",http://arxiv.org/pdf/2305.17719v1.pdf,2023-05-28,," We presented the Treff adapter, a training-efficient adapter for CLAP, to boost zero-shot classification performance by making use of a small set of labelled data. Specifically, we designed CALM to retrieve the probability distribution of text-audio clips over classes using a set of audio-label pairs and combined it with CLAP's zero-shot classification results. Furthermore, we designed a training-free version of the Treff adapter by using CALM as a cosine similarity measure. Experiments showed that the proposed Treff adapter is comparable and even better than fully-supervised methods and adaptation methods in low-shot and data-abundant scenarios. While the Treff adapter shows that combining large-scale pretraining and rapid learning of domain-specific knowledge is non-trivial for obtaining generic representations for few-shot learning, it is still limited to audio classification tasks. In the future, we will explore how to use audio-language models in diverse audio domains.",,arXiv,"['eess.as', 'cs.sd']",, +794,deeply coupled crossmodal prompt learning,"['Xuejing Liu', 'Wei Tang', 'Jinghui Lu', 'Rui Zhao', 'Zhaojun Guo', 'Fei Tan']",http://arxiv.org/pdf/2305.17903v3.pdf,2023-05-29,," Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zero-shot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompt-tuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention module progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP.
The code can be found at https://github.com/GingL/CMPA.",,arXiv,['cs.cv'],, +795,what does the failure to reason with respectively in zerofewshot settings tell us about language models,"['Ruixiang Cui', 'Seolhwa Lee', 'Daniel Hershcovich', 'Anders Søgaard']",http://arxiv.org/pdf/2305.19597v1.pdf,2023-05-31,," Humans can effortlessly understand the coordinate structure of sentences such as ""Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively"". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of ""respectively"". We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",,arXiv,"['cs.cl', 'cs.ai']",, +796,humanlike fewshot learning via bayesian reasoning over natural language,['Kevin Ellis'],http://arxiv.org/pdf/2306.02797v3.pdf,2023-06-05,," A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighed by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +797,few shot rationale generation using selftraining with dual teachers,"['Aditya Srikanth Veerubhotla', 'Lahari Poddar', 'Jun Yin', 'György Szarvas', 'Sharanya Eswaran']",http://arxiv.org/pdf/2306.03315v1.pdf,2023-06-05,," Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work we explore a self-training approach leveraging both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization using self-training and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR) which promotes explanations to be strongly conditioned on predicted labels.
Evaluation on three public datasets demonstrate that the proposed methods are effective in modeling task labels and generating faithful rationales.",,arXiv,"['cs.cl', 'cs.ai']",, +798,a new dataset and empirical study for sentence simplification in chinese,"['Shiping Yang', 'Renliang Sun', 'Xiaojun Wan']",http://arxiv.org/pdf/2306.04188v1.pdf,2023-06-07,," Sentence Simplification is a valuable technique that can benefit language learners and children a lot. However, current research focuses more on English sentence simplification. The development of Chinese sentence simplification is relatively slow due to the lack of data. To alleviate this limitation, this paper introduces CSS, a new dataset for assessing sentence simplification in Chinese. We collect manual simplifications from human annotators and perform data analysis to show the difference between English and Chinese sentence simplifications. Furthermore, we test several unsupervised and zero/few-shot learning methods on CSS and analyze the automatic evaluation and human evaluation results. In the end, we explore whether Large Language Models can serve as high-quality Chinese sentence simplification systems by evaluating them on CSS.",,arXiv,['cs.cl'],, +799,can ai moderate online communities,"['Henrik Axelsen', 'Johannes Rude Jensen', 'Sebastian Axelsen', 'Valdemar Licht', 'Omri Ross']",http://arxiv.org/pdf/2306.05122v1.pdf,2023-06-08,," The task of cultivating healthy communication in online communities becomes increasingly urgent, as gaming and social media experiences become progressively more immersive and life-like. We approach the challenge of moderating online communities by training student models using a large language model (LLM). We use zero-shot learning models to distill and expand datasets followed by a few-shot learning and a fine-tuning approach, leveraging open-access generative pre-trained transformer models (GPT) from OpenAI. Our preliminary findings suggest, that when properly trained, LLMs can excel in identifying actor intentions, moderating toxic comments, and rewarding positive contributions. The student models perform above-expectation in non-contextual assignments such as identifying classically toxic behavior and perform sufficiently on contextual assignments such as identifying positive contributions to online discourse. Further, using open-access models like OpenAI's GPT we experience a step-change in the development process for what has historically been a complex modeling task. We contribute to the information system (IS) discourse with a rapid development framework on the application of generative AI in content online moderation and management of culture in decentralized, pseudonymous communities by providing a sample model suite of industrial-ready generative AI models based on open-access LLMs.",,arXiv,['cs.cy'],, +800,the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues,"['Adaeze Adigwe', 'Zheng Yuan']",http://arxiv.org/pdf/2306.05360v1.pdf,2023-06-08,," This paper presents the ADAIO team's system entry in the Building Educational Applications (BEA) 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues. The task aims to assess the performance of state-of-the-art generative models as AI teachers in producing suitable responses within a student-teacher dialogue. Our system comprises evaluating various baseline models using OpenAI GPT-3 and designing diverse prompts to prompt the OpenAI models for teacher response generation.
After the challenge, our system achieved second place by employing a few-shot prompt-based approach with the OpenAI text-davinci-003 model. The results highlight the few-shot learning capabilities of large-language models, particularly OpenAI's GPT-3, in the role of AI teachers.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, +801,rethink the effectiveness of text data augmentation an empirical analysis,"['Zhengxiang Shi', 'Aldo Lipani']",http://arxiv.org/pdf/2306.07664v1.pdf,2023-06-13,," In recent years, language models (LMs) have made remarkable progress in advancing the field of natural language processing (NLP). However, the impact of data augmentation (DA) techniques on the fine-tuning (FT) performance of these LMs has been a topic of ongoing debate. In this study, we evaluate the effectiveness of three different FT methods in conjugation with back-translation across an array of 7 diverse NLP tasks, including classification and regression types, covering single-sentence and sentence-pair tasks. Contrary to prior assumptions that DA does not contribute to the enhancement of LMs' FT performance, our findings reveal that continued pre-training on augmented data can effectively improve the FT performance of the downstream tasks. In the most favourable case, continued pre-training improves the performance of FT by more than 10% in the few-shot learning setting. Our finding highlights the potential of DA as a powerful tool for bolstering LMs' performance.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +802,neural finetuning search for fewshot learning,"['Panagiotis Eustratiadis', 'Łukasz Dudziak', 'Da Li', 'Timothy Hospedales']",http://arxiv.org/pdf/2306.09295v1.pdf,2023-06-15,," In few-shot recognition, a classifier that has been trained on one set of classes is required to rapidly adapt and generalize to a disjoint, novel set of classes. To that end, recent studies have shown the efficacy of fine-tuning with carefully crafted adaptation architectures. However this raises the question of: How can one design the optimal adaptation strategy? In this paper, we study this question through the lens of neural architecture search (NAS). Given a pre-trained neural network, our algorithm discovers the optimal arrangement of adapters, which layers to keep frozen and which to fine-tune. We demonstrate the generality of our NAS method by applying it to both residual networks and vision transformers and report state-of-the-art performance on Meta-Dataset and Meta-Album.",,arXiv,"['cs.cv', 'cs.lg']",, +803,multilingual fewshot learning via language model retrieval,"['Genta Indra Winata', 'Liang-Kang Huang', 'Soumya Vadlamannati', 'Yash Chandarana']",http://arxiv.org/pdf/2306.10964v1.pdf,2023-06-19,," Transformer-based language models have achieved remarkable success in few-shot in-context learning and drawn a lot of research interest. However, these models' performance greatly depends on the choice of the example prompts and also has high variability depending on how samples are chosen. In this paper, we conduct a comprehensive study of retrieving semantically similar few-shot samples and using them as the context, as it helps the model decide the correct label without any gradient update in the multilingual and cross-lingual settings. We evaluate the proposed method on five natural language understanding datasets related to intent detection, question classification, sentiment analysis, and topic classification.
The proposed method consistently outperforms random sampling in monolingual and cross-lingual tasks in non-English languages.",,arXiv,['cs.cl'],, +804,robut a systematic study of table qa robustness against humanannotated adversarial perturbations,"['Yilun Zhao', 'Chen Zhao', 'Linyong Nan', 'Zhenting Qi', 'Wenlin Zhang', 'Xiangru Tang', 'Boyu Mi', 'Dragomir Radev']",http://arxiv.org/pdf/2306.14321v1.pdf,2023-06-25,," Despite significant progress having been made in question answering on tabular data (Table QA), it's unclear whether, and to what extent existing Table QA models are robust to task-specific perturbations, e.g., replacing key question entities or shuffling table columns. To systematically study the robustness of Table QA models, we propose a benchmark called RobuT, which builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and includes human-annotated adversarial perturbations in terms of table header, table content, and question. Our results indicate that both state-of-the-art Table QA models and large language models (e.g., GPT-3) with few-shot learning falter in these adversarial sets. We propose to address this problem by using large language models to generate adversarial examples to enhance training, which significantly improves the robustness of Table QA models. Our data and code is publicly available at https://github.com/yilunzhao/RobuT.",,arXiv,"['cs.cl', 'cs.ai']",, +805,benchmarking large language model capabilities for conditional generation,"['Joshua Maynez', 'Priyanka Agrawal', 'Sebastian Gehrmann']",http://arxiv.org/pdf/2306.16793v1.pdf,2023-06-29,," Pre-trained large language models (PLMs) underlie most new developments in natural language processing. They have shifted the field from application-specific model pipelines to a single model that is adapted to a wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside techniques like few-shot learning, have additionally shifted the output modality to generation instead of classification or regression. Despite their ubiquitous use, the generation quality of language models is rarely evaluated when these models are introduced. Additionally, it is unclear how existing generation tasks--while they can be used to compare systems at a high level--relate to the real world use cases for which people have been adopting them. In this work, we discuss how to adapt existing application-specific generation benchmarks to PLMs and provide an in-depth, empirical study of the limitations and capabilities of PLMs in natural language generation tasks along dimensions such as scale, architecture, input and output language. Our results show that PLMs differ in their applicability to different data regimes and their generalization to multiple languages and inform which PLMs to use for a given generation task setup. We share best practices to be taken into consideration when benchmarking generation capabilities during the development of upcoming PLMs.",,arXiv,['cs.cl'],, +806,on conditional and compositional language model differentiable prompting,"['Jonathan Pilault', 'Can Liu', 'Mohit Bansal', 'Markus Dreyer']",http://arxiv.org/pdf/2307.01446v1.pdf,2023-07-04,," Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts can be represented by a human-engineered word sequence or by a learned continuous embedding. In this work, we investigate conditional and compositional differentiable prompting.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts that elicit task-specific outputs from the PLM. Our model uses a modular network structure based on our neural formulation of Production Systems, which allows the model to learn discrete rules -- neural functions that learn to specialize in transforming particular prompt input patterns, making it suitable for compositional transfer learning and few-shot learning. We present extensive empirical and theoretical analysis and show that PRopS consistently surpasses other PLM adaptation techniques, and often improves upon fully fine-tuned models, on compositional generalization tasks, controllable summarization and multilingual translation, while needing fewer trainable parameters.",,arXiv,"['cs.cl', 'cs.lg']",,
807,diverse retrievalaugmented incontext learning for dialogue state tracking,"['Brendan King', 'Jeffrey Flanigan']",http://arxiv.org/pdf/2307.01453v1.pdf,2023-07-04,," There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.",,arXiv,['cs.cl'],,
808,generating efficient training data via llmbased attribute manipulation,"['Letian Peng', 'Yuwei Zhang', 'Jingbo Shang']",http://arxiv.org/pdf/2307.07099v1.pdf,2023-07-14,," In this paper, we propose a novel method, Chain-of-Thoughts Attribute Manipulation (CoTAM), to guide few-shot learning by carefully crafted data from Large Language Models (LLMs). The main idea is to create data with changes only in the attribute targeted by the task. Inspired by facial attribute manipulation, our approach generates label-switched data by leveraging LLMs to manipulate task-specific attributes and reconstruct new sentences in a controlled manner. Instead of conventional latent representation controlling, we implement chain-of-thoughts decomposition and reconstruction to adapt the procedure to LLMs. Extensive results on text classification and other tasks verify the advantage of CoTAM over other LLM-based text generation methods with the same number of training examples.
Analysis visualizes the attribute manipulation effectiveness of CoTAM and presents the potential of LLM-guided learning with even less supervision.",,arXiv,['cs.cl'],,
809,overthinking the truth understanding how language models process false demonstrations,"['Danny Halawi', 'Jean-Stanislas Denain', 'Jacob Steinhardt']",http://arxiv.org/pdf/2307.09476v1.pdf,2023-07-18,," Modern language models can imitate complex patterns through few-shot learning, enabling them to complete challenging tasks without fine-tuning. However, imitation can also lead models to reproduce inaccuracies or harmful content if present in the context. We study harmful imitation through the lens of a model's internal representations, and identify two related phenomena: overthinking and false induction heads. The first phenomenon, overthinking, appears when we decode predictions from intermediate layers, given correct vs. incorrect few-shot demonstrations. At early layers, both demonstrations induce similar model behavior, but the behavior diverges sharply at some ""critical layer"", after which the accuracy given incorrect demonstrations progressively decreases. The second phenomenon, false induction heads, are a possible mechanistic cause of overthinking: these are heads in late layers that attend to and copy false information from previous demonstrations, and whose ablation reduces overthinking. Beyond scientific understanding, our results suggest that studying intermediate model computations could be a promising avenue for understanding and guarding against harmful model behaviors.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",,
810,does correction remain a problem for large language models,"['Xiaowu Zhang', 'Xiaotian Zhang', 'Cheng Yang', 'Hang Yan', 'Xipeng Qiu']",http://arxiv.org/pdf/2308.01776v2.pdf,2023-08-03,," As large language models, such as GPT, continue to advance the capabilities of natural language processing (NLP), the question arises: does the problem of correction still persist? This paper investigates the role of correction in the context of large language models by conducting two experiments. The first experiment focuses on correction as a standalone task, employing few-shot learning techniques with GPT-like models for error correction. The second experiment explores the notion of correction as a preparatory task for other NLP tasks, examining whether large language models can tolerate and perform adequately on texts containing certain levels of noise or errors. By addressing these experiments, we aim to shed light on the significance of correction in the era of large language models and its implications for various NLP applications.",,arXiv,['cs.cl'],,
811,thespian multicharacter text roleplaying game agents,"['Christopher Cui', 'Xiangyu Peng', 'Mark Riedl']",http://arxiv.org/pdf/2308.01872v1.pdf,2023-08-03,," Text-adventure games and text role-playing games are grand challenges for reinforcement learning game playing agents. Text role-playing games are open-ended environments where an agent must faithfully play a particular character. We consider the distinction between characters and actors, where an actor agent has the ability to play multiple characters. We present a framework we call a thespian agent that can learn to emulate multiple characters along with a soft prompt that can be used to direct it as to which character to play at any time. We further describe an attention mechanism that allows the agent to learn new characters that are based on previously learned characters in a few-shot fashion.
We show that our agent outperforms the state of the art agent framework in multi-character learning and few-shot learning.",,arXiv,"['cs.ai', 'cs.cl']",,
812,metalearning in healthcare a survey,"['Alireza Rafiei', 'Ronald Moore', 'Sina Jahromi', 'Farshid Hajati', 'Rishikesan Kamaleswaran']",http://arxiv.org/pdf/2308.02877v1.pdf,2023-08-05,," As a subset of machine learning, meta-learning, or learning to learn, aims at improving the model's capabilities by employing prior knowledge and experience. A meta-learning paradigm can appropriately tackle the conventional challenges of traditional learning approaches, such as insufficient number of samples, domain shifts, and generalization. These unique characteristics position meta-learning as a suitable choice for developing influential solutions in various healthcare contexts, where the available data is often insufficient, and the data collection methodologies are different. This survey discusses meta-learning broad applications in the healthcare domain to provide insight into how and where it can address critical healthcare challenges. We first describe the theoretical foundations and pivotal methods of meta-learning. We then divide the employed meta-learning approaches in the healthcare domain into two main categories of multi/single-task learning and many/few-shot learning and survey the studies. Finally, we highlight the current challenges in meta-learning research, discuss the potential solutions and provide future perspectives on meta-learning in healthcare.",,arXiv,"['cs.lg', 'cs.ai']",,
813,autoconv automatically generating informationseeking conversations with large language models,"['Siheng Li', 'Cheng Yang', 'Yichun Yin', 'Xinyu Zhu', 'Zesen Cheng', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu', 'Yujiu Yang']",http://arxiv.org/pdf/2308.06507v1.pdf,2023-08-12,," Information-seeking conversation, which aims to help users gather information through conversation, has achieved great progress in recent years. However, the research is still stymied by the scarcity of training data. To alleviate this problem, we propose AutoConv for synthetic conversation generation, which takes advantage of the few-shot learning ability and generation capacity of large language models (LLM). Specifically, we formulate the conversation generation problem as a language modeling task, then finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process and use it for generating synthetic conversations with high quality. Experimental results on two frequently-used datasets verify that AutoConv has substantial improvements over strong baselines and alleviates the dependence on human annotation. In addition, we also provide several analysis studies to promote future research.",,arXiv,['cs.cl'],,
814,distilled feature fields enable fewshot languageguided manipulation,"['William Shen', 'Ge Yang', 'Alan Yu', 'Jansen Wong', 'Leslie Pack Kaelbling', 'Phillip Isola']",http://arxiv.org/pdf/2308.07931v2.pdf,2023-07-27,," Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models.
We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",,
815,refashioning emotion recognition modelling the advent of generalised large models,"['Zixing Zhang', 'Liyizhe Peng', 'Tao Pang', 'Jing Han', 'Huan Zhao', 'Bjorn W. Schuller']",http://arxiv.org/pdf/2308.11578v1.pdf,2023-08-21,," After the inception of emotion recognition or affective computing, it has increasingly become an active research topic due to its broad applications. Over the past couple of decades, emotion recognition models have gradually migrated from statistically shallow models to neural network-based deep models, which can significantly boost the performance of emotion recognition models and consistently achieve the best results on different benchmarks. Therefore, in recent years, deep models have always been considered the first option for emotion recognition. However, the debut of large language models (LLMs), such as ChatGPT, has remarkably astonished the world due to their emerged capabilities of zero/few-shot learning, in-context learning, chain-of-thought, and others that are never shown in previous deep models. In the present paper, we comprehensively investigate how the LLMs perform in emotion recognition in terms of diverse aspects, including in-context learning, few-short learning, accuracy, generalisation, and explanation. Moreover, we offer some insights and pose other potential challenges, hoping to ignite broader discussions about enhancing emotion recognition in the new era of advanced and generalised large models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",,
816,gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles,"['Georgi Pachov', 'Dimitar Dimitrov', 'Ivan Koychev', 'Preslav Nakov']",http://arxiv.org/pdf/2309.06844v1.pdf,2023-09-13,," The wide-spread use of social networks has given rise to subjective, misleading, and even false information on the Internet. Thus, subjectivity detection can play an important role in ensuring the objectiveness and the quality of a piece of information. This paper presents the solution built by the Gpachov team for the CLEF-2023 CheckThat! lab Task~2 on subjectivity detection. Three different research directions are explored. The first one is based on fine-tuning a sentence embeddings encoder model and dimensionality reduction. The second one explores a sample-efficient few-shot learning model. The third one evaluates fine-tuning a multilingual transformer on an altered dataset, using data from multiple languages. Finally, the three approaches are combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on the test set and achieving 2nd place on the English subtask.",,arXiv,"['cs.cl', 'cs.ai', 'cs.mm']",,
817,"an empathybased sandbox approach to bridge attitudes, goals, knowledge, and behaviors in the privacy paradox","['Chaoran Chen', 'Weijun Li', 'Wenxin Song', 'Yanfang Ye', 'Yaxing Yao', 'Toby Jia-jun Li']",http://arxiv.org/pdf/2309.14510v1.pdf,2023-09-25,," The ""privacy paradox"" describes the discrepancy between users' privacy attitudes and their actual behaviors.
Mitigating this discrepancy requires solutions that account for both system opaqueness and users' hesitations in testing different privacy settings due to fears of unintended data exposure. We introduce an empathy-based approach that allows users to experience how privacy behaviors may alter system outcomes in a risk-free sandbox environment from the perspective of artificially generated personas. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models using few-shot learning, contextualization, and chain of thoughts. Our empirical studies demonstrated the adequate quality of generated personas and highlighted the changes in privacy-related applications (e.g., online advertising) caused by different personas. Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. We offered design implications for downstream applications in improving user privacy literacy and promoting behavior changes.",,arXiv,['cs.hc'],,
818,injecting a structural inductive bias into a seq2seq model by simulation,"['Matthias Lindemann', 'Alexander Koller', 'Ivan Titov']",http://arxiv.org/pdf/2310.00796v1.pdf,2023-10-01,," Strong inductive biases enable learning from little data and help generalization outside of the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the training distribution, e.g. with extrapolating to longer inputs, even when pre-trained on large amounts of text. We show how a structural inductive bias can be injected into a seq2seq model by pre-training it to simulate structural transformations on synthetic data. Specifically, we inject an inductive bias towards Finite State Transducers (FSTs) into a Transformer by pre-training it to simulate FSTs given their descriptions. Our experiments show that our method imparts the desired inductive bias, resulting in improved systematic generalization and better few-shot learning for FST-like tasks.",,arXiv,['cs.cl'],,
819,tram benchmarking temporal reasoning for large language models,"['Yuqing Wang', 'Yun Zhao']",http://arxiv.org/pdf/2310.00835v2.pdf,2023-10-02,," Reasoning about time is essential for understanding the nuances of events described in natural language. Previous research on this topic has been limited in scope, characterized by a lack of standardized benchmarks that would allow for consistent evaluations across different studies. In this paper, we introduce TRAM, a temporal reasoning benchmark composed of ten datasets, encompassing various temporal aspects of events such as order, arithmetic, frequency, and duration, designed to facilitate a comprehensive evaluation of the temporal reasoning capabilities of large language models (LLMs). We conduct an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based models to establish the baseline evaluations. Our findings indicate that these models still trail human performance in temporal reasoning tasks.
It is our aspiration that TRAM will spur further progress in enhancing the temporal reasoning abilities of LLMs.",,arXiv,['cs.cl'],,
820,procedural text mining with large language models,"['Anisa Rula', ""Jennifer D'Souza""]",http://arxiv.org/pdf/2310.03376v1.pdf,2023-10-05,," Recent advancements in the field of Natural Language Processing, particularly the development of large-scale language models that are pretrained on vast amounts of knowledge, are creating novel opportunities within the realm of Knowledge Engineering. In this paper, we investigate the usage of large language models (LLMs) in both zero-shot and in-context learning settings to tackle the problem of extracting procedures from unstructured PDF text in an incremental question-answering fashion. In particular, we leverage the current state-of-the-art GPT-4 (Generative Pre-trained Transformer 4) model, accompanied by two variations of in-context learning that involve an ontology with definitions of procedures and steps and a limited number of samples of few-shot learning. The findings highlight both the promise of this approach and the value of the in-context learning customisations. These modifications have the potential to significantly address the challenge of obtaining sufficient training data, a hurdle often encountered in deep learning-based Natural Language Processing techniques for procedure extraction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.it', 'math.it']",,
821,prototypeformer learning to explore prototype relationships for fewshot image classification,"['Feihong He', 'Gang Li', 'Lingyu Si', 'Leilei Yan', 'Fanzhang Li', 'Fuchun Sun']",http://arxiv.org/pdf/2310.03517v1.pdf,2023-10-05,," Few-shot image classification has received considerable attention for addressing the challenge of poor classification performance with limited samples in novel classes. However, numerous studies have employed sophisticated learning strategies and diversified feature extraction methods to address this issue. In this paper, we propose our method called PrototypeFormer, which aims to significantly advance traditional few-shot image classification approaches by exploring prototype relationships. Specifically, we utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification. Additionally, during the model training process, we propose a contrastive learning-based optimization approach to optimize prototype features in few-shot learning scenarios. Despite its simplicity, the method performs remarkably well, with no bells and whistles. We have experimented with our approach on several popular few-shot image classification benchmark datasets, which shows that our method outperforms all current state-of-the-art methods. In particular, our method achieves 97.07% and 90.88% on 5-way 5-shot and 5-way 1-shot tasks of miniImageNet, which surpasses the state-of-the-art results with accuracy of 7.27% and 8.72%, respectively. The code will be released later.",,arXiv,['cs.cv'],,
822,a holistic evaluation of piano sound quality,"['Monan Zhou', 'Shangda Wu', 'Shaohua Ji', 'Zijin Li', 'Wei Li']",http://arxiv.org/pdf/2310.04722v1.pdf,2023-10-07,," This paper aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos.
To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-training models of Convolutional Neural Networks (CNN). To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned CNN pre-trained backbone achieves a high accuracy of 98.3\% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance. To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.",,arXiv,"['cs.sd', 'cs.ai', 'eess.as']",,
823,argumentative stance prediction an exploratory study on multimodality and fewshot learning,"['Arushi Sharma', 'Abhibha Gupta', 'Maneesh Bilalpur']",http://arxiv.org/pdf/2310.07093v1.pdf,2023-10-11,," To advance argumentative stance prediction as a multimodal problem, the First Shared Task in Multimodal Argument Mining hosted stance prediction in crucial social topics of gun control and abortion. Our exploratory study attempts to evaluate the necessity of images for stance prediction in tweets and compare out-of-the-box text-based large-language models (LLM) in few-shot settings against fine-tuned unimodal and multimodal models. Our work suggests an ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms both the multimodal (0.677 F1-score) and text-based few-shot prediction using a recent state-of-the-art LLM (0.550 F1-score). In addition to the differences in performance, our findings suggest that the multimodal models tend to perform better when image content is summarized as natural language over their native pixel structure and, using in-context examples improves few-shot performance of LLMs.",,arXiv,['cs.cl'],,
824,llmaugmented preference learning from natural language,"['Inwon Kang', 'Sikai Ruan', 'Tyler Ho', 'Jui-Chien Lin', 'Farhad Mohsin', 'Oshani Seneviratne', 'Lirong Xia']",http://arxiv.org/pdf/2310.08523v1.pdf,2023-10-12,," Finding preferences expressed in natural language is an important but challenging task. State-of-the-art (SotA) methods leverage transformer-based models such as BERT, RoBERTa, etc. and graph neural architectures such as graph attention networks. Since Large Language Models (LLMs) are equipped to deal with larger context lengths and have much larger model sizes than the transformer-based model, we investigate their ability to classify comparative text directly. This work aims to serve as a first step towards using LLMs for the CPC task. We design and conduct a set of experiments that format the classification task into an input prompt for the LLM and a methodology to get a fixed-format response that can be automatically evaluated. Comparing performances with existing methods, we see that pre-trained LLMs are able to outperform the previous SotA models with no fine-tuning involved. Our results show that the LLMs can consistently outperform the SotA when the target text is large -- i.e. composed of multiple sentences --, and are still comparable to the SotA performance in shorter text.
We also find that few-shot learning yields better performance than zero-shot learning.",,arXiv,['cs.cl'],,
825,incontext learning for fewshot molecular property prediction,"['Christopher Fifty', 'Jure Leskovec', 'Sebastian Thrun']",http://arxiv.org/pdf/2310.08863v1.pdf,2023-10-13,," In-context learning has become an important approach for few-shot learning in Large Language Models because of its ability to rapidly adapt to new tasks without fine-tuning model parameters. However, it is restricted to applications in natural language and inapplicable to other domains. In this paper, we adapt the concepts underpinning in-context learning to develop a new algorithm for few-shot molecular property prediction. Our approach learns to predict molecular properties from a context of (molecule, property measurement) pairs and rapidly adapts to new properties without fine-tuning. On the FS-Mol and BACE molecular property prediction benchmarks, we find this method surpasses the performance of recent meta-learning algorithms at small support sizes and is competitive with the best methods at large support sizes.",,arXiv,['cs.lg'],,
826,incontext fewshot relation extraction via pretrained language models,"['Yilmazcan Ozyurt', 'Stefan Feuerriegel', 'Ce Zhang']",http://arxiv.org/pdf/2310.11085v1.pdf,2023-10-17,," Relation extraction aims at inferring structured human knowledge from textual documents. State-of-the-art methods based on language models commonly have two limitations: (1) they require named entities to be either given as input or infer them, which introduces additional noise, and (2) they require human annotations of documents. As a remedy, we present a novel framework for in-context few-shot relation extraction via pre-trained language models. To the best of our knowledge, we are the first to reformulate the relation extraction task as a tailored in-context few-shot learning paradigm. Thereby, we achieve crucial benefits in that we eliminate the need for both named entity recognition and human annotation of documents. Unlike existing methods based on fine-tuning, our framework is flexible in that it can be easily updated for a new set of relations without re-training. We evaluate our framework using DocRED, the largest publicly available dataset for document-level relation extraction, and demonstrate that our framework achieves state-of-the-art performance. Finally, our framework allows us to identify missing annotations, and we thus show that our framework actually performs much better than the original labels from the development set of DocRED.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",,
827,group preference optimization fewshot alignment of large language models,"['Siyan Zhao', 'John Dang', 'Aditya Grover']",http://arxiv.org/pdf/2310.11523v1.pdf,2023-10-17,," Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups. Existing alignment algorithms can be expensive to align for each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations.
For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",,
828,clara multilingual contrastive learning for audio representation acquisition,"['Kari A Noriy', 'Xiaosong Yang', 'Marcin Budka', 'Jian Jun Zhang']",http://arxiv.org/pdf/2310.11830v2.pdf,2023-10-18,," Multilingual speech processing requires understanding emotions, a task made difficult by limited labelled data. CLARA, minimizes reliance on labelled data, enhancing generalization across languages. It excels at fostering shared representations, aiding cross-lingual transfer of speech and emotions, even with little data. Our approach adeptly captures emotional nuances in speech, overcoming subjective assessment issues. Using a large multilingual audio corpus and self-supervised learning, CLARA develops speech representations enriched with emotions, advancing emotion-aware multilingual speech processing. Our method expands the data range using data augmentation, textual embedding for visual understanding, and transfers knowledge from high- to low-resource languages. CLARA demonstrates excellent performance in emotion recognition, language comprehension, and audio benchmarks, excelling in zero-shot and few-shot learning. It adapts to low-resource languages, marking progress in multilingual speech representation learning.",,arXiv,"['cs.sd', 'cs.lg', 'cs.mm', 'eess.as']",,
829,a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation,"['Giuseppe Attanasio', 'Flor Miriam Plaza-del-Arco', 'Debora Nozza', 'Anne Lauscher']",http://arxiv.org/pdf/2310.12127v2.pdf,2023-10-18,," Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices. In this work, we address this gap by investigating whether and to what extent such models exhibit gender bias in machine translation and how we can mitigate it. Concretely, we compute established gender bias metrics on the WinoMT corpus from English to German and Spanish. We discover that IFT models default to male-inflected translations, even disregarding female occupational stereotypes. Next, using interpretability methods, we unveil that models systematically overlook the pronoun indicating the gender of a target occupation in misgendered translations.
Finally, based on this finding, we propose an easy-to-implement and effective bias mitigation solution based on few-shot learning that leads to significantly fairer translations.",,arXiv,"['cs.cl', 'cs.lg']",,
830,an exploration of incontext learning for speech language model,"['Ming-Hao Hsu', 'Kai-Wei Chang', 'Shang-Wen Li', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.12477v1.pdf,2023-10-19,," Ever since the development of GPT-3 in the natural language processing (NLP) field, in-context learning (ICL) has played an important role in utilizing large language models (LLMs). By presenting the LM utterance-label demonstrations at the input, the LM can accomplish few-shot learning without relying on gradient descent or requiring explicit modification of its parameters. This enables the LM to learn and adapt in a black-box manner. Despite the success of ICL in NLP, little work is exploring the possibility of ICL in speech processing. This study proposes the first exploration of ICL with a speech LM without text supervision. We first show that the current speech LM does not have the ICL capability. With the proposed warmup training, the speech LM can, therefore, perform ICL on unseen tasks. In this work, we verify the feasibility of ICL for speech LM on speech classification tasks.",,arXiv,"['eess.as', 'cs.ai', 'cs.cl']",,
831,large language models are biased to overestimate profoundness,"['Eugenio Herrera-Berg', 'Tomás Vergara Browne', 'Pablo León-Villagrá', 'Marc-Lluís Vives', 'Cristian Buc Calderon']",http://arxiv.org/pdf/2310.14422v1.pdf,2023-10-22,," Recent advancements in natural language processing by large language models (LLMs), such as GPT-4, have been suggested to approach Artificial General Intelligence. And yet, it is still under dispute whether LLMs possess similar reasoning abilities to humans. This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements. We found a significant statement-to-statement correlation between the LLMs and humans, irrespective of the type of statements and the prompting technique used. However, LLMs systematically overestimate the profoundness of nonsensical statements, with the exception of Tk-instruct, which uniquely underestimates the profoundness of statements. Only few-shot learning prompts, as opposed to chain-of-thought prompting, draw LLMs ratings closer to humans. Furthermore, this work provides insights into the potential biases induced by Reinforcement Learning from Human Feedback (RLHF), inducing an increase in the bias to overestimate the profoundness of statements.",,arXiv,['cs.cl'],,
832,improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning,"['Ananth Balashankar', 'Xiao Ma', 'Aradhana Sinha', 'Ahmad Beirami', 'Yao Qin', 'Jilin Chen', 'Alex Beutel']",http://arxiv.org/pdf/2310.16959v1.pdf,2023-10-25,," As large language models (LLMs) are widely adopted, new safety issues and policies emerge, to which existing safety classifiers do not generalize well. If we have only observed a few examples of violations of a new safety rule, how can we build a classifier to detect violations? In this paper, we study the novel setting of domain-generalized few-shot learning for LLM-based text safety classifiers. Unlike prior few-shot work, these new safety issues can be hard to uncover and we do not get to choose the few examples.
We demonstrate that existing few-shot techniques do not perform well in this setting, and rather we propose to do parameter-efficient fine-tuning (PEFT) combined with augmenting training data based on similar examples in prior existing rules. We empirically show that our approach of similarity-based data-augmentation + prompt-tuning (DAPT) consistently outperforms baselines that either do not rely on data augmentation or on PEFT by 7-17% F1 score in the Social Chemistry moral judgement and 9-13% AUC in the Toxicity detection tasks, even when the new rule is loosely correlated with existing ones.",,arXiv,['cs.lg'],,
833,retrofitting lightweight language models for emotions using supervised contrastive learning,"['Sapan Shah', 'Sreedhar Reddy', 'Pushpak Bhattacharyya']",http://arxiv.org/pdf/2310.18930v1.pdf,2023-10-29,," We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the fragments with different emotion content are pushed apart. While doing so, it also ensures that the linguistic knowledge already present in PLMs is not inadvertently perturbed. The language models retrofitted by our method, i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as evaluated through different clustering and retrieval metrics. For the downstream tasks on sentiment analysis and sarcasm detection, they perform better than their pre-trained counterparts (about 1% improvement in F1-score) and other existing approaches. Additionally, a more significant boost in performance is observed for the retrofitted models over pre-trained ones in few-shot learning setting.",,arXiv,['cs.cl'],,
834,nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection,"['Yunze Xiao', 'Firoj Alam']",http://arxiv.org/pdf/2311.03184v1.pdf,2023-11-06,," The spread of disinformation and propagandistic content poses a threat to societal harmony, undermining informed decision-making and trust in reliable sources. Online platforms often serve as breeding grounds for such content, and malicious actors exploit the vulnerabilities of audiences to shape public opinion. Although there have been research efforts aimed at the automatic identification of disinformation and propaganda in social media content, there remain challenges in terms of performance. The ArAIEval shared task aims to further research on these particular issues within the context of the Arabic language. In this paper, we discuss our participation in these shared tasks. We competed in subtasks 1A and 2A, where our submitted system secured positions 9th and 10th, respectively. Our experiments consist of fine-tuning transformer models and using zero- and few-shot learning with GPT-4.",,arXiv,"['cs.cl', 'cs.ai', 'cs.si', '68t50', 'f.2.2; i.2.7']",,
835,multilingual mathematical autoformalization,"['Albert Q. Jiang', 'Wenda Li', 'Mateja Jamnik']",http://arxiv.org/pdf/2311.03755v2.pdf,2023-11-07,," Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence. Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models.
But these methods suffer from data scarcity and formal language acquisition difficulty. In this work, we create $\texttt{MMA}$, a large, flexible, multilingual, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones. Experiments show that language models fine-tuned on $\texttt{MMA}$ produce $16-18\%$ of statements acceptable with minimal corrections on the $\texttt{miniF2F}$ and $\texttt{ProofNet}$ benchmarks, up from $0\%$ with the base model. We demonstrate that fine-tuning on multilingual formal data results in more capable autoformalization models even when deployed on monolingual tasks.",,arXiv,"['cs.cl', 'cs.lg']",,
836,robust retrieval augmented generation for zeroshot slot filling,"['Michael Glass', 'Gaetano Rossiello', 'Md Faisal Mahbub Chowdhury', 'Alfio Gliozzo']",http://arxiv.org/pdf/2108.13934v2.pdf,2021-08-31,," Automatically inducing high quality knowledge graphs from a given collection of documents still remains a challenging problem in AI. One way to make headway for this problem is through advancements in a related task known as slot filling. In this task, given an entity query in form of [Entity, Slot, ?], a system is asked to fill the slot by generating or extracting the missing value exploiting evidence extracted from relevant passage(s) in the given document collection. The recent works in the field try to solve this task in an end-to-end fashion using retrieval-based language models. In this paper, we present a novel approach to zero-shot slot filling that extends dense passage retrieval with hard negatives and robust training procedures for retrieval augmented generation models. Our model reports large improvements on both T-REx and zsRE slot filling datasets, improving both passage retrieval and slot value generation, and ranking at the top-1 position in the KILT leaderboard. Moreover, we demonstrate the robustness of our system showing its domain adaptation capability on a new variant of the TACRED dataset for slot filling, through a combination of zero/few-shot learning. We release the source code and pre-trained models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",,
837,dataefficient goaloriented conversation with dialogue knowledge transfer networks,"['Igor Shalyminov', 'Sungjin Lee', 'Arash Eshghi', 'Oliver Lemon']",http://arxiv.org/pdf/1910.01302v1.pdf,2019-10-03,," Goal-oriented dialogue systems are now being widely adopted in industry where it is of key importance to maintain a rapid prototyping cycle for new products and domains. Data-driven dialogue system development has to be adapted to meet this requirement --- therefore, reducing the amount of data and annotations necessary for training such systems is a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet), a state-of-the-art approach to goal-oriented dialogue generation which only uses a few example dialogues (i.e. few-shot learning), none of which has to be annotated. We achieve this by performing a 2-stage training. Firstly, we perform unsupervised dialogue representation pre-training on a large source of goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at the transfer stage, we train DiKTNet using this representation together with 2 other textual knowledge sources with different levels of generality: ELMo encoder and the main dataset's source domains. Our main dataset is the Stanford Multi-Domain dialogue corpus.
We evaluate our model on it in terms of BLEU and Entity F1 scores, and show that our approach significantly and consistently improves upon a series of baseline models as well as over the previous state-of-the-art dialogue generation model, ZSDG. The improvement upon the latter --- up to 10% in Entity F1 and the average of 3% in BLEU score --- is achieved using only the equivalent of 10% of ZSDG's in-domain training data.",,arXiv,"['cs.cl', 'i.2.7']",,
838,metalearning with dynamicmemorybased prototypical network for fewshot event detection,"['Shumin Deng', 'Ningyu Zhang', 'Jiaojian Kang', 'Yichi Zhang', 'Wei Zhang', 'Huajun Chen']",http://arxiv.org/pdf/1910.11621v2.pdf,2019-10-25,," Event detection (ED), a sub-task of event extraction, involves identifying triggers and categorizing event mentions. Existing methods primarily rely upon supervised learning and require large-scale labeled event datasets which are unfortunately not readily available in many real-life applications. In this paper, we consider and reformulate the ED task with limited labeled data as a Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical Network (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learn better prototypes for event types, but also produce more robust sentence encodings for event mentions. Differing from vanilla prototypical networks simply computing event prototypes by averaging, which only consume event mentions once, our model is more robust and is capable of distilling contextual information from event mentions for multiple times due to the multi-hop mechanism of DMNs. The experiments show that DMB-PN not only deals with sample scarcity better than a series of baseline models but also performs more robustly when the variety of event types is relatively large and the instance quantity is extremely small.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",,
839,amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning,"['Sadaf Gull', 'Fayyaz Minhas']",http://arxiv.org/pdf/1911.06106v1.pdf,2019-10-28,," The evolution of drug-resistant microbial species is one of the major challenges to global health. The development of new antimicrobial treatments such as antimicrobial peptides needs to be accelerated to combat this threat. However, the discovery of novel antimicrobial peptides is hampered by low-throughput biochemical assays. Computational techniques can be used for rapid screening of promising antimicrobial peptide candidates prior to testing in the wet lab. The vast majority of existing antimicrobial peptide predictors are non-targeted in nature, i.e., they can predict whether a given peptide sequence is antimicrobial, but they are unable to predict whether the sequence can target a particular microbial species. In this work, we have developed a targeted antimicrobial peptide activity predictor that can predict whether a peptide is effective against a given microbial species or not. This has been made possible through zero-shot and few-shot machine learning. The proposed predictor called AMP0 takes in the peptide amino acid sequence and any N/C-termini modifications together with the genomic sequence of a target microbial species to generate targeted predictions. It is important to note that the proposed method can generate predictions for species that are not part of its training set. The accuracy of predictions for novel test species can be further improved by providing a few example peptides for that species.
Our computational cross-validation results show that the proposed scheme is particularly effective for targeted antimicrobial prediction in comparison to existing approaches and can be used for screening potential antimicrobial peptides in a targeted manner especially for cases in which the number of training examples is small. The webserver of the method is available at http://ampzero.pythonanywhere.com.",,arXiv,"['q-bio.bm', 'cs.lg', 'stat.ml']",,
840,braininspired globallocal learning incorporated with neuromorphic computing,"['Yujie Wu', 'Rong Zhao', 'Jun Zhu', 'Feng Chen', 'Mingkun Xu', 'Guoqi Li', 'Sen Song', 'Lei Deng', 'Guanrui Wang', 'Hao Zheng', 'Jing Pei', 'Youhui Zhang', 'Mingguo Zhao', 'Luping Shi']",http://arxiv.org/pdf/2006.03226v3.pdf,2020-06-05,," Two main routes of learning methods exist at present including error-driven global learning and neuroscience-oriented local learning. Integrating them into one network may provide complementary learning capabilities for versatile learning scenarios. At the same time, neuromorphic computing holds great promise, but still needs plenty of useful algorithms and algorithm-hardware co-designs for exploiting the advantages. Here, we report a neuromorphic hybrid learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model incorporating neuronal dynamics and synaptic plasticity. It can meta-learn local plasticity and receive top-down supervision information for multiscale synergic learning. We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors. It achieves significantly higher performance than single-learning methods, and shows promise in empowering neuromorphic applications revolution. We further implemented the hybrid model in the Tianjic neuromorphic platform by exploiting algorithm-hardware co-designs and proved that the model can fully utilize neuromorphic many-core architecture to develop hybrid computation paradigm.",,arXiv,"['cs.ne', 'cs.ai', 'q-bio.nc']",,
841,direct multimodal fewshot learning of speech and images,"['Leanne Nortje', 'Herman Kamper']",http://arxiv.org/pdf/2012.05680v2.pdf,2020-12-10,," We propose direct multimodal few-shot models that learn a shared embedding space of spoken words and images from only a few paired examples. Imagine an agent is shown an image along with a spoken word describing the object in the picture, e.g. pen, book and eraser. After observing a few paired examples of each class, the model is asked to identify the ""book"" in a set of unseen pictures. Previous work used a two-step indirect approach relying on learned unimodal representations: speech-speech and image-image comparisons are performed across the support set of given speech-image pairs. We propose two direct models which instead learn a single multimodal space where inputs from different modalities are directly comparable: a multimodal triplet network (MTriplet) and a multimodal correspondence autoencoder (MCAE). To train these direct models, we mine speech-image pairs: the support set is used to pair up unlabelled in-domain speech and images. In a speech-to-image digit matching task, direct models outperform indirect models, with the MTriplet achieving the best multimodal five-shot accuracy.
We show that the improvements are due to the combination of unsupervised and transfer learning in the direct models, and the absence of two-step compounding errors.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",,
842,what makes good incontext examples for gpt$3$,"['Jiachang Liu', 'Dinghan Shen', 'Yizhe Zhang', 'Bill Dolan', 'Lawrence Carin', 'Weizhu Chen']",http://arxiv.org/pdf/2101.06804v1.pdf,2021-01-17,," GPT-$3$ has attracted lots of attention due to its superior performance across a wide range of NLP tasks, especially with its powerful and versatile in-context few-shot learning ability. Despite its success, we found that the empirical results of GPT-$3$ depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically-similar to a test sample to formulate its corresponding prompt. Intuitively, the in-context examples selected with such a strategy may serve as more informative inputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random baseline. Moreover, it is observed that the sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (41.9% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset). We hope our investigation could help understand the behaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhance their few-shot capabilities.",,arXiv,['cs.cl'],,
843,spirit distillation precise realtime semantic segmentation of road scenes with insufficient data,"['Zhiyuan Wu', 'Yu Jiang', 'Chupeng Cui', 'Zongmin Yang', 'Xinhui Xue', 'Hong Qi']",http://arxiv.org/pdf/2103.13733v2.pdf,2021-03-25,," Semantic segmentation of road scenes is one of the key technologies for realizing autonomous driving scene perception, and the effectiveness of deep Convolutional Neural Networks (CNNs) for this task has been demonstrated. State-of-art CNNs for semantic segmentation suffer from excessive computations as well as large-scale training data requirement. Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose a new knowledge distillation method for cross-domain knowledge transference and efficient data-insufficient network training, named Spirit Distillation (SD), which allow the student network to mimic the teacher network to extract general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes. Then, in order to further alleviate the trouble of insufficient data and improve the robustness of the student, an Enhanced Spirit Distillation (ESD) method is proposed, which commits to exploit a more comprehensive general features extraction capability by considering images from both the target and the proximity domains as input.
To our knowledge, this paper is a pioneering work on the application of knowledge distillation to few-shot learning. Persuasive experiments conducted on Cityscapes semantic segmentation with the prior knowledge transferred from COCO2017 and KITTI demonstrate that our methods can train a better student network (mIOU and high-precision accuracy boost by 1.4% and 8.2% respectively, with 78.2% segmentation variance) with only 41.8% FLOPs (see Fig. 1).",,arXiv,"['cs.cv', 'cs.ai', 'cs.lg']",,
844,modelling latent translations for crosslingual transfer,"['Edoardo Maria Ponti', 'Julia Kreutzer', 'Ivan Vulić', 'Siva Reddy']",http://arxiv.org/pdf/2107.11353v1.pdf,2021-07-23,," While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense reasoning, paraphrase identification, and natural language inference. We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, which are even more prominent for low-resource languages (e.g., Haitian Creole). Finally, we carry out in-depth analyses comparing different underlying NMT models and assessing the impact of alternative translations on the downstream performance.",,arXiv,['cs.cl'],,
845,prototransformer a metalearning approach to providing student feedback,"['Mike Wu', 'Noah Goodman', 'Chris Piech', 'Chelsea Finn']",http://arxiv.org/pdf/2107.14035v2.pdf,2021-07-23,," High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming question from just a few examples annotated by instructors. Because data for meta-training is limited, we propose a number of amendments to the typical few-shot learning framework, including task augmentation to create synthetic tasks, and additional side information to build stronger priors about each task. These additions are combined with a transformer architecture to embed discrete sequences (e.g. code) to a prototypical representation of a feedback class label. On a suite of few-shot natural language processing tasks, we match or outperform state-of-the-art performance.
Then, on a collection of student solutions to exam questions from an introductory university course, we show that our approach reaches an average precision of 88% on unseen questions, surpassing the 82% precision of teaching assistants. Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university. This is, to the best of our knowledge, the first successful deployment of a machine learning based feedback to open-ended student code.",,arXiv,"['cs.cy', 'cs.lg']",,
846,raft a realworld fewshot text classification benchmark,"['Neel Alex', 'Eli Lifland', 'Lewis Tunstall', 'Abhishek Thakur', 'Pegah Maham', 'C. Jess Riedel', 'Emmie Hine', 'Carolyn Ashurst', 'Paul Sedille', 'Alexis Carlier', 'Michael Noetel', 'Andreas Stuhlmüller']",http://arxiv.org/pdf/2109.14076v3.pdf,2021-09-28,," Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants? Existing benchmarks are not designed to measure progress in applied settings, and so don't directly answer this question. The RAFT benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. Baseline evaluations on RAFT reveal areas current techniques struggle with: reasoning over long texts and tasks with many classes. Human baselines show that some classification tasks are difficult for non-expert humans, reflecting that real-world value sometimes depends on domain expertise. Yet even non-expert human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets and leaderboard will track which model improvements translate into real-world benefits at https://raft.elicit.org .",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",,
847,lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5,"['Chengwei Qin', 'Shafiq Joty']",http://arxiv.org/pdf/2110.07298v3.pdf,2021-10-14,," Existing approaches to lifelong language learning rely on plenty of labeled data for learning a new task, which is hard to obtain in most real scenarios. Considering that humans can continually learn new tasks from a handful of examples, we expect the models also to be able to generalize well on new few-shot tasks without forgetting the previous ones. In this work, we define this more challenging yet practical problem as Lifelong Few-shot Language Learning (LFLL) and propose a unified framework for it based on prompt tuning of T5. Our framework called LFPT5 takes full advantage of PT's strong few-shot learning ability, and simultaneously trains the model as a task solver and a data generator. Before learning a new domain of the same task type, LFPT5 generates pseudo (labeled) samples of previously learned domains, and later gets trained on those samples to alleviate forgetting of previous knowledge as it learns the new domain. In addition, a KL divergence loss is minimized to achieve label consistency between the previous and the current model. While adapting to a new task type, LFPT5 includes and tunes additional prompt embeddings for the new task.
With extensive experiments, we demonstrate that LFPT5 can be applied to various different types of tasks and significantly outperform previous methods in different LFLL settings.",,arXiv,['cs.cl'],, +848,metaicl learning to learn in context,"['Sewon Min', 'Mike Lewis', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2110.15943v2.pdf,2021-10-29,," We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates. We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits. MetaICL outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of the meta-training tasks is key to improvements. We also show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x parameters. Finally, we show that MetaICL is complementary to human-written instructions, and the best performance can be achieved by combining both approaches.",,arXiv,"['cs.cl', 'cs.ai']",, +849,scaling asr improves zero and few shot learning,"['Alex Xiao', 'Weiyi Zheng', 'Gil Keren', 'Duc Le', 'Frank Zhang', 'Christian Fuegen', 'Ozlem Kalinli', 'Yatharth Saraf', 'Abdelrahman Mohamed']",http://arxiv.org/pdf/2111.05948v3.pdf,2021-11-10,," With 4.5 million hours of English speech from 10 different sources across 120 countries and models of up to 10 billion parameters, we explore the frontiers of scale for automatic speech recognition. We propose data selection techniques to efficiently scale training data to find the most valuable samples in massive datasets. To efficiently scale model sizes, we leverage various optimizations such as sparse transducer loss and model sharding. By training 1-10B parameter universal English ASR models, we push the limits of speech recognition performance across many domains. Furthermore, our models learn powerful speech representations with zero and few-shot capabilities on novel domains and styles of speech, exceeding previous results across multiple in-house and public benchmarks. For speakers with disorders due to brain damage, our best zero-shot and few-shot models achieve 22% and 60% relative improvement on the AphasiaBank test set, respectively, while realizing the best performance on public social media videos. 
Furthermore, the same universal model reaches equivalent performance with 500x less in-domain data on the SPGISpeech financial-domain dataset.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, +850,pointclip point cloud understanding by clip,"['Renrui Zhang', 'Ziyu Guo', 'Wei Zhang', 'Kunchang Li', 'Xupeng Miao', 'Bin Cui', 'Yu Qiao', 'Peng Gao', 'Hongsheng Li']",http://arxiv.org/pdf/2112.02413v1.pdf,2021-12-04,," Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training (CLIP) have shown inspirational performance on 2D visual recognition, which learns to match images with their corresponding texts in open-vocabulary settings. However, it remains under explored that whether CLIP, pre-trained by large-scale image-text pairs in 2D, can be generalized to 3D recognition. In this paper, we identify such a setting is feasible by proposing PointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D category texts. Specifically, we encode a point cloud by projecting it into multi-view depth maps without rendering, and aggregate the view-wise zero-shot prediction to achieve knowledge transfer from 2D to 3D. On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in 2D. By just fine-tuning the lightweight adapter in the few-shot settings, the performance of PointCLIP could be largely improved. In addition, we observe the complementary property between PointCLIP and classical 3D-supervised networks. By simple ensembling, PointCLIP boosts baseline's performance and even surpasses state-of-the-art models. Therefore, PointCLIP is a promising alternative for effective 3D point cloud understanding via CLIP under low resource cost and data regime. We conduct thorough experiments on widely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to demonstrate the effectiveness of PointCLIP. The code is released at https://github.com/ZrrSkywalker/PointCLIP.",,arXiv,"['cs.cv', 'cs.ai', 'cs.ro']",, +851,"visionlanguage intelligence tasks, representation learning, and large models","['Feng Li', 'Hao Zhang', 'Yi-Fan Zhang', 'Shilong Liu', 'Jian Guo', 'Lionel M. Ni', 'PengChuan Zhang', 'Lei Zhang']",http://arxiv.org/pdf/2203.01922v1.pdf,2022-03-03,," This paper presents a comprehensive survey of vision-language (VL) intelligence from the perspective of time. This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension. We summarize the development in this field into three time periods, namely task-specific methods, vision-language pre-training (VLP) methods, and larger models empowered by large-scale weakly-labeled data. We first take some common VL tasks as examples to introduce the development of task-specific methods. Then we focus on VLP methods and comprehensively review key components of the model structures and training methods. After that, we show how recent work utilizes large-scale raw image-text data to learn language-aligned visual representations that generalize better on zero or few shot learning tasks. Finally, we discuss some potential future trends towards modality cooperation, unified representation, and knowledge incorporation. 
We believe that this review will be of help for researchers and practitioners of AI and ML, especially those interested in computer vision and natural language processing.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, +852,rethinking task sampling for fewshot visionlanguage transfer learning,"['Zhenhailong Wang', 'Hang Yu', 'Manling Li', 'Han Zhao', 'Heng Ji']",http://arxiv.org/pdf/2203.04904v3.pdf,2022-03-09,," Despite achieving state-of-the-art zero-shot performance, existing vision-language models still fall short of few-shot transfer ability on domain-specific problems. Classical fine-tuning often fails to prevent highly expressive models from exploiting spurious correlations. Although model-agnostic meta-learning (MAML) presents as a natural alternative for few-shot transfer learning, the expensive computation due to implicit second-order optimization limits its use on large-scale vision-language models such as CLIP. While much literature has been devoted to exploring alternative optimization strategies, we identify another essential aspect towards effective few-shot transfer learning, task sampling, which is previously only be viewed as part of data pre-processing in MAML. To show the impact of task sampling, we propose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), which differentiates classical fine-tuning only on uniformly sampling multiple tasks. Despite its simplicity, we show that MAMF consistently outperforms classical fine-tuning on five few-shot vision-language classification tasks. We further show that the effectiveness of the bi-level optimization in MAML is highly sensitive to the zero-shot performance of a task in the context of few-shot vision-language classification. The goal of this paper is to provide new insights on what makes few-shot learning work, and encourage more research into investigating better task sampling strategies.",,arXiv,"['cs.mm', 'cs.cl', 'cs.cv']",, +853,mgpt fewshot learners go multilingual,"['Oleh Shliazhko', 'Alena Fenogenova', 'Maria Tikhonova', 'Vladislav Mikhailov', 'Anastasia Kozlova', 'Tatiana Shavrina']",http://arxiv.org/pdf/2204.07580v2.pdf,2022-04-15,," Recent studies report that autoregressive language models can successfully solve many NLP tasks via zero- and few-shot learning paradigms, which opens up new possibilities for using the pre-trained language models. This paper introduces two autoregressive GPT-like models with 1.3 billion and 13 billion parameters trained on 60 languages from 25 language families using Wikipedia and Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; Deepspeed and Megatron frameworks allow us to parallelize the training and inference steps effectively. The resulting models show performance on par with the recently released XGLM models by Facebook, covering more languages and enhancing NLP possibilities for low resource languages of CIS countries and Russian small nations. We detail the motivation for the choices of the architecture design, thoroughly describe the data preparation pipeline, and train five small versions of the model to choose the most optimal multilingual tokenization strategy. We measure the model perplexity in all covered languages and evaluate it on the wide spectre of multilingual tasks, including classification, generative, sequence labeling and knowledge probing. The models were evaluated with the zero-shot and few-shot methods. Furthermore, we compared the classification tasks with the state-of-the-art multilingual model XGLM. 
Source code and the mGPT XL model are publicly released.",,arXiv,"['cs.cl', 'cs.ai', '68-06, 68-04, 68t50, 68t01', 'i.2; i.2.7']",, +854,opt open pretrained transformer language models,"['Susan Zhang', 'Stephen Roller', 'Naman Goyal', 'Mikel Artetxe', 'Moya Chen', 'Shuohui Chen', 'Christopher Dewan', 'Mona Diab', 'Xian Li', 'Xi Victoria Lin', 'Todor Mihaylov', 'Myle Ott', 'Sam Shleifer', 'Kurt Shuster', 'Daniel Simig', 'Punit Singh Koura', 'Anjali Sridhar', 'Tianlu Wang', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2205.01068v4.pdf,2022-05-02,," Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.",,arXiv,"['cs.cl', 'cs.lg']",, +855,relation extraction as openbook examination retrievalenhanced prompt tuning,"['Xiang Chen', 'Lei Li', 'Ningyu Zhang', 'Chuanqi Tan', 'Fei Huang', 'Luo Si', 'Huajun Chen']",http://arxiv.org/pdf/2205.02355v2.pdf,2022-05-04,," Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to those rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization regarding training data as a book and inference as the close-book test. Those long-tailed or hard patterns can hardly be memorized in parameters given few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval regarding prompt-based instance representations and corresponding relation labels as memorized key-value pairs. During inference, the model can infer relations by linearly interpolating the base output of PLM with the non-parametric nearest neighbor distribution over the datastore. In this way, our model not only infers relation through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method can achieve state-of-the-art in both standard supervised and few-shot settings. Code are available in https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, +856,towards unified prompt tuning for fewshot text classification,"['Jianing Wang', 'Chengyu Wang', 'Fuli Luo', 'Chuanqi Tan', 'Minghui Qiu', 'Fei Yang', 'Qiuhui Shi', 'Songfang Huang', 'Ming Gao']",http://arxiv.org/pdf/2205.05313v1.pdf,2022-05-11,," Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. 
Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits the few-shot learning performance on downstream tasks. It would be desirable if the models can acquire some prompting knowledge before adaptation to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization abilities for accurate adaptation to previously unseen tasks. After multi-task learning across multiple tasks, the PLM can be better prompt-tuned towards any dissimilar target tasks in low-resourced settings. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-arts for prompt-based fine-tuning.",,arXiv,"['cs.cl', 'cs.ai']",, +857,towards answering openended ethical quandary questions,"['Yejin Bang', 'Nayeon Lee', 'Tiezheng Yu', 'Leila Khalatbari', 'Yan Xu', 'Samuel Cahyawijaya', 'Dan Su', 'Bryan Wilie', 'Romain Barraud', 'Elham J. Barezi', 'Andrea Madotto', 'Hayden Kee', 'Pascale Fung']",http://arxiv.org/pdf/2205.05989v3.pdf,2022-05-12,," Considerable advancements have been made in various NLP tasks based on the impressive power of large language models (LLMs) and many NLP applications are deployed in our daily lives. In this work, we challenge the capability of LLMs with the new task of Ethical Quandary Generative Question Answering. Ethical quandary questions are more challenging to address because multiple conflicting answers may exist to a single quandary. We explore the current capability of LLMs in providing an answer with a deliberative exchange of different perspectives to an ethical quandary, in the approach of Socratic philosophy, instead of providing a closed answer like an oracle. We propose a model that searches for different ethical principles applicable to the ethical quandary and generates an answer conditioned on the chosen principles through prompt-based few-shot learning. We also discuss the remaining challenges and ethical issues involved in this task and suggest the direction toward developing responsible NLP systems by incorporating human values explicitly.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +858,promptda labelguided data augmentation for promptbased fewshot learners,"['Canyu Chen', 'Kai Shu']",http://arxiv.org/pdf/2205.09229v3.pdf,2022-05-18,," Recent advances in large pre-trained language models (PLMs) lead to impressive gains in natural language understanding (NLU) tasks with task-specific fine-tuning. However, directly fine-tuning PLMs heavily relies on sufficient labeled training instances, which are usually hard to obtain. Prompt-based tuning on PLMs has shown to be powerful for various downstream few-shot tasks. Existing works studying prompt-based tuning for few-shot NLU tasks mainly focus on deriving proper label words with a verbalizer or generating prompt templates to elicit semantics from PLMs. In addition, conventional data augmentation strategies such as synonym substitution, though widely adopted in low-resource scenarios, only bring marginal improvements for prompt-based few-shot learning. 
Thus, an important research question arises: how to design effective data augmentation methods for prompt-based few-shot tuning? To this end, considering the label semantics are essential in prompt-based tuning, we propose a novel label-guided data augmentation framework PromptDA, which exploits the enriched label semantic information for data augmentation. Extensive experiment results on few-shot text classification tasks demonstrate the superior performance of the proposed framework by effectively leveraging label semantics and data augmentation for natural language understanding. Our code is available at https://github.com/canyuchen/PromptDA.",,arXiv,"['cs.cl', 'cs.ai']",, +859,what makes datatotext generation hard for pretrained language models,"['Moniba Keymanesh', 'Adrian Benton', 'Mark Dredze']",http://arxiv.org/pdf/2205.11505v1.pdf,2022-05-23,," Expressing natural language descriptions of structured facts or relations -- data-to-text generation (D2T) -- increases the accessibility of structured knowledge repositories. Previous work shows that pre-trained language models (PLMs) perform remarkably well on this task after fine-tuning on a significant amount of task-specific training data. On the other hand, while auto-regressive PLMs can generalize from a few task examples, their efficacy at D2T is largely unexplored. Furthermore, we have an incomplete understanding of the limits of PLMs on D2T. In this work, we conduct an empirical study of both fine-tuned and auto-regressive PLMs on the DART multi-domain D2T dataset. We consider their performance as a function of the amount of task-specific data and how these data are incorporated into the models: zero and few-shot learning, and fine-tuning of model weights. In addition, we probe the limits of PLMs by measuring performance on subsets of the evaluation data: novel predicates and abstractive test examples. To improve the performance on these subsets, we investigate two techniques: providing predicate descriptions in the context and re-ranking generated candidates by information reflected in the source. Finally, we conduct a human evaluation of model errors and show that D2T generation tasks would benefit from datasets with more careful manual curation.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, +860,attempt parameterefficient multitask tuning via attentional mixtures of soft prompts,"['Akari Asai', 'Mohammadreza Salehi', 'Matthew E. Peters', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2205.11961v2.pdf,2022-05-24,," This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts-small prefix embedding vectors pre-trained for different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared between tasks in multi-task training, are updated, while the original LM and source prompts are intact. ATTEMPT is highly parameter-efficient (e.g., updates 2,300 times fewer parameters than full fine-tuning) while achieving high task performance using knowledge from high-resource tasks. 
Moreover, it is modular using pre-trained soft prompts, and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned or other parameter-efficient tuning approaches that use over ten times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings.",,arXiv,['cs.cl'],, +861,making large language models better reasoners with stepaware verifier,"['Yifei Li', 'Zeqi Lin', 'Shizhuo Zhang', 'Qiang Fu', 'Bei Chen', 'Jian-Guang Lou', 'Weizhu Chen']",http://arxiv.org/pdf/2206.02336v3.pdf,2022-06-06,," Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed to guide the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in problem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DIVERSE has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DIVERSE on the latest language model code-davinci-002 and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).",,arXiv,"['cs.cl', 'cs.ai']",, +862,language models are generalpurpose interfaces,"['Yaru Hao', 'Haoyu Song', 'Li Dong', 'Shaohan Huang', 'Zewen Chi', 'Wenhui Wang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2206.06336v1.pdf,2022-06-13,," Foundation models have received much attention due to their effectiveness across a broad range of downstream applications. Though there is a big convergence in terms of architecture, most pretrained models are typically still developed for specific tasks or modalities. In this work, we propose to use language models as a general-purpose interface to various foundation models. A collection of pretrained encoders perceive diverse modalities (such as vision, and language), and they dock with a language model that plays the role of a universal task layer. We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders. We subsume the advantages and capabilities from both causal and non-causal modeling, thereby combining the best of two worlds. Specifically, the proposed method not only inherits the capabilities of in-context learning and open-ended generation from causal language modeling, but also is conducive to finetuning because of the bidirectional encoders. 
More importantly, our approach seamlessly unlocks the combinations of the above capabilities, e.g., enabling in-context learning or instruction following with finetuned encoders. Experimental results across various language-only and vision-language benchmarks show that our model outperforms or is competitive with specialized models on finetuning, zero-shot generalization, and few-shot learning.",,arXiv,['cs.cl'],, +863,fit parameter efficient fewshot transfer learning for personalized and federated image classification,"['Aliaksandra Shysheya', 'John Bronskill', 'Massimiliano Patacchiola', 'Sebastian Nowozin', 'Richard E Turner']",http://arxiv.org/pdf/2206.08671v2.pdf,2022-06-17,," Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication efficient distributed training protocols. In this work, we develop FiLM Transfer (FiT) which fulfills these requirements in the image classification setting by combining ideas from transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter layers) and meta-learning (automatically configured Naive Bayes classifiers and episodic training) to yield parameter efficient models with superior classification accuracy at low-shot. The resulting parameter efficiency is key for enabling few-shot learning, inexpensive model updates for personalization, and communication efficient federated learning. We experiment with FiT on a wide range of downstream datasets and show that it achieves better classification accuracy than the leading Big Transfer (BiT) algorithm at low-shot and achieves state-of-the art accuracy on the challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters. Finally, we demonstrate the parameter efficiency and superior accuracy of FiT in distributed low-shot applications including model personalization and federated learning where model update size is an important performance metric.",,arXiv,"['stat.ml', 'cs.cv', 'cs.lg']",, +864,a reinforcement learningbased offensive semantics censorship system for chatbots,"['Shaokang Cai', 'Dezhi Han', 'Zibin Zheng', 'Dun Li', 'Noel Crespi']",http://arxiv.org/pdf/2207.10569v1.pdf,2022-07-13,," The rapid development of artificial intelligence (AI) technology has enabled large-scale AI applications to land in the market and practice. However, while AI technology has brought many conveniences to people in the productization process, it has also exposed many security issues. Especially, attacks against online learning vulnerabilities of chatbots occur frequently. Therefore, this paper proposes a semantics censorship chatbot system based on reinforcement learning, which is mainly composed of two parts: the Offensive semantics censorship model and the semantics purification model. Offensive semantics review can combine the context of user input sentences to detect the rapid evolution of Offensive semantics and respond to Offensive semantics responses. The semantics purification model For the case of chatting robot models, it has been contaminated by large numbers of offensive semantics, by strengthening the offensive reply learned by the learning algorithm, rather than rolling back to the early versions. In addition, by integrating a once-through learning approach, the speed of semantics purification is accelerated while reducing the impact on the quality of replies. 
The experimental results show that our proposed approach reduces the probability of the chat model generating offensive replies and that the integration of the few-shot learning algorithm improves the training speed rapidly while effectively slowing down the decline in BLEU values.",,arXiv,['cs.cl'],, +865,alexatm 20b fewshot learning using a largescale multilingual seq2seq model,"['Saleh Soltan', 'Shankar Ananthakrishnan', 'Jack FitzGerald', 'Rahul Gupta', 'Wael Hamza', 'Haidar Khan', 'Charith Peris', 'Stephen Rawls', 'Andy Rosenbaum', 'Anna Rumshisky', 'Chandana Satya Prakash', 'Mukund Sridhar', 'Fabian Triefenbach', 'Apurv Verma', 'Gokhan Tur', 'Prem Natarajan']",http://arxiv.org/pdf/2208.01448v2.pdf,2022-08-02,," In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. In particular, we train a 20 billion parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs supported by the model (Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu) on Flores-101 dataset. We also show in zero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE and SQuADv2 datasets and provides SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for Large-scale Language Model (LLM) training.",,arXiv,"['cs.cl', 'cs.lg']",, +866,unsupervisedly prompting alphafold2 for fewshot learning of accurate folding landscape and protein structure prediction,"['Jun Zhang', 'Sirui Liu', 'Mengyun Chen', 'Haotian Chu', 'Min Wang', 'Zidong Wang', 'Jialiang Yu', 'Ningxi Ni', 'Fan Yu', 'Diqing Chen', 'Yi Isaac Yang', 'Boxin Xue', 'Lijiang Yang', 'Yuan Liu', 'Yi Qin Gao']",http://arxiv.org/pdf/2208.09652v2.pdf,2022-08-20,," Data-driven predictive methods which can efficiently and accurately transform protein sequences into biologically active structures are highly valuable for scientific research and medical development. Determining accurate folding landscape using co-evolutionary information is fundamental to the success of modern protein structure prediction methods. As the state of the art, AlphaFold2 has dramatically raised the accuracy without performing explicit co-evolutionary analysis. Nevertheless, its performance still shows strong dependence on available sequence homologs. Based on the interrogation on the cause of such dependence, we presented EvoGen, a meta generative model, to remedy the underperformance of AlphaFold2 for poor MSA targets. By prompting the model with calibrated or virtually generated homologue sequences, EvoGen helps AlphaFold2 fold accurately in low-data regime and even achieve encouraging performance with single-sequence predictions. Being able to make accurate predictions with few-shot MSA not only generalizes AlphaFold2 better for orphan sequences, but also democratizes its use for high-throughput applications. 
Besides, EvoGen combined with AlphaFold2 yields a probabilistic structure generation method which could explore alternative conformations of protein sequences, and the task-aware differentiable algorithm for sequence generation will benefit other related tasks including protein design.",,arXiv,"['cs.lg', 'cs.ai', 'physics.bio-ph']",, +867,disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective,"['Jiangmeng Li', 'Yanan Zhang', 'Wenwen Qiang', 'Lingyu Si', 'Chengbo Jiao', 'Xiaohui Hu', 'Changwen Zheng', 'Fuchun Sun']",http://arxiv.org/pdf/2208.12681v2.pdf,2022-08-26,," Few-shot learning models learn representations with limited human annotations, and such a learning paradigm demonstrates practicability in various tasks, e.g., image classification, object detection, etc. However, few-shot object detection methods suffer from an intrinsic defect that the limited training data makes the model cannot sufficiently explore semantic information. To tackle this, we introduce knowledge distillation to the few-shot object detection learning paradigm. We further run a motivating experiment, which demonstrates that in the process of knowledge distillation, the empirical error of the teacher model degenerates the prediction performance of the few-shot object detection model as the student. To understand the reasons behind this phenomenon, we revisit the learning paradigm of knowledge distillation on the few-shot object detection task from the causal theoretic standpoint, and accordingly, develop a Structural Causal Model. Following the theoretical guidance, we propose a backdoor adjustment-based knowledge distillation method for the few-shot object detection task, namely Disentangle and Remerge (D&R), to perform conditional causal intervention toward the corresponding Structural Causal Model. Empirically, the experiments on benchmarks demonstrate that D&R can yield significant performance boosts in few-shot object detection. Code is available at https://github.com/ZYN-1101/DandR.git.",,arXiv,['cs.cv'],, +868,neurips'22 crossdomain metadl competition design and baseline results,"['Dustin Carrión-Ojeda', 'Hong Chen', 'Adrian El Baz', 'Sergio Escalera', 'Chaoyu Guan', 'Isabelle Guyon', 'Ihsan Ullah', 'Xin Wang', 'Wenwu Zhu']",http://arxiv.org/pdf/2208.14686v1.pdf,2022-08-31,," We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on ""cross-domain"" meta-learning. Meta-learning aims to leverage experience gained from previous tasks to solve new tasks efficiently (i.e., with better performance, little training data, and/or modest computational resources). While previous challenges in the series focused on within-domain few-shot learning problems, with the aim of learning efficiently N-way k-shot tasks (i.e., N class classification problems with k training examples), this competition challenges the participants to solve ""any-way"" and ""any-shot"" problems drawn from various domains (healthcare, ecology, biology, manufacturing, and others), chosen for their humanitarian and societal impact. To that end, we created Meta-Album, a meta-dataset of 40 image classification datasets from 10 domains, from which we carve out tasks with any number of ""ways"" (within the range 2-20) and any number of ""shots"" (within the range 1-20). The competition is with code submission, fully blind-tested on the CodaLab challenge platform. 
The code of the winners will be open-sourced, enabling the deployment of automated machine learning solutions for few-shot image classification across several domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ne']",, +869,automatic label sequence generation for prompting sequencetosequence models,"['Zichun Yu', 'Tianyu Gao', 'Zhengyan Zhang', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun', 'Jie Zhou']",http://arxiv.org/pdf/2209.09401v1.pdf,2022-09-20,," Prompting, which casts downstream applications as language modeling tasks, has shown to be sample efficient compared to standard fine-tuning with pre-trained models. However, one pitfall of prompting is the need of manually-designed patterns, whose outcome can be unintuitive and requires large validation sets to tune. To tackle the challenge, we propose AutoSeq, a fully automatic prompting method: (1) We adopt natural language prompts on sequence-to-sequence models, enabling free-form generation and larger label search space; (2) We propose label sequences -- phrases with indefinite lengths to verbalize the labels -- which eliminate the need of manual templates and are more expressive than single label words; (3) We use beam search to automatically generate a large amount of label sequence candidates and propose contrastive re-ranking to get the best combinations. AutoSeq significantly outperforms other no-manual-design methods, such as soft prompt tuning, adapter tuning, and automatic search on single label words; the generated label sequences are even better than curated manual ones on a variety of tasks. Our method reveals the potential of sequence-to-sequence models in few-shot learning and sheds light on a path to generic and automatic prompting. The source code of this paper can be obtained from https://github.com/thunlp/Seq2Seq-Prompt.",,arXiv,"['cs.cl', 'cs.lg']",, +870,collaboration of pretrained models makes better fewshot learner,"['Renrui Zhang', 'Bohao Li', 'Wei Zhang', 'Hao Dong', 'Hongsheng Li', 'Peng Gao', 'Yu Qiao']",http://arxiv.org/pdf/2209.12255v2.pdf,2022-09-25,," Few-shot classification requires deep neural networks to learn generalized representations only from limited training images, which is challenging but significant in low-data regimes. Recently, CLIP-based methods have shown promising few-shot performance benefited from the contrastive language-image pre-training. Based on this point, we question if the large-scale pre-training can alleviate the few-shot data deficiency and also assist the representation learning by the pre-learned knowledge. In this paper, we propose CoMo, a Collaboration of pre-trained Models that incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning. Our CoMo includes: CLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge, and DALL-E's language-generative knowledge. Specifically, CoMo works in two aspects: few-shot data expansion and diverse knowledge ensemble. For one, we generate synthetic images via zero-shot DALL-E to enrich the few-shot training data without any manpower. For the other, we introduce a learnable Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from CLIP and DINO. By such collaboration, CoMo can fully unleash the potential of different pre-training methods and unify them to perform state-of-the-art for few-shot classification. 
We conduct extensive experiments on 11 datasets to demonstrate the superiority and generalization ability of our approach.",,arXiv,['cs.cv'],, +871,clip2point transfer clip to point cloud classification with imagedepth pretraining,"['Tianyu Huang', 'Bowen Dong', 'Yunhan Yang', 'Xiaoshui Huang', 'Rynson W. H. Lau', 'Wanli Ouyang', 'Wangmeng Zuo']",http://arxiv.org/pdf/2210.01055v3.pdf,2022-10-03,," Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps, adopting CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification. We introduce a new depth rendering setting that forms a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning to enforce the depth features for capturing expressive visual and textual features and intra-modality learning to enhance the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter can well fit few-shot tasks without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.",,arXiv,['cs.cv'],, +872,"rarr researching and revising what language models say, using language models","['Luyu Gao', 'Zhuyun Dai', 'Panupong Pasupat', 'Anthony Chen', 'Arun Tejasvi Chaganty', 'Yicheng Fan', 'Vincent Y. Zhao', 'Ni Lao', 'Hongrae Lee', 'Da-Cheng Juan', 'Kelvin Guu']",http://arxiv.org/pdf/2210.08726v3.pdf,2022-10-17,," Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. 
When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, +873,tape assessing fewshot russian language understanding,"['Ekaterina Taktasheva', 'Tatiana Shavrina', 'Alena Fenogenova', 'Denis Shevelev', 'Nadezhda Katricheva', 'Maria Tikhonova', 'Albina Akhmetgareeva', 'Oleg Zinkevich', 'Anastasiia Bashmakova', 'Svetlana Iordanskaia', 'Alena Spiridonova', 'Valentina Kurenshchikova', 'Ekaterina Artemova', 'Vladislav Mikhailov']",http://arxiv.org/pdf/2210.12813v1.pdf,2022-10-23,," Recent advances in zero-shot and few-shot learning have shown promise for a scope of research and practical purposes. However, this fast-growing area lacks standardized evaluation suites for non-English languages, hindering progress outside the Anglo-centric paradigm. To address this line of research, we propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that includes six more complex NLU tasks for Russian, covering multi-hop reasoning, ethical concepts, logic and commonsense knowledge. The TAPE's design focuses on systematic zero-shot and few-shot NLU evaluation: (i) linguistic-oriented adversarial attacks and perturbations for analyzing robustness, and (ii) subpopulations for nuanced interpretation. The detailed analysis of testing the autoregressive baselines indicates that simple spelling-based perturbations affect the performance the most, while paraphrasing the input has a more negligible effect. At the same time, the results demonstrate a significant gap between the neural and human baselines for most tasks. We publicly release TAPE (tape-benchmark.com) to foster research on robust LMs that can generalize to new tasks when little to no supervision is available.",,arXiv,['cs.cl'],, +874,learning new tasks from a few examples with softlabel prototypes,"['Avyav Kumar Singh', 'Ekaterina Shutova', 'Helen Yannakoudakis']",http://arxiv.org/pdf/2210.17437v2.pdf,2022-10-31,," It has been experimentally demonstrated that humans are able to learn in a manner that allows them to make predictions on categories for which they have not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) have recently presented a machine learning approach that aims to do the same. They utilise synthetically generated data and demonstrate that it is possible to achieve sub-linear scaling and develop models that can learn to recognise N classes from M training samples where M is less than N - aka less-than-one shot learning. Their method was, however, defined for univariate or simple multivariate data (Sucholutsky et al., 2021). We extend it to work on large, high-dimensional and real-world datasets and empirically validate it in this new and challenging setting. We apply this method to learn previously unseen NLP tasks from very few examples (4, 8 or 16). We first generate compact, sophisticated less-than-one shot representations called soft-label prototypes which are fitted on training data, capturing the distribution of different classes across the input domain space. 
We then use a modified k-Nearest Neighbours classifier to demonstrate that soft-label prototypes can classify data competitively, even outperforming much more computationally complex few-shot learning methods.",,arXiv,"['cs.lg', 'cs.cl']",, +875,explicit knowledge transfer for weaklysupervised code generation,"['Zhangir Azerbayev', 'Ansong Ni', 'Hailey Schoelkopf', 'Dragomir Radev']",http://arxiv.org/pdf/2211.16740v3.pdf,2022-11-30,," Large language models (LLMs) can acquire strong code-generation capabilities through few-shot learning. In contrast, supervised fine-tuning is still needed for smaller models to achieve good performance. Such fine-tuning demands a large number of task-specific NL-code pairs, which are expensive to obtain. In this paper, we attempt to transfer the code generation ability of an LLM to a smaller model with the aid of weakly-supervised data. More specifically, we propose explicit knowledge transfer (EKT), which uses the few-shot capabilities of a teacher LLM to create NL-code pairs that we then filter for correctness and fine-tune the student on. We evaluate EKT on the task of generating code solutions to math word problems from the GSM8k dataset. We find that EKT not only yields better performance than training with expert iteration, but also outperforms knowledge distillation, another form of knowledge transfer. A GPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4% pass@100 on GSM8k, while the same student and teacher trained with knowledge distillation yield only a 3.7% pass@100. We also show that it is possible for a student model to outperform the teacher using EKT.",,arXiv,['cs.cl'],, +876,can incontext learners learn a reasoning concept from demonstrations,"['Michal Štefánik', 'Marek Kadlčík']",http://arxiv.org/pdf/2212.01692v4.pdf,2022-12-03,," Language models exhibit an emergent ability to learn a new task from a small number of input-output demonstrations. However, recent work shows that in-context learners largely rely on their pre-trained knowledge, such as the sentiment of the labels, instead of learning new associations from the input. We argue that the commonly-used few-shot evaluation using a random selection of in-context demonstrations can not disentangle models' reliance on such biases, as most of the randomly-selected demonstrations do not present relations informative for prediction beyond exposing the task's input-output distribution. Therefore, to evaluate models' in-context learning ability independent of models' memory, we introduce a Concept-sharing few-shot learning method choosing the demonstrations that share an underlying concept with the predicted sample. We extract a set of such concepts from available human explanations and measure how much models can benefit from presenting these concepts in few-shot demonstrations. We find that most of the recent in-context learners can not consistently benefit from the demonstrated concepts, irrespective of the model size. However, we note that T0 models are more sensitive to exhibited concepts, benefiting from concept-sharing demonstrations in 7 out of 8 evaluation scenarios.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +877,federated fewshot learning for mobile nlp,"['Dongqi Cai', 'Shangguang Wang', 'Yaozong Wu', 'Felix Xiaozhu Lin', 'Mengwei Xu']",http://arxiv.org/pdf/2212.05974v2.pdf,2022-12-12,," Natural language processing (NLP) sees rich mobile applications. To support various language understanding tasks, a foundation NLP model is often fine-tuned in a federated, privacy-preserving setting (FL). 
This process currently relies on at least hundreds of thousands of labeled training samples from mobile clients; yet mobile users often lack willingness or knowledge to label their data. Such an inadequacy of data labels is known as a few-shot scenario; it becomes the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shot scenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling and prompt learning, we first establish a training pipeline that delivers competitive accuracy when only 0.05% (fewer than 100) of the training data is labeled and the remaining is unlabeled. To instantiate the workflow, we further present a system FeS, addressing the high execution cost with novel designs. (1) Curriculum pacing, which injects pseudo labels to the training workflow at a rate commensurate to the learning progress; (2) Representational diversity, a mechanism for selecting the most learnable data, only for which pseudo labels will be generated; (3) Co-planning of a model's training depth and layer capacity. Together, these designs reduce the training delay, client energy, and network traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$, respectively. Through algorithm/system co-design, FFNLP demonstrates that FL can apply to challenging settings where most training samples are unlabeled.",,arXiv,"['cs.lg', 'cs.cl']",, +878,fewfedweight fewshot federated learning framework across multiple nlp tasks,"['Weilong Dong', 'Xinwei Wu', 'Junzhuo Li', 'Shuangzhi Wu', 'Chao Bian', 'Deyi Xiong']",http://arxiv.org/pdf/2212.08354v1.pdf,2022-12-16,," Massively multi-task learning with large language models has recently made substantial progress on few-shot generalization. However, this is usually performed in a centralized learning fashion, ignoring the privacy sensitivity issue of (annotated) data used in multiple tasks. To mitigate this issue, we propose FewFedWeight, a few-shot federated learning framework across multiple tasks, to achieve the best of both worlds: privacy preservation and cross-task generalization. FewFedWeight trains client models in isolated devices without sharing data. It broadcasts the global model in the server to each client and produces pseudo data for clients so that knowledge from the global model can be explored to enhance few-shot learning of each client model. An energy-based algorithm is further proposed to weight pseudo samples in order to reduce the negative impact of noise from the generated pseudo data. Adaptive model weights of client models are also tuned according to their performance. We use these model weights to dynamically aggregate client models to update the global model. Experiments on 118 NLP tasks show that FewFedWeight can significantly improve the performance of client models on 61% tasks with an average performance improvement rate of 30.5% over the baseline and substantially outperform FedAvg and other decentralized learning methods.",,arXiv,['cs.cl'],, +879,contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning,"['Chris Lengerich', 'Gabriel Synnaeve', 'Amy Zhang', 'Hugh Leather', 'Kurt Shuster', 'François Charton', 'Charysse Redwood']",http://arxiv.org/pdf/2212.11353v1.pdf,2022-12-21,," Traditional approaches to RL have focused on learning decision policies directly from episodic decisions, while slowly and implicitly learning the semantics of compositional representations needed for generalization. 
While some approaches have been adopted to refine representations via auxiliary self-supervised losses while simultaneously learning decision policies, learning compositional representations from hand-designed and context-independent self-supervised losses (multi-view) still adapts relatively slowly to the real world, which contains many non-IID subspaces requiring rapid distribution shift in both time and spatial attention patterns at varying levels of abstraction. In contrast, supervised language model cascades have shown the flexibility to adapt to many diverse manifolds, and hints of self-learning needed for autonomous task transfer. However, to date, transfer methods for language models like few-shot learning and fine-tuning still require human supervision and transfer learning using self-learning methods has been underexplored. We propose a self-supervised loss policy called contrastive distillation which manifests latent variables with high mutual information with both source and target tasks from weights to tokens. We show how this outperforms common methods of transfer learning and suggests a useful design axis of trading off compute for generalizability for online transfer. Contrastive distillation is improved through sampling from memory and suggests a simple algorithm for more efficiently sampling negative examples for contrastive losses than random sampling.",,arXiv,"['cs.cl', 'cs.lg']",, +880,exploring efficient fewshot adaptation for vision transformers,"['Chengming Xu', 'Siqian Yang', 'Yabiao Wang', 'Zhanxiong Wang', 'Yanwei Fu', 'Xiangyang Xue']",http://arxiv.org/pdf/2301.02419v1.pdf,2023-01-06,," The task of Few-shot Learning (FSL) aims to do the inference on novel categories containing only few labeled examples, with the help of knowledge learned from base categories containing abundant labeled training samples. While there are numerous works into FSL task, Vision Transformers (ViTs) have rarely been taken as the backbone to FSL with few trials focusing on naive finetuning of whole backbone or classification layer. Essentially, despite ViTs have been shown to enjoy comparable or even better performance on other vision tasks, it is still very nontrivial to efficiently finetune the ViTs in real-world FSL scenarios. To this end, we propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The key novelties come from the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA) for the task and backbone tuning, individually. Specifically, in APT, the prefix is projected to new key and value pairs that are attached to each self-attention layer to provide the model with task-specific information. Moreover, we design the DRA in the form of learnable offset vectors to handle the potential domain gaps between base and novel data. To ensure the APT would not deviate from the initial task-specific information much, we further propose a novel prototypical regularization, which maximizes the similarity between the projected distribution of prefix and initial prototypes, regularizing the update procedure. Our method receives outstanding performance on the challenging Meta-Dataset. We conduct extensive experiments to show the efficacy of our model.",,arXiv,['cs.cv'],, +881,unleashing the power of shared label structures for human activity recognition,"['Xiyuan Zhang', 'Ranak Roy Chowdhury', 'Jiayun Zhang', 'Dezhi Hong', 'Rajesh K. Gupta', 'Jingbo Shang']"
,http://arxiv.org/pdf/2301.03462v2.pdf,2023-01-01,," Current human activity recognition (HAR) techniques regard activity labels as integer class IDs without explicitly modeling the semantics of class labels. We observe that different activity names often have shared structures. For example, ""open door"" and ""open fridge"" both have ""open"" as the action; ""kicking soccer ball"" and ""playing tennis ball"" both have ""ball"" as the object. Such shared structures in label names can be translated to the similarity in sensory data and modeling common structures would help uncover knowledge across different activities, especially for activities with limited samples. In this paper, we propose SHARE, a HAR framework that takes into account shared structures of label names for different activities. To exploit the shared structures, SHARE comprises an encoder for extracting features from input sensory time series and a decoder for generating label names as a token sequence. We also propose three label augmentation techniques to help the model more effectively capture semantic structures across activities, including a basic token-level augmentation, and two enhanced embedding-level and sequence-level augmentations utilizing the capabilities of pre-trained models. SHARE outperforms state-of-the-art HAR models in extensive experiments on seven HAR benchmark datasets. We also evaluate in few-shot learning and label imbalance settings and observe even more significant performance gap.",,arXiv,"['cs.lg', 'cs.ai', 'eess.sp']",, +882,"see, think, confirm interactive prompting between vision and language models for knowledgebased visual reasoning","['Zhenfang Chen', 'Qinhong Zhou', 'Yikang Shen', 'Yining Hong', 'Hao Zhang', 'Chuang Gan']",http://arxiv.org/pdf/2301.05226v1.pdf,2023-01-12,," Large pre-trained vision and language models have demonstrated remarkable capacities for various tasks. However, solving the knowledge-based visual reasoning tasks remains challenging, which requires a model to comprehensively understand image content, connect the external world knowledge, and perform step-by-step reasoning to answer the questions correctly. To this end, we propose a novel framework named Interactive Prompting Visual Reasoner (IPVR) for few-shot knowledge-based visual reasoning. IPVR contains three stages, see, think and confirm. The see stage scans the image and grounds the visual concept candidates with a visual perception model. The think stage adopts a pre-trained large language model (LLM) to attend to the key concepts from candidates adaptively. It then transforms them into text context for prompting with a visual captioning model and adopts the LLM to generate the answer. The confirm stage further uses the LLM to generate the supporting rationale to the answer, verify the generated rationale with a cross-modality classifier and ensure that the rationale can infer the predicted output consistently. We conduct experiments on a range of knowledge-based visual reasoning datasets. We found our IPVR enjoys several benefits, 1). it achieves better performance than the previous few-shot learning baselines; 2). it enjoys the total transparency and trustworthiness of the whole reasoning process by providing rationales for each reasoning step; 3). 
it is computation-efficient compared with other fine-tuningbaselines.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, +883,large language models are latent variable models explaining and finding good demonstrations for incontext learning,"['Xinyi Wang', 'Wanrong Zhu', 'Michael Saxon', 'Mark Steyvers', 'William Yang Wang']",http://arxiv.org/pdf/2301.11916v3.pdf,2023-01-27,," In recent years, pre-trained large language models (LLMs) have demonstratedremarkable efficiency in achieving an inference-time few-shot learningcapability known as in-context learning. However, existing literature hashighlighted the sensitivity of this capability to the selection of few-shotdemonstrations. Current understandings of the underlying mechanisms by whichthis capability arises from regular language model pretraining objectivesremain disconnected from the real-world LLMs. This study aims to examine thein-context learning phenomenon through a Bayesian lens, viewing real-world LLMsas latent variable models. On this premise, we propose an algorithm to selectoptimal demonstrations from a set of annotated data with a small LM, and thendirectly generalize the selected demonstrations to larger LMs. We demonstratesignificant improvement over baselines, averaged over eight GPT models on eightreal-world text classification datasets. We also demonstrate the real-worldusefulness of our algorithm on GSM8K, a math word problem dataset. Ourempirical findings support our hypothesis that LLMs implicitly infer a latentvariable containing task information.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +884,language quantized autoencoders towards unsupervised textimage alignment,"['Hao Liu', 'Wilson Yan', 'Pieter Abbeel']",http://arxiv.org/pdf/2302.00902v2.pdf,2023-02-02,," Recent progress in scaling up large language models has shown impressivecapabilities in performing few-shot learning across a wide range of text-basedtasks. However, a key limitation is that these language models fundamentallylack visual perception - a crucial attribute needed to extend these models tobe able to interact with the real world and solve vision tasks, such as invisual-question answering and robotics. Prior works have largely connectedimage to text through pretraining and/or fine-tuning on curated image-textdatasets, which can be a costly and expensive process. In order to resolve thislimitation, we propose a simple yet effective approach calledLanguage-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns toalign text-image data in an unsupervised manner by leveraging pretrainedlanguage models (e.g., BERT, RoBERTa). Our main idea is to encode image assequences of text tokens by directly quantizing image embeddings using apretrained language codebook. We then apply random masking followed by a BERTmodel, and have the decoder reconstruct the original image from BERT predictedtext token embeddings. By doing so, LQAE learns to represent similar imageswith similar clusters of text tokens, thereby aligning these two modalitieswithout the use of aligned text-image pairs. This enables few-shot imageclassification with large language models (e.g., GPT-3) as well as linearclassification of images based on BERT text features. 
To the best of ourknowledge, our work is the first work that uses unaligned images for multimodaltasks by leveraging the power of pretrained language models.",,arXiv,"['cs.lg', 'cs.cl', 'cs.cv']",, +885,the unreasonable effectiveness of fewshot learning for machine translation,"['Xavier Garcia', 'Yamini Bansal', 'Colin Cherry', 'George Foster', 'Maxim Krikun', 'Fangxiaoyu Feng', 'Melvin Johnson', 'Orhan Firat']",http://arxiv.org/pdf/2302.01398v1.pdf,2023-02-02,," We demonstrate the potential of few-shot translation systems, trained withunpaired language data, for both high and low-resource language pairs. We showthat with only 5 examples of high-quality translation data shown at inference,a transformer decoder-only model trained solely with self-supervised learning,is able to match specialized supervised state-of-the-art models as well as moregeneral commercial translation systems. In particular, we outperform the bestperforming system on the WMT'21 English - Chinese news translation task by onlyusing five examples of English - Chinese parallel data at inference. Moreover,our approach in building these models does not necessitate joint multilingualtraining or back-translation, is conceptually simple and shows the potential toextend to the multilingual setting. Furthermore, the resulting models are twoorders of magnitude smaller than state-of-the-art language models. We thenanalyze the factors which impact the performance of few-shot translationsystems, and highlight that the quality of the few-shot demonstrations heavilydetermines the quality of the translations generated by our models. Finally, weshow that the few-shot paradigm also provides a way to control certainattributes of the translation -- we show that we are able to control forregional varieties and formality using only a five examples at inference,paving the way towards controllable machine translation systems.",,arXiv,['cs.cl'],, +886,crosscodebench benchmarking crosstask generalization of source code models,"['Changan Niu', 'Chuanyi Li', 'Vincent Ng', 'Bin Luo']",http://arxiv.org/pdf/2302.04030v2.pdf,2023-02-08,," Despite the recent advances showing that a model pre-trained on large-scalesource code data is able to gain appreciable generalization capability, itstill requires a sizeable amount of data on the target task for fine-tuning.And the effectiveness of the model generalization is largely affected by thesize and quality of the fine-tuning data, which is detrimental for target taskswith limited or unavailable resources. Therefore, cross-task generalization,with the goal of improving the generalization of the model to unseen tasks thathave not been seen before, is of strong research and application value. In this paper, we propose a large-scale benchmark that includes 216 existingcode-related tasks. Then, we annotate each task with the corresponding metainformation such as task description and instruction, which contains detailedinformation about the task and a solution guide. This also helps us to easilycreate a wide variety of ``training/evaluation'' task splits to evaluate thevarious cross-task generalization capabilities of the model. Then we performsome preliminary experiments to demonstrate that the cross-task generalizationof models can be largely improved by in-context learning methods such asfew-shot learning and learning from task instructions, which shows thepromising prospects of conducting cross-task learning research on ourbenchmark. 
We hope that the collection of the datasets and our benchmark willfacilitate future work that is not limited to cross-task generalization.",,arXiv,"['cs.se', 'cs.ai']",, +887,revilm retrievalaugmented visual language model for zero and fewshot image captioning,"['Zhuolin Yang', 'Wei Ping', 'Zihan Liu', 'Vijay Korthikanti', 'Weili Nie', 'De-An Huang', 'Linxi Fan', 'Zhiding Yu', 'Shiyi Lan', 'Bo Li', 'Ming-Yu Liu', 'Yuke Zhu', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Chaowei Xiao', 'Anima Anandkumar']",http://arxiv.org/pdf/2302.04858v2.pdf,2023-02-09,," Augmenting pretrained language models (LMs) with a vision encoder (e.g.,Flamingo) has obtained the state-of-the-art results in image-to-textgeneration. However, these models store all the knowledge within theirparameters, thus often requiring enormous model parameters to model theabundant visual concepts and very rich textual descriptions. Additionally, theyare inefficient in incorporating new data, requiring a computational-expensivefine-tuning process. In this work, we introduce a Retrieval-augmented VisualLanguage Model, Re-ViLM, built upon the Flamingo, that supports retrieving therelevant knowledge from the external database for zero and in-context few-shotimage-to-text generations. By storing certain knowledge explicitly in theexternal database, our approach reduces the number of model parameters and caneasily accommodate new data during evaluation by simply updating the database.We also construct an interleaved image and text data that facilitatesin-context few-shot learning capabilities. We demonstrate that Re-ViLMsignificantly boosts performance for image-to-text generation tasks, especiallyfor zero-shot and few-shot generation in out-of-domain settings with 4 timesless parameters compared with baseline methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.ir', 'cs.lg']",, +888,maskguided bert for few shot text classification,"['Wenxiong Liao', 'Zhengliang Liu', 'Haixing Dai', 'Zihao Wu', 'Yiyang Zhang', 'Xiaoke Huang', 'Yuzhong Chen', 'Xi Jiang', 'Wei Liu', 'Dajiang Zhu', 'Tianming Liu', 'Sheng Li', 'Xiang Li', 'Hongmin Cai']",http://arxiv.org/pdf/2302.10447v3.pdf,2023-02-21,," Transformer-based language models have achieved significant success invarious domains. However, the data-intensive nature of the transformerarchitecture requires much labeled data, which is challenging in low-resourcescenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is thedifficulty of training robust models on small amounts of samples, whichfrequently leads to overfitting. Here we present Mask-BERT, a simple andmodular framework to help BERT-based architectures tackle FSL. The proposedapproach fundamentally differs from existing FSL strategies such as prompttuning and meta-learning. The core idea is to selectively apply masks on textinputs and filter out irrelevant information, which guides the model to focuson discriminative tokens that influence prediction results. In addition, tomake the text representations from different categories more separable and thetext representations from the same category more compact, we introduce acontrastive learning loss function. 
Experimental results on public-domainbenchmark datasets demonstrate the effectiveness of Mask-BERT.",,arXiv,"['cs.cl', 'cs.ai']",, +889,metalearning with adaptive weighted loss for imbalanced coldstart recommendation,"['Minchang Kim', 'Yongjin Yang', 'Jung Hyun Ryu', 'Taesup Kim']",http://arxiv.org/pdf/2302.14640v2.pdf,2023-02-28,," Sequential recommenders have made great strides in capturing a user'spreferences. Nevertheless, the cold-start recommendation remains a fundamentalchallenge as they typically involve limited user-item interactions forpersonalization. Recently, gradient-based meta-learning approaches have emergedin the sequential recommendation field due to their fast adaptation andeasy-to-integrate abilities. The meta-learning algorithms formulate thecold-start recommendation as a few-shot learning problem, where each user isrepresented as a task to be adapted. While meta-learning algorithms generallyassume that task-wise samples are evenly distributed over classes or values,user-item interactions in real-world applications do not conform to such adistribution (e.g., watching favorite videos multiple times, leaving onlypositive ratings without any negative ones). Consequently, imbalanced userfeedback, which accounts for the majority of task training data, may dominatethe user adaptation process and prevent meta-learning algorithms from learningmeaningful meta-knowledge for personalized recommendations. To alleviate thislimitation, we propose a novel sequential recommendation framework based ongradient-based meta-learning that captures the imbalanced rating distributionof each user and computes adaptive loss for user-specific learning. Our work isthe first to tackle the impact of imbalanced ratings in cold-start sequentialrecommendation scenarios. Through extensive experiments conducted on real-worlddatasets, we demonstrate the effectiveness of our framework.",,arXiv,"['cs.ir', 'cs.lg']",, +890,knowledgeaugmented fewshot visual relation detection,"['Tianyu Yu', 'Yangning Li', 'Jiaoyan Chen', 'Yinghui Li', 'Hai-Tao Zheng', 'Xi Chen', 'Qingbin Liu', 'Wenqiang Liu', 'Dongxiao Huang', 'Bei Wu', 'Yexin Wang']",http://arxiv.org/pdf/2303.05342v1.pdf,2023-03-09,," Visual Relation Detection (VRD) aims to detect relationships between objectsfor image understanding. Most existing VRD methods rely on thousands oftraining samples of each relationship to achieve satisfactory performance. Somerecent papers tackle this problem by few-shot learning with elaboratelydesigned pipelines and pre-trained word vectors. However, the performance ofexisting few-shot VRD models is severely hampered by the poor generalizationcapability, as they struggle to handle the vast semantic diversity of visualrelationships. Nonetheless, humans have the ability to learn new relationshipswith just few examples based on their knowledge. Inspired by this, we devise aknowledge-augmented, few-shot VRD framework leveraging both textual knowledgeand visual relation knowledge to improve the generalization ability of few-shotVRD. The textual knowledge and visual relation knowledge are acquired from apre-trained language model and an automatically constructed visual relationknowledge graph, respectively. We extensively validate the effectiveness of ourframework. 
Experiments conducted on three benchmarks from the commonly usedVisual Genome dataset show that our performance surpasses existingstate-of-the-art models with a large improvement.",,arXiv,"['cs.cv', 'cs.ai']",, +891,hqp a humanannotated dataset for detecting online propaganda,"['Abdurahman Maarouf', 'Dominik Bär', 'Dominique Geissler', 'Stefan Feuerriegel']",http://arxiv.org/pdf/2304.14931v2.pdf,2023-04-28,," Online propaganda poses a severe threat to the integrity of societies.However, existing datasets for detecting online propaganda have a keylimitation: they were annotated using weak labels that can be noisy and evenincorrect. To address this limitation, our work makes the followingcontributions: (1) We present HQP: a novel dataset (N=30,000) for detectingonline propaganda with high-quality labels. To the best of our knowledge, HQPis the first dataset for detecting online propaganda that was created throughhuman annotation. (2) We show empirically that state-of-the-art language modelsfail in detecting online propaganda when trained with weak labels (AUC: 64.03).In contrast, state-of-the-art language models can accurately detect onlinepropaganda when trained with our high-quality labels (AUC: 92.25), which is animprovement of ~44%. (3) To address the cost of labeling, we extend our work tofew-shot learning. Specifically, we show that prompt-based learning using asmall sample of high-quality labels can still achieve a reasonable performance(AUC: 80.27). Finally, we discuss implications for the NLP community to balancethe cost and quality of labeling. Crucially, our work highlights the importanceof high-quality labels for sensitive NLP tasks such as propaganda detection.",,arXiv,['cs.cl'],, +892,parameterefficient crosslingual transfer of vision and language models via translationbased alignment,"['Zhen Zhang', 'Jialu Wang', 'Xin Eric Wang']",http://arxiv.org/pdf/2305.03510v2.pdf,2023-05-02,," Pre-trained vision and language models such as CLIP have witnessed remarkablesuccess in connecting images and texts with a primary focus on English texts.Despite recent efforts to extend CLIP to support other languages, disparitiesin performance among different languages have been observed due to unevenresource availability. Additionally, current cross-lingual transfer methods ofthose pre-trained models would consume excessive resources for a large numberof languages. Therefore, we propose a new parameter-efficient cross-lingualtransfer learning framework that utilizes a translation-based alignment methodto mitigate multilingual disparities and explores parameter-efficientfine-tuning methods for parameter-efficient cross-lingual transfer. Extensiveexperiments on XTD and Multi30K datasets, covering 11 languages underzero-shot, few-shot, and full-dataset learning scenarios, show that ourframework significantly reduces the multilingual disparities among languagesand improves cross-lingual transfer results, especially in low-resourcescenarios, while only keeping and fine-tuning an extremely small number ofparameters compared to the full model (e.g., Our framework only requires 0.16\%additional parameters of a full-model for each language in the few-shotlearning scenario). The codes are available at\url{https://github.com/eric-ai-lab/PECTVLM}. 
",,arXiv,"['cs.cl', 'cs.ai']",, +893,qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model,"['Jiageng Wu', 'Xian Wu', 'Zhaopeng Qiu', 'Minghui Li', 'Yefeng Zheng', 'Jie Yang']",http://arxiv.org/pdf/2305.10163v2.pdf,2023-05-17,," Generative Pre-Training (GPT) models like ChatGPT have demonstrated exceptional performance in various Natural Language Processing (NLP) tasks. Although ChatGPT has been integrated into the overall workflow to boost efficiency in many domains, the lack of flexibility in the finetuning process hinders its applications in areas that demand extensive domain expertise and semantic knowledge, such as healthcare. In this paper, we evaluate ChatGPT on the China National Medical Licensing Examination (CNMLE) and propose a novel approach to improve ChatGPT from two perspectives: integrating medical domain knowledge and enabling few-shot learning. By using a simple but effective retrieval method, medical background knowledge is extracted as semantic instructions to guide the inference of ChatGPT. Similarly, relevant medical questions are identified and fed as demonstrations to ChatGPT. Experimental results show that directly applying ChatGPT fails to qualify the CNMLE at a score of 51 (i.e., only 51\% of questions are answered correctly). While our knowledge-enhanced model achieves a high score of 70 on CNMLE-2022 which not only passes the qualification but also surpasses the average score of humans (61). This research demonstrates the potential of knowledge-enhanced ChatGPT to serve as versatile medical assistants, capable of analyzing real-world medical problems in a more accessible, user-friendly, and adaptable manner.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, +894,sentiment analysis in the era of large language models a reality check,"['Wenxuan Zhang', 'Yue Deng', 'Bing Liu', 'Sinno Jialin Pan', 'Lidong Bing']",http://arxiv.org/pdf/2305.15005v1.pdf,2023-05-24,," Sentiment analysis (SA) has been a long-standing research area in natural language processing. It can offer rich insights into human sentiments and opinions and has thus seen considerable interest from both academia and industry. With the advent of large language models (LLMs) such as ChatGPT, there is a great potential for their employment on SA problems. However, the extent to which existing LLMs can be leveraged for different sentiment analysis tasks remains unclear. This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts. We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets. Our study reveals that while LLMs demonstrate satisfactory performance in simpler tasks, they lag behind in more complex tasks requiring deeper understanding or structured sentiment information. However, LLMs significantly outperform SLMs in few-shot learning settings, suggesting their potential when annotation resources are limited. We also highlight the limitations of current evaluation practices in assessing LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a more comprehensive and realistic evaluation.
Data and code during ourinvestigations are available at\url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.",,arXiv,['cs.cl'],, +895,impact of large language models on generating software specifications,"['Danning Xie', 'Byungwoo Yoo', 'Nan Jiang', 'Mijung Kim', 'Lin Tan', 'Xiangyu Zhang', 'Judy S. Lee']",http://arxiv.org/pdf/2306.03324v2.pdf,2023-06-06,," Software specifications are essential for ensuring the reliability ofsoftware systems. Existing specification extraction approaches, however, sufferfrom limited generalizability and require manual efforts. The recent emergenceof Large Language Models (LLMs), which have been successfully applied tonumerous software engineering tasks, offers a promising avenue for automatingthis process. In this paper, we conduct the first empirical study to evaluatethe capabilities of LLMs for generating software specifications from softwarecomments or documentation. We evaluate LLMs' performance with Few Shot Learning(FSL), enabling LLMs to generalize from a small number of examples, as well asdifferent prompt construction strategies, and compare the performance of LLMswith traditional approaches. Additionally, we conduct a comparative diagnosisof the failure cases from both LLMs and traditional methods, identifying theirunique strengths and weaknesses. Lastly, we conduct extensive experiments on 15state of the art LLMs, evaluating their performance and cost effectiveness forgenerating software specifications. Our results show that with FSL, LLMs outperform traditional methods (by5.6%), and more sophisticated prompt construction strategies can furtherenlarge this performance gap (up to 5.1 to 10.0%). Yet, LLMs suffer from theirunique challenges, such as ineffective prompts and the lack of domainknowledge, which together account for 53 to 60% of LLM unique failures. Thestrong performance of open source models (e.g., StarCoder) makes closed sourcemodels (e.g., GPT 3 Davinci) less desirable due to size and cost. Our studyoffers valuable insights for future research to improve specificationgeneration.",,arXiv,['cs.se'],, +896,prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation,"['Balamurali Murugesan', 'Rukhshanda Hussain', 'Rajarshi Bhattacharya', 'Ismail Ben Ayed', 'Jose Dolz']",http://arxiv.org/pdf/2307.00097v3.pdf,2023-06-30,," Recently, CLIP-based approaches have exhibited remarkable performance ongeneralization and few-shot learning tasks, fueled by the power of contrastivelanguage-vision pre-training. In particular, prompt tuning has emerged as aneffective strategy to adapt the pre-trained language-vision models todownstream tasks by employing task-related textual tokens. Motivated by thisprogress, in this work we question whether other fundamental problems, such asweakly supervised semantic segmentation (WSSS), can benefit from prompt tuning.Our findings reveal two interesting observations that shed light on the impactof prompt tuning on WSSS. First, modifying only the class token of the textprompt results in a greater impact on the Class Activation Map (CAM), comparedto arguably more complex strategies that optimize the context. And second, theclass token associated with the image ground truth does not necessarilycorrespond to the category that yields the best CAM. Motivated by theseobservations, we introduce a novel approach based on a PrOmpt cLass lEarning(POLE) strategy. 
Through extensive experiments we demonstrate that our simple,yet efficient approach achieves SOTA performance in a well-known WSSSbenchmark. These results highlight not only the benefits of language-visionmodels in WSSS but also the potential of prompt learning for this problem. Thecode is available at https://github.com/rB080/WSS_POLE.",,arXiv,['cs.cv'],, +897,text descriptions are compressive and invariant representations for visual learning,"['Zhili Feng', 'Anna Bair', 'J. Zico Kolter']",http://arxiv.org/pdf/2307.04317v2.pdf,2023-07-10,," Modern image classification is based upon directly predicting classes vialarge discriminative networks, which do not directly contain information aboutthe intuitive visual features that may constitute a classification decision.Recently, work in vision-language models (VLM) such as CLIP has provided waysto specify natural language descriptions of image classes, but typicallyfocuses on providing single descriptions for each class. In this work, wedemonstrate that an alternative approach, in line with humans' understanding ofmultiple visual features per class, can also provide compelling performance inthe robust few-shot learning setting. In particular, we introduce a novelmethod, \textit{SLR-AVD (Sparse Logistic Regression using Augmented VisualDescriptors)}. This method first automatically generates multiple visualdescriptions of each class via a large language model (LLM), then uses a VLM totranslate these descriptions to a set of visual feature embeddings of eachimage, and finally uses sparse logistic regression to select a relevant subsetof these features to classify each image. Core to our approach is the factthat, information-theoretically, these descriptive features are more invariantto domain shift than traditional image embeddings, even though the VLM trainingprocess is not explicitly designed for invariant representation learning. Theseinvariant descriptive features also compose a better input compression scheme.When combined with finetuning, we show that SLR-AVD is able to outperformexisting state-of-the-art finetuning approaches on both in-distribution andout-of-distribution performance.",,arXiv,"['cs.cv', 'cs.lg']",, +898,dialogstudio towards richest and most diverse unified dataset collection for conversational ai,"['Jianguo Zhang', 'Kun Qian', 'Zhiwei Liu', 'Shelby Heinecke', 'Rui Meng', 'Ye Liu', 'Zhou Yu', 'Huan Wang', 'Silvio Savarese', 'Caiming Xiong']",http://arxiv.org/pdf/2307.10172v2.pdf,2023-07-19,," Despite advancements in conversational AI, language models encounterchallenges to handle diverse conversational tasks, and existing dialoguedataset collections often lack diversity and comprehensiveness. To tackle theseissues, we introduce DialogStudio: the largest and most diverse collection ofdialogue datasets, unified under a consistent format while preserving theiroriginal information. Our collection encompasses data from open-domaindialogues, task-oriented dialogues, natural language understanding,conversational recommendation, dialogue summarization, and knowledge-groundeddialogues, making it an incredibly rich and diverse resource for dialogueresearch and model training. To further enhance the utility of DialogStudio, weidentify the licenses for each dataset and design domain-aware prompts forselected dialogues to facilitate instruction-aware fine-tuning. 
Furthermore, wedevelop conversational AI models using the dataset collection, and ourexperiments in both zero-shot and few-shot learning scenarios demonstrate thesuperiority of DialogStudio. To improve transparency and support dataset andtask-based research, as well as language model pre-training, all datasets,licenses, codes, and models associated with DialogStudio are made publiclyaccessible at https://github.com/salesforce/DialogStudio",,arXiv,"['cs.cl', 'cs.ai']",, +899,mutual reinforcement effects in japanese sentence classification and named entity recognition tasks,"['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori']",http://arxiv.org/pdf/2307.10291v2.pdf,2023-07-18,," Information extraction(IE) is a crucial subfield within natural languageprocessing. However, for the traditionally segmented approach to sentenceclassification and Named Entity Recognition, the intricate interactions betweenthese individual subtasks remain largely uninvestigated. In this study, wepropose an integrative analysis, converging sentence classification with NamedEntity Recognition, with the objective to unveil and comprehend the mutualreinforcement effect within these two information extraction subtasks. Toachieve this, we introduce a Sentence Classification and Named EntityRecognition Multi-task (SCNM) approach that combines Sentence Classification(SC) and Named Entity Recognition (NER). We develop a Sentence-to-LabelGeneration (SLG) framework for SCNM and construct a Wikipedia datasetcontaining both SC and NER. Using a format converter, we unify input formatsand employ a generative model to generate SC-labels, NER-labels, and associatedtext segments. We propose a Constraint Mechanism (CM) to improve generatedformat accuracy. Our results show SC accuracy increased by 1.13 points and NERby 1.06 points in SCNM compared to standalone tasks, with CM raising formataccuracy from 63.61 to 100. The findings indicate mutual reinforcement effectsbetween SC and NER, and integration enhances both tasks' performance. Weadditionally implemented the SLG framework on single SC task. It yieldedsuperior accuracies compared to the baseline on two distinct Japanese SCdatasets. Notably, in the experiment of few-shot learning, SLG framework showsmuch better performance than fine-tune method. These empirical findingscontribute additional evidence to affirm the efficacy of the SLG framework.",,arXiv,['cs.cl'],, +900,chatgpt for arabic grammatical error correction,"['Sang Yun Kwon', 'Gagan Bhatia', 'El Moatez Billah Nagoud', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2308.04492v1.pdf,2023-08-08,," Recently, large language models (LLMs) fine-tuned to follow human instructionhave exhibited significant capabilities in various English NLP tasks. However,their performance in grammatical error correction (GEC) tasks, particularly innon-English languages, remains significantly unexplored. In this paper, wedelve into abilities of instruction fine-tuned LLMs in Arabic GEC, a task madecomplex due to Arabic's rich morphology. Our findings suggest that variousprompting methods, coupled with (in-context) few-shot learning, demonstrateconsiderable effectiveness, with GPT-4 achieving up to $65.49$F\textsubscript{1} score under expert prompting (approximately $5$ pointshigher than our established baseline). This highlights the potential of LLMs inlow-resource settings, offering a viable approach for generating usefulsynthetic data for model training. 
Despite these positive results, we find thatinstruction fine-tuned models, regardless of their size, significantlyunderperform compared to fully fine-tuned models of significantly smallersizes. This disparity highlights a substantial room for improvements for LLMs.Inspired by methods from low-resource machine translation, we also develop amethod exploiting synthetic data that significantly outperforms previous modelson two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.",,arXiv,['cs.ai'],, +901,llmebench a flexible framework for accelerating llms benchmarking,"['Fahim Dalvi', 'Maram Hasanain', 'Sabri Boughorbel', 'Basel Mousi', 'Samir Abdaljalil', 'Nizi Nazar', 'Ahmed Abdelali', 'Shammur Absar Chowdhury', 'Hamdy Mubarak', 'Ahmed Ali', 'Majd Hawasly', 'Nadir Durrani', 'Firoj Alam']",http://arxiv.org/pdf/2308.04945v1.pdf,2023-08-09,," The recent development and success of Large Language Models (LLMs)necessitate an evaluation of their performance across diverse NLP tasks indifferent languages. Although several frameworks have been developed and madepublicly available, their customization capabilities for specific tasks anddatasets are often complex for different users. In this study, we introduce theLLMeBench framework. Initially developed to evaluate Arabic NLP tasks usingOpenAI's GPT and BLOOM models; it can be seamlessly customized for any NLP taskand model, regardless of language. The framework also features zero- andfew-shot learning settings. A new custom dataset can be added in less than 10minutes, and users can use their own model API keys to evaluate the task athand. The developed framework has been already tested on 31 unique NLP tasksusing 53 publicly available datasets within 90 experimental setups, involvingapproximately 296K data points. We plan to open-source the framework for thecommunity (https://github.com/qcri/LLMeBench/). A video demonstrating theframework is available online (https://youtu.be/FkQn4UjYA0s).",,arXiv,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",, +902,codecot and beyond learning to program and test like a developer,"['Dong Huang', 'Qingwen Bu', 'Heming Cui']",http://arxiv.org/pdf/2308.08784v1.pdf,2023-08-17,," In natural language processing, transformer-based large language models(LLMs) like GPT-x models developed by OpenAI have revolutionized the landscape.Despite their impressive capabilities, these models often encounter challengeswhen handling tasks that differ from their training data, resulting incompromised performance. To address this, few-shot learning has emerged as avaluable technique, allowing LLMs to adapt with minimal task-specific data. Oneinnovative strategy, known as Chain-of-Thought Prompting (CoT), has beenintroduced to guide LLMs in revealing cognitive processes during multi-stepreasoning. In this paper, we propose Code Chain-of-Thought~(CodeCoT), whichconsists of two components: the Vanilla CodeCoT and the Self-exam CodeCoT. Thelatter incorporates self-examination, empowering the model to iterativelygenerate code, formulate test cases, and refine its outputs. Specifically, theprocess entails the generation of test examples by the model corresponding tothe code it is tasked to implement. 
If it fails on the test examples, then itregenerates the code based on the erroneous code and associated error types.Through comprehensive experiments, we observed that both techniquessignificantly enhance code generation accuracy across various LLM variants. Ourevaluation results reveal that CodeCoT improves the code generationeffectiveness, including an unprecedented pass@1 accuracy of 79.27\% using theSelf-exam CodeCoT approach on the gpt-3.5-turbo-0613 model in the HumanEvaldataset.",,arXiv,"['cs.se', 'cs.ai']",, +903,diagnosing infeasible optimization problems using large language models,"['Hao Chen', 'Gonzalo E. Constante-Flores', 'Can Li']",http://arxiv.org/pdf/2308.12923v1.pdf,2023-08-23,," Decision-making problems can be represented as mathematical optimizationmodels, finding wide applications in fields such as economics, engineering andmanufacturing, transportation, and health care. Optimization models aremathematical abstractions of the problem of making the best decision whilesatisfying a set of requirements or constraints. One of the primary barriers todeploying these models in practice is the challenge of helping practitionersunderstand and interpret such models, particularly when they are infeasible,meaning no decision satisfies all the constraints. Existing methods fordiagnosing infeasible optimization models often rely on expert systems,necessitating significant background knowledge in optimization. In this paper,we introduce OptiChat, a first-of-its-kind natural language-based systemequipped with a chatbot GUI for engaging in interactive conversations aboutinfeasible optimization models. OptiChat can provide natural languagedescriptions of the optimization model itself, identify potential sources ofinfeasibility, and offer suggestions to make the model feasible. Theimplementation of OptiChat is built on GPT-4, which interfaces with anoptimization solver to identify the minimal subset of constraints that renderthe entire optimization problem infeasible, also known as the IrreducibleInfeasible Subset (IIS). We utilize few-shot learning, expert chain-of-thought,key-retrieve, and sentiment prompts to enhance OptiChat's reliability. Ourexperiments demonstrate that OptiChat assists both expert and non-expert usersin improving their understanding of the optimization models, enabling them toquickly identify the sources of infeasibility.",,arXiv,"['cs.hc', 'cs.cl', 'cs.lg', 'math.oc']",, +904,"longbench a bilingual, multitask benchmark for long context understanding","['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Hongchang Lyu', 'Jiankai Tang', 'Zhidian Huang', 'Zhengxiao Du', 'Xiao Liu', 'Aohan Zeng', 'Lei Hou', 'Yuxiao Dong', 'Jie Tang', 'Juanzi Li']",http://arxiv.org/pdf/2308.14508v1.pdf,2023-08-28,," Although large language models (LLMs) demonstrate impressive performance formany language tasks, most of them can only handle texts a few thousand tokenslong, limiting their applications on longer sequence inputs, such as books,reports, and codebases. Recent works have proposed methods to improve LLMs'long context capabilities by extending context windows and more sophisticatedmemory mechanisms. However, comprehensive benchmarks tailored for evaluatinglong context understanding are lacking. In this paper, we introduce LongBench,the first bilingual, multi-task benchmark for long context understanding,enabling a more rigorous evaluation of long context understanding. 
LongBenchcomprises 21 datasets across 6 task categories in both English and Chinese,with an average length of 6,711 words (English) and 13,386 characters(Chinese). These tasks cover key long-text application areas includingsingle-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks,and code completion. All datasets in LongBench are standardized into a unifiedformat, allowing for effortless automatic evaluation of LLMs. Uponcomprehensive evaluation of 8 LLMs on LongBench, we find that: (1) Commercialmodel (GPT-3.5-Turbo-16k) outperforms other open-sourced models, but stillstruggles on longer contexts. (2) Scaled position embedding and fine-tuning onlonger sequences lead to substantial improvement on long context understanding.(3) Context compression technique such as retrieval brings improvement formodel with weak ability on long contexts, but the performance still lags behindmodels that have strong long context understanding capability. The code anddatasets are available at https://github.com/THUDM/LongBench.",,arXiv,['cs.cl'],, +905,zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model,"['Neel Bhate', 'Ansh Mittal', 'Zhe He', 'Xiao Luo']",http://arxiv.org/pdf/2309.05475v2.pdf,2023-09-11,," Demographics, Social determinants of health, and family history documented inthe unstructured text within the electronic health records are increasinglybeing studied to understand how this information can be utilized with thestructured data to improve healthcare outcomes. After the GPT models werereleased, many studies have applied GPT models to extract this information fromthe narrative clinical notes. Different from the existing work, our researchfocuses on investigating the zero-shot learning on extracting this informationtogether by providing minimum information to the GPT model. We utilizede-identified real-world clinical notes annotated for demographics, varioussocial determinants, and family history information. Given that the GPT modelmight provide text different from the text in the original data, we explore twosets of evaluation metrics, including the traditional NER evaluation metricsand semantic similarity evaluation metrics, to completely understand theperformance. Our results show that the GPT-3.5 method achieved an average of0.975 F1 on demographics extraction, 0.615 F1 on social determinantsextraction, and 0.722 F1 on family history extraction. We believe these resultscan be further improved through model fine-tuning or few-shots learning.Through the case studies, we also identified the limitations of the GPT models,which need to be addressed in future research.",,arXiv,['cs.cl'],, +906,using large language model to solve and explain physics word problems approaching human level,"['Jingzhe Ding', 'Yan Cen', 'Xinyuan Wei']",http://arxiv.org/pdf/2309.08182v2.pdf,2023-09-15,," Our work demonstrates that large language model (LLM) pre-trained on textscan not only solve pure math word problems, but also physics word problems,whose solution requires calculation and inference based on prior physicalknowledge. 
We collect and annotate the first physics word problemdataset-PhysQA, which contains over 1000 junior high school physics wordproblems (covering Kinematics, Mass&Density, Mechanics, Heat, Electricity).Then we use OpenAI' s GPT3.5 to generate the answer of these problems and foundthat GPT3.5 could automatically solve 49.3% of the problems through zero-shotlearning and 73.2% through few-shot learning. This result demonstrates that byusing similar problems and their answers as prompt, LLM could solve elementaryphysics word problems approaching human level performance. In addition tosolving problems, GPT3.5 can also summarize the knowledge or topics covered bythe problems, provide relevant explanations, and generate new physics wordproblems based on the input. Our work is the first research to focus on theautomatic solving, explanation, and generation of physics word problems acrossvarious types and scenarios, and we achieve an acceptable and state-of-the-artaccuracy. This underscores the potential of LLMs for further applications insecondary education.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, +907,nnsam plugandplay segment anything model improves nnunet performance,"['Yunxiang Li', 'Bowen Jing', 'Zihan Li', 'Jing Wang', 'You Zhang']",http://arxiv.org/pdf/2309.16967v2.pdf,2023-09-29,," The recent developments of foundation models in computer vision, especiallythe Segment Anything Model (SAM), allow scalable and domain-agnostic imagesegmentation to serve as a general-purpose segmentation tool. In parallel, thefield of medical image segmentation has benefited significantly fromspecialized neural networks like the nnUNet, which is trained ondomain-specific datasets and can automatically configure the network to tailorto specific segmentation challenges. To combine the advantages of foundationmodels and domain-specific models, we present nnSAM, which synergisticallyintegrates the SAM model with the nnUNet model to achieve more accurate androbust medical image segmentation. The nnSAM model leverages the powerful androbust feature extraction capabilities of SAM, while harnessing the automaticconfiguration capabilities of nnUNet to promote dataset-tailored learning. Ourcomprehensive evaluation of nnSAM model on different sizes of training samplesshows that it allows few-shot learning, which is highly relevant for medicalimage segmentation where high-quality, annotated data can be scarce and costlyto obtain. By melding the strengths of both its predecessors, nnSAM positionsitself as a potential new benchmark in medical image segmentation, offering atool that combines broad applicability with specialized efficiency. The code isavailable at https://github.com/Kent0n-Li/Medical-Image-Segmentation.",,arXiv,"['cs.cv', 'eess.iv']",, +908,radit retrievalaugmented dual instruction tuning,"['Xi Victoria Lin', 'Xilun Chen', 'Mingda Chen', 'Weijia Shi', 'Maria Lomeli', 'Rich James', 'Pedro Rodriguez', 'Jacob Kahn', 'Gergely Szilvasy', 'Mike Lewis', 'Luke Zettlemoyer', 'Scott Yih']",http://arxiv.org/pdf/2310.01352v3.pdf,2023-10-02,," Retrieval-augmented language models (RALMs) improve performance by accessinglong-tail and up-to-date knowledge from external data stores, but arechallenging to build. Existing approaches require either expensiveretrieval-specific modifications to LM pre-training or use post-hoc integrationof the data store that leads to suboptimal performance. 
We introduceRetrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuningmethodology that provides a third option by retrofitting any LLM with retrievalcapabilities. Our approach operates in two distinct fine-tuning steps: (1) oneupdates a pre-trained LM to better use retrieved information, while (2) theother updates the retriever to return more relevant results, as preferred bythe LM. By fine-tuning over tasks that require both knowledge utilization andcontextual awareness, we demonstrate that each stage yields significantperformance improvements, and using both leads to additional gains. Our bestmodel, RA-DIT 65B, achieves state-of-the-art performance across a range ofknowledge-intensive zero- and few-shot learning benchmarks, significantlyoutperforming existing in-context RALM approaches by up to +8.9% in 0-shotsetting and +1.4% in 5-shot setting on average.",,arXiv,"['cs.cl', 'cs.ai']",, +909,longllmlingua accelerating and enhancing llms in long context scenarios via prompt compression,"['Huiqiang Jiang', 'Qianhui Wu', 'Xufang Luo', 'Dongsheng Li', 'Chin-Yew Lin', 'Yuqing Yang', 'Lili Qiu']",http://arxiv.org/pdf/2310.06839v1.pdf,2023-10-10,," In long context scenarios, large language models (LLMs) face three mainchallenges: higher computational/financial cost, longer latency, and inferiorperformance. Some studies reveal that the performance of LLMs depends on boththe density and the position of the key information (question relevant) in theinput prompt. Inspired by these findings, we propose LongLLMLingua for promptcompression towards improving LLMs' perception of the key information tosimultaneously address the three challenges. We conduct evaluation on a widerange of long context scenarios including single-/multi-document QA, few-shotlearning, summarization, synthetic tasks, and code completion. The experimentalresults show that LongLLMLingua compressed prompt can derive higher performancewith much less cost. The latency of the end-to-end system is also reduced. Forexample, on NaturalQuestions benchmark, LongLLMLingua gains a performance boostof up to 17.1% over the original prompt with ~4x fewer tokens as input toGPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000samples from the LongBench and ZeroScrolls benchmark, respectively.Additionally, when compressing prompts of ~10k tokens at a compression rate of2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Ourcode is available at https://aka.ms/LLMLingua.",,arXiv,"['cs.cl', 'cs.lg']",, +910,empower textattributed graphs learning with large language models (llms),"['Jianxiang Yu', 'Yuxiang Ren', 'Chenghua Gong', 'Jiaqi Tan', 'Xiang Li', 'Xuecang Zhang']",http://arxiv.org/pdf/2310.09872v1.pdf,2023-10-15,," Text-attributed graphs have recently garnered significant attention due totheir wide range of applications in web domains. Existing methodologies employword embedding models for acquiring text representations as node features,which are subsequently fed into Graph Neural Networks (GNNs) for training.Recently, the advent of Large Language Models (LLMs) has introduced theirpowerful capabilities in information retrieval and text generation, which cangreatly enhance the text attributes of graph data. Furthermore, the acquisitionand labeling of extensive datasets are both costly and time-consumingendeavors. Consequently, few-shot learning has emerged as a crucial problem inthe context of graph learning tasks. 
In order to tackle this challenge, wepropose a lightweight paradigm called ENG, which adopts a plug-and-playapproach to empower text-attributed graphs through node generation using LLMs.Specifically, we utilize LLMs to extract semantic information from the labelsand generate samples that belong to these categories as exemplars.Subsequently, we employ an edge predictor to capture the structural informationinherent in the raw dataset and integrate the newly generated samples into theoriginal graph. This approach harnesses LLMs for enhancing class-levelinformation and seamlessly introduces labeled nodes and edges without modifyingthe raw dataset, thereby facilitating the node classification task in few-shotscenarios. Extensive experiments demonstrate the outstanding performance of ourproposed paradigm, particularly in low-shot scenarios. For instance, in the1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement overthe baseline model.",,arXiv,['cs.lg'],, +911,incontext learning with iterative demonstration selection,"['Chengwei Qin', 'Aston Zhang', 'Anirudh Dagar', 'Wenming Ye']",http://arxiv.org/pdf/2310.09881v2.pdf,2023-10-15,," Spurred by advancements in scale, large language models (LLMs) havedemonstrated strong few-shot learning ability via in-context learning (ICL).However, the performance of ICL has been shown to be highly sensitive to theselection of few-shot demonstrations. Selecting the most suitable examples ascontext remains an ongoing challenge and an open problem. Existing literaturehas highlighted the importance of selecting examples that are diverse orsemantically similar to the test sample while ignoring the fact that theoptimal selection dimension, i.e., diversity or similarity, is task-specific.Leveraging the merits of both dimensions, we propose Iterative DemonstrationSelection (IDS). Using zero-shot chain-of-thought reasoning (Zero-shot-CoT),IDS iteratively selects examples that are diverse but still strongly correlatedwith the test sample as ICL demonstrations. Specifically, IDS appliesZero-shot-CoT to the test sample before demonstration selection. The outputreasoning path is then used to choose demonstrations that are prepended to thetest sample for inference. The generated answer is accompanied by itscorresponding reasoning path for extracting a new set of demonstrations in thenext iteration. After several iterations, IDS adopts majority voting to obtainthe final result. Through extensive experiments on tasks including commonsensereasoning, question answering, topic classification, and sentiment analysis, wedemonstrate that IDS can consistently outperform existing ICL demonstrationselection methods.",,arXiv,"['cs.cl', 'cs.ai']",, +912,the skipped beat a study of sociopragmatic understanding in llms for 64 languages,"['Chiyu Zhang', 'Khai Duy Doan', 'Qisheng Liao', 'Muhammad Abdul-Mageed']",http://arxiv.org/pdf/2310.14557v1.pdf,2023-10-23,," Instruction tuned large language models (LLMs), such as ChatGPT, demonstrateremarkable performance in a wide range of tasks. Despite numerous recentstudies that examine the performance of instruction-tuned LLMs on various NLPbenchmarks, there remains a lack of comprehensive investigation into theirability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaningembedded within social and interactive contexts. This deficiency arises partlyfrom SM not being adequately represented in any of the existing benchmarks. 
Toaddress this gap, we present SPARROW, an extensive multilingual benchmarkspecifically designed for SM understanding. SPARROW comprises 169 datasetscovering 13 task types across six primary categories (e.g., anti-sociallanguage detection, emotion recognition). SPARROW datasets encompass 64different languages originating from 12 language families representing 16writing scripts. We evaluate the performance of various multilingual pretrainedlanguage models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT)on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Ourcomprehensive analysis reveals that existing open-source instruction tuned LLMsstill struggle to understand SM across various languages, performing close to arandom baseline in some cases. We also find that although ChatGPT outperformsmany LLMs, it still falls behind task-specific finetuned models with a gap of12.19 SPARROW score. Our benchmark is available at:https://github.com/UBC-NLP/SPARROW",,arXiv,['cs.cl'],, +913,program synthesis with large language models,"['Jacob Austin', 'Augustus Odena', 'Maxwell Nye', 'Maarten Bosma', 'Henryk Michalewski', 'David Dohan', 'Ellen Jiang', 'Carrie Cai', 'Michael Terry', 'Quoc Le', 'Charles Sutton']",http://arxiv.org/pdf/2108.07732v1.pdf,2021-08-16,," This paper explores the limits of the current generation of large languagemodels for program synthesis in general purpose programming languages. Weevaluate a collection of such models (with between 244M and 137B parameters) ontwo new benchmarks, MBPP and MathQA-Python, in both the few-shot andfine-tuning regimes. Our benchmarks are designed to measure the ability ofthese models to synthesize short Python programs from natural languagedescriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974programming tasks, designed to be solvable by entry-level programmers. TheMathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914problems that evaluate the ability of the models to synthesize code from morecomplex text. On both datasets, we find that synthesis performance scaleslog-linearly with model size. Our largest models, even without finetuning on acode dataset, can synthesize solutions to 59.6 percent of the problems fromMBPP using few-shot learning with a well-designed prompt. Fine-tuning on aheld-out portion of the dataset improves performance by about 10 percentagepoints across most model sizes. On the MathQA-Python dataset, the largestfine-tuned model achieves 83.8 percent accuracy. Going further, we study themodel's ability to engage in dialog about code, incorporating human feedback toimprove its solutions. We find that natural language feedback from a humanhalves the error rate compared to the model's initial prediction. Additionally,we conduct an error analysis to shed light on where these models fall short andwhat types of programs are most difficult to generate. Finally, we explore thesemantic grounding of these models by fine-tuning them to predict the resultsof program execution. 
We find that even our best models are generally unable topredict the output of a program given a specific input.",,arXiv,"['cs.pl', 'cs.lg']",, +914,"a minimalist dataset for systematic generalization of perception, syntax, and semantics","['Qing Li', 'Siyuan Huang', 'Yining Hong', 'Yixin Zhu', 'Ying Nian Wu', 'Song-Chun Zhu']",http://arxiv.org/pdf/2103.01403v3.pdf,2021-03-02,," Inspired by humans' exceptional ability to master arithmetic and generalizeto new problems, we present a new dataset, Handwritten arithmetic with INTegers(HINT), to examine machines' capability of learning generalizable concepts atthree levels: perception, syntax, and semantics. In HINT, machines are taskedwith learning how concepts are perceived from raw signals such as images (i.e.,perception), how multiple concepts are structurally combined to form a validexpression (i.e., syntax), and how concepts are realized to afford variousreasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusingon systematic generalization, we carefully design a five-fold test set toevaluate both the interpolation and the extrapolation of learned conceptsw.r.t. the three levels. Further, we design a few-shot learning split todetermine whether or not models can rapidly learn new concepts and generalizethem to more complex scenarios. To comprehend existing models' limitations, weundertake extensive experiments with various sequence-to-sequence models,including RNNs, Transformers, and GPT-3 (with the chain of thought prompting).The results indicate that current models struggle to extrapolate to long-rangesyntactic dependency and semantics. Models exhibit a considerable gap towardhuman-level generalization when evaluated with new concepts in a few-shotsetting. Moreover, we discover that it is infeasible to solve HINT by merelyscaling up the dataset and the model size; this strategy contributes little tothe extrapolation of syntax and semantics. Finally, in zero-shot GPT-3experiments, the chain of thought prompting exhibits impressive results andsignificantly boosts the test accuracy. We believe the HINT dataset and theexperimental findings are of great interest to the learning community onsystematic generalization.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, +915,large language models are zeroshot reasoners,"['Takeshi Kojima', 'Shixiang Shane Gu', 'Machel Reid', 'Yutaka Matsuo', 'Yusuke Iwasawa']",http://arxiv.org/pdf/2205.11916v4.pdf,2022-05-24,," Pretrained large language models (LLMs) are widely used in many sub-fields ofnatural language processing (NLP) and generally known as excellent few-shotlearners with task-specific exemplars. Notably, chain of thought (CoT)prompting, a recent technique for eliciting complex multi-step reasoningthrough step-by-step answer examples, achieved the state-of-the-artperformances in arithmetics and symbolic reasoning, difficult system-2 tasksthat do not follow the standard scaling laws for LLMs. While these successesare often attributed to LLMs' ability for few-shot learning, we show that LLMsare decent zero-shot reasoners by simply adding ""Let's think step by step""before each answer. 
Experimental results demonstrate that our Zero-shot-CoT,using the same single prompt template, significantly outperforms zero-shot LLMperformances on diverse benchmark reasoning tasks including arithmetics(MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, CoinFlip), and other logical reasoning tasks (Date Understanding, Tracking ShuffledObjects), without any hand-crafted few-shot examples, e.g. increasing theaccuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% withlarge InstructGPT model (text-davinci-002), as well as similar magnitudes ofimprovements with another off-the-shelf large model, 540B parameter PaLM. Theversatility of this single prompt across very diverse reasoning tasks hints atuntapped and understudied fundamental zero-shot capabilities of LLMs,suggesting high-level, multi-task broad cognitive capabilities may be extractedby simple prompting. We hope our work not only serves as the minimal strongestzero-shot baseline for the challenging reasoning benchmarks, but alsohighlights the importance of carefully exploring and analyzing the enormouszero-shot knowledge hidden inside LLMs before crafting finetuning datasets orfew-shot exemplars.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +916,an empirical evaluation of using large language models for automated unit test generation,"['Max Schäfer', 'Sarah Nadi', 'Aryaz Eghbali', 'Frank Tip']",http://arxiv.org/pdf/2302.06527v4.pdf,2023-02-13,," Unit tests play a key role in ensuring the correctness of software. However,manually creating unit tests is a laborious task, motivating the need forautomation. Large Language Models (LLMs) have recently been applied to thisproblem, utilizing additional training or few-shot learning on examples ofexisting tests. This paper presents a large-scale empirical evaluation on theeffectiveness of LLMs for automated unit test generation without additionaltraining or manual effort, providing the LLM with the signature andimplementation of the function under test, along with usage examples extractedfrom documentation. We also attempt to repair failed generated tests byre-prompting the model with the failing test and error message. We implementour approach in TestPilot, a test generation tool for JavaScript thatautomatically generates unit tests for all API functions in an npm package. Weevaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with atotal of 1,684 API functions. The generated tests achieve a median statementcoverage of 70.2% and branch coverage of 52.8%, significantly improving onNessie, a recent feedback-directed JavaScript test generation technique, whichachieves only 51.3% statement coverage and 25.6% branch coverage. We also findthat 92.8% of TestPilot's generated tests have no more than 50% similarity withexisting tests (as measured by normalized edit distance), with none of thembeing exact copies. Finally, we run TestPilot with two additional LLMs,OpenAI's older code-cushman-002 LLM and the open LLM StarCoder. 
Overall, we observed similar results with the former (68.2% median statement coverage), and somewhat worse results with the latter (54.0% median statement coverage), suggesting that the effectiveness of the approach is influenced by the size and training set of the LLM, but does not fundamentally depend on the specific model.",,arXiv,"['cs.se', 'cs.ai']",, +917,on the opportunities and challenges of foundation models for geospatial artificial intelligence,"['Gengchen Mai', 'Weiming Huang', 'Jin Sun', 'Suhang Song', 'Deepak Mishra', 'Ninghao Liu', 'Song Gao', 'Tianming Liu', 'Gao Cong', 'Yingjie Hu', 'Chris Cundy', 'Ziyuan Li', 'Rui Zhu', 'Ni Lao']",http://arxiv.org/pdf/2304.06798v1.pdf,2023-04-13,," Large pre-trained models, also known as foundation models (FMs), are trained in a task-agnostic manner on large-scale data and can be adapted to a wide range of downstream tasks by fine-tuning, few-shot, or even zero-shot learning. Despite their successes in language and vision tasks, we have yet seen an attempt to develop foundation models for geospatial artificial intelligence (GeoAI). In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI. We first investigate the potential of many existing FMs by testing their performances on seven tasks across multiple geospatial subdomains including Geospatial Semantics, Health Geography, Urban Geography, and Remote Sensing. Our results indicate that on several geospatial tasks that only involve text modality such as toponym recognition, location description recognition, and US state-level/county-level dementia time series forecasting, these task-agnostic LLMs can outperform task-specific fully-supervised models in a zero-shot or few-shot learning setting. However, on other geospatial tasks, especially tasks that involve multiple data modalities (e.g., POI-based urban function classification, street view image-based urban noise intensity classification, and remote sensing image scene classification), existing foundation models still underperform task-specific models. Based on these observations, we propose that one of the major challenges of developing a FM for GeoAI is to address the multimodality nature of geospatial tasks. After discussing the distinct challenges of each geospatial data modality, we suggest the possibility of a multimodal foundation model which can reason over various types of geospatial data through geospatial alignments. We conclude this paper by discussing the unique risks and challenges to develop such a model for GeoAI.",,arXiv,"['cs.ai', 'cs.cl', 'cs.cv', 'i.2.0; i.2.4; i.2.7; i.2.10; i.5.1']",, +918,effective test generation using pretrained large language models and mutation testing,"['Arghavan Moradi Dakhel', 'Amin Nikanjam', 'Vahid Majdinasab', 'Foutse Khomh', 'Michel C. Desmarais']",http://arxiv.org/pdf/2308.16557v1.pdf,2023-08-31,," One of the critical phases in software development is software testing. Testing helps with identifying potential bugs and reducing maintenance costs. The goal of automated test generation tools is to ease the development of tests by suggesting efficient bug-revealing tests. Recently, researchers have leveraged Large Language Models (LLMs) of code to generate unit tests. While the code coverage of generated tests was usually assessed, the literature has acknowledged that the coverage is weakly correlated with the efficiency of tests in bug detection.
To improve over this limitation, in this paper, we introduce MuTAP for improving the effectiveness of test cases generated by LLMs in terms of revealing bugs by leveraging mutation testing. Our goal is achieved by augmenting prompts with surviving mutants, as those mutants highlight the limitations of test cases in detecting bugs. MuTAP is capable of generating effective test cases in the absence of natural language descriptions of the Program Under Test (PUTs). We employ different LLMs within MuTAP and evaluate their performance on different benchmarks. Our results show that our proposed method is able to detect up to 28% more faulty human-written code snippets. Among these, 17% remained undetected by both the current state-of-the-art fully automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning approaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57% on synthetic buggy code, outperforming all other approaches in our evaluation. Our findings suggest that although LLMs can serve as a useful tool to generate test cases, they require specific post-processing steps to enhance the effectiveness of the generated test cases which may suffer from syntactic or functional errors and may be ineffective in detecting certain types of bugs and testing corner cases PUTs.",,arXiv,['cs.se'],, +919,llm4sgg large language model for weakly supervised scene graph generation,"['Kibum Kim', 'Kanghoon Yoon', 'Jaehyeong Jeon', 'Yeonjun In', 'Jinyoung Moon', 'Donghyun Kim', 'Chanyoung Park']",http://arxiv.org/pdf/2310.10404v5.pdf,2023-10-16,," Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations. In this regard, studies on WSSGG have utilized image captions to obtain unlocalized triplets while primarily focusing on grounding the unlocalized triplets over image regions. However, they have overlooked the two issues involved in the triplet formation process from the captions: 1) Semantic over-simplification issue arises when extracting triplets from captions, where fine-grained predicates in captions are undesirably converted into coarse-grained predicates, resulting in a long-tailed predicate distribution, and 2) Low-density scene graph issue arises when aligning the triplets in the caption with entity/predicate classes of interest, where many triplets are discarded and not used in training, leading to insufficient supervision. To tackle the two issues, we propose a new approach, i.e., Large Language Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two issues by leveraging the LLM's in-depth understanding of language and reasoning ability during the extraction of triplets from captions and alignment of entity/predicate classes with target data. To further engage the LLM in these processes, we adopt the idea of Chain-of-Thought and the in-context few-shot learning strategy. To validate the effectiveness of LLM4SGG, we conduct extensive experiments on Visual Genome and GQA datasets, showing significant improvements in both Recall@K and mean Recall@K compared to the state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is data-efficient, enabling effective model training with a small amount of training images.",,arXiv,['cs.cv'],,
+920,masakhanews news topic classification for african languages,"['David Ifeoluwa Adelani', 'Marek Masiak', 'Israel Abebe Azime', 'Jesujoba Alabi', 'Atnafu Lambebo Tonja', 'Christine Mwase', 'Odunayo Ogundepo', 'Bonaventure F. P. Dossou', 'Akintunde Oladipo', 'Doreen Nixdorf', 'Chris Chinenye Emezue', 'sana al-azzawi', 'Blessing Sibanda', 'Davis David', 'Lolwethu Ndolela', 'Jonathan Mukiibi', 'Tunde Ajayi', 'Tatiana Moteu', 'Brian Odhiambo', 'Abraham Owodunni', 'Nnaemeka Obiefuna', 'Muhidin Mohamed', 'Shamsuddeen Hassan Muhammad', 'Teshome Mulugeta Ababu', 'Saheed Abdullahi Salahudeen', 'Mesay Gemeda Yigezu', 'Tajuddeen Gwadabe', 'Idris Abdulmumin', 'Mahlet Taye', 'Oluwabusayo Awoyomi', 'Iyanuoluwa Shode', 'Tolulope Adelani', 'Habiba Abdulganiyu', 'Abdul-Hakeem Omotayo', 'Adetola Adeeko', 'Abeeb Afolabi', 'Anuoluwapo Aremu', 'Olanrewaju Samuel', 'Clemencia Siro', 'Wangari Kimotho', 'Onyekachi Ogbu', 'Chinedu Mbonu', 'Chiamaka Chukwuneke', 'Samuel Fanijo', 'Jessica Ojo', 'Oyinkansola Awosan', 'Tadesse Kebede', 'Toadoum Sari Sakayo', 'Pamela Nyatsine', 'Freedmore Sidume', 'Oreen Yousuf', 'Mardiyyah Oduwole', 'Tshinu Tshinu', 'Ussen Kimanuka', 'Thina Diko', 'Siyanda Nxakama', 'Sinodos Nigusse', 'Abdulmejid Johar', 'Shafie Mohamed', 'Fuad Mire Hassan', 'Moges Ahmed Mehamed', 'Evrard Ngabire', 'Jules Jules', 'Ivan Ssenkungu', 'Pontus Stenetorp']",http://arxiv.org/pdf/2304.09972v2.pdf,2023-04-19,," African languages are severely under-represented in NLP research due to lack of datasets covering several NLP tasks. While there are individual language specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g. named entity recognition and machine translation) have standardized benchmark datasets covering several geographical and typologically-diverse African languages. In this paper, we develop MasakhaNEWS -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). Our evaluation in zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In few-shot setting, we show that with as little as 10 examples per label, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) leveraging the PET approach.",,arXiv,['cs.cl'],, +921,nspbert a promptbased fewshot learner through an original pretraining tasknext sentence prediction,"['Yi Sun', 'Yu Zheng', 'Chao Hao', 'Hangping Qiu']",http://arxiv.org/pdf/2109.03564v2.pdf,2021-09-08,," Using prompts to utilize language models to perform various downstream tasks, also known as prompt-based learning or prompt-learning, has lately gained significant success in comparison to the pre-train and fine-tune paradigm. Nonetheless, virtually all prompt-based methods are token-level, meaning they all utilize GPT's left-to-right language model or BERT's masked language model to perform cloze-style tasks. In this paper, we attempt to accomplish several NLP tasks in the zero-shot scenario using a BERT original pre-training task abandoned by RoBERTa and other models--Next Sentence Prediction (NSP).
Unlike token-level techniques, our sentence-level prompt-based method NSP-BERT does not need to fix the length of the prompt or the position to be predicted, allowing it to handle tasks such as entity linking with ease. Based on the characteristics of NSP-BERT, we offer several quick building templates for various downstream tasks. We suggest a two-stage prompt method for word sense disambiguation tasks in particular. Our strategies for mapping the labels significantly enhance the model's performance on sentence pair tasks. On the FewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most of these tasks and comes close to the few-shot methods.",,arXiv,"['cs.cl', 'cs.ai']",, +922,introducing language guidance in promptbased continual learning,"['Muhammad Gul Zain Ali Khan', 'Muhammad Ferjad Naeem', 'Luc Van Gool', 'Didier Stricker', 'Federico Tombari', 'Muhammad Zeshan Afzal']",http://arxiv.org/pdf/2308.15827v1.pdf,2023-08-30,," Continual Learning aims to learn a single model on a sequence of tasks without having access to data from previous tasks. The biggest challenge in the domain still remains catastrophic forgetting: a loss in performance on seen classes of earlier tasks. Some existing methods rely on an expensive replay buffer to store a chunk of data from previous tasks. This, while promising, becomes expensive when the number of tasks becomes large or data can not be stored for privacy reasons. As an alternative, prompt-based methods have been proposed that store the task information in a learnable prompt pool. This prompt pool instructs a frozen image encoder on how to solve each task. While the model faces a disjoint set of classes in each task in this setting, we argue that these classes can be encoded to the same embedding space of a pre-trained language encoder. In this work, we propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. LGCL is model agnostic and introduces language guidance at the task level in the prompt pool and at the class level on the output feature of the vision encoder. We show with extensive experimentation that LGCL consistently improves the performance of prompt-based continual learning methods to set a new state-of-the art. LGCL achieves these performance improvements without needing any additional learnable parameters.",,arXiv,['cs.cv'],, +923,psg promptbased sequence generation for acronym extraction,"['Bin Li', 'Fei Xia', 'Yixuan Weng', 'Xiusheng Huang', 'Bin Sun', 'Shutao Li']",http://arxiv.org/pdf/2111.14301v2.pdf,2021-11-29,," Acronym extraction aims to find acronyms (i.e., short-forms) and their meanings (i.e., long-forms) from the documents, which is important for scientific document understanding (SDU@AAAI-22) tasks. Previous works are devoted to modeling this task as a paragraph-level sequence labeling problem. However, it lacks the effective use of the external knowledge, especially when the datasets are in a low-resource setting. Recently, the prompt-based method with the vast pre-trained language model can significantly enhance the performance of the low-resourced downstream tasks. In this paper, we propose a Prompt-based Sequence Generation (PSG) method for the acronym extraction task. Specifically, we design a template for prompting the extracted acronym texts with auto-regression. A position extraction algorithm is designed for extracting the position of the generated answers.
The results on the acronym extraction of Vietnamese and Persian in a low-resource setting show that the proposed method outperforms all other competitive state-of-the-art (SOTA) methods.",,arXiv,"['cs.cl', 'cs.ai']",, +924,chemical identification and indexing in pubmed articles via bert and texttotext approaches,"['Virginia Adams', 'Hoo-Chang Shin', 'Carol Anderson', 'Bo Liu', 'Anas Abidin']",http://arxiv.org/pdf/2111.15622v1.pdf,2021-11-30,," The Biocreative VII Track-2 challenge consists of named entity recognition, entity-linking (or entity-normalization), and topic indexing tasks -- with entities and topics limited to chemicals for this challenge. Named entity recognition is a well-established problem and we achieve our best performance with BERT-based BioMegatron models. We extend our BERT-based approach to the entity linking task. After the second stage of pretraining BioBERT with a metric-learning loss strategy called self-alignment pretraining (SAP), we link entities based on the cosine similarity between their SAP-BioBERT word embeddings. Despite the success of our named entity recognition experiments, we find the chemical indexing task generally more challenging. In addition to conventional NER methods, we attempt both named entity recognition and entity linking with a novel text-to-text or ""prompt"" based method that uses generative language models such as T5 and GPT. We achieve encouraging results with this new approach.",,arXiv,['cs.cl'],, +925,gpts at factify 2022 prompt aided factverification,"['Pawan Kumar Sahu', 'Saksham Aggarwal', 'Taneesh Gupta', 'Gyanendra Das']",http://arxiv.org/pdf/2206.14913v1.pdf,2022-06-29,," One of the most pressing societal issues is the fight against false news. The false claims, as difficult as they are to expose, create a lot of damage. To tackle the problem, fact verification becomes crucial and thus has been a topic of interest among diverse research communities. Using only the textual form of data we propose our solution to the problem and achieve competitive results with other approaches. We present our solution based on two approaches - PLM (pre-trained language model) based method and Prompt based method. The PLM-based approach uses the traditional supervised learning, where the model is trained to take 'x' as input and output prediction 'y' as P(y|x). Whereas, Prompt-based learning reflects the idea to design input to fit the model such that the original objective may be re-framed as a problem of (masked) language modeling. We may further stimulate the rich knowledge provided by PLMs to better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our experiments showed that the proposed method performs better than just fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and a 7th position on the competition leader-board.",,arXiv,['cs.cl'],, +926,quantifying language models' sensitivity to spurious features in prompt design or how i learned to start worrying about prompt formatting,"['Melanie Sclar', 'Yejin Choi', 'Yulia Tsvetkov', 'Alane Suhr']",http://arxiv.org/pdf/2310.11324v1.pdf,2023-10-17,," As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can strongly influence model behavior, this design process is critical in effectively using any modern pre-trained generative language model.
In this work, we focus on LLM sensitivity to a quintessential class of meaning-preserving design choices: prompt formatting. We find that several widely used open-source LLMs are extremely sensitive to subtle changes in prompt formatting in few-shot settings, with performance differences of up to 76 accuracy points when evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model size, the number of few-shot examples, or performing instruction tuning. Our analysis suggests that work evaluating LLMs with prompting-based methods would benefit from reporting a range of performance across plausible prompt formats, instead of the currently-standard practice of reporting performance on a single format. We also show that format performance only weakly correlates between models, which puts into question the methodological validity of comparing models with an arbitrarily chosen, fixed prompt format. To facilitate systematic analysis we propose FormatSpread, an algorithm that rapidly evaluates a sampled set of plausible prompt formats for a given task, and reports the interval of expected performance without accessing model weights. Furthermore, we present a suite of analyses that characterize the nature of this sensitivity, including exploring the influence of particular atomic perturbations and the internal representation of particular formats.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +927,gpt3driven pedagogical agents for training children's curious questionasking skills,"['Rania Abdelghani', 'Yen-Hsiang Wang', 'Xingdi Yuan', 'Tong Wang', 'Pauline Lucas', 'Hélène Sauzéon', 'Pierre-Yves Oudeyer']",http://arxiv.org/pdf/2211.14228v6.pdf,2022-11-25,," In order to train children's ability to ask curiosity-driven questions, previous research has explored designing specific exercises relying on providing semantic and linguistic cues to help formulate such questions. But despite showing pedagogical efficiency, this method is still limited as it relies on generating the said cues by hand, which can be a very costly process. In this context, we propose to leverage advances in the natural language processing field (NLP) and investigate the efficiency of using a large language model (LLM) for automating the production of the pedagogical content of a curious question-asking (QA) training. We study generating the said content using the ""prompt-based"" method that consists of explaining the task to the LLM in natural text. We evaluate the output using human experts annotations and comparisons with hand-generated content. Results suggested indeed the relevance and usefulness of this content. We also conduct a field study in primary school (75 children aged 9-10), where we evaluate children's QA performance when having this training. We compare 3 types of content : 1) hand-generated content that proposes ""closed"" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes ""open"" cues leading to several possible questions. We see a similar QA performance between the two ""closed"" trainings (showing the scalability of the approach using GPT-3), and a better one for participants with the ""open"" training. These results suggest the efficiency of using LLMs to support children in generating more curious questions, using a natural language prompting approach that affords usability by teachers and other users not specialists of AI techniques.
Furthermore, results also show that open-ended content may be more suitable for training curious question-asking skills.",,arXiv,"['cs.cl', 'cs.hc']",, +928,mentalllm leveraging large language models for mental health prediction via online text data,"['Xuhai Xu', 'Bingsheng Yao', 'Yuanzhe Dong', 'Saadia Gabriel', 'Hong Yu', 'James Hendler', 'Marzyeh Ghassemi', 'Anind K. Dey', 'Dakuo Wang']",http://arxiv.org/pdf/2307.14385v3.pdf,2023-07-26,," Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4, on various mental health prediction tasks via online text data. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for the mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on the mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize the important limitations before achieving deployability in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.",,arXiv,"['cs.cl', '68u35', 'h.5.2; i.2.m']",, +929,towards zerolabel language learning,"['Zirui Wang', 'Adams Wei Yu', 'Orhan Firat', 'Yuan Cao']",http://arxiv.org/pdf/2109.09193v1.pdf,2021-09-19,," This paper explores zero-label learning in Natural Language Processing (NLP), whereby no human-annotated data is used anywhere during training and models are trained purely on synthetic data. At the core of our framework is a novel approach for better leveraging the powerful pretrained language models. Specifically, inspired by the recent success of few-shot inference on GPT-3, we present a training data creation procedure named Unsupervised Data Generation (UDG), which leverages few-shot prompts to synthesize high-quality training data without real human annotations. Our method enables zero-label learning as we train task-specific models solely on the synthetic data, yet we achieve better or comparable results from strong baseline models trained on human-labeled data. Furthermore, when mixed with labeled data, our approach serves as a highly effective data augmentation procedure, achieving new state-of-the-art results on the SuperGLUE benchmark.",,arXiv,"['cs.cl', 'cs.lg']",,
+930,covid vaccine is against covid but oxford vaccine is made at oxford! semantic interpretation of proper noun compounds,"['Keshav Kolluru', 'Gabriel Stanovsky', ' Mausam']",http://arxiv.org/pdf/2210.13039v1.pdf,2022-10-24,," Proper noun compounds, e.g., ""Covid vaccine"", convey information in a succinct manner (a ""Covid vaccine"" is a ""vaccine that immunizes against the Covid disease""). These are commonly used in short-form domains, such as news headlines, but are largely ignored in information-seeking applications. To address this limitation, we release a new manually annotated dataset, ProNCI, consisting of 22.5K proper noun compounds along with their free-form semantic interpretations. ProNCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples, which have not been previously explored. We experiment with various neural models for automatically generating the semantic interpretations from proper noun compounds, ranging from few-shot prompting to supervised learning, with varying degrees of knowledge about the constituent nouns. We find that adding targeted knowledge, particularly about the common noun, results in performance gains of upto 2.8%. Finally, we integrate our model generated interpretations with an existing Open IE system and observe an 7.5% increase in yield at a precision of 85%. The dataset and code are available at https://github.com/dair-iitd/pronci.",,arXiv,['cs.cl'],, +931,visualizing linguistic diversity of text datasets synthesized by large language models,"['Emily Reif', 'Minsuk Kahng', 'Savvas Petridis']",http://arxiv.org/pdf/2305.11364v2.pdf,2023-05-19,," Large language models (LLMs) can be used to generate smaller, more refined datasets via few-shot prompting for benchmarking, fine-tuning or other use cases. However, understanding and evaluating these datasets is difficult, and the failure modes of LLM-generated data are still not well understood. Specifically, the data can be repetitive in surprising ways, not only semantically but also syntactically and lexically. We present LinguisticLens, a novel inter-active visualization tool for making sense of and analyzing syntactic diversity of LLM-generated datasets. LinguisticLens clusters text along syntactic, lexical, and semantic axes. It supports hierarchical visualization of a text dataset, allowing users to quickly scan for an overview and inspect individual examples. The live demo is available at shorturl.at/zHOUV.",,arXiv,"['cs.cl', 'cs.ai']",, +932,summqa at mediqachat 2023incontext learning with gpt4 for medical summarization,"['Yash Mathur', 'Sanketh Rangreji', 'Raghav Kapoor', 'Medha Palavalli', 'Amanda Bertsch', 'Matthew R. Gormley']",http://arxiv.org/pdf/2306.17384v1.pdf,2023-06-30,," Medical dialogue summarization is challenging due to the unstructured nature of medical conversations, the use of medical terminology in gold summaries, and the need to identify key information across multiple symptom sets. We present a novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA 2023 Shared Task. Our approach for section-wise summarization (Task A) is a two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4. For full-note summarization (Task B), we use a similar solution with k=1.
We achieved 3rd place in Task A (2nd among all teams), 4th place in Task B Division Wise Summarization (2nd among all teams), 15th place in Task A Section Header Classification (9th among all teams), and 8th place among all teams in Task B. Our results highlight the effectiveness of few-shot prompting for this task, though we also identify several weaknesses of prompting-based approaches. We compare GPT-4 performance with several finetuned baselines. We find that GPT-4 summaries are more abstractive and shorter. We make our code publicly available.",,arXiv,['cs.cl'],, +933,ecologically valid explanations for label variation in nli,"['Nan-Jiang Jiang', 'Chenhao Tan', 'Marie-Catherine de Marneffe']",http://arxiv.org/pdf/2310.13850v1.pdf,2023-10-20,," Human label variation, or annotation disagreement, exists in many natural language processing (NLP) tasks, including natural language inference (NLI). To gain direct evidence of how NLI label variation arises, we build LiveNLI, an English dataset of 1,415 ecologically valid explanations (annotators explain the NLI labels they chose) for 122 MNLI items (at least 10 explanations per item). The LiveNLI explanations confirm that people can systematically vary on their interpretation and highlight within-label variation: annotators sometimes choose the same label for different reasons. This suggests that explanations are crucial for navigating label interpretations in general. We few-shot prompt large language models to generate explanations but the results are inconsistent: they sometimes produces valid and informative explanations, but it also generates implausible ones that do not support the label, highlighting directions for improvement.",,arXiv,['cs.cl'],, +934,apiassisted code generation for question answering on varied table structures,"['Yihan Cao', 'Shuyi Chen', 'Ryan Liu', 'Zhiruo Wang', 'Daniel Fried']",http://arxiv.org/pdf/2310.14687v1.pdf,2023-10-23,," A persistent challenge to table question answering (TableQA) by generating executable programs has been adapting to varied table structures, typically requiring domain-specific logical forms. In response, this paper introduces a unified TableQA framework that: (1) provides a unified representation for structured tables as multi-index Pandas data frames, (2) uses Python as a powerful querying language, and (3) uses few-shot prompting to translate NL questions into Python programs, which are executable on Pandas data frames. Furthermore, to answer complex relational questions with extended program functionality and external knowledge, our framework allows customized APIs that Python programs can call. We experiment with four TableQA datasets that involve tables of different structures -- relational, multi-table, and hierarchical matrix shapes -- and achieve prominent improvements over past state-of-the-art systems. In ablation studies, we (1) show benefits from our multi-index representation and APIs over baselines that use only an LLM, and (2) demonstrate that our approach is modular and can incorporate additional APIs.",,arXiv,"['cs.cl', 'cs.ai']",, +935,tree of clarifications answering ambiguous questions with retrievalaugmented large language models,"['Gangwoo Kim', 'Sungdong Kim', 'Byeongguk Jeon', 'Joonsuk Park', 'Jaewoo Kang']",http://arxiv.org/pdf/2310.14696v1.pdf,2023-10-23,," Questions in open-domain question answering are often ambiguous, allowing multiple interpretations.
One approach to handling them is to identify all possible interpretations of the ambiguous question (AQ) and to generate a long-form answer addressing them all, as suggested by Stelmakh et al., (2022). While it provides a comprehensive response without bothering the user for clarification, considering multiple dimensions of ambiguity and gathering corresponding knowledge remains a challenge. To cope with the challenge, we propose a novel framework, Tree of Clarifications (ToC): It recursively constructs a tree of disambiguations for the AQ -- via few-shot prompting leveraging external knowledge -- and uses it to generate a long-form answer. ToC outperforms existing baselines on ASQA in a few-shot setup across the metrics, while surpassing fully-supervised baselines trained on the whole training set in terms of Disambig-F1 and Disambig-ROUGE. Code is available at https://github.com/gankim/tree-of-clarifications.",,arXiv,['cs.cl'],, +936,dissecting incontext learning of translations in gpts,"['Vikas Raunak', 'Hany Hassan Awadalla', 'Arul Menezes']",http://arxiv.org/pdf/2310.15987v1.pdf,2023-10-24,," Most of the recent work in leveraging Large Language Models (LLMs) such as GPT-3 for Machine Translation (MT) has focused on selecting the few-shot samples for prompting. In this work, we try to better understand the role of demonstration attributes for the in-context learning of translations through perturbations of high-quality, in-domain demonstrations. We find that asymmetric perturbation of the source-target mappings yield vastly different results. We show that the perturbation of the source side has surprisingly little impact, while target perturbation can drastically reduce translation quality, suggesting that it is the output text distribution that provides the most important learning signal during in-context learning of translations. We propose a method named Zero-Shot-Context to add this signal automatically in Zero-Shot prompting. We demonstrate that it improves upon the zero-shot translation performance of GPT-3, even making it competitive with few-shot prompted translations.",,arXiv,"['cs.cl', 'cs.ai']",, +937,extraction of atypical aspects from customer reviews datasets and experiments with language models,"['Smita Nannaware', 'Erfan Al-Hossami', 'Razvan Bunescu']",http://arxiv.org/pdf/2311.02702v1.pdf,2023-11-05,," A restaurant dinner may become a memorable experience due to an unexpected aspect enjoyed by the customer, such as an origami-making station in the waiting area. If aspects that are atypical for a restaurant experience were known in advance, they could be leveraged to make recommendations that have the potential to engender serendipitous experiences, further increasing user satisfaction. Although relatively rare, whenever encountered, atypical aspects often end up being mentioned in reviews due to their memorable quality. Correspondingly, in this paper we introduce the task of detecting atypical aspects in customer reviews. To facilitate the development of extraction models, we manually annotate benchmark datasets of reviews in three domains - restaurants, hotels, and hair salons, which we use to evaluate a number of language models, ranging from fine-tuning the instruction-based text-to-text transformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5.",,arXiv,"['cs.cl', 'cs.ai']",,
+938,sqlprompt incontext texttosql with minimal labeled data,"['Ruoxi Sun', 'Sercan Ö. Arik', 'Rajarishi Sinha', 'Hootan Nakhost', 'Hanjun Dai', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2311.02883v1.pdf,2023-11-06,," Text-to-SQL aims to automate the process of generating SQL queries on a database from natural language text. In this work, we propose ""SQLPrompt"", tailored to improve the few-shot prompting capabilities of Text-to-SQL for Large Language Models (LLMs). Our methods include innovative prompt design, execution-based consistency decoding strategy which selects the SQL with the most consistent execution outcome among other SQL proposals, and a method that aims to improve performance by diversifying the SQL proposals during consistency selection with different prompt designs (""MixPrompt"") and foundation models (""MixLLMs""). We show that \emph{SQLPrompt} outperforms previous approaches for in-context learning with few labeled data by a large margin, closing the gap with finetuning state-of-the-art with thousands of labeled data.",,arXiv,['cs.cl'],, +939,jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue,"['Lena Reed', 'Cecilia Li', 'Angela Ramirez', 'Liren Wu', 'Marilyn Walker']",http://arxiv.org/pdf/2110.08094v2.pdf,2021-10-15,," One challenge with open-domain dialogue systems is the need to produce truthful, high-quality responses on any topic. We aim to improve the quality and coverage of Athena, an Alexa Prize dialogue system. We experiment with few-shot prompt-based learning, comparing GPT-Neo to Jurassic-1, for the movies, music, TV, sports, and video game domains, both within and cross-domain, with different prompt set sizes (2, 3, 10), formats, and meaning representations consisting of either sets of WikiData KG triples, or dialogue acts. Our evaluation uses BLEURT and human metrics, and shows that with 10-shot prompting, Athena-Jurassic's performance is significantly better for coherence and semantic accuracy. Experiments with 2-shot cross-domain prompts results in a huge performance drop for Athena-GPT-Neo, whose semantic accuracy falls to 0.41, and whose untrue hallucination rate increases to 12%. Experiments with dialogue acts for video games show that with 10-shot prompting, both models learn to control dialogue acts, but Athena-Jurassic has significantly higher coherence, and only 4% untrue hallucinations. Our results suggest that Athena-Jurassic produces high enough quality outputs to be useful in live systems with real users. To our knowledge, these are the first results demonstrating that few-shot semantic prompt-based learning can create NLGs that generalize to new domains, and produce high-quality, semantically-controlled, conversational responses directly from meaning representations.",,arXiv,['cs.cl'],, +940,codelmsec benchmark systematically evaluating and finding security vulnerabilities in blackbox code language models,"['Hossein Hajipour', 'Keno Hassler', 'Thorsten Holz', 'Lea Schönherr', 'Mario Fritz']",http://arxiv.org/pdf/2302.04012v2.pdf,2023-02-08,," Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks. Their advances in competition-level programming problems have made them an essential pillar of AI-assisted pair programming, and tools such as GitHub Copilot have emerged as part of the daily programming workflow used by millions of developers.
The training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure. While these models have been extensively assessed for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing the security aspects of these models. In this work, we propose a method to systematically study the security issues of code language models to assess their susceptibility to generating vulnerable code. To this end, we introduce the first approach to automatically find generated code that contains vulnerabilities in black-box code generation models. To achieve this, we present an approach to approximate inversion of the black-box code generation models based on few-shot prompting. We evaluate the effectiveness of our approach by examining code language models in generating high-risk security weaknesses. Furthermore, we establish a collection of diverse non-secure prompts for various vulnerability scenarios using our method. This dataset forms a benchmark for evaluating and comparing the security weaknesses in code language models.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.se']",, +941,scifix outperforming gpt3 on scientific factual error correction,"['Dhananjay Ashok', 'Atharva Kulkarni', 'Hai Pham', 'Barnabás Póczos']",http://arxiv.org/pdf/2305.14707v2.pdf,2023-05-24,," Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like scientific claims, where good verification models do not always exist. In this work, we introduce SciFix, a scientific claim correction system that does not require a verifier but can outperform existing methods by a considerable margin -- achieving correction accuracy of 84% on the SciFact dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method outperforms the very LLM that was used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5 achieving 58%, 61%, and 64% on the respective datasets, a consistently lower correction accuracy, despite using nearly 800 times as many parameters as our model.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +942,diffender diffusionbased adversarial defense against patch attacks,"['Caixin Kang', 'Yinpeng Dong', 'Zhengyi Wang', 'Shouwei Ruan', 'Yubo Chen', 'Hang Su', 'Xingxing Wei']",http://arxiv.org/pdf/2306.09124v3.pdf,2023-06-15,," Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models. Developing reliable defenses against patch attacks is crucial for real-world applications, yet current research in this area is unsatisfactory. In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.
DIFFender includes two main stages: patch localization and patch restoration. In the localization stage, we find and exploit an intriguing property of the diffusion model to precisely identify the locations of adversarial patches. In the restoration stage, we employ the diffusion model to reconstruct the adversarial regions in the images while preserving the integrity of the visual content. Thanks to the former finding, these two stages can be simultaneously guided by a unified diffusion model. Thus, we can utilize the close interaction between them to improve the whole defense performance. Moreover, we propose a few-shot prompt-tuning algorithm to fine-tune the diffusion model, enabling the pre-trained diffusion model to adapt to the defense task easily. We conduct extensive experiments on image classification, face recognition, and further in the physical world, demonstrating that our proposed method exhibits superior robustness under strong adaptive attacks and generalizes well across various scenarios, diverse classifiers, and multiple patch attack methods.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cr', 'cs.lg']",, +943,steering large language models for machine translation with finetuning and incontext learning,"['Duarte M. Alves', 'Nuno M. Guerreiro', 'João Alves', 'José Pombal', 'Ricardo Rei', 'José G. C. de Souza', 'Pierre Colombo', 'André F. T. Martins']",http://arxiv.org/pdf/2310.13448v1.pdf,2023-10-20,," Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness highly depends on the choice of few-shot examples and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities, due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.",,arXiv,['cs.cl'],, +944,on bilingual lexicon induction with large language models,"['Yaoyiran Li', 'Anna Korhonen', 'Ivan Vulić']",http://arxiv.org/pdf/2310.13995v1.pdf,2023-10-21,," Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development of bilingual lexicons. We ask the following research question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for BLI, and how does this approach compare against and complement current BLI approaches? To this end, we systematically study 1) zero-shot prompting for unsupervised BLI and 2) few-shot in-context prompting with a set of seed translation pairs, both without any LLM fine-tuning, as well as 3) standard BLI-oriented fine-tuning of smaller LLMs.
We experiment with 18 open-source text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two standard BLI benchmarks covering a range of typologically diverse languages. Our work is the first to demonstrate strong BLI capabilities of text-to-text mLLMs. The results reveal that few-shot prompting with in-context examples from nearest neighbours achieves the best performance, establishing new state-of-the-art BLI scores for many language pairs. We also conduct a series of in-depth analyses and ablation studies, providing more insights on BLI with (m)LLMs, also along with their limitations.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",, +945,an early evaluation of gpt4v(ision),"['Yang Wu', 'Shilong Wang', 'Hao Yang', 'Tian Zheng', 'Hongbo Zhang', 'Yanyan Zhao', 'Bing Qin']",http://arxiv.org/pdf/2310.16534v1.pdf,2023-10-25,," In this paper, we evaluate different abilities of GPT-4V including visual understanding, language understanding, visual puzzle solving, and understanding of other modalities such as depth, thermal, video, and audio. To estimate GPT-4V's performance, we manually construct 656 test instances and carefully evaluate the results of GPT-4V. The highlights of our findings are as follows: (1) GPT-4V exhibits impressive performance on English visual-centric benchmarks but fails to recognize simple Chinese texts in the images; (2) GPT-4V shows inconsistent refusal behavior when answering questions related to sensitive traits such as gender, race, and age; (3) GPT-4V obtains worse results than GPT-4 (API) on language understanding tasks including general language understanding benchmarks and visual commonsense knowledge evaluation benchmarks; (4) Few-shot prompting can improve GPT-4V's performance on both visual understanding and language understanding; (5) GPT-4V struggles to find the nuances between two similar images and solve the easy math picture puzzles; (6) GPT-4V shows non-trivial performance on the tasks of similar modalities to image, such as video and thermal. Our experimental results reveal the ability and limitations of GPT-4V and we hope our paper can provide some insights into the application and research of GPT-4V.",,arXiv,"['cs.cl', 'cs.cv']",, +946,you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation,"['Allyson Ettinger', 'Jena D. Hwang', 'Valentina Pyatkin', 'Chandra Bhagavatula', 'Yejin Choi']",http://arxiv.org/pdf/2310.17793v2.pdf,2023-10-26,," Large language models (LLMs) show amazing proficiency and fluency in the use of language. Does this mean that they have also acquired insightful linguistic knowledge about the language, to an extent that they can serve as an ""expert linguistic annotator""? In this paper, we examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning structure, focusing on the Abstract Meaning Representation (AMR; Banarescu et al. 2013) parsing formalism, which provides rich graphical representations of sentence meaning structure while abstracting away from surface forms. We compare models' analysis of this semantic structure across two settings: 1) direct production of AMR parses based on zero- and few-shot prompts, and 2) indirect partial reconstruction of AMR via metalinguistic natural language queries (e.g., ""Identify the primary event of this sentence, and the predicate corresponding to that event."").
Across these settings, we find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure -- however, model outputs are prone to frequent and major errors, and holistic analysis of parse acceptability shows that even with few-shot demonstrations, models have virtually 0% success in producing fully accurate parses. Eliciting natural language responses produces similar patterns of errors. Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.",,arXiv,"['cs.cl', 'cs.ai']",, +947,styleaware radiology report generation with radgraph and fewshot prompting,"['Benjamin Yan', 'Ruochen Liu', 'David E. Kuo', 'Subathra Adithan', 'Eduardo Pontes Reis', 'Stephen Kwak', 'Vasantha Kumar Venugopal', ""Chloe P. O'Connell"", 'Agustina Saenz', 'Pranav Rajpurkar', 'Michael Moor']",http://arxiv.org/pdf/2310.17811v2.pdf,2023-10-26,," Automatically generated reports from medical images promise to improve the workflow of radiologists. Existing methods consider an image-to-report modeling task by directly generating a fully-fledged report from an image. However, this conflates the content of the report (e.g., findings and their attributes) with its style (e.g., format and choice of words), which can lead to clinically inaccurate reports. To address this, we propose a two-step approach for radiology report generation. First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist. For this, we leverage RadGraph -- a graph representation of reports -- together with large language models (LLMs). In our quantitative evaluations, we find that our approach leads to beneficial performance. Our human evaluation with clinical raters highlights that the AI-generated reports are indistinguishably tailored to the style of individual radiologist despite leveraging only a few examples as context.",,arXiv,"['cs.ai', 'cs.cl']",, +948,mentallama interpretable mental health analysis on social media with large language models,"['Kailai Yang', 'Tianlin Zhang', 'Ziyan Kuang', 'Qianqian Xie', 'Sophia Ananiadou', 'Jimin Huang']",http://arxiv.org/pdf/2309.13567v2.pdf,2023-09-24,," With the development of web technology, social media texts are becoming a rich source for automatic mental health analysis. As traditional discriminative methods bear the problem of low interpretability, the recent large language models have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions. The results show that ChatGPT can generate approaching-human explanations for its correct classifications. However, LLMs still achieve unsatisfactory classification performance in a zero-shot/few-shot manner. Domain-specific finetuning is an effective solution, but faces 2 challenges: 1) lack of high-quality training data. 2) no open-source LLMs for interpretable mental health analysis were released to lower the finetuning cost. To alleviate these problems, we build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset on social media, with 105K data samples. The raw social media data are collected from 10 existing sources covering 8 mental health analysis tasks.
We use expert-written few-shot prompts and collected labels to prompt ChatGPT and obtain explanations from its responses. To ensure the reliability of the explanations, we perform strict automatic and human evaluations on the correctness, consistency, and quality of generated data. Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. We also evaluate the performance of MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where their correctness for making predictions and the quality of explanations are examined. The results show that MentalLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.",,arXiv,['cs.cl'],, +949,acecoder utilizing existing code to enhance code generation,"['Jia Li', 'Yunfei Zhao', 'Yongmin Li', 'Ge Li', 'Zhi Jin']",http://arxiv.org/pdf/2303.17780v3.pdf,2023-03-31,," Large Language Models (LLMs) have shown great success in code generation. LLMs take as the input a prompt and output the code. A key question is how to make prompts (i.e., Prompting Techniques). Existing prompting techniques are designed for natural language generation and have low accuracy in code generation. In this paper, we propose a new prompting technique named AceCoder. Our motivation is that code generation meets two unique challenges (i.e., requirement understanding and code implementation). AceCoder contains two novel mechanisms (i.e., guided code generation and example retrieval) to solve these challenges. (1) Guided code generation asks LLMs first to analyze requirements and output an intermediate preliminary (e.g., test cases). The preliminary is used to clarify requirements and tell LLMs ""what to write"". (2) Example retrieval selects similar programs as examples in prompts, which provide lots of relevant content (e.g., algorithms, APIs) and teach LLMs ""how to write"". We apply AceCoder to three LLMs (e.g., Codex) and evaluate it on three public benchmarks using the Pass@k. Results show that AceCoder can significantly improve the performance of LLMs on code generation. (1) In terms of Pass@1, AceCoder outperforms the state-of-the-art baseline by up to 56.4% in MBPP, 70.7% in MBJP, and 88.4% in MBJSP. (2) AceCoder is effective in LLMs with different sizes (i.e., 6B to 13B) and different languages (i.e., Python, Java, and JavaScript). (3) Human evaluation shows human developers prefer programs from AceCoder.",,arXiv,"['cs.se', 'cs.ai']",, +950,compositional semantic parsing with large language models,"['Andrew Drozdov', 'Nathanael Schärli', 'Ekin Akyürek', 'Nathan Scales', 'Xinying Song', 'Xinyun Chen', 'Olivier Bousquet', 'Denny Zhou']",http://arxiv.org/pdf/2209.15003v2.pdf,2022-09-29,," Humans can reason compositionally when presented with new tasks. Previous research shows that appropriate prompting techniques enable large language models (LLMs) to solve artificial compositional generalization tasks such as SCAN. In this work, we identify additional challenges in more realistic semantic parsing tasks with larger vocabulary and refine these prompting techniques to address them. Our best method is based on least-to-most prompting: it decomposes the problem using prompting-based syntactic parsing, then uses this decomposition to select appropriate exemplars and to sequentially generate the semantic parse.
This method allows us to set a new state of the art for CFQ while requiring only 1% of the training data used by traditional approaches. Due to the general nature of our approach, we expect similar efforts will lead to new results in other tasks and domains, especially for knowledge-intensive applications.",,arXiv,"['cs.cl', 'cs.ai']",, +951,gembamqm detecting translation quality error spans with gpt4,"['Tom Kocmi', 'Christian Federmann']",http://arxiv.org/pdf/2310.13988v1.pdf,2023-10-21,," This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLM), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.",,arXiv,['cs.cl'],, +952,eliciting topic hierarchies from large language models,"['Grace Li', 'Tao Long', 'Lydia B. Chilton']",http://arxiv.org/pdf/2310.19275v1.pdf,2023-10-30,," Finding topics to write about can be a mentally demanding process. However, topic hierarchies can help writers explore topics of varying levels of specificity. In this paper, we use large language models (LLMs) to help construct topic hierarchies. Although LLMs have access to such knowledge, it can be difficult to elicit due to issues of specificity, scope, and repetition. We designed and tested three different prompting techniques to find one that maximized accuracy. We found that prepending the general topic area to a prompt yielded the most accurate results with 85% accuracy. We discuss applications of this research including STEM writing, education, and content creation.",,arXiv,['cs.hc'],, +953,structured chainofthought prompting for code generation,"['Jia Li', 'Ge Li', 'Yongmin Li', 'Zhi Jin']",http://arxiv.org/pdf/2305.06599v3.pdf,2023-05-11,," Large Language Models (LLMs) (e.g., ChatGPT) have shown impressive performance in code generation. LLMs take prompts as inputs, and Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique. CoT prompting asks LLMs first to generate CoTs (i.e., intermediate natural language reasoning steps) and then output the code. However, CoT prompting is designed for natural language generation and has low accuracy in code generation. In this paper, we propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting. Our motivation is source code contains rich structural information and any code can be composed of three program structures (i.e., sequence, branch, and loop structures). Intuitively, structured intermediate reasoning steps make for structured source code. Thus, we ask LLMs to use program structures to build CoTs, obtaining SCoTs. Then, LLMs generate the final code based on SCoTs. Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to think about how to solve requirements from the view of source code and further the performance of LLMs in code generation.
We apply SCoT prompting to two LLMs (i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval, MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline - CoT prompting by up to 13.79% in Pass@1. (2) Human evaluation shows human developers prefer programs from SCoT prompting. (3) SCoT prompting is robust to examples and achieves substantial improvements.",,arXiv,"['cs.se', 'cs.cl']",, +954,the impact of ai in physics education a comprehensive review from gcse to university levels,"['Will Yeadon', 'Tom Hardy']",http://arxiv.org/pdf/2309.05163v1.pdf,2023-09-10,," With the rapid evolution of Artificial Intelligence (AI), its potential implications for higher education have become a focal point of interest. This study delves into the capabilities of AI in Physics Education and offers actionable AI policy recommendations. Using a Large Language Model (LLM), we assessed its ability to answer 1337 Physics exam questions spanning GCSE, A-Level, and Introductory University curricula. We employed various AI prompting techniques: Zero Shot, In Context Learning, and Confirmatory Checking, which merges Chain of Thought reasoning with Reflection. The AI's proficiency varied across academic levels: it scored an average of 83.4% on GCSE, 63.8% on A-Level, and 37.4% on university-level questions, with an overall average of 59.9% using the most effective prompting technique. In a separate test, the LLM's accuracy on 5000 mathematical operations was found to decrease as the number of digits increased. Furthermore, when evaluated as a marking tool, the LLM's concordance with human markers averaged at 50.8%, with notable inaccuracies in marking straightforward questions, like multiple-choice. Given these results, our recommendations underscore caution: while current LLMs can consistently perform well on Physics questions at earlier educational stages, their efficacy diminishes with advanced content and complex calculations. LLM outputs often showcase novel methods not in the syllabus, excessive verbosity, and miscalculations in basic arithmetic. This suggests that at university, there's no substantial threat from LLMs for non-invigilated Physics questions. However, given the LLMs' considerable proficiency in writing Physics essays and coding abilities, non-invigilated examinations of these skills in Physics are highly vulnerable to automated completion by LLMs. This vulnerability also extends to Physics questions pitched at lower academic levels.",,arXiv,['physics.ed-ph'],, +955,languagespecific representation of emotionconcept knowledge causally supports emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhiyuan Liu', 'Dan Zhang']",http://arxiv.org/pdf/2302.09582v4.pdf,2023-02-19,," Understanding how language supports emotion inference remains a topic of debate in emotion science. The present study investigated whether language-derived emotion-concept knowledge would causally support emotion inference by manipulating the language-specific knowledge representations in large language models. Using the prompt technique, 14 attributes of emotion concepts were found to be represented by distinct artificial neuron populations. By manipulating these attribute-related neurons, the majority of the emotion inference tasks showed performance deterioration compared to random manipulations. The attribute-specific performance deterioration was related to the importance of different attributes in human mental space. 
Our findings provide causal evidence in support of a language-based mechanism for emotion inference and highlight the contributions of emotion-concept knowledge.",,arXiv,"['cs.ai', 'cs.cl']",, +956,posqa probe the world models of llms with size comparisons,"['Chang Shu', 'Jiuzhou Han', 'Fangyu Liu', 'Ehsan Shareghi', 'Nigel Collier']",http://arxiv.org/pdf/2310.13394v1.pdf,2023-10-20,," Embodied language comprehension emphasizes that language understanding is not solely a matter of mental processing in the brain but also involves interactions with the physical and social environment. With the explosive growth of Large Language Models (LLMs) and their already ubiquitous presence in our daily lives, it is becoming increasingly necessary to verify their real-world understanding. Inspired by cognitive theories, we propose POSQA: a Physical Object Size Question Answering dataset with simple size comparison questions to examine the extremity and analyze the potential mechanisms of the embodied comprehension of the latest LLMs. We show that even the largest LLMs today perform poorly under the zero-shot setting. We then push their limits with advanced prompting techniques and external knowledge augmentation. Furthermore, we investigate whether their real-world comprehension primarily derives from contextual information or internal weights and analyse the impact of prompt formats and report bias of different objects. Our results show that real-world understanding that LLMs shaped from textual data can be vulnerable to deception and confusion by the surface form of prompts, which makes it less aligned with human behaviours.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, +957,musr testing the limits of chainofthought with multistep soft reasoning,"['Zayne Sprague', 'Xi Ye', 'Kaj Bostrom', 'Swarat Chaudhuri', 'Greg Durrett']",http://arxiv.org/pdf/2310.16049v1.pdf,2023-10-24,," While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while benchmark datasets for tasks like logical deduction have remained static. We introduce MuSR, a dataset for evaluating language models on multistep soft reasoning tasks specified in a natural language narrative. This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm, enabling the construction of complex reasoning instances that challenge GPT-4 (e.g., murder mysteries roughly 1000 words in length) and which can be scaled further as more capable LLMs are released. Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning; this makes it simultaneously much more challenging than other synthetically-crafted benchmarks while remaining realistic and tractable for human annotators to solve with high accuracy. We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning.",,arXiv,['cs.cl'],, +958,"supercharging academic writing with generative ai framework, techniques, and caveats",['Zhicheng Lin'],http://arxiv.org/pdf/2310.17143v1.pdf,2023-10-26,," Academic writing is an indispensable yet laborious part of the research enterprise. 
This Perspective maps out principles and methods for using generative artificial intelligence (AI), specifically large language models (LLMs), to elevate the quality and efficiency of academic writing. We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing. The framework pinpoints both short-term and long-term reasons for engagement and their underlying mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals the role of AI throughout the writing process, conceptualized through a two-stage model for human-AI collaborative writing, and the nature of AI assistance in writing, represented through a model of writing-assistance types and levels. Building on this framework, we describe effective prompting techniques for incorporating AI into the writing routine (outlining, drafting, and editing) as well as strategies for maintaining rigorous scholarship, adhering to varied journal policies, and avoiding overreliance on AI. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.",,arXiv,"['cs.cy', 'cs.cl']",, +959,little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task,"['Neema Kotonya', 'Saran Krishnasamy', 'Joel Tetreault', 'Alejandro Jaimes']",http://arxiv.org/pdf/2311.00686v1.pdf,2023-11-01,," This paper describes and analyzes our participation in the 2023 Eval4NLP shared task, which focuses on assessing the effectiveness of prompt-based techniques to empower Large Language Models to handle the task of quality estimation, particularly in the context of evaluating machine translations and summaries. We conducted systematic experiments with various prompting techniques, including standard prompting, prompts informed by annotator instructions, and innovative chain-of-thought prompting. In addition, we integrated these approaches with zero-shot and one-shot learning methods to maximize the efficacy of our evaluation procedures. Our work reveals that combining these approaches using a ""small"", open source model (orca_mini_v3_7B) yields competitive results.",,arXiv,['cs.cl'],, +960,can large language models design accurate label functions,"['Naiqing Guan', 'Kaiwen Chen', 'Nick Koudas']",http://arxiv.org/pdf/2311.00739v1.pdf,2023-11-01,," Programmatic weak supervision methodologies facilitate the expedited labeling of extensive datasets through the use of label functions (LFs) that encapsulate heuristic data sources. Nonetheless, the creation of precise LFs necessitates domain expertise and substantial endeavors. Recent advances in pre-trained language models (PLMs) have exhibited substantial potential across diverse tasks. However, the capacity of PLMs to autonomously formulate accurate LFs remains an underexplored domain. In this research, we address this gap by introducing DataSculpt, an interactive framework that harnesses PLMs for the automated generation of LFs. Within DataSculpt, we incorporate an array of prompting techniques, instance selection strategies, and LF filtration methods to explore the expansive design landscape. Ultimately, we conduct a thorough assessment of DataSculpt's performance on 12 real-world datasets, encompassing a range of tasks. 
This evaluation unveils both the strengths and limitations of contemporary PLMs in LF design.",,arXiv,"['cs.cl', 'cs.db', 'cs.lg', 'h.2.8; i.5.4']",, +961,once boosting contentbased recommendation with both open and closedsource large language models,"['Qijiong Liu', 'Nuo Chen', 'Tetsuya Sakai', 'Xiao-Ming Wu']",http://arxiv.org/pdf/2305.06566v4.pdf,2023-05-11,," Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services. However, existing recommenders face significant challenges in understanding the content of items. Large language models (LLMs), which possess deep semantic comprehension and extensive knowledge from pretraining, have proven to be effective in various natural language processing tasks. In this study, we explore the potential of leveraging both open- and closed-source LLMs to enhance content-based recommendation. With open-source LLMs, we utilize their deep layers as content encoders, enriching the representation of content at the embedding level. For closed-source LLMs, we employ prompting techniques to enrich the training data at the token level. Through comprehensive experiments, we demonstrate the high effectiveness of both types of LLMs and show the synergistic relationship between them. Notably, we observed a significant relative improvement of up to 19.32% compared to existing state-of-the-art recommendation models. These findings highlight the immense potential of both open- and closed-source of LLMs in enhancing content-based recommendation systems. We will make our code and LLM-generated data available for other researchers to reproduce our results.",,arXiv,"['cs.ir', 'cs.cl']",, +962,crosslingual prompting improving zeroshot chainofthought reasoning across languages,"['Libo Qin', 'Qiguang Chen', 'Fuxuan Wei', 'Shijue Huang', 'Wanxiang Che']",http://arxiv.org/pdf/2310.14799v1.pdf,2023-10-23,," Chain-of-thought (CoT) is capable of eliciting models to explicitly generate reasoning paths, thus promoting reasoning accuracy and attracting increasing attention. Specifically, zero-shot CoT achieves remarkable improvements in a wide range of reasoning tasks by simply instructing the LLM with the prompt ""Let's think step by step!"". Despite the success of zero-shot CoT, the existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages and hindering global development. In this work, we introduce cross-lingual prompting (CLP), aiming to improve zero-shot CoT reasoning across languages. Specifically, CLP consists of two main components: (1) cross-lingual alignment prompting and (2) task-specific solver prompting. The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task. In addition, we further introduce cross-lingual self-consistent prompting (CLSP) to ensemble different reasoning paths across languages. Our experimental evaluations on several benchmarks demonstrate that CLP and CLSP significantly outperform the existing prompting methods and achieve state-of-the-art performance. 
We hope this work will inspire further breakthroughs in cross-lingual CoT.",,arXiv,"['cs.cl', 'cs.ai']",, +963,hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks,"['Yihong Ma', 'Ning Yan', 'Jiayu Li', 'Masood Mortazavi', 'Nitesh V. Chawla']",http://arxiv.org/pdf/2310.15318v3.pdf,2023-10-23,," Graphs have emerged as a natural choice to represent and analyze the intricate patterns and rich information of the Web, enabling applications such as online page classification and social recommendation. The prevailing ""pre-train, fine-tune"" paradigm has been widely adopted in graph machine learning tasks, particularly in scenarios with limited labeled nodes. However, this approach often exhibits a misalignment between the training objectives of pretext tasks and those of downstream tasks. This gap can result in the ""negative transfer"" problem, wherein the knowledge gained from pre-training adversely affects performance in the downstream tasks. The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a ""pre-train, prompt"" paradigm to graphs as an alternative. However, existing graph prompting techniques are tailored to homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a general post-training prompting framework to improve the predictive performance of pre-trained heterogeneous graph neural networks (HGNNs). The key is the design of a novel prompting function that integrates a virtual class prompt and a heterogeneous feature prompt, with the aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-view neighborhood aggregation mechanism, capturing the complex neighborhood structure in heterogeneous graphs. Extensive experiments on three benchmark datasets demonstrate HetGPT's capability to enhance the performance of state-of-the-art HGNNs on semi-supervised node classification.",,arXiv,"['cs.lg', 'cs.ai']",, +964,llm4dyg can large language models solve problems on dynamic graphs,"['Zeyang Zhang', 'Xin Wang', 'Ziwei Zhang', 'Haoyang Li', 'Yijian Qin', 'Simin Wu', 'Wenwu Zhu']",http://arxiv.org/pdf/2310.17110v1.pdf,2023-10-26,," In an era marked by the increasing adoption of Large Language Models (LLMs) for various tasks, there is a growing focus on exploring LLMs' capabilities in handling web data, particularly graph data. Dynamic graphs, which capture temporal network evolution patterns, are ubiquitous in real-world web data. Evaluating LLMs' competence in understanding spatial-temporal information on dynamic graphs is essential for their adoption in web applications, which remains unexplored in the literature. In this paper, we bridge the gap via proposing to evaluate LLMs' spatial-temporal understanding abilities on dynamic graphs, to the best of our knowledge, for the first time. Specifically, we propose the LLM4DyG benchmark, which includes nine specially designed tasks considering the capability evaluation of LLMs from both temporal and spatial dimensions. Then, we conduct extensive experiments to analyze the impacts of different data generators, data statistics, prompting techniques, and LLMs on the model performance. Finally, we propose Disentangled Spatial-Temporal Thoughts (DST2) for LLMs on dynamic graphs to enhance LLMs' spatial-temporal understanding abilities. 
Our main observations are: 1) LLMs have preliminary spatial-temporal understanding abilities on dynamic graphs, 2) Dynamic graph tasks show increasing difficulties for LLMs as the graph size and density increase, while not sensitive to the time span and data generation mechanism, 3) the proposed DST2 prompting method can help to improve LLMs' spatial-temporal understanding abilities on dynamic graphs for most tasks. The data and codes will be open-sourced at publication time.",,arXiv,['cs.lg'],, +965,which is better exploring prompting strategy for llmbased metrics,"['Joonghoon Kim', 'Saeran Park', 'Kiyoon Jeong', 'Sangmin Lee', 'Seung Hun Han', 'Jiyoon Lee', 'Pilsung Kang']",http://arxiv.org/pdf/2311.03754v1.pdf,2023-11-07,," This paper describes the DSBA submissions to the Prompting Large Language Models as Explainable Metrics shared task, where systems were submitted to two tracks: small and large summarization tracks. With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has become increasingly paramount. Traditional similarity-based metrics such as BLEU and ROUGE have shown to misalign with human evaluation and are ill-suited for open-ended generation tasks. To address this issue, we explore the potential capability of LLM-based metrics, especially leveraging open-source LLMs. In this study, wide range of prompts and prompting techniques are systematically analyzed with three approaches: prompting strategy, score aggregation, and explainability. Our research focuses on formulating effective prompt templates, determining the granularity of NLG quality scores and assessing the impact of in-context examples on LLM-based evaluation. Furthermore, three aggregation strategies are compared to identify the most reliable method for aggregating NLG quality scores. To examine explainability, we devise a strategy that generates rationales for the scores and analyzes the characteristics of the explanation produced by the open-source LLMs. Extensive experiments provide insights regarding evaluation capabilities of open-source LLMs and suggest effective prompting strategies.",,arXiv,['cs.cl'],, +966,autonomous treesearch ability of large language models,"['Zheyu Zhang', 'Zhuorui Ye', 'Yikang Shen', 'Chuang Gan']",http://arxiv.org/pdf/2310.10686v1.pdf,2023-10-14,," Large Language Models have excelled in remarkable reasoning capabilities with advanced prompting techniques, but they fall short on tasks that require exploration, strategic foresight, and sequential decision-making. Recent works propose to utilize external programs to define search logic, such that LLMs can perform passive tree search to solve more challenging reasoning tasks. Though impressive results have been achieved, there are several fundamental limitations of these approaches. First, passive tree searches are not efficient as they usually require multiple rounds of LLM API calls to solve one single problem. Moreover, passive search methods are not flexible since they need task-specific program designs. Then a natural question arises: can we maintain the tree-search capability of LLMs without the aid of external programs, and can still generate responses that clearly demonstrate the process of a tree-structure search? To this end, we propose a new concept called autonomous tree-search ability of LLM, which can automatically generate a response containing search trajectories for the correct answer. 
Concretely, we perform search trajectories using capable LLM API via a fixed system prompt, allowing them to perform autonomous tree-search (ATS) right out of the box. Experiments on 4 puzzle games demonstrate our method can achieve huge improvements. The ATS-BFS method outperforms the Chain of Thought approach by achieving an average accuracy improvement of 33%. Compared to Tree of Thoughts, it requires 65.6% or 47.7% less GPT-api cost to attain a comparable level of accuracy. Moreover, we have collected data using the ATS prompt method and fine-tuned LLaMA. This approach yield a greater improvement compared to the ones fine-tuned on CoT data. Specifically, it outperforms CoT-tuned LLaMAs by an average of 40.6% and 38.5% for LLaMA2-7B and LLaMA2-13B, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, +967,s$^3$hqa a threestage approach for multihop texttable hybrid question answering,"['Fangyu Lei', 'Xiang Li', 'Yifan Wei', 'Shizhu He', 'Yiming Huang', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2305.11725v1.pdf,2023-05-19,," Answering multi-hop questions over hybrid factual knowledge from the given text and table (TextTableQA) is a challenging task. Existing models mainly adopt a retriever-reader framework, which have several deficiencies, such as noisy labeling in training retriever, insufficient utilization of heterogeneous information over text and table, and deficient ability for different reasoning operations. In this paper, we propose a three-stage TextTableQA framework S3HQA, which comprises of retriever, selector, and reasoner. We use a retriever with refinement training to solve the noisy labeling problem. Then, a hybrid selector considers the linked relationships between heterogeneous data to select the most relevant factual knowledge. For the final stage, instead of adapting a reading comprehension module like in previous methods, we employ a generation-based reasoner to obtain answers. This includes two approaches: a row-wise generator and an LLM prompting generator~(first time used in this task). The experimental results demonstrate that our method achieves competitive results in the few-shot setting. When trained on the full dataset, our approach outperforms all baseline methods, ranking first on the HybridQA leaderboard.",,arXiv,['cs.cl'],, +968,a mlllm pairing for better code comment classification,['Hanna Abi Akl'],http://arxiv.org/pdf/2310.10275v1.pdf,2023-10-13,," The ""Information Retrieval in Software Engineering (IRSE)"" at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. 
Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.",,arXiv,"['cs.se', 'cs.ai']",, +969,multistage large language model correction for speech recognition,"['Jie Pu', 'Thai-Son Nguyen', 'Sebastian Stüker']",http://arxiv.org/pdf/2310.11532v1.pdf,2023-10-17,," In this paper, we investigate the usage of large language models (LLMs) to improve the performance of competitive speech recognition systems. Different from traditional language models that focus on one single data domain, the rise of LLMs brings us the opportunity to push the limit of state-of-the-art ASR performance, and at the same time to achieve higher robustness and generalize effectively across multiple domains. Motivated by this, we propose a novel multi-stage approach to combine traditional language model re-scoring and LLM prompting. Specifically, the proposed method has two stages: the first stage uses a language model to re-score an N-best list of ASR hypotheses and run a confidence check; The second stage uses prompts to a LLM to perform ASR error correction on less confident results from the first stage. Our experimental results demonstrate the effectiveness of the proposed method by showing a 10% ~ 20% relative improvement in WER over a competitive ASR system -- across multiple test domains.",,arXiv,"['cs.cl', 'eess.as']",, +970,omnifill domainagnostic form filling suggestions using multifaceted context,"['Timothy J. Aveni', 'Armando Fox', 'Björn Hartmann']",http://arxiv.org/pdf/2310.17826v1.pdf,2023-10-27,," Predictive suggestion systems offer contextually-relevant text entry completions. Existing approaches, like autofill, often excel in narrowly-defined domains but fail to generalize to arbitrary workflows. We introduce a conceptual framework to analyze the compound demands of a particular suggestion context, yielding unique opportunities for large language models (LLMs) to infer suggestions for a wide range of domain-agnostic form-filling tasks that were out of reach with prior approaches. We explore these opportunities in OmniFill, a prototype that collects multi-faceted context including browsing and text entry activity to construct an LLM prompt that offers suggestions in situ for arbitrary structured text entry interfaces. Through a user study with 18 participants, we found that OmniFill offered valuable suggestions and we identified four themes that characterize users' behavior and attitudes: an ""opportunistic scrapbooking"" approach; a trust placed in the system; value in partial success; and a need for visibility into prompt context.",,arXiv,['cs.hc'],, +971,knowledgeinfused prompting assessing and advancing clinical text data generation with large language models,"['Ran Xu', 'Hejie Cui', 'Yue Yu', 'Xuan Kan', 'Wenqi Shi', 'Yuchen Zhuang', 'Wei Jin', 'Joyce Ho', 'Carl Yang']",http://arxiv.org/pdf/2311.00287v1.pdf,2023-11-01,," Clinical natural language processing requires methods that can address domain-specific challenges, such as complex medical terminology and clinical contexts. Recently, large language models (LLMs) have shown promise in this domain. Yet, their direct deployment can lead to privacy issues and are constrained by resources. To address this challenge, we delve into synthetic clinical text generation using LLMs for clinical NLP tasks. We propose an innovative, resource-efficient approach, ClinGen, which infuses knowledge into the process. 
Our model involves clinical knowledge extraction and context-informed LLM prompting. Both clinical topics and writing styles are drawn from external domain-specific knowledge graphs and LLMs to guide data generation. Our extensive empirical study across 7 clinical NLP tasks and 16 datasets reveals that ClinGen consistently enhances performance across various tasks, effectively aligning the distribution of real datasets and significantly enriching the diversity of generated training instances. We will publish our code and all the generated data in \url{https://github.com/ritaranx/ClinGen}.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",, +972,fewshot reranking for multihop qa via language model prompting,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2205.12650v3.pdf,2022-05-25,," We study few-shot reranking for multi-hop QA with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on large language models prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples -- 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval. Code available at https://github.com/mukhal/PromptRank",,arXiv,"['cs.cl', 'cs.ir']",, +973,metaincontext learning in large language models,"['Julian Coda-Forno', 'Marcel Binz', 'Zeynep Akata', 'Matthew Botvinick', 'Jane X. Wang', 'Eric Schulz']",http://arxiv.org/pdf/2305.12907v1.pdf,2023-05-22,," Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems where we observe competitive performance to traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environment they are applied purely through meta-in-context learning rather than traditional finetuning.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +974,metavl transferring incontext learning ability from language models to visionlanguage models,"['Masoud Monajatipoor', 'Liunian Harold Li', 'Mozhdeh Rouhsedaghat', 'Lin F. Yang', 'Kai-Wei Chang']",http://arxiv.org/pdf/2306.01311v1.pdf,2023-06-02,," Large-scale language models have shown the ability to adapt to a new task via conditioning on a few demonstrations (i.e., in-context learning). 
However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models? In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to VL domain? Specifically, we first meta-trains a language model to perform in-context learning on NLP tasks (as in MetaICL); then we transfer this model to perform VL tasks by attaching a visual encoder. Our experiments suggest that indeed in-context learning ability can be transferred cross modalities: our model considerably improves the in-context learning capability on VL tasks and can even compensate for the size of the model significantly. On VQA, OK-VQA, and GQA, our method could outperform the baseline model while having 20 times fewer parameters.",,arXiv,['cs.cl'],, +975,an explanation of incontext learning as implicit bayesian inference,"['Sang Michael Xie', 'Aditi Raghunathan', 'Percy Liang', 'Tengyu Ma']",http://arxiv.org/pdf/2111.02080v6.pdf,2021-11-03,," Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output examples. The LM learns from these examples without being explicitly pretrained to learn. Thus, it is unclear what enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order, and instances where zero-shot is better than few-shot in-context learning.",,arXiv,"['cs.cl', 'cs.lg']",, +976,rethinking the role of scale for incontext learning an interpretabilitybased case study at 66 billion scale,"['Hritik Bansal', 'Karthik Gopalakrishnan', 'Saket Dingliwal', 'Sravan Bodapati', 'Katrin Kirchhoff', 'Dan Roth']",http://arxiv.org/pdf/2212.09095v2.pdf,2022-12-18,," Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to in-context learn-perform a task is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and number of in-context examples. 
We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, reinforcing arguments by Olsson et al. (arXiv:2209.11895) regarding induction head generality to more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained for in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, +977,a closer look at incontext learning under distribution shifts,"['Kartik Ahuja', 'David Lopez-Paz']",http://arxiv.org/pdf/2305.16704v1.pdf,2023-05-26,," In-context learning, a capability that enables a model to learn from input examples on the fly without necessitating weight updates, is a defining characteristic of large language models. In this work, we follow the setting proposed in (Garg et al., 2022) to better understand the generality and limitations of in-context learning from the lens of the simple yet fundamental task of linear regression. The key question we aim to address is: Are transformers more adept than some natural and simpler architectures at performing in-context learning under varying distribution shifts? To compare transformers, we propose to use a simple architecture based on set-based Multi-Layer Perceptrons (MLPs). We find that both transformers and set-based MLPs exhibit in-context learning under in-distribution evaluations, but transformers more closely emulate the performance of ordinary least squares (OLS). Transformers also display better resilience to mild distribution shifts, where set-based MLPs falter. However, under severe distribution shifts, both models' in-context learning abilities diminish.",,arXiv,"['cs.lg', 'stat.ml']",, +978,exploring the relationship between model architecture and incontext learning ability,"['Ivan Lee', 'Nan Jiang', 'Taylor Berg-Kirkpatrick']",http://arxiv.org/pdf/2310.08049v2.pdf,2023-10-12,," What is the relationship between model architecture and the ability to perform in-context learning? In this empirical study, we take the first steps toward answering this question. We evaluate twelve model architectures capable of causal language modeling across a suite of synthetic in-context learning tasks. These selected architectures represent a broad range of paradigms, including recurrent and convolution-based neural networks, transformers, state-space model inspired, and other emerging attention alternatives. We discover that all the considered architectures can perform in-context learning under a wider range of conditions than previously documented. Additionally, we observe stark differences in statistical efficiency and consistency by varying context length and task difficulty. We also measure each architecture's predisposition towards in-context learning when presented with alternative routes for task resolution. Finally, and somewhat surprisingly, we find that several attention alternatives are more robust in-context learners than transformers. 
Given that such approaches have constant-sized memory footprints at inference time, this result opens the possibility of scaling up in-context learning to accommodate vastly larger numbers of in-context examples.",,arXiv,['cs.lg'],, +979,what can transformers learn incontext a case study of simple function classes,"['Shivam Garg', 'Dimitris Tsipras', 'Percy Liang', 'Gregory Valiant']",http://arxiv.org/pdf/2208.01066v3.pdf,2022-08-01,," In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn ""most"" functions from this class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions -- that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the model and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes -- namely sparse linear functions, two-layer neural networks, and decision trees -- with performance that matches or exceeds task-specific learning algorithms. Our code and models are available at https://github.com/dtsip/in-context-learning .",,arXiv,"['cs.cl', 'cs.lg']",, +980,"structured prompting scaling incontext learning to 1,000 examples","['Yaru Hao', 'Yutao Sun', 'Li Dong', 'Zhixiong Han', 'Yuxian Gu', 'Furu Wei']",http://arxiv.org/pdf/2212.06713v1.pdf,2022-12-13,," Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective to absorb supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism. So we can scale the number of exemplars with linear complexity instead of quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. 
Code has been released at https://aka.ms/structured-prompting.",,arXiv,['cs.cl'],, +981,pretraining to learn in context,"['Yuxian Gu', 'Li Dong', 'Furu Wei', 'Minlie Huang']",http://arxiv.org/pdf/2305.09137v1.pdf,2023-05-16,," In-context learning, where pre-trained language models learn to perform tasks from task examples and instructions in their contexts, has attracted much attention in the NLP community. However, the ability of in-context learning is not fully exploited because language models are not explicitly trained to learn in context. To this end, we propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability by pre-training the model on a large collection of ""intrinsic tasks"" in the general plain-text corpus using the simple language modeling objective. PICL encourages the model to infer and perform tasks by conditioning on the contexts while maintaining task generalization of pre-trained models. We evaluate the in-context learning performance of the model trained with PICL on seven widely-used text classification datasets and the Super-NaturalInstrctions benchmark, which contains 100+ NLP tasks formulated to text generation. Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x parameters. The code is publicly available at https://github.com/thu-coai/PICL.",,arXiv,['cs.cl'],, +982,exnet efficient incontext learning for dataless text classification,"['Debaditya Shome', 'Kuldeep Yadav']",http://arxiv.org/pdf/2305.14622v1.pdf,2023-05-24,," Large pre-trained language models (PLMs) have made significant progress in encoding world knowledge and spawned a new set of learning paradigms including zero-shot, few-shot, and in-context learning. Many language tasks can be modeled as a set of prompts (for example, is this text about geography?) and language models can provide binary answers, i.e., Yes or No. There is evidence to suggest that the next-word prediction used by many PLMs does not align well with zero-shot paradigms. Therefore, PLMs are fine-tuned as a question-answering system. In-context learning extends zero-shot learning by incorporating prompts and examples, resulting in increased task accuracy. Our paper presents EXnet, a model specifically designed to perform in-context learning without any limitations on the number of examples. We argue that in-context learning is an effective method to increase task accuracy, and providing examples facilitates cross-task generalization, especially when it comes to text classification tasks. With extensive experiments, we show that even our smallest model (15M parameters) generalizes to several unseen classification tasks and domains.",,arXiv,"['cs.cl', 'cs.lg']",, +983,raven incontext learning with retrieval augmented encoderdecoder language models,"['Jie Huang', 'Wei Ping', 'Peng Xu', 'Mohammad Shoeybi', 'Kevin Chen-Chuan Chang', 'Bryan Catanzaro']",http://arxiv.org/pdf/2308.07922v1.pdf,2023-08-15,," In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models. We first conduct a comprehensive analysis of the state-of-the-art ATLAS model and identify its limitations in in-context learning, primarily due to a mismatch between pretraining and testing, as well as a restricted context length. To address these issues, we propose RAVEN, a model that combines retrieval-augmented masked language modeling and prefix language modeling. 
We further introduce Fusion-in-Context Learning to enhance the few-shot performance by enabling the model to leverage more in-context examples without requiring additional training or model modifications. Through extensive experiments, we demonstrate that RAVEN significantly outperforms ATLAS and achieves results comparable to the most advanced language models in certain scenarios, despite having substantially fewer parameters. Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning and encourages further research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +984,incontext learning dynamics with random binary sequences,"['Eric J. Bigelow', 'Ekdeep Singh Lubana', 'Robert P. Dick', 'Hidenori Tanaka', 'Tomer D. Ullman']",http://arxiv.org/pdf/2310.17639v2.pdf,2023-10-26,," Large language models (LLMs) trained on huge corpora of text datasets demonstrate intriguing capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often mysterious, and different prompts can elicit different capabilities through in-context learning. We propose a framework that enables us to analyze in-context learning dynamics to understand latent concepts underlying LLMs' behavioral patterns. This provides a more nuanced understanding than success-or-failure evaluation benchmarks, but does not require observing internal activations as a mechanistic interpretation of circuits would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study dynamics of in-context learning by manipulating properties of context data, such as sequence length. In the latest GPT-3.5+ models, we find emergent abilities to generate seemingly random numbers and learn basic formal languages, with striking in-context learning dynamics where model outputs transition sharply from seemingly random behaviors to deterministic repetition.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",, +985,incontext learning with many demonstration examples,"['Mukai Li', 'Shansan Gong', 'Jiangtao Feng', 'Yiheng Xu', 'Jun Zhang', 'Zhiyong Wu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.04931v1.pdf,2023-02-09,," Large pre-training language models (PLMs) have shown promising in-context learning abilities. However, due to the backbone transformer architecture, existing PLMs are bottlenecked by the memory and computational cost when scaling up to a large context size, leaving instruction tuning and in-context learning of many demonstration examples, as well as long-range language modeling under-explored. In this study, we propose a long-range language model EVALM based on an efficient transformer mechanism. EVALM is trained with 8k tokens per batch line and can test up to 256k-lengthed contexts with extrapolation, 128 times to the limit of existing PLMs (e.g. GPT3). Based on EVALM, we scale up the size of examples efficiently in both instruction tuning and in-context learning to explore the boundary of the benefits from more annotated data. Experimental results on a diverse set of tasks show that EVALM achieves 4.1% higher accuracy on average, and the average length of achieving the best accuracy score over tasks is around 12k. 
We find that in-context learning can achieve higher performance with more demonstrations under many-shot instruction tuning (8k), and further extending the length of instructions (16k) can further improve the upper bound of scaling in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, +986,the learnability of incontext learning,"['Noam Wies', 'Yoav Levine', 'Amnon Shashua']",http://arxiv.org/pdf/2303.07895v1.pdf,2023-03-14,," In-context learning is a surprising and important phenomenon that emerged when modern language models were scaled to billions of learned parameters. Without modifying a large language model's weights, it can be tuned to perform various downstream natural language tasks simply by including concatenated training examples of these tasks in its input. Though disruptive for many practical applications of large language models, this emergent learning paradigm is not well understood from a theoretical perspective. In this paper, we propose a first-of-its-kind PAC based framework for in-context learnability, and use it to provide the first finite sample complexity results for the in-context learning setup. Our framework includes an initial pretraining phase, which fits a function to the pretraining distribution, and then a second in-context learning phase, which keeps this function constant and concatenates training examples of the downstream task in its input. We use our framework in order to prove that, under mild assumptions, when the pretraining distribution is a mixture of latent tasks (a model often considered for natural language pretraining), these tasks can be efficiently learned via in-context learning, even though the model's weights are unchanged and the input significantly diverges from the pretraining distribution. Our theoretical analysis reveals that in this setting, in-context learning is more about identifying the task than about learning it, a result which is in line with a series of recent empirical findings. We hope that the in-context learnability framework presented in this paper will facilitate future progress towards a deeper understanding of this important new learning paradigm.",,arXiv,['cs.cl'],, +987,sinc selfsupervised incontext learning for visionlanguage tasks,"['Yi-Syuan Chen', 'Yun-Zhu Song', 'Cheng Yu Yeo', 'Bei Liu', 'Jianlong Fu', 'Hong-Han Shuai']",http://arxiv.org/pdf/2307.07742v2.pdf,2023-07-15,," Large Pre-trained Transformers exhibit an intriguing capacity for in-context learning. Without gradient updates, these models can rapidly construct new predictors from demonstrations presented in the inputs. Recent works promote this ability in the vision-language domain by incorporating visual information into large language models that can already make in-context predictions. However, these methods could inherit issues in the language domain, such as template sensitivity and hallucination. Also, the scale of these language models raises a significant demand for computations, making learning and operating these models resource-intensive. To this end, we raise a question: ``How can we enable in-context learning without relying on the intrinsic in-context ability of large language models?"". To answer it, we propose a succinct and general framework, Self-supervised IN-Context learning (SINC), that introduces a meta-model to learn on self-supervised prompts consisting of tailored demonstrations. The learned models can be transferred to downstream tasks for making in-context predictions on-the-fly. 
Extensive experiments show that SINC outperforms gradient-based methods in various vision-language tasks under few-shot settings. Furthermore, the designs of SINC help us investigate the benefits of in-context learning across different tasks, and the analysis further reveals the essential components for the emergence of in-context learning in the vision-language domain.",,arXiv,"['cs.cv', 'cs.ai']",, +988,selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator,"['Hyuhng Joon Kim', 'Hyunsoo Cho', 'Junyeob Kim', 'Taeuk Kim', 'Kang Min Yoo', 'Sang-goo Lee']",http://arxiv.org/pdf/2206.08082v1.pdf,2022-06-16,," Large-scale pre-trained language models (PLMs) are well-known for being capable of solving a task simply by conditioning a few input-label pairs dubbed demonstrations on a prompt without being explicitly tuned for the desired downstream task. Such a process (i.e., in-context learning), however, naturally leads to high reliance on the demonstrations which are usually selected from external datasets. In this paper, we propose self-generated in-context learning (SG-ICL), which generates demonstrations for in-context learning from PLM itself to minimize the reliance on the external demonstration. We conduct experiments on four different text classification tasks and show SG-ICL significantly outperforms zero-shot learning and is generally worth approximately 0.6 gold training samples. Moreover, our generated demonstrations show more consistent performance with low variance compared to randomly selected demonstrations from the training dataset.",,arXiv,['cs.cl'],, +989,active example selection for incontext learning,"['Yiming Zhang', 'Shi Feng', 'Chenhao Tan']",http://arxiv.org/pdf/2211.04486v1.pdf,2022-11-08,," With a handful of demonstration examples, large-scale language models show strong capability to perform various tasks by in-context learning from these examples, without any fine-tuning. We demonstrate that in-context learning performance can be highly unstable across samples of examples, indicating the idiosyncrasies of how language models acquire information. We formulate example selection for in-context learning as a sequential decision problem, and propose a reinforcement learning algorithm for identifying generalizable policies to select demonstration examples. For GPT-2, our learned policies demonstrate strong abilities of generalizing to unseen tasks in training, with a $5.8\%$ improvement on average. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models, suggesting emerging capabilities of large language models.",,arXiv,"['cs.cl', 'cs.ai']",, +990,bayesian optimization of catalysts with incontext learning,"['Mayk Caldas Ramos', 'Shane S. Michtavy', 'Marc D. Porosoff', 'Andrew D. White']",http://arxiv.org/pdf/2304.05341v1.pdf,2023-04-11,," Large language models (LLMs) are able to do accurate classification with zero or only a few examples (in-context learning). We show a prompting system that enables regression with uncertainty for in-context learning with frozen LLM (GPT-3, GPT-3.5, and GPT-4) models, allowing predictions without features or architecture tuning. By incorporating uncertainty, our approach enables Bayesian optimization for catalyst or molecule optimization using natural language, eliminating the need for training or simulation. Here, we performed the optimization using the synthesis procedure of catalysts to predict properties. 
Working with natural language mitigates difficulty synthesizability since the literal synthesis procedure is the model's input. We showed that in-context learning could improve past a model context window (maximum number of tokens the model can process at once) as data is gathered via example selection, allowing the model to scale better. Although our method does not outperform all baselines, it requires zero training, feature selection, and minimal computing while maintaining satisfactory performance. We also find Gaussian Process Regression on text embeddings is strong at Bayesian optimization. The code is available in our GitHub repository: https://github.com/ur-whitelab/BO-LIFT",,arXiv,"['physics.chem-ph', 'cs.lg']",, +991,incontext learning unlocked for diffusion models,"['Zhendong Wang', 'Yifan Jiang', 'Yadong Lu', 'Yelong Shen', 'Pengcheng He', 'Weizhu Chen', 'Zhangyang Wang', 'Mingyuan Zhou']",http://arxiv.org/pdf/2305.01115v2.pdf,2023-05-01,," We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and a text guidance, our model automatically understands the underlying task and performs the same task on a new query image following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks and a diffusion model that takes it as input. The diffusion model is trained jointly over six different tasks using these prompts. The resulting Prompt Diffusion model is the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation on the trained tasks and generalizes effectively to new, unseen vision tasks with their respective prompts. Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Prompt-Diffusion.",,arXiv,['cs.cv'],, +992,large language models can be lazy learners analyze shortcuts in incontext learning,"['Ruixiang Tang', 'Dehan Kong', 'Longtao Huang', 'Hui Xue']",http://arxiv.org/pdf/2305.17256v2.pdf,2023-05-26,," Large language models (LLMs) have recently shown great potential for in-context learning, where LLMs learn a new task simply by conditioning on a few input-label pairs (prompts). Despite their potential, our understanding of the factors influencing end-task performance and the robustness of in-context learning remains limited. This paper aims to bridge this knowledge gap by investigating the reliance of LLMs on shortcuts or spurious correlations within prompts. Through comprehensive experiments on classification and extraction tasks, we reveal that LLMs are ""lazy learners"" that tend to exploit shortcuts in prompts for downstream tasks. Additionally, we uncover a surprising finding that larger models are more likely to utilize shortcuts in prompts during inference. 
Our findings provide a new perspective on evaluating robustness inin-context learning and pose new challenges for detecting and mitigating theuse of shortcuts in prompts.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +993,multidimensional evaluation of text summarization with incontext learning,"['Sameer Jain', 'Vaishakh Keshava', 'Swarnashree Mysore Sathyendra', 'Patrick Fernandes', 'Pengfei Liu', 'Graham Neubig', 'Chunting Zhou']",http://arxiv.org/pdf/2306.01200v1.pdf,2023-06-01,," Evaluation of natural language generation (NLG) is complex andmulti-dimensional. Generated text can be evaluated for fluency, coherence,factuality, or any other dimensions of interest. Most frameworks that performsuch multi-dimensional evaluation require training on large manually orsynthetically generated datasets. In this paper, we study the efficacy of largelanguage models as multi-dimensional evaluators using in-context learning,obviating the need for large training datasets. Our experiments show thatin-context learning-based evaluators are competitive with learned evaluationframeworks for the task of text summarization, establishing state-of-the-art ondimensions such as relevance and factual consistency. We then analyze theeffects of factors such as the selection and number of in-context examples onperformance. Finally, we study the efficacy of in-context learning basedevaluators in evaluating zero-shot summaries written by large language modelssuch as GPT-3.",,arXiv,['cs.cl'],, +994,exploring the integration of large language models into automatic speech recognition systems an empirical study,"['Zeping Min', 'Jinbo Wang']",http://arxiv.org/pdf/2307.06530v1.pdf,2023-07-13,," This paper explores the integration of Large Language Models (LLMs) intoAutomatic Speech Recognition (ASR) systems to improve transcription accuracy.The increasing sophistication of LLMs, with their in-context learningcapabilities and instruction-following behavior, has drawn significantattention in the field of Natural Language Processing (NLP). Our primary focusis to investigate the potential of using an LLM's in-context learningcapabilities to enhance the performance of ASR systems, which currently facechallenges such as ambient noise, speaker accents, and complex linguisticcontexts. We designed a study using the Aishell-1 and LibriSpeech datasets,with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities.Unfortunately, our initial experiments did not yield promising results,indicating the complexity of leveraging LLM's in-context learning for ASRapplications. Despite further exploration with varied settings and models, thecorrected sentences from the LLMs frequently resulted in higher Word ErrorRates (WER), demonstrating the limitations of LLMs in speech applications. Thispaper provides a detailed overview of these experiments, their results, andimplications, establishing that using LLMs' in-context learning capabilities tocorrect potential errors in speech recognition transcriptions is still achallenging task at the current stage.",,arXiv,"['cs.cl', 'cs.sd', 'eess.as']",, +995,actsql incontext learning for texttosql with automaticallygenerated chainofthought,"['Hanchong Zhang', 'Ruisheng Cao', 'Lu Chen', 'Hongshen Xu', 'Kai Yu']",http://arxiv.org/pdf/2310.17342v1.pdf,2023-10-26,," Recently Large Language Models (LLMs) have been proven to have strongabilities in various domains and tasks. We study the problem of promptdesigning in the text-to-SQL task and attempt to improve the LLMs' reasoningability when generating SQL queries. 
Besides the trivial few-shot in-contextlearning setting, we design our chain-of-thought (CoT) prompt with a similarmethod to schema linking. We provide a method named ACT-SQL to automaticallygenerate auto-CoT exemplars and thus the whole process doesn't need manuallabeling. Our approach is cost-saving since we only use the LLMs' API call oncewhen generating one SQL query. Furthermore, we extend our in-context learningmethod to the multi-turn text-to-SQL task. The experiment results show that theLLMs' performance can benefit from our ACT-SQL approach. Our approach achievesSOTA performance on the Spider dev set among existing in-context learningapproaches.",,arXiv,['cs.cl'],, +996,cosmic data efficient instructiontuning for speech incontext learning,"['Jing Pan', 'Jian Wu', 'Yashesh Gaur', 'Sunit Sivasankaran', 'Zhuo Chen', 'Shujie Liu', 'Jinyu Li']",http://arxiv.org/pdf/2311.02248v1.pdf,2023-11-03,," We present a data and cost efficient way of incorporating the speech modalityinto a large language model (LLM). The resulting multi-modal LLM is aCOntextual Speech Model with Instruction-following/in-context-learningCapabilities - COSMIC. Speech comprehension test question-answer (SQA) pairsare generated using GPT-3.5 based on the speech transcriptions as a part of thesupervision for the instruction tuning. With fewer than 20M trainableparameters and as little as 450 hours of English speech data for SQAgeneration, COSMIC exhibits emergent instruction-following and in-contextlearning capabilities in speech-to-text tasks. The model is able to follow thegiven text instructions to generate text response even on the unseen EN$\to$Xspeech-to-text translation (S2TT) task with zero-shot setting. We evaluate themodel's in-context learning via various tasks such as EN$\to$X S2TT andfew-shot domain adaptation. And instruction-following capabilities areevaluated through a contextual biasing benchmark. Our results demonstrate theefficacy of the proposed low cost recipe for building a speech LLM and thatwith the new instruction-tuning data.",,arXiv,"['cs.cl', 'cs.ai', 'eess.as']",, +997,thinking about gpt3 incontext learning for biomedical ie think again,"['Bernal Jiménez Gutiérrez', 'Nikolas McNeal', 'Clay Washington', 'You Chen', 'Lang Li', 'Huan Sun', 'Yu Su']",http://arxiv.org/pdf/2203.08410v3.pdf,2022-03-16,," The strong few-shot in-context learning capability of large pre-trainedlanguage models (PLMs) such as GPT-3 is highly appealing for applicationdomains such as biomedicine, which feature high and diverse demands of languagetechnologies but also high data annotation costs. In this paper, we present thefirst systematic and comprehensive study to compare the few-shot performance ofGPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs ontwo highly representative biomedical information extraction tasks, named entityrecognition and relation extraction. We follow the true few-shot setting toavoid overestimating models' few-shot performance by model selection over alarge validation set. We also optimize GPT-3's performance with knowntechniques such as contextual calibration and dynamic in-context exampleretrieval. However, our results show that GPT-3 still significantlyunderperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3in-context learning also yields smaller gains in accuracy when more trainingdata becomes available. Our in-depth analyses further reveal issues of thein-context learning setting that may be detrimental to information extractiontasks in general. 
Given the high cost of experimenting with GPT-3, we hope ourstudy provides guidance for biomedical researchers and practitioners towardsmore promising directions such as fine-tuning small PLMs.",,arXiv,"['cs.cl', 'cs.ir']",, +998,exploring effective factors for improving visual incontext learning,"['Yanpeng Sun', 'Qiang Chen', 'Jian Wang', 'Jingdong Wang', 'Zechao Li']",http://arxiv.org/pdf/2304.04748v1.pdf,2023-04-10,," The In-Context Learning (ICL) is to understand a new task via a fewdemonstrations (aka. prompt) and predict new inputs without tuning the models.While it has been widely studied in NLP, it is still a relatively new area ofresearch in computer vision. To reveal the factors influencing the performanceof visual in-context learning, this paper shows that prompt selection andprompt fusion are two major factors that have a direct impact on the inferenceperformance of visual context learning. Prompt selection is the process ofidentifying the most appropriate prompt or example to help the model understandnew tasks. This is important because providing the model with relevant promptscan help it learn more effectively and efficiently. Prompt fusion involvescombining knowledge from different positions within the large-scale visualmodel. By doing this, the model can leverage the diverse knowledge stored indifferent parts of the model to improve its performance on new tasks. Basedthese findings, we propose a simple framework prompt-SelF for visual in-contextlearning. Specifically, we first use the pixel-level retrieval method to selecta suitable prompt, and then use different prompt fusion methods to activate allthe knowledge stored in the large-scale model, and finally ensemble theprediction results obtained from different prompt fusion methods to obtain thefinal prediction results. And we conduct extensive experiments on single-objectsegmentation and detection tasks to demonstrate the effectiveness ofprompt-SelF. Remarkably, the prompt-SelF has outperformed OSLSM basedmeta-learning in 1-shot segmentation for the first time. This indicated thegreat potential of visual in-context learning. The source code and models willbe available at \url{https://github.com/syp2ysy/prompt-SelF}.",,arXiv,['cs.cv'],, +999,dissecting chainofthought compositionality through incontext filtering and learning,"['Yingcong Li', 'Kartik Sreenivasan', 'Angeliki Giannou', 'Dimitris Papailiopoulos', 'Samet Oymak']",http://arxiv.org/pdf/2305.18869v2.pdf,2023-05-30,," Chain-of-thought (CoT) is a method that enables language models to handlecomplex reasoning tasks by decomposing them into simpler steps. Despite itssuccess, the underlying mechanics of CoT are not yet fully understood. In anattempt to shed light on this, our study investigates the impact of CoT on theability of transformers to in-context learn a simple to study, yet generalfamily of compositional functions: multi-layer perceptrons (MLPs). In thissetting, we find that the success of CoT can be attributed to breaking downin-context learning of a compositional function into two distinct phases:focusing on and filtering data related to each step of the composition andin-context learning the single-step composition function. Through bothexperimental and theoretical evidence, we demonstrate how CoT significantlyreduces the sample complexity of in-context learning (ICL) and facilitates thelearning of complex functions that non-CoT methods struggle with. 
Furthermore,we illustrate how transformers can transition from vanilla in-context learningto mastering a compositional function with CoT by simply incorporatingadditional layers that perform the necessary data-filtering for CoT via theattention mechanism. In addition to these test-time benefits, we show CoT helpsaccelerate pretraining by learning shortcuts to represent complex functions andfiltering plays an important role in this process. These findings collectivelyprovide insights into the mechanics of CoT, inviting further investigation ofits role in complex reasoning tasks.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, +1000,incontext learning through the bayesian prism,"['Kabir Ahuja', 'Madhur Panwar', 'Navin Goyal']",http://arxiv.org/pdf/2306.04891v1.pdf,2023-06-08,," In-context learning is one of the surprising and useful features of largelanguage models. How it works is an active area of research. Recently, stylizedmeta-learning-like setups have been devised that train these models on asequence of input-output pairs $(x, f(x))$ from a function class using thelanguage modeling loss and observe generalization to unseen functions from thesame class. One of the main discoveries in this line of research has been thatfor several problems such as linear regression, trained transformers learnalgorithms for learning functions in context. However, the inductive biases ofthese models resulting in this behavior are not clearly understood. A modelwith unlimited training data and compute is a Bayesian predictor: it learns thepretraining distribution. It has been shown that high-capacity transformersmimic the Bayesian predictor for linear regression. In this paper, we showempirical evidence of transformers exhibiting the behavior of this ideallearner across different linear and non-linear function classes. We also extendthe previous setups to work in the multitask setting and verify thattransformers can do in-context learning in this setup as well and the Bayesianperspective sheds light on this setting also. Finally, via the example oflearning Fourier series, we study the inductive bias for in-context learning.We find that in-context learning may or may not have simplicity bias dependingon the pretraining data distribution.",,arXiv,"['cs.lg', 'cs.cl']",, +1001,explore incontext learning for 3d point cloud understanding,"['Zhongbin Fang', 'Xiangtai Li', 'Xia Li', 'Joachim M. Buhmann', 'Chen Change Loy', 'Mengyuan Liu']",http://arxiv.org/pdf/2306.08659v2.pdf,2023-06-14,," With the rise of large-scale models trained on broad data, in-contextlearning has become a new learning paradigm that has demonstrated significantpotential in natural language processing and computer vision tasks. Meanwhile,in-context learning is still largely unexplored in the 3D point cloud domain.Although masked modeling has been successfully applied for in-context learningin 2D vision, directly extending it to 3D point clouds remains a formidablechallenge. In the case of point clouds, the tokens themselves are the pointcloud positions (coordinates) that are masked during inference. Moreover,position embedding in previous works may inadvertently introduce informationleakage. 
To address these challenges, we introduce a novel framework, namedPoint-In-Context, designed especially for in-context learning in 3D pointclouds, where both inputs and outputs are modeled as coordinates for each task.Additionally, we propose the Joint Sampling module, carefully designed to workin tandem with the general point sampling operator, effectively resolving theaforementioned technical issues. We conduct extensive experiments to validatethe versatility and adaptability of our proposed methods in handling a widerange of tasks.",,arXiv,['cs.cv'],, +1002,dqlore dual queries with low rank approximation reranking for incontext learning,"['Jing Xiong', 'Zixuan Li', 'Chuanyang Zheng', 'Zhijiang Guo', 'Yichun Yin', 'Enze Xie', 'Zhicheng Yang', 'Qingxing Cao', 'Haiming Wang', 'Xiongwei Han', 'Jing Tang', 'Chengming Li', 'Xiaodan Liang']",http://arxiv.org/pdf/2310.02954v4.pdf,2023-10-04,," Recent advances in natural language processing, primarily propelled by LargeLanguage Models (LLMs), have showcased their remarkable capabilities groundedin in-context learning. A promising avenue for guiding LLMs in intricatereasoning tasks involves the utilization of intermediate reasoning steps withinthe Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge liesin the effective selection of exemplars for facilitating in-context learning.In this study, we introduce a framework that leverages Dual Queries andLow-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplarsfor in-context learning. Dual Queries first query LLM to obtain LLM-generatedknowledge such as CoT, then query the retriever to obtain the final exemplarsvia both question and the knowledge. Moreover, for the second query, LoReemploys dimensionality reduction techniques to refine exemplar selection,ensuring close alignment with the input question's knowledge. Through extensiveexperiments, we demonstrate that DQ-LoRe significantly outperforms priorstate-of-the-art methods in the automatic selection of exemplars for GPT-4,enhancing performance from 92.5% to 94.2%. Our comprehensive analysis furtherreveals that DQ-LoRe consistently outperforms retrieval-based approaches interms of both performance and adaptability, especially in scenarioscharacterized by distribution shifts. DQ-LoRe pushes the boundaries ofin-context learning and opens up new avenues for addressing complex reasoningchallenges. We will release the code soon.",,arXiv,['cs.cl'],, +1003,compositional exemplars for incontext learning,"['Jiacheng Ye', 'Zhiyong Wu', 'Jiangtao Feng', 'Tao Yu', 'Lingpeng Kong']",http://arxiv.org/pdf/2302.05698v3.pdf,2023-02-11,," Large pretrained language models (LMs) have shown impressive In-ContextLearning (ICL) ability, where the model learns to do an unseen task via aprompt consisting of input-output examples as the demonstration, without anyparameter updates. The performance of ICL is highly dominated by the quality ofthe selected in-context examples. However, previous selection methods aremostly based on simple heuristics, leading to sub-optimal performance. In thiswork, we formulate in-context example selection as a subset selection problem.We propose CEIL (Compositional Exemplars for In-context Learning), which isinstantiated by Determinantal Point Processes (DPPs) to model the interactionbetween the given input and in-context examples, and optimized through acarefully-designed contrastive learning objective to obtain preference fromLMs. 
We validate CEIL on 12 classification and generation datasets from 7distinct NLP tasks, including sentiment analysis, paraphrase detection, naturallanguage inference, commonsense reasoning, open-domain question answering, codegeneration, and semantic parsing. Extensive experiments demonstrate not onlythe state-of-the-art performance but also the transferability andcompositionality of CEIL, shedding new light on effective and efficientin-context learning. Our code is released athttps://github.com/HKUNLP/icl-ceil.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1004,icld3ie incontext learning with diverse demonstrations updating for document information extraction,"['Jiabang He', 'Lei Wang', 'Yi Hu', 'Ning Liu', 'Hui Liu', 'Xing Xu', 'Heng Tao Shen']",http://arxiv.org/pdf/2303.05063v4.pdf,2023-03-09,," Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstratedremarkable results in various natural language processing (NLP) tasks within-context learning, which involves inference based on a few demonstrationexamples. Despite their successes in NLP tasks, no investigation has beenconducted to assess the ability of LLMs to perform document informationextraction (DIE) using in-context learning. Applying LLMs to DIE poses twochallenges: the modality and task gap. To this end, we propose a simple buteffective in-context learning framework called ICL-D3IE, which enables LLMs toperform DIE with different types of demonstration examples. Specifically, weextract the most difficult and distinct segments from hard training documentsas hard demonstrations for benefiting all test instances. We designdemonstrations describing relationships that enable LLMs to understandpositional relationships. We introduce formatting demonstrations for easyanswer extraction. Additionally, the framework improves diverse demonstrationsby updating them iteratively. Our experiments on three widely used benchmarkdatasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT toachieve superior performance when compared to previous pre-trained methodsfine-tuned with full training in both the in-distribution (ID) setting and inthe out-of-distribution (OOD) setting. Code is available athttps://github.com/MAEHCM/ICL-D3IE.",,arXiv,['cs.cl'],, +1005,learning to retrieve prompts for incontext learning,"['Ohad Rubin', 'Jonathan Herzig', 'Jonathan Berant']",http://arxiv.org/pdf/2112.08633v2.pdf,2021-12-16,," In-context learning is a recent paradigm in natural language understanding,where a large pre-trained language model (LM) observes a test instance and afew training examples as its input, and directly decodes the output without anyupdate to its parameters. However, performance has been shown to stronglydepend on the selected training examples (termed prompt). In this work, wepropose an efficient method for retrieving prompts for in-context learningusing annotated data and a LM. Given an input-output pair, we estimate theprobability of the output given the input and a candidate training example asthe prompt, and label training examples as positive or negative based on thisprobability. We then train an efficient dense retriever from this data, whichis used to retrieve training examples as prompts at test time. 
We evaluate ourapproach on three sequence-to-sequence tasks where language utterances aremapped to meaning representations, and find that it substantially outperformsprior work and multiple baselines across the board.",,arXiv,"['cs.cl', 'cs.lg']",, +1006,semanticoriented unlabeled priming for largescale language models,"['Yanchen Liu', 'Timo Schick', 'Hinrich Schütze']",http://arxiv.org/pdf/2202.06133v1.pdf,2022-02-12,," Due to the high costs associated with finetuning large language models,various recent works propose to adapt them to specific tasks without anyparameter updates through in-context learning. Unfortunately, for in-contextlearning there is currently no way to leverage unlabeled data, which is oftenmuch easier to obtain in large quantities than labeled examples. In this work,we therefore investigate ways to make use of unlabeled examples to improve thezero-shot performance of pretrained language models without any finetuning: Weintroduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifiesexamples by retrieving semantically similar unlabeled examples, assigninglabels to them in a zero-shot fashion, and then using them for in-contextlearning. We also propose bag-of-contexts priming, a new priming strategy thatis more suitable for our setting and enables the usage of more examples thanfit into the context window.",,arXiv,['cs.cl'],, +1007,diverse demonstrations improve incontext compositional generalization,"['Itay Levy', 'Ben Bogin', 'Jonathan Berant']",http://arxiv.org/pdf/2212.06800v3.pdf,2022-12-13,," In-context learning has shown great success in i.i.d semantic parsing splits,where the training and test sets are drawn from the same distribution. In thissetup, models are typically prompted with demonstrations that are similar tothe input utterance. However, in the setup of compositional generalization,where models are tested on outputs with structures that are absent from thetraining set, selecting similar demonstrations is insufficient, as often noexample will be similar enough to the input. In this work, we propose a methodto select diverse demonstrations that aims to collectively cover all of thestructures required in the output program, in order to encourage the model togeneralize to new structures from these demonstrations. We empirically showthat combining diverse demonstrations with in-context learning substantiallyimproves performance across three compositional generalization semantic parsingdatasets in the pure in-context learning setup and when combined withfinetuning.",,arXiv,['cs.cl'],, +1008,the impact of symbolic representations on incontext learning for fewshot reasoning,"['Hanlin Zhang', 'Yi-Fan Zhang', 'Li Erran Li', 'Eric Xing']",http://arxiv.org/pdf/2212.08686v1.pdf,2022-12-16,," Pre-trained language models (LMs) have shown remarkable reasoning performanceusing explanations (or ``chain-of-thought'' (CoT)) for in-context learning. Onthe other hand, these reasoning tasks are usually presumed to be moreapproachable for symbolic programming. To make progress towards understandingin-context learning, we curate synthetic datasets containing equivalent(natural, symbolic) data pairs, where symbolic examples contain first-orderlogic rules and predicates from knowledge bases (KBs). Then we revisitneuro-symbolic approaches and use Language Models as Logic Programmer (LMLP)that learns from demonstrations containing logic rules and correspondingexamples to iteratively reason over KBs, recovering Prolog's backward chainingalgorithm. 
Comprehensive experiments are included to systematically compareLMLP with CoT in deductive reasoning settings, showing that LMLP enjoys morethan 25% higher accuracy than CoT on length generalization benchmarks even withfewer parameters.",,arXiv,['cs.cl'],, +1009,selfadaptive incontext learning an information compression perspective for incontext example selection and ordering,"['Zhiyong Wu', 'Yaoxiang Wang', 'Jiacheng Ye', 'Lingpeng Kong']",http://arxiv.org/pdf/2212.10375v2.pdf,2022-12-20,," Despite the surprising few-shot performance of in-context learning (ICL), itis still a common practice to randomly sample examples to serve as context.This paper advocates a new principle for ICL: self-adaptive in-contextlearning. The self-adaption mechanism is introduced to help each sample find anin-context example permutation (i.e., selection and ordering) that can derivethe correct prediction, thus maximizing performance. To validate theeffectiveness of self-adaptive ICL, we propose a general select-then-rankframework and instantiate it with new selection and ranking algorithms. Uponextensive evaluation on eight different NLP datasets, our self-adaptive ICLmethod achieves a 40% relative improvement over the common practice setting.Further analysis reveals the enormous potential of self-adaptive ICL that itmight be able to close the gap between ICL and finetuning given more advancedalgorithms. Our code is released to facilitate future research in this area:https://github.com/Shark-NLP/self-adaptive-ICL",,arXiv,"['cs.cl', 'cs.ai']",, +1010,privacypreserving incontext learning for large language models,"['Tong Wu', 'Ashwinee Panda', 'Jiachen T. Wang', 'Prateek Mittal']",http://arxiv.org/pdf/2305.01639v2.pdf,2023-05-02,," In-context learning (ICL) is an important capability of Large Language Models(LLMs), enabling these models to dynamically adapt based on specific,in-context exemplars, thereby improving accuracy and relevance. However, LLM'sresponses may leak the sensitive private information contained in in-contextexemplars. To address this challenge, we propose Differentially PrivateIn-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. Thekey idea for DP-ICL paradigm is generating differentially private responsesthrough a noisy consensus among an ensemble of LLM's responses based ondisjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiateseveral techniques showing how to privatize ICL for text classification andlanguage generation. We evaluate DP-ICL on four text classification benchmarksand two language generation tasks, and our empirical results show that DP-ICLachieves a strong utility-privacy tradeoff.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cr']",, +1011,incontext learning as maintaining coherency a study of onthefly machine translation using large language models,"['Suzanna Sia', 'Kevin Duh']",http://arxiv.org/pdf/2305.03573v1.pdf,2023-05-05,," The phenomena of in-context learning has typically been thought of as""learning from examples"". In this work which focuses on Machine Translation, wepresent a perspective of in-context learning as the desired generation taskmaintaining coherency with its context, i.e., the prompt examples. We firstinvestigate randomly sampled prompts across 4 domains, and find thattranslation performance improves when shown in-domain prompts. Next, weinvestigate coherency for the in-domain setting, which uses prompt examplesfrom a moving window. 
We study this with respect to other factors that havepreviously been identified in the literature such as length, surface similarityand sentence embedding similarity. Our results across 3 models (GPTNeo2.7B,Bloom3B, XGLM2.9B), and three translation directions(\texttt{en}$\rightarrow$\{\texttt{pt, de, fr}\}) suggest that the long-termcoherency of the prompts and the test sentence is a good indicator ofdownstream translation performance. In doing so, we demonstrate the efficacy ofIn-context Machine Translation for on-the-fly adaptation.",,arXiv,"['cs.cl', 'cs.ai']",, +1012,small models are valuable plugins for large language models,"['Canwen Xu', 'Yichong Xu', 'Shuohang Wang', 'Yang Liu', 'Chenguang Zhu', 'Julian McAuley']",http://arxiv.org/pdf/2305.08848v1.pdf,2023-05-15,," Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but theirweights are often publicly unavailable and their immense sizes make the modelsdifficult to be tuned with common hardware. As a result, effectively tuningthese models with large-scale supervised data can be challenging. As analternative, In-Context Learning (ICL) can only use a small number ofsupervised examples due to context length limits. In this paper, we proposeSuper In-Context Learning (SuperICL) which allows black-box LLMs to work withlocally fine-tuned smaller models, resulting in superior performance onsupervised tasks. Our experiments demonstrate that SuperICL can improveperformance beyond state-of-the-art fine-tuned models while addressing theinstability problem of in-context learning. Furthermore, SuperICL can enhancethe capabilities of smaller models, such as multilinguality andinterpretability.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1013,gptfinre incontext learning for financial relation extraction using large language models,"['Pawan Kumar Rajpoot', 'Ankur Parikh']",http://arxiv.org/pdf/2306.17519v2.pdf,2023-06-30,," Relation extraction (RE) is a crucial task in natural language processing(NLP) that aims to identify and classify relationships between entitiesmentioned in text. In the financial domain, relation extraction plays a vitalrole in extracting valuable information from financial documents, such as newsarticles, earnings reports, and company filings. This paper describes oursolution to relation extraction on one such dataset REFinD. The dataset wasreleased along with shared task as a part of the Fourth Workshop on KnowledgeDiscovery from Unstructured Data in Financial Services, co-located with SIGIR2023. In this paper, we employed OpenAI models under the framework ofin-context learning (ICL). We utilized two retrieval strategies to find top Krelevant in-context learning demonstrations / examples from training data for agiven test example. The first retrieval mechanism, we employed, is alearning-free dense retriever and the other system is a learning-basedretriever. We were able to achieve 3rd rank overall. Our best F1-score is0.718.",,arXiv,['cs.cl'],, +1014,codestyle incontext learning for knowledgebased question answering,"['Zhijie Nie', 'Richong Zhang', 'Zhongyuan Wang', 'Xudong Liu']",http://arxiv.org/pdf/2309.04695v2.pdf,2023-09-09,," Current methods for Knowledge-Based Question Answering (KBQA) usually rely oncomplex training techniques and model frameworks, leading to many limitationsin practical applications. 
Recently, the emergence of In-Context Learning (ICL)capabilities in Large Language Models (LLMs) provides a simple andtraining-free semantic parsing paradigm for KBQA: Given a small number ofquestions and their labeled logical forms as demo examples, LLMs can understandthe task intent and generate the logic form for a new question. However,current powerful LLMs have little exposure to logic forms during pre-training,resulting in a high format error rate. To solve this problem, we propose acode-style in-context learning method for KBQA, which converts the generationprocess of unfamiliar logical form into the more familiar code generationprocess for LLMs. Experimental results on three mainstream datasets show thatour method dramatically mitigated the formatting error problem in generatinglogic forms while realizing a new SOTA on WebQSP, GrailQA, and GraphQ under thefew-shot setting. The code and supplementary files are released athttps://github.com/Arthurizijar/KB-Coder .",,arXiv,"['cs.cl', 'cs.ai']",, +1015,iclef incontext learning with expert feedback for explainable style transfer,"['Arkadiy Saakyan', 'Smaranda Muresan']",http://arxiv.org/pdf/2309.08583v1.pdf,2023-09-15,," While state-of-the-art language models excel at the style transfer task,current work does not address explainability of style transfer systems.Explanations could be generated using large language models such as GPT-3.5 andGPT-4, but the use of such complex systems is inefficient when smaller, widelydistributed, and transparent alternatives are available. We propose a frameworkto augment and improve a formality style transfer dataset with explanations viamodel distillation from ChatGPT. To further refine the generated explanations,we propose a novel way to incorporate scarce expert human feedback usingin-context learning (ICLEF: In-Context Learning from Expert Feedback) byprompting ChatGPT to act as a critic to its own outputs. We use the resultingdataset of 9,960 explainable formality style transfer instances (e-GYAFC) toshow that current openly distributed instruction-tuned models (and, in somesettings, ChatGPT) perform poorly on the task, and that fine-tuning on ourhigh-quality dataset leads to significant improvements as shown by automaticevaluation. In human evaluation, we show that models much smaller than ChatGPTfine-tuned on our data align better with expert preferences. Finally, wediscuss two potential applications of models fine-tuned on the explainablestyle transfer task: interpretable authorship verification and interpretableadversarial attacks on AI-generated text detectors.",,arXiv,['cs.cl'],, +1016,utilising a large language model to annotate subject metadata a case study in an australian national research data catalogue,"['Shiwei Zhang', 'Mingfang Wu', 'Xiuzhen Zhang']",http://arxiv.org/pdf/2310.11318v1.pdf,2023-10-17,," In support of open and reproducible research, there has been a rapidlyincreasing number of datasets made available for research. As the availabilityof datasets increases, it becomes more important to have quality metadata fordiscovering and reusing them. Yet, it is a common issue that datasets oftenlack quality metadata due to limited resources for data curation. Meanwhile,technologies such as artificial intelligence and large language models (LLMs)are progressing rapidly. Recently, systems based on these technologies, such asChatGPT, have demonstrated promising capabilities for certain data curationtasks. 
This paper proposes to leverage LLMs for cost-effective annotation ofsubject metadata through the LLM-based in-context learning. Our method employsGPT-3.5 with prompts designed for annotating subject metadata, demonstratingpromising performance in automatic metadata annotation. However, models basedon in-context learning cannot acquire discipline-specific rules, resulting inlower performance in several categories. This limitation arises from thelimited contextual information available for subject inference. To the best ofour knowledge, we are introducing, for the first time, an in-context learningmethod that harnesses large language models for automated subject metadataannotation.",,arXiv,"['cs.cl', 'cs.ai']",, +1017,hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks,"['Yifan Wang', 'Qingyan Guo', 'Xinzhe Ni', 'Chufan Shi', 'Lemao Liu', 'Haiyun Jiang', 'Yujiu Yang']",http://arxiv.org/pdf/2311.01949v1.pdf,2023-11-03,," In-context learning (ICL) ability has emerged with the increasing scale oflarge language models (LLMs), enabling them to learn input-label mappings fromdemonstrations and perform well on downstream tasks. However, under thestandard ICL setting, LLMs may sometimes neglect query-related information indemonstrations, leading to incorrect predictions. To address this limitation,we propose a new paradigm called Hint-enhanced In-Context Learning (HICL) toexplore the power of ICL in open-domain question answering, an important formin knowledge-intensive tasks. HICL leverages LLMs' reasoning ability to extractquery-related knowledge from demonstrations, then concatenates the knowledge toprompt LLMs in a more explicit way. Furthermore, we track the source of thisknowledge to identify specific examples, and introduce a Hint-related ExampleRetriever (HER) to select informative examples for enhanced demonstrations. Weevaluate HICL with HER on 3 open-domain QA benchmarks, and observe averageperformance gains of 2.89 EM score and 2.52 F1 score on gpt-3.5-turbo, 7.62 EMscore and 7.27 F1 score on LLaMA-2-Chat-7B compared with standard setting.",,arXiv,['cs.cl'],, +1018,rethinking the role of demonstrations what makes incontext learning work,"['Sewon Min', 'Xinxi Lyu', 'Ari Holtzman', 'Mikel Artetxe', 'Mike Lewis', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']",http://arxiv.org/pdf/2202.12837v2.pdf,2022-02-25,," Large language models (LMs) are able to in-context learn -- perform a newtask via inference alone by conditioning on a few input-label pairs(demonstrations) and making predictions for new inputs. However, there has beenlittle understanding of how the model learns and which aspects of thedemonstrations contribute to end task performance. In this paper, we show thatground truth demonstrations are in fact not required -- randomly replacinglabels in the demonstrations barely hurts performance on a range ofclassification and multi-choce tasks, consistently over 12 different modelsincluding GPT-3. Instead, we find that other aspects of the demonstrations arethe key drivers of end task performance, including the fact that they provide afew examples of (1) the label space, (2) the distribution of the input text,and (3) the overall format of the sequence. 
Together, our analysis provides anew way of understanding how and why in-context learning works, while openingup new questions about how much can be learned from large language modelsthrough inference alone.",,arXiv,"['cs.cl', 'cs.ai']",, +1019,fewshot anaphora resolution in scientific protocols via mixtures of incontext experts,"['Nghia T. Le', 'Fan Bai', 'Alan Ritter']",http://arxiv.org/pdf/2210.03690v2.pdf,2022-10-07,," Anaphora resolution is an important task for information extraction across arange of languages, text genres, and domains, motivating the need for methodsthat do not require large annotated datasets. In-context learning has emergedas a promising approach, yet there are a number of challenges in applyingin-context learning to resolve anaphora. For example, encoding a singlein-context demonstration that consists of: an anaphor, a paragraph-lengthcontext, and a list of corresponding antecedents, requires conditioning alanguage model on a long sequence of tokens, limiting the number ofdemonstrations per prompt. In this paper, we present MICE (Mixtures ofIn-Context Experts), which we demonstrate is effective for few-shot anaphoraresolution in scientific protocols (Tamari et al., 2021). Given only a handfulof training examples, MICE combines the predictions of hundreds of in-contextexperts, yielding a 30% increase in F1 score over a competitive promptretrieval baseline. Furthermore, we show MICE can be used to train compactstudent models without sacrificing performance. As far as we are aware, this isthe first work to present experimental results demonstrating the effectivenessof in-context learning on the task of few-shot anaphora resolution inscientific protocols.",,arXiv,"['cs.cl', 'cs.ai']",, +1020,adaptive machine translation with large language models,"['Yasmin Moslem', 'Rejwanul Haque', 'John D. Kelleher', 'Andy Way']",http://arxiv.org/pdf/2301.13294v3.pdf,2023-01-30,," Consistency is a key requirement of high-quality translation. It isespecially important to adhere to pre-approved terminology and adapt tocorrected translations in domain-specific projects. Machine translation (MT)has achieved significant progress in the area of domain adaptation. However,real-time adaptation remains challenging. Large-scale language models (LLMs)have recently shown interesting capabilities of in-context learning, where theylearn to replicate certain input-output text generation patterns, withoutfurther fine-tuning. By feeding an LLM at inference time with a prompt thatconsists of a list of translation pairs, it can then simulate the domain andstyle characteristics. This work aims to investigate how we can utilizein-context learning to improve real-time adaptive MT. Our extensive experimentsshow promising results at translation time. For example, LLMs can adapt to aset of in-domain sentence pairs and/or terminology while translating a newsentence. We observe that the translation quality with few-shot in-contextlearning can surpass that of strong encoder-decoder MT systems, especially forhigh-resource languages. Moreover, we investigate whether we can combine MTfrom strong encoder-decoder models with fuzzy matches, which can furtherimprove translation quality, especially for less supported languages. 
Weconduct our experiments across five diverse language pairs, namelyEnglish-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French(EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).",,arXiv,['cs.cl'],, +1021,scattershot interactive incontext example curation for text transformation,"['Tongshuang Wu', 'Hua Shen', 'Daniel S. Weld', 'Jeffrey Heer', 'Marco Tulio Ribeiro']",http://arxiv.org/pdf/2302.07346v1.pdf,2023-02-14,," The in-context learning capabilities of LLMs like GPT-3 allow annotators tocustomize an LLM to their specific tasks with a small number of examples.However, users tend to include only the most obvious patterns when craftingexamples, resulting in underspecified in-context functions that fall short onunseen cases. Further, it is hard to know when ""enough"" examples have beenincluded even for known patterns. In this work, we present ScatterShot, aninteractive system for building high-quality demonstration sets for in-contextlearning. ScatterShot iteratively slices unlabeled data into task-specificpatterns, samples informative inputs from underexplored or not-yet-saturatedslices in an active learning manner, and helps users label more efficientlywith the help of an LLM and the current example set. In simulation studies ontwo text perturbation scenarios, ScatterShot sampling improves the resultingfew-shot functions by 4-5 percentage points over random sampling, with lessvariance as more examples are added. In a user study, ScatterShot greatly helpsusers in covering different patterns in the input space and labeling in-contextexamples more efficiently, resulting in better in-context learning and lessuser effort.",,arXiv,"['cs.hc', 'cs.cl']",, +1022,resources and fewshot learners for incontext learning in slavic languages,"['Michal Štefánik', 'Marek Kadlčík', 'Piotr Gramacki', 'Petr Sojka']",http://arxiv.org/pdf/2304.01922v1.pdf,2023-04-04,," Despite the rapid recent progress in creating accurate and compact in-contextlearners, most recent work focuses on in-context learning (ICL) for tasks inEnglish. However, the ability to interact with users of languages outsideEnglish presents a great potential for broadening the applicability of languagetechnologies to non-English speakers. In this work, we collect the infrastructure necessary for training andevaluation of ICL in a selection of Slavic languages: Czech, Polish, andRussian. We link a diverse set of datasets and cast these into a unifiedinstructional format through a set of transformations and newly-craftedtemplates written purely in target languages. Using the newly-curated dataset,we evaluate a set of the most recent in-context learners and compare theirresults to the supervised baselines. Finally, we train, evaluate and publish aset of in-context learning models that we train on the collected resources andcompare their performance to previous work. We find that ICL models tuned in English are also able to learn some tasksfrom non-English contexts, but multilingual instruction fine-tuningconsistently improves the ICL ability. 
We also find that the massive multitasktraining can be outperformed by single-task training in the target language,uncovering the potential for specializing in-context learners to thelanguage(s) of their application.",,arXiv,['cs.cl'],, +1023,unified demonstration retriever for incontext learning,"['Xiaonan Li', 'Kai Lv', 'Hang Yan', 'Tianyang Lin', 'Wei Zhu', 'Yuan Ni', 'Guotong Xie', 'Xiaoling Wang', 'Xipeng Qiu']",http://arxiv.org/pdf/2305.04320v2.pdf,2023-05-07,," In-context learning is a new learning paradigm where a language modelconditions on a few input-output pairs (demonstrations) and a test input, anddirectly outputs the prediction. It has been shown highly dependent on theprovided demonstrations and thus promotes the research of demonstrationretrieval: given a test input, relevant examples are retrieved from thetraining set to serve as informative demonstrations for in-context learning.While previous works focus on training task-specific retrievers for severaltasks separately, these methods are often hard to transfer and scale on varioustasks, and separately trained retrievers incur a lot of parameter storage anddeployment cost. In this paper, we propose Unified Demonstration Retriever(\textbf{UDR}), a single model to retrieve demonstrations for a wide range oftasks. To train UDR, we cast various tasks' training signals into a unifiedlist-wise ranking formulation by language model's feedback. Then we propose amulti-task list-wise ranking training framework, with an iterative miningstrategy to find high-quality candidates, which can help UDR fully incorporatevarious tasks' signals. Experiments on 30+ tasks across 13 task families andmultiple data domains show that UDR significantly outperforms baselines.Further analyses show the effectiveness of each proposed component and UDR'sstrong ability in various scenarios including different LMs (1.3B - 175B),unseen datasets, varying demonstration quantities, etc.",,arXiv,['cs.cl'],, +1024,efficient prompting via dynamic incontext learning,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2305.11170v1.pdf,2023-05-18,," The primary way of building AI applications is shifting from trainingspecialist models to prompting generalist models. A common practice forprompting generalist models, often referred to as in-context learning, is toappend a few examples (demonstrations) to the prompt to help the model betterunderstand the task. While effective, in-context learning can be inefficientbecause it makes the input prompt much longer, consuming valuable space in thecontext window and leading to larger computational costs. In this paper, wepropose DynaICL, a recipe for efficient prompting with black-box generalistmodels that dynamically allocate in-context examples according to the inputcomplexity and the computational budget. To achieve this, we train a metacontroller that predicts the number of in-context examples suitable for thegeneralist model to make a good prediction based on the performance-efficiencytrade-off for a specific input. We then dynamically allocate the number ofdemonstrations for an input according to predictions from the meta controllerand the given computation budget. Experimental results show that dynamicexample allocation helps achieve a better performance-efficiency trade-off intwo practical settings where computational resources or the requiredperformance is constrained. 
Specifically, DynaICL saves up to 46% token budgetcompared to the common practice that allocates the same number of in-contextexamples to each input. We also find that a meta controller trained on acertain backbone model and tasks can successfully generalize to unseen modelsand tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1025,post hoc explanations of language models can improve language models,"['Satyapriya Krishna', 'Jiaqi Ma', 'Dylan Slack', 'Asma Ghandeharioun', 'Sameer Singh', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2305.11426v3.pdf,2023-05-19,," Large Language Models (LLMs) have demonstrated remarkable capabilities inperforming complex tasks. Moreover, recent research has shown thatincorporating human-annotated rationales (e.g., Chain-of-Thought prompting)during in-context learning can significantly enhance the performance of thesemodels, particularly on tasks that require reasoning capabilities. However,incorporating such rationales poses challenges in terms of scalability as thisrequires a high degree of human involvement. In this work, we present a novelframework, Amplifying Model Performance by Leveraging In-Context Learning withPost Hoc Explanations (AMPLIFY), which addresses the aforementioned challengesby automating the process of rationale generation. To this end, we leveragepost hoc explanation methods which output attribution scores (explanations)capturing the influence of each of the input features on model predictions.More specifically, we construct automated natural language rationales thatembed insights from post hoc explanations to provide corrective signals toLLMs. Extensive experimentation with real-world datasets demonstrates that ourframework, AMPLIFY, leads to prediction accuracy improvements of about 10-25%over a wide range of tasks, including those where prior approaches which relyon human-annotated rationales such as Chain-of-Thought prompting fall short.Our work makes one of the first attempts at highlighting the potential of posthoc explanations as valuable tools for enhancing the effectiveness of LLMs.Furthermore, we conduct additional empirical analyses and ablation studies todemonstrate the impact of each of the components of AMPLIFY, which, in turn,leads to critical insights for refining in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, +1026,reticl sequential retrieval of incontext examples with reinforcement learning,"['Alexander Scarlatos', 'Andrew Lan']",http://arxiv.org/pdf/2305.14502v1.pdf,2023-05-23,," Many recent developments in large language models focus on prompting them toperform specific tasks. One effective prompting method is in-context learning,where the model performs a (possibly new) generation/prediction task given one(or more) examples. Past work has shown that the choice of examples can make alarge impact on task performance. However, finding good examples is notstraightforward since the definition of a representative group of examples canvary greatly depending on the task. While there are many existing methods forselecting in-context examples, they generally score examples independently,ignoring the dependency between them and the order in which they are providedto the large language model. In this work, we propose Retrieval for In-ContextLearning (RetICL), a learnable method for modeling and optimally selectingexamples sequentially for in-context learning. 
We frame the problem ofsequential example selection as a Markov decision process, design an exampleretriever model using an LSTM, and train it using proximal policy optimization(PPO). We validate RetICL on math problem solving datasets and show that itoutperforms both heuristic and learnable baselines, and achievesstate-of-the-art accuracy on the TabMWP dataset. We also use case studies toshow that RetICL implicitly learns representations of math problem solvingstrategies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1027,metricbased incontext learning a case study in text simplification,"['Subha Vadlamannati', 'Gözde Gül Şahin']",http://arxiv.org/pdf/2307.14632v1.pdf,2023-07-27,," In-context learning (ICL) for large language models has proven to be apowerful approach for many natural language processing tasks. However,determining the best method to select examples for ICL is nontrivial as theresults can vary greatly depending on the quality, quantity, and order ofexamples used. In this paper, we conduct a case study on text simplification(TS) to investigate how to select the best and most robust examples for ICL. Wepropose Metric-Based in-context Learning (MBL) method that utilizes commonlyused TS metrics such as SARI, compression ratio, and BERT-Precision forselection. Through an extensive set of experiments with various-sized GPTmodels on standard TS benchmarks such as TurkCorpus and ASSET, we show thatexamples selected by the top SARI scores perform the best on larger models suchas GPT-175B, while the compression ratio generally performs better on smallermodels such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL isgenerally robust to example orderings and out-of-domain test sets, andoutperforms strong baselines and state-of-the-art finetuned language models.Finally, we show that the behaviour of large GPT models can be implicitlycontrolled by the chosen metric. Our research provides a new framework forselecting examples in ICL, and demonstrates its effectiveness in textsimplification tasks, breaking new ground for more accurate and efficient NLGsystems.",,arXiv,"['cs.cl', 'cs.ai']",, +1028,hicl hashtagdriven incontext learning for social media natural language understanding,"['Hanzhuo Tan', 'Chunpu Xu', 'Jing Li', 'Yuqun Zhang', 'Zeyang Fang', 'Zeyu Chen', 'Baohua Lai']",http://arxiv.org/pdf/2308.09985v1.pdf,2023-08-19,," Natural language understanding (NLU) is integral to various social mediaapplications. However, existing NLU models rely heavily on context for semanticlearning, resulting in compromised performance when faced with short and noisysocial media content. To address this issue, we leverage in-context learning(ICL), wherein language models learn to make inferences by conditioning on ahandful of demonstrations to enrich the context and propose a novelhashtag-driven in-context learning (HICL) framework. Concretely, we pre-train amodel #Encoder, which employs #hashtags (user-annotated topic labels) to driveBERT-based pre-training through contrastive learning. Our objective here is toenable #Encoder to gain the ability to incorporate topic-related semanticinformation, which allows it to retrieve topic-related posts to enrich contextsand enhance social media NLU with noisy contexts. To further integrate theretrieved context with the source text, we employ a gradient-based method toidentify trigger terms useful in fusing information from both sources. 
Forempirical studies, we collected 45M tweets to set up an in-context NLUbenchmark, and the experimental results on seven downstream tasks show thatHICL substantially advances the previous state-of-the-art results. Furthermore,we conducted extensive analyzes and found that: (1) combining source input witha top-retrieved post from #Encoder is more effective than using semanticallysimilar posts; (2) trigger words can largely benefit in merging context fromthe source and retrieved posts.",,arXiv,['cs.cl'],, +1029,incontext convergence of transformers,"['Yu Huang', 'Yuan Cheng', 'Yingbin Liang']",http://arxiv.org/pdf/2310.05249v1.pdf,2023-10-08,," Transformers have recently revolutionized many domains in modern machinelearning and one salient discovery is their remarkable in-context learningcapability, where models can solve an unseen task by utilizing task-specificprompts without further parameters fine-tuning. This also inspired recenttheoretical studies aiming to understand the in-context learning mechanism oftransformers, which however focused only on linear transformers. In this work,we take the first step toward studying the learning dynamics of a one-layertransformer with softmax attention trained via gradient descent in order toin-context learn linear function classes. We consider a structured data model,where each token is randomly sampled from a set of feature vectors in eitherbalanced or imbalanced fashion. For data with balanced features, we establishthe finite-time convergence guarantee with near-zero prediction error bynavigating our analysis over two phases of the training dynamics of theattention map. More notably, for data with imbalanced features, we show thatthe learning dynamics take a stage-wise convergence process, where thetransformer first converges to a near-zero prediction error for the querytokens of dominant features, and then converges later to a near-zero predictionerror for the query tokens of under-represented features, respectively via oneand four training phases. Our proof features new techniques for analyzing thecompeting strengths of two types of attention weights, the change of whichdetermines different training phases.",,arXiv,"['cs.lg', 'cs.ai', 'math.oc', 'stat.ml']",, +1030,large language modelaware incontext learning for code generation,"['Jia Li', 'Ge Li', 'Chongyang Tao', 'Jia Li', 'Huangzhao Zhang', 'Fang Liu', 'Zhi Jin']",http://arxiv.org/pdf/2310.09748v1.pdf,2023-10-15,," Large language models (LLMs) have shown impressive in-context learning (ICL)ability in code generation. LLMs take a prompt consisting of requirement-codeexamples and a new requirement as input, and output new programs. Existingstudies have found that ICL is highly dominated by the examples and thus arisesresearch on example selection. However, existing approaches randomly selectexamples or only consider the textual similarity of requirements to retrieve,leading to sub-optimal performance. In this paper, we propose a novellearning-based selection approach named LAIL (LLM-Aware In-context Learning)for code generation. Given a candidate example, we exploit LLMs themselves toestimate it by considering the generation probabilities of ground-truthprograms given a requirement and the example. We then label candidate examplesas positive or negative through the probability feedback. Based on the labeleddata, we import a contrastive learning objective to train an effectiveretriever that acquires the preference of LLMs in code generation. 
We apply LAIL to three LLMs and evaluate it on three representative datasets (e.g., MBJP, MBPP, and MBCPP). LAIL outperforms the state-of-the-art baselines by 11.58%, 6.89%, and 5.07% on CodeGen, and 4.38%, 2.85%, and 2.74% on GPT-3.5 in terms of Pass@1, respectively.",,arXiv,"['cs.se', 'cs.cl']",, +1031,on the relation between sensitivity and accuracy in incontext learning,"['Yanda Chen', 'Chen Zhao', 'Zhou Yu', 'Kathleen McKeown', 'He He']",http://arxiv.org/pdf/2209.07661v2.pdf,2022-09-16,," In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios. We study the sensitivity of ICL with respect to multiple perturbation types. First, we find that label bias obscures the true sensitivity, and therefore prior work may have significantly underestimated ICL sensitivity. Second, we observe a strong negative correlation between ICL sensitivity and accuracy: predictions sensitive to perturbations are less likely to be correct. Motivated by these findings, we propose \textsc{SenSel}, a few-shot selective prediction method that abstains from sensitive predictions. Experiments on ten classification datasets show that \textsc{SenSel} consistently outperforms two commonly used confidence-based and entropy-based baselines on abstention decisions.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1032,winodict probing language models for incontext word acquisition,"['Julian Martin Eisenschlos', 'Jeremy R. Cole', 'Fangyu Liu', 'William W. Cohen']",http://arxiv.org/pdf/2209.12153v1.pdf,2022-09-25,," We introduce a new in-context learning paradigm to measure Large Language Models' (LLMs) ability to learn novel words during inference. In particular, we rewrite Winograd-style co-reference resolution problems by replacing the key concept word with a synthetic but plausible word that the model must understand to complete the task. Solving this task requires the model to make use of the dictionary definition of the new word given in the prompt. This benchmark addresses word acquisition, one important aspect of the diachronic degradation known to afflict LLMs. As LLMs are frozen in time at the moment they are trained, they are normally unable to reflect the way language changes over time. We show that the accuracy of LLMs compared to the original Winograd tasks decreases radically in our benchmark, thus identifying a limitation of current models and providing a benchmark to measure future improvements in LLMs ability to do in-context learning.",,arXiv,"['cs.cl', 'cs.ai']",, +1033,data curation alone can stabilize incontext learning,"['Ting-Yun Chang', 'Robin Jia']",http://arxiv.org/pdf/2212.10378v2.pdf,2022-12-20,," In-context learning (ICL) enables large language models (LLMs) to perform new tasks by prompting them with a sequence of training examples. However, it is known that ICL is very sensitive to the choice of training examples: randomly sampling examples from a training set leads to high variance in performance. In this paper, we show that carefully curating a subset of training data greatly stabilizes ICL performance without any other changes to the ICL algorithm (e.g., prompt retrieval or calibration). We introduce two methods to choose training subsets -- both score training examples individually, then select the highest-scoring ones. CondAcc scores a training example by its average dev-set ICL accuracy when combined with random training examples, while Datamodels learns linear regressors that estimate how the presence of each training example influences LLM outputs.
Across five tasks and two LLMs, sampling from stable subsets selected by CondAcc and Datamodels improves average accuracy over sampling from the entire training set by 7.7% and 6.3%, respectively. Surprisingly, the stable subset examples are not especially diverse in content or low in perplexity, in contrast with other work suggesting that diversity and perplexity are important when prompting LLMs.",,arXiv,['cs.cl'],, +1034,a survey on incontext learning,"['Qingxiu Dong', 'Lei Li', 'Damai Dai', 'Ce Zheng', 'Zhiyong Wu', 'Baobao Chang', 'Xu Sun', 'Jingjing Xu', 'Lei Li', 'Zhifang Sui']",http://arxiv.org/pdf/2301.00234v3.pdf,2022-12-31,," With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few examples. It has been a new trend to explore ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques, including training strategies, demonstration designing strategies, as well as related analysis. Finally, we discuss the challenges of ICL and provide potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and improving ICL.",,arXiv,"['cs.cl', 'cs.ai']",, +1035,towards fewshot identification of morality frames using incontext learning,"['Shamik Roy', 'Nishanth Sridhar Nakshatri', 'Dan Goldwasser']",http://arxiv.org/pdf/2302.02029v1.pdf,2023-02-03,," Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has been recently applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et al., 2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text which is expensive. In this paper, we propose prompting-based approaches using pretrained Large Language Models for identification of morality frames, relying only on few-shot exemplars. We compare our models' performance with few-shot RoBERTa and found promising results.",,arXiv,['cs.cl'],, +1036,openicl an opensource framework for incontext learning,"['Zhenyu Wu', 'YaoXiang Wang', 'Jiacheng Ye', 'Jiangtao Feng', 'Jingjing Xu', 'Yu Qiao', 'Zhiyong Wu']",http://arxiv.org/pdf/2303.02913v1.pdf,2023-03-06,," In recent years, In-context Learning (ICL) has gained increasing attention and emerged as the new paradigm for large language model (LLM) evaluation. Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained models to unseen tasks without any parameter updates. However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks.
A unified and flexible framework for ICL is urgently needed to ease the implementation of the aforementioned components. To facilitate ICL research, we introduce OpenICL, an open-source toolkit for ICL and LLM evaluation. OpenICL is research-friendly with a highly flexible architecture that users can easily combine different components to suit their needs. It also provides various state-of-the-art retrieval and inference methods to streamline the process of adapting ICL to cutting-edge research. The effectiveness of OpenICL has been validated on a wide range of NLP tasks, including classification, QA, machine translation, and semantic parsing. As a side-product, we found OpenICL to be an efficient yet robust tool for LLMs evaluation. OpenICL is released at https://github.com/Shark-NLP/OpenICL",,arXiv,['cs.cl'],, +1037,the scope of incontext learning for the extraction of medical temporal constraints,"['Parker Seegmiller', 'Joseph Gatto', 'Madhusudan Basak', 'Diane Cook', 'Hassan Ghasemzadeh', 'John Stankovic', 'Sarah Preum']",http://arxiv.org/pdf/2303.09366v2.pdf,2023-03-16,," Medications often impose temporal constraints on everyday patient activity. Violations of such medical temporal constraints (MTCs) lead to a lack of treatment adherence, in addition to poor health outcomes and increased healthcare expenses. These MTCs are found in drug usage guidelines (DUGs) in both patient education materials and clinical texts. Computationally representing MTCs in DUGs will advance patient-centric healthcare applications by helping to define safe patient activity patterns. We define a novel taxonomy of MTCs found in DUGs and develop a novel context-free grammar (CFG) based model to computationally represent MTCs from unstructured DUGs. Additionally, we release three new datasets with a combined total of N = 836 DUGs labeled with normalized MTCs. We develop an in-context learning (ICL) solution for automatically extracting and normalizing MTCs found in DUGs, achieving an average F1 score of 0.62 across all datasets. Finally, we rigorously investigate ICL model performance against a baseline model, across datasets and MTC types, and through in-depth error analysis.",,arXiv,"['cs.cl', 'cs.lg']",, +1038,gptre incontext learning for relation extraction using large language models,"['Zhen Wan', 'Fei Cheng', 'Zhuoyuan Mao', 'Qianying Liu', 'Haiyue Song', 'Jiwei Li', 'Sadao Kurohashi']",http://arxiv.org/pdf/2305.02105v3.pdf,2023-05-03,," In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE). This is due to the two major shortcomings of LLMs in RE: (1) low relevance regarding entity and relation in retrieved demonstrations for in-context learning; and (2) the strong inclination to wrongly classify NULL examples into other pre-defined labels. In this paper, we propose GPT-RE to bridge the gap between LLMs and fully-supervised baselines. GPT-RE successfully addresses the aforementioned issues by (1) incorporating task-specific entity representations in demonstration retrieval; and (2) enriching the demonstrations with gold label-induced reasoning logic. We evaluate GPT-RE on four widely-used RE datasets, and observe that GPT-RE achieves improvements over not only existing GPT-3 baselines, but also fully-supervised baselines.
Specifically, GPT-RE achieves SOTA performances on the Semeval and SciERC datasets, and competitive performances on the TACRED and ACE05 datasets.",,arXiv,['cs.cl'],, +1039,gersteinlab at mediqachat 2023 clinical note summarization from doctorpatient conversations through finetuning and incontext learning,"['Xiangru Tang', 'Andrew Tran', 'Jeffrey Tan', 'Mark Gerstein']",http://arxiv.org/pdf/2305.05001v1.pdf,2023-05-08,," This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared task, encompassing both subtask A and subtask B. We approach the task as a dialogue summarization problem and implement two distinct pipelines: (a) a fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b) few-shot in-context learning (ICL) using a large language model, GPT-4. Both methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1 (deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421, respectively. Additionally, we predict the associated section headers using RoBERTa and SciBERT based classification models. Our team ranked fourth among all teams, while each team is allowed to submit three runs as part of their submission. We also utilize expert annotations to demonstrate that the notes generated through the ICL GPT-4 are better than all other baselines. The code for our submission is available.",,arXiv,['cs.cl'],, +1040,can we edit factual knowledge by incontext learning,"['Ce Zheng', 'Lei Li', 'Qingxiu Dong', 'Yuxuan Fan', 'Zhiyong Wu', 'Jingjing Xu', 'Baobao Chang']",http://arxiv.org/pdf/2305.12740v1.pdf,2023-05-22,," Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or out-dated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing scales of LLMs, these gradient-based approaches bring large computation costs. The trend of model-as-a-service also makes it impossible to modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that in-context knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradient-based methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of parameters like OPT-175B, which shows the scalability of our method. The code is available at https://github.com/Zce1112zslx/IKE.",,arXiv,['cs.cl'],, +1041,coveragebased example selection for incontext learning,"['Shivanshu Gupta', 'Matt Gardner', 'Sameer Singh']",http://arxiv.org/pdf/2305.14907v3.pdf,2023-05-24,," In-context learning (ICL), the ability of large language models to perform novel tasks by conditioning on a prompt with a few task examples, requires these examples to be informative about the test instance. The standard approach of independently ranking and selecting the most similar examples selects redundant examples while omitting important information. In this work, we show that BERTScore-Recall (BSR) selects better examples that demonstrate more of the salient aspects, e.g.
reasoning patterns, of the test input. We further extend BSR and many standard metrics to easily optimizable set-level metrics, giving still better coverage of those salient aspects. On 15 datasets spanning 6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric for in-context example selection across the board, and (2) for compositional tasks, set selection using Set-BSR outperforms independent ranking by up to 17 points on average and, despite being training-free, surpasses methods that leverage task or LLM-specific training.",,arXiv,['cs.cl'],, +1042,leveraging large language models for scalable vector graphicsdriven image understanding,"['Mu Cai', 'Zeyi Huang', 'Yuheng Li', 'Haohan Wang', 'Yong Jae Lee']",http://arxiv.org/pdf/2306.06094v1.pdf,2023-06-09,," Recently, large language models (LLMs) have made significant advancements in natural language understanding and generation. However, their potential in computer vision remains largely unexplored. In this paper, we introduce a new, exploratory approach that enables LLMs to process images using the Scalable Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions of SVG representations instead of raster images, we aim to bridge the gap between the visual and textual modalities, allowing LLMs to directly understand and manipulate images without the need for parameterized visual components. Our method facilitates simple image classification, generation, and in-context learning using only LLM capabilities. We demonstrate the promise of our approach across discriminative and generative tasks, highlighting its (i) robustness against distribution shift, (ii) substantial improvements achieved by tapping into the in-context learning abilities of LLMs, and (iii) image understanding and generation capabilities with human guidance. Our code, data, and models can be found here https://github.com/mu-cai/svg-llm.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, +1043,exploring the incontext learning ability of large language model for biomedical concept linking,"['Qinyong Wang', 'Zhenxiang Gao', 'Rong Xu']",http://arxiv.org/pdf/2307.01137v1.pdf,2023-07-03,," The biomedical field relies heavily on concept linking in various areas such as literature mining, graph alignment, information retrieval, question-answering, data, and knowledge integration. Although large language models (LLMs) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. This research investigates a method that exploits the in-context learning (ICL) capabilities of large models for biomedical concept linking. The proposed approach adopts a two-stage retrieve-and-rank framework. Initially, biomedical concepts are embedded using language models, and then embedding similarity is utilized to retrieve the top candidates. These candidates' contextual information is subsequently incorporated into the prompt and processed by a large language model to re-rank the concepts. This approach achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization, exhibiting a competitive performance relative to supervised learning methods. Further, it showed a significant improvement, with an over 20-point absolute increase in F1 score on an oncology matching dataset. Extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain were discussed.
",,arXiv,"['cs.cl', 'cs.ai']",, +1044,learning to retrieve incontext examples for large language models,"['Liang Wang', 'Nan Yang', 'Furu Wei']",http://arxiv.org/pdf/2307.07164v2.pdf,2023-07-14,," Large language models (LLMs) have demonstrated their ability to learn in-context, allowing them to perform various tasks based on a few input-output examples. However, the effectiveness of in-context learning is heavily reliant on the quality of the selected examples. In this paper, we propose a novel framework to iteratively train dense retrievers that can identify high-quality in-context examples for LLMs. Our framework initially trains a reward model based on LLM feedback to evaluate the quality of candidate examples, followed by knowledge distillation to train a bi-encoder based dense retriever. Our experiments on a suite of $30$ tasks demonstrate that our framework significantly enhances in-context learning performance. Furthermore, we show the generalization ability of our framework to unseen tasks during training. An in-depth analysis reveals that our model improves performance by retrieving examples with similar patterns, and the gains are consistent across LLMs of varying sizes. The code and data are available at https://github.com/microsoft/LMOps/tree/main/llm_retriever .",,arXiv,"['cs.cl', 'cs.ir']",, +1045,incontext learning learns label relationships but is not conventional learning,"['Jannik Kossen', 'Yarin Gal', 'Tom Rainforth']",http://arxiv.org/pdf/2307.12375v3.pdf,2023-07-23,," The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input--label relationship in the context. However, there is currently no consensus about how this in-context learning (ICL) ability of LLMs works. For example, while Xie et al. (2021) liken ICL to a general-purpose learning algorithm, Min et al. (2022) argue ICL does not even learn label relationships from in-context examples. In this paper, we provide novel insights into how ICL leverages label information, revealing both capabilities and limitations. To ensure we obtain a comprehensive picture of ICL behavior, we study probabilistic aspects of ICL predictions and thoroughly examine the dynamics of ICL as more examples are provided. Our experiments show that ICL predictions almost always depend on in-context labels, and that ICL can learn truly novel tasks in-context. However, we also find that ICL struggles to fully overcome prediction preferences acquired from pre-training data, and, further, that ICL does not consider all in-context information equally.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1046,causallm is not optimal for incontext learning,"['Nan Ding', 'Tomer Levinboim', 'Jialin Wu', 'Sebastian Goodman', 'Radu Soricut']",http://arxiv.org/pdf/2308.06912v2.pdf,2023-08-14,," Recent empirical evidence indicates that transformer based in-context learning performs better when using a prefix language model (prefixLM), in which in-context samples can all attend to each other, compared to causal language models (causalLM), which use auto-regressive attention that prohibits in-context samples to attend to future samples. While this result is intuitive, it is not understood from a theoretical perspective. In this paper we take a theoretical approach and analyze the convergence behavior of prefixLM and causalLM under a certain parameter construction.
Our analysis shows that both LM types converge to their stationary points at a linear rate, but that while prefixLM converges to the optimal solution of linear regression, causalLM convergence dynamics follows that of an online gradient descent algorithm, which is not guaranteed to be optimal even as the number of samples grows infinitely. We supplement our theoretical claims with empirical experiments over synthetic and real tasks and using various types of transformers. Our experiments verify that causalLM consistently underperforms prefixLM in all settings.",,arXiv,"['cs.lg', 'cs.cl']",, +1047,exploring demonstration ensembling for incontext learning,"['Muhammad Khalifa', 'Lajanugen Logeswaran', 'Moontae Lee', 'Honglak Lee', 'Lu Wang']",http://arxiv.org/pdf/2308.08780v2.pdf,2023-08-17,," In-context learning (ICL) operates by showing language models (LMs) examples of input-output pairs for a given task, i.e., demonstrations. The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input. This approach suffers from some issues. First, concatenation offers almost no control over the contribution of each demo to the model prediction. This can be sub-optimal when some demonstrations are irrelevant to the test example. Second, due to the input length limit of some transformer models, it might be infeasible to fit many examples into the context, especially when dealing with long-input tasks. In this work, we explore Demonstration Ensembling (DENSE) as an alternative to simple concatenation. DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations and then combines the output probabilities resulting from each subset to produce the final prediction. We study different ensembling methods using GPT-j and experiment on 12 language tasks. Our experiments show weighted max ensembling to outperform vanilla concatenation by as large as 2.4 average points. Code available at https://github.com/mukhal/icl-ensembling.",,arXiv,"['cs.cl', 'cs.ai']",, +1048,context is environment,"['Sharut Gupta', 'Stefanie Jegelka', 'David Lopez-Paz', 'Kartik Ahuja']",http://arxiv.org/pdf/2309.09888v2.pdf,2023-09-18,," Two lines of work are taking the central stage in AI research. On the one hand, the community is making increasing efforts to build models that discard spurious correlations and generalize better in novel test environments. Unfortunately, the bitter lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to eclectic contextual circumstances that users enforce by means of prompting. In this paper, we argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context -- unlabeled examples as they arrive -- allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom-in on the test environment risk minimizer, leading to significant out-of-distribution performance improvements. From all of this, two messages are worth taking home. Researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning.
Researchers in LLMs should consider context as environment, to better structure data towards generalization.",,arXiv,"['cs.lg', 'cs.ai', 'stat.ml']",, +1049,"prompt, condition, and generate classification of unsupported claims with incontext learning","['Peter Ebert Christensen', 'Srishti Yadav', 'Serge Belongie']",http://arxiv.org/pdf/2309.10359v1.pdf,2023-09-19,," Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g. fact-checking.",,arXiv,['cs.cl'],, +1050,incontext learning for text classification with many labels,"['Aristides Milios', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2309.10954v2.pdf,2023-09-19,," In-context learning (ICL) using large language models for tasks with many labels is challenging due to the limited context window, which makes it difficult to fit a sufficient number of examples in the prompt. In this paper, we use a pre-trained dense retrieval model to bypass this limitation, giving the model only a partial view of the full label space for each inference call. Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the art performance in few-shot settings for three common intent classification datasets, with no finetuning. We also surpass fine-tuned performance on fine-grained sentiment classification in certain cases. We analyze the performance across number of in-context examples and different model scales, showing that larger models are necessary to effectively and consistently make use of larger context lengths for ICL. By running several ablations, we analyze the model's use of: a) the similarity of the in-context examples to the current input, b) the semantic content of the class names, and c) the correct correspondence between examples and labels. We demonstrate that all three are needed to varying degrees depending on the domain, contrary to certain recent works.",,arXiv,"['cs.cl', 'cs.lg']",, +1051,privacypreserving incontext learning with differentially private fewshot generation,"['Xinyu Tang', 'Richard Shin', 'Huseyin A. Inan', 'Andre Manoel', 'Fatemehsadat Mireshghallah', 'Zinan Lin', 'Sivakanth Gopi', 'Janardhan Kulkarni', 'Robert Sim']",http://arxiv.org/pdf/2309.11765v1.pdf,2023-09-21,," We study the problem of in-context learning (ICL) with large language models (LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak or regurgitate the private examples demonstrated in the prompt. We propose a novel algorithm that generates synthetic few-shot demonstrations from the private dataset with formal differential privacy (DP) guarantees, and show empirically that it can achieve effective ICL.
We conduct extensive experiments on standard benchmarks and compare our algorithm with non-private ICL and zero-shot solutions. Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels. These results open up new possibilities for ICL with privacy protection for a broad range of applications.",,arXiv,"['cs.lg', 'cs.cr']",, +1052,hrot hybrid prompt strategy and retrieval of thought for tabletext hybrid question answering,"['Tongxu Luo', 'Fangyu Lei', 'Jiahe Lei', 'Weihao Liu', 'Shihu He', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2309.12669v1.pdf,2023-09-22,," Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language models, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.",,arXiv,['cs.cl'],, +1053,allure auditing and improving llmbased evaluation of text using iterative incontextlearning,"['Hosein Hasanbeig', 'Hiteshi Sharma', 'Leo Betthauser', 'Felipe Vieira Frujeri', 'Ida Momennejad']",http://arxiv.org/pdf/2309.13701v2.pdf,2023-09-24,," From grading papers to summarizing medical documents, large language models (LLMs) are evermore used for evaluation of text generated by humans and AI alike. However, despite their extensive utility, LLMs exhibit distinct failure modes, necessitating a thorough audit and improvement of their text evaluation capabilities. Here we introduce ALLURE, a systematic approach to Auditing Large Language Models Understanding and Reasoning Errors. ALLURE involves comparing LLM-generated evaluations with annotated data, and iteratively incorporating instances of significant deviation into the evaluator, which leverages in-context learning (ICL) to enhance and improve robust evaluation of text by LLMs. Through this iterative process, we refine the performance of the evaluator LLM, ultimately reducing reliance on human annotators in the evaluation process. We anticipate ALLURE to serve diverse applications of LLMs in various domains related to evaluation of textual data, such as medical summarization, education, and productivity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc']",, +1054,dynamic demonstrations controller for incontext learning,"['Fei Zhao', 'Taotian Pang', 'Zhen Wu', 'Zheng Ma', 'Shujian Huang', 'Xinyu Dai']",http://arxiv.org/pdf/2310.00385v1.pdf,2023-09-30,," In-Context Learning (ICL) is a new paradigm for natural language processing (NLP), where a large language model (LLM) observes a small number of demonstrations and a test instance as its input, and directly makes predictions without updating model parameters. Previous studies have revealed that ICL is sensitive to the selection and the ordering of demonstrations. However, there are few studies regarding the impact of the demonstration number on the ICL performance within a limited input length of LLM, because it is commonly believed that the number of demonstrations is positively correlated with model performance.
In this paper, we found this conclusion does not always hold true. Through pilot experiments, we discover that increasing the number of demonstrations does not necessarily lead to improved performance. Building upon this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller), which can improve the ICL performance by adjusting the number of demonstrations dynamically. The experimental results show that D$^2$Controller yields a 5.4% relative improvement on eight different sizes of LLMs across ten datasets. Moreover, we also extend our method to previous ICL models and achieve competitive results.",,arXiv,"['cs.cl', 'cs.ai']",, +1055,not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning,"['Zhe Yang', 'Damai Dai', 'Peiyi Wang', 'Zhifang Sui']",http://arxiv.org/pdf/2310.08309v1.pdf,2023-10-12,," Large Language Models (LLMs) have recently gained the In-Context Learning (ICL) ability with the models scaling up, allowing them to quickly adapt to downstream tasks with only a few demonstration examples prepended in the input sequence. Nonetheless, the current practice of ICL treats all demonstration examples equally, which still warrants improvement, as the quality of examples is usually uneven. In this paper, we investigate how to determine approximately optimal weights for demonstration examples and how to apply them during ICL. To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance. To expedite the weight-searching process, we discretize the continuous weight space and adopt beam search. With approximately optimal weights obtained, we further propose two strategies to apply them to demonstrations at different model positions. Experimental results on 8 text classification tasks show that our approach outperforms conventional ICL by a large margin. Our code are publicly available at https:github.com/Zhe-Young/WICL.",,arXiv,['cs.cl'],, +1056,how many pretraining tasks are needed for incontext learning of linear regression,"['Jingfeng Wu', 'Difan Zou', 'Zixiang Chen', 'Vladimir Braverman', 'Quanquan Gu', 'Peter L. Bartlett']",http://arxiv.org/pdf/2310.08391v1.pdf,2023-10-12,," Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.",,arXiv,"['stat.ml', 'cs.lg']",, +1057,generative calibration for incontext learning,"['Zhongtao Jiang', 'Yuanzhe Zhang', 'Cao Liu', 'Jun Zhao', 'Kang Liu']",http://arxiv.org/pdf/2310.10266v1.pdf,2023-10-16,," As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing.
While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We call our approach as generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, generally find that the proposed method greatly and consistently outperforms the ICL as well as state-of-the-art calibration methods, by up to 27% absolute in macro-F1. Meanwhile, the proposed method is also stable under different prompt configurations.",,arXiv,['cs.cl'],, +1058,magnifico evaluating the incontext learning ability of large language models to generalize to novel interpretations,"['Arkil Patel', 'Satwik Bhattamishra', 'Siva Reddy', 'Dzmitry Bahdanau']",http://arxiv.org/pdf/2310.11634v1.pdf,2023-10-18,," Humans possess a remarkable ability to assign novel interpretations to linguistic expressions, enabling them to learn new words and understand community-specific connotations. However, Large Language Models (LLMs) have a knowledge cutoff and are costly to finetune repeatedly. Therefore, it is crucial for LLMs to learn novel interpretations in-context. In this paper, we systematically analyse the ability of LLMs to acquire novel interpretations using in-context learning. To facilitate our study, we introduce MAGNIFICo, an evaluation suite implemented within a text-to-SQL semantic parsing framework that incorporates diverse tokens and prompt settings to simulate real-world complexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit a surprisingly robust capacity for comprehending novel interpretations from natural language descriptions as well as from discussions within long conversations. Nevertheless, our findings also highlight the need for further improvements, particularly when interpreting unfamiliar words or when composing multiple novel interpretations simultaneously in the same example. Additionally, our analysis uncovers the semantic predispositions in LLMs and reveals the impact of recency bias for information presented in long contexts.",,arXiv,['cs.cl'],, +1059,which examples to annotate for incontext learning towards effective and efficient selection,"['Costas Mavromatis', 'Balasubramaniam Srinivasan', 'Zhengyuan Shen', 'Jiani Zhang', 'Huzefa Rangwala', 'Christos Faloutsos', 'George Karypis']",http://arxiv.org/pdf/2310.20046v1.pdf,2023-10-30,," Large Language Models (LLMs) can adapt to new tasks via in-context learning (ICL). ICL is efficient as it does not require any parameter updates to the trained LLM, but only few annotated examples as input for the LLM. In this work, we investigate an active learning approach for ICL, where there is a limited budget for annotating examples. We propose a model-adaptive optimization-free algorithm, termed AdaICL, which identifies examples that the model is uncertain about, and performs semantic diversity-based example selection.
Diversity-based sampling improves overall effectiveness, while uncertainty sampling improves budget efficiency and helps the LLM learn new information. Moreover, AdaICL poses its sampling strategy as a Maximum Coverage problem, that dynamically adapts based on the model's feedback and can be approximately solved via greedy algorithms. Extensive experiments on nine datasets and seven LLMs show that AdaICL improves performance by 4.4% accuracy points over SOTA (7.7% relative improvement), is up to 3x more budget-efficient than performing annotations uniformly at random, while it outperforms SOTA with 2x fewer ICL examples.",,arXiv,['cs.cl'],, +1060,crosslingual retrieval augmented incontext learning for bangla,"['Xiaoqian Li', 'Ercong Nie', 'Sheng Liang']",http://arxiv.org/pdf/2311.00587v2.pdf,2023-11-01,," The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla. To address this, our paper presents a pioneering approach that utilizes cross-lingual retrieval augmented in-context learning. By strategically sourcing semantically similar prompts from high-resource language, we enable multilingual pretrained language models (MPLMs), especially the generative model BLOOMZ, to successfully boost performance on Bangla tasks. Our extensive evaluation highlights that the cross-lingual retrieval augmented prompts bring steady improvements to MPLMs over the zero-shot performance.",,arXiv,['cs.cl'],, +1061,dail data augmentation for incontext learning via selfparaphrase,"['Dawei Li', 'Yaxuan Li', 'Dheeraj Mekala', 'Shuyao Li', 'Yulin wang', 'Xueqi Wang', 'William Hogan', 'Jingbo Shang']",http://arxiv.org/pdf/2311.03319v1.pdf,2023-11-06,," In-Context Learning (ICL) combined with pre-trained large language models has achieved promising results on various NLP tasks. However, ICL requires high-quality annotated demonstrations which might not be available in real-world scenarios. To overcome this limitation, we propose \textbf{D}ata \textbf{A}ugmentation for \textbf{I}n-Context \textbf{L}earning (\textbf{DAIL}). DAIL leverages the intuition that large language models are more familiar with the content generated by themselves. It first utilizes the language model to generate paraphrases of the test sample and employs majority voting to determine the final result based on individual predictions. Our extensive empirical evaluation shows that DAIL outperforms the standard ICL method and other ensemble-based methods in the low-resource scenario. Additionally, we explore the use of voting consistency as a confidence score of the model when the logits of predictions are inaccessible. We believe our work will stimulate further research on ICL in low-resource settings.",,arXiv,"['cs.cl', 'cs.ai']",, +1062,incontext exemplars as clues to retrieving from large associative memory,['Jiachen Zhao'],http://arxiv.org/pdf/2311.03498v2.pdf,2023-11-06,," Recently, large language models (LLMs) have made remarkable progress in natural language processing. The most representative ability of LLMs is in-context learning (ICL), which enables LLMs to learn patterns from in-context exemplars without training. The performance of ICL greatly depends on the exemplars used. However, how to choose exemplars remains unclear due to the lack of understanding of how in-context learning works. In this paper, we present a novel perspective on ICL by conceptualizing it as contextual retrieval from a model of associative memory.
We establish a theoretical framework of ICL based on Hopfield Networks. Based on our framework, we look into how in-context exemplars influence the performance of ICL and propose more efficient active exemplar selection. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for advancing the understanding of LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, +1063,selective annotation makes language models better fewshot learners,"['Hongjin Su', 'Jungo Kasai', 'Chen Henry Wu', 'Weijia Shi', 'Tianlu Wang', 'Jiayi Xin', 'Rui Zhang', 'Mari Ostendorf', 'Luke Zettlemoyer', 'Noah A. Smith', 'Tao Yu']",http://arxiv.org/pdf/2209.01975v1.pdf,2022-09-05,," Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task from a few task demonstrations, without any parameter updates. This work examines the implications of in-context learning for the creation of datasets for new natural language tasks. Departing from recent in-context learning methods, we formulate an annotation-efficient, two-step framework: selective annotation that chooses a pool of examples to annotate from unlabeled data in advance, followed by prompt retrieval that retrieves task examples from the annotated pool at test time. Based on this framework, we propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate. Extensive experiments on 10 datasets (covering classification, commonsense reasoning, dialogue, and text/code generation) demonstrate that our selective annotation method improves the task performance by a large margin. On average, vote-k achieves a 12.9%/11.4% relative gain under an annotation budget of 18/100, as compared to randomly selecting examples to annotate. Compared to state-of-the-art supervised finetuning approaches, it yields similar performance with 10-100x less annotation cost across 10 tasks. We further analyze the effectiveness of our framework in various scenarios: language models with varying sizes, alternative selective annotation methods, and cases where there is a test data domain shift. We hope that our studies will serve as a basis for data annotations as large language models are increasingly applied to new tasks. Our code is available at https://github.com/HKUNLP/icl-selective-annotation.",,arXiv,['cs.cl'],, +1064,incontext example selection with influences,"['Tai Nguyen', 'Eric Wong']",http://arxiv.org/pdf/2302.11042v2.pdf,2023-02-21,," In-context learning (ICL) is a powerful paradigm emerged from large language models (LLMs). Despite its promises, ICL performance is known to be highly sensitive to input examples. In this work, we use $\textit{in-context influences}$ to analyze few-shot ICL performance directly from the in-context examples. Our proposed influence-based example selection method can identify both positive and negative examples, outperforming several baselines when evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$ performance gap between using the most negative in-context examples compared to the most positive.
In a case study, we apply our influence-based framework to quantify the phenomena of recency bias in example ordering for few-shot ICL.",,arXiv,"['cs.cl', 'cs.lg']",, +1065,"tabular representation, noisy operators, and impacts on table structure understanding tasks in llms","['Ananya Singha', 'José Cambronero', 'Sumit Gulwani', 'Vu Le', 'Chris Parnin']",http://arxiv.org/pdf/2310.10358v1.pdf,2023-10-16,," Large language models (LLMs) are increasingly applied for tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLMs ability to process the table. Inspired by prior work, we generate a collection of self-supervised structural tasks (e.g. navigate to a cell and row; transpose the table) and evaluate the performance differences when using 8 formats. In contrast to past work, we introduce 8 noise operations inspired by real-world messy data and adversarial inputs, and show that such operations can impact LLM performance across formats for different structural understanding tasks.",,arXiv,"['cs.cl', 'cs.ai']",, +1066,evaluating the impact of model scale for compositional generalization in semantic parsing,"['Linlu Qiu', 'Peter Shaw', 'Panupong Pasupat', 'Tianze Shi', 'Jonathan Herzig', 'Emily Pitler', 'Fei Sha', 'Kristina Toutanova']",http://arxiv.org/pdf/2205.12253v2.pdf,2022-05-24,," Despite their strong performance on many tasks, pre-trained language models have been shown to struggle on out-of-distribution compositional generalization. Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling. Can scaling up model size also improve compositional generalization in semantic parsing? We evaluate encoder-decoder models up to 11B parameters and decoder-only models up to 540B parameters, and compare model scaling curves for three different methods for applying a pre-trained language model to a new task: fine-tuning all parameters, prompt tuning, and in-context learning. We observe that fine-tuning generally has flat or negative scaling curves on out-of-distribution compositional generalization in semantic parsing evaluations. In-context learning has positive scaling curves, but is generally outperformed by much smaller fine-tuned models. Prompt-tuning can outperform fine-tuning, suggesting further potential improvements from scaling as it exhibits a more positive scaling curve. Additionally, we identify several error trends that vary with model scale. For example, larger models are generally better at modeling the syntax of the output space, but are also more prone to certain types of overfitting. Overall, our study highlights limitations of current techniques for effectively leveraging model scale for compositional generalization, while our analysis also suggests promising directions for future work.",,arXiv,['cs.cl'],, +1067,controllable dialogue simulation with incontext learning,"['Zekun Li', 'Wenhu Chen', 'Shiyang Li', 'Hong Wang', 'Jing Qian', 'Xifeng Yan']",http://arxiv.org/pdf/2210.04185v4.pdf,2022-10-09,," Building dialogue systems requires a large corpus of annotated dialogues. Such datasets are usually created via crowdsourcing, which is expensive and time-consuming. In this paper, we propose \textsc{Dialogic}, a novel dialogue simulation method based on large language model in-context learning to automate dataset creation. Seeded with a few annotated dialogues, \textsc{Dialogic} automatically selects in-context examples for demonstration and prompts GPT-3 to generate new dialogues and annotations in a controllable way.
Our method can rapidly expand a small set of dialogue data with minimum or zero \textit{human involvement} and \textit{parameter update} and is thus much more cost-efficient and time-saving than crowdsourcing. Experimental results on the MultiWOZ dataset demonstrate that training a model on the simulated dialogues leads to even better performance than using the same amount of human-generated dialogues under the challenging low-resource settings, with as few as 85 dialogues as a seed. When enough data is available, our method can still serve as an effective data augmentation method. Human evaluation results also show that our simulated dialogues have near-human fluency and annotation accuracy. The code and data are available at \textbf{\url{https://github.com/Leezekun/dialogic}}.",,arXiv,"['cs.cl', 'cs.ai']",, +1068,xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing,"['Peng Shi', 'Rui Zhang', 'He Bai', 'Jimmy Lin']",http://arxiv.org/pdf/2210.13693v1.pdf,2022-10-25,," In-context learning using large language models has recently shown surprising results for semantic parsing tasks such as Text-to-SQL translation. Prompting GPT-3 or Codex using several examples of question-SQL pairs can produce excellent results, comparable to state-of-the-art finetuning-based models. However, existing work primarily focuses on English datasets, and it is unknown whether large language models can serve as competitive semantic parsers for other languages. To bridge this gap, our work focuses on cross-lingual Text-to-SQL semantic parsing for translating non-English utterances into SQL queries based on an English schema. We consider a zero-shot transfer learning setting with the assumption that we do not have any labeled examples in the target language (but have annotated examples in English). This work introduces the XRICL framework, which learns to retrieve relevant English exemplars for a given query to construct prompts. We also include global translation exemplars for a target language to facilitate the translation process for large language models. To systematically evaluate our model, we construct two new benchmark datasets, XSpider and XKaggle-dbqa, which include questions in Chinese, Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively leverages large pre-trained language models to outperform existing baselines. Data and code are publicly available at https://github.com/Impavidity/XRICL.",,arXiv,['cs.cl'],, +1069,how many demonstrations do you need for incontext learning,"['Jiuhai Chen', 'Lichang Chen', 'Chen Zhu', 'Tianyi Zhou']",http://arxiv.org/pdf/2303.08119v3.pdf,2023-03-14,," Large language models (LLMs) are capable to perform complex reasoning by in-context learning (ICL) when provided with a few input-output demonstrations (demos) and more powerful when intermediate reasoning steps (""chain of thoughts (CoT)"") of the demos are given. Is it necessary to use multi-demo in ICL? In this paper, we study ICL using fewer demos for each test query on the tasks in~\cite{wei2022chain}. Surprisingly, we do not observe significant degradation when using only one randomly chosen demo. To study this phenomenon, for each test query, we categorize demos into ""correct demos"" leading to the correct answer, and ""wrong demos"" resulting in wrong answers. Our analysis reveals an inherent bias in those widely studied datasets: most demos are correct for a majority of test queries, which explains the good performance of using one random demo.
Moreover, ICL (with and w/o CoT) using only one correct demo significantly outperforms all-demo ICL adopted by most previous works, indicating the weakness of LLMs in finding correct demo(s) for input queries, which is difficult to evaluate on the biased datasets. Furthermore, we observe a counterintuitive behavior of ICL using multi-demo, i.e., its accuracy degrades (improves) when given more correct (wrong) demos. This implies that ICL can be easily misguided by interference among demos and their spurious correlations. Our analyses highlight several fundamental challenges that need to be addressed in LLMs training, ICL, and benchmark design.",,arXiv,['cs.ai'],, +1070,improving visual question answering models through robustness analysis and incontext learning with a chain of basic questions,"['Jia-Hong Huang', 'Modar Alfadly', 'Bernard Ghanem', 'Marcel Worring']",http://arxiv.org/pdf/2304.03147v1.pdf,2023-04-06,," Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a LASSO optimization problem. Additionally, this work proposes a novel robustness measure, R_score, and two basic question datasets to standardize the analysis of VQA model robustness. The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy.",,arXiv,"['cs.cv', 'cs.ai']",, +1071,genegpt augmenting large language models with domain tools for improved access to biomedical information,"['Qiao Jin', 'Yifan Yang', 'Qingyu Chen', 'Zhiyong Lu']",http://arxiv.org/pdf/2304.09667v3.pdf,2023-04-19,," While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12).
Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.",,arXiv,"['cs.cl', 'cs.ai', 'q-bio.gn']",, +1072,dinsql decomposed incontext learning of texttosql with selfcorrection,"['Mohammadreza Pourreza', 'Davood Rafiei']",http://arxiv.org/pdf/2304.11015v3.pdf,2023-04-21,," There is currently a significant gap between the performance of fine-tuned models and prompting approaches using Large Language Models (LLMs) on the challenging task of text-to-SQL, as evaluated on datasets such as Spider. To improve the performance of LLMs in the reasoning process, we study how decomposing the task into smaller sub-tasks can be effective. In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance. Our experiments with three LLMs show that this approach consistently improves their simple few-shot performance by roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 and the new SOTA at the time of this writing using our approach is 85.3. Our approach with in-context learning beats many heavily fine-tuned models by at least 5%. Additionally, when evaluated on the BIRD benchmark, our approach achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test set.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db', 'cs.hc']",, +1073,fewshot incontext learning for knowledge base question answering,"['Tianle Li', 'Xueguang Ma', 'Alex Zhuang', 'Yu Gu', 'Yu Su', 'Wenhu Chen']",http://arxiv.org/pdf/2305.01750v2.pdf,2023-05-02,," Question answering over knowledge bases is considered a difficult problem due to the challenge of generalizing to a wide variety of possible natural language questions. Additionally, the heterogeneity of knowledge base schema items between different knowledge bases often necessitates specialized training for different knowledge base question-answering (KBQA) datasets. To handle questions over diverse KBQA datasets with a unified training-free framework, we propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks. Firstly, KB-BINDER leverages large language models like Codex to generate logical forms as the draft for a specific question by imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge base to bind the generated draft to an executable one with BM25 score matching. The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even outperform the state-of-the-art trained models. On GrailQA and WebQSP, our model is also on par with other fully-trained models. We believe KB-BINDER can serve as an important baseline for future research.
Our code is available at https://github.com/ltl3A87/KB-BINDER.",,arXiv,"['cs.cl', 'cs.ai']",, +1074,text classification via large language models,"['Xiaofei Sun', 'Xiaoya Li', 'Jiwei Li', 'Fei Wu', 'Shangwei Guo', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2305.08377v3.pdf,2023-05-15,," Despite the remarkable success of large-scale Language Models (LLMs) such as GPT-3, their performances still significantly underperform fine-tuned models in the task of text classification. This is due to (1) the lack of reasoning ability in addressing complex linguistic phenomena (e.g., intensification, contrast, irony etc); (2) limited number of tokens allowed in in-context learning. In this paper, we introduce Clue And Reasoning Prompting (CARP). CARP adopts a progressive reasoning strategy tailored to addressing the complex linguistic phenomena involved in text classification: CARP first prompts LLMs to find superficial clues (e.g., keywords, tones, semantic relations, references, etc), based on which a diagnostic reasoning process is induced for final decisions. To further address the limited-token issue, CARP uses a fine-tuned model on the supervised dataset for $k$NN demonstration search in the in-context learning, allowing the model to take the advantage of both LLM's generalization ability and the task-specific evidence provided by the full labeled dataset. Remarkably, CARP yields new SOTA performances on 4 out of 5 widely-used text-classification benchmarks, 97.39 (+1.24) on SST-2, 96.40 (+0.72) on AGNews, 98.78 (+0.25) on R8 and 96.95 (+0.6) on R52, and a performance comparable to SOTA on MR (92.39 v.s. 93.3). More importantly, we find that CARP delivers impressive abilities on low-resource and domain-adaptation setups. Specifically, using 16 examples per class, CARP achieves comparable performances to supervised models with 1,024 examples per class.",,arXiv,['cs.cl'],, +1075,exploring incontext learning capabilities of foundation models for generating knowledge graphs from text,"['Hanieh Khorashadizadeh', 'Nandana Mihindukulasooriya', 'Sanju Tiwari', 'Jinghua Groppe', 'Sven Groppe']",http://arxiv.org/pdf/2305.08804v1.pdf,2023-05-15,," Knowledge graphs can represent information about the real-world using entities and their relations in a structured and semantically rich manner and they enable a variety of downstream applications such as question-answering, recommendation systems, semantic search, and advanced analytics. However, at the moment, building a knowledge graph involves a lot of manual effort and thus hinders their application in some situations and the automation of this process might benefit especially for small organizations. Automatically generating structured knowledge graphs from a large volume of natural language is still a challenging task and the research on sub-tasks such as named entity extraction, relation extraction, entity and relation linking, and knowledge graph construction aims to improve the state of the art of automatic construction and completion of knowledge graphs from text. The recent advancement of foundation models with billions of parameters trained in a self-supervised manner with large volumes of training data that can be adapted to a variety of downstream tasks has helped to demonstrate high performance on a large range of Natural Language Processing (NLP) tasks.
In this context, one emerging paradigm is in-context learning where a language model is used as it is with a prompt that provides instructions and some examples to perform a task without changing the parameters of the model using traditional approaches such as fine-tuning. This way, no computing resources are needed for re-training/fine-tuning the models and the engineering effort is minimal. Thus, it would be beneficial to utilize such capabilities for generating knowledge graphs from text.",,arXiv,['cs.cl'],, +1076,what incontext learning learns incontext disentangling task recognition and task learning,"['Jane Pan', 'Tianyu Gao', 'Howard Chen', 'Danqi Chen']",http://arxiv.org/pdf/2305.09731v1.pdf,2023-05-16,," Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations -- even without ground-truth labels -- and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL's performance consistently improves with more demonstrations in context. Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature.",,arXiv,"['cs.cl', 'cs.lg']",, +1077,temporal knowledge graph forecasting without knowledge using incontext learning,"['Dong-Ho Lee', 'Kian Ahrabian', 'Woojeong Jin', 'Fred Morstatter', 'Jay Pujara']",http://arxiv.org/pdf/2305.10613v3.pdf,2023-05-17,," Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we apply large language models (LLMs) to these benchmarks using in-context learning (ICL). We investigate whether and to what extent LLMs can be used for TKG forecasting, especially without any fine-tuning or explicit modules for capturing structural and temporal information. For our experiments, we present a framework that converts relevant historical facts into prompts and generates ranked predictions using token probabilities. Surprisingly, we observe that LLMs, out-of-the-box, perform on par with state-of-the-art TKG models carefully designed and trained for TKG forecasting. Our extensive evaluation presents performances across several models and datasets with different characteristics, compares alternative heuristics for preparing contextual information, and contrasts to prominent TKG methods and simple frequency and recency baselines. We also discover that using numerical indices instead of entity/relation names, i.e., hiding semantic information, does not significantly affect the performance ($\pm$0.4\% Hit@1). This shows that prior semantic knowledge is unnecessary; instead, LLMs can leverage the existing patterns in the context to achieve such performance.
Our analysis also reveals that ICL enables LLMs to learn irregular patterns from the historical context, going beyond simple predictions based on common or recent information.",,arXiv,['cs.cl'],, +1078,learning incontext learning for named entity recognition,"['Jiawei Chen', 'Yaojie Lu', 'Hongyu Lin', 'Jie Lou', 'Wei Jia', 'Dai Dai', 'Hua Wu', 'Boxi Cao', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2305.11038v3.pdf,2023-05-18,," Named entity recognition in real-world applications suffers from the diversity of entity types, the emergence of new entity types, and the lack of high-quality annotations. To address the above problems, this paper proposes an in-context learning-based NER approach, which can effectively inject in-context NER ability into PLMs and recognize entities of novel types on-the-fly using only a few demonstrative instances. Specifically, we model PLMs as a meta-function $\mathcal{ \lambda_ {\text{instruction, demonstrations, text}}.M}$, and a new entity extractor can be implicitly constructed by applying new instruction and demonstrations to PLMs, i.e., $\mathcal{ (\lambda . M)}$ (instruction, demonstrations) $\to$ $\mathcal{F}$ where $\mathcal{F}$ will be a new entity extractor, i.e., $\mathcal{F}$: text $\to$ entities. To inject the above in-context NER ability into PLMs, we propose a meta-function pre-training algorithm, which pre-trains PLMs by comparing the (instruction, demonstration)-initialized extractor with a surrogate golden extractor. Experimental results on 4 few-shot NER datasets show that our method can effectively inject in-context NER ability into PLMs and significantly outperforms the PLMs+fine-tuning counterparts.",,arXiv,['cs.cl'],, +1079,plugmed improving specificity in patientcentered medical dialogue generation using incontext learning,"['Chengfeng Dou', 'Zhi Jin', 'Wenping Jiao', 'Haiyan Zhao', 'Zhenwei Tao', 'Yongqiang Zhao']",http://arxiv.org/pdf/2305.11508v2.pdf,2023-05-19,," The patient-centered medical dialogue systems strive to offer diagnostic interpretation services to users who are less knowledgeable about medical knowledge, through emphasizing the importance of providing responses specific to the patients. It is difficult for the large language models (LLMs) to guarantee the specificity of responses in spite of its promising performance even in some tasks in medical field. Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical Dialogue System, for addressing this challenge. PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhances LLMs' dialogue strategies for improving the specificity of the dialogue. The PG module is designed to stimulate the imitative ability of LLMs by providing them with real dialogues from similar patients as prompts. The RR module incorporates fine-tuned small model as response filter to enable the selection of appropriate responses generated by LLMs. Furthermore, we introduce a new evaluation method based on matching both user's intent and high-frequency medical term to effectively assess the specificity of the responses.
We conduct experimental evaluations on three medical dialogue datasets, and the results, including both automatic and human evaluation, demonstrate the effectiveness of our approach.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, +1080,toolkengpt augmenting frozen language models with massive tools via tool embeddings,"['Shibo Hao', 'Tianyang Liu', 'Zhen Wang', 'Zhiting Hu']",http://arxiv.org/pdf/2305.11554v4.pdf,2023-05-19,," Augmenting large language models (LLMs) with external tools has emerged as a promising approach to solving complex problems. However, traditional methods, which finetune LLMs with tool demonstration data, can be both costly and restricted to a predefined set of tools. Recent in-context learning paradigm alleviates these issues, but the limited context length only allows for a few shots of demonstrations, leading to suboptimal understandings of the tools. Moreover, when there are numerous tools to choose from, in-context learning could completely fail to work. In this paper, we propose an alternative approach, $\textbf{ToolkenGPT}$, which combines the benefits of both sides. Our approach represents each $\underline{tool}$ as a to$\underline{ken}$ ($\textit{toolken}$) and learns an embedding for it, enabling tool calls in the same way as generating a regular word token. Once a toolken is triggered, the LLM is prompted to complete arguments for the tool to execute. ToolkenGPT offers the flexibility to plug in an arbitrary number of tools by expanding the set of toolkens on the fly. In addition, it improves tool use by allowing extensive demonstration data for learning the toolken embeddings. In diverse domains, including numerical reasoning, knowledge-based question answering, and embodied plan generation, our approach effectively augments LLMs with tools and substantially outperforms various latest baselines. ToolkenGPT demonstrates the promising ability to use relevant tools from a large tool set in complex scenarios.",,arXiv,"['cs.cl', 'cs.lg']",, +1081,measuring inductive biases of incontext learning with underspecified demonstrations,"['Chenglei Si', 'Dan Friedman', 'Nitish Joshi', 'Shi Feng', 'Danqi Chen', 'He He']",http://arxiv.org/pdf/2305.13299v1.pdf,2023-05-22,," In-context learning (ICL) is an important paradigm for adapting large language models (LLMs) to new tasks, but the generalization behavior of ICL remains poorly understood. We investigate the inductive biases of ICL from the perspective of feature bias: which feature ICL is more likely to use given a set of underspecified demonstrations in which two features are equally predictive of the labels. First, we characterize the feature biases of GPT-3 models by constructing underspecified demonstrations from a range of NLP datasets and feature combinations. We find that LLMs exhibit clear feature biases - for example, demonstrating a strong bias to predict labels according to sentiment rather than shallow lexical features, like punctuation. Second, we evaluate the effect of different interventions that are designed to impose an inductive bias in favor of a particular feature, such as adding a natural language instruction or using semantically relevant label words. We find that, while many interventions can influence the learner to prefer a particular feature, it can be difficult to overcome strong prior biases.
Overall, our results provide a broader picture of the types of features that ICL may be more likely to exploit and how to impose inductive biases that are better aligned with the intended task.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1082,buffet benchmarking large language models for fewshot crosslingual transfer,"['Akari Asai', 'Sneha Kudugunta', 'Xinyan Velocity Yu', 'Terra Blevins', 'Hila Gonen', 'Machel Reid', 'Yulia Tsvetkov', 'Sebastian Ruder', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2305.14857v1.pdf,2023-05-24,," Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To facilitate research on few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequence-to-sequence format and provides a fixed set of few-shot examples and instructions. BUFFET is designed to establish a rigorous and equitable evaluation framework for few-shot cross-lingual transfer across a broad range of tasks and languages. Using BUFFET, we perform thorough evaluations of state-of-the-art multilingual large language models with different transfer methods, namely in-context learning and fine-tuning. Our findings reveal significant room for improvement in few-shot in-context cross-lingual transfer. In particular, ChatGPT with in-context learning often performs worse than much smaller mT5-base models fine-tuned on English task data and few-shot in-language examples. Our analysis suggests various avenues for future research in few-shot cross-lingual transfer, such as improved pretraining, understanding, and future evaluations.",,arXiv,['cs.cl'],, +1083,measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing,"['Shufan Wang', 'Sebastien Jean', 'Sailik Sengupta', 'James Gung', 'Nikolaos Pappas', 'Yi Zhang']",http://arxiv.org/pdf/2305.15338v1.pdf,2023-05-24,," In executable task-oriented semantic parsing, the system aims to translate users' utterances in natural language to machine-interpretable programs (API calls) that can be executed according to pre-defined API specifications. With the popularity of Large Language Models (LLMs), in-context learning offers a strong baseline for such scenarios, especially in data-limited regimes. However, LLMs are known to hallucinate and therefore pose a formidable challenge in constraining generated content. Thus, it remains uncertain if LLMs can effectively perform task-oriented utterance-to-API generation where respecting API's structural and task-specific constraints is crucial. In this work, we seek to measure, analyze and mitigate such constraints violations. First, we identify the categories of various constraints in obtaining API-semantics from task-oriented utterances, and define fine-grained metrics that complement traditional ones. Second, we leverage these metrics to conduct a detailed error analysis of constraints violations seen in state-of-the-art LLMs, which motivates us to investigate two mitigation strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware Constrained Decoding (API-CD).
Our experiments show that these strategies are effective at reducing constraints violations and improving the quality of the generated API calls, but require careful consideration given their implementation complexity and latency.",,arXiv,"['cs.ai', 'cs.cl']",, +1084,what can large language models do in chemistry a comprehensive benchmark on eight tasks,"['Taicheng Guo', 'Kehan Guo', 'Bozhao Nan', 'Zhenwen Liang', 'Zhichun Guo', 'Nitesh V. Chawla', 'Olaf Wiest', 'Xiangliang Zhang']",http://arxiv.org/pdf/2305.18365v3.pdf,2023-05-27,," Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in various kinds of areas such as science, finance and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain. We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are evaluated for each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed other models and LLMs exhibit different competitive levels in eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitation of current LLMs and the impact of in-context learning settings on LLMs' performance across various chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench.",,arXiv,"['cs.cl', 'cs.ai']",, +1085,mitigating label biases for incontext learning,"['Yu Fei', 'Yifan Hou', 'Zeming Chen', 'Antoine Bosselut']",http://arxiv.org/pdf/2305.19148v3.pdf,2023-05-28,," Various design settings for in-context learning (ICL), such as the choice and order of the in-context examples, can bias a model toward a particular prediction without being reflective of an understanding of the task. While many studies discuss these design choices, there have been few systematic investigations into categorizing them and mitigating their impact. In this work, we define a typology for three types of label biases in ICL for text classification: vanilla-label bias, context-label bias, and domain-label bias (which we conceptualize and detect for the first time). Our analysis demonstrates that prior label bias calibration methods fall short of addressing all three types of biases. Specifically, domain-label bias restricts LLMs to random-level performance on many tasks regardless of the choice of in-context examples. To mitigate the effect of these biases, we propose a simple bias calibration method that estimates a language model's label bias using random in-domain words from the task corpus. After controlling for this estimated bias when making predictions, our novel domain-context calibration significantly improves the ICL performance of GPT-J and GPT-3 on a wide range of tasks. The gain is substantial on tasks with large domain-label bias (up to 37% in Macro-F1).
Furthermore, our results generalize to models with different scales, pretraining methods, and manually-designed task instructions, showing the prevalence of label biases in ICL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1086,pretraining task diversity and the emergence of nonbayesian incontext learning for regression,"['Allan Raventós', 'Mansheej Paul', 'Feng Chen', 'Surya Ganguli']",http://arxiv.org/pdf/2306.15063v2.pdf,2023-06-26,," Pretrained transformers exhibit the remarkable ability of in-context learning (ICL): they can learn tasks from just a few examples provided in the prompt without updating any weights. This raises a foundational question: can ICL solve fundamentally $\textit{new}$ tasks that are very different from those seen during pretraining? To probe this question, we examine ICL's performance on linear regression while varying the diversity of tasks in the pretraining dataset. We empirically demonstrate a $\textit{task diversity threshold}$ for the emergence of ICL. Below this threshold, the pretrained transformer cannot solve unseen regression tasks, instead behaving like a Bayesian estimator with the $\textit{non-diverse pretraining task distribution}$ as the prior. Beyond this threshold, the transformer significantly outperforms this estimator; its behavior aligns with that of ridge regression, corresponding to a Gaussian prior over $\textit{all tasks}$, including those not seen during pretraining. Thus, when pretrained on data with task diversity greater than the threshold, transformers $\textit{can}$ optimally solve fundamentally new tasks in-context. Importantly, this capability hinges on it deviating from the Bayes optimal estimator with the pretraining distribution as the prior. This study also explores the effect of regularization, model capacity and task structure and underscores, in a concrete example, the critical role of task diversity, alongside data and model scale, in the emergence of ICL. Code is available at https://github.com/mansheej/icl-task-diversity.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",, +1087,understanding incontext learning via supportive pretraining data,"['Xiaochuang Han', 'Daniel Simig', 'Todor Mihaylov', 'Yulia Tsvetkov', 'Asli Celikyilmaz', 'Tianlu Wang']",http://arxiv.org/pdf/2306.15091v1.pdf,2023-06-26,," In-context learning (ICL) improves language models' performance on a variety of NLP tasks by simply demonstrating a handful of examples at inference time. It is not well understood why ICL ability emerges, as the model has never been specifically trained on such demonstrations. Unlike prior work that explores implicit mechanisms behind ICL, we study ICL via investigating the pretraining data. Specifically, we first adapt an iterative, gradient-based approach to find a small subset of pretraining data that supports ICL. We observe that a continued pretraining on this small subset significantly improves the model's ICL ability, by up to 18%. We then compare the supportive subset constrastively with random subsets of pretraining data and discover: (1) The supportive pretraining data to ICL do not have a higher domain relevance to downstream tasks. (2) The supportive pretraining data have a higher mass of rarely occurring, long-tail tokens. (3) The supportive pretraining data are challenging examples where the information gain from long-range context is below average, indicating learning to incorporate difficult long-range context encourages ICL. Our work takes a first step towards understanding ICL via analyzing instance-level pretraining data.
Our insights have a potential to enhance the ICL ability of language models by actively guiding the construction of pretraining data in the future.",,arXiv,['cs.cl'],, +1088,schemalearning and rebinding as mechanisms of incontext learning and emergence,"['Sivaramakrishnan Swaminathan', 'Antoine Dedieu', 'Rajkumar Vasudeva Raju', 'Murray Shanahan', 'Miguel Lazaro-Gredilla', 'Dileep George']",http://arxiv.org/pdf/2307.01201v1.pdf,2023-06-16,," In-context learning (ICL) is one of the most powerful and most unexpected capabilities to emerge in recent transformer-based large language models (LLMs). Yet the mechanisms that underlie it are poorly understood. In this paper, we demonstrate that comparable ICL capabilities can be acquired by an alternative sequence prediction learning method using clone-structured causal graphs (CSCGs). Moreover, a key property of CSCGs is that, unlike transformer-based LLMs, they are {\em interpretable}, which considerably simplifies the task of explaining how ICL works. Specifically, we show that it uses a combination of (a) learning template (schema) circuits for pattern completion, (b) retrieving relevant templates in a context-sensitive manner, and (c) rebinding of novel tokens to appropriate slots in the templates. We go on to marshall evidence for the hypothesis that similar mechanisms underlie ICL in LLMs. For example, we find that, with CSCGs as with LLMs, different capabilities emerge at different levels of overparameterization, suggesting that overparameterization helps in learning more complex template (schema) circuits. By showing how ICL can be achieved with small models and datasets, we open up a path to novel architectures, and take a vital step towards a more general understanding of the mechanics behind this important capability.",,arXiv,"['cs.cl', 'cs.ai']",, +1089,towards understanding incontext learning with contrastive demonstrations and saliency maps,"['Paiheng Xu', 'Fuxiao Liu', 'Zongxia Li', 'Hyemi Song']",http://arxiv.org/pdf/2307.05052v2.pdf,2023-07-11,," We investigate the role of various demonstration components in the in-context learning (ICL) performance of large language models (LLMs). Specifically, we explore the impacts of ground-truth labels, input distribution, and complementary explanations, particularly when these are altered or perturbed. We build on previous work, which offers mixed findings on how these elements influence ICL. To probe these questions, we employ explainable NLP (XNLP) methods and utilize saliency maps of contrastive demonstrations for both qualitative and quantitative analysis. Our findings reveal that flipping ground-truth labels significantly affects the saliency, though it's more noticeable in larger LLMs. Our analysis of the input distribution at a granular level reveals that changing sentiment-indicative terms in a sentiment analysis task to neutral ones does not have as substantial an impact as altering ground-truth labels.
Finally, we find that the effectiveness of complementary explanations in boosting ICL performance is task-dependent, with limited benefits seen in sentiment analysis tasks compared to symbolic reasoning tasks. These insights are critical for understanding the functionality of LLMs and guiding the development of effective demonstrations, which is increasingly relevant in light of the growing use of LLMs in applications such as ChatGPT. Our research code is publicly available at https://github.com/paihengxu/XICL.",,arXiv,"['cs.cl', 'cs.ai']",, +1090,lorahub efficient crosstask generalization via dynamic lora composition,"['Chengsong Huang', 'Qian Liu', 'Bill Yuchen Lin', 'Tianyu Pang', 'Chao Du', 'Min Lin']",http://arxiv.org/pdf/2307.13269v2.pdf,2023-07-25,," Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition requires neither additional model parameters nor gradients. Empirical results on the Big-Bench Hard benchmark suggest that LoraHub, while not surpassing the performance of in-context learning, offers a notable performance-efficiency trade-off in few-shot scenarios by employing a significantly reduced number of tokens per example during inference. Notably, LoraHub establishes a better upper bound compared to in-context learning when paired with different demonstration examples, demonstrating its potential for future development. Our vision is to establish a platform for LoRA modules, empowering users to share their trained LoRA modules. This collaborative approach facilitates the seamless application of LoRA modules to novel tasks, contributing to an adaptive ecosystem. Our code is available at https://github.com/sail-sg/lorahub, and all the pre-trained LoRA modules are released at https://huggingface.co/lorahub.",,arXiv,"['cs.cl', 'cs.ai']",, +1091,ambiguityaware incontext learning with large language models,"['Lingyu Gao', 'Aditi Chaudhary', 'Krishna Srinivasan', 'Kazuma Hashimoto', 'Karthik Raman', 'Michael Bendersky']",http://arxiv.org/pdf/2309.07900v1.pdf,2023-09-14,," In-context learning (ICL) i.e. showing LLMs only a few task-specific demonstrations has led to downstream gains with no task-specific fine-tuning required. However, LLMs are sensitive to the choice of prompts, and therefore a crucial research question is how to select good demonstrations for ICL. One effective strategy is leveraging semantic similarity between the ICL demonstrations and test inputs by using a text retriever, which however is sub-optimal as that does not consider the LLM's existing knowledge about that task. From prior work (Min et al., 2022), we already know that labels paired with the demonstrations bias the model predictions. This leads us to our hypothesis whether considering LLM's existing knowledge about the task, especially with respect to the output label space can help in a better demonstration selection strategy.
Through extensive experimentation on three text classification tasks, we find that it is beneficial to not only choose semantically similar ICL demonstrations but also to choose those demonstrations that help resolve the inherent label ambiguity surrounding the test example. Interestingly, we find that including demonstrations that the LLM previously mis-classified and also fall on the test example's decision boundary, brings the most performance gain.",,arXiv,"['cs.cl', 'cs.ir']",, +1092,understanding incontext learning in transformers and llms by learning to learn discrete functions,"['Satwik Bhattamishra', 'Arkil Patel', 'Phil Blunsom', 'Varun Kanade']",http://arxiv.org/pdf/2310.03016v1.pdf,2023-10-04,," In order to understand the in-context learning phenomenon, recent works have adopted a stylized experimental framework and demonstrated that Transformers can learn gradient-based learning algorithms for various classes of real-valued functions. However, the limitations of Transformers in implementing learning algorithms, and their ability to learn other forms of algorithms are not well understood. Additionally, the degree to which these capabilities are confined to attention-based models is unclear. Furthermore, it remains to be seen whether the insights derived from these stylized settings can be extrapolated to pretrained Large Language Models (LLMs). In this work, we take a step towards answering these questions by demonstrating the following: (a) On a test-bed with a variety of Boolean function classes, we find that Transformers can nearly match the optimal learning algorithm for 'simpler' tasks, while their performance deteriorates on more 'complex' tasks. Additionally, we find that certain attention-free models perform (almost) identically to Transformers on a range of tasks. (b) When provided a teaching sequence, i.e. a set of examples that uniquely identifies a function in a class, we show that Transformers learn more sample-efficiently. Interestingly, our results show that Transformers can learn to implement two distinct algorithms to solve a single task, and can adaptively select the more sample-efficient algorithm depending on the sequence of in-context examples. (c) Lastly, we show that extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines on prediction tasks that are guaranteed to not be in their training set.",,arXiv,"['cs.lg', 'cs.cl']",, +1093,demonstrations are all you need advancing offensive content paraphrasing using incontext learning,"['Anirudh Som', 'Karan Sikka', 'Helen Gent', 'Ajay Divakaran', 'Andreas Kathol', 'Dimitra Vergyri']",http://arxiv.org/pdf/2310.10707v1.pdf,2023-10-16,," Paraphrasing of offensive content is a better alternative to content removal and helps improve civility in a communication environment. Supervised paraphrasers; however, rely heavily on large quantities of labelled data to help preserve meaning and intent. They also retain a large portion of the offensiveness of the original content, which raises questions on their overall usability. In this paper we aim to assist practitioners in developing usable paraphrasers by exploring In-Context Learning (ICL) with large language models (LLMs), i.e., using a limited number of input-label demonstration pairs to guide the model in generating desired outputs for specific queries. Our study focuses on key factors such as -- number and order of demonstrations, exclusion of prompt instruction, and reduction in measured toxicity.
We perform principled evaluation on three datasets, including our proposed Context-Aware Polite Paraphrase dataset, comprising of dialogue-style rude utterances, polite paraphrases, and additional dialogue context. We evaluate our approach using two closed source and one open source LLM. Our results reveal that ICL is comparable to supervised methods in generation quality, while being qualitatively better by 25% on human evaluation and attaining lower toxicity by 76%. Also, ICL-based paraphrasers only show a slight reduction in performance even with just 10% training data.",,arXiv,"['cs.cl', 'cs.ai']",, +1094,pretraining data mixtures enable narrow model selection capabilities in transformer models,"['Steve Yadlowsky', 'Lyric Doshi', 'Nilesh Tripuraneni']",http://arxiv.org/pdf/2311.00871v1.pdf,2023-11-01,," Transformer models, notably large language models (LLMs), have the remarkable ability to perform in-context learning (ICL) -- to perform new tasks when prompted with unseen input-output examples without any explicit model training. In this work, we study how effectively transformers can bridge between their pretraining data mixture, comprised of multiple distinct task families, to identify and learn new tasks in-context which are both inside and outside the pretraining distribution. Building on previous work, we investigate this question in a controlled setting, where we study transformer models trained on sequences of $(x, f(x))$ pairs rather than natural language. Our empirical results show transformers demonstrate near-optimal unsupervised model selection capabilities, in their ability to first in-context identify different task families and in-context learn within them when the task families are well-represented in their pretraining data. However when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.",,arXiv,"['cs.lg', 'cs.cl', 'stat.ml']",, +1095,large language models are fewshot summarizers multiintent comment generation via incontext learning,"['Mingyang Geng', 'Shangwen Wang', 'Dezun Dong', 'Haotian Wang', 'Ge Li', 'Zhi Jin', 'Xiaoguang Mao', 'Xiangke Liao']",http://arxiv.org/pdf/2304.11384v3.pdf,2023-04-22,," Code comment generation aims at generating natural language descriptions for a code snippet to facilitate developers' program comprehension activities. Despite being studied for a long time, a bottleneck for existing approaches is that given a code snippet, they can only generate one comment while developers usually need to know information from diverse perspectives such as what is the functionality of this code snippet and how to use it. To tackle this limitation, this study empirically investigates the feasibility of utilizing large language models (LLMs) to generate comments that can fulfill developers' diverse intents. Our intuition is based on the facts that (1) the code and its pairwise comment are used during the pre-training process of LLMs to build the semantic connection between the natural language and programming language, and (2) comments in the real-world projects, which are collected for the pre-training, usually contain different developers' intents.
We thus postulate that the LLMs can already understand the code from different perspectives after the pre-training. Indeed, experiments on two large-scale datasets demonstrate the rationale of our insights: by adopting the in-context learning paradigm and giving adequate prompts to the LLM (e.g., providing it with ten or more examples), the LLM can significantly outperform a state-of-the-art supervised learning approach on generating comments with multiple intents. Results also show that customized strategies for constructing the prompts and post-processing strategies for reranking the results can both boost the LLM's performances, which shed light on future research directions for using LLMs to achieve comment generation.",,arXiv,['cs.se'],, +1096,beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning,"['Mustafa Shukor', 'Alexandre Rame', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2310.00647v2.pdf,2023-10-01,," Following the success of Large Language Models (LLMs), Large Multimodal Models (LMMs), such as the Flamingo model and its subsequent competitors, have started to emerge as natural steps towards generalist agents. However, interacting with recent LMMs reveals major limitations that are hardly captured by the current evaluation benchmarks. Indeed, task performances (e.g., VQA accuracy) alone do not provide enough clues to understand their real capabilities, limitations, and to which extent such models are aligned to human expectations. To refine our understanding of those flaws, we deviate from the current evaluation paradigm, and (1) evaluate 10 recent open-source LMMs from 3B up to 80B parameter scale, on 5 different axes; hallucinations, abstention, compositionality, explainability and instruction following. Our evaluation on these axes reveals major flaws in LMMs. While the current go-to solution to align these models is based on training, such as instruction tuning or RLHF, we rather (2) explore the training-free in-context learning (ICL) as a solution, and study how it affects these limitations. Based on our ICL study, (3) we push ICL further and propose new multimodal ICL variants such as; Multitask-ICL, Chain-of-Hindsight-ICL, and Self-Correcting-ICL. Our findings are as follows. (1) Despite their success, LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL on LMMs flaws is nuanced; despite its effectiveness for improved explainability, answer abstention, ICL only slightly improves instruction following, does not improve compositional abilities, and actually even amplifies hallucinations. (3) The proposed ICL variants are promising as post-hoc approaches to efficiently tackle some of those flaws. The code is available here: https://github.com/mshukor/EvALign-ICL.",,arXiv,"['cs.cv', 'cs.mm']",, +1097,the inductive bias of incontext learning rethinking pretraining example design,"['Yoav Levine', 'Noam Wies', 'Daniel Jannai', 'Dan Navon', 'Yedid Hoshen', 'Amnon Shashua']",http://arxiv.org/pdf/2110.04541v3.pdf,2021-10-09,," Pretraining Neural Language Models (NLMs) over a large corpus involves chunking the text into training examples, which are contiguous text segments of sizes processable by the neural architecture.
We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example, than it can between text segments that appeared in different training examples. This intuitive result has a twofold role. First, it formalizes the motivation behind a broad line of recent successful NLM training heuristics, proposed for the pretraining and fine-tuning stages, which do not necessarily appear related at first glance. Second, our result clearly indicates further improvements to be made in NLM pretraining for the benefit of Natural Language Understanding tasks. As an example, we propose ""kNN-Pretraining"": we show that including semantically related non-neighboring sentences in the same pretraining example yields improved sentence representations and open domain question answering abilities. This theoretically motivated degree of freedom for pretraining example design indicates new training schemes for self-improving representations.",,arXiv,"['cs.cl', 'cs.lg']",, +1098,instruction induction from few examples to natural language task descriptions,"['Or Honovich', 'Uri Shaham', 'Samuel R. Bowman', 'Omer Levy']",http://arxiv.org/pdf/2205.10782v1.pdf,2022-05-22,," Large language models are able to perform a task by conditioning on a few input-output demonstrations - a paradigm known as in-context learning. We show that language models can explicitly infer an underlying task from a few demonstrations by prompting them to generate a natural language instruction that fits the examples. To explore this ability, we introduce the instruction induction challenge, compile a dataset consisting of 24 tasks, and define a novel evaluation metric based on executing the generated instruction. We discover that, to a large extent, the ability to generate instructions does indeed emerge when using a model that is both large enough and aligned to follow instructions; InstructGPT achieves 65.7% of human performance in our execution-based metric, while the original GPT-3 model reaches only 9.8% of human performance. This surprising result suggests that instruction induction might be a viable learning paradigm in and of itself, where instead of fitting a set of latent continuous parameters to the data, one searches for the best description in the natural language hypothesis space.",,arXiv,['cs.cl'],, +1099,large language models are few(1)shot table reasoners,['Wenhu Chen'],http://arxiv.org/pdf/2210.06710v2.pdf,2022-10-13,," Recent literature has shown that large language models (LLMs) are generally excellent few-shot reasoners to solve text reasoning tasks. However, the capability of LLMs on table reasoning tasks is yet to be explored. In this paper, we aim at understanding how well LLMs can perform table-related tasks with few-shot in-context learning. Specifically, we evaluated LLMs on popular table QA and fact verification datasets like WikiTableQuestion, FetaQA, TabFact, and FEVEROUS and found that LLMs are competent at complex reasoning over table structures, though these models are not pre-trained on any table corpus. When combined with `chain of thoughts' prompting, LLMs can achieve very strong performance with only a 1-shot demonstration, even on par with some SoTA models. We show that LLMs are even more competent at generating comprehensive long-form answers on FetaQA than tuned T5-large.
We further manually studied the reasoning chains elicited from LLMs and found that these reasoning chains are highly consistent with the underlying semantic form. We believe that LLMs can serve as a simple yet generic baseline for future research. The code and data are released in https://github.com/wenhuchen/TableCoT.",,arXiv,['cs.cl'],, +1100,selfprompting large language models for zeroshot opendomain qa,"['Junlong Li', 'Zhuosheng Zhang', 'Hai Zhao']",http://arxiv.org/pdf/2212.08635v2.pdf,2022-12-16,," Open-Domain Question Answering (ODQA) aims at answering factoid questions without explicitly providing specific background documents. In a zero-shot setting, this task is more challenging since no data is available to train customized models like Retriever-Readers. Recently, Large Language Models (LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct prompting methods, but these methods are still far from releasing the full powerfulness of LLMs only in an implicitly invoking way. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge stored in the parameters of LLMs and their strong instruction understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations from scratch and then use those generated elements for in-context learning. Experimental results show our method surpasses previous SOTA methods significantly on three widely-used ODQA datasets, and even achieves comparable performance with some Retriever-Reader models fine-tuned on full training data.",,arXiv,"['cs.cl', 'cs.ai']",, +1101,ontologically faithful generation of nonplayer character dialogues,"['Nathaniel Weir', 'Ryan Thomas', ""Randolph D'Amore"", 'Kellie Hill', 'Benjamin Van Durme', 'Harsh Jhamtani']",http://arxiv.org/pdf/2212.10618v2.pdf,2022-12-20,," We introduce a language generation task grounded in a popular video game environment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration) requires models to produce trees of dialogue between video game characters that accurately reflect quest and entity specifications stated in natural language. KNUDGE is constructed from side quest dialogues drawn directly from game data of Obsidian Entertainment's The Outer Worlds, leading to real-world complexities in generation: (1) dialogues are branching trees as opposed to linear chains of utterances; (2) utterances must remain faithful to the game lore -- character personas, backstories, and entity relationships; and (3) a dialogue must accurately reveal new quest details to the human player. We report results for a set of neural generation models using supervised and in-context learning techniques; we find competent performance but room for future work addressing the challenges of creating realistic, game-quality dialogues.",,arXiv,['cs.cl'],, +1102,batch prompting efficient inference with large language model apis,"['Zhoujun Cheng', 'Jungo Kasai', 'Tao Yu']",http://arxiv.org/pdf/2301.08721v2.pdf,2023-01-19,," Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance.
We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly~(up to 5x with six samples in batch) reduces the LLM (Codex) inference token and time costs while achieving better or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code can be found at the site https://github.com/xlang-ai/batch-prompting.",,arXiv,"['cs.cl', 'cs.ai']",, +1103,finding support examples for incontext learning,"['Xiaonan Li', 'Xipeng Qiu']",http://arxiv.org/pdf/2302.13539v3.pdf,2023-02-27,," Additionally, the strong dependency among in-context examples makes it an NP-hard combinatorial optimization problem and enumerating all permutations is infeasible. Hence we propose LENS, a fiLter-thEN-Search method to tackle this challenge in two stages: First we filter the dataset to obtain informative in-context examples individually. Specifically, we propose a novel metric, InfoScore, to evaluate the example's in-context informativeness based on the language model's feedback, and further propose a progressive filtering process to filter out uninformative examples. Then we propose diversity-guided example search which iteratively refines and evaluates the selected example permutations, to find examples that fully depict the task. The experimental results show that LENS significantly outperforms a wide range of baselines.",,arXiv,['cs.cl'],, +1104,selfplanning code generation with large language models,"['Xue Jiang', 'Yihong Dong', 'Lecheng Wang', 'Zheng Fang', 'Qiwei Shang', 'Ge Li', 'Zhi Jin', 'Wenpin Jiao']",http://arxiv.org/pdf/2303.06689v2.pdf,2023-03-12,," Although large language models have demonstrated impressive ability in code generation, they are still struggling to address the complicated intent provided by humans. It is widely acknowledged that humans typically employ planning to decompose complex problems and schedule the solution steps prior to implementation. Thus we introduce planning into code generation to help the model understand complex intent and reduce the difficulty of problem solving. This paper proposes a self-planning code generation method with large language model, which consists of two phases, namely planning phase and implementation phase. Specifically, in the planning phase, the language model plans out the solution steps from the intent combined with in-context learning. Then it enters the implementation phase, where the model generates code step by step, guided by the solution steps. The effectiveness of self-planning code generation has been rigorously evaluated on multiple code generation datasets and the results have demonstrated a marked superiority over naive direct generation approaches with language model.
The improvement in performance is substantial, highlighting the significance of self-planning in code generation tasks.",,arXiv,['cs.se'],, +1105,gpt is becoming a turing machine here are some ways to program it,"['Ana Jojic', 'Zhen Wang', 'Nebojsa Jojic']",http://arxiv.org/pdf/2303.14310v1.pdf,2023-03-25,," We demonstrate that, through appropriate prompting, GPT-3 family of models can be triggered to perform iterative behaviours necessary to execute (rather than just write or recall) programs that involve loops, including several popular algorithms found in computer science curricula or software developer interviews. We trigger execution and description of Iterations by Regimenting Self-Attention (IRSA) in one (or a combination) of three ways: 1) Using strong repetitive structure in an example of an execution path of a target program for one particular input, 2) Prompting with fragments of execution paths, and 3) Explicitly forbidding (skipping) self-attention to parts of the generated text. On a dynamic program execution, IRSA leads to larger accuracy gains than replacing the model with the much more powerful GPT-4. IRSA has promising applications in education, as the prompts and responses resemble student assignments in data structures and algorithms classes. Our findings hold implications for evaluating LLMs, which typically target the in-context learning: We show that prompts that may not even cover one full task example can trigger algorithmic behaviour, allowing solving problems previously thought of as hard for LLMs, such as logical puzzles. Consequently, prompt design plays an even more critical role in LLM performance than previously recognized.",,arXiv,['cs.cl'],, +1106,is chatgpt a highly fluent grammatical error correction system a comprehensive evaluation,"['Tao Fang', 'Shu Yang', 'Kaixin Lan', 'Derek F. Wong', 'Jinpeng Hu', 'Lidia S. Chao', 'Yue Zhang']",http://arxiv.org/pdf/2304.01746v1.pdf,2023-04-04,," ChatGPT, a large-scale language model based on the advanced GPT-3.5 architecture, has shown remarkable potential in various Natural Language Processing (NLP) tasks. However, there is currently a dearth of comprehensive study exploring its potential in the area of Grammatical Error Correction (GEC). To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT. Our evaluation involves assessing ChatGPT's performance on five official test sets in three different languages, along with three document-level GEC test sets in English. Our experimental results and human evaluations demonstrate that ChatGPT has excellent error detection capabilities and can freely correct errors to make the corrected sentences very fluent, possibly due to its over-correction tendencies and not adhering to the principle of minimal edits.
Additionally, its performance in non-English and low-resource settings highlights its potential in multilingual GEC tasks. However, further analysis of various types of errors at the document-level has shown that ChatGPT cannot effectively correct agreement, coreference, tense errors across sentences, and cross-sentence boundary errors.",,arXiv,['cs.cl'],, +1107,a latent space theory for emergent abilities in large language models,['Hui Jiang'],http://arxiv.org/pdf/2304.09960v3.pdf,2023-04-19,," Languages are not created randomly but rather to communicate information. There is a strong association between languages and their underlying meanings, resulting in a sparse joint distribution that is heavily peaked according to their correlations. Moreover, these peak values happen to match with the marginal distribution of languages due to the sparsity. With the advent of LLMs trained on big data and large models, we can now precisely assess the marginal distribution of languages, providing a convenient means of exploring the sparse structures in the joint distribution for effective inferences. In this paper, we categorize languages as either unambiguous or {\epsilon}-ambiguous and present quantitative results to demonstrate that the emergent abilities of LLMs, such as language understanding, in-context learning, chain-of-thought prompting, and effective instruction fine-tuning, can all be attributed to Bayesian inference on the sparse joint distribution of languages.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1108,"stance detection with supervised, zeroshot, and fewshot applications",['Michael Burnham'],http://arxiv.org/pdf/2305.01723v1.pdf,2023-05-02,," Stance detection is the identification of an author's beliefs about a subject from a document. Researchers widely rely on sentiment analysis to accomplish this. However, recent research has show that sentiment analysis is only loosely correlated with stance, if at all. This paper advances methods in text analysis by precisely defining the task of stance detection, providing a generalized framework for the task, and then presenting three distinct approaches for performing stance detection: supervised classification, zero-shot classification with NLI classifiers, and in-context learning. In doing so, I demonstrate how zero-shot and few-shot language classifiers can replace human labelers for a variety of tasks and discuss how their application and limitations differ from supervised classifiers. Finally, I demonstrate an application of zero-shot stance detection by replicating Block Jr et al. (2022).",,arXiv,['cs.cl'],, +1109,wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models,"['John Giorgi', 'Augustin Toma', 'Ronald Xie', 'Sondra S. Chen', 'Kevin R. An', 'Grace X. Zheng', 'Bo Wang']",http://arxiv.org/pdf/2305.02220v2.pdf,2023-05-03,," This paper describes our submission to the MEDIQA-Chat 2023 shared task for automatic clinical note generation from doctor-patient conversations. We report results for two approaches: the first fine-tunes a pre-trained language model (PLM) on the shared task data, and the second uses few-shot in-context learning (ICL) with a large language model (LLM). Both achieve high performance as measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and first, respectively, of all submissions to the shared task.
Expert human scrutiny indicates that notes generated via the ICL-based approach with GPT-4 are preferred about as often as human-written notes, making it a promising path toward automated note generation from doctor-patient conversations.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1110,how good are commercial large language models on african languages,"['Jessica Ojo', 'Kelechi Ogueji']",http://arxiv.org/pdf/2305.06530v1.pdf,2023-05-11,," Recent advancements in Natural Language Processing (NLP) has led to the proliferation of large pretrained language models. These models have been shown to yield good performance, using in-context learning, even on unseen tasks and languages. They have also been exposed as commercial APIs as a form of language-model-as-a-service, with great adoption. However, their performance on African languages is largely unknown. We present a preliminary analysis of commercial large language models on two tasks (machine translation and text classification) across eight African languages, spanning different language families and geographical areas. Our results suggest that commercial language models produce below-par performance on African languages. We also find that they perform better on text classification than machine translation. In general, our findings present a call-to-action to ensure African languages are well represented in commercial large language models, given their growing popularity.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1111,chainofdictionary prompting elicits translation in large language models,"['Hongyuan Lu', 'Haoyang Huang', 'Dongdong Zhang', 'Haoran Yang', 'Wai Lam', 'Furu Wei']",http://arxiv.org/pdf/2305.06575v3.pdf,2023-05-11,," Large language models (LLMs) have shown surprisingly good performance in multilingual neural machine translation (MNMT) even when trained without parallel data. Yet, despite the fact that the amount of training data is gigantic, they still struggle with translating rare words, particularly for low-resource languages. Even worse, it is usually unrealistic to retrieve relevant demonstrations for in-context learning with low-resource languages on LLMs, which restricts the practical use of LLMs for translation -- how should we mitigate this problem? To this end, we present a novel method, CoD, which augments LLMs with prior knowledge with the chains of multilingual dictionaries for a subset of input words to elicit translation abilities for LLMs. Extensive experiments indicate that augmenting ChatGPT with CoD elicits large gains by up to 13x chrF++ points for MNMT (3.08 to 42.63 for English to Serbian written in Cyrillic script) on FLORES-200 full devtest set. We further demonstrate the importance of chaining the multilingual dictionaries, as well as the superiority of CoD to few-shot demonstration for low-resource languages.",,arXiv,['cs.cl'],, +1112,autotrial prompting language models for clinical trial design,"['Zifeng Wang', 'Cao Xiao', 'Jimeng Sun']",http://arxiv.org/pdf/2305.11366v2.pdf,2023-05-19,," Clinical trials are critical for drug development. Constructing the appropriate eligibility criteria (i.e., the inclusion/exclusion criteria for patient recruitment) is essential for the trial's success. Proper design of clinical trial protocols should consider similar precedent trials and their eligibility criteria to ensure sufficient patient coverage. In this paper, we present a method named AutoTrial to aid the design of clinical eligibility criteria using language models.
It allows (1) controllable generation under instructions via a hybrid of discrete and neural prompting, (2) scalable knowledge incorporation via in-context learning, and (3) explicit reasoning chains to provide rationales for understanding the outputs. Experiments on over 70K clinical trials verify that AutoTrial generates high-quality criteria texts that are fluent and coherent and with high accuracy in capturing the relevant clinical concepts to the target trial. It is noteworthy that our method, with a much smaller parameter size, gains around 60% winning rate against the GPT-3.5 baselines via human evaluations.",,arXiv,['cs.cl'],, +1113,"how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings","['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2305.11853v3.pdf,2023-05-19,," Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task. Previous research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance the performance of LLMs. However, those works often employ varied strategies when constructing the prompt text for text-to-SQL inputs, such as databases and demonstration examples. This leads to a lack of comparability in both the prompt constructions and their primary contributions. Furthermore, selecting an effective prompt construction has emerged as a persistent problem for future research. To address this limitation, we comprehensively investigate the impact of prompt constructions across various settings and provide insights into prompt constructions for future text-to-SQL studies.",,arXiv,['cs.cl'],, +1114,factchecking complex claims with programguided reasoning,"['Liangming Pan', 'Xiaobao Wu', 'Xinyuan Lu', 'Anh Tuan Luu', 'William Yang Wang', 'Min-Yen Kan', 'Preslav Nakov']",http://arxiv.org/pdf/2305.12744v1.pdf,2023-05-22,," Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning. In this paper, we present Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions. We first leverage the in-context learning ability of large language models to generate reasoning programs to guide the verification process. Afterward, we execute the program by delegating each sub-task to the corresponding sub-task handler. This process makes our model both explanatory and data-efficient, providing clear explanations of its reasoning process and requiring minimal training data. We evaluate ProgramFC on two challenging fact-checking datasets and show that it outperforms seven fact-checking baselines across different settings of evidence availability, with explicit output programs that benefit human debugging. Our codes and data are publicly available at https://github.com/mbzuai-nlp/ProgramFC.",,arXiv,"['cs.cl', 'cs.ai']",, +1115,mailex email event and argument extraction,"['Saurabh Srivastava', 'Gaurav Singh', 'Shou Matsumoto', 'Ali Raz', 'Paulo Costa', 'Joshua Poore', 'Ziyu Yao']",http://arxiv.org/pdf/2305.13469v2.pdf,2023-05-22,," In this work, we present the first dataset, MailEx, for performing event extraction from conversational email threads. To this end, we first proposed a new taxonomy covering 10 event types and 76 arguments in the email domain. Our final dataset includes 1.5K email threads and ~4K emails, which are annotated with totally ~8K event instances.
To understand the task challenges, we conducted a series of experiments comparing three types of approaches, i.e., fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot in-context learning. Our results showed that the task of email event extraction is far from being addressed, due to challenges lying in, e.g., extracting non-continuous, shared trigger spans, extracting non-named entity arguments, and modeling the email conversational history. Our work thus suggests more future investigations in this domain-specific event extraction task.",,arXiv,"['cs.cl', 'cs.ai']",, +1116,can chatgpt detect intent evaluating large language models for spoken language understanding,"['Mutian He', 'Philip N. Garner']",http://arxiv.org/pdf/2305.13512v2.pdf,2023-05-22,," Recently, large pretrained language models have demonstrated strong language understanding capabilities. This is particularly reflected in their zero-shot and in-context learning abilities on downstream tasks through prompting. To assess their impact on spoken language understanding (SLU), we evaluate several such models like ChatGPT and OPT of different sizes on multiple benchmarks. We verify the emergent ability unique to the largest models as they can reach intent classification accuracy close to that of supervised models with zero or few shots on various languages given oracle transcripts. By contrast, the results for smaller models fitting a single GPU fall far behind. We note that the error cases often arise from the annotation scheme of the dataset; responses from ChatGPT are still reasonable. We show, however, that the model is worse at slot filling, and its performance is sensitive to ASR errors, suggesting serious challenges for the application of those textual models on SLU.",,arXiv,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",, +1117,logicllm exploring selfsupervised logicenhanced training for large language models,"['Fangkai Jiao', 'Zhiyang Teng', 'Shafiq Joty', 'Bosheng Ding', 'Aixin Sun', 'Zhengyuan Liu', 'Nancy F. Chen']",http://arxiv.org/pdf/2305.13718v2.pdf,2023-05-23,," Existing efforts to improve logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks. The development of Large Language Models (LLMs) has demonstrated the capacity of compressing abundant knowledge into a single proxy, enabling them to tackle multiple tasks effectively. Our preliminary experiments, nevertheless, show that LLMs do not show capability on logical reasoning. The performance of LLMs on logical reasoning benchmarks is far behind the existing state-of-the-art baselines. In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training, and activating it via in-context learning, which we termed as LogicLLM. Specifically, we devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to 13 billion. The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive ablation studies to analyze the key factors in designing logic-oriented proxy tasks.",,arXiv,['cs.cl'],, +1118,make a choice!
knowledge base question answering with incontext learning,"['Chuanyuan Tan', 'Yuehe Chen', 'Wenbiao Shao', 'Wenliang Chen']",http://arxiv.org/pdf/2305.13972v1.pdf,2023-05-23,," Question answering over knowledge bases (KBQA) aims to answer factoid questions with a given knowledge base (KB). Due to the large scale of KB, annotated data is impossible to cover all fact schemas in KB, which poses a challenge to the generalization ability of methods that require a sufficient amount of annotated data. Recently, LLMs have shown strong few-shot performance in many NLP tasks. We expect LLM can help existing methods improve their generalization ability, especially in low-resource situations. In this paper, we present McL-KBQA, a framework that incorporates the few-shot ability of LLM into the KBQA method via ICL-based multiple choice and then improves the effectiveness of the QA tasks. Experimental results on two KBQA datasets demonstrate the competitive performance of McL-KBQA with strong improvements in generalization. We expect to explore a new way to QA tasks from KBQA in conjunction with LLM, how to generate answers normatively and correctly with strong generalization.",,arXiv,['cs.cl'],, +1119,ctqscorer combining multiple features for incontext example selection for machine translation,"['Aswanth Kumar', 'Ratish Puduppully', 'Raj Dabre', 'Anoop Kunchukuttan']",http://arxiv.org/pdf/2305.14105v2.pdf,2023-05-23,," Large language models have demonstrated the capability to perform on machine translation when the input is prompted with a few examples (in-context learning). Translation quality depends on various features of the selected examples, such as their quality and relevance, but previous work has predominantly focused on individual features in isolation. In this paper, we propose a general framework for combining different features influencing example selection. We learn a regression model, CTQ Scorer (Contextual Translation Quality), that selects examples based on multiple features in order to maximize the translation quality. On multiple language pairs and language models, we show that CTQ Scorer helps significantly outperform random selection as well as strong single-factor baselines reported in the literature. We also see an improvement of over 2.5 COMET points on average with respect to a strong BM25 retrieval-based baseline.",,arXiv,"['cs.cl', 'cs.ai']",, +1120,empowering llmbased machine translation with cultural awareness,"['Binwei Yao', 'Ming Jiang', 'Diyi Yang', 'Junjie Hu']",http://arxiv.org/pdf/2305.14328v1.pdf,2023-05-23,," Traditional neural machine translation (NMT) systems often fail to translate sentences that contain culturally specific information. Most previous NMT methods have incorporated external cultural knowledge during training, which requires fine-tuning on low-frequency items specific to the culture. Recent in-context learning utilizes lightweight prompts to guide large language models (LLMs) to perform machine translation, however, whether such an approach works in terms of injecting culture awareness into machine translation remains unclear. To this end, we introduce a new data curation pipeline to construct a culturally relevant parallel corpus, enriched with annotations of cultural-specific entities. Additionally, we design simple but effective prompting strategies to assist this LLM-based translation.
Extensive experiments show that our approaches can largely help incorporate cultural knowledge into LLM-based machine translation, outperforming traditional NMT systems in translating cultural-specific sentences.",,arXiv,['cs.cl'],, +1121,selfchecker plugandplay modules for factchecking with large language models,"['Miaoran Li', 'Baolin Peng', 'Zhu Zhang']",http://arxiv.org/pdf/2305.14623v1.pdf,2023-05-24,," Fact-checking is an essential task in NLP that is commonly utilized for validating the factual accuracy of claims. Prior work has mainly focused on fine-tuning pre-trained language models on specific datasets, which can be computationally intensive and time-consuming. With the rapid development of large language models (LLMs), such as ChatGPT and GPT-3, researchers are now exploring their in-context learning capabilities for a wide range of tasks. In this paper, we aim to assess the capacity of LLMs for fact-checking by introducing Self-Checker, a framework comprising a set of plug-and-play modules that facilitate fact-checking by purely prompting LLMs in an almost zero-shot setting. This framework provides a fast and efficient way to construct fact-checking systems in low-resource environments. Empirical results demonstrate the potential of Self-Checker in utilizing LLMs for fact-checking. However, there is still significant room for improvement compared to SOTA fine-tuned models, which suggests that LLM adoption could be a promising approach for future fact-checking research.",,arXiv,['cs.cl'],, +1122,expertprompting instructing large language models to be distinguished experts,"['Benfeng Xu', 'An Yang', 'Junyang Lin', 'Quan Wang', 'Chang Zhou', 'Yongdong Zhang', 'Zhendong Mao']",http://arxiv.org/pdf/2305.14688v1.pdf,2023-05-24,," The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answer conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96\% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at \url{https://github.com/OFA-Sys/ExpertLLaMA}.",,arXiv,"['cs.cl', 'cs.ai']",, +1123,getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning,"['Tianqing Fang', 'Zhaowei Wang', 'Wenxuan Zhou', 'Hongming Zhang', 'Yangqiu Song', 'Muhao Chen']",http://arxiv.org/pdf/2305.14970v1.pdf,2023-05-24,," Event temporal reasoning aims at identifying the temporal relations between two or more events. However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model. We first systematically define distinct kinds of bias in event temporal reasoning, which include event relation prior bias, tense bias, narrative bias, and dependency bias, as indicators to study knowledge conflicts.
To mitigate such event-related knowledge conflict, we introduce a Counterfactual Data Augmentation based method that can be applied to both Pre-trained Language Models (PLMs) and Large Language Models (LLMs) either as additional training data or demonstrations for In-Context Learning. Experiments suggest the importance of mitigating knowledge conflicts in event temporal reasoning tasks for reducing hallucination and highlight the potential of counterfactual data augmentation for improving model performance.",,arXiv,"['cs.cl', 'cs.ai']",, +1124,boosting crosslingual transferability in multilingual models via incontext learning,"['Sunkyoung Kim', 'Dayeon Ki', 'Yireun Kim', 'Jinsik Lee']",http://arxiv.org/pdf/2305.15233v1.pdf,2023-05-24,," Existing cross-lingual transfer (CLT) prompting methods are only concerned with monolingual demonstration examples in the source language. In this paper, we propose In-CLT, a novel cross-lingual transfer prompting method that leverages both source and target languages to construct the demonstration examples. We conduct comprehensive evaluations on multilingual benchmarks, focusing on question answering tasks. Experiment results show that In-CLT prompt not only improves multilingual models' cross-lingual transferability, but also demonstrates remarkable unseen language generalization ability. In-CLT prompting, in particular, improves model performance by 10 to 20\% points on average when compared to prior cross-lingual transfer approaches. We also observe the surprising performance gain on the other multilingual benchmarks, especially in reasoning tasks. Furthermore, we investigate the relationship between lexical similarity and pre-training corpora in terms of the cross-lingual transfer gap.",,arXiv,"['cs.cl', 'cs.ai']",, +1125,a mechanism for solving relational tasks in transformer language models,"['Jack Merullo', 'Carsten Eickhoff', 'Ellie Pavlick']",http://arxiv.org/pdf/2305.16130v2.pdf,2023-05-25,," A primary criticism towards language models (LMs) is their inscrutability. This paper presents evidence that, despite their size and complexity, LMs sometimes exploit a simple computational mechanism to solve one-to-one relational tasks (e.g., capital_of(Poland)=Warsaw). We investigate a range of language model sizes (from 124M parameters to 176B parameters) in an in-context learning setting, and find that for a variety of tasks (involving capital cities, upper-casing, and past-tensing) a key part of the mechanism reduces to a simple linear update typically applied by the feedforward (FFN) networks. These updates also tend to promote the output of the relation in a content-independent way (e.g., encoding Poland:Warsaw::China:Beijing), revealing a predictable pattern that these models take in solving these tasks. We further show that this mechanism is specific to tasks that require retrieval from pretraining memory, rather than retrieval from local context.
Our results contribute to a growing body of work on the mechanistic interpretability of LLMs, and offer reason to be optimistic that, despite the massive and non-linear nature of the models, the strategies they ultimately use to solve tasks can sometimes reduce to familiar and even intuitive algorithms.",,arXiv,"['cs.cl', 'cs.lg']",, +1126,augmenting large language model translators via translation memories,"['Yongyu Mu', 'Abudurexiti Reheman', 'Zhiquan Cao', 'Yuchun Fan', 'Bei Li', 'Yinqiao Li', 'Tong Xiao', 'Chunliang Zhang', 'Jingbo Zhu']",http://arxiv.org/pdf/2305.17367v1.pdf,2023-05-27,," Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to ``understand'' prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.",,arXiv,['cs.cl'],, +1127,towards explainable conversational recommender systems,"['Shuyu Guo', 'Shuo Zhang', 'Weiwei Sun', 'Pengjie Ren', 'Zhumin Chen', 'Zhaochun Ren']",http://arxiv.org/pdf/2305.18363v1.pdf,2023-05-27,," Explanations in conventional recommender systems have demonstrated benefits in helping the user understand the rationality of the recommendations and improving the system's efficiency, transparency, and trustworthiness. In the conversational environment, multiple contextualized explanations need to be generated, which poses further challenges for explanations. To better measure explainability in conversational recommender systems (CRS), we propose ten evaluation perspectives based on concepts from conventional recommender systems together with the characteristics of CRS. We assess five existing CRS benchmark datasets using these metrics and observe the necessity of improving the explanation quality of CRS. To achieve this, we conduct manual and automatic approaches to extend these dialogues and construct a new CRS dataset, namely Explainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues with over 2,000 high-quality rewritten explanations. We compare two baseline approaches to perform explanation generation based on E-ReDial. Experimental results suggest that models trained on E-ReDial can significantly improve explainability while introducing knowledge into the models can further improve the performance. GPT-3 in the in-context learning setting can generate more realistic and diverse movie descriptions. In contrast, T5 training on E-ReDial can better generate clear reasons for recommendations based on user preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.",,arXiv,"['cs.ir', 'cs.ai']",, +1128,grammar prompting for domainspecific language generation with large language models,"['Bailin Wang', 'Zi Wang', 'Xuezhi Wang', 'Yuan Cao', 'Rif A. Saurous', 'Yoon Kim']",http://arxiv.org/pdf/2305.19234v3.pdf,2023-05-30,," Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples.
However, for generating strings from highly structured languages (e.g., semantic parsing to complex domain-specific languages), it is challenging for the LLM to generalize from just a few exemplars. We propose \emph{grammar prompting}, a simple approach to enable LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus--Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. For inference, the LLM first predicts a BNF grammar given a test input, and then generates the output according to the rules of the grammar. Experiments demonstrate that grammar prompting can enable LLMs to perform competitively on a diverse set of DSL generation tasks, including semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and SMILES-based molecule generation.",,arXiv,"['cs.cl', 'cs.ai']",, +1129,prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models,"['Fengzhu Zeng', 'Wei Gao']",http://arxiv.org/pdf/2306.02569v1.pdf,2023-06-05,," Few-shot or zero-shot fact verification only relies on a few or no labeled training examples. In this paper, we propose a novel method called ProToCo, to \underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be \underline{Co}nsistent, for improving the factuality assessment capability of PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair, ProToCo generates multiple variants of the claim with different relations and frames a simple consistency mechanism as constraints for making compatible predictions across these variants. We update PLMs by using parameter-efficient fine-tuning (PEFT), leading to more accurate predictions in few-shot and zero-shot fact verification tasks. Our experiments on three public verification datasets show that ProToCo significantly outperforms state-of-the-art few-shot fact verification baselines. With a small number of unlabeled instances, ProToCo also outperforms the strong zero-shot learner T0 on zero-shot verification. Compared to large PLMs using in-context learning (ICL) method, ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in both few- and zero-shot settings.",,arXiv,['cs.cl'],, +1130,modular visual question answering via code generation,"['Sanjay Subramanian', 'Medhini Narasimhan', 'Kushal Khangaonkar', 'Kevin Yang', 'Arsha Nagrani', 'Cordelia Schmid', 'Andy Zeng', 'Trevor Darrell', 'Dan Klein']",http://arxiv.org/pdf/2306.05392v1.pdf,2023-06-08,," We present a framework that formulates visual question answering as modular code generation. In contrast to prior work on modular approaches to VQA, our approach requires no additional training and relies on pre-trained language models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA examples used for in-context learning. The generated Python programs invoke and compose the outputs of the visual models using arithmetic and conditional logic. Our approach improves accuracy on the COVR dataset by at least 3% and on the GQA dataset by roughly 2% compared to the few-shot baseline that does not employ code generation.",,arXiv,['cs.cl'],, +1131,disasterresponsegpt large language models for accelerated plan of action development in disaster response scenarios,"['Vinicius G. Goecks', 'Nicholas R.
Waytowich']",http://arxiv.org/pdf/2306.17271v1.pdf,2023-06-29,," The development of plans of action in disaster response scenarios is a time-consuming process. Large Language Models (LLMs) offer a powerful solution to expedite this process through in-context learning. This study presents DisasterResponseGPT, an algorithm that leverages LLMs to generate valid plans of action quickly by incorporating disaster response and planning guidelines in the initial prompt. In DisasterResponseGPT, users input the scenario description and receive a plan of action as output. The proposed method generates multiple plans within seconds, which can be further refined following the user's feedback. Preliminary results indicate that the plans of action developed by DisasterResponseGPT are comparable to human-generated ones while offering greater ease of modification in real-time. This approach has the potential to revolutionize disaster response operations by enabling rapid updates and adjustments during the plan's execution.",,arXiv,"['cs.lg', 'i.2.7; j.7; k.4.0']",, +1132,metareasoning semanticssymbol deconstruction for large language models,"['Yiming Wang', 'Zhuosheng Zhang', 'Rui Wang']",http://arxiv.org/pdf/2306.17820v2.pdf,2023-06-30,," Neural-symbolic methods have shown their effectiveness in enhancing the reasoning abilities of large language models (LLMs). However, existing methods primarily rely on mapping natural languages to more syntactically complete formal languages (e.g., Python and SQL). Those approaches necessitate that reasoning tasks be convertible into programs, which cater more to the computer execution mindset and deviate from human reasoning habits. To expand the real-world applicability and flexibility of symbolic methods, we propose Meta-Reasoning from the scope of linguistics itself. This method empowers LLMs to deconstruct questions and effectively capture more generalized knowledge autonomously. We find that Meta-Reasoning achieves improved in-context learning efficiency, reasoning accuracy, and output stability in six arithmetic and symbolic reasoning tasks. In particular, when applied to symbolic reasoning tasks such as Tracking Shuffled Objects, GPT-3 (text-davinci-002) surpasses the few-shot Chain-of-Thought prompting approach (+37.7%), with 99% accuracy after a single demonstration of Meta-Reasoning.",,arXiv,['cs.cl'],, +1133,reasoning before responding integrating commonsensebased causality explanation for empathetic response generation,"['Yahui Fu', 'Koji Inoue', 'Chenhui Chu', 'Tatsuya Kawahara']",http://arxiv.org/pdf/2308.00085v2.pdf,2023-07-28,," Recent approaches to empathetic response generation try to incorporate commonsense knowledge or reasoning about the causes of emotions to better understand the user's experiences and feelings. However, these approaches mainly focus on understanding the causalities of context from the user's perspective, ignoring the system's perspective. In this paper, we propose a commonsense-based causality explanation approach for diverse empathetic response generation that considers both the user's perspective (user's desires and reactions) and the system's perspective (system's intentions and reactions). We enhance ChatGPT's ability to reason for the system's perspective by integrating in-context learning with commonsense knowledge. Then, we integrate the commonsense-based causality explanation with both ChatGPT and a T5-based model.
Experimental evaluations demonstrate that our method outperforms other comparable methods on both automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",, +1134,jen1 textguided universal music generation with omnidirectional diffusion models,"['Peike Li', 'Boyu Chen', 'Yao Yao', 'Yikai Wang', 'Allen Wang', 'Alex Wang']",http://arxiv.org/pdf/2308.04729v1.pdf,2023-08-09,," Music generation has attracted growing interest with the advancement of deep generative models. However, generating music conditioned on textual descriptions, known as text-to-music, remains challenging due to the complexity of musical structures and high sampling rate requirements. Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization. This paper introduces JEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is a diffusion model incorporating both autoregressive and non-autoregressive training. Through in-context learning, JEN-1 performs various generation tasks including text-guided music generation, music inpainting, and continuation. Evaluations demonstrate JEN-1's superior performance over state-of-the-art methods in text-music alignment and music quality while maintaining computational efficiency. Our demos are available at http://futureverse.com/research/jen/demos/jen1",,arXiv,"['cs.sd', 'cs.ai', 'cs.lg', 'cs.mm', 'eess.as']",, +1135,algorithm of thoughts enhancing exploration of ideas in large language models,"['Bilgehan Sel', 'Ahmad Al-Tawaha', 'Vanshaj Khattar', 'Ruoxi Jia', 'Ming Jin']",http://arxiv.org/pdf/2308.10379v2.pdf,2023-08-20,," Current literature, aiming to surpass the ""Chain-of-Thought"" approach, often resorts to an external modus operandi involving halting, modifying, and then resuming the generation process to boost Large Language Models' (LLMs) reasoning capacities. This mode escalates the number of query requests, leading to increased costs, memory, and computational overheads. Addressing this, we propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through algorithmic reasoning pathways, pioneering a new mode of in-context learning. By employing algorithmic examples, we exploit the innate recurrence dynamics of LLMs, expanding their idea exploration with merely one or a few queries. Our technique outperforms earlier single-query methods and stands on par with a recent multi-query strategy that employs an extensive tree search algorithm. Intriguingly, our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself, hinting at LLM's inherent ability to weave its intuition into optimized searches. We probe into the underpinnings of our method's efficacy and its nuances in application.",,arXiv,"['cs.cl', 'cs.ai']",, +1136,building emotional support chatbots in the era of llms,"['Zhonghua Zheng', 'Lizi Liao', 'Yang Deng', 'Liqiang Nie']",http://arxiv.org/pdf/2308.11584v1.pdf,2023-08-17,," The integration of emotional support into various conversational scenarios presents profound societal benefits, such as social interactions, mental health counseling, and customer service. However, there are unsolved challenges that hinder real-world applications in this field, including limited data availability and the absence of well-accepted model training paradigms. This work endeavors to navigate these challenges by harnessing the capabilities of Large Language Models (LLMs).
We introduce an innovative methodology that synthesizes human insights with the computational prowess of LLMs to curate an extensive emotional support dialogue dataset. Our approach is initiated with a meticulously designed set of dialogues spanning diverse scenarios as generative seeds. By utilizing the in-context learning potential of ChatGPT, we recursively generate an ExTensible Emotional Support dialogue dataset, named ExTES. Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions. An exhaustive assessment of the resultant model showcases its proficiency in offering emotional support, marking a pivotal step in the realm of emotional support bots and paving the way for subsequent research and implementations.",,arXiv,"['cs.cl', 'cs.ai']",, +1137,breaking the bank with chatgpt fewshot text classification for finance,"['Lefteris Loukas', 'Ilias Stogiannidis', 'Prodromos Malakasiotis', 'Stavros Vassos']",http://arxiv.org/pdf/2308.14634v1.pdf,2023-08-28,," We propose the use of conversational GPT models for easy and quick few-shot text classification in the financial domain using the Banking77 dataset. Our approach involves in-context learning with GPT-3.5 and GPT-4, which minimizes the technical expertise required and eliminates the need for expensive GPU computing while yielding quick and accurate results. Additionally, we fine-tune other pre-trained, masked language models with SetFit, a recent contrastive learning technique, to achieve state-of-the-art results both in full-data and few-shot settings. Our findings show that querying GPT-3.5 and GPT-4 can outperform fine-tuned, non-generative models even with fewer examples. However, subscription fees associated with these solutions may be considered costly for small organizations. Lastly, we find that generative models perform better on the given task when shown representative samples selected by a human expert rather than when shown random ones. We conclude that a) our proposed methods offer a practical solution for few-shot tasks in datasets with limited label availability, and b) our state-of-the-art results can inspire future work in the area.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'q-fin.cp']",, +1138,genderspecific machine translation with large language models,"['Eduardo Sánchez', 'Pierre Andrews', 'Pontus Stenetorp', 'Mikel Artetxe', 'Marta R. Costa-jussà']",http://arxiv.org/pdf/2309.03175v1.pdf,2023-09-06,," Decoder-only Large Language Models (LLMs) have demonstrated potential in machine translation (MT), albeit with performance slightly lagging behind traditional encoder-decoder Neural Machine Translation (NMT) systems. However, LLMs offer a unique advantage: the ability to control the properties of the output through prompts. In this study, we harness this flexibility to explore LLaMa's capability to produce gender-specific translations for languages with grammatical gender. Our results indicate that LLaMa can generate gender-specific translations with competitive accuracy and gender bias mitigation when compared to NLLB, a state-of-the-art multilingual NMT system. Furthermore, our experiments reveal that LLaMa's translations are robust, showing significant performance drops when evaluated against opposite-gender references in gender-ambiguous datasets but maintaining consistency in less ambiguous contexts.
This research provides insights into the potential and challenges of using LLMs for gender-specific translations and highlights the importance of in-context learning to elicit new tasks in LLMs.",,arXiv,['cs.cl'],, +1139,improving open information extraction with large language models a study on demonstration uncertainty,"['Chen Ling', 'Xujiang Zhao', 'Xuchao Zhang', 'Yanchi Liu', 'Wei Cheng', 'Haoyu Wang', 'Zhengzhang Chen', 'Takao Osaki', 'Katsushi Matsuda', 'Haifeng Chen', 'Liang Zhao']",http://arxiv.org/pdf/2309.03433v1.pdf,2023-09-07,," Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text, typically in the form of (subject, relation, object) triples. Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant context from relevant relations and generate structured output due to the restrictions on fine-tuning the model. Second, LLMs generate responses autoregressively based on probability, which makes the predicted relations lack confidence. In this paper, we assess the capabilities of LLMs in improving the OIE task. Particularly, we propose various in-context learning strategies to enhance LLM's instruction-following ability and a demonstration uncertainty quantification module to enhance the confidence of the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively.",,arXiv,['cs.cl'],, +1140,epa easy prompt augmentation on large language models via multiple sources and multiple targets,"['Hongyuan Lu', 'Wai Lam']",http://arxiv.org/pdf/2309.04725v1.pdf,2023-09-09,," Large language models (LLMs) have shown promising performance on various NLP tasks via task prompting. And their performance can be further improved by appending task demonstrations to the head of the prompt. And usually, a better performance can be achieved with more demonstrations. However, asking the users to write the demonstrations can be cumbersome. As a simple yet cost-effective workaround, this paper proposes a novel method called EPA (\textbf{E}asy \textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considers augmenting prompts via demonstrations, we name it EPA as the name EDA is already taken by a well-known NLP method \citep{wei-zou-2019-eda}.} that effectively minimizes user efforts in writing demonstrations while improving the model performance at the same time. EPA achieves these goals by automatically augmenting the demonstrations with multiple sources/targets, where each of them paraphrases each other. This is well motivated as augmenting data via paraphrasing effectively improves neural language models. EPA thus employs paraphrasing as an augmentation method for in-context learning. Extensive experiments indicate that EPA effectively improves both NLU and NLG tasks, covering from natural language inference to machine translation in translating tens of languages.\footnote{Code and data will be released upon publication.}",,arXiv,['cs.cl'],, +1141,converser fewshot conversational dense retrieval with synthetic data generation,"['Chao-Wei Huang', 'Chen-Yu Hsu', 'Tsu-Yuan Hsu', 'Chen-An Li', 'Yun-Nung Chen']",http://arxiv.org/pdf/2309.06748v1.pdf,2023-09-13,," Conversational search provides a natural interface for information retrieval (IR).
Recent approaches have demonstrated promising results in applying dense retrieval to conversational IR. However, training dense retrievers requires large amounts of in-domain paired data. This hinders the development of conversational dense retrievers, as abundant in-domain conversations are expensive to collect. In this paper, we propose CONVERSER, a framework for training conversational dense retrievers with at most 6 examples of in-domain dialogues. Specifically, we utilize the in-context learning capability of large language models to generate conversational queries given a passage in the retrieval corpus. Experimental results on conversational retrieval benchmarks OR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparable performance to fully-supervised models, demonstrating the effectiveness of our proposed framework in few-shot conversational dense retrieval. All source code and generated datasets are available at https://github.com/MiuLab/CONVERSER",,arXiv,"['cs.cl', 'cs.ir']",, +1142,"bridging topic, domain, and language shifts an evaluation of comprehensive outofdistribution scenarios","['Andreas Waldis', 'Iryna Gurevych']",http://arxiv.org/pdf/2309.08316v1.pdf,2023-09-15,," Language models (LMs) excel in in-distribution (ID) scenarios where train and test data are independent and identically distributed. However, their performance often degrades in real-world applications like argument mining. Such degradation happens when new topics emerge, or other text domains and languages become relevant. To assess LMs' generalization abilities in such out-of-distribution (OOD) scenarios, we simulate such distribution shifts by deliberately withholding specific instances for testing, as from the social media domain or the topic Solar Energy. Unlike prior studies focusing on specific shifts and metrics in isolation, we comprehensively analyze OOD generalization. We define three metrics to pinpoint generalization flaws and propose eleven classification tasks covering topic, domain, and language shifts. Overall, we find superior performance of prompt-based fine-tuning, notably when train and test splits primarily differ semantically. Simultaneously, in-context learning is more effective than prompt-based or vanilla fine-tuning for tasks when training data embodies heavy discrepancies in label distribution compared to testing data. This reveals a crucial drawback of gradient-based learning: it biases LMs regarding such structural obstacles.",,arXiv,['cs.cl'],, +1143,fewshot adaptation for parsing contextual utterances with llms,"['Kevin Lin', 'Patrick Xia', 'Hao Fang']",http://arxiv.org/pdf/2309.10168v1.pdf,2023-09-18,," We evaluate the ability of semantic parsers based on large language models (LLMs) to handle contextual utterances. In real-world settings, there typically exists only a limited number of annotated contextual utterances due to annotation cost, resulting in an imbalance compared to non-contextual utterances. Therefore, parsers must adapt to contextual utterances with a few training examples. We examine four major paradigms for doing so in conversational semantic parsing i.e., Parse-with-Utterance-History, Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To facilitate such cross-paradigm comparisons, we construct SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with additional annotations.
Experiments with in-context learning and fine-tuning suggest that Rewrite-then-Parse is the most promising paradigm when holistically considering parsing accuracy, annotation cost, and error types.",,arXiv,['cs.cl'],, +1144,toward unified controllable text generation via regular expression instruction,"['Xin Zheng', 'Hongyu Lin', 'Xianpei Han', 'Le Sun']",http://arxiv.org/pdf/2309.10447v2.pdf,2023-09-19,," Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types. However, these approaches often require significant architectural or decoding modifications, making them challenging to apply to additional constraints or resolve different constraint combinations. To address this, our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints. Specifically, our REI supports all popular fine-grained controllable generation constraints, i.e., lexical, positional, and length, as well as their complex combinations, via regular expression-style instructions. Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations. Experiments demonstrate that our straightforward approach yields high success rates and adaptability to various constraints while maintaining competitiveness in automatic metrics and outperforming most previous baselines.",,arXiv,"['cs.cl', 'cs.ai']",, +1145,languageoriented communication with semantic coding and knowledge distillation for texttoimage generation,"['Hyelin Nam', 'Jihong Park', 'Jinho Choi', 'Mehdi Bennis', 'Seong-Lyun Kim']",http://arxiv.org/pdf/2309.11127v1.pdf,2023-09-20,," By integrating recent advances in large language models (LLMs) and generative models into the emerging semantic communication (SC) paradigm, in this article we put forward to a novel framework of language-oriented semantic communication (LSC). In LSC, machines communicate using human language messages that can be interpreted and manipulated via natural language processing (NLP) techniques for SC efficiency. To demonstrate LSC's potential, we introduce three innovative algorithms: 1) semantic source coding (SSC) which compresses a text prompt into its key head words capturing the prompt's syntactic essence while maintaining their appearance order to keep the prompt's context; 2) semantic channel coding (SCC) that improves robustness against errors by substituting head words with their lengthier synonyms; and 3) semantic knowledge distillation (SKD) that produces listener-customized prompts via in-context learning the listener's language style. In a communication task for progressive text-to-image generation, the proposed methods achieve higher perceptual similarities with fewer transmissions while enhancing robustness in noisy communication channels.",,arXiv,"['eess.sp', 'cs.ai', 'cs.cl']",, +1146,towards effective disambiguation for machine translation with large language models,"['Vivek Iyer', 'Pinzhen Chen', 'Alexandra Birch']",http://arxiv.org/pdf/2309.11668v2.pdf,2023-09-20,," Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation.
Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate ""ambiguous sentences"" - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.",,arXiv,['cs.cl'],, +1147,incontext interference in chatbased large language models,"['Eric Nuertey Coleman', 'Julio Hurtado', 'Vincenzo Lomonaco']",http://arxiv.org/pdf/2309.12727v1.pdf,2023-09-22,," Large language models (LLMs) have had a huge impact on society due to their impressive capabilities and vast knowledge of the world. Various applications and tools have been created that allow users to interact with these models in a black-box scenario. However, one limitation of this scenario is that users cannot modify the internal knowledge of the model, and the only way to add or modify internal knowledge is by explicitly mentioning it to the model during the current interaction. This learning process is called in-context training, and it refers to training that is confined to the user's current session or context. In-context learning has significant applications, but also has limitations that are seldom studied. In this paper, we present a study that shows how the model can suffer from interference between information that continually flows in the context, causing it to forget previously learned knowledge, which can reduce the model's performance. Along with showing the problem, we propose an evaluation benchmark based on the bAbI dataset.",,arXiv,"['cs.ai', 'cs.cl']",, +1148,affect recognition in conversations using large language models,"['Shutong Feng', 'Guangzhi Sun', 'Nurul Lubis', 'Chao Zhang', 'Milica Gašić']",http://arxiv.org/pdf/2309.12881v1.pdf,2023-09-22,," Affect recognition, encompassing emotions, moods, and feelings, plays a pivotal role in human communication. In the realm of conversational artificial intelligence (AI), the ability to discern and respond to human affective cues is a critical factor for creating engaging and empathetic interactions. This study delves into the capacity of large language models (LLMs) to recognise human affect in conversations, with a focus on both open-domain chit-chat dialogues and task-oriented dialogues. Leveraging three diverse datasets, namely IEMOCAP, EmoWOZ, and DAIC-WOZ, covering a spectrum of dialogues from casual conversations to clinical interviews, we evaluated and compared LLMs' performance in affect recognition. Our investigation explores the zero-shot and few-shot capabilities of LLMs through in-context learning (ICL) as well as their model capacities through task-specific fine-tuning.
Additionally, this study takes into account the potential impact of automatic speech recognition (ASR) errors on LLM predictions. With this work, we aim to shed light on the extent to which LLMs can replicate human-like affect recognition capabilities in conversations.",,arXiv,['cs.cl'],, +1149,calibrating llmbased evaluator,"['Yuxuan Liu', 'Tianchi Yang', 'Shaohan Huang', 'Zihan Zhang', 'Haizhen Huang', 'Furu Wei', 'Weiwei Deng', 'Feng Sun', 'Qi Zhang']",http://arxiv.org/pdf/2309.13308v1.pdf,2023-09-23,," Recent advancements in large language models (LLMs) on language modeling and emergent capabilities make them a promising reference-free evaluator of natural language generation quality, and a competent alternative to human evaluation. However, hindered by the closed-source or high computational demand to host and tune, there is a lack of practice to further calibrate an off-the-shelf LLM-based evaluator towards better human alignment. In this work, we propose AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate and align an LLM-based evaluator toward human preference. Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels. Then, an initial set of scoring criteria is drafted by the language model itself, leveraging in-context learning on different few-shot examples. To further calibrate this set of criteria, we select the best performers and re-draft them with self-refinement. Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration. Our comprehensive qualitative analysis conveys insightful intuitions and observations on the essence of effective scoring criteria.",,arXiv,['cs.cl'],, +1150,mededit model editing for medical question answering with external knowledge bases,"['Yucheng Shi', 'Shaochen Xu', 'Zhengliang Liu', 'Tianming Liu', 'Xiang Li', 'Ninghao Liu']",http://arxiv.org/pdf/2309.16035v1.pdf,2023-09-27,," Large Language Models (LLMs), although powerful in general domains, often perform poorly on domain-specific tasks like medical question answering (QA). Moreover, they tend to function as ""black-boxes,"" making it challenging to modify their behavior. Addressing this, our study delves into model editing utilizing in-context learning, aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then we incorporate them into the query prompt for the LLM. Focusing on medical QA using the MedQA-SMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM. Notably, our edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. This work underscores the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of black-box LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, +1151,towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method,"['Xuan Zhang', 'Wei Gao']",http://arxiv.org/pdf/2310.00305v1.pdf,2023-09-30,," While large pre-trained language models (LLMs) have shown their impressive capabilities in various NLP tasks, they are still under-explored in the misinformation domain.
In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that only with 4-shot demonstration examples, the performance of several prompting methods can be comparable with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to separate a claim into several subclaims and then verify each of them via multiple question-answering steps progressively. Experiment results on two public misinformation datasets show that HiSS prompting outperforms state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.",,arXiv,['cs.cl'],, +1152,fool your (vision and) language model with embarrassingly simple permutations,"['Yongshuo Zong', 'Tingyang Yu', 'Bingchen Zhao', 'Ruchika Chavhan', 'Timothy Hospedales']",http://arxiv.org/pdf/2310.01651v1.pdf,2023-10-02,," Large language and vision-language models are rapidly being deployed in practice thanks to their impressive capabilities in instruction following, in-context learning, and so on. This raises an urgent need to carefully analyse their robustness so that stakeholders can understand if and when such models are trustworthy enough to be relied upon in any given application. In this paper, we highlight a specific vulnerability in popular models, namely permutation sensitivity in multiple-choice question answering (MCQA). Specifically, we show empirically that popular models are vulnerable to adversarial permutation in answer sets for multiple-choice prompting, which is surprising as models should ideally be as invariant to prompt permutation as humans are. These vulnerabilities persist across various model sizes, and exist in very recent language and vision-language models. Code is available at \url{https://github.com/ys-zong/FoolyourVLLMs}.",,arXiv,['cs.lg'],, +1153,improving automatic vqa evaluation using large language models,"['Oscar Mañas', 'Benno Krojer', 'Aishwarya Agrawal']",http://arxiv.org/pdf/2310.02567v2.pdf,2023-10-04,," 8 years after the visual question answering (VQA) task was proposed, accuracy remains the primary metric for automatic evaluation. VQA Accuracy has been effective so far in the IID evaluation setting. However, our community is undergoing a shift towards open-ended generative models and OOD evaluation. In this new paradigm, the existing VQA Accuracy metric is overly stringent and underestimates the performance of VQA systems. Thus, there is a need to develop more robust automatic VQA metrics that serve as a proxy for human judgment. In this work, we propose to leverage the in-context learning capabilities of instruction-tuned large language models (LLMs) to build a better VQA metric. We formulate VQA evaluation as an answer-rating task where the LLM is instructed to score the accuracy of a candidate answer given a set of reference answers. We demonstrate the proposed metric better correlates with human judgment compared to existing metrics across several VQA models and benchmarks. We hope wide adoption of our metric will contribute to better estimating the research progress on the VQA task.
We plan to release the evaluation code and collected human judgments.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",, +1154,a languageagent approach to formal theoremproving,"['Amitayush Thakur', 'Yeming Wen', 'Swarat Chaudhuri']",http://arxiv.org/pdf/2310.04353v3.pdf,2023-10-06,," Language agents, which use a large language model (LLM) capable of in-context learning to interact with an external environment, have recently emerged as a promising approach to control tasks. We present the first language-agent approach to formal theorem-proving. Our method, COPRA, uses a high-capacity, black-box LLM (GPT-4) as part of a policy for a stateful backtracking search. During the search, the policy can select proof tactics and retrieve lemmas and definitions from an external database. Each selected tactic is executed in the underlying proof framework, and the execution feedback is used to build the prompt for the next policy invocation. The search also tracks selected information from its history and uses it to reduce hallucinations and unnecessary LLM queries. We evaluate our implementation of COPRA on the miniF2F benchmark for Lean and a set of Coq tasks from the Compcert project. On these benchmarks, COPRA significantly outperforms one-shot invocations of GPT-4, as well as state-of-the-art models fine-tuned on proof data, at finding correct proofs quickly. Our code and data are available at https://github.com/trishullab/copra.",,arXiv,"['cs.lg', 'cs.ai', 'cs.lo', 'cs.pl']",, +1155,guideline learning for incontext information extraction,"['Chaoxu Pang', 'Yixuan Cao', 'Qiang Ding', 'Ping Luo']",http://arxiv.org/pdf/2310.05066v2.pdf,2023-10-08,," Large language models (LLMs) can perform a new task by merely conditioning on task instructions and a few input-output examples, without optimizing any parameters. This is called In-Context Learning (ICL). In-context Information Extraction (IE) has recently garnered attention in the research community. However, the performance of In-context IE generally lags behind the state-of-the-art supervised expert models. We highlight a key reason for this shortfall: underspecified task description. The limited-length context struggles to thoroughly express the intricate IE task instructions and various edge cases, leading to misalignment in task comprehension with humans. In this paper, we propose a Guideline Learning (GL) framework for In-context IE which reflectively learns and follows guidelines. During the learning phase, GL automatically synthesizes a set of guidelines based on a few error cases, and during inference, GL retrieves helpful guidelines for better ICL. Moreover, we propose a self-consistency-based active learning method to enhance the efficiency of GL. Experiments on event extraction and relation extraction show that GL can significantly improve the performance of in-context IE.",,arXiv,"['cs.cl', 'cs.lg']",, +1156,harnessing the power of large language models for empathetic response generation empirical investigations and improvements,"['Yushan Qian', 'Wei-Nan Zhang', 'Ting Liu']",http://arxiv.org/pdf/2310.05140v3.pdf,2023-10-08,," Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of a helpful AI. Previous approaches are mainly based on fine small-scale language models. With the advent of ChatGPT, the application effect of large language models (LLMs) in this field has attracted great attention.
This work empirically investigates the performance of LLMs in generating empathetic responses and proposes three improvement methods of semantically similar in-context learning, two-stage interactive generation, and combination with the knowledge base. Extensive experiments show that LLMs can significantly benefit from our proposed methods and is able to achieve state-of-the-art performance in both automatic and human evaluations. Additionally, we explore the possibility of GPT-4 simulating human evaluators.",,arXiv,"['cs.cl', 'cs.ai']",, +1157,selective demonstrations for crossdomain texttosql,"['Shuaichen Chang', 'Eric Fosler-Lussier']",http://arxiv.org/pdf/2310.06302v1.pdf,2023-10-10,," Large language models (LLMs) with in-context learning have demonstrated impressive generalization capabilities in the cross-domain text-to-SQL task, without the use of in-domain annotations. However, incorporating in-domain demonstration examples has been found to greatly enhance LLMs' performance. In this paper, we delve into the key factors within in-domain examples that contribute to the improvement and explore whether we can harness these benefits without relying on in-domain annotations. Based on our findings, we propose a demonstration selection framework ODIS which utilizes both out-of-domain examples and synthetically generated in-domain examples to construct demonstrations. By retrieving demonstrations from hybrid sources, ODIS leverages the advantages of both, showcasing its effectiveness compared to baseline methods that rely on a single data source. Furthermore, ODIS outperforms state-of-the-art approaches on two cross-domain text-to-SQL datasets, with improvements of 1.1 and 11.8 points in execution accuracy, respectively.",,arXiv,['cs.cl'],, +1158,jailbreak and guard aligned language models with only few incontext demonstrations,"['Zeming Wei', 'Yifei Wang', 'Yisen Wang']",http://arxiv.org/pdf/2310.06387v1.pdf,2023-10-10,," Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged. In this paper, we explore the power of In-Context Learning (ICL) in manipulating the alignment ability of LLMs. We find that by providing just few in-context demonstrations without fine-tuning, LLMs can be manipulated to increase or decrease the probability of jailbreaking, i.e. answering malicious prompts. Based on these observations, we propose In-Context Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding aligned language model purposes. ICA crafts malicious contexts to guide models in generating harmful outputs, while ICD enhances model robustness by demonstrations of rejecting to answer harmful prompts. Our experiments show the effectiveness of ICA and ICD in increasing or reducing the success rate of adversarial jailbreaking attacks. Overall, we shed light on the potential of ICL to influence LLM behavior and provide a new perspective for enhancing the safety and alignment of LLMs.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cr']",, +1159,a search for prompts generating structured answers from contracts,"['Adam Roegiest', 'Radha Chitta', 'Jonathan Donnelly', 'Maya Lash', 'Alexandra Vtyurina', 'François Longtin']",http://arxiv.org/pdf/2310.10141v1.pdf,2023-10-16,," In many legal processes being able to action on the concrete implication of a legal question can be valuable to automating human review or signalling certain conditions (e.g., alerts around automatic renewal).
To support such tasks, we present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause. After showing that unstructured generative question answering can have questionable outcomes for such a task, we discuss our exploration methodology for legal question answering prompts using OpenAI's \textit{GPT-3.5-Turbo} and provide a summary of insights. Using insights gleaned from our qualitative experiences, we compare our proposed template prompts against a common semantic matching approach and find that our prompt templates are far more accurate despite being less reliable in the exact response return. With some additional tweaks to prompts and the use of in-context learning, we are able to further improve the performance of our proposed strategy while maximizing the reliability of responses as best we can.",,arXiv,['cs.cv'],, +1160,large language models meet openworld intent discovery and recognition an evaluation of chatgpt,"['Xiaoshuai Song', 'Keqing He', 'Pei Wang', 'Guanting Dong', 'Yutao Mou', 'Jingang Wang', 'Yunsen Xian', 'Xunliang Cai', 'Weiran Xu']",http://arxiv.org/pdf/2310.10176v1.pdf,2023-10-16,," The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial to task-oriented dialogue (TOD) systems. Previous methods address them by fine-tuning discriminative models. Recently, although some studies have been exploring the application of large language models (LLMs) represented by ChatGPT to various downstream tasks, it is still unclear for the ability of ChatGPT to discover and incrementally extent OOD intents. In this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT exhibits consistent advantages under zero-shot settings, but is still at a disadvantage compared to fine-tuned models. More deeply, through a series of analytical experiments, we summarize and discuss the challenges faced by LLMs including clustering, domain-specific understanding, and cross-domain in-context learning scenarios. Finally, we provide empirical guidance for future directions to address these challenges.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1161,moconvq unified physicsbased motion control via scalable discrete representations,"['Heyuan Yao', 'Zhenhua Song', 'Yuyang Zhou', 'Tenglong Ao', 'Baoquan Chen', 'Libin Liu']",http://arxiv.org/pdf/2310.10198v3.pdf,2023-10-16,," In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations. Building upon vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning, our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resultant motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications.
We demonstrate the versatility of MoConVQ through several applications: universal tracking control from various motion sources, interactive character control with latent motion representations using supervised learning, physics-based motion generation from natural language descriptions using the GPT framework, and, most interestingly, seamless integration with large language models (LLMs) with in-context learning to tackle complex and abstract tasks.",,arXiv,"['cs.cv', 'cs.gr']",, +1162,semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking,"['Yuxiang Wu', 'Guanting Dong', 'Weiran Xu']",http://arxiv.org/pdf/2310.10520v3.pdf,2023-10-16,," Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly. However, DST extends beyond simple slot-filling and requires effective updating strategies for tracking dialogue state as conversations progress. In this paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to introduce additional intricate updating strategies in zero-shot DST. Our approach reformulates the DST task by leveraging powerful Large Language Models (LLMs) and translating the original dialogue text to JSON through semantic parsing as an intermediate state. We also design a novel framework that includes more modules to ensure the effectiveness of updating strategies in the text-to-JSON process. Experimental results demonstrate that our approach outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to existing ICL methods. Our code has been released.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1163,mastering the task of open information extraction with large language models and consistent reasoning environment,"['Ji Qi', 'Kaixuan Ji', 'Xiaozhi Wang', 'Jifan Yu', 'Kaisheng Zeng', 'Lei Hou', 'Juanzi Li', 'Bin Xu']",http://arxiv.org/pdf/2310.10590v1.pdf,2023-10-16,," Open Information Extraction (OIE) aims to extract objective structured knowledge from natural texts, which has attracted growing attention to build dedicated models with human experience. As the large language models (LLMs) have exhibited remarkable in-context learning capabilities, a question arises as to whether the task of OIE can be effectively tackled with this paradigm? In this paper, we explore solving the OIE problem by constructing an appropriate reasoning environment for LLMs. Specifically, we first propose a method to effectively estimate the discrepancy of syntactic distribution between a LLM and test samples, which can serve as correlation evidence for preparing positive demonstrations. Upon the evidence, we introduce a simple yet effective mechanism to establish the reasoning environment for LLMs on specific tasks. Without bells and whistles, experimental results on the standard CaRB benchmark demonstrate that our $6$-shot approach outperforms state-of-the-art supervised method, achieving an $55.3$ $F_1$ score.
Further experiments on TACRED and ACE05 show that our method can naturally generalize to other information extraction tasks, resulting in improvements of $5.7$ and $6.8$ $F_1$ scores, respectively.",,arXiv,['cs.cl'],, +1164,exploring automatic evaluation methods based on a decoderbased llm for text generation,"['Tomohito Kasahara', 'Daisuke Kawahara']",http://arxiv.org/pdf/2310.11026v1.pdf,2023-10-17,," Automatic evaluation of text generation is essential for improving the accuracy of generation tasks. In light of the current trend towards increasingly larger decoder-based language models, we investigate automatic evaluation methods based on such models for text generation. This paper compares various methods, including tuning with encoder-based models and large language models under equal conditions, on two different tasks, machine translation evaluation and semantic textual similarity, in two languages, Japanese and English. Experimental results show that compared to the tuned encoder-based models, the tuned decoder-based models perform poorly. The analysis of the causes for this suggests that the decoder-based models focus on surface word sequences and do not capture meaning. It is also revealed that in-context learning of very large decoder-based models such as ChatGPT makes it difficult to identify fine-grained semantic differences.",,arXiv,['cs.cl'],, +1165,learning from red teaming gender bias provocation and mitigation in large language models,"['Hsuan Su', 'Cheng-Chu Cheng', 'Hua Farn', 'Shachi H Kumar', 'Saurav Sahay', 'Shang-Tse Chen', 'Hung-yi Lee']",http://arxiv.org/pdf/2310.11079v1.pdf,2023-10-17,," Recently, researchers have made considerable improvements in dialogue systems with the progress of large language models (LLMs) such as ChatGPT and GPT-4. These LLM-based chatbots encode the potential biases while retaining disparities that can harm humans during interactions. The traditional biases investigation methods often rely on human-written test cases. However, these test cases are usually expensive and limited. In this work, we propose a first-of-its-kind method that automatically generates test cases to detect LLMs' potential gender bias. We apply our method to three well-known LLMs and find that the generated test cases effectively identify the presence of biases. To address the biases identified, we propose a mitigation strategy that uses the generated test cases as demonstrations for in-context learning to circumvent the need for parameter fine-tuning. The experimental results show that LLMs generate fairer responses with the proposed approach.",,arXiv,"['cs.cl', 'cs.ai']",, +1166,evaluating llms for privilegeescalation scenarios,"['Andreas Happe', 'Aaron Kaplan', 'Jürgen Cito']",http://arxiv.org/pdf/2310.11409v2.pdf,2023-10-17,," Penetration testing, an essential component of cybersecurity, allows organizations to proactively identify and remediate vulnerabilities in their systems, thus bolstering their defense mechanisms against potential cyberattacks. One recent advancement in the realm of penetration testing is the utilization of Language Models (LLMs). We explore the intersection of LLMs and penetration testing to gain insight into their capabilities and challenges in the context of privilige escalation. We create an automated Linux privilege-escalation benchmark utilizing local virtual machines. We introduce an LLM-guided privilege-escalation tool designed for evaluating different LLMs and prompt strategies against our benchmark.
We analyze the impact of different prompt designs, the benefits of in-context learning, and the advantages of offering high-level guidance to LLMs. We discuss challenging areas for LLMs, including maintaining focus during testing, coping with errors, and finally comparing them with both stochastic parrots as well as with human hackers.",,arXiv,"['cs.cr', 'cs.ai']",, +1167,measuring pointwise $mathcal{v}$usable information incontextly,"['Sheng Lu', 'Shan Chen', 'Yingya Li', 'Danielle Bitterman', 'Guergana Savova', 'Iryna Gurevych']",http://arxiv.org/pdf/2310.12300v2.pdf,2023-10-18,," In-context learning (ICL) is a new learning paradigm that has gained popularity along with the development of large language models. In this work, we adapt a recently proposed hardness metric, pointwise $\mathcal{V}$-usable information (PVI), to an in-context version (in-context PVI). Compared to the original PVI, in-context PVI is more efficient in that it requires only a few exemplars and does not require fine-tuning. We conducted a comprehensive empirical analysis to evaluate the reliability of in-context PVI. Our findings indicate that in-context PVI estimates exhibit similar characteristics to the original PVI. Specific to the in-context setting, we show that in-context PVI estimates remain consistent across different exemplar selections and numbers of shots. The variance of in-context PVI estimates across different exemplar selections is insignificant, which suggests that in-context PVI are stable. Furthermore, we demonstrate how in-context PVI can be employed to identify challenging instances. Our work highlights the potential of in-context PVI and provides new insights into the capabilities of ICL.",,arXiv,['cs.cl'],, +1168,attack prompt generation for red teaming and defending large language models,"['Boyi Deng', 'Wenjie Wang', 'Fuli Feng', 'Yang Deng', 'Qifan Wang', 'Xiangnan He']",http://arxiv.org/pdf/2310.12505v1.pdf,2023-10-19,," Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompts datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs. Our code and dataset is available on https://github.com/Aatrox103/SAP .",,arXiv,"['cs.cl', 'cs.cr', 'cs.lg']",, +1169,are structural concepts universal in transformer language models towards interpretable crosslingual generalization,"['Ningyu Xu', 'Qi Zhang', 'Jingting Ye', 'Menghan Zhang', 'Xuanjing Huang']",http://arxiv.org/pdf/2310.12794v2.pdf,2023-10-19,," Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages.
However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources.",,arXiv,['cs.cl'],, +1170,mind the instructions a holistic evaluation of consistency and interactions in promptbased learning,"['Lucas Weber', 'Elia Bruni', 'Dieuwke Hupkes']",http://arxiv.org/pdf/2310.13486v1.pdf,2023-10-20,," Finding the best way of adapting pre-trained language models to a task is a big challenge in current NLP. Just like the previous generation of task-tuned models (TT), models that are adapted to tasks via in-context-learning (ICL) are robust in some setups but not in others. Here, we present a detailed analysis of which design choices cause instabilities and inconsistencies in LLM predictions. First, we show how spurious correlations between input distributions and labels -- a known issue in TT models -- form only a minor problem for prompted models. Then, we engage in a systematic, holistic evaluation of different factors that have been found to influence predictions in a prompting setup. We test all possible combinations of a range of factors on both vanilla and instruction-tuned (IT) LLMs of different scale and statistically analyse the results to show which factors are the most influential, interactive or stable. Our results show which factors can be used without precautions and which should be avoided or handled with care in most settings.",,arXiv,"['cs.cl', 'cs.ai']",, +1171,a simple baseline for knowledgebased visual question answering,"['Alexandros Xenos', 'Themos Stafylakis', 'Ioannis Patras', 'Georgios Tzimiropoulos']",http://arxiv.org/pdf/2310.13570v2.pdf,2023-10-20,," This paper is on the problem of Knowledge-Based Visual Question Answering (KB-VQA). Recent works have emphasized the significance of incorporating both explicit (through external databases) and implicit (through LLMs) knowledge to answer questions requiring external knowledge effectively. A common limitation of such approaches is that they consist of relatively complicated pipelines and often heavily rely on accessing GPT-3 API. Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. Contrary to recent approaches, our method is training-free, does not require access to external databases or APIs, and yet achieves state-of-the-art accuracy on the OK-VQA and A-OK-VQA datasets.
Finally, we perform several ablation studies to understand important aspects of our method. Our code is publicly available at https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA",,arXiv,['cs.cv'],, +1172,an incontext schema understanding method for knowledge base question answering,"['Yantao Liu', 'Zixuan Li', 'Xiaolong Jin', 'Long Bai', 'Saiping Guan', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2310.14174v1.pdf,2023-10-22,," The Knowledge Base Question Answering (KBQA) task aims to answer natural language questions based on a given knowledge base. As a kind of common method for this task, semantic parsing-based ones first convert natural language questions to logical forms (e.g., SPARQL queries) and then execute them on knowledge bases to get answers. Recently, Large Language Models (LLMs) have shown strong abilities in language understanding and may be adopted as semantic parsers in such kinds of methods. However, in doing so, a great challenge for LLMs is to understand the schema of knowledge bases. Therefore, in this paper, we propose an In-Context Schema Understanding (ICSU) method for facilitating LLMs to be used as a semantic parser in KBQA. Specifically, ICSU adopts the In-context Learning mechanism to instruct LLMs to generate SPARQL queries with examples. In order to retrieve appropriate examples from annotated question-query pairs, which contain comprehensive schema information related to questions, ICSU explores four different retrieval strategies. Experimental results on the largest KBQA benchmark, KQA Pro, show that ICSU with all these strategies outperforms that with a random retrieval strategy significantly (from 12\% to 78.76\% in accuracy).",,arXiv,['cs.cl'],, +1173,from chaos to clarity claim normalization to empower factchecking,"['Megha Sundriyal', 'Tanmoy Chakraborty', 'Preslav Nakov']",http://arxiv.org/pdf/2310.14338v2.pdf,2023-10-22,," With the rise of social media, users are exposed to many misleading claims. However, the pervasive noise inherent in these posts presents a challenge in identifying precise and prominent claims that require verification. Extracting the important claims from such posts is arduous and time-consuming, yet it is an underexplored problem. Here, we aim to bridge this gap. We introduce a novel task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and noisy social media posts into more straightforward and understandable forms, termed normalized claims. We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation, mimicking human reasoning processes, to comprehend intricate claims. Moreover, we capitalize on the in-context learning capabilities of large language models to provide guidance and to improve claim normalization. To evaluate the effectiveness of our proposed model, we meticulously compile a comprehensive real-world dataset, CLAN, comprising more than 6k instances of social media posts alongside their respective normalized claims. Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures. Finally, our rigorous error analysis validates CACN's capabilities and pitfalls.",,arXiv,"['cs.cl', 'cs.ai']",, +1174,retrievalaugmented chainofthought in semistructured domains,"['Vaibhav Mavi', 'Abulhair Saparov', 'Chen Zhao']",http://arxiv.org/pdf/2310.14435v1.pdf,2023-10-22,," Applying existing question answering (QA) systems to specialized domains like law and finance presents challenges that necessitate domain expertise.
Although large language models (LLMs) have shown impressive language comprehension and in-context learning capabilities, their inability to handle very long inputs/contexts is well known. Tasks specific to these domains need significant background knowledge, leading to contexts that can often exceed the maximum length that existing LLMs can process. This study explores leveraging the semi-structured nature of legal and financial data to efficiently retrieve relevant context, enabling the use of LLMs for domain-specialized QA. The resulting system outperforms contemporary models and also provides useful explanations for the answers, encouraging the integration of LLMs into legal and financial NLP systems for future research.",,arXiv,"['cs.cl', 'cs.ai']",, +1175,statistical depth for ranking and characterizing transformerbased text embeddings,"['Parker Seegmiller', 'Sarah Masud Preum']",http://arxiv.org/pdf/2310.15010v1.pdf,2023-10-23,," The popularity of transformer-based text embeddings calls for better statistical tools for measuring distributions of such embeddings. One such tool would be a method for ranking texts within a corpus by centrality, i.e. assigning each text a number signifying how representative that text is of the corpus as a whole. However, an intrinsic center-outward ordering of high-dimensional text representations is not trivial. A statistical depth is a function for ranking k-dimensional objects by measuring centrality with respect to some observed k-dimensional distribution. We adopt a statistical depth to measure distributions of transformer-based text embeddings, transformer-based text embedding (TTE) depth, and introduce the practical use of this depth for both modeling and distributional inference in NLP pipelines. We first define TTE depth and an associated rank sum test for determining whether two corpora differ significantly in embedding space. We then use TTE depth for the task of in-context learning prompt selection, showing that this approach reliably improves performance over statistical baseline approaches across six text classification tasks. Finally, we use TTE depth and the associated rank sum test to characterize the distributions of synthesized and human-generated corpora, showing that five recent synthetic data augmentation processes cause a measurable distributional shift away from associated human-generated text.",,arXiv,['cs.cl'],, +1176,the bla benchmark investigating basic language abilities of pretrained multimodal models,"['Xinyi Chen', 'Raquel Fernández', 'Sandro Pezzelle']",http://arxiv.org/pdf/2310.15061v1.pdf,2023-10-23,," Despite the impressive performance achieved by pre-trained language-and-vision models in downstream tasks, it remains an open question whether this reflects a proper understanding of image-text interaction. In this work, we explore to what extent they handle basic linguistic constructions -- active-passive voice, coordination, and relative clauses -- that even preschool children can typically master. We present BLA, a novel, automatically constructed benchmark to evaluate multimodal models on these Basic Language Abilities. We show that different types of Transformer-based systems, such as CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting, in line with previous findings. Our experiments, in particular, show that most of the tested models only marginally benefit when fine-tuned or prompted with construction-specific samples. Yet, the generative BLIP2 shows promising trends, especially in an in-context learning setting.
This opens the door to using BLA not only as an evaluation benchmark but also to improve models' basic language abilities.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv']",, +1177,llmintheloop leveraging large language model for thematic analysis,"['Shih-Chieh Dai', 'Aiping Xiong', 'Lun-Wei Ku']",http://arxiv.org/pdf/2310.15100v1.pdf,2023-10-23,," Thematic analysis (TA) has been widely used for analyzing qualitative data in many disciplines and fields. To ensure reliable analysis, the same piece of data is typically assigned to at least two human coders. Moreover, to produce meaningful and useful analysis, human coders develop and deepen their data interpretation and coding over multiple iterations, making TA labor-intensive and time-consuming. Recently the emerging field of large language models (LLMs) research has shown that LLMs have the potential replicate human-like behavior in various tasks: in particular, LLMs outperform crowd workers on text-annotation tasks, suggesting an opportunity to leverage LLMs on TA. We propose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conduct TA with in-context learning (ICL). This framework provides the prompt to frame discussions with a LLM (e.g., GPT-3.5) to generate the final codebook for TA. We demonstrate the utility of this framework using survey datasets on the aspects of the music listening experience and the usage of a password manager. Results of the two case studies show that the proposed framework yields similar coding quality to that of human coders but reduces TA's labor and time demands.",,arXiv,['cs.cl'],, +1178,ui layout generation with llms guided by ui grammar,"['Yuwen Lu', 'Ziang Tong', 'Qinyi Zhao', 'Chengzhi Zhang', 'Toby Jia-Jun Li']",http://arxiv.org/pdf/2310.15455v1.pdf,2023-10-24,," The recent advances in Large Language Models (LLMs) have stimulated interest among researchers and industry professionals, particularly in their application to tasks concerning mobile user interfaces (UIs). This position paper investigates the use of LLMs for UI layout generation. Central to our exploration is the introduction of UI grammar -- a novel approach we proposed to represent the hierarchical structure inherent in UI screens. The aim of this approach is to guide the generative capacities of LLMs more effectively and improve the explainability and controllability of the process. Initial experiments conducted with GPT-4 showed the promising capability of LLMs to produce high-quality user interfaces via in-context learning. Furthermore, our preliminary comparative study suggested the potential of the grammar-based approach in improving the quality of generative results in specific aspects.",,arXiv,"['cs.hc', 'cs.ai']",, +1179,poe process of elimination for multiple choice reasoning,"['Chenkai Ma', 'Xinya Du']",http://arxiv.org/pdf/2310.15575v1.pdf,2023-10-24,," Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but the options in these tasks are treated equally. As humans often first eliminate wrong options before picking the final correct answer, we argue a similar two-step strategy can make LMs better at these tasks. To this end, we present the Process of Elimination (POE), a two-step scoring method. In the first step, POE scores each option, and eliminates seemingly wrong options. In the second step, POE masks these wrong options, and makes the final prediction from the remaining options.
Zero-shot experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a following analysis finds our method to be especially performant on logical reasoning tasks. We further analyze the effect of masks, and show that POE applies to few-shot settings and large language models (LLMs) like ChatGPT.",,arXiv,['cs.cl'],, +1180,webwise web interface control and sequential exploration with large language models,"['Heyi Tao', 'Sethuraman T V', 'Michal Shlapentokh-Rothman', 'Derek Hoiem']",http://arxiv.org/pdf/2310.16042v2.pdf,2023-10-24,," The paper investigates using a Large Language Model (LLM) to automatically perform web software tasks using click, scroll, and text input operations. Previous approaches, such as reinforcement learning (RL) or imitation learning, are inefficient to train and task-specific. Our method uses filtered Document Object Model (DOM) elements as observations and performs tasks step-by-step, sequentially generating small programs based on the current observations. We use in-context learning, either benefiting from a single manually provided example, or an automatically generated example based on a successful zero-shot trial. We evaluate the proposed method on the MiniWob++ benchmark. With only one in-context example, our WebWISE method achieves similar or better performance than other methods that require many demonstrations or trials.",,arXiv,"['cs.cl', 'cs.ai']",, +1181,from heuristic to analytic cognitively motivated strategies for coherent physical commonsense reasoning,"['Zheyuan Zhang', 'Shane Storks', 'Fengyuan Hu', 'Sungryull Sohn', 'Moontae Lee', 'Honglak Lee', 'Joyce Chai']",http://arxiv.org/pdf/2310.18364v1.pdf,2023-10-24,," Pre-trained language models (PLMs) have shown impressive performance in various language tasks. However, they are prone to spurious correlations, and often generate illusory information. In real-world applications, PLMs should justify decisions with formalized, coherent reasoning chains, but this challenge remains under-explored. Cognitive psychology theorizes that humans are capable of utilizing fast and intuitive heuristic thinking to make decisions based on past experience, then rationalizing the decisions through slower and deliberative analytic reasoning. We incorporate these interlinked dual processes in fine-tuning and in-context learning with PLMs, applying them to two language understanding tasks that require coherent physical commonsense reasoning. We show that our proposed Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions, yielding state-of-the-art results on Tiered Reasoning for Intuitive Physics (TRIP). We also find that this improved coherence is a direct result of more faithful attention to relevant language context in each step of reasoning. Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.",,arXiv,"['cs.cl', 'cs.ai']",, +1182,the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities,"['Yuxiang Zhou', 'Jiazheng Li', 'Yanzheng Xiang', 'Hanqi Yan', 'Lin Gui', 'Yulan He']",http://arxiv.org/pdf/2311.00237v1.pdf,2023-11-01,," Understanding emergent abilities, such as in-context learning (ICL) and chain-of-thought (CoT) prompting in large language models (LLMs), is of utmost importance.
This importance stems not only from the better utilization of these capabilities across various tasks, but also from the proactive identification and mitigation of potential risks, including concerns of truthfulness, bias, and toxicity, that may arise alongside these capabilities. In this paper, we present a thorough survey on the interpretation and analysis of emergent abilities of LLMs. First, we provide a concise introduction to the background and definition of emergent abilities. Then, we give an overview of advancements from two perspectives: 1) a macro perspective, emphasizing studies on the mechanistic interpretability and delving into the mathematical foundations behind emergent abilities; and 2) a micro-perspective, concerning studies that focus on empirical interpretability by examining factors associated with these abilities. We conclude by highlighting the challenges encountered and suggesting potential avenues for future research. We believe that our work establishes the basis for further exploration into the interpretation of emergent abilities.",,arXiv,['cs.cl'],, +1183,narrowing the gap between zero and fewshot machine translation by matching styles,"['Weiting Tan', 'Haoran Xu', 'Lingfeng Shen', 'Shuyue Stella Li', 'Kenton Murray', 'Philipp Koehn', 'Benjamin Van Durme', 'Yunmo Chen']",http://arxiv.org/pdf/2311.02310v1.pdf,2023-11-04,," Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning. However, even though zero-shot translations are relatively good, there remains a discernible gap comparing their performance with the few-shot setting. In this paper, we investigate the factors contributing to this gap and find that this gap can largely be closed (for about 70%) by matching the writing styles of the target corpus. Additionally, we explore potential approaches to enhance zero-shot baselines without the need for parallel demonstration examples, providing valuable insights into how these methods contribute to improving translation metrics.",,arXiv,['cs.cl'],, +1184,instructed language models with retrievers are powerful entity linkers,"['Zilin Xiao', 'Ming Gong', 'Jie Wu', 'Xingyao Zhang', 'Linjun Shou', 'Jian Pei', 'Daxin Jiang']",http://arxiv.org/pdf/2311.03250v1.pdf,2023-11-06,," Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning abilities. Yet the generative nature still makes the generated content suffer from hallucinations, thus unsuitable for entity-centric tasks like entity linking (EL) requiring precise entity predictions over a large knowledge base. We present Instructed Generative Entity Linker (INSGENEL), the first approach that enables casual language models to perform entity linking over knowledge bases. Several methods to equip language models with EL capability were proposed in this work, including (i) a sequence-to-sequence training EL objective with instruction-tuning, (ii) a novel generative EL framework based on a light-weight potential mention retriever that frees the model from heavy and non-parallelizable decoding, achieving 4$\times$ speedup without compromise on linking metrics. INSGENEL outperforms previous generative alternatives with +6.8 F1 points gain on average, also with a huge advantage in training data efficiency and training compute consumption.
In addition, our skillfully engineered in-context learning (ICL) framework for EL still lags behind INSGENEL significantly, reaffirming that the EL task remains a persistent hurdle for general LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, +1185,metalearning via language model incontext tuning,"['Yanda Chen', 'Ruiqi Zhong', 'Sheng Zha', 'George Karypis', 'He He']",http://arxiv.org/pdf/2110.07814v2.pdf,2021-10-15,," The goal of meta-learning is to learn to adapt to a new task with only a few labeled examples. To tackle this problem in NLP, we propose $\textit{in-context tuning}$, which recasts adaptation and prediction as a simple sequence prediction problem: to form the input sequence, we concatenate the task instruction, the labeled examples, and the target input to predict; to meta-train the model to learn from in-context examples, we fine-tune a pre-trained language model (LM) to predict the target label from the input sequences on a collection of tasks. We benchmark our method on two collections of text classification tasks: LAMA and BinaryClfs. Compared to first-order MAML which adapts the model with gradient descent, our method better leverages the inductive bias of LMs to perform pattern matching, and outperforms MAML by an absolute $6\%$ AUC ROC score on BinaryClfs, with increasing advantage w.r.t. model size. Compared to non-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuning directly learns to learn from in-context examples. On BinaryClfs, in-context tuning improves the average AUC-ROC score by an absolute $10\%$, and reduces the variance with respect to example ordering by 6x and example choices by 2x.",,arXiv,"['cs.cl', 'cs.lg']",, +1186,glam efficient scaling of language models with mixtureofexperts,"['Nan Du', 'Yanping Huang', 'Andrew M. Dai', 'Simon Tong', 'Dmitry Lepikhin', 'Yuanzhong Xu', 'Maxim Krikun', 'Yanqi Zhou', 'Adams Wei Yu', 'Orhan Firat', 'Barret Zoph', 'Liam Fedus', 'Maarten Bosma', 'Zongwei Zhou', 'Tao Wang', 'Yu Emma Wang', 'Kellie Webster', 'Marie Pellat', 'Kevin Robinson', 'Kathleen Meier-Hellstern', 'Toju Duke', 'Lucas Dixon', 'Kun Zhang', 'Quoc V Le', 'Yonghui Wu', 'Zhifeng Chen', 'Claire Cui']",http://arxiv.org/pdf/2112.06905v2.pdf,2021-12-13,," Scaling language models with more data, compute and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of language models named GLaM (Generalist Language Model), which uses a sparsely activated mixture-of-experts architecture to scale the model capacity while also incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.",,arXiv,['cs.cl'],, +1187,can language models learn from explanations in context,"['Andrew K. Lampinen', 'Ishita Dasgupta', 'Stephanie C. Y. Chan', 'Kory Matthewson', 'Michael Henry Tessler', 'Antonia Creswell', 'James L. McClelland', 'Jane X. Wang', 'Felix Hill']",http://arxiv.org/pdf/2204.02329v4.pdf,2022-04-05,," Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot examples can help LMs. We annotate questions from 40 challenging tasks with answer explanations, and various matched control explanations. We evaluate how different types of explanations, instructions, and controls affect zero- and few-shot performance. We analyze these results using statistical multilevel modeling techniques that account for the nested dependencies among conditions, tasks, prompts, and models. We find that explanations can improve performance -- even without tuning. Furthermore, explanations hand-tuned for performance on a small validation set offer substantially larger benefits, and building a prompt by selecting examples and explanations together substantially improves performance over selecting examples alone. Finally, even untuned explanations outperform carefully matched controls, suggesting that the benefits are due to the link between an example and its explanation, rather than lower-level features. However, only large models benefit. In summary, explanations can support the in-context learning of large LMs on challenging tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1188,automatic short math answer grading via incontext metalearning,"['Mengxue Zhang', 'Sami Baral', 'Neil Heffernan', 'Andrew Lan']",http://arxiv.org/pdf/2205.15219v3.pdf,2022-05-30,," Automatic short answer grading is an important research direction in the exploration of how to use artificial intelligence (AI)-based tools to improve education. Current state-of-the-art approaches use neural language models to create vectorized representations of students responses, followed by classifiers to predict the score. However, these approaches have several key limitations, including i) they use pre-trained language models that are not well-adapted to educational subject domains and/or student-generated text and ii) they almost always train one model per question, ignoring the linkage across a question and result in a significant model storage problem due to the size of advanced language models. In this paper, we study the problem of automatic short answer grading for students' responses to math questions and propose a novel framework for this task. First, we use MathBERT, a variant of the popular language model BERT adapted to mathematical content, as our base model and fine-tune it for the downstream task of student response grading. Second, we use an in-context learning approach that provides scoring examples as input to the language model to provide additional context information and promote generalization to previously unseen questions. We evaluate our framework on a real-world dataset of student responses to open-ended math questions and show that our framework (often significantly) outperforms existing approaches, especially for new questions that are not seen during training.",,arXiv,"['cs.cl', 'cs.lg']",, +1189,large language models can implement policy iteration,"['Ethan Brooks', 'Logan Walls', 'Richard L. Lewis', 'Satinder Singh']",http://arxiv.org/pdf/2210.03821v2.pdf,2022-10-07,," This work presents In-Context Policy Iteration, an algorithm for performing Reinforcement Learning (RL), in-context, using foundation models.
While the application of foundation models to RL has received considerable attention, most approaches rely on either (1) the curation of expert demonstrations (either through manual design or task-specific pretraining) or (2) adaptation to the task of interest using gradient methods (either fine-tuning or training of adapter layers). Both of these techniques have drawbacks. Collecting demonstrations is labor-intensive, and algorithms that rely on them do not outperform the experts from which the demonstrations were derived. All gradient techniques are inherently slow, sacrificing the ""few-shot"" quality that made in-context learning attractive to begin with. In this work, we present an algorithm, ICPI, that learns to perform RL tasks without expert demonstrations or gradients. Instead we present a policy-iteration method in which the prompt content is the entire locus of learning. ICPI iteratively updates the contents of the prompt from which it derives its policy through trial-and-error interaction with an RL environment. In order to eliminate the role of in-weights learning (on which approaches like Decision Transformer rely heavily), we demonstrate our algorithm using Codex, a language model with no prior knowledge of the domains on which we evaluate it.",,arXiv,['cs.lg'],, +1190,transformers generalize differently from information stored in context vs in weights,"['Stephanie C. Y. Chan', 'Ishita Dasgupta', 'Junkyung Kim', 'Dharshan Kumaran', 'Andrew K. Lampinen', 'Felix Hill']",http://arxiv.org/pdf/2210.05675v2.pdf,2022-10-11,," Transformer models can use two fundamentally different kinds of information: information stored in weights during training, and information provided ``in-context'' at inference time. In this work, we show that transformers exhibit different inductive biases in how they represent and generalize from the information in these two sources. In particular, we characterize whether they generalize via parsimonious rules (rule-based generalization) or via direct comparison with observed examples (exemplar-based generalization). This is of important practical consequence, as it informs whether to encode information in weights or in context, depending on how we want models to use that information. In transformers trained on controlled stimuli, we find that generalization from weights is more rule-based whereas generalization from context is largely exemplar-based. In contrast, we find that in transformers pre-trained on natural language, in-context learning is significantly rule-based, with larger models showing more rule-basedness. We hypothesise that rule-based generalization from in-context information might be an emergent consequence of large-scale training on language, which has sparse rule-like structure. Using controlled stimuli, we verify that transformers pretrained on data containing sparse rule-like structure exhibit more rule-based generalization.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1191,large language models meet harry potter a bilingual dataset for aligning dialogue agents with characters,"['Nuo Chen', 'Yan Wang', 'Haiyun Jiang', 'Deng Cai', 'Yuhan Li', 'Ziyang Chen', 'Longyue Wang', 'Jia Li']",http://arxiv.org/pdf/2211.06869v4.pdf,2022-11-13,," In recent years, Dialogue-style Large Language Models (LLMs) such as ChatGPT and GPT4 have demonstrated immense potential in constructing open-domain dialogue agents.
However, aligning these agents with specific characters or individuals remains a considerable challenge due to the complexities of character representation and the lack of comprehensive annotations. In this paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to advance the study of dialogue agents and character alignment. The dataset encompasses all dialogue sessions (in both English and Chinese) from the Harry Potter series and is annotated with vital background information, including dialogue scenes, speakers, character relationships, and attributes. These extensive annotations may empower LLMs to unlock character-driven dialogue capabilities. Furthermore, it can serve as a universal benchmark for evaluating how well can a LLM aligning with a specific character. We benchmark LLMs on HPD using both fine-tuning and in-context learning settings. Evaluation results reveal that although there is substantial room for improvement in generating high-quality, character-aligned responses, the proposed dataset is valuable in guiding models toward responses that better align with the character of Harry Potter.",,arXiv,"['cs.cl', 'cs.ai']",, +1192,retrievalaugmented multimodal language modeling,"['Michihiro Yasunaga', 'Armen Aghajanyan', 'Weijia Shi', 'Rich James', 'Jure Leskovec', 'Percy Liang', 'Mike Lewis', 'Luke Zettlemoyer', 'Wen-tau Yih']",http://arxiv.org/pdf/2211.12561v2.pdf,2022-11-22,," Recent multimodal models such as DALL-E and CM3 have achieved remarkable progress in text-to-image and image-to-text generation. However, these models store all learned knowledge (e.g., the appearance of the Eiffel Tower) in the model parameters, requiring increasingly larger models and training data to capture more knowledge. To integrate knowledge in a more scalable and modular way, we propose a retrieval-augmented multimodal model, which enables a base multimodal model (generator) to refer to relevant text and images fetched by a retriever from external memory (e.g., documents on the web). Specifically, for the retriever, we use a pretrained CLIP, and for the generator, we train a CM3 Transformer on the LAION dataset. Our resulting model, named Retrieval-Augmented CM3 (RA-CM3), is the first multimodal model that can retrieve and generate both text and images. We show that RA-CM3 significantly outperforms baseline multimodal models such as DALL-E and CM3 on both image and caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while requiring much less compute for training (<30% of DALL-E). Moreover, we show that RA-CM3 exhibits novel capabilities, such as faithful image generation and multimodal in-context learning (e.g., image generation from demonstrations).",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +1193,"operationalizing specifications, in addition to test sets for evaluating constrained generative models","['Vikas Raunak', 'Matt Post', 'Arul Menezes']",http://arxiv.org/pdf/2212.00006v1.pdf,2022-11-19,," In this work, we present some recommendations on the evaluation of state-of-the-art generative models for constrained generation tasks. The progress on generative models has been rapid in recent years. These large-scale models have had three impacts: firstly, the fluency of generation in both language and vision modalities has rendered common average-case evaluation metrics much less useful in diagnosing system errors.
Secondly, the same substrate models now form the basis of a number of applications, driven both by the utility of their representations as well as phenomena such as in-context learning, which raise the abstraction level of interacting with such models. Thirdly, the user expectations around these models and their feted public releases have made the technical challenge of out of domain generalization much less excusable in practice. Subsequently, our evaluation methodologies haven't adapted to these changes. More concretely, while the associated utility and methods of interacting with generative models have expanded, a similar expansion has not been observed in their evaluation practices. In this paper, we argue that the scale of generative models could be exploited to raise the abstraction level at which evaluation itself is conducted and provide recommendations for the same. Our recommendations are based on leveraging specifications as a powerful instrument to evaluate generation quality and are readily applicable to a variety of tasks.",,arXiv,"['cs.hc', 'cs.cl', 'cs.cv', 'cs.cy']",, +1194,language model acceptability judgements are not always robust to context,"['Koustuv Sinha', 'Jon Gauthier', 'Aaron Mueller', 'Kanishka Misra', 'Keren Fuentes', 'Roger Levy', 'Adina Williams']",http://arxiv.org/pdf/2212.08979v1.pdf,2022-12-18,," Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.",,arXiv,"['cs.cl', 'cs.lg']",, +1195,lowresource authorship style transfer can nonfamous authors be imitated,"['Ajay Patel', 'Nicholas Andrews', 'Chris Callison-Burch']",http://arxiv.org/pdf/2212.08986v2.pdf,2022-12-18,," Authorship style transfer involves altering text to match the style of a target author whilst preserving the original meaning.
Existing unsupervised approaches like STRAP have largely focused on style transfer to target authors with many examples of their writing style in books, speeches, or other published works. This high-resource training data requirement (often greater than 100,000 words) makes these approaches primarily useful for style transfer to published authors, politicians, or other well-known figures and authorship styles, while style transfer to non-famous authors has not been well-studied. We introduce the \textit{low-resource authorship style transfer} task, a more challenging class of authorship style transfer where only a limited amount of text in the target author's style may exist. In our experiments, we specifically choose source and target authors from Reddit and style transfer their Reddit posts, limiting ourselves to just 16 posts (on average ~500 words) of the target author's style. Style transfer accuracy is typically measured by how often a classifier or human judge will classify an output as written by the target author. Recent authorship representations models excel at authorship identification even with just a few writing samples, making automatic evaluation of this task possible for the first time through evaluation metrics we propose. Our results establish an in-context learning technique we develop as the strongest baseline, though we find current approaches do not yet achieve mastery of this challenging task. We release our data and implementations to encourage further investigation.",,arXiv,['cs.cl'],, +1196,training trajectories of language models across scales,"['Mengzhou Xia', 'Mikel Artetxe', 'Chunting Zhou', 'Xi Victoria Lin', 'Ramakanth Pasunuru', 'Danqi Chen', 'Luke Zettlemoyer', 'Ves Stoyanov']",http://arxiv.org/pdf/2212.09803v3.pdf,2022-12-19,," Scaling up language models has led to unprecedented performance gains, but little is understood about how the training dynamics change as models get larger. How do language models of different sizes learn during pre-training? Why do larger language models demonstrate more desirable behaviors? In this paper, we analyze the intermediate training checkpoints of differently sized OPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-token prediction, sequence-level generation, and downstream tasks. We find that 1) at a given perplexity and independent of model sizes, a similar subset of training tokens see the most significant reduction in loss, with the rest stagnating or showing double-descent behavior; 2) early in training, all models learn to reduce the perplexity of grammatical sequences that contain hallucinations, with small models halting at this suboptimal distribution and larger ones eventually learning to assign these sequences lower probabilities; 3) perplexity is a strong predictor of in-context learning performance on 74 multiple-choice tasks from BIG-Bench, and this holds independent of the model size. Together, these results show that perplexity is more predictive of model behaviors than model size or training computation.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1197,dialog2api taskoriented dialogue with api description and example programs,"['Raphael Shu', 'Elman Mansimov', 'Tamer Alkhouli', 'Nikolaos Pappas', 'Salvatore Romeo', 'Arshit Gupta', 'Saab Mansour', 'Yi Zhang', 'Dan Roth']",http://arxiv.org/pdf/2212.09946v1.pdf,2022-12-20,," Functionality and dialogue experience are two important factors of task-oriented dialogue systems.
Conventional approaches with closed schema(e.g., conversational semantic parsing) often fail as both the functionalityand dialogue experience are strongly constrained by the underlying schema. Weintroduce a new paradigm for task-oriented dialogue - Dialog2API - to greatlyexpand the functionality and provide seamless dialogue experience. Theconversational model interacts with the environment by generating and executingprograms triggering a set of pre-defined APIs. The model also manages thedialogue policy and interact with the user through generating appropriatenatural language responses. By allowing generating free-form programs,Dialog2API supports composite goals by combining different APIs, whereasunrestricted program revision provides natural and robust dialogue experience.To facilitate Dialog2API, the core model is provided with API documents, anexecution environment and optionally some example dialogues annotated withprograms. We propose an approach tailored for the Dialog2API, where thedialogue states are represented by a stack of programs, with most recentlymentioned program on the top of the stack. Dialog2API can work with manyapplication scenarios such as software automation and customer service. In thispaper, we construct a dataset for AWS S3 APIs and present evaluation results ofin-context learning baselines.",,arXiv,['cs.cl'],, +1198,hint hypernetwork instruction tuning for efficient zero & fewshot generalisation,"['Hamish Ivison', 'Akshita Bhagia', 'Yizhong Wang', 'Hannaneh Hajishirzi', 'Matthew Peters']",http://arxiv.org/pdf/2212.10315v2.pdf,2022-12-20,," Recent NLP models have shown the remarkable ability to effectively generalise`zero-shot' to new tasks using only natural language instructions as guidance.However, many of these approaches suffer from high computational costs due totheir reliance on concatenating lengthy instructions with every input example,resulting in costly reprocessing of the instruction. To avoid this, weintroduce Hypernetworks for INstruction Tuning (HINT), which convert taskinstructions and examples into parameter-efficient modules inserted into anunderlying model using a pretrained text encoder, eliminating the need toinclude instructions in the model input. The hypernetwork in HINT also producesan encoded instruction, which we concatenate with encoded inputs duringdecoding to further improve performance. HINT models outperform strongstate-of-the-art baselines by over 10% when controlling for compute (measuredin FLOPs). By converting instructions into modules, HINT models can effectivelydisregard the length of instructions and few-shot example inputs in terms ofcompute usage. As a result, HINT can enhance its performance by up to 25% byincorporating additional few-shot data, while utilizing only up to 5% morecompute. This combines the strengths of parameter-efficient fine-tuning andin-context learning.",,arXiv,['cs.cl'],, +1199,parallel context windows for large language models,"['Nir Ratner', 'Yoav Levine', 'Yonatan Belinkov', 'Ori Ram', 'Inbal Magar', 'Omri Abend', 'Ehud Karpas', 'Amnon Shashua', 'Kevin Leyton-Brown', 'Yoav Shoham']",http://arxiv.org/pdf/2212.10947v3.pdf,2022-12-21,," When applied to processing long text, Large Language Models (LLMs) arelimited by their context window. Existing efforts to address this limitationinvolve training specialized architectures, and cannot be easily applied tooff-the-shelf LLMs. 
We present Parallel Context Windows (PCW), a method thatalleviates the context window restriction for any off-the-shelf LLM withoutfurther training. The key to the approach is to carve a long context intochunks (``windows''), restrict the attention mechanism to apply only withineach window, and re-use the positional embeddings across the windows. Our mainresults test the PCW approach on in-context learning with models that range insize between 750 million and 178 billion parameters, and show substantialimprovements for tasks with diverse input and output spaces. We show additionalbenefits in other settings where long context windows may be beneficial:multi-hop questions and retrieval-augmented question answering with multipleretrieved documents. Our results highlight Parallel Context Windows as apromising method for applying off-the-shelf LLMs in a range of settings thatrequire long text sequences. We make our code publicly available athttps://github.com/ai21labs/parallel-context-windows.",,arXiv,['cs.cl'],, +1200,distinguishability calibration to incontext learning,"['Hongjing Li', 'Hanqi Yan', 'Yanran Li', 'Li Qian', 'Yulan He', 'Lin Gui']",http://arxiv.org/pdf/2302.06198v3.pdf,2023-02-13,," Recent years have witnessed increasing interests in prompt-based learning inwhich models can be trained on only a few annotated instances, making themsuitable in low-resource settings. When using prompt-based learning for textclassification, the goal is to use a pre-trained language model (PLM) topredict a missing token in a pre-defined template given an input text, whichcan be mapped to a class label. However, PLMs built on the transformerarchitecture tend to generate similar output embeddings, making it difficult todiscriminate between different class labels. The problem is further exacerbatedwhen dealing with classification tasks involving many fine-grained classlabels. In this work, we alleviate this information diffusion issue, i.e.,different tokens share a large proportion of similar information after goingthrough stacked multiple self-attention layers in a transformer, by proposing acalibration method built on feature transformations through rotation andscaling to map a PLM-encoded embedding into a new metric space to guarantee thedistinguishability of the resulting embeddings. Furthermore, we take theadvantage of hyperbolic embeddings to capture the hierarchical relations amongfine-grained class-associated token embedding by a coarse-to-fine metriclearning strategy to enhance the distinguishability of the learned outputembeddings. Extensive experiments on the three datasets under various settingsdemonstrate the effectiveness of our approach. Our code can be found athttps://github.com/donttal/TARA.",,arXiv,['cs.cl'],, +1201,do we still need clinical language models,"['Eric Lehman', 'Evan Hernandez', 'Diwakar Mahajan', 'Jonas Wulff', 'Micah J. Smith', 'Zachary Ziegler', 'Daniel Nadler', 'Peter Szolovits', 'Alistair Johnson', 'Emily Alsentzer']",http://arxiv.org/pdf/2302.08091v1.pdf,2023-02-16,," Although recent advances in scaling large language models (LLMs) haveresulted in improvements on many NLP tasks, it remains unclear whether thesemodels trained primarily with general web text are the right tool in highlyspecialized, safety critical domains such as clinical text. Recent results havesuggested that LLMs encode a surprising amount of medical knowledge. Thisraises an important question regarding the utility of smaller domain-specificlanguage models. 
With the success of general-domain LLMs, is there still a needfor specialized clinical models? To investigate this question, we conduct anextensive empirical analysis of 12 language models, ranging from 220M to 175Bparameters, measuring their performance on 3 different clinical tasks that testtheir ability to parse and reason over electronic health records. As part ofour experiments, we train T5-Base and T5-Large models from scratch on clinicalnotes from MIMIC III and IV to directly investigate the efficiency of clinicaltokens. We show that relatively small specialized clinical models substantiallyoutperform all in-context learning approaches, even when finetuned on limitedannotated data. Further, we find that pretraining on clinical tokens allows forsmaller, more parameter-efficient models that either match or outperform muchlarger language models trained on general text. We release the code and themodels used under the PhysioNet Credentialed Health Data license and data useagreement.",,arXiv,['cs.cl'],, +1202,epalm efficient perceptual augmentation of language models,"['Mustafa Shukor', 'Corentin Dancette', 'Matthieu Cord']",http://arxiv.org/pdf/2303.11403v4.pdf,2023-03-20,," Large Language Models (LLMs) have so far impressed the world, withunprecedented capabilities that emerge in models at large scales. On the visionside, transformer models (i.e., ViT) are following the same trend, achievingthe best performance on challenging benchmarks. With the abundance of suchunimodal models, a natural question arises; do we need also to follow thistrend to tackle multimodal tasks? In this work, we propose to rather directeffort to efficient adaptations of existing models, and propose to augmentLanguage Models with perception. Existing approaches for adapting pretrainedmodels for vision-language tasks still rely on several key components thathinder their efficiency. In particular, they still train a large number ofparameters, rely on large multimodal pretraining, use encoders (e.g., CLIP)trained on huge image-text datasets, and add significant inference overhead. Inaddition, most of these approaches have focused on Zero-Shot and In ContextLearning, with little to no effort on direct finetuning. We investigate theminimal computational effort needed to adapt unimodal models for multimodaltasks and propose a new challenging setup, alongside different approaches, thatefficiently adapts unimodal pretrained models. We show that by freezing morethan 99% of total parameters, training only one linear projection layer, andprepending only one trainable token, our approach (dubbed eP-ALM) significantlyoutperforms other baselines on VQA and Captioning across Image, Video, andAudio modalities, following the proposed setup. The code is available here:https://github.com/mshukor/eP-ALM.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +1203,towards making the most of chatgpt for machine translation,"['Keqin Peng', 'Liang Ding', 'Qihuang Zhong', 'Li Shen', 'Xuebo Liu', 'Min Zhang', 'Yuanxin Ouyang', 'Dacheng Tao']",http://arxiv.org/pdf/2303.13780v4.pdf,2023-03-24,," ChatGPT shows remarkable capabilities for machine translation (MT). Severalprior studies have shown that it achieves comparable results to commercialsystems for high-resource languages, but lags behind in complex tasks, e.g.,low-resource and distant-language-pairs translation. However, they usuallyadopt simple prompts which can not fully elicit the capability of ChatGPT. 
Inthis paper, we aim to further mine ChatGPT's translation ability by revisitingseveral aspects: temperature, task information, and domain information, andcorrespondingly propose an optimal temperature setting and two (simple buteffective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts(DSP). We show that: 1) The performance of ChatGPT depends largely ontemperature, and a lower temperature usually can achieve better performance; 2)Emphasizing the task information can further improve ChatGPT's performance,particularly in complex MT tasks; 3) Introducing domain information can elicitChatGPT's generalization ability and improve its performance in the specificdomain; 4) ChatGPT tends to generate hallucinations for non-English-centric MTtasks, which can be partially addressed by our proposed prompts but still needto be highlighted for the MT/NLP community. We also explore the effects ofadvanced in-context learning strategies and find a (negative but interesting)observation: the powerful chain-of-thought prompt leads to word-by-wordtranslation behavior, thus bringing significant translation degradation.",,arXiv,['cs.cl'],, +1204,$k$nn prompting beyondcontext learning with calibrationfree nearest neighbor inference,"['Benfeng Xu', 'Quan Wang', 'Zhendong Mao', 'Yajuan Lyu', 'Qiaoqiao She', 'Yongdong Zhang']",http://arxiv.org/pdf/2303.13824v1.pdf,2023-03-24,," In-Context Learning (ICL), which formulates target tasks as prompt completionconditioned on in-context demonstrations, has become the prevailing utilizationof LLMs. In this paper, we first disclose an actual predicament for thistypical usage that it can not scale up with training data due to context lengthrestriction. Besides, existing works have shown that ICL also suffers fromvarious biases and requires delicate calibration treatment. To address bothchallenges, we advocate a simple and effective solution, $k$NN Prompting, whichfirst queries LLM with training data for distributed representations, thenpredicts test instances by simply referring to nearest neighbors. We conductcomprehensive experiments to demonstrate its two-fold superiority: 1)Calibration-Free: $k$NN Prompting does not directly align LLM outputdistribution with task-specific label space, instead leverages suchdistribution to align test and training instances. It significantly outperformsstate-of-the-art calibration-based methods under comparable few-shot scenario.2) Beyond-Context: $k$NN Prompting can further scale up effectively with asmany training data as are available, continually bringing substantialimprovements. The scaling trend holds across 10 orders of magnitude rangingfrom 2 shots to 1024 shots as well as different LLMs scales ranging from 0.8Bto 30B. It successfully bridges data scaling into model scaling, and brings newpotentials for the gradient-free paradigm of LLM deployment. Code is publiclyavailable.",,arXiv,"['cs.cl', 'cs.ai']",, +1205,what makes good incontext demonstrations for code intelligence tasks with llms,"['Shuzheng Gao', 'Xin-Cheng Wen', 'Cuiyun Gao', 'Wenxuan Wang', 'Hongyu Zhang', 'Michael R. Lyu']",http://arxiv.org/pdf/2304.07575v2.pdf,2023-04-15,," Pre-trained models of source code have gained widespread popularity in manycode intelligence tasks. Recently, with the scaling of the model and corpussize, large language models have shown the ability of in-context learning(ICL). 
ICL employs task instructions and a few examples as demonstrations, andthen inputs the demonstrations to the language models for making predictions.This new learning paradigm is training-free and has shown impressiveperformance in various natural language processing and code intelligence tasks.However, the performance of ICL heavily relies on the quality ofdemonstrations, e.g., the selected examples. It is important to systematicallyinvestigate how to construct a good demonstration for code-related tasks. Inthis paper, we empirically explore the impact of three key factors on theperformance of ICL in code intelligence tasks: the selection, order, and numberof demonstration examples. We conduct extensive experiments on three codeintelligence tasks including code summarization, bug fixing, and programsynthesis. Our experimental results demonstrate that all the above threefactors dramatically impact the performance of ICL in code intelligence tasks.Additionally, we summarize our findings and provide takeaway suggestions on howto construct effective demonstrations, taking into account these threeperspectives. We also show that a carefully-designed demonstration based on ourfindings can lead to substantial improvements over widely-used demonstrationconstruction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%,175.96%, and 50.81% on code summarization, bug fixing, and program synthesis,respectively",,arXiv,['cs.se'],, +1206,sparks of gpts in edge intelligence for metaverse caching and inference for mobile aigc services,"['Minrui Xu', 'Dusit Niyato', 'Hongliang Zhang', 'Jiawen Kang', 'Zehui Xiong', 'Shiwen Mao', 'Zhu Han']",http://arxiv.org/pdf/2304.08782v2.pdf,2023-04-18,," Aiming at achieving artificial general intelligence (AGI) for Metaverse,pretrained foundation models (PFMs), e.g., generative pretrained transformers(GPTs), can effectively provide various AI services, such as autonomousdriving, digital twins, and AI-generated content (AIGC) for extended reality.With the advantages of low latency and privacy-preserving, serving PFMs ofmobile AI services in edge intelligence is a viable solution for caching andexecuting PFMs on edge servers with limited computing resources and GPU memory.However, PFMs typically consist of billions of parameters that are computationand memory-intensive for edge servers during loading and execution. In thisarticle, we investigate edge PFM serving problems for mobile AIGC services ofMetaverse. First, we introduce the fundamentals of PFMs and discuss theircharacteristic fine-tuning and inference methods in edge intelligence. Then, wepropose a novel framework of joint model caching and inference for managingmodels and allocating resources to satisfy users' requests efficiently.Furthermore, considering the in-context learning ability of PFMs, we propose anew metric to evaluate the freshness and relevance between examples indemonstrations and executing tasks, namely the Age of Context (AoC). 
Finally,we propose a least context algorithm for managing cached models at edge serversby balancing the tradeoff among latency, energy consumption, and accuracy.",,arXiv,['cs.ni'],, +1207,controlled text generation with natural language instructions,"['Wangchunshu Zhou', 'Yuchen Eleanor Jiang', 'Ethan Wilcox', 'Ryan Cotterell', 'Mrinmaya Sachan']",http://arxiv.org/pdf/2304.14293v2.pdf,2023-04-27,," Large language models generate fluent texts and can follow natural languageinstructions to solve a wide range of tasks without task-specific training.Nevertheless, it is notoriously difficult to control their generation tosatisfy the various constraints required by different applications. In thiswork, we present InstructCTG, a controlled text generation framework thatincorporates different constraints by conditioning on natural languagedescriptions and demonstrations of the constraints. In particular, we firstextract the underlying constraints of natural texts through a combination ofoff-the-shelf NLP tools and simple heuristics. We then verbalize theconstraints into natural language instructions to form weakly supervisedtraining data. By prepending natural language descriptions of the constraintsand a few demonstrations, we fine-tune a pre-trained language model toincorporate various types of constraints. Compared to existing search-based orscore-based methods, InstructCTG is more flexible to different constraint typesand has a much smaller impact on the generation quality and speed because itdoes not modify the decoding procedure. Additionally, InstructCTG allows themodel to adapt to new constraints without re-training through the use offew-shot task generalization and in-context learning abilities ofinstruction-tuned language models.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1208,tallrec an effective and efficient tuning framework to align large language model with recommendation,"['Keqin Bao', 'Jizhi Zhang', 'Yang Zhang', 'Wenjie Wang', 'Fuli Feng', 'Xiangnan He']",http://arxiv.org/pdf/2305.00447v3.pdf,2023-04-30,," Large Language Models (LLMs) have demonstrated remarkable performance acrossdiverse domains, thereby prompting researchers to explore their potential foruse in recommendation systems. Initial attempts have leveraged the exceptionalcapabilities of LLMs, such as rich knowledge and strong generalization throughIn-context Learning, which involves phrasing the recommendation task asprompts. Nevertheless, the performance of LLMs in recommendation tasks remainssuboptimal due to a substantial disparity between the training tasks for LLMsand recommendation tasks, as well as inadequate recommendation data duringpre-training. To bridge the gap, we consider building a Large RecommendationLanguage Model by tunning LLMs with recommendation data. To this end, wepropose an efficient and effective Tuning framework for Aligning LLMs withRecommendation, namely TALLRec. We have demonstrated that the proposed TALLRecframework can significantly enhance the recommendation capabilities of LLMs inthe movie and book domains, even with a limited dataset of fewer than 100samples. Additionally, the proposed framework is highly efficient and can beexecuted on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLMexhibits robust cross-domain generalization. 
Our code and data are available athttps://github.com/SAI990323/TALLRec.",,arXiv,['cs.ir'],, +1209,using chatgpt for entity matching,"['Ralph Peeters', 'Christian Bizer']",http://arxiv.org/pdf/2305.03423v2.pdf,2023-05-05,," Entity Matching is the task of deciding if two entity descriptions refer tothe same real-world entity. State-of-the-art entity matching methods often relyon fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacksof using these models for entity matching are that (i) the models requiresignificant amounts of fine-tuning data for reaching a good performance and(ii) the fine-tuned models are not robust concerning out-of-distributionentities. In this paper, we investigate using ChatGPT for entity matching as amore robust, training data-efficient alternative to traditional Transformermodels. We perform experiments along three dimensions: (i) general promptdesign, (ii) in-context learning, and (iii) provision of higher-level matchingknowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model,reaching a zero-shot performance of 82.35% F1 on a challenging matching task onwhich RoBERTa requires 2000 training examples for reaching a similarperformance. Adding in-context demonstrations to the prompts further improvesthe F1 by up to 7.85% when using similarity-based example selection. Alwaysusing the same set of 10 handpicked demonstrations leads to an improvement of4.92% over the zero-shot performance. Finally, we show that ChatGPT can also beguided by adding higher-level matching knowledge in the form of rules to theprompts. Providing matching rules leads to similar performance gains asproviding in-context demonstrations.",,arXiv,['cs.cl'],, +1210,joint foundation model caching and inference of generative ai services for edge intelligence,"['Minrui Xu', 'Dusit Niyato', 'Hongliang Zhang', 'Jiawen Kang', 'Zehui Xiong', 'Shiwen Mao', 'Zhu Han']",http://arxiv.org/pdf/2305.12130v1.pdf,2023-05-20,," With the rapid development of artificial general intelligence (AGI), variousmultimedia services based on pretrained foundation models (PFMs) need to beeffectively deployed. With edge servers that have cloud-level computing power,edge intelligence can extend the capabilities of AGI to mobile edge networks.However, compared with cloud data centers, resource-limited edge servers canonly cache and execute a small number of PFMs, which typically consist ofbillions of parameters and require intensive computing power and GPU memoryduring inference. To address this challenge, in this paper, we propose a jointfoundation model caching and inference framework that aims to balance thetradeoff among inference latency, accuracy, and resource consumption bymanaging cached PFMs and user requests efficiently during the provisioning ofgenerative AI services. Specifically, considering the in-context learningability of PFMs, a new metric named the Age of Context (AoC), is proposed tomodel the freshness and relevance between examples in past demonstrations andcurrent service requests. Based on the AoC, we propose a least context cachingalgorithm to manage cached PFMs at edge servers with historical prompts andinference results. 
The numerical results demonstrate that the proposedalgorithm can reduce system costs compared with existing baselines byeffectively utilizing contextual information.",,arXiv,['cs.ni'],, +1211,enhancing fewshot texttosql capabilities of large language models a study on prompt design strategies,"['Linyong Nan', 'Yilun Zhao', 'Weijin Zou', 'Narutatsu Ri', 'Jaesung Tae', 'Ellen Zhang', 'Arman Cohan', 'Dragomir Radev']",http://arxiv.org/pdf/2305.12586v1.pdf,2023-05-21,," In-context learning (ICL) has emerged as a new approach to various naturallanguage processing tasks, utilizing large language models (LLMs) to makepredictions based on context that has been supplemented with a few examples ortask-specific instructions. In this paper, we aim to extend this method toquestion answering tasks that utilize structured knowledge sources, and improveText-to-SQL systems by exploring various prompt design strategies for employingLLMs. We conduct a systematic investigation into different demonstrationselection methods and optimal instruction formats for prompting LLMs in theText-to-SQL task. Our approach involves leveraging the syntactic structure ofan example's SQL query to retrieve demonstrations, and we demonstrate thatpursuing both diversity and similarity in demonstration selection leads toenhanced performance. Furthermore, we show that LLMs benefit fromdatabase-related knowledge augmentations. Our most effective strategyoutperforms the state-of-the-art system by 2.5 points (Execution Accuracy) andthe best fine-tuned system by 5.1 points on the Spider dataset. These resultshighlight the effectiveness of our approach in adapting LLMs to the Text-to-SQLtask, and we present an analysis of the factors contributing to the success ofour strategy.",,arXiv,['cs.cl'],, +1212,exploring chainofthought style prompting for texttosql,"['Chang-You Tai', 'Ziru Chen', 'Tianshu Zhang', 'Xiang Deng', 'Huan Sun']",http://arxiv.org/pdf/2305.14215v2.pdf,2023-05-23,," In-context learning with large language models (LLMs) has recently caughtincreasing attention due to its superior few-shot performance on various tasks.However, its performance on text-to-SQL parsing still has much room forimprovement. In this paper, we hypothesize that a crucial aspect of LLMs toimprove for text-to-SQL parsing is their multi-step reasoning ability. Thus, wesystematically study how to enhance LLMs' reasoning ability through chain ofthought (CoT) style prompting, including the original chain-of-thoughtprompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023).Our experiments demonstrate that iterative prompting as in Zhou et al. (2023)may be unnecessary for text-to-SQL parsing, and using detailed reasoning stepstends to have more error propagation issues. Based on these findings, wepropose a new CoT-style prompting method for text-to-SQL parsing. 
It brings 5.2and 6.5 point absolute gains on the Spider development set and the SpiderRealistic set, respectively, compared to the standard prompting method withoutreasoning steps; 2.4 and 1.5 point absolute gains, compared to theleast-to-most prompting method.",,arXiv,['cs.cl'],, +1213,increasing probability mass on answer choices does not always improve accuracy,"['Sarah Wiegreffe', 'Matthew Finlayson', 'Oyvind Tafjord', 'Peter Clark', 'Ashish Sabharwal']",http://arxiv.org/pdf/2305.14596v2.pdf,2023-05-24,," When pretrained language models (LMs) are applied to discriminative taskssuch as multiple-choice questions, they place probability mass on vocabularytokens that aren't among the given answer choices. Spreading probability massacross multiple surface forms with identical meaning (such as ""bath"" and""bathtub"") is thought to cause an underestimation of a model's trueperformance, referred to as the ""surface form competition"" (SFC) hypothesis.This has motivated the introduction of various probability normalizationmethods. However, many core questions remain unanswered. How do we measure SFC?Are there direct ways of reducing it, and does doing so improve taskperformance? We propose a mathematical formalism for SFC which allows us to quantify andbound its impact for the first time. We identify a simple method for reducingit -- namely, increasing probability mass on the given answer choices by a)including them in the prompt and b) using in-context learning with even justone example. We show this method eliminates the impact of SFC in the majorityof instances. Our experiments on three diverse datasets and six LMs revealseveral additional surprising findings. For example, both normalization andprompting methods for reducing SFC can be ineffective or even detrimental totask performance for some LMs. We conclude with practical insights foreffectively prompting LMs for multiple-choice tasks.",,arXiv,"['cs.cl', 'cs.lg']",, +1214,universal selfadaptive prompting,"['Xingchen Wan', 'Ruoxi Sun', 'Hootan Nakhost', 'Hanjun Dai', 'Julian Martin Eisenschlos', 'Sercan O. Arik', 'Tomas Pfister']",http://arxiv.org/pdf/2305.14926v2.pdf,2023-05-24,," A hallmark of modern large language models (LLMs) is their impressive generalzero-shot and few-shot abilities, often elicited through in-context learning(ICL) via prompting. However, while highly coveted and being the most general,zero-shot performances in LLMs are still typically weaker due to the lack ofguidance and the difficulty of applying existing automatic prompt designmethods in general tasks when ground-truth labels are unavailable. In thisstudy, we address this by presenting Universal Self-Adaptive Prompting (USP),an automatic prompt design approach specifically tailored for zero-shotlearning (while compatible with few-shot). Requiring only a small amount ofunlabeled data and an inference-only LLM, USP is highly versatile: to achieveuniversal prompting, USP categorizes a possible NLP task into one of the threepossible task types and then uses a corresponding selector to select the mostsuitable queries and zero-shot model-generated responses aspseudo-demonstrations, thereby generalizing ICL to the zero-shot setup in afully automated way. 
We evaluate USP with PaLM and PaLM 2 models anddemonstrate performances that are considerably stronger than standard zero-shotbaselines and often comparable to or even superior to few-shot baselines acrossmore than 40 natural language understanding, natural language generation, andreasoning tasks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1215,are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization,"['Aman Priyanshu', 'Supriti Vijay', 'Ayush Kumar', 'Rakshit Naidu', 'Fatemehsadat Mireshghallah']",http://arxiv.org/pdf/2305.15008v1.pdf,2023-05-24,," LLM-powered chatbots are becoming widely adopted in applications such ashealthcare, personal assistants, industry hiring decisions, etc. In many ofthese cases, chatbots are fed sensitive, personal information in their prompts,as samples for in-context learning, retrieved records from a database, or aspart of the conversation. The information provided in the prompt could directlyappear in the output, which might have privacy ramifications if there issensitive information there. As such, in this paper, we aim to understand theinput copying and regurgitation capabilities of these models during inferenceand how they can be directly instructed to limit this copying by complying withregulations such as HIPAA and GDPR, based on their internal knowledge of them.More specifically, we find that when ChatGPT is prompted to summarize coverletters of a 100 candidates, it would retain personally identifiableinformation (PII) verbatim in 57.4% of cases, and we find this retention to benon-uniform between different subgroups of people, based on attributes such asgender identity. We then probe ChatGPT's perception of privacy-related policiesand privatization mechanisms by directly instructing it to provide compliantoutputs and observe a significant omission of PII from output.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",, +1216,finetuning language models with just forward passes,"['Sadhika Malladi', 'Tianyu Gao', 'Eshaan Nichani', 'Alex Damian', 'Jason D. Lee', 'Danqi Chen', 'Sanjeev Arora']",http://arxiv.org/pdf/2305.17333v3.pdf,2023-05-27,," Fine-tuning language models (LMs) has yielded success on diverse downstreamtasks, but as LMs grow in size, backpropagation requires a prohibitively largeamount of memory. Zeroth-order (ZO) methods can in principle estimate gradientsusing only two forward passes but are theorized to be catastrophically slow foroptimizing large models. In this work, we propose a memory-efficientzerothorder optimizer (MeZO), adapting the classical ZO-SGD method to operatein-place, thereby fine-tuning LMs with the same memory footprint as inference.For example, with a single A100 80GB GPU, MeZO can train a 30-billion parametermodel, whereas fine-tuning with backpropagation can train only a 2.7B LM withthe same budget. We conduct comprehensive experiments across model types(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks(classification, multiple-choice, and generation). 
Our results demonstrate that(1) MeZO significantly outperforms in-context learning and linear probing; (2)MeZO achieves comparable performance to fine-tuning with backpropagation acrossmultiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reductionin our implementation; (3) MeZO is compatible with both full-parameter andparameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZOcan effectively optimize non-differentiable objectives (e.g., maximizingaccuracy or F1). We support our empirical findings with theoretical insights,highlighting how adequate pre-training and task prompts enable MeZO tofine-tune huge models, despite classical ZO analyses suggesting otherwise.",,arXiv,"['cs.lg', 'cs.cl']",, +1217,do large language models know what they don't know,"['Zhangyue Yin', 'Qiushi Sun', 'Qipeng Guo', 'Jiawen Wu', 'Xipeng Qiu', 'Xuanjing Huang']",http://arxiv.org/pdf/2305.18153v2.pdf,2023-05-29,," Large language models (LLMs) have a wealth of knowledge that allows them toexcel in various Natural Language Processing (NLP) tasks. Current researchfocuses on enhancing their performance within their existing knowledge. Despitetheir vast knowledge, LLMs are still limited by the amount of information theycan accommodate and comprehend. Therefore, the ability to understand their ownlimitations on the unknows, referred to as self-knowledge, is of paramountimportance. This study aims to evaluate LLMs' self-knowledge by assessing theirability to identify unanswerable or unknowable questions. We introduce anautomated methodology to detect uncertainty in the responses of these models,providing a novel measure of their self-knowledge. We further introduce aunique dataset, SelfAware, consisting of unanswerable questions from fivediverse categories and their answerable counterparts. Our extensive analysis,involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovering anintrinsic capacity for self-knowledge within these models. Moreover, wedemonstrate that in-context learning and instruction tuning can further enhancethis self-knowledge. Despite this promising insight, our findings alsohighlight a considerable gap between the capabilities of these models and humanproficiency in recognizing the limits of their knowledge.",,arXiv,['cs.cl'],, +1218,improving clip training with language rewrites,"['Lijie Fan', 'Dilip Krishnan', 'Phillip Isola', 'Dina Katabi', 'Yonglong Tian']",http://arxiv.org/pdf/2305.20088v2.pdf,2023-05-31,," Contrastive Language-Image Pre-training (CLIP) stands as one of the mosteffective and scalable methods for training transferable vision models usingpaired image and text data. CLIP models are trained using contrastive loss,which typically relies on data augmentations to prevent overfitting andshortcuts. However, in the CLIP training paradigm, data augmentations areexclusively applied to image inputs, while language inputs remain unchangedthroughout the entire training process, limiting the exposure of diverse textsto the same image. In this paper, we introduce Language augmented CLIP(LaCLIP), a simple yet highly effective approach to enhance CLIP trainingthrough language rewrites. Leveraging the in-context learning capability oflarge language models, we rewrite the text descriptions associated with eachimage. These rewritten texts exhibit diversity in sentence structure andvocabulary while preserving the original key concepts and meanings. 
Duringtraining, LaCLIP randomly selects either the original texts or the rewrittenversions as text augmentations for each image. Extensive experiments on CC3M,CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training withlanguage rewrites significantly improves the transfer performance withoutcomputation or memory overhead during training. Specifically for ImageNetzero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% onLAION-400M. Code is available at https://github.com/LijieFan/LaCLIP.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +1219,sqlpalm improved large language model adaptation for texttosql,"['Ruoxi Sun', 'Sercan O. Arik', 'Hootan Nakhost', 'Hanjun Dai', 'Rajarishi Sinha', 'Pengcheng Yin', 'Tomas Pfister']",http://arxiv.org/pdf/2306.00739v3.pdf,2023-05-26,," One impressive emergent capability of large language models (LLMs) isgeneration of code, including Structured Query Language (SQL) for databases.For the task of converting natural language text to SQL queries, Text-to-SQL,adaptation of LLMs is of paramount importance, both in in-context learning andfine-tuning settings, depending on the amount of adaptation data used. In thispaper, we propose an LLM-based Text-to-SQL model SQL-PaLM, leveraging onPaLM-2, that pushes the state-of-the-art in both settings. Few-shot SQL-PaLM isbased on an execution-based self-consistency prompting approach designed forText-to-SQL, and achieves 77.3% in test-suite accuracy on Spider, which to ourbest knowledge is the first to outperform previous state-of-the-art withfine-tuning by a significant margin, 4%. Furthermore, we demonstrate that thefine-tuned SQL-PALM outperforms it further by another 1%. Towards applyingSQL-PaLM to real-world scenarios we further evaluate its robustness on otherchallenging variants of Spider and demonstrate the superior generalizationcapability of SQL-PaLM. In addition, via extensive case studies, we demonstratethe impressive intelligent capabilities and various success enablers ofLLM-based Text-to-SQL.",,arXiv,"['cs.cl', 'cs.ai', 'cs.db']",, +1220,zeroshot 3d shape correspondence,"['Ahmed Abdelreheem', 'Abdelrahman Eldesokey', 'Maks Ovsjanikov', 'Peter Wonka']",http://arxiv.org/pdf/2306.03253v2.pdf,2023-06-05,," We propose a novel zero-shot approach to computing correspondences between 3Dshapes. Existing approaches mainly focus on isometric and near-isometric shapepairs (e.g., human vs. human), but less attention has been given to stronglynon-isometric and inter-class shape matching (e.g., human vs. cow). To thisend, we introduce a fully automatic method that exploits the exceptionalreasoning capabilities of recent foundation models in language and vision totackle difficult shape correspondence problems. Our approach comprises multiplestages. First, we classify the 3D shapes in a zero-shot manner by feedingrendered shape views to a language-vision model (e.g., BLIP2) to generate alist of class proposals per shape. These proposals are unified into a singleclass per shape by employing the reasoning capabilities of ChatGPT. Second, weattempt to segment the two shapes in a zero-shot manner, but in contrast to theco-segmentation problem, we do not require a mutual set of semantic regions.Instead, we propose to exploit the in-context learning capabilities of ChatGPTto generate two different sets of semantic regions for each shape and asemantic mapping between them. 
This enables our approach to match stronglynon-isometric shapes with significant differences in geometric structure.Finally, we employ the generated semantic mapping to produce coarsecorrespondences that can further be refined by the functional maps framework toproduce dense point-to-point maps. Our approach, despite its simplicity,produces highly plausible results in a zero-shot manner, especially betweenstrongly non-isometric shapes. Project webpage:https://samir55.github.io/3dshapematch/.",,arXiv,['cs.cv'],, +1221,mimicit multimodal incontext instruction tuning,"['Bo Li', 'Yuanhan Zhang', 'Liangyu Chen', 'Jinghao Wang', 'Fanyi Pu', 'Jingkang Yang', 'Chunyuan Li', 'Ziwei Liu']",http://arxiv.org/pdf/2306.05425v1.pdf,2023-06-08,," High-quality instructions and responses are essential for the zero-shotperformance of large language models on interactive natural language tasks. Forinteractive vision-language tasks involving intricate visual scenes, a largequantity of diverse and creative instruction-response pairs should beimperative to tune vision-language models (VLMs). Nevertheless, the currentavailability of vision-language instruction-response pairs in terms ofquantity, diversity, and creativity remains limited, posing challenges to thegeneralization of interactive VLMs. Here we present MultI-Modal In-ContextInstruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodalinstruction-response pairs, with 2.2 million unique instructions derived fromimages and videos. Each pair is accompanied by multi-modal in-contextinformation, forming conversational contexts aimed at empowering VLMs inperception, reasoning, and planning. The instruction-response collectionprocess, dubbed as Syphus, is scaled using an automatic annotation pipelinethat combines human expertise with GPT's capabilities. Using the MIMIC-ITdataset, we train a large VLM named Otter. Based on extensive evaluationsconducted on vision-language benchmarks, it has been observed that Otterdemonstrates remarkable proficiency in multi-modal perception, reasoning, andin-context learning. Human evaluation reveals it effectively aligns with theuser's intentions. We release the MIMIC-IT dataset, instruction-responsecollection pipeline, benchmarks, and the Otter model.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.hc']",, +1222,medfmc a realworld dataset and benchmark for foundation model adaptation in medical image classification,"['Dequan Wang', 'Xiaosong Wang', 'Lilong Wang', 'Mengzhang Li', 'Qian Da', 'Xiaoqiang Liu', 'Xiangyu Gao', 'Jun Shen', 'Junjun He', 'Tian Shen', 'Qi Duan', 'Jie Zhao', 'Kang Li', 'Yu Qiao', 'Shaoting Zhang']",http://arxiv.org/pdf/2306.09579v1.pdf,2023-06-16,," Foundation models, often pre-trained with large-scale data, have achievedparamount success in jump-starting various vision and language applications.Recent advances further enable adapting foundation models in downstream tasksefficiently using only a few training samples, e.g., in-context learning. Yet,the application of such learning paradigms in medical image analysis remainsscarce due to the shortage of publicly accessible data and benchmarks. In thispaper, we aim at approaches adapting the foundation models for medical imageclassification and present a novel dataset and benchmark for the evaluation,i.e., examining the overall performance of accommodating the large-scalefoundation models downstream on a set of diverse real-world clinical tasks. 
Wecollect five sets of medical imaging data from multiple institutes targeting avariety of real-world clinical tasks (22,349 images in total), i.e., thoracicdiseases screening in X-rays, pathological lesion tissue screening, lesiondetection in endoscopy images, neonatal jaundice evaluation, and diabeticretinopathy grading. Results of multiple baseline methods are demonstratedusing the proposed dataset from both accuracy and cost-effective perspectives.",,arXiv,['cs.cv'],, +1223,jiuzhang 20 a unified chinese pretrained language model for multitask mathematical problem solving,"['Wayne Xin Zhao', 'Kun Zhou', 'Beichen Zhang', 'Zheng Gong', 'Zhipeng Chen', 'Yuanhang Zhou', 'Ji-Rong Wen', 'Jing Sha', 'Shijin Wang', 'Cong Liu', 'Guoping Hu']",http://arxiv.org/pdf/2306.11027v1.pdf,2023-06-19,," Although pre-trained language models~(PLMs) have recently advanced theresearch progress in mathematical reasoning, they are not specially designed asa capable multi-task solver, suffering from high cost for multi-task deployment(\eg a model copy for a task) and inferior performance on complex mathematicalproblems in practical applications. To address these issues, in this paper, wepropose \textbf{JiuZhang~2.0}, a unified Chinese PLM specially for multi-taskmathematical problem solving. Our idea is to maintain a moderate-sized modeland employ the \emph{cross-task knowledge sharing} to improve the modelcapacity in a multi-task setting. Specially, we construct aMixture-of-Experts~(MoE) architecture for modeling mathematical text, so as tocapture the common mathematical knowledge across tasks. For optimizing the MoEarchitecture, we design \emph{multi-task continual pre-training} and\emph{multi-task fine-tuning} strategies for multi-task adaptation. Thesetraining strategies can effectively decompose the knowledge from the task dataand establish the cross-task sharing via expert networks. In order to furtherimprove the general capacity of solving different complex tasks, we leveragelarge language models~(LLMs) as complementary models to iteratively refine thegenerated solution by our PLM, via in-context learning. Extensive experimentshave demonstrated the effectiveness of our model.",,arXiv,"['cs.cl', 'cs.ai']",, +1224,a chain of aibased solutions for resolving fqns and fixing syntax errors in partial code,"['Qing Huang', 'Jiahui Zhu', 'Zhenchang Xing', 'Huan Jin', 'Changjing Wang', 'Xiwei Xu']",http://arxiv.org/pdf/2306.11981v1.pdf,2023-06-21,," API documentation, technical blogs and programming Q&A sites contain numerouspartial code that can be reused in programming tasks, but often these code areuncompilable due to unresolved names and syntax errors. To facilitate partialcode reuse, we propose the Partial Code Reuse Chain (PCR-Chain) for resolvingfully-qualified names (FQNs) and fixing last-mile syntax errors in partial codebased on a giant large language model (LLM) like ChatGPT. Methodologically,PCR-Chain is backed up by the underlying global-level prompt architecture(which combines three design ideas: hierarchical task breakdown, promptcomposition, and a mix of prompt-based AI and non-AI units) and the local-levelprompt design. Technically, we propose PCR-Chain, which employs in-contextlearning rather than symbolic, costly training methods. Experimental resultsdemonstrate that in dynamically-typed languages (Python), PCR-Chain outperformscurrent state-of-the-art (SOTA) 5% accuracy like RING. 
For statically-typelanguages (Java), our approach achieves high accuracy of 80.5% in resolvingboth non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) thatcan only address last-mile syntax errors. The correct execution of the unit,module, and PCR-Chain demonstrates the effectiveness of the prompt design,composition, and architecture and opens up possibilities for building softwareengineering tools based on LLMs, replacing traditional program analysismethods.",,arXiv,['cs.se'],, +1225,generative multimodal entity linking,"['Senbao Shi', 'Zhenran Xu', 'Baotian Hu', 'Min Zhang']",http://arxiv.org/pdf/2306.12725v2.pdf,2023-06-22,," Multimodal Entity Linking (MEL) is the task of mapping mentions withmultimodal contexts to the referent entities from a knowledge base (e.g.Wikipedia). Existing MEL methods mainly focus on designing complex multimodalinteraction mechanisms and require fine-tuning all model parameters, which canbe prohibitively costly and difficult to scale in the era of Large LanguageModels (LLMs). In this work, we propose GEMEL, a simple yet effectiveGenerative Multimodal Entity Linking framework based on LLMs, which directlygenerates target entity names. We keep the vision and language model frozen andonly train a feature mapper to enable cross-modality interactions. To adaptLLMs to the MEL task, we take advantage of the emergent in-context learningcapability of LLMs by retrieving multimodal instances as demonstrations.Extensive experiments show that, with only ~0.3% of the model parametersfine-tuned, GEMEL achieves state-of-the-art results on two well-established MELdatasets (7.7% accuracy gains on WikiDiverse and 8.8% accuracy gains onWikiMEL). The performance gain stems from mitigating the popularity bias of LLMpredictions and disambiguating less common entities effectively. Furtheranalysis verifies the generality and scalability of GEMEL. Our approach iscompatible with any off-the-shelf language model, paving the way towards anefficient and general solution for utilizing LLMs in the MEL task.",,arXiv,['cs.cl'],, +1226,kosmos2 grounding multimodal large language models to the world,"['Zhiliang Peng', 'Wenhui Wang', 'Li Dong', 'Yaru Hao', 'Shaohan Huang', 'Shuming Ma', 'Furu Wei']",http://arxiv.org/pdf/2306.14824v3.pdf,2023-06-26,," We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling newcapabilities of perceiving object descriptions (e.g., bounding boxes) andgrounding text to the visual world. Specifically, we represent referexpressions as links in Markdown, i.e., ``[text span](bounding boxes)'', whereobject descriptions are sequences of location tokens. Together with multimodalcorpora, we construct large-scale data of grounded image-text pairs (calledGrIT) to train the model. In addition to the existing capabilities of MLLMs(e.g., perceiving general modalities, following instructions, and performingin-context learning), Kosmos-2 integrates the grounding capability intodownstream applications. We evaluate Kosmos-2 on a wide range of tasks,including (i) multimodal grounding, such as referring expression comprehension,and phrase grounding, (ii) multimodal referring, such as referring expressiongeneration, (iii) perception-language tasks, and (iv) language understandingand generation. This work lays out the foundation for the development ofEmbodiment AI and sheds light on the big convergence of language, multimodalperception, action, and world modeling, which is a key step toward artificialgeneral intelligence. 
Code and pretrained models are available athttps://aka.ms/kosmos-2.",,arXiv,"['cs.cl', 'cs.cv']",, +1227,supervised pretraining can learn incontext reinforcement learning,"['Jonathan N. Lee', 'Annie Xie', 'Aldo Pacchiano', 'Yash Chandak', 'Chelsea Finn', 'Ofir Nachum', 'Emma Brunskill']",http://arxiv.org/pdf/2306.14892v1.pdf,2023-06-26,," Large transformer models trained on diverse datasets have shown a remarkableability to learn in-context, achieving high few-shot performance on tasks theywere not explicitly trained to solve. In this paper, we study the in-contextlearning capabilities of transformers in decision-making problems, i.e.,reinforcement learning (RL) for bandits and Markov decision processes. To doso, we introduce and study Decision-Pretrained Transformer (DPT), a supervisedpretraining method where the transformer predicts an optimal action given aquery state and an in-context dataset of interactions, across a diverse set oftasks. This procedure, while simple, produces a model with several surprisingcapabilities. We find that the pretrained transformer can be used to solve arange of RL problems in-context, exhibiting both exploration online andconservatism offline, despite not being explicitly trained to do so. The modelalso generalizes beyond the pretraining distribution to new tasks andautomatically adapts its decision-making strategies to unknown structure.Theoretically, we show DPT can be viewed as an efficient implementation ofBayesian posterior sampling, a provably sample-efficient RL algorithm. Wefurther leverage this connection to provide guarantees on the regret of thein-context algorithm yielded by DPT, and prove that it can learn faster thanalgorithms used to generate the pretraining data. These results suggest apromising yet simple path towards instilling strong in-context decision-makingabilities in transformers.",,arXiv,"['cs.lg', 'cs.ai']",, +1228,a gpt4 reticular chemist for guiding mof discovery,"['Zhiling Zheng', 'Zichao Rong', 'Nakul Rampal', 'Christian Borgs', 'Jennifer T. Chayes', 'Omar M. Yaghi']",http://arxiv.org/pdf/2306.14915v2.pdf,2023-06-20,," We present a new framework integrating the AI model GPT-4 into the iterativeprocess of reticular chemistry experimentation, leveraging a cooperativeworkflow of interaction between AI and a human researcher. This GPT-4 ReticularChemist is an integrated system composed of three phases. Each of theseutilizes GPT-4 in various capacities, wherein GPT-4 provides detailedinstructions for chemical experimentation and the human provides feedback onthe experimental outcomes, including both success and failures, for thein-context learning of AI in the next iteration. This iterative human-AIinteraction enabled GPT-4 to learn from the outcomes, much like an experiencedchemist, by a prompt-learning strategy. Importantly, the system is based onnatural language for both development and operation, eliminating the need forcoding skills, and thus, make it accessible to all chemists. Our collaborationwith GPT-4 Reticular Chemist guided the discovery of an isoreticular series ofMOFs, with each synthesis fine-tuned through iterative feedback and expertsuggestions. 
This workflow presents a potential for broader applications inscientific research by harnessing the capability of large language models likeGPT-4 to enhance the feasibility and efficiency of research activities.",,arXiv,"['cs.ai', 'cond-mat.mtrl-sci', 'physics.chem-ph']",, +1229,voicebox textguided multilingual universal speech generation at scale,"['Matthew Le', 'Apoorv Vyas', 'Bowen Shi', 'Brian Karrer', 'Leda Sari', 'Rashel Moritz', 'Mary Williamson', 'Vimal Manohar', 'Yossi Adi', 'Jay Mahadeokar', 'Wei-Ning Hsu']",http://arxiv.org/pdf/2306.15687v2.pdf,2023-06-23,," Large-scale generative models such as GPT and DALL-E have revolutionized theresearch community. These models not only generate high fidelity outputs, butare also generalists which can solve tasks not explicitly taught. In contrast,speech generative models are still primitive in terms of scale and taskgeneralization. In this paper, we present Voicebox, the most versatiletext-guided generative model for speech at scale. Voicebox is anon-autoregressive flow-matching model trained to infill speech, given audiocontext and text, trained on over 50K hours of speech that are not filtered orenhanced. Similar to GPT, Voicebox can perform many different tasks throughin-context learning, but is more flexible as it can also condition on futurecontext. Voicebox can be used for mono or cross-lingual zero-shottext-to-speech synthesis, noise removal, content editing, style conversion, anddiverse sample generation. In particular, Voicebox outperforms thestate-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9% vs1.9% word error rates) and audio similarity (0.580 vs 0.681) while being up to20 times faster. Audio samples can be found in\url{https://voicebox.metademolab.com}.",,arXiv,"['eess.as', 'cs.cl', 'cs.lg', 'cs.sd']",, +1230,spae semantic pyramid autoencoder for multimodal generation with frozen llms,"['Lijun Yu', 'Yong Cheng', 'Zhiruo Wang', 'Vivek Kumar', 'Wolfgang Macherey', 'Yanping Huang', 'David A. Ross', 'Irfan Essa', 'Yonatan Bisk', 'Ming-Hsuan Yang', 'Kevin Murphy', 'Alexander G. Hauptmann', 'Lu Jiang']",http://arxiv.org/pdf/2306.17842v3.pdf,2023-06-30,," In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enablingfrozen LLMs to perform both understanding and generation tasks involvingnon-linguistic modalities such as images or videos. SPAE converts between rawpixels and interpretable lexical tokens (or words) extracted from the LLM'svocabulary. The resulting tokens capture both the semantic meaning and thefine-grained details needed for visual reconstruction, effectively translatingthe visual content into a language comprehensible to the LLM, and empowering itto perform a wide array of multimodal tasks. Our approach is validated throughin-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse setof image understanding and generation tasks. Our method marks the firstsuccessful attempt to enable a frozen LLM to generate image content whilesurpassing state-of-the-art performance in image understanding tasks, under thesame setting, by over 25%.",,arXiv,"['cs.cv', 'cs.cl', 'cs.mm']",, +1231,recallm an adaptable memory mechanism with temporal understanding for large language models,"['Brandon Kynoch', 'Hugo Latapie', 'Dwane van der Sluis']",http://arxiv.org/pdf/2307.02738v3.pdf,2023-07-06,," Large Language Models (LLMs) have made extraordinary progress in the field ofArtificial Intelligence and have demonstrated remarkable capabilities across alarge variety of tasks and domains. 
However, as we venture closer to creatingArtificial General Intelligence (AGI) systems, we recognize the need tosupplement LLMs with long-term memory to overcome the context window limitationand more importantly, to create a foundation for sustained reasoning,cumulative learning and long-term user interaction. In this paper we proposeRecallM, a novel architecture for providing LLMs with an adaptable andupdatable long-term memory mechanism. Unlike previous methods, the RecallMarchitecture is particularly effective at belief updating and maintaining atemporal understanding of the knowledge provided to it. We demonstrate throughvarious experiments the effectiveness of this architecture. Furthermore,through our own temporal understanding and belief updating experiments, we showthat RecallM is four times more effective than using a vector database forupdating knowledge previously stored in long-term memory. We also demonstratethat RecallM shows competitive performance on general question-answering andin-context learning tasks.",,arXiv,"['cs.ai', 'cs.cl', 'cs.sc']",, +1232,one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention,"['Arvind Mahankali', 'Tatsunori B. Hashimoto', 'Tengyu Ma']",http://arxiv.org/pdf/2307.03576v1.pdf,2023-07-07,," Recent works have empirically analyzed in-context learning and shown thattransformers trained on synthetic linear regression tasks can learn toimplement ridge regression, which is the Bayes-optimal predictor, givensufficient capacity [Aky\""urek et al., 2023], while one-layer transformers withlinear self-attention and no MLP layer will learn to implement one step ofgradient descent (GD) on a least-squares linear regression objective [vonOswald et al., 2022]. However, the theory behind these observations remainspoorly understood. We theoretically study transformers with a single layer oflinear self-attention, trained on synthetic noisy linear regression data.First, we mathematically show that when the covariates are drawn from astandard Gaussian distribution, the one-layer transformer which minimizes thepre-training loss will implement a single step of GD on the least-squareslinear regression objective. Then, we find that changing the distribution ofthe covariates and weight vector to a non-isotropic Gaussian distribution has astrong impact on the learned algorithm: the global minimizer of thepre-training loss now implements a single step of $\textit{pre-conditioned}$GD. However, if only the distribution of the responses is changed, then thisdoes not have a large effect on the learned algorithm: even when the responsecomes from a more general family of $\textit{nonlinear}$ functions, the globalminimizer of the pre-training loss still implements a single step of GD on aleast-squares linear regression objective.",,arXiv,['cs.lg'],, +1233,large language models as general pattern machines,"['Suvir Mirchandani', 'Fei Xia', 'Pete Florence', 'Brian Ichter', 'Danny Driess', 'Montserrat Gonzalez Arenas', 'Kanishka Rao', 'Dorsa Sadigh', 'Andy Zeng']",http://arxiv.org/pdf/2307.04721v2.pdf,2023-07-10,," We observe that pre-trained large language models (LLMs) are capable ofautoregressively completing complex token sequences -- from arbitrary onesprocedurally generated by probabilistic context-free grammars (PCFG), to morerich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), ageneral AI benchmark, prompted in the style of ASCII art. 
Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics -- from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ro']",, +1234,megatts 2 zeroshot texttospeech with arbitrary length speech prompts,"['Ziyue Jiang', 'Jinglin Liu', 'Yi Ren', 'Jinzheng He', 'Chen Zhang', 'Zhenhui Ye', 'Pengfei Wei', 'Chunfeng Wang', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao']",http://arxiv.org/pdf/2307.07218v2.pdf,2023-07-14,," Zero-shot text-to-speech aims at synthesizing voices with unseen speech prompts. Previous large-scale multispeaker TTS models have successfully achieved this goal with an enrolled recording within 10 seconds. However, most of them are designed to utilize only short speech prompts. The limited information in short speech prompts significantly hinders the performance of fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multispeaker TTS model that is capable of synthesizing speech for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference speeches; 2) and train a prosody language model with arbitrary-length speech prompts; With these designs, our model is suitable for prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverages the probabilities derived from multiple P-LLM outputs to produce expressive and controlled prosody. Furthermore, we propose a phoneme-level auto-regressive duration model to introduce in-context learning capabilities to duration modeling. Experiments demonstrate that our method could not only synthesize identity-preserving speech with a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found in https://mega-tts.github.io/mega2_demo/.",,arXiv,"['eess.as', 'cs.sd']",, +1235,do emergent abilities exist in quantized large language models an empirical study,"['Peiyu Liu', 'Zikang Liu', 'Ze-Feng Gao', 'Dawei Gao', 'Wayne Xin Zhao', 'Yaliang Li', 'Bolin Ding', 'Ji-Rong Wen']",http://arxiv.org/pdf/2307.08072v2.pdf,2023-07-16,," Despite the superior performance, Large Language Models~(LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs as well as increasing the inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation. It is important to understand how quantization impacts the capacity of LLMs.
Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models. Specially, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction-following in quantized LLMs. Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation on the test of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) fine-gained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings to understand the impact of quantization on emergent abilities, and sheds lights on the possibilities of extremely low-bit quantization for LLMs.",,arXiv,"['cs.cl', 'cs.ai']",, +1236,generating mathematical derivations with large language models,"['Jordan Meadows', 'Marco Valentino', 'Andre Freitas']",http://arxiv.org/pdf/2307.09998v3.pdf,2023-07-19,," The derivation of mathematical results in specialised fields, using Large Language Models (LLMs), is an emerging research direction that can help identify models' limitations, and potentially support mathematical discovery. In this paper, we leverage a symbolic engine to generate derivations of equations at scale, and investigate the capabilities of LLMs when deriving goal equations from premises. Specifically, we employ in-context learning for GPT and fine-tune a range of T5 models to compare the robustness and generalisation of pre-training strategies to specialised models. Empirical results show that fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and out-of-distribution test sets in conventional scores. However, an in-depth analysis reveals that the fine-tuned models are more sensitive to perturbations involving unseen symbols and (to a lesser extent) changes to equation structure. In addition, we analyse 1.7K equations, and over 200 derivations, to highlight common reasoning errors such as the inclusion of incorrect, irrelevant, and redundant equations. Finally, we explore the suitability of existing metrics for evaluating mathematical derivations and find evidence that, while they can capture general properties such as sensitivity to perturbations, they fail to highlight fine-grained reasoning errors and essential differences between models. Overall, this work demonstrates that training models on synthetic data may improve their math capabilities beyond much larger LLMs, but current metrics are not appropriately assessing the quality of generated mathematical text.",,arXiv,"['cs.cl', 'math.ho']",, +1237,layoutllmt2i eliciting layout guidance from llm for texttoimage generation,"['Leigang Qu', 'Shengqiong Wu', 'Hao Fei', 'Liqiang Nie', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.05095v2.pdf,2023-08-09,," In the text-to-image generation field, recent remarkable progress in Stable Diffusion makes it possible to generate rich kinds of novel photorealistic images.
However, current models still face misalignment issues (e.g., problematic spatial relation understanding and numeration failure) in complex natural scenes, which impedes the high-faithfulness text-to-image generation. Although recent efforts have been made to improve controllability by giving fine-grained guidance (e.g., sketch and scribbles), this issue has not been fundamentally tackled since users have to provide such guidance information manually. In this work, we strive to synthesize high-fidelity images that are semantically aligned with a given textual prompt without any guidance. Toward this end, we propose a coarse-to-fine paradigm to achieve layout planning and image generation. Concretely, we first generate the coarse-grained layout conditioned on a given textual prompt via in-context learning based on Large Language Models. Afterward, we propose a fine-grained object-interaction diffusion method to synthesize high-faithfulness images conditioned on the prompt and the automatically generated layout. Extensive experiments demonstrate that our proposed method outperforms the state-of-the-art models in terms of layout and image generation. Our code and settings are available at https://layoutllm-t2i.github.io.",,arXiv,"['cs.cv', 'cs.ai']",, +1238,audioldm 2 learning holistic audio generation with selfsupervised pretraining,"['Haohe Liu', 'Qiao Tian', 'Yi Yuan', 'Xubo Liu', 'Xinhao Mei', 'Qiuqiang Kong', 'Yuping Wang', 'Wenwu Wang', 'Yuxuan Wang', 'Mark D. Plumbley']",http://arxiv.org/pdf/2308.05734v2.pdf,2023-08-10,," Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called ""language of audio"" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at https://audioldm.github.io/audioldm2.",,arXiv,"['cs.sd', 'cs.ai', 'cs.mm', 'eess.as', 'eess.sp']",, +1239,time travel in llms tracing data contamination in large language models,"['Shahriar Golchin', 'Mihai Surdeanu']",http://arxiv.org/pdf/2308.08493v2.pdf,2023-08-16,," Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level.
To estimate contamination of individual instances, we employ ""guided instruction:"" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a ""general instruction"" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",, +1240,inductivebias learning generating code models with large language model,"['Toma Tanaka', 'Naofumi Emoto', 'Tsukasa Yumibayashi']",http://arxiv.org/pdf/2308.09890v1.pdf,2023-08-19,," Large Language Models(LLMs) have been attracting attention due to a ability called in-context learning(ICL). ICL, without updating the parameters of a LLM, it is possible to achieve highly accurate inference based on rules ``in the context'' by merely inputting a training data into the prompt. Although ICL is a developing field with many unanswered questions, LLMs themselves serves as a inference model, seemingly realizing inference without explicitly indicate ``inductive bias''. On the other hand, a code generation is also a highlighted application of LLMs. The accuracy of code generation has dramatically improved, enabling even non-engineers to generate code to perform the desired tasks by crafting appropriate prompts. In this paper, we propose a novel ``learning'' method called an ``Inductive-Bias Learning (IBL)'', which combines the techniques of ICL and code generation. An idea of IBL is straightforward. Like ICL, IBL inputs a training data into the prompt and outputs a code with a necessary structure for inference (we referred to as ``Code Model'') from a ``contextual understanding''. Despite being a seemingly simple approach, IBL encompasses both a ``property of inference without explicit inductive bias'' inherent in ICL and a ``readability and explainability'' of the code generation. Surprisingly, generated Code Models have been found to achieve predictive accuracy comparable to, and in some cases surpassing, ICL and representative machine learning models. Our IBL code is open source: https://github.com/fuyu-quant/IBLM",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",, +1241,exploring parameterefficient finetuning techniques for code generation with large language models,"['Martin Weyssow', 'Xin Zhou', 'Kisub Kim', 'David Lo', 'Houari Sahraoui']",http://arxiv.org/pdf/2308.10462v2.pdf,2023-08-21,," Large Language Models (LLMs) demonstrate impressive capabilities to generate accurate code snippets given natural language intents in zero-shot, i.e., without the need for specific fine-tuning.
While prior studies have highlightedthe advantages of fine-tuning LLMs, this process incurs high computationalcosts, making it impractical in resource-scarce environments, particularly formodels with billions of parameters. To address these challenges, previousresearch explored In-Context Learning (ICL) as a strategy to guide the LLMgenerative process with task-specific prompt examples. However, ICL introducesinconveniences, such as the need for designing contextually relevant promptsand the absence of learning task-specific parameters, thereby limitingdownstream task performance. In this context, we foresee Parameter-EfficientFine-Tuning (PEFT) techniques as a promising approach to efficiently specializeLLMs to task-specific data while maintaining reasonable resource consumption.In this paper, we deliver a comprehensive study of PEFT techniques for LLMsunder the automated code generation scenario. Our comprehensive investigationof PEFT techniques for LLMs reveals their superiority and potential over ICLacross a diverse set of LLMs. Additionally, we demonstrate the extendedcapabilities of PEFT, showcasing its ability to learn from two distinctdatasets jointly without compromising performance. Furthermore, our studyhighlights the potential for tuning larger LLMs and significant reductions inmemory usage by combining PEFT with quantization. Therefore, this study opensopportunities for broader applications of PEFT in software engineeringscenarios. Our code is available athttps://github.com/martin-wey/peft-llm-code/.",,arXiv,"['cs.se', 'cs.cl', 'cs.lg']",, +1242,causal intersectionality and dual form of gradient descent for multimodal analysis a case study on hateful memes,"['Yosuke Miyanishi', 'Minh Le Nguyen']",http://arxiv.org/pdf/2308.11585v1.pdf,2023-08-19,," In the wake of the explosive growth of machine learning (ML) usage,particularly within the context of emerging Large Language Models (LLMs),comprehending the semantic significance rooted in their internal workings iscrucial. While causal analyses focus on defining semantics and itsquantification, the gradient-based approach is central to explainable AI (XAI),tackling the interpretation of the black box. By synergizing these approaches,the exploration of how a model's internal mechanisms illuminate its causaleffect has become integral for evidence-based decision-making. A parallel lineof research has revealed that intersectionality - the combinatory impact ofmultiple demographics of an individual - can be structured in the form of anAveraged Treatment Effect (ATE). Initially, this study illustrates that thehateful memes detection problem can be formulated as an ATE, assisted by theprinciples of intersectionality, and that a modality-wise summarization ofgradient-based attention attribution scores can delineate the distinctbehaviors of three Transformerbased models concerning ATE. Subsequently, weshow that the latest LLM LLaMA2 has the ability to disentangle theintersectional nature of memes detection in an in-context learning setting,with their mechanistic properties elucidated via meta-gradient, a secondaryform of gradient. 
In conclusion, this research contributes to the ongoingdialogue surrounding XAI and the multifaceted nature of ML models.",,arXiv,"['cs.ai', 'cs.cl']",, +1243,empowering dynamicsaware texttovideo diffusion with large language models,"['Hao Fei', 'Shengqiong Wu', 'Wei Ji', 'Hanwang Zhang', 'Tat-Seng Chua']",http://arxiv.org/pdf/2308.13812v1.pdf,2023-08-26,," Text-to-video (T2V) synthesis has gained increasing attention in thecommunity, in which the recently emerged diffusion models (DMs) havepromisingly shown stronger performance than the past approaches. While existingstate-of-the-art DMs are competent to achieve high-resolution video generation,they may largely suffer from key limitations (e.g., action occurrencedisorders, crude video motions) with respect to the intricate temporal dynamicsmodeling, one of the crux of video synthesis. In this work, we investigatestrengthening the awareness of video dynamics for DMs, for high-quality T2Vgeneration. Inspired by human intuition, we design an innovative dynamic scenemanager (dubbed as Dysen) module, which includes (step-1) extracting from inputtext the key actions with proper time-order arrangement, (step-2) transformingthe action schedules into the dynamic scene graph (DSG) representations, and(step-3) enriching the scenes in the DSG with sufficient and reasonabledetails. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) viain-context learning, Dysen realizes (nearly) human-level temporal dynamicsunderstanding. Finally, the resulting video DSG with rich action scene detailsis encoded as fine-grained spatio-temporal features, integrated into thebackbone T2V DM for video generating. Experiments on popular T2V datasetssuggest that our framework consistently outperforms prior arts with significantmargins, especially in the scenario with complex actions. Project page athttps://haofei.vip/Dysen-VDM",,arXiv,"['cs.ai', 'cs.cv']",, +1244,identifying and mitigating the security risks of generative ai,"['Clark Barrett', 'Brad Boyd', 'Elie Burzstein', 'Nicholas Carlini', 'Brad Chen', 'Jihye Choi', 'Amrita Roy Chowdhury', 'Mihai Christodorescu', 'Anupam Datta', 'Soheil Feizi', 'Kathleen Fisher', 'Tatsunori Hashimoto', 'Dan Hendrycks', 'Somesh Jha', 'Daniel Kang', 'Florian Kerschbaum', 'Eric Mitchell', 'John Mitchell', 'Zulfikar Ramzan', 'Khawaja Shams', 'Dawn Song', 'Ankur Taly', 'Diyi Yang']",http://arxiv.org/pdf/2308.14840v4.pdf,2023-08-28,," Every major technical invention resurfaces the dual-use dilemma -- the newtechnology has the potential to be used for good as well as for harm.Generative AI (GenAI) techniques, such as large language models (LLMs) anddiffusion models, have shown remarkable capabilities (e.g., in-contextlearning, code-completion, and text-to-image generation and editing). However,GenAI can be used just as well by attackers to generate new attacks andincrease the velocity and efficacy of existing attacks. This paper reports the findings of a workshop held at Google (co-organized byStanford University and the University of Wisconsin-Madison) on the dual-usedilemma posed by GenAI. This paper is not meant to be comprehensive, but israther an attempt to synthesize some of the interesting findings from theworkshop. We discuss short-term and long-term goals for the community on thistopic. 
We hope this paper provides both a launching point for a discussion onthis important topic as well as interesting problems that the researchcommunity can work to address.",,arXiv,['cs.ai'],, +1245,anomalygpt detecting industrial anomalies using large visionlanguage models,"['Zhaopeng Gu', 'Bingke Zhu', 'Guibo Zhu', 'Yingying Chen', 'Ming Tang', 'Jinqiao Wang']",http://arxiv.org/pdf/2308.15366v4.pdf,2023-08-29,," Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA havedemonstrated the capability of understanding images and achieved remarkableperformance in various visual tasks. Despite their strong abilities inrecognizing common objects due to extensive training datasets, they lackspecific domain knowledge and have a weaker understanding of localized detailswithin objects, which hinders their effectiveness in the Industrial AnomalyDetection (IAD) task. On the other hand, most existing IAD methods only provideanomaly scores and necessitate the manual setting of thresholds to distinguishbetween normal and abnormal samples, which restricts their practicalimplementation. In this paper, we explore the utilization of LVLM to addressthe IAD problem and propose AnomalyGPT, a novel IAD approach based on LVLM. Wegenerate training data by simulating anomalous images and producingcorresponding textual descriptions for each image. We also employ an imagedecoder to provide fine-grained semantic and design a prompt learner tofine-tune the LVLM using prompt embeddings. Our AnomalyGPT eliminates the needfor manual threshold adjustments, thus directly assesses the presence andlocations of anomalies. Additionally, AnomalyGPT supports multi-turn dialoguesand exhibits impressive few-shot in-context learning capabilities. With onlyone normal shot, AnomalyGPT achieves the state-of-the-art performance with anaccuracy of 86.1%, an image-level AUC of 94.1%, and a pixel-level AUC of 95.3%on the MVTec-AD dataset. Code is available athttps://github.com/CASIA-IVA-Lab/AnomalyGPT.",,arXiv,['cs.cv'],, +1246,business process text sketch automation generation using large language model,"['Rui Zhu', 'Quanzhou Hu', 'Wenxin Li', 'Honghao Xiao', 'Chaogang Wang', 'Zixin Zhou']",http://arxiv.org/pdf/2309.01071v1.pdf,2023-09-03,," Business Process Management (BPM) is gaining increasing attention as it hasthe potential to cut costs while boosting output and quality. Business processdocument generation is a crucial stage in BPM. However, due to a shortage ofdatasets, data-driven deep learning techniques struggle to deliver the expectedresults. We propose an approach to transform Conditional Process Trees (CPTs)into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs).The traditional prompting approach (Few-shot In-Context Learning) tries to getthe correct answer in one go, and it can find the pattern of transformingsimple CPTs into BPTSs, but for close-domain and CPTs with complex hierarchy,the traditional prompts perform weakly and with low correctness. We suggestusing this technique to break down a difficult CPT into a number of basic CPTsand then solve each one in turn, drawing inspiration from thedivide-and-conquer strategy. We chose 100 process trees with depths rangingfrom 2 to 5 at random, as well as CPTs with many nodes, many degrees ofselection, and cyclic nesting. Experiments show that our method can achieve acorrect rate of 93.42%, which is 45.17% better than traditional promptingmethods. 
Our proposed method provides a solution for business process documentgeneration in the absence of datasets, and secondly, it becomes potentiallypossible to provide a large number of datasets for the process model extraction(PME) domain.",,arXiv,['cs.cl'],, +1247,textbooks are all you need ii phi15 technical report,"['Yuanzhi Li', 'Sébastien Bubeck', 'Ronen Eldan', 'Allie Del Giorno', 'Suriya Gunasekar', 'Yin Tat Lee']",http://arxiv.org/pdf/2309.05463v1.pdf,2023-09-11,," We continue the investigation into the power of smaller Transformer-basedlanguage models as initiated by \textbf{TinyStories} -- a 10 million parametermodel that can produce coherent English -- and the follow-up work on\textbf{phi-1}, a 1.3 billion parameter model with Python coding performanceclose to the state-of-the-art. The latter work proposed to use existing LargeLanguage Models (LLMs) to generate ``textbook quality"" data as a way to enhancethe learning process compared to traditional web data. We follow the``Textbooks Are All You Need"" approach, focusing this time on common sensereasoning in natural language, and create a new 1.3 billion parameter modelnamed \textbf{phi-1.5}, with performance on natural language tasks comparableto models 5x larger, and surpassing most non-frontier LLMs on more complexreasoning tasks such as grade-school mathematics and basic coding. Moregenerally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,both good -- such as the ability to ``think step by step"" or perform somerudimentary in-context learning -- and bad, including hallucinations and thepotential for toxic and biased generations -- encouragingly though, we areseeing improvement on that front thanks to the absence of web data. Weopen-source \textbf{phi-1.5} to promote further research on these urgenttopics.",,arXiv,"['cs.cl', 'cs.ai']",, +1248,uncovering mesaoptimization algorithms in transformers,"['Johannes von Oswald', 'Eyvind Niklasson', 'Maximilian Schlegel', 'Seijin Kobayashi', 'Nicolas Zucchet', 'Nino Scherrer', 'Nolan Miller', 'Mark Sandler', 'Blaise Agüera y Arcas', 'Max Vladymyrov', 'Razvan Pascanu', 'João Sacramento']",http://arxiv.org/pdf/2309.05858v1.pdf,2023-09-11,," Transformers have become the dominant model in deep learning, but the reasonfor their superior performance is poorly understood. Here, we hypothesize thatthe strong performance of Transformers stems from an architectural bias towardsmesa-optimization, a learned process running within the forward pass of a modelconsisting of the following two steps: (i) the construction of an internallearning objective, and (ii) its corresponding solution found throughoptimization. To test this hypothesis, we reverse-engineer a series ofautoregressive Transformers trained on simple sequence modeling tasks,uncovering underlying gradient-based mesa-optimization algorithms driving thegeneration of predictions. Moreover, we show that the learned forward-passoptimization algorithm can be immediately repurposed to solve supervisedfew-shot tasks, suggesting that mesa-optimization might underlie the in-contextlearning capabilities of large language models. Finally, we propose a novelself-attention layer, the mesa-layer, that explicitly and efficiently solvesoptimization problems specified in context. 
We find that this layer can lead toimproved performance in synthetic and preliminary language modelingexperiments, adding weight to our hypothesis that mesa-optimization is animportant operation hidden within the weights of trained Transformers.",,arXiv,"['cs.lg', 'cs.ai']",, +1249,narrowing the gap between supervised and unsupervised sentence representation learning with large language model,"['Mingxin Li', 'Richong Zhang', 'Zhijie Nie', 'Yongyi Mao']",http://arxiv.org/pdf/2309.06453v2.pdf,2023-09-12,," Sentence Representation Learning (SRL) is a fundamental task in NaturalLanguage Processing (NLP), with the Contrastive Learning of Sentence Embeddings(CSE) being the mainstream technique due to its superior performance. Anintriguing phenomenon in CSE is the significant performance gap betweensupervised and unsupervised methods, with their only difference lying in thetraining data. Previous works attribute this performance gap to differences intwo representation properties (alignment and uniformity). However, sincealignment and uniformity only measure the results, they fail to answer ""Whataspects of the training data contribute to the performance gap?"" and ""How canthe performance gap be narrowed?"", In this paper, we conduct empiricalexperiments to answer these ""What"" and ""How"" questions. We first answer the""What"" question by thoroughly comparing the behavior of supervised andunsupervised CSE during their respective training processes. From thecomparison, we identify the similarity pattern as a key factor to theperformance gap, and introduce a metric, called Relative Fitting Difficulty(RFD), to measure the complexity of the similarity pattern. Then, based on theinsights gained from the ""What"" question, we tackle the ""How"" question byincreasing the pattern complexity of the training data. We achieve this byleveraging the In-Context Learning (ICL) capability of the Large Language Model(LLM) to generate data that simulates complex patterns. By utilizing thehierarchical patterns in the LLM-generated data, we effectively narrow the gapbetween supervised and unsupervised CSE. We release our codes and appendix athttps://github.com/BDBC-KG-NLP/NGCSE.",,arXiv,"['cs.cl', 'cs.lg']",, +1250,understanding catastrophic forgetting in language models via implicit inference,"['Suhas Kotha', 'Jacob Mitchell Springer', 'Aditi Raghunathan']",http://arxiv.org/pdf/2309.10105v1.pdf,2023-09-18,," Fine-tuning (via methods such as instruction-tuning or reinforcement learningfrom human feedback) is a crucial step in training language models to robustlycarry out tasks of interest. However, we lack a systematic understanding of theeffects of fine-tuning, particularly on tasks outside the narrow fine-tuningdistribution. In a simplified scenario, we demonstrate that improvingperformance on tasks within the fine-tuning data distribution comes at theexpense of suppressing model capabilities on other tasks. This degradation isespecially pronounced for tasks ""closest"" to the fine-tuning distribution. Wehypothesize that language models implicitly infer the task of the promptcorresponds, and the fine-tuning process predominantly skews this taskinference towards tasks in the fine-tuning distribution. To test thishypothesis, we propose Conjugate Prompting to see if we can recover pretrainedcapabilities. Conjugate prompting artificially makes the task look farther fromthe fine-tuning distribution while requiring the same capability. 
We find thatconjugate prompting systematically recovers some of the pretrainingcapabilities on our synthetic setup. We then apply conjugate prompting toreal-world LLMs using the observation that fine-tuning distributions aretypically heavily skewed towards English. We find that simply translating theprompts to different languages can cause the fine-tuned models to respond liketheir pretrained counterparts instead. This allows us to recover the in-contextlearning abilities lost via instruction tuning, and more concerningly, torecover harmful content generation suppressed by safety fine-tuning in chatbotslike ChatGPT.",,arXiv,"['cs.cl', 'cs.lg']",, +1251,gpt4aigchip towards nextgeneration ai accelerator design automation via large language models,"['Yonggan Fu', 'Yongan Zhang', 'Zhongzhi Yu', 'Sixu Li', 'Zhifan Ye', 'Chaojian Li', 'Cheng Wan', 'Yingyan Lin']",http://arxiv.org/pdf/2309.10730v1.pdf,2023-09-19,," The remarkable capabilities and intricate nature of Artificial Intelligence(AI) have dramatically escalated the imperative for specialized AIaccelerators. Nonetheless, designing these accelerators for various AIworkloads remains both labor- and time-intensive. While existing designexploration and automation tools can partially alleviate the need for extensivehuman involvement, they still demand substantial hardware expertise, posing abarrier to non-experts and stifling AI accelerator development. Motivated bythe astonishing potential of large language models (LLMs) for generatinghigh-quality content in response to human language instructions, we embark onthis work to examine the possibility of harnessing LLMs to automate AIaccelerator design. Through this endeavor, we develop GPT4AIGChip, a frameworkintended to democratize AI accelerator design by leveraging human naturallanguages instead of domain-specific languages. Specifically, we first performan in-depth investigation into LLMs' limitations and capabilities for AIaccelerator design, thus aiding our understanding of our current position andgarnering insights into LLM-powered automated AI accelerator design.Furthermore, drawing inspiration from the above insights, we develop aframework called GPT4AIGChip, which features an automated demo-augmentedprompt-generation pipeline utilizing in-context learning to guide LLMs towardscreating high-quality AI accelerator design. To our knowledge, this work is thefirst to demonstrate an effective pipeline for LLM-powered automated AIaccelerator generation. Accordingly, we anticipate that our insights andframework can serve as a catalyst for innovations in next-generationLLM-powered design automation tools.",,arXiv,"['cs.lg', 'cs.ar']",, +1252,a benchmark for learning to translate a new language from one grammar book,"['Garrett Tanzer', 'Mirac Suzgun', 'Eline Visser', 'Dan Jurafsky', 'Luke Melas-Kyriazi']",http://arxiv.org/pdf/2309.16575v1.pdf,2023-09-28,," Large language models (LLMs) can perform impressive feats with in-contextlearning or lightweight finetuning. It is natural to wonder how well thesemodels adapt to genuinely new tasks, but how does one find tasks that areunseen in internet-scale training sets? 
We turn to a field that is explicitlymotivated and bottlenecked by a scarcity of web data: low-resource languages.In this paper, we introduce MTOB (Machine Translation from One Book), abenchmark for learning to translate between English and Kalamang -- a languagewith less than 200 speakers and therefore virtually no presence on the web --using several hundred pages of field linguistics reference materials. This taskframing is novel in that it asks a model to learn a language from a singlehuman-readable book of grammar explanations, rather than a large mined corpusof in-domain data, more akin to L2 learning than L1 acquisition. We demonstratethat baselines using current LLMs are promising but fall short of humanperformance, achieving 44.7 chrF on Kalamang to English translation and 45.8chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by ahuman who learned Kalamang from the same reference materials. We hope that MTOBwill help measure LLM capabilities along a new dimension, and that the methodsdeveloped to solve it could help expand access to language technology forunderserved communities by leveraging qualitatively different kinds of datathan traditional machine translation.",,arXiv,['cs.cl'],, +1253,benchmarking cognitive biases in large language models as evaluators,"['Ryan Koo', 'Minhwa Lee', 'Vipul Raheja', 'Jong Inn Park', 'Zae Myung Kim', 'Dongyeop Kang']",http://arxiv.org/pdf/2309.17012v1.pdf,2023-09-29,," Large Language Models (LLMs) have recently been shown to be effective asautomatic evaluators with simple prompting and in-context learning. In thiswork, we assemble 15 LLMs of four different size ranges and evaluate theiroutput responses by preference ranking from the other LLMs as evaluators, suchas System Star is better than System Square. We then evaluate the quality ofranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators(CoBBLEr), a benchmark to measure six different cognitive biases in LLMevaluation outputs, such as the Egocentric bias where a model prefers to rankits own outputs highly in evaluation. We find that LLMs are biased text qualityevaluators, exhibiting strong indications on our bias benchmark (average of 40%of comparisons across all models) within each of their evaluations thatquestion their robustness as evaluators. Furthermore, we examine thecorrelation between human and machine preferences and calculate the averageRank-Biased Overlap (RBO) score to be 49.6%, indicating that machinepreferences are misaligned with humans. According to our findings, LLMs maystill be unable to be utilized for automatic annotation aligned with humanpreferences. Our project page is at: https://minnesotanlp.github.io/cobbler.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1254,fewertoken neural speech codec with timeinvariant codes,"['Yong Ren', 'Tao Wang', 'Jiangyan Yi', 'Le Xu', 'Jianhua Tao', 'Chuyuan Zhang', 'Junzuo Zhou']",http://arxiv.org/pdf/2310.00014v1.pdf,2023-09-15,," Language model based text-to-speech (TTS) models, like VALL-E, have gainedattention for their outstanding in-context learning capability in zero-shotscenarios. Neural speech codec is a critical component of these models, whichcan convert speech into discrete token representations. However, excessivetoken sequences from the codec may negatively affect prediction accuracy andrestrict the progression of Language model based TTS models. To address thisissue, this paper proposes a novel neural speech codec with time-invariantcodes named TiCodec. 
By encoding and quantizing time-invariant information intoa separate code, TiCodec can reduce the amount of frame-level information thatneeds encoding, effectively decreasing the number of tokens as codes of speech.Furthermore, this paper introduces a time-invariant encoding consistency lossto enhance the consistency of time-invariant code within an utterance and forceit to capture more global information, which can benefit the zero-shot TTStask. Experimental results demonstrate that TiCodec can not only enhance thequality of reconstruction speech with fewer tokens but also increase thesimilarity and naturalness, as well as reduce the word error rate of thesynthesized speech by the TTS model.",,arXiv,"['cs.sd', 'eess.as']",, +1255,reactable enhancing react for table question answering,"['Yunjia Zhang', 'Jordan Henkel', 'Avrilia Floratou', 'Joyce Cahoon', 'Shaleen Deep', 'Jignesh M. Patel']",http://arxiv.org/pdf/2310.00815v1.pdf,2023-10-01,," Table Question Answering (TQA) presents a substantial challenge at theintersection of natural language processing and data analytics. This taskinvolves answering natural language (NL) questions on top of tabular data,demanding proficiency in logical reasoning, understanding of data semantics,and fundamental analytical capabilities. Due to its significance, a substantialvolume of research has been dedicated to exploring a wide range of strategiesaimed at tackling this challenge including approaches that leverage LargeLanguage Models (LLMs) through in-context learning or Chain-of-Thought (CoT)prompting as well as approaches that train and fine-tune custom models. Nonetheless, a conspicuous gap exists in the research landscape, where thereis limited exploration of how innovative foundational research, whichintegrates incremental reasoning with external tools in the context of LLMs, asexemplified by the ReAct paradigm, could potentially bring advantages to theTQA task. In this paper, we aim to fill this gap, by introducing ReAcTable(ReAct for Table Question Answering tasks), a framework inspired by the ReActparadigm that is carefully enhanced to address the challenges uniquelyappearing in TQA tasks such as interpreting complex data semantics, dealingwith errors generated by inconsistent data and generating intricate datatransformations. ReAcTable relies on external tools such as SQL and Python codeexecutors, to progressively enhance the data by generating intermediate datarepresentations, ultimately transforming it into a more accessible format foranswering the questions with greater ease. We demonstrate that ReAcTableachieves remarkable performance even when compared to fine-tuned approaches. Inparticular, it outperforms the best prior result on the WikiTQ benchmark,achieving an accuracy of 68.0% without requiring training a new model orfine-tuning.",,arXiv,['cs.db'],, +1256,graphtext graph reasoning in text space,"['Jianan Zhao', 'Le Zhuo', 'Yikang Shen', 'Meng Qu', 'Kai Liu', 'Michael Bronstein', 'Zhaocheng Zhu', 'Jian Tang']",http://arxiv.org/pdf/2310.01089v1.pdf,2023-10-02,," Large Language Models (LLMs) have gained the ability to assimilate humanknowledge and facilitate natural language interactions with both humans andother LLMs. However, despite their impressive achievements, LLMs have not madesignificant advancements in the realm of graph machine learning. Thislimitation arises because graphs encapsulate distinct relational data, makingit challenging to transform them into natural language that LLMs understand. 
Inthis paper, we bridge this gap with a novel framework, GraphText, thattranslates graphs into natural language. GraphText derives a graph-syntax treefor each graph that encapsulates both the node attributes and inter-noderelationships. Traversal of the tree yields a graph text sequence, which isthen processed by an LLM to treat graph tasks as text generation tasks.Notably, GraphText offers multiple advantages. It introduces training-freegraph reasoning: even without training on graph data, GraphText with ChatGPTcan achieve on par with, or even surpassing, the performance ofsupervised-trained graph neural networks through in-context learning (ICL).Furthermore, GraphText paves the way for interactive graph reasoning, allowingboth humans and LLMs to communicate with the model seamlessly using naturallanguage. These capabilities underscore the vast, yet-to-be-explored potentialof LLMs in the domain of graph machine learning.",,arXiv,"['cs.cl', 'cs.lg']",, +1257,llmparser a llmbased log parsing framework,"['Zhihan Jiang', 'Jinyang Liu', 'Zhuangbin Chen', 'Yichen Li', 'Junjie Huang', 'Yintong Huo', 'Pinjia He', 'Jiazhen Gu', 'Michael R. Lyu']",http://arxiv.org/pdf/2310.01796v1.pdf,2023-10-03,," The process of log parsing, which converts log messages into structuredformats, is a crucial step for various log analysis tasks. Although numerouslog parsers have been proposed, their effectiveness on complex log data isoften hindered due to reliance on human-made rules or learning-based modelswith limited training data. The recent rise of powerful large language models(LLMs) shows potential for log parsing due to their extensive pre-trainedknowledge related to code and logging. However, their accuracy is currentlylimited due to the lack of specialized log parsing capabilities. Additionally,the inconsistency of their answers and significant overhead obstruct thepractical implementation of LLM-based log parsing. To tackle these challenges, we introduce LLMParser, the first practicalLLM-based log parsing framework. LLMParser enables accurate and robust logparsing by leveraging the in-context learning (ICL) capability of the LLM,employing a hierarchical candidate sampling algorithm, and selectinghigh-quality demonstrations. LLMParser also includes a novel adaptive parsingcache component to store and refine the templates generated by the LLM. Thisdesign aids in addressing the inefficiency of LLMs by rapid matching topreviously parsed log templates. LLMParser also adaptively updates thetemplates in the parsing cache to ensure consistent parsed results. Extensiveevaluation on large-scale public datasets demonstrates that LLMParser surpassesthe state-of-the-art methods. Furthermore, LLMParser significantly reduces thequery times to LLMs, achieving efficiency comparable to the most efficientbaseline, Drain.",,arXiv,['cs.se'],, +1258,uncovering hidden geometry in transformers via disentangling position and context,"['Jiajun Song', 'Yiqiao Zhong']",http://arxiv.org/pdf/2310.04861v1.pdf,2023-10-07,," Transformers are widely used to extract complex semantic meanings from inputtokens, yet they usually operate as black-box models. In this paper, we presenta simple yet informative decomposition of hidden states (or embeddings) oftrained transformers into interpretable components. For any layer, embeddingvectors of input sequence samples are represented by a tensor $\boldsymbol{h}\in \mathbb{R}^{C \times T \times d}$. 
Given embedding vector$\boldsymbol{h}_{c,t} \in \mathbb{R}^d$ at sequence position $t \le T$ in asequence (or context) $c \le C$, extracting the mean effects yields thedecomposition \[ \boldsymbol{h}_{c,t} = \boldsymbol{\mu} + \mathbf{pos}_t +\mathbf{ctx}_c + \mathbf{resid}_{c,t} \] where $\boldsymbol{\mu}$ is the globalmean vector, $\mathbf{pos}_t$ and $\mathbf{ctx}_c$ are the mean vectors acrosscontexts and across positions respectively, and $\mathbf{resid}_{c,t}$ is theresidual vector. For popular transformer architectures and diverse textdatasets, empirically we find pervasive mathematical structure: (1)$(\mathbf{pos}_t)_{t}$ forms a low-dimensional, continuous, and often spiralshape across layers, (2) $(\mathbf{ctx}_c)_c$ shows clear cluster structurethat falls into context topics, and (3) $(\mathbf{pos}_t)_{t}$ and$(\mathbf{ctx}_c)_c$ are mutually incoherent -- namely $\mathbf{pos}_t$ isalmost orthogonal to $\mathbf{ctx}_c$ -- which is canonical in compressedsensing and dictionary learning. This decomposition offers structural insightsabout input formats in in-context learning (especially for induction heads) andin arithmetic tasks.",,arXiv,"['cs.lg', 'cs.ai', 'stat.ml']",, +1259,lightweight incontext tuning for multimodal unified models,"['Yixin Chen', 'Shuai Zhang', 'Boran Han', 'Jiaya Jia']",http://arxiv.org/pdf/2310.05109v1.pdf,2023-10-08,," In-context learning (ICL) involves reasoning from given contextual examples.As more modalities comes, this procedure is becoming more challenging as theinterleaved input modalities convolutes the understanding process. This isexemplified by the observation that multimodal models often struggle toeffectively extrapolate from contextual examples to perform ICL. To addressthese challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), alightweight module to enhance the ICL capabilities of multimodal unifiedmodels. The proposed M$^2$IXT module perceives an expandable context window toincorporate various labeled examples of multiple modalities (e.g., text, image,and coordinates). It can be prepended to various multimodal unified models(e.g., OFA, Unival, LLaVA) of different architectures and trained via amixed-tasks strategy to enable rapid few-shot adaption on multiple tasks anddatasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boostthe few-shot ICL performance significantly (e.g., 18\% relative increase forOFA), and obtained state-of-the-art results across an array of tasks includingvisual question answering, image captioning, visual grounding, and visualentailment, while being considerably small in terms of model parameters (e.g.,$\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibilityand effectiveness of M$^2$IXT as a multimodal in-context learner.",,arXiv,['cs.cv'],, +1260,explainable claim verification via knowledgegrounded reasoning with large language models,"['Haoran Wang', 'Kai Shu']",http://arxiv.org/pdf/2310.05253v2.pdf,2023-10-08,," Claim verification plays a crucial role in combating misinformation. Whileexisting works on claim verification have shown promising results, a crucialpiece of the puzzle that remains unsolved is to understand how to verify claimswithout relying on human-annotated data, which is expensive to create at alarge scale. 
Additionally, it is important for models to provide comprehensiveexplanations that can justify their decisions and assist human fact-checkers.This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK)Reasoning that can verify complex claims and generate explanations without theneed for annotated evidence using Large Language Models (LLMs). FOLK leveragesthe in-context learning ability of LLMs to translate the claim into aFirst-Order-Logic (FOL) clause consisting of predicates, each corresponding toa sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoningover a set of knowledge-grounded question-and-answer pairs to make veracitypredictions and generate explanations to justify its decision-making process.This process makes our model highly explanatory, providing clear explanationsof its reasoning process in human-readable form. Our experiment resultsindicate that FOLK outperforms strong baselines on three datasets encompassingvarious claim verification challenges. Our code and data are available.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1261,glitter or gold deriving structured insights from sustainability reports via large language models,"['Marco Bronzini', 'Carlo Nicolini', 'Bruno Lepri', 'Andrea Passerini', 'Jacopo Staiano']",http://arxiv.org/pdf/2310.05628v3.pdf,2023-10-09,," Over the last decade, several regulatory bodies have started requiring thedisclosure of non-financial information from publicly listed companies, inlight of the investors' increasing attention to Environmental, Social, andGovernance (ESG) issues. Publicly released information on sustainabilitypractices is often disclosed in diverse, unstructured, and multi-modaldocumentation. This poses a challenge in efficiently gathering and aligning thedata into a unified framework to derive insights related to Corporate SocialResponsibility (CSR). Thus, using Information Extraction (IE) methods becomesan intuitive choice for delivering insightful and actionable data tostakeholders. In this study, we employ Large Language Models (LLMs), In-ContextLearning, and the Retrieval-Augmented Generation (RAG) paradigm to extractstructured insights related to ESG aspects from companies' sustainabilityreports. We then leverage graph-based representations to conduct statisticalanalyses concerning the extracted insights. These analyses revealed that ESGcriteria cover a wide range of topics, exceeding 500, often beyond thoseconsidered in existing categorizations, and are addressed by companies througha variety of initiatives. Moreover, disclosure similarities emerged amongcompanies from the same region or sector, validating ongoing hypotheses in theESG literature. Lastly, by incorporating additional company attributes into ouranalyses, we investigated which factors impact the most on companies' ESGratings, showing that ESG disclosure affects the obtained ratings more thanother financial or company data.",,arXiv,"['cs.cl', 'cs.ce', 'cs.cy']",, +1262,are large language models post hoc explainers,"['Nicholas Kroeger', 'Dan Ley', 'Satyapriya Krishna', 'Chirag Agarwal', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.05797v2.pdf,2023-10-09,," Large Language Models (LLMs) are increasingly used as powerful tools for aplethora of natural language processing (NLP) applications. A recentinnovation, in-context learning (ICL), enables LLMs to learn new tasks bysupplying a few examples in the prompt during inference time, therebyeliminating the need for model fine-tuning. 
While LLMs have been utilized inseveral applications, their applicability in explaining the behavior of othermodels remains relatively unexplored. Despite the growing number of newexplanation techniques, many require white-box access to the model and/or arecomputationally expensive, highlighting a need for next-generation post hocexplainers. In this work, we present the first framework to study theeffectiveness of LLMs in explaining other predictive models. More specifically,we propose a novel framework encompassing multiple prompting strategies: i)Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL,and iv) Explanation-based ICL, with varying levels of information about theunderlying ML model and the local neighborhood of the test sample. We conductextensive experiments with real-world benchmark datasets to demonstrate thatLLM-generated explanations perform on par with state-of-the-art post hocexplainers using their ability to leverage ICL examples and their internalknowledge in generating model explanations. On average, across four datasetsand two ML models, we observe that LLMs identify the most important featurewith 72.19% accuracy, opening up new frontiers in explainable artificialintelligence (XAI) to explore LLM-based explanation frameworks.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1263,salmon selfalignment with principlefollowing reward models,"['Zhiqing Sun', 'Yikang Shen', 'Hongxin Zhang', 'Qinhong Zhou', 'Zhenfang Chen', 'David Cox', 'Yiming Yang', 'Chuang Gan']",http://arxiv.org/pdf/2310.05910v1.pdf,2023-10-09,," Supervised Fine-Tuning (SFT) on response demonstrations combined withReinforcement Learning from Human Feedback (RLHF) constitutes a powerfulparadigm for aligning LLM-based AI agents. However, a significant limitation ofsuch an approach is its dependency on high-quality human annotations, makingits application to intricate tasks challenging due to difficulties in obtainingconsistent response demonstrations and in-distribution response preferences.This paper presents a novel approach, namely SALMON (Self-ALignMent withprinciple-fOllowiNg reward models), to align base language models with minimalhuman supervision, using only a small set of human-defined principles, yetachieving superior performance. Central to our approach is aprinciple-following reward model. Trained on synthetic preference data, thismodel can generate reward scores based on arbitrary human-defined principles.By merely adjusting these principles during the RL training phase, we gain fullcontrol over the preferences with the reward model, subsequently influencingthe behavior of the RL-trained policies, and eliminating the reliance on thecollection of online human preferences. Applying our method to the LLaMA-2-70bbase language model, we developed an AI assistant named Dromedary-2. With only6 exemplars for in-context learning and 31 human-defined principles,Dromedary-2 significantly surpasses the performance of several state-of-the-artAI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. 
We haveopen-sourced the code and model weights to encourage further research intoaligning LLM-based AI agents with enhanced supervision efficiency, improvedcontrollability, and scalable oversight.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1264,opseval a comprehensive taskoriented aiops benchmark for large language models,"['Yuhe Liu', 'Changhua Pei', 'Longlong Xu', 'Bohan Chen', 'Mingze Sun', 'Zhirui Zhang', 'Yongqian Sun', 'Shenglin Zhang', 'Kun Wang', 'Haiming Zhang', 'Jianhui Li', 'Gaogang Xie', 'Xidao Wen', 'Xiaohui Nie', 'Dan Pei']",http://arxiv.org/pdf/2310.07637v2.pdf,2023-10-11,," Large language models (LLMs) have exhibited remarkable capabilities inNLP-related tasks such as translation, summarizing, and generation. Theapplication of LLMs in specific areas, notably AIOps (Artificial Intelligencefor IT Operations), holds great potential due to their advanced abilities ininformation summarizing, report analyzing, and ability of API calling.Nevertheless, the performance of current LLMs in AIOps tasks is yet to bedetermined. Furthermore, a comprehensive benchmark is required to steer theoptimization of LLMs tailored for AIOps. Compared with existing benchmarks thatfocus on evaluating specific fields like network configuration, in this paper,we present \textbf{OpsEval}, a comprehensive task-oriented AIOps benchmarkdesigned for LLMs. For the first time, OpsEval assesses LLMs' proficiency inthree crucial scenarios (Wired Network Operation, 5G Communication Operation,and Database Operation) at various ability levels (knowledge recall, analyticalthinking, and practical application). The benchmark includes 7,200 questions inboth multiple-choice and question-answer (QA) formats, available in English andChinese. With quantitative and qualitative results, we show how various LLMtricks can affect the performance of AIOps, including zero-shot,chain-of-thought, and few-shot in-context learning. We find that GPT4-score ismore consistent with experts than widely used Bleu and Rouge, which can be usedto replace automatic metrics for large-scale qualitative evaluations.",,arXiv,"['cs.ai', 'cs.ni']",, +1265,eipetext evaluationguided iterative plan extraction for longform narrative text generation,"['Wang You', 'Wenshan Wu', 'Yaobo Liang', 'Shaoguang Mao', 'Chenfei Wu', 'Maosong Cao', 'Yuzhe Cai', 'Yiduo Guo', 'Yan Xia', 'Furu Wei', 'Nan Duan']",http://arxiv.org/pdf/2310.08185v1.pdf,2023-10-12,," Plan-and-Write is a common hierarchical approach in long-form narrative textgeneration, which first creates a plan to guide the narrative writing.Following this approach, several studies rely on simply prompting largelanguage models for planning, which often yields suboptimal results. In thispaper, we propose a new framework called Evaluation-guided Iterative PlanExtraction for long-form narrative text generation (EIPE-text), which extractsplans from the corpus of narratives and utilizes the extracted plans toconstruct a better planner. EIPE-text has three stages: plan extraction,learning, and inference. In the plan extraction stage, it iteratively extractsand improves plans from the narrative corpus and constructs a plan corpus. Wepropose a question answer (QA) based evaluation mechanism to automaticallyevaluate the plans and generate detailed plan refinement instructions to guidethe iterative improvement. In the learning stage, we build a better planner byfine-tuning with the plan corpus or in-context learning with examples in theplan corpus. 
Finally, we leverage a hierarchical approach to generate long-form narratives. We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling. Both GPT-4-based evaluations and human evaluations demonstrate that our method can generate more coherent and relevant long-form narratives. Our code will be released in the future.",,arXiv,"['cs.cl', 'cs.ai']",, +1266,prompting large language models with chainofthought for fewshot knowledge base question generation,"['Yuanyuan Liang', 'Jianing Wang', 'Hanlun Zhu', 'Lei Wang', 'Weining Qian', 'Yunshi Lan']",http://arxiv.org/pdf/2310.08395v3.pdf,2023-10-12,," The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. For the sake of expensive cost of large-scale question annotation, the methods of KBQG under low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotated data for fine-tuning, which is not well-suited for few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for reasoning, we formulate KBQG task as a reasoning problem, where the generation of a complete question is splitted into a series of sub-question generation. Our proposed prompting method KQG-CoT first retrieves supportive logical forms from the unlabeled data pool taking account of the characteristics of the logical form. Then, we write a prompt to explicit the reasoning chain of generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively.",,arXiv,"['cs.cl', 'cs.ai']",, +1267,do pretrained transformers really learn incontext by gradient descent,"['Lingfeng Shen', 'Aayush Mishra', 'Daniel Khashabi']",http://arxiv.org/pdf/2310.08540v3.pdf,2023-10-12,," The emergence of In-Context Learning (ICL) in LLMs remains a significant phenomenon with little understanding. To explain ICL, recent studies try to shed light on ICL by connecting it to Gradient Descent (GD). However, the question is, do these hold up in practice in actual pre-trained models? We highlight the limiting assumptions in prior works that make their context considerably different from the practical context in which language models are trained. For example, the theoretical hand-constructed weights used in these studies have properties that don't match those of real LLMs. Furthermore, their experimental verification uses \emph{ICL objective} (training models explicitly for ICL), which differs from the emergent ICL in the wild. We also look for evidence in real models. We observe that ICL and GD have different sensitivity to the order in which they observe demonstrations. Finally, we probe and compare the ICL vs. GD hypothesis in a natural setting. We conduct comprehensive empirical analyses on language models pre-trained on natural data (LLaMa-7B).
Our comparisons of three performance metrics highlight the inconsistent behavior of ICL and GD as a function of various factors such as datasets, models, and the number of demonstrations. We observe that ICL and GD modify the output distribution of language models differently. These results indicate that the equivalence between ICL and GD remains an open hypothesis and calls for further studies.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1268,mastering robot manipulation with multimodal prompts through pretraining and multitask finetuning,"['Jiachen Li', 'Qiaozi Gao', 'Michael Johnston', 'Xiaofeng Gao', 'Xuehai He', 'Suhaila Shakiah', 'Hangjie Shi', 'Reza Ghanadan', 'William Yang Wang']",http://arxiv.org/pdf/2310.09676v1.pdf,2023-10-14,," Prompt-based learning has been demonstrated as a compelling paradigm contributing to large language models' tremendous success (LLMs). Inspired by their success in language tasks, existing research has leveraged LLMs in embodied instruction following and task planning. However, not much attention has been paid to embodied tasks with multimodal prompts, combining vision signals with text descriptions. This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals. In this work, we introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts from multi-task expert trajectories. Our methods consist of a two-stage training pipeline that performs inverse dynamics pretraining and multi-task finetuning. To facilitate multimodal understanding, we design our multimodal prompt encoder by augmenting a pretrained LM with a residual connection to the visual input and model the dependencies among action dimensions. Empirically, we evaluate the efficacy of our method on the VIMA-BENCH and establish a new state-of-the-art (10% improvement in success rate). Moreover, we demonstrate that our model exhibits remarkable in-context learning ability.",,arXiv,"['cs.ro', 'cs.ai']",, +1269,unifying image processing as visual prompting question answering,"['Yihao Liu', 'Xiangyu Chen', 'Xianzheng Ma', 'Xintao Wang', 'Jiantao Zhou', 'Yu Qiao', 'Chao Dong']",http://arxiv.org/pdf/2310.10513v1.pdf,2023-10-16,," Image processing is a fundamental task in computer vision, which aims at enhancing image quality and extracting essential features for subsequent vision applications. Traditionally, task-specific models are developed for individual tasks and designing such models requires distinct expertise. Building upon the success of large language models (LLMs) in natural language processing (NLP), there is a similar trend in computer vision, which focuses on developing large-scale models through pretraining and in-context learning. This paradigm shift reduces the reliance on task-specific models, yielding a powerful unified model to deal with various tasks. However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, \textit{etc}. Our proposed framework, named PromptGIP, unifies these diverse image processing tasks within a universal framework. Inspired by NLP question answering (QA) techniques, we employ a visual prompting question answering paradigm.
Specifically, we treat the input-output image pair as a structured question-answer sentence, thereby reprogramming the image processing task as a prompting QA problem. PromptGIP can undertake diverse \textbf{cross-domain} tasks using provided visual prompts, eliminating the need for task-specific finetuning. Our methodology offers a universal and adaptive solution to general image processing. While PromptGIP has demonstrated a certain degree of out-of-domain task generalization capability, further research is expected to fully explore its more powerful emergent generalization.",,arXiv,"['cs.cv', 'eess.iv']",, +1270,ideal influencedriven selective annotations empower incontext learners in large language models,"['Shaokun Zhang', 'Xiaobo Xia', 'Zhaoqing Wang', 'Ling-Hao Chen', 'Jiale Liu', 'Qingyun Wu', 'Tongliang Liu']",http://arxiv.org/pdf/2310.10873v2.pdf,2023-10-16,," In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models. These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is lastly introduced. It iteratively selects the data if it provides a maximum marginal gain with respect to quantified influence. Compared with previous efforts on selective annotations, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance under lower time consumption during subset selection. The project page is available at https://skzhang1.github.io/IDEAL/.",,arXiv,['cs.cl'],, +1271,eureka humanlevel reward design via coding large language models,"['Yecheng Jason Ma', 'William Liang', 'Guanzhi Wang', 'De-An Huang', 'Osbert Bastani', 'Dinesh Jayaraman', 'Yuke Zhu', 'Linxi Fan', 'Anima Anandkumar']",http://arxiv.org/pdf/2310.12931v1.pdf,2023-10-19,," Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards.
In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, +1272,selfprompted chainofthought on large language models for opendomain multihop reasoning,"['Jinyuan Wang', 'Junlong Li', 'Hai Zhao']",http://arxiv.org/pdf/2310.13552v2.pdf,2023-10-20,," In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense. To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack of quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT selection and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling $\sim$50\% of intermediate answers on MuSiQue-Ans dataset.",,arXiv,"['cs.cl', 'cs.ai']",, +1273,explainable depression symptom detection in social media,"['Eliseo Bao Souto', 'Anxo Pérez', 'Javier Parapar']",http://arxiv.org/pdf/2310.13664v2.pdf,2023-10-20,," Users of social platforms often perceive these sites as supportive spaces to post about their mental health issues. Those conversations contain important traces about individuals' health risks. Recently, researchers have exploited this online information to construct mental health detection models, which aim to identify users at risk on platforms like Twitter, Reddit or Facebook. Most of these models are centred on achieving good classification results, ignoring the explainability and interpretability of the decisions. Recent research has pointed out the importance of using clinical markers, such as the use of symptoms, to improve trust in the computational models by health professionals. In this paper, we propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
We present two approaches: i) train a model to classify, and another one to explain the classifier's decision separately and ii) unify the two tasks simultaneously using a single model. Additionally, for this latter manner, we also investigated the performance of recent conversational LLMs when using in-context learning. Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms, enhancing trust in the automated process. We evaluate our approach using recent symptom-based datasets, employing both offline and expert-in-the-loop metrics to assess the quality of the explanations generated by our models. The experimental results show that it is possible to achieve good classification results while generating interpretable symptom-based explanations.",,arXiv,['cs.cl'],, +1274,ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms,"['Young-Suk Lee', 'Md Arafat Sultan', 'Yousef El-Kurdi', 'Tahira Naseem Asim Munawar', 'Radu Florian', 'Salim Roukos', 'Ramón Fernandez Astudillo']",http://arxiv.org/pdf/2310.13961v1.pdf,2023-10-21,," Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision. One limitation of these approaches is that they resort to very large language models (around 175B parameters) that are also proprietary and non-public. Here we explore the application of such techniques to language models that are much smaller (around 10B--40B parameters) and have permissive licenses. We find the Self-Instruct approach to be less effective at these sizes and propose new ICL methods that draw on two main ideas: (a) Categorization and simplification of the ICL templates to make prompt learning easier for the LM, and (b) Ensembling over multiple LM outputs to help select high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct seed tasks and employs separate pipelines for instructions that require an input and instructions that do not. Empirical investigations with different LMs show that: (1) Our proposed method yields higher-quality instruction tuning data than Self-Instruct, (2) It improves performances of both vanilla and instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned LMs generate more useful outputs than their larger un-tuned counterparts. Our codebase is available at https://github.com/IBM/ensemble-instruct.",,arXiv,"['cs.cl', 'cs.ai']",, +1275,investigating the fairness of large language models for predictions on tabular data,"['Yanchen Liu', 'Srishti Gautam', 'Jiaqi Ma', 'Himabindu Lakkaraju']",http://arxiv.org/pdf/2310.14607v1.pdf,2023-10-23,," Recent literature has suggested the potential of using large language models (LLMs) to make predictions for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in the society. To this end, as well as the widespread use of tabular data in many high-stake applications, it is imperative to explore the following questions: what sources of information do LLMs draw upon when making predictions for tabular tasks; whether and to what extent are LLM predictions for tabular tasks influenced by social biases and stereotypes; and what are the consequential implications for fairness?
Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data which significantly impact their fairness in tabular prediction tasks. Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and fine-tuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pre-training corpus, not only from the downstream task datasets. Besides, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.",,arXiv,"['cs.cl', 'cs.lg']",, +1276,large language models are visual reasoning coordinators,"['Liangyu Chen', 'Bo Li', 'Sheng Shen', 'Jingkang Yang', 'Chunyuan Li', 'Kurt Keutzer', 'Trevor Darrell', 'Ziwei Liu']",http://arxiv.org/pdf/2310.15166v1.pdf,2023-10-23,," Visual reasoning requires multimodal perception and commonsense cognition of the world. Recently, multiple vision-language models (VLMs) have been proposed with excellent commonsense reasoning ability in various domains. However, how to harness the collective power of these complementary VLMs is rarely explored. Existing methods like ensemble still struggle to aggregate these models with the desired higher-order communications. In this work, we propose Cola, a novel paradigm that coordinates multiple VLMs for visual reasoning. Our key insight is that a large language model (LLM) can efficiently coordinate multiple VLMs by facilitating natural language communication that leverages their distinct and complementary capabilities. Extensive experiments demonstrate that our instruction tuning variant, Cola-FT, achieves state-of-the-art performance on visual question answering (VQA), outside knowledge VQA, visual entailment, and visual spatial reasoning tasks. Moreover, we show that our in-context learning variant, Cola-Zero, exhibits competitive performance in zero and few-shot settings, without finetuning. Through systematic ablation studies and visualizations, we validate that a coordinator LLM indeed comprehends the instruction prompts as well as the separate functionalities of VLMs; it then coordinates them to enable impressive visual reasoning capabilities.",,arXiv,"['cs.cv', 'cs.cl']",, +1277,function vectors in large language models,"['Eric Todd', 'Millicent L. Li', 'Arnab Sen Sharma', 'Aaron Mueller', 'Byron C. Wallace', 'David Bau']",http://arxiv.org/pdf/2310.15213v1.pdf,2023-10-23,," We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs such as zero-shot and natural text settings that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers.
We investigate the internal structure of FVs and find that while they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV. Finally, we test semantic vector composition in FVs, and find that to some extent they can be summed to create vectors that trigger new complex tasks. Taken together, our findings suggest that LLMs contain internal abstractions of general-purpose functions that can be invoked in a variety of contexts.",,arXiv,"['cs.cl', 'cs.lg']",, +1278,tcrallm token compression retrieval augmented large language model for inference cost reduction,"['Junyi Liu', 'Liangzhi Li', 'Tong Xiang', 'Bowen Wang', 'Yiming Qian']",http://arxiv.org/pdf/2310.15556v2.pdf,2023-10-24,," Since ChatGPT released its API for public use, the number of applications built on top of commercial large language models (LLMs) increase exponentially. One popular usage of such models is leveraging its in-context learning ability and generating responses given user queries leveraging knowledge obtained by retrieval augmentation. One problem of deploying commercial retrieval-augmented LLMs is the cost due to the additionally retrieved context that largely increases the input token size of the LLMs. To mitigate this, we propose a token compression scheme that includes two methods: summarization compression and semantic compression. The first method applies a T5-based model that is fine-tuned by datasets generated using self-instruct containing samples with varying lengths and reduce token size by doing summarization. The second method further compresses the token size by removing words with lower impact on the semantic. In order to adequately evaluate the effectiveness of the proposed methods, we propose and utilize a dataset called Food-Recommendation DB (FRDB) focusing on food recommendation for women around pregnancy period or infants. Our summarization compression can reduce 65% of the retrieval token size with further 0.3% improvement on the accuracy; semantic compression provides a more flexible way to trade-off the token size with performance, for which we can reduce the token size by 20% with only 1.6% of accuracy drop.",,arXiv,"['cs.cl', 'cs.ir']",, +1279,testing the limits unusual text inputs generation for mobile app crash detection with large language model,"['Zhe Liu', 'Chunyang Chen', 'Junjie Wang', 'Mengzhuo Chen', 'Boyu Wu', 'Xing Che', 'Dandan Wang', 'Qing Wang']",http://arxiv.org/pdf/2310.15657v1.pdf,2023-10-24,," Mobile applications have become a ubiquitous part of our daily life, providing users with access to various services and utilities. Text input, as an important interaction channel between users and applications, plays an important role in core functionality such as search queries, authentication, messaging, etc. However, certain special text (e.g., -18 for Font Size) can cause the app to crash, and generating diversified unusual inputs for fully testing the app is highly demanded. Nevertheless, this is also challenging due to the combination of explosion dilemma, high context sensitivity, and complex constraint relations. This paper proposes InputBlaster which leverages the LLM to automatically generate unusual text inputs for mobile app crash detection. It formulates the unusual inputs generation problem as a task of producing a set of test generators, each of which can yield a batch of unusual text inputs under the same mutation rule.
In detail, InputBlaster leverages LLM to produce the test generators together with the mutation rules serving as the reasoning chain, and utilizes the in-context learning schema to demonstrate the LLM with examples for boosting the performance. InputBlaster is evaluated on 36 text input widgets with cash bugs involving 31 popular Android apps, and results show that it achieves 78% bug detection rate, with 136% higher than the best baseline. Besides, we integrate it with the automated GUI testing tool and detect 37 unseen crashes in real-world apps from Google Play.",,arXiv,['cs.se'],, +1280,unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving,"['Zhan Ling', 'Yunhao Fang', 'Xuanlin Li', 'Tongzhou Mu', 'Mingu Lee', 'Reza Pourreza', 'Roland Memisevic', 'Hao Su']",http://arxiv.org/pdf/2311.00694v2.pdf,2023-11-01,," Large Language Models (LLMs) have achieved tremendous progress, yet they still often struggle with challenging reasoning problems. Current approaches address this challenge by sampling or searching detailed and low-level reasoning chains. However, these methods are still limited in their exploration capabilities, making it challenging for correct solutions to stand out in the huge solution space. In this work, we unleash LLMs' creative potential for exploring multiple diverse problem solving strategies by framing an LLM as a hierarchical policy via in-context learning. This policy comprises of a visionary leader that proposes multiple diverse high-level problem-solving tactics as hints, accompanied by a follower that executes detailed problem-solving processes following each of the high-level instruction. The follower uses each of the leader's directives as a guide and samples multiple reasoning chains to tackle the problem, generating a solution group for each leader proposal. Additionally, we propose an effective and efficient tournament-based approach to select among these explored solution groups to reach the final answer. Our approach produces meaningful and inspiring hints, enhances problem-solving strategy exploration, and improves the final answer accuracy on challenging problems in the MATH dataset. Code will be released at https://github.com/lz1oceani/LLM-As-Hierarchical-Policy.",,arXiv,"['cs.ai', 'cs.cl']",, +1281,sentiment analysis through llm negotiations,"['Xiaofei Sun', 'Xiaoya Li', 'Shengyu Zhang', 'Shuhe Wang', 'Fei Wu', 'Jiwei Li', 'Tianwei Zhang', 'Guoyin Wang']",http://arxiv.org/pdf/2311.01876v1.pdf,2023-11-03,," A standard paradigm for sentiment analysis is to rely on a singular LLM and makes the decision in a single round under the framework of in-context learning. This framework suffers the key disadvantage that the single-turn output generated by a single LLM might not deliver the perfect decision, just as humans sometimes need multiple attempts to get things right. This is especially true for the task of sentiment analysis where deep reasoning is required to address the complex linguistic phenomenon (e.g., clause composition, irony, etc) in the input. To address this issue, this paper introduces a multi-LLM negotiation framework for sentiment analysis. The framework consists of a reasoning-infused generator to provide decision along with rationale, a explanation-deriving discriminator to evaluate the credibility of the generator. The generator and the discriminator iterate until a consensus is reached.
The proposed framework naturally addressed the aforementioned challenge, as we are able to take the complementary abilities of two LLMs, have them use rationale to persuade each other for correction. Experiments on a wide range of sentiment analysis benchmarks (SST-2, Movie Review, Twitter, yelp, amazon, IMDB) demonstrate the effectiveness of proposed approach: it consistently yields better performances than the ICL baseline across all benchmarks, and even superior performances to supervised baselines on the Twitter and movie review datasets.",,arXiv,['cs.cl'],, +1282,chef a comprehensive evaluation framework for standardized assessment of multimodal large language models,"['Zhelun Shi', 'Zhipin Wang', 'Hongxing Fan', 'Zhenfei Yin', 'Lu Sheng', 'Yu Qiao', 'Jing Shao']",http://arxiv.org/pdf/2311.02692v1.pdf,2023-11-05,," Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content with myriad potential downstream tasks. However, even though a list of benchmarks has been proposed, the capabilities and limitations of MLLMs are still not comprehensively understood, due to a lack of a standardized and holistic evaluation framework. To this end, we present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs. First, we structure ChEF as four modular components, i.e., Scenario as scalable multimodal datasets, Instruction as flexible instruction retrieving formulae, Inferencer as reliable question answering strategies, and Metric as indicative task-specific score functions. Based on them, ChEF facilitates versatile evaluations in a standardized framework, and new evaluations can be built by designing new Recipes (systematic selection of these four components). Notably, current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, we introduce 6 new recipes to quantify competent MLLMs' desired capabilities (or called desiderata, i.e., calibration, in-context learning, instruction following, language performance, hallucination, and robustness) as reliable agents that can perform real-world multimodal interactions. Third, we conduct a large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. Our evaluation summarized over 20 valuable observations concerning the generalizability of MLLMs across various scenarios and the composite capability of MLLMs required for multimodal interactions. We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models, so that ChEF can be a growing evaluation framework for the MLLM community.",,arXiv,['cs.cv'],, +1283,kinematicaware prompting for generalizable articulated object manipulation with llms,"['Wenke Xia', 'Dong Wang', 'Xincheng Pang', 'Zhigang Wang', 'Bin Zhao', 'Di Hu']",http://arxiv.org/pdf/2311.02847v2.pdf,2023-11-06,," Generalizable articulated object manipulation is essential for home-assistant robots. Recent efforts focus on imitation learning from demonstrations or reinforcement learning in simulation, however, due to the prohibitive costs of real-world data collection and precise object simulation, it still remains challenging for these works to achieve broad adaptability across diverse articulated objects.
Recently, many works have tried to utilize the strong in-context learning ability of Large Language Models (LLMs) to achieve generalizable robotic manipulation, but most of these researches focus on high-level task planning, sidelining low-level robotic control. In this work, building on the idea that the kinematic structure of the object determines how we can manipulate it, we propose a kinematic-aware prompting framework that prompts LLMs with kinematic knowledge of objects to generate low-level motion trajectory waypoints, supporting various object manipulation. To effectively prompt LLMs with the kinematic structure of different objects, we design a unified kinematic knowledge parser, which represents various articulated objects as a unified textual description containing kinematic joints and contact location. Building upon this unified description, a kinematic-aware planner model is proposed to generate precise 3D manipulation waypoints via a designed kinematic-aware chain-of-thoughts prompting method. Our evaluation spanned 48 instances across 16 distinct categories, revealing that our framework not only outperforms traditional methods on 8 seen categories but also shows a powerful zero-shot capability for 8 unseen articulated object categories. Moreover, the real-world experiments on 7 different object categories prove our framework's adaptability in practical scenarios. Code is released at \href{https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main}{here}.",,arXiv,"['cs.ro', 'cs.ai']",, +1284,incontext learning for knowledge base question answering for unmanned systems based on large language models,"['Yunlong Chen', 'Yaming Zhang', 'Jianfei Yu', 'Li Yang', 'Rui Xia']",http://arxiv.org/pdf/2311.02956v1.pdf,2023-11-06,," Knowledge Base Question Answering (KBQA) aims to answer factoid questions based on knowledge bases. However, generating the most appropriate knowledge base query code based on Natural Language Questions (NLQ) poses a significant challenge in KBQA. In this work, we focus on the CCKS2023 Competition of Question Answering with Knowledge Graph Inference for Unmanned Systems. Inspired by the recent success of large language models (LLMs) like ChatGPT and GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL) generation framework to generate the most appropriate CQL based on the given NLQ. Our generative framework contains six parts: an auxiliary model predicting the syntax-related information of CQL based on the given NLQ, a proper noun matcher extracting proper nouns from the given NLQ, a demonstration example selector retrieving similar examples of the input sample, a prompt constructor designing the input template of ChatGPT, a ChatGPT-based generation model generating the CQL, and an ensemble model to obtain the final answers from diversified outputs.
With our ChatGPT-based CQL generation framework, we achieved the second place in the CCKS 2023 Question Answering with Knowledge Graph Inference for Unmanned Systems competition, achieving an F1-score of 0.92676.",,arXiv,"['cs.cl', 'cs.ai', 'i.2.7']",, +1285,retrievalaugmented code generation for universal information extraction,"['Yucan Guo', 'Zixuan Li', 'Xiaolong Jin', 'Yantao Liu', 'Yutao Zeng', 'Wenxuan Liu', 'Xiang Li', 'Pan Yang', 'Long Bai', 'Jiafeng Guo', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.02962v1.pdf,2023-11-06,," Information Extraction (IE) aims to extract structural knowledge (e.g., entities, relations, events) from natural language texts, which brings challenges to existing methods due to task-specific schemas and complex text expressions. Code, as a typical kind of formalized language, is capable of describing structural knowledge under various schemas in a universal way. On the other hand, Large Language Models (LLMs) trained on both codes and texts have demonstrated powerful capabilities of transforming texts into codes, which provides a feasible solution to IE tasks. Therefore, in this paper, we propose a universal retrieval-augmented code generation framework based on LLMs, called Code4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define task-specific schemas of various structural knowledge in a universal way. By so doing, extracting knowledge under these schemas can be transformed into generating codes that instantiate the predefined Python classes with the information in texts. To generate these codes more precisely, Code4UIE adopts the in-context learning mechanism to instruct LLMs with examples. In order to obtain appropriate examples for different tasks, Code4UIE explores several example retrieval strategies, which can retrieve examples semantically similar to the given texts. Extensive experiments on five representative IE tasks across nine datasets demonstrate the effectiveness of the Code4UIE framework.",,arXiv,"['cs.ai', 'cs.cl', 'cs.ir']",, +1286,unified lowresource sequence labeling by sampleaware dynamic sparse finetuning,"['Sarkar Snigdha Sarathi Das', 'Ranran Haoran Zhang', 'Peng Shi', 'Wenpeng Yin', 'Rui Zhang']",http://arxiv.org/pdf/2311.03748v1.pdf,2023-11-07,," Unified Sequence Labeling that articulates different sequence labeling problems such as Named Entity Recognition, Relation Extraction, Semantic Role Labeling, etc. in a generalized sequence-to-sequence format opens up the opportunity to make the maximum utilization of large language model knowledge toward structured prediction. Unfortunately, this requires formatting them into specialized augmented format unknown to the base pretrained language model (PLMs) necessitating finetuning to the target format. This significantly bounds its usefulness in data-limited settings where finetuning large models cannot properly generalize to the target format. To address this challenge and leverage PLM knowledge effectively, we propose FISH-DIP, a sample-aware dynamic sparse finetuning strategy that selectively focuses on a fraction of parameters, informed by feedback from highly regressing examples, during the fine-tuning process. By leveraging the dynamism of sparsity, our approach mitigates the impact of well-learned samples and prioritizes underperforming instances for improvement in generalization.
Across five tasks of sequence labeling, we demonstrate that FISH-DIP can smoothly optimize the model in low resource settings offering upto 40% performance improvements over full fine-tuning depending on target evaluation settings. Also, compared to in-context learning and other parameter-efficient fine-tuning approaches, FISH-DIP performs comparably or better, notably in extreme low-resource settings.",,arXiv,['cs.cl'],, +1287,ul2 unifying language learning paradigms,"['Yi Tay', 'Mostafa Dehghani', 'Vinh Q. Tran', 'Xavier Garcia', 'Jason Wei', 'Xuezhi Wang', 'Hyung Won Chung', 'Siamak Shakeri', 'Dara Bahri', 'Tal Schuster', 'Huaixiu Steven Zheng', 'Denny Zhou', 'Neil Houlsby', 'Donald Metzler']",http://arxiv.org/pdf/2205.05131v3.pdf,2022-05-10,," Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized & unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 & GPT-like models across multiple diverse setups. By scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised finetuning based NLP tasks. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20B also works well with chain-of-thought prompting and reasoning, making it an appealing choice for research into reasoning at a small to medium scale of 20B parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model, achieving MMLU and Big-Bench scores competitive to FLAN-PaLM 62B. We release Flax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B.",,arXiv,['cs.cl'],, +1288,humantimescale adaptation in an openended task space,"[' Adaptive Agent Team', 'Jakob Bauer', 'Kate Baumli', 'Satinder Baveja', 'Feryal Behbahani', 'Avishkar Bhoopchand', 'Nathalie Bradley-Schmieg', 'Michael Chang', 'Natalie Clay', 'Adrian Collister', 'Vibhavari Dasagi', 'Lucy Gonzalez', 'Karol Gregor', 'Edward Hughes', 'Sheleem Kashem', 'Maria Loks-Thompson', 'Hannah Openshaw', 'Jack Parker-Holder', 'Shreya Pathak', 'Nicolas Perez-Nieves', 'Nemanja Rakicevic', 'Tim Rocktäschel', 'Yannick Schroecker', 'Jakub Sygnowski', 'Karl Tuyls', 'Sarah York', 'Alexander Zacherl', 'Lei Zhang']",http://arxiv.org/pdf/2301.07608v1.pdf,2023-01-18,," Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL).
In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.",,arXiv,"['cs.lg', 'cs.ai', 'cs.ne']",, +1289,deidgpt zeroshot medical text deidentification by gpt4,"['Zhengliang Liu', 'Yue Huang', 'Xiaowei Yu', 'Lu Zhang', 'Zihao Wu', 'Chao Cao', 'Haixing Dai', 'Lin Zhao', 'Yiwei Li', 'Peng Shu', 'Fang Zeng', 'Lichao Sun', 'Wei Liu', 'Dinggang Shen', 'Quanzheng Li', 'Tianming Liu', 'Dajiang Zhu', 'Xiang Li']",http://arxiv.org/pdf/2303.11032v2.pdf,2023-03-20,," The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy. HIPAA (Health Insurance Portability and Accountability Act) mandates removing re-identifying information before the dissemination of medical records. Thus, effective and efficient solutions for de-identifying medical data, especially those in free-text forms, are highly needed. While various computer-assisted de-identification methods, including both rule-based and learning-based, have been developed and used in prior practice, such solutions still lack generalizability or need to be fine-tuned according to different scenarios, significantly imposing restrictions in wider use. The advancement of large language models (LLM), such as ChatGPT and GPT-4, have shown great potential in processing text data in the medical domain with zero-shot in-context learning, especially in the task of privacy protection, as these models can identify confidential information by their powerful named entity recognition (NER) capability. In this work, we developed a novel GPT4-enabled de-identification framework (``DeID-GPT"") to automatically identify and remove the identifying information. Compared to existing commonly used medical text data de-identification methods, our developed DeID-GPT showed the highest accuracy and remarkable reliability in masking private information from the unstructured medical text while preserving the original structure and meaning of the text. This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text data processing and de-identification, which provides insights for further research and solution development on the use of LLMs such as ChatGPT/GPT-4 in healthcare.
Codes and benchmarking data information are available at https://github.com/yhydhx/ChatGPT-API.",,arXiv,"['cs.cl', 'cs.cy']",, +1290,taskmatrixai completing tasks by connecting foundation models with millions of apis,"['Yaobo Liang', 'Chenfei Wu', 'Ting Song', 'Wenshan Wu', 'Yan Xia', 'Yu Liu', 'Yang Ou', 'Shuai Lu', 'Lei Ji', 'Shaoguang Mao', 'Yun Wang', 'Linjun Shou', 'Ming Gong', 'Nan Duan']",http://arxiv.org/pdf/2303.16434v1.pdf,2023-03-29,," Artificial Intelligence (AI) has made incredible progress recently. On the one hand, advanced foundation models like ChatGPT can offer powerful conversation, in-context learning and code generation abilities on a broad range of open-domain tasks. They can also generate high-level solution outlines for domain-specific tasks based on the common sense knowledge they have acquired. However, they still face difficulties with some specialized tasks because they lack enough domain-specific data during pre-training or they often have errors in their neural network computations on those tasks that need accurate executions. On the other hand, there are also many existing models and systems (symbolic-based or neural-based) that can do some domain-specific tasks very well. However, due to the different implementation or working mechanisms, they are not easily accessible or compatible with foundation models. Therefore, there is a clear and pressing need for a mechanism that can leverage foundation models to propose task solution outlines and then automatically match some of the sub-tasks in the outlines to the off-the-shelf models and systems with special functionalities to complete them. Inspired by this, we introduce TaskMatrix.AI as a new AI ecosystem that connects foundation models with millions of APIs for task completion. Unlike most previous work that aimed to improve a single AI model, TaskMatrix.AI focuses more on using existing foundation models (as a brain-like central system) and APIs of other AI models and systems (as sub-task solvers) to achieve diversified tasks in both digital and physical domains. As a position paper, we will present our vision of how to build such an ecosystem, explain each key component, and use study cases to illustrate both the feasibility of this vision and the main challenges we need to address next.",,arXiv,"['cs.ai', 'cs.cl']",, +1291,subjectdriven texttoimage generation via apprenticeship learning,"['Wenhu Chen', 'Hexiang Hu', 'Yandong Li', 'Nataniel Ruiz', 'Xuhui Jia', 'Ming-Wei Chang', 'William W. Cohen']",http://arxiv.org/pdf/2304.00186v5.pdf,2023-04-01,," Recent text-to-image generation models like DreamBooth have made remarkable progress in generating highly customized images of a target subject, by fine-tuning an ``expert model'' for a given subject from a few examples. However, this process is expensive, since a new expert model must be learned for each subject. In this paper, we present SuTI, a Subject-driven Text-to-Image generator that replaces subject-specific fine tuning with in-context learning. Given a few demonstrations of a new subject, SuTI can instantly generate novel renditions of the subject in different scenes, without any subject-specific optimization. SuTI is powered by apprenticeship learning, where a single apprentice model is learned from data generated by a massive number of subject-specific expert models. Specifically, we mine millions of image clusters from the Internet, each centered around a specific visual subject. We adopt these clusters to train a massive number of expert models, each specializing in a different subject.
The apprentice model SuTI then learns to imitate the behavior of these fine-tuned experts. SuTI can generate high-quality and customized subject-specific images 20x faster than optimization-based SoTA methods. On the challenging DreamBench and DreamBench-v2, our human evaluation shows that SuTI significantly outperforms existing models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt, Re-Imagen and DreamBooth, especially on the subject and text alignment aspects.",,arXiv,"['cs.cv', 'cs.ai']",, +1292,large language models are edgecase fuzzers testing deep learning libraries via fuzzgpt,"['Yinlin Deng', 'Chunqiu Steven Xia', 'Chenyuan Yang', 'Shizhuo Dylan Zhang', 'Shujing Yang', 'Lingming Zhang']",http://arxiv.org/pdf/2304.02014v1.pdf,2023-04-04,," Deep Learning (DL) library bugs affect downstream DL applications, emphasizing the need for reliable systems. Generating valid input programs for fuzzing DL libraries is challenging due to the need for satisfying both language syntax/semantics and constraints for constructing valid computational graphs. Recently, the TitanFuzz work demonstrates that modern Large Language Models (LLMs) can be directly leveraged to implicitly learn all the constraints to generate valid DL programs for fuzzing. However, LLMs tend to generate ordinary programs following similar patterns seen in their massive training corpora, while fuzzing favors unusual inputs that cover edge cases or are unlikely to be manually produced. To fill this gap, this paper proposes FuzzGPT, the first technique to prime LLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the well-known hypothesis that historical bug-triggering programs may include rare/valuable code ingredients important for bug finding. Traditional techniques leveraging such historical information require intensive human efforts to design dedicated generators and ensure the validity of generated programs. FuzzGPT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (including fine-tuning and in-context learning), while being generalizable and applicable to challenging domains. While FuzzGPT can be applied with different LLMs, this paper focuses on the powerful GPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potential of directly leveraging the instruct-following capability of the recent ChatGPT for effective fuzzing. Evaluation on two popular DL libraries (PyTorch and TensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz, detecting 76 bugs, with 49 already confirmed as previously unknown bugs, including 11 high-priority bugs or security vulnerabilities.",,arXiv,['cs.se'],, +1293,can language models solve graph problems in natural language,"['Heng Wang', 'Shangbin Feng', 'Tianxing He', 'Zhaoxuan Tan', 'Xiaochuang Han', 'Yulia Tsvetkov']",http://arxiv.org/pdf/2305.10037v3.pdf,2023-05-17,," Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures, such as planning in robotics, multi-hop question answering or knowledge probing, structured commonsense reasoning, and more. While LLMs have advanced the state-of-the-art on these tasks with structure implications, whether LLMs could explicitly process textual descriptions of graphs and structures, map them to grounded conceptual spaces, and perform structured operations remains underexplored. To this end, we propose NLGraph (Natural Language Graph), a comprehensive benchmark of graph-based problem solving designed in natural language.
NLGraph contains 29,370 problems, covering eight graph reasoning tasks with varying complexity from simple tasks such as connectivity and shortest path up to complex problems such as maximum flow and simulating graph neural networks. We evaluate LLMs (GPT-3/4) with various prompting approaches on the NLGraph benchmark and find that 1) language models do demonstrate preliminary graph reasoning abilities, 2) the benefit of advanced prompting and in-context learning diminishes on more complex graph problems, while 3) LLMs are also (un)surprisingly brittle in the face of spurious correlations in graph and problem settings. We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems. Build-a-Graph and Algorithmic prompting improve the performance of LLMs on NLGraph by 3.07% to 16.85% across multiple tasks and settings, while how to solve the most complicated graph reasoning tasks in our setup with language models remains an open research question. The NLGraph benchmark and evaluation code are available at https://github.com/Arthur-Heng/NLGraph.",,arXiv,"['cs.cl', 'cs.ai']",, +1294,improving language model negotiation with selfplay and incontext learning from ai feedback,"['Yao Fu', 'Hao Peng', 'Tushar Khot', 'Mirella Lapata']",http://arxiv.org/pdf/2305.10142v1.pdf,2023-05-17,," We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing. We are interested in this question because if LLMs were able to improve each other, it would imply the possibility of creating strong AI agents with minimal human intervention. We ask two LLMs to negotiate with each other, playing the roles of a buyer and a seller, respectively. They aim to reach a deal with the buyer targeting a lower price and the seller a higher one. A third language model, playing the critic, provides feedback to a player to improve the player's negotiation strategies. We let the two agents play multiple rounds, using previous negotiation history and AI feedback as in-context demonstrations to improve the model's negotiation strategy iteratively. We use different LLMs (GPT and Claude) for different roles and use the deal price as the evaluation metric. Our experiments reveal multiple intriguing findings: (1) Only a subset of the language models we consider can self-play and improve the deal price from AI feedback, weaker models either do not understand the game's rules or cannot incorporate AI feedback for further improvement. (2) Models' abilities to learn from the feedback differ when playing different roles. For example, it is harder for Claude-instant to improve as the buyer than as the seller. (3) When unrolling the game to multiple rounds, stronger agents can consistently improve their performance by meaningfully using previous experiences and iterative AI feedback, yet have a higher risk of breaking the deal. We hope our work provides insightful initial explorations of having models autonomously improve each other with game playing and AI feedback.",,arXiv,['cs.cl'],, +1295,xtremeup a usercentric scarcedata benchmark for underrepresented languages,"['Sebastian Ruder', 'Jonathan H. Clark', 'Alexander Gutkin', 'Mihir Kale', 'Min Ma', 'Massimo Nicosia', 'Shruti Rijhwani', 'Parker Riley', 'Jean-Michel A. Sarr', 'Xinyi Wang', 'John Wieting', 'Nitish Gupta', 'Anna Katanova', 'Christo Kirov', 'Dana L. Dickinson', 'Brian Roark', 'Bidisha Samanta', 'Connie Tao', 'David I.
Adelani', 'Vera Axelrod', 'Isaac Caswell', 'Colin Cherry', 'Dan Garrette', 'Reeve Ingle', 'Melvin Johnson', 'Dmitry Panteleev', 'Partha Talukdar']",http://arxiv.org/pdf/2305.11938v2.pdf,2023-05-19,," Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models",,arXiv,['cs.cl'],, +1296,memoryefficient finetuning of compressed large language models via sub4bit integer quantization,"['Jeonghoon Kim', 'Jung Hyun Lee', 'Sungdong Kim', 'Joonsuk Park', 'Kang Min Yoo', 'Se Jung Kwon', 'Dongsoo Lee']",http://arxiv.org/pdf/2305.14152v2.pdf,2023-05-23,," Large language models (LLMs) face the challenges in fine-tuning and deployment due to their high memory demands and computational costs. While parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage of the optimizer state during fine-tuning, the inherent size of pre-trained LLM weights continues to be a pressing concern. Even though quantization techniques are widely proposed to ease memory demands and accelerate LLM inference, most of these techniques are geared towards the deployment phase. To bridge this gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation (PEQA) - a simple yet effective method that combines the advantages of PEFT with quantized LLMs. By updating solely the quantization scales, PEQA can be directly applied to quantized LLMs, ensuring seamless task transitions. Parallel to existing PEFT methods, PEQA significantly reduces the memory overhead associated with the optimizer state. Furthermore, it leverages the advantages of quantization to substantially reduce model sizes. Even after fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, allowing for accelerated inference on the deployment stage. We employ PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion parameters. To assess the logical reasoning and language comprehension of PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using a instruction dataset.
Our results show that even when LLMs are quantized to below 4-bit precision, their capabilities in language modeling, few-shot in-context learning, and comprehension can be resiliently restored to (or even improved over) their full-precision original performances with PEQA.",,arXiv,"['cs.lg', 'cs.ai']",, +1297,palix on scaling up a multilingual vision and language model,"['Xi Chen', 'Josip Djolonga', 'Piotr Padlewski', 'Basil Mustafa', 'Soravit Changpinyo', 'Jialin Wu', 'Carlos Riquelme Ruiz', 'Sebastian Goodman', 'Xiao Wang', 'Yi Tay', 'Siamak Shakeri', 'Mostafa Dehghani', 'Daniel Salz', 'Mario Lucic', 'Michael Tschannen', 'Arsha Nagrani', 'Hexiang Hu', 'Mandar Joshi', 'Bo Pang', 'Ceslee Montgomery', 'Paulina Pietrzyk', 'Marvin Ritter', 'AJ Piergiovanni', 'Matthias Minderer', 'Filip Pavetic', 'Austin Waters', 'Gang Li', 'Ibrahim Alabdulmohsin', 'Lucas Beyer', 'Julien Amelot', 'Kenton Lee', 'Andreas Peter Steiner', 'Yang Li', 'Daniel Keysers', 'Anurag Arnab', 'Yuanzhong Xu', 'Keran Rong', 'Alexander Kolesnikov', 'Mojtaba Seyedhosseini', 'Anelia Angelova', 'Xiaohua Zhai', 'Neil Houlsby', 'Radu Soricut']",http://arxiv.org/pdf/2305.18565v1.pdf,2023-05-29,," We present the training recipe and results of scaling up PaLI-X, a multilingual vision and language model, both in terms of size of the components and the breadth of its training task mixture. Our model achieves new levels of performance on a wide-range of varied and complex tasks, including multiple image-based captioning and question-answering tasks, image-based document understanding and few-shot (in-context) learning, as well as object detection, video question answering, and video captioning. PaLI-X advances the state-of-the-art on most vision-and-language benchmarks considered (25+ of them). Finally, we observe emerging capabilities, such as complex counting and multilingual object detection, tasks that are not explicitly in the training mix.",,arXiv,"['cs.cv', 'cs.cl', 'cs.lg']",, +1298,transformers as statisticians provable incontext learning with incontext algorithm selection,"['Yu Bai', 'Fan Chen', 'Huan Wang', 'Caiming Xiong', 'Song Mei']",http://arxiv.org/pdf/2306.04637v2.pdf,2023-06-07,," Neural sequence models based on the transformer architecture have demonstrated remarkable \emph{in-context learning} (ICL) abilities, where they can perform new tasks when prompted with training and test examples, without any parameter update to the model. This work first provides a comprehensive statistical theory for transformers to perform ICL. Concretely, we show that transformers can implement a broad class of standard machine learning algorithms in context, such as least squares, ridge regression, Lasso, learning generalized linear models, and gradient descent on two-layer neural networks, with near-optimal predictive power on various in-context data distributions. Using an efficient implementation of in-context gradient descent as the underlying mechanism, our transformer constructions admit mild size bounds, and can be learned with polynomially many pretraining sequences.
Building on these ``base'' ICL algorithms, intriguingly, we show that transformers can implement more complex ICL procedures involving \emph{in-context algorithm selection}, akin to what a statistician can do in real life -- A \emph{single} transformer can adaptively select different base ICL algorithms -- or even perform qualitatively different tasks -- on different input sequences, without any explicit prompting of the right algorithm or task. We both establish this in theory by explicit constructions, and also observe this phenomenon experimentally. In theory, we construct two general mechanisms for algorithm selection with concrete examples: pre-ICL testing, and post-ICL validation. As an example, we use the post-ICL validation mechanism to construct a transformer that can perform nearly Bayes-optimal ICL on a challenging task -- noisy linear models with mixed noise levels. Experimentally, we demonstrate the strong in-context algorithm selection capabilities of standard transformer architectures.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl', 'math.st', 'stat.ml', 'stat.th']",, +1299,instruction tuned models are quick learners,"['Himanshu Gupta', 'Saurabh Arjun Sawant', 'Swaroop Mishra', 'Mutsumi Nakamura', 'Arindam Mitra', 'Santosh Mashetty', 'Chitta Baral']",http://arxiv.org/pdf/2306.05539v1.pdf,2023-05-17,," Instruction tuning of language models has demonstrated the ability to enhance model generalization to unseen tasks via in-context learning using a few examples. However, typical supervised learning still requires a plethora of downstream training data for finetuning. Often in real-world situations, there is a scarcity of data available for finetuning, falling somewhere between few shot inference and fully supervised finetuning. In this work, we demonstrate the sample efficiency of instruction tuned models over various tasks by estimating the minimal downstream training data required by them to perform transfer learning and match the performance of state-of-the-art (SOTA) supervised models. We conduct experiments on 119 tasks from Super Natural Instructions (SuperNI) in both the single task learning (STL) and multi task learning (MTL) settings. Our findings reveal that, in the STL setting, instruction tuned models equipped with 25% of the downstream train data surpass the SOTA performance on the downstream tasks. In the MTL setting, an instruction tuned model trained on only 6% of downstream training data achieve SOTA, while using 100% of the training data results in a 3.69% points improvement (ROUGE-L 74.68) over the previous SOTA. We conduct an analysis on T5 vs Tk-Instruct by developing several baselines to demonstrate that instruction tuning aids in increasing both sample efficiency and transfer learning. Additionally, we observe a consistent ~4% performance increase in both settings when pre-finetuning is performed with instructions. Finally, we conduct a categorical study and find that contrary to previous results, tasks in the question rewriting and title generation categories suffer from instruction tuning.",,arXiv,['cs.cl'],, +1300,synapse trajectoryasexemplar prompting with memory for computer control,"['Longtao Zheng', 'Rundong Wang', 'Xinrun Wang', 'Bo An']",http://arxiv.org/pdf/2306.07863v3.pdf,2023-06-13,," Building agents with large language models (LLMs) for computer control is a burgeoning research area, where the agent receives computer states and performs actions to complete complex tasks.
Previous computer agents have demonstrated the benefits of in-context learning (ICL); however, their performance is hindered by several issues. First, the limited context length of LLMs and complex computer states restrict the number of exemplars, as a single webpage can consume the entire context. Second, the exemplars in current methods, such as high-level plans and multi-choice questions, cannot represent complete trajectories, leading to suboptimal performance in long-horizon tasks. Third, existing computer agents rely on task-specific exemplars and overlook the similarity among tasks, resulting in poor generalization to novel tasks. To address these challenges, we introduce Synapse, a computer agent featuring three key components: i) state abstraction, which filters out task-irrelevant information from raw states, allowing more exemplars within the limited context, ii) trajectory-as-exemplar prompting, which prompts the LLM with complete trajectories of the abstracted states and actions to improve multi-step decision-making, and iii) exemplar memory, which stores the embeddings of exemplars and retrieves them via similarity search for generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse achieves a 99.2% average success rate (a 10% relative improvement) across 64 tasks using demonstrations from only 48 tasks. Notably, Synapse is the first ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a 56% relative improvement in average step success rate over the previous state-of-the-art prompting scheme in Mind2Web.",,arXiv,['cs.ai'],, +1301,language to rewards for robotic skill synthesis,"['Wenhao Yu', 'Nimrod Gileadi', 'Chuyuan Fu', 'Sean Kirmani', 'Kuang-Huei Lee', 'Montse Gonzalez Arenas', 'Hao-Tien Lewis Chiang', 'Tom Erez', 'Leonard Hasenclever', 'Jan Humplik', 'Brian Ichter', 'Ted Xiao', 'Peng Xu', 'Andy Zeng', 'Tingnan Zhang', 'Nicolas Heess', 'Dorsa Sadigh', 'Jie Tan', 'Yuval Tassa', 'Fei Xia']",http://arxiv.org/pdf/2306.08647v2.pdf,2023-06-14,," Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions are shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized and accomplish variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections to low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system.
To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles 90% of the designed tasks, while a baseline using primitive skills as the interface with Code-as-policies achieves 50% of the tasks. We further validated our method on a real robot arm where complex manipulation skills such as non-prehensile pushing emerge through our interactive system.",,arXiv,"['cs.ro', 'cs.ai', 'cs.lg']",, +1302,trained transformers learn linear models incontext,"['Ruiqi Zhang', 'Spencer Frei', 'Peter L. Bartlett']",http://arxiv.org/pdf/2306.09927v3.pdf,2023-06-16,," Attention-based neural networks such as transformers have demonstrated a remarkable ability to exhibit in-context learning (ICL): Given a short prompt sequence of tokens from an unseen task, they can formulate relevant per-token and next-token predictions without any parameter updates. By embedding a sequence of labeled training data and unlabeled test data as a prompt, this allows for transformers to behave like supervised learning algorithms. Indeed, recent work has shown that when training transformer architectures over random instances of linear regression problems, these models' predictions mimic those of ordinary least squares. Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of ICL in transformers with a single linear self-attention layer trained by gradient flow on linear regression tasks. We show that despite non-convexity, gradient flow with a suitable random initialization finds a global minimum of the objective function. At this global minimum, when given a test prompt of labeled examples from a new prediction task, the transformer achieves prediction error competitive with the best linear predictor over the test prompt distribution. We additionally characterize the robustness of the trained transformer to a variety of distribution shifts and show that although a number of shifts are tolerated, shifts in the covariate distribution of the prompts are not. Motivated by this, we consider a generalized ICL setting where the covariate distributions can vary across prompts. We show that although gradient flow succeeds at finding a global minimum in this setting, the trained transformer is still brittle under mild covariate shifts. We complement this finding with experiments on large, nonlinear transformer architectures which we show are more robust under covariate shifts.",,arXiv,"['stat.ml', 'cs.ai', 'cs.cl', 'cs.lg']",, +1303,hyenadna longrange genomic sequence modeling at single nucleotide resolution,"['Eric Nguyen', 'Michael Poli', 'Marjan Faizi', 'Armin Thomas', 'Callum Birch-Sykes', 'Michael Wornow', 'Aman Patel', 'Clayton Rabideau', 'Stefano Massaroli', 'Yoshua Bengio', 'Stefano Ermon', 'Stephen A. Baccus', 'Chris Ré']",http://arxiv.org/pdf/2306.15794v2.pdf,2023-06-27,," Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA.
In addition, these methods rely on tokenizers or fixed k-mers to aggregate meaningful DNA units, losing single nucleotide resolution where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level - an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 18 datasets using a model with orders of magnitude less parameters and pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on 7 of 8 datasets on average by +10 accuracy points. Code at https://github.com/HazyResearch/hyena-dna.",,arXiv,"['cs.lg', 'q-bio.gn']",, +1304,generative type inference for python,"['Yun Peng', 'Chaozheng Wang', 'Wenxuan Wang', 'Cuiyun Gao', 'Michael R. Lyu']",http://arxiv.org/pdf/2307.09163v1.pdf,2023-07-18,," Python is a popular dynamic programming language, evidenced by its ranking as the second most commonly used language on GitHub. However, its dynamic type system can lead to potential type errors, leading researchers to explore automatic type inference approaches for Python programs. The rule-based type inference approaches can ensure the accuracy of predicted variable types, but they suffer from low coverage problems. Supervised type inference approaches, while feature-agnostic, require large, high-quality annotated datasets and are limited to pre-defined types. As zero-shot approaches, the cloze-style approaches reformulate the type inference problem into a fill-in-the-blank problem. However, their performance is limited. This paper introduces TypeGen, a few-shot generative type inference approach that incorporates static domain knowledge from static analysis. TypeGen creates chain-of-thought (COT) prompts by translating the type inference steps of static analysis into prompts based on the type dependency graphs (TDGs), enabling language models to learn from how static analysis infers types. By combining COT prompts with code slices and type hints, TypeGen constructs example prompts from human annotations. TypeGen only requires very few annotated examples to teach language models to generate similar COT prompts via in-context learning. Moreover, TypeGen enhances the interpretability of results through the use of the input-explanation-output strategy. Experiments show that TypeGen outperforms the best baseline Type4Py by 10.0% for argument type prediction and 22.5% in return value type prediction in terms of top-1 Exact Match by using only five examples.
Furthermore, TypeGen achieves substantial improvements of 27% to 84% compared to the zero-shot performance of large language models with parameter sizes ranging from 1.3B to 175B in terms of top-1 Exact Match.",,arXiv,['cs.se'],, +1305,incontext pretraining language modeling beyond document boundaries,"['Weijia Shi', 'Sewon Min', 'Maria Lomeli', 'Chunting Zhou', 'Margaret Li', 'Rich James', 'Xi Victoria Lin', 'Noah A. Smith', 'Luke Zettlemoyer', 'Scott Yih', 'Mike Lewis']",http://arxiv.org/pdf/2310.10638v4.pdf,2023-10-16,," Large language models (LMs) are currently trained to predict tokens given document prefixes, enabling them to directly perform long-form generation and prompting-style tasks which can be reduced to document completion. Existing pretraining pipelines train LMs by concatenating random sets of short documents to create input contexts but the prior documents provide no signal for predicting the next document. We instead present In-Context Pretraining, a new approach where language models are pretrained on a sequence of related documents, thereby explicitly encouraging them to read and reason across document boundaries. We can do In-Context Pretraining by simply changing the document ordering so that each context contains related documents, and directly applying existing pretraining pipelines. However, this document sorting problem is challenging. There are billions of documents and we would like the sort to maximize contextual similarity for every document without repeating any data. To do this, we introduce approximate algorithms for finding related documents with efficient nearest neighbor search and constructing coherent input contexts with a graph traversal algorithm. Our experiments show In-Context Pretraining offers a simple and scalable approach to significantly enhance LMs' performance: we see notable improvements in tasks that require more complex contextual reasoning, including in-context learning (+8%), reading comprehension (+15%), faithfulness to previous contexts (+16%), long-context reasoning (+5%), and retrieval augmentation (+9%).",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1306,entity matching using large language models,"['Ralph Peeters', 'Christian Bizer']",http://arxiv.org/pdf/2310.11244v1.pdf,2023-10-17,," Entity Matching is the task of deciding whether two entity descriptions refer to the same real-world entity. Entity Matching is a central step in most data integration pipelines and an enabler for many e-commerce applications which require to match products offers from different vendors. State-of-the-art entity matching methods often rely on pre-trained language models (PLMs) such as BERT or RoBERTa. Two major drawbacks of these models for entity matching are that (i) the models require significant amounts of task-specific training data and (ii) the fine-tuned models are not robust concerning out-of-distribution entities. In this paper, we investigate using large language models (LLMs) for entity matching as a less domain-specific training data reliant and more robust alternative to PLM-based matchers. Our study covers hosted LLMs, such as GPT3.5 and GPT4, as well as open source LLMs based on Llama2 which can be run locally. We evaluate these models in a zero-shot scenario as well as a scenario where task-specific training data is available. We compare different prompt designs as well as the prompt sensitivity of the models in the zero-shot scenario.
We investigate (i) the selection of in-context demonstrations, (ii) the generation of matching rules, as well as (iii) fine-tuning GPT3.5 in the second scenario using the same pool of training data across the different approaches. Our experiments show that GPT4 without any task-specific training data outperforms fine-tuned PLMs (RoBERTa and Ditto) on three out of five benchmark datasets reaching F1 scores around 90%. The experiments with in-context learning and rule generation show that all models beside of GPT4 benefit from these techniques (on average 5.9% and 2.2% F1), while GPT4 does not need such additional guidance in most cases...",,arXiv,"['cs.cl', 'cs.lg']",, +1307,cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment,"['Jixiang Hong', 'Quan Tu', 'Changyu Chen', 'Xing Gao', 'Ji Zhang', 'Rui Yan']",http://arxiv.org/pdf/2310.16271v1.pdf,2023-10-25,," Language models trained on large-scale corpus often generate content that is harmful, toxic, or contrary to human preferences, making their alignment with human values a critical concern. Reinforcement learning from human feedback (RLHF) with algorithms like PPO is a prevalent approach for alignment but is often complex, unstable, and resource-intensive. Recently, ranking-based alignment methods have emerged, offering stability and effectiveness by replacing the RL framework with supervised fine-tuning, but they are costly due to the need for annotated data. Considering that existing large language models (LLMs) like ChatGPT are already relatively well-aligned and cost-friendly, researchers have begun to align the language model with human preference from AI feedback. The common practices, which unidirectionally distill the instruction-following responses from LLMs, are constrained by their bottleneck. Thus we introduce CycleAlign to distill alignment capabilities from parameter-invisible LLMs (black-box) to a parameter-visible model (white-box) in an iterative manner. With in-context learning (ICL) as the core of the cycle, the black-box models are able to rank the model-generated responses guided by human-craft instruction and demonstrations about their preferences. During iterative interaction, the white-box models also have a judgment about responses generated by them. Consequently, the agreement ranking could be viewed as a pseudo label to dynamically update the in-context demonstrations and improve the preference ranking ability of black-box models. Through multiple interactions, the CycleAlign framework could align the white-box model with the black-box model effectively in a low-resource way. Empirical results illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing methods, and achieves the state-of-the-art performance in alignment with human value.",,arXiv,"['cs.cl', 'cs.ai']",, +1308,transformers are efficient incontext estimators for wireless communication,"['Vicram Rajagopalan', 'Vishnu Teja Kunde', 'Chandra Shekhara Kaushik Valmeekam', 'Krishna Narayanan', 'Srinivas Shakkottai', 'Dileep Kalathil', 'Jean-Francois Chamberland']",http://arxiv.org/pdf/2311.00226v2.pdf,2023-11-01,," Pre-trained transformers can perform in-context learning, where they adapt to a new task using only a small number of prompts without any explicit model optimization. Inspired by this attribute, we propose a novel approach, called in-context estimation, for the canonical communication problem of estimating transmitted symbols from received symbols.
A communication channel is essentially a noisy function that maps transmitted symbols to received symbols, and this function can be represented by an unknown parameter whose statistics depend on an (also unknown) latent context. Conventional approaches typically do not fully exploit hierarchical model with the latent context. Instead, they often use mismatched priors to form a linear minimum mean-squared error estimate of the channel parameter, which is then used to estimate successive, unknown transmitted symbols. We make the basic connection that transformers show excellent contextual sequence completion with a few prompts, and so they should be able to implicitly determine the latent context from pilot symbols to perform end-to-end in-context estimation of transmitted symbols. Furthermore, the transformer should use information efficiently, i.e., it should utilize any pilots received to attain the best possible symbol estimates. Through extensive simulations, we show that in-context estimation not only significantly outperforms standard approaches, but also achieves the same performance as an estimator with perfect knowledge of the latent context within a few context examples. Thus, we make a strong case that transformers are efficient in-context estimators in the communication setting.",,arXiv,"['eess.sp', 'cs.lg']",, +1309,2nd place winning solution for the cvpr2023 visual anomaly and novelty detection challenge multimodal prompting for datacentric anomaly detection,"['Yunkang Cao', 'Xiaohao Xu', 'Chen Sun', 'Yuqi Cheng', 'Liang Gao', 'Weiming Shen']",http://arxiv.org/pdf/2306.09067v2.pdf,2023-06-15,," This technical report introduces the winning solution of the team Segment Any Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. Going beyond uni-modal prompt, e.g., language prompt, we present a novel framework, i.e., Segment Any Anomaly + (SAA$+$), for zero-shot anomaly segmentation with multi-modal prompts for the regularization of cascaded modern foundation models. Inspired by the great zero-shot generalization ability of foundation models like Segment Anything, we first explore their assembly (SAA) to leverage diverse multi-modal prior knowledge for anomaly localization. Subsequently, we further introduce multimodal prompts (SAA$+$) derived from domain expert knowledge and target image context to enable the non-parameter adaptation of foundation models to anomaly segmentation. The proposed SAA$+$ model achieves state-of-the-art performance on several anomaly segmentation benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will release the code of our winning solution for the CVPR2023 VAN.",,arXiv,['cs.cv'],, +1310,similarityaware multimodal prompt learning for fake news detection,"['Ye Jiang', 'Xiaomin Yu', 'Yimin Wang', 'Xiaoman Xu', 'Xingyi Song', 'Diana Maynard']",http://arxiv.org/pdf/2304.04187v3.pdf,2023-04-09,," The standard paradigm for fake news detection mainly utilizes text information to model the truthfulness of news. However, the discourse of online fake news is typically subtle and it requires expert knowledge to use textual information to debunk fake news. Recently, studies focusing on multimodal fake news detection have outperformed text-only methods.
Recent approaches utilizing the pre-trained model to extract unimodal features, or fine-tuning the pre-trained model directly, have become a new paradigm for detecting fake news. Again, this paradigm either requires a large number of training instances, or updates the entire set of pre-trained model parameters, making real-world fake news detection impractical. Furthermore, traditional multimodal methods fuse the cross-modal features directly without considering that the uncorrelated semantic representation might inject noise into the multimodal features. This paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE) framework. First, we incorporate prompt learning into multimodal fake news detection. Prompt learning, which only tunes prompts with a frozen language model, can reduce memory usage significantly and achieve comparable performances, compared with fine-tuning. We analyse three prompt templates with a soft verbalizer to detect fake news. In addition, we introduce the similarity-aware fusing method to adaptively fuse the intensity of multimodal representation and mitigate the noise injection via uncorrelated cross-modal features. For evaluation, SAMPLE surpasses the F1 and the accuracies of previous works on two benchmark multimodal datasets, demonstrating the effectiveness of the proposed method in detecting fake news. In addition, SAMPLE also is superior to other approaches regardless of few-shot and data-rich settings.",,arXiv,['cs.cl'],, +1311,multitask multimodal prompted training for interactive embodied task completion,"['Georgios Pantazopoulos', 'Malvina Nikandrou', 'Amit Parekh', 'Bhathiya Hemanthage', 'Arash Eshghi', 'Ioannis Konstas', 'Verena Rieser', 'Oliver Lemon', 'Alessandro Suglia']",http://arxiv.org/pdf/2311.04067v1.pdf,2023-11-07,," Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Different to previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena",,arXiv,"['cs.lg', 'cs.ai', 'cs.cv']",, +1312,parameterefficient tuning of largescale multimodal foundation model,"['Haixin Wang', 'Xinlong Yang', 'Jianlong Chang', 'Dian Jin', 'Jinan Sun', 'Shikun Zhang', 'Xiao Luo', 'Qi Tian']",http://arxiv.org/pdf/2305.08381v3.pdf,2023-05-15,," Driven by the progress of large-scale pre-training, parameter-efficient transfer learning has gained immense popularity across different subfields of Artificial Intelligence. The core is to adapt the model to downstream tasks with only a small set of parameters. Recently, researchers have leveraged such proven techniques in multimodal tasks and achieve promising results.
However, two critical issues remain unresolved: how to further reduce the complexity with lightweight design and how to boost alignment between modalities under extremely low parameters. In this paper, we propose A graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges. Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning, which explores the low intrinsic dimension with only 0.04% parameters of the pre-trained model. Then, for better modality alignment, we propose the Informative Context Enhancement and Gated Query Transformation module under extremely few parameters scenes. A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach. Our code is available at: https://github.com/WillDreamer/Aurora.",,arXiv,['cs.cv'],, +1313,reframing instructional prompts to gptk's language,"['Swaroop Mishra', 'Daniel Khashabi', 'Chitta Baral', 'Yejin Choi', 'Hannaneh Hajishirzi']",http://arxiv.org/pdf/2109.07830v3.pdf,2021-09-16,," What kinds of instructional prompts are easier to follow for Language Models (LMs)? We study this question by conducting extensive empirical analysis that shed light on important features of successful instructional prompts. Specifically, we study several classes of reframing techniques for manual reformulation of prompts into more effective ones. Some examples include decomposing a complex task instruction into multiple simpler tasks or itemizing instructions into sequential steps. Our experiments compare the zero-shot and few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks across 6 categories. Compared with original instructions, our reframed instructions lead to significant improvements across LMs with different sizes. For example, the same reframed prompts boost few-shot performance of GPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all tasks. Furthermore, reframed instructions reduce the number of examples required to prompt LMs in the few-shot setting. We hope these empirically-driven techniques will pave the way towards more effective future prompting algorithms.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1314,large language models encode clinical knowledge,"['Karan Singhal', 'Shekoofeh Azizi', 'Tao Tu', 'S. Sara Mahdavi', 'Jason Wei', 'Hyung Won Chung', 'Nathan Scales', 'Ajay Tanwani', 'Heather Cole-Lewis', 'Stephen Pfohl', 'Perry Payne', 'Martin Seneviratne', 'Paul Gamble', 'Chris Kelly', 'Nathaneal Scharli', 'Aakanksha Chowdhery', 'Philip Mansfield', 'Blaise Aguera y Arcas', 'Dale Webster', 'Greg S. Corrado', 'Yossi Matias', 'Katherine Chou', 'Juraj Gottweis', 'Nenad Tomasev', 'Yun Liu', 'Alvin Rajkomar', 'Joelle Barral', 'Christopher Semturs', 'Alan Karthikesalingam', 'Vivek Natarajan']",http://arxiv.org/pdf/2212.13138v1.pdf,2022-12-26,," Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks.
To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.",,arXiv,['cs.cl'],, +1315,instructuie multitask instruction tuning for unified information extraction,"['Xiao Wang', 'Weikang Zhou', 'Can Zu', 'Han Xia', 'Tianze Chen', 'Yuansen Zhang', 'Rui Zheng', 'Junjie Ye', 'Qi Zhang', 'Tao Gui', 'Jihua Kang', 'Jingsheng Yang', 'Siyuan Li', 'Chunsai Du']",http://arxiv.org/pdf/2304.08085v1.pdf,2023-04-17,," Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset, which is significantly lower than the state-of-the-art performance. In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves comparable performance to Bert in supervised settings and significantly outperforms the state-of-the-art and gpt3.5 in zero-shot settings.",,arXiv,"['cs.cl', 'cs.ai']",, +1316,fewshot instruction prompts for pretrained language models to detect social biases,"['Shrimai Prabhumoye', 'Rafal Kocielnik', 'Mohammad Shoeybi', 'Anima Anandkumar', 'Bryan Catanzaro']",http://arxiv.org/pdf/2112.07868v2.pdf,2021-12-15,," Detecting social bias in text is challenging due to nuance, subjectivity, and difficulty in obtaining good quality labeled datasets at scale, especially given the evolving nature of social biases and society. To address these challenges, we propose a few-shot instruction-based method for prompting pre-trained language models (LMs). We select a few class-balanced exemplars from a small support repository that are closest to the query to be labeled in the embedding space.
We then provide the LM with instruction that consists of this subset of labeled exemplars, the query text to be classified, a definition of bias, and prompt it to make a decision. We demonstrate that large LMs used in a few-shot context can detect different types of fine-grained biases with similar and sometimes superior accuracy to fine-tuned models. We observe that the largest 530B parameter model is significantly more effective in detecting social bias compared to smaller models (achieving at least 13% improvement in AUC metric compared to other models). It also maintains a high AUC (dropping less than 2%) when the labeled repository is reduced to as few as $100$ samples. Large pretrained language models thus make it easier and quicker to build new bias detectors.",,arXiv,"['cs.cl', 'cs.ai']",, +1317,benchmarking a foundation llm on its ability to relabel structure names in accordance with the aapm tg263 report,"['Jason Holmes', 'Lian Zhang', 'Yuzhen Ding', 'Hongying Feng', 'Zhengliang Liu', 'Tianming Liu', 'William W. Wong', 'Sujay A. Vora', 'Jonathan B. Ashman', 'Wei Liu']",http://arxiv.org/pdf/2310.03874v1.pdf,2023-10-05,," Purpose: To introduce the concept of using large language models (LLMs) to re-label structure names in accordance with the American Association of Physicists in Medicine (AAPM) Task Group (TG)-263 standard, and to establish a benchmark for future studies to reference. Methods and Materials: The Generative Pre-trained Transformer (GPT)-4 application programming interface (API) was implemented as a Digital Imaging and Communications in Medicine (DICOM) storage server, which upon receiving a structure set DICOM file, prompts GPT-4 to re-label the structure names of both target volumes and normal tissues according to the AAPM TG-263. Three disease sites, prostate, head and neck, and thorax were selected for evaluation. For each disease site category, 150 patients were randomly selected for manually tuning the instructions prompt (in batches of 50) and 50 patients were randomly selected for evaluation. Structure names that were considered were those that were most likely to be relevant for studies utilizing structure contours for many patients. Results: The overall re-labeling accuracy of both target volumes and normal tissues for prostate, head and neck, and thorax cases was 96.0%, 98.5%, and 96.9% respectively. Re-labeling of target volumes was less accurate on average except for prostate - 100%, 93.1%, and 91.1% respectively. Conclusions: Given the accuracy of GPT-4 in re-labeling structure names of both target volumes and normal tissues as presented in this work, LLMs are poised to be the preferred method for standardizing structure names in radiation oncology, especially considering the rapid advancements in LLM capabilities that are likely to continue.",,arXiv,"['physics.med-ph', 'cs.cl']",, +1318,zeroshot information extraction from radiological reports using chatgpt,"['Danqing Hu', 'Bing Liu', 'Xiaofeng Zhu', 'Xudong Lu', 'Nan Wu']",http://arxiv.org/pdf/2309.01398v2.pdf,2023-09-04,," Electronic health records contain an enormous amount of valuable information, but many are recorded in free text. Information extraction is the strategy to transform the sequence of characters into structured data, which can be employed for secondary analysis.
However, the traditional information extraction components, such as named entity recognition and relation extraction, require annotated data to optimize the model parameters, which has become one of the major bottlenecks in building information extraction systems. With the large language models achieving good performances on various downstream NLP tasks without parameter tuning, it becomes possible to use large language models for zero-shot information extraction. In this study, we aim to explore whether the most popular large language model, ChatGPT, can extract useful information from the radiological reports. We first design the prompt template for the interested information in the CT reports. Then, we generate the prompts by combining the prompt template with the CT reports as the inputs of ChatGPT to obtain the responses. A post-processing module is developed to transform the responses into structured extraction results. We conducted the experiments with 847 CT reports collected from Peking University Cancer Hospital. The experimental results indicate that ChatGPT can achieve competitive performances for some extraction tasks compared with the baseline information extraction system, but some limitations need to be further improved.",,arXiv,['cs.cl'],, +1319,healthprompt a zeroshot learning paradigm for clinical natural language processing,"['Sonish Sivarajkumar', 'Yanshan Wang']",http://arxiv.org/pdf/2203.05061v1.pdf,2022-03-09,," Deep learning algorithms are dependent on the availability of large-scale annotated clinical text datasets. The lack of such publicly available datasets is the biggest bottleneck for the development of clinical Natural Language Processing(NLP) systems. Zero-Shot Learning(ZSL) refers to the use of deep learning models to classify instances from new classes of which no training data have been seen before. Prompt-based learning is an emerging ZSL technique where we define task-based templates for NLP tasks. We developed a novel prompt-based clinical NLP framework called HealthPrompt and applied the paradigm of prompt-based learning on clinical texts. In this technique, rather than fine-tuning a Pre-trained Language Model(PLM), the task definitions are tuned by defining a prompt template. We performed an in-depth analysis of HealthPrompt on six different PLMs in a no-data setting. Our experiments prove that prompts effectively capture the context of clinical texts and perform remarkably well without any training data.",,arXiv,"['cs.cl', 'cs.ai', 'cs.ir']",, +1320,a fewshot approach to resume information extraction via prompts,"['Chengguang Gan', 'Tatsunori Mori']",http://arxiv.org/pdf/2209.09450v2.pdf,2022-09-20,," Prompt learning's fine-tune performance on text classification tasks has attracted the NLP community. This paper applies it to resume information extraction, improving existing methods for this task. We created manual templates and verbalizers tailored to resume texts and compared the performance of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt template design across NLP tasks. We present the Manual Knowledgeable Verbalizer (MKV), a rule for constructing verbalizers for specific applications. Our tests show that MKV rules yield more effective, robust templates and verbalizers than existing methods. Our MKV approach resolved sample imbalance, surpassing current automatic prompt methods.
This study underscores the value of tailored prompt learning for resume extraction, stressing the importance of custom-designed templates and verbalizers.",,arXiv,['cs.cl'],, +1321,the prompt artists,"['Minsuk Chang', 'Stefania Druga', 'Alex Fiannaca', 'Pedro Vergani', 'Chinmay Kulkarni', 'Carrie Cai', 'Michael Terry']",http://arxiv.org/pdf/2303.12253v1.pdf,2023-03-22,," This paper examines the art practices, artwork, and motivations of prolific users of the latest generation of text-to-image models. Through interviews, observations, and a user survey, we present a sampling of the artistic styles and describe the developed community of practice around generative AI. We find that: 1) the text prompt and the resulting image can be considered collectively as an art piece prompts as art and 2) prompt templates (prompts with ``slots'' for others to fill in with their own words) are developed to create generative art styles. We discover that the value placed by this community on unique outputs leads to artists seeking specialized vocabulary to produce distinctive art pieces (e.g., by reading architectural blogs to find phrases to describe images). We also find that some artists use ""glitches"" in the model that can be turned into artistic styles of their own right. From these findings, we outline specific implications for design regarding future prompting and image editing options.",,arXiv,['cs.hc'],, +1322,estimating uncertainty in multimodal foundation models using public internet data,"['Shiladitya Dutta', 'Hongbo Wei', 'Lars van der Laan', 'Ahmed M. Alaa']",http://arxiv.org/pdf/2310.09926v2.pdf,2023-10-15,," Foundation models are trained on vast amounts of data at scale using self-supervised learning, enabling adaptation to a wide range of downstream tasks. At test time, these models exhibit zero-shot capabilities through which they can classify previously unseen (user-specified) categories. In this paper, we address the problem of quantifying uncertainty in these zero-shot predictions. We propose a heuristic approach for uncertainty estimation in zero-shot settings using conformal prediction with web data. Given a set of classes at test time, we conduct zero-shot classification with CLIP-style models using a prompt template, e.g., ""an image of a "", and use the same template as a search query to source calibration data from the open web. Given a web-based calibration set, we apply conformal prediction with a novel conformity score that accounts for potential errors in retrieved web data. We evaluate the utility of our proposed method in Biomedical foundation models; our preliminary results show that web-based conformal prediction sets achieve the target coverage with satisfactory efficiency on a variety of biomedical datasets.",,arXiv,['cs.ai'],, +1323,beyond yes and no improving zeroshot llm rankers via scoring finegrained relevance labels,"['Honglei Zhuang', 'Zhen Qin', 'Kai Hui', 'Junru Wu', 'Le Yan', 'Xuanhui Wang', 'Michael Bendersky']",http://arxiv.org/pdf/2310.14122v2.pdf,2023-10-21,," Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like ""Yes"" and ""No"". However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query.
We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.",,arXiv,['cs.ir'],, +1324,"large language models can share images, too!","['Young-Jun Lee', 'Jonghwan Hyeon', 'Ho-Jin Choi']",http://arxiv.org/pdf/2310.14804v1.pdf,2023-10-23,," This paper explores the image-sharing capability of Large Language Models (LLMs), such as InstructGPT, ChatGPT, and GPT-4, in a zero-shot setting, without the help of visual foundation models. Inspired by the two-stage process of image-sharing in human dialogues, we propose a two-stage framework that allows LLMs to predict potential image-sharing turns and generate related image descriptions using our effective restriction-based prompt template. With extensive experiments, we unlock the \textit{image-sharing} capability of LLMs in zero-shot prompting, with GPT-4 achieving the best performance. Additionally, we uncover the emergent \textit{image-sharing} ability in zero-shot prompting, demonstrating the effectiveness of restriction-based prompts in both stages of our framework. Based on this framework, we augment the PhotoChat dataset with images generated by Stable Diffusion at predicted turns, namely PhotoChat++. To our knowledge, this is the first study to assess the image-sharing ability of LLMs in a zero-shot setting without visual foundation models. The source code and the dataset will be released after publication.",,arXiv,"['cs.cv', 'cs.ai', 'cs.cl']",, +1325,promptbased zeroshot relation extraction with semantic knowledge augmentation,"['Jiaying Gong', 'Hoda Eldardiry']",http://arxiv.org/pdf/2112.04539v2.pdf,2021-12-08,," In relation triplet extraction (RTE), recognizing unseen (new) relations for which there are no training instances is a challenging task. Efforts have been made to recognize unseen relations based on question-answering models or relation descriptions. However, these approaches miss the semantic information about connections between seen and unseen relations. In this paper, We propose a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize unseen relations under the zero-shot setting. We present a new word-level analogy-based sentence translation rule and generate augmented instances with unseen relations from instances with seen relations using that new rule. We design prompts with weighted virtual label construction based on an external knowledge graph to integrate semantic knowledge information learned from seen relations. Instead of using the actual label sets in the prompt template, we construct weighted virtual label words. We learn the representations of both seen and unseen relations with augmented instances and prompts. We then calculate the distance between the generated representations using prototypical networks to predict unseen relations. Extensive experiments conducted on three public datasets FewRel, Wiki-ZSL, and NYT, show that ZS-SKA outperforms state-of-the-art methods under the zero-shot scenarios.
Our experimental results also demonstrate the effectiveness and robustness of ZS-SKA.",,arXiv,['cs.cl'],, +1326,adapting prompt for fewshot tabletotext generation,"['Zhixin Guo', 'Minyxuan Yan', 'Jiexing Qi', 'Jianping Zhou', 'Ziwei He', 'Zhouhan Lin', 'Guanjie Zheng', 'Xinbing Wang']",http://arxiv.org/pdf/2302.12468v2.pdf,2023-02-24,," Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks. However, the lack of domain-specific knowledge makes it challenging to bridge the topological gap between tabular data and text, especially in real-world applications with limited resources. To mitigate the limitation of insufficient labeled data, we propose a novel framework: Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt prompt templates of domain-specific knowledge into the model, which brings at least three benefits: (1) it injects representation of normal table-related descriptions to bridge the topological gap between tabular data and texts; (2) it enables us to use large amounts of unlabeled domain-specific knowledge fully, which can alleviate the PLMs' inherent shortcomings of lacking domain knowledge; (3) it allows us to design various tasks to explore the domain-specific knowledge. Extensive experiments and analyses are conducted on three open-domain few-shot natural language generation (NLG) data sets: Humans, Songs, and Books. Compared to previous state-of-the-art approaches, our model achieves superior performance in terms of both fluency and accuracy.",,arXiv,['cs.cl'],, +1327,revisit input perturbation problems for llms a unified robustness evaluation framework for noisy slot filling task,"['Guanting Dong', 'Jinxu Zhao', 'Tingfeng Hui', 'Daichi Guo', 'Wenlong Wan', 'Boqi Feng', 'Yueyan Qiu', 'Zhuoma Gongque', 'Keqing He', 'Zechen Wang', 'Weiran Xu']",http://arxiv.org/pdf/2310.06504v1.pdf,2023-10-10,," With the increasing capabilities of large language models (LLMs), these high-performance models have achieved state-of-the-art results on a wide range of natural language processing (NLP) tasks. However, the models' performance on commonly-used benchmark datasets often fails to accurately reflect their reliability and robustness when applied to real-world noisy data. To address these challenges, we propose a unified robustness evaluation framework based on the slot-filling task to systematically evaluate the dialogue understanding capability of LLMs in diverse input perturbation scenarios. Specifically, we construct a input perturbation evaluation dataset, Noise-LLM, which contains five types of single perturbation and four types of mixed perturbation data. Furthermore, we utilize a multi-level data augmentation method (character, word, and sentence levels) to construct a candidate data pool, and carefully design two ways of automatic task demonstration construction strategies (instance-level and entity-level) with various prompt templates. Our aim is to assess how well various robustness methods of LLMs perform in real-world noisy scenarios. The experiments have demonstrated that the current open-source LLMs generally achieve limited perturbation robustness performance.
Based on these experimental observations, we make some forward-looking suggestions to fuel the research in this direction.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",, +1328,do language models learn about legal entity types during pretraining,"['Claire Barale', 'Michael Rovatsos', 'Nehal Bhuta']",http://arxiv.org/pdf/2310.13092v1.pdf,2023-10-19,," Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been limited research conducted on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, serving as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and a foundational task to numerous downstream legal NLP applications. Through systematic evaluation and analysis and two types of prompting (cloze sentences and QA-based templates) and to clarify the nature of these acquired cues, we compare diverse types and lengths of entities both general and domain-specific entities, semantics or syntax signals, and different LM pretraining corpus (generic and legal-oriented) and architectures (encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpus, (3) LMs demonstrate the ability to type entities even in the case of multi-token entities, (4) all models struggle with entities belonging to sub-domains of the law (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.",,arXiv,['cs.cl'],, +1329,llamarec twostage recommendation using large language models for ranking,"['Zhenrui Yue', 'Sara Rabhi', 'Gabriel de Souza Pereira Moreira', 'Dong Wang', 'Even Oldridge']",http://arxiv.org/pdf/2311.02089v1.pdf,2023-10-25,," Recently, large language models (LLMs) have exhibited significant progress in language understanding and generation. By leveraging textual features, customized LLMs are also applied for recommendation and demonstrate improvements across diverse recommendation scenarios. Yet the majority of existing methods perform training-free recommendation that heavily relies on pretrained knowledge (e.g., movie recommendation). In addition, inference on LLMs is slow due to autoregressive generation, rendering existing methods less effective for real-time recommendation. As such, we propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec). In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history. Then, both history and retrieved items are fed to the LLM in text via a carefully designed prompt template. Instead of generating next-item titles, we adopt a verbalizer-based approach that transforms output logits into probability distributions over the candidate items. Therefore, the proposed LlamaRec can efficiently rank items without generating long text. To validate the effectiveness of the proposed framework, we compare against state-of-the-art baseline methods on benchmark datasets.
Our experimental results demonstrate the performance of LlamaRec, which consistently achieves superior performance in both recommendation performance and efficiency.",,arXiv,"['cs.ir', 'cs.ai', 'cs.cl']",, +1330,"large language model is not a good fewshot information extractor, but a good reranker for hard samples!","['Yubo Ma', 'Yixin Cao', 'YongChing Hong', 'Aixin Sun']",http://arxiv.org/pdf/2303.08559,2023-03-15,,"Large Language Models (LLMs) have made remarkable strides in various tasks. Whether LLMs are competitive few-shot solvers for information extraction (IE) tasks, however, remains an open problem. In this work, we aim to provide a thorough answer to this question. Through extensive experiments on nine datasets across four IE tasks, we demonstrate that current advanced LLMs consistently exhibit inferior performance, higher latency, and increased budget requirements compared to fine-tuned SLMs under most settings. Therefore, we conclude that LLMs are not effective few-shot information extractors in general. Nonetheless, we illustrate that with appropriate prompting strategies, LLMs can effectively complement SLMs and tackle challenging samples that SLMs struggle with. And moreover, we propose an adaptive filter-then-rerank paradigm to combine the strengths of LLMs and SLMs. In this paradigm, SLMs serve as filters and LLMs serve as rerankers. By prompting LLMs to rerank a small portion of difficult samples identified by SLMs, our preliminary system consistently achieves promising improvements (2.4% F1-gain on average) on various IE tasks, with an acceptable time and cost investment.",0100785773b8217c44606ab260e3212f93b0a4fd,Semantic Scholar,,highly relevant,"The paper describes the use of large language models and the development of specific prompts to extract and standardize clinical information from medical reports, which is a direct application of prompt engineering." +1331,retrieving supporting evidence for generative question answering,"['Siqing Huo', 'Negar Arabzadeh', 'Charlie Clarke']",https://arxiv.org/pdf/2309.11392,2023-09-20,,"Current large language models (LLMs) can exhibit near-human levels of performance on many natural language-based tasks, including open-domain question answering. Unfortunately, at this time, they also convincingly hallucinate incorrect answers, so that responses to questions must be verified against external sources before they can be accepted at face value. In this paper, we report two simple experiments to automatically validate generated answers against a corpus. We base our experiments on questions and passages from the MS MARCO (V1) test collection, and a retrieval pipeline consisting of sparse retrieval, dense retrieval and neural rerankers. In the first experiment, we validate the generated answer in its entirety. After presenting a question to an LLM and receiving a generated answer, we query the corpus with the combination of the question + generated answer. We then present the LLM with the combination of the question + generated answer + retrieved answer, prompting it to indicate if the generated answer can be supported by the retrieved answer. In the second experiment, we consider the generated answer at a more granular level, prompting the LLM to extract a list of factual statements from the answer and verifying each statement separately. We query the corpus with each factual statement and then present the LLM with the statement and the corresponding retrieved evidence.
The LLM is prompted to indicate if the statement can be supported and make necessary edits using the retrieved material. With an accuracy of over 80%, we find that an LLM is capable of verifying its generated answer when a corpus of supporting material is provided. However, manual assessment of a random sample of questions reveals that incorrect generated answers are missed by this verification process. While this verification process can reduce hallucinations, it can not entirely eliminate them.",0630a18fe3fe4765132ad52a591f9776cf3284bf,Semantic Scholar,,highly relevant,"The abstract explicitly mentions the use of a 'Mixture of Prompts' (MoPs) for adjusting pretrained language models for new tasks, which indicates direct relevance to prompt engineering." +1332,prd peer rank and discussion improve large language model based evaluations,"['Ruosen Li', 'Teerth Patel', 'Xinya Du']",https://arxiv.org/pdf/2307.02762,2023-07-06,,"Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized""strongest""LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.",130d18d1d455336e1a5b06c85784894bb67d87ec,Semantic Scholar,,somewhat relevant,"The abstract indicates the use of prompts to activate artificial neurons in LLMs (RoBERTa), which aligns with the use of prompts in prompt engineering." +1333,conavgpt multirobot cooperative visual semantic navigation using large language models,"['Bangguo Yu', 'H. Kasaei', 'Ming Cao']",https://arxiv.org/pdf/2310.07937,2023-10-11,,"In advanced human-robot interaction tasks, visual target navigation is crucial for autonomous robots navigating unknown environments. While numerous approaches have been developed in the past, most are designed for single-robot operations, which often suffer from reduced efficiency and robustness due to environmental complexities. Furthermore, learning policies for multi-robot collaboration are resource-intensive. To address these challenges, we propose Co-NavGPT, an innovative framework that integrates Large Language Models (LLMs) as a global planner for multi-robot cooperative visual target navigation. Co-NavGPT encodes the explored environment data into prompts, enhancing LLMs' scene comprehension. It then assigns exploration frontiers to each robot for efficient target search. 
Experimental results on Habitat-Matterport 3D (HM3D) demonstrate that Co-NavGPT surpasses existing models in success rates and efficiency without any learning process, demonstrating the vast potential of LLMs in multi-robot collaboration domains. The supplementary video, prompts, and code can be accessed via the following link: https://sites.google.com/view/co-navgpt",16ecaa7cf142605331fc21c9be73c7b13e8c1acd,Semantic Scholar,,highly relevant,"The paper describes how prompting a large language model improves the generation of labels for financial sentiment analysis, indicative of prompt engineering." +1334,retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain,"['Chunxi Guo', 'Zhiliang Tian', 'Jintao Tang', 'Shasha Li', 'Zhihua Wen', 'Kaixuan Wang', 'Ting Wang']",https://arxiv.org/pdf/2307.05074,2023-07-11,,"Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases. Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL. However, it faces challenges with strict SQL syntax requirements. Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large. In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain. Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question. To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval. Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions. To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL. Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.",191e300e381d4128b749d16fe3d83c8643a3bd1f,Semantic Scholar,,highly relevant,"The paper explores the structuring of prompts for LLMs in the context of dialog evaluation, which is directly related to prompt engineering." +1335,regionblip a unified multimodal pretraining framework for holistic and regional comprehension,"['Qiang Zhou', 'Chaohui Yu', 'Shaofeng Zhang', 'Sitong Wu', 'Zhibin Wang', 'Fan Wang']",https://arxiv.org/pdf/2308.02299,2023-08-03,,"In this work, we investigate extending the comprehension of Multi-modal Large Language Models (MLLMs) to regional objects. To this end, we propose to extract features corresponding to regional objects as soft prompts for LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning. To effectively extract regional features from regular image features and irregular point cloud features, we present a novel and unified position-assisted feature extraction module. Furthermore, training an MLLM from scratch is highly time-consuming. 
Thus, we propose incrementally extending existing pre-trained MLLMs to comprehend more modalities and the regional objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2, an impressive MLLM, and optimize the modality-specific Lora parameters in Q-Former and LLM for each newly introduced modality. The freezing of the Q-Former eliminates the need for extensive pre-training on massive image-text data. The freezed Q-Former pre-trained from massive image-text data is also beneficial for the pre-training on image-region-text data. We name our framework RegionBLIP. We pre-train RegionBLIP on image-region-text, point-cloud-text, and point-cloud-region-text data. Experimental results verify that \Ours{} can preserve the image comprehension capability of BILP-2 and further gain a comprehension of the newly introduced point cloud modality and regional objects. The Data, Code, and Pre-trained models will be available at https://github.com/mightyzau/RegionBLIP.",1ee8c8dd9d04247515b33775532b72df7b8ec0f3,Semantic Scholar,,highly relevant,"The paper explicitly describes the development and evaluation of new prompting techniques, such as Plan-and-Solve (PS) Prompting, for improving zero-shot chain-of-thought reasoning in large language models, which is directly related to the topic of prompt engineering." +1336,rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought,"['Tianci Xue', 'Ziqi Wang', 'Zhenhailong Wang', 'Chi Han', 'Pengfei Yu', 'Heng Ji']",https://arxiv.org/pdf/2305.11499,2023-05-19,,"Large language Models (LLMs) have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use coarse-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCoT (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and rectifying factual inconsistency in LLMs, generated solutions. To detect factual inconsistency, RCoT first asks LLMs to reconstruct the problem based on generated solutions. Then fine-grained comparisons between the original problem and the reconstructed problem expose the factual inconsistency in the original solutions. To rectify the solution, RCoT formulates detected factual inconsistency into fine-grained feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCoT over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT reaches 94.6% accuracy on GSM8K), encouraging the community to further explore the fine-grained feedback generation methods.",22d5459d1f47341b355feeb1becc37208d6ec365,Semantic Scholar,,highly relevant,"The paper focuses on using prompts ('creating prompts' and 'few-shot chain-of-thought prompt') to guide large language models to perform annotation tasks, which aligns with the topic of prompt engineering." +1337,language models enable simple systems for generating structured views of heterogeneous data lakes,"['Simran Arora', 'Brandon Yang', 'Sabri Eyuboglu', 'A. 
Narayan', 'Andrew Hojel', 'Immanuel Trummer', 'Christopher Ré']",http://arxiv.org/pdf/2304.09433,2023-04-19,,"A long standing goal in the data management community is developing systems that input documents and output queryable tables without user effort. Given the sheer variety of potential documents, state-of-the art systems make simplifying assumptions and use domain specific training. In this work, we ask whether we can maintain generality by using the in-context learning abilities of large language models (LLMs). We propose and evaluate Evaporate, a prototype system powered by LLMs. We identify two strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended implementation, Evaporate-Code+, which achieves better quality than direct extraction. Our insight is to generate many candidate functions and ensemble their extractions using weak supervision. Evaporate-Code+ outperforms the state-of-the art systems using a sublinear pass over the documents with the LLM. This equates to a 110X reduction in the number of documents the LLM needs to process across our 16 real-world evaluation settings.",2ef1c2438c3a4552db9e7080e15d8c51bc071f58,Semantic Scholar,,highly relevant,"The abstract mentions the use of specialized prompting for LLMs, which aligns with the discussion on prompt engineering for enhancing language models." +1338,prompting languageinformed distribution for compositional zeroshot learning,"['Wentao Bao', 'Lichang Chen', 'Heng Huang', 'Yu Kong']",https://arxiv.org/pdf/2305.14428,2023-05-23,,"Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to the prompt tuning on large pre-trained visual language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, i.e., state and object, are not properly addressed in existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, aka., PLID, for the CZSL task. Specifically, the PLID leverages pre-trained large language models (LLM) to 1) formulate the language-informed class distributions which are diverse and informative, and 2) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module and a stochastic logit mixup (SLM) strategy are proposed to dynamically fuse the decisions from the compositional and the primitive logit space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization.
Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.",2ff69c238e26c473a6d8bcbb9292ded74d7fd1c2,Semantic Scholar,,highly relevant,"The paper studies the behaviour of large language models with negated prompts, which relates to the understanding and interaction of LMs with prompts, hence pertinent to prompt engineering, especially if it pertains to hard prefix prompts." +1339,developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer,"['Hyeon Seok Choi', 'Jun Yeong Song', 'Kyung Hwan Shin', 'Ji Hyun Chang', 'B. Jang']",https://www.e-roj.org/upload/pdf/roj-2023-00633.pdf,2023-09-01,,"Purpose We aimed to evaluate the time and cost of developing prompts using large language model (LLM), tailored to extract clinical factors in breast cancer patients and their accuracy. Materials and Methods We collected data from reports of surgical pathology and ultrasound from breast cancer patients who underwent radiotherapy from 2020 to 2022. We extracted the information using the Generative Pre-trained Transformer (GPT) for Sheets and Docs extension plugin and termed this the “LLM” method. The time and cost of developing the prompts with LLM methods were assessed and compared with those spent on collecting information with “full manual” and “LLM-assisted manual” methods. To assess accuracy, 340 patients were randomly selected, and the extracted information by LLM method were compared with those collected by “full manual” method. Results Data from 2,931 patients were collected. We developed 12 prompts for Extract function and 12 for Format function to extract and standardize the information. The overall accuracy was 87.7%. For lymphovascular invasion, it was 98.2%. Developing and processing the prompts took 3.5 hours and 15 minutes, respectively. Utilizing the ChatGPT application programming interface cost US $65.8 and when factoring in the estimated wage, the total cost was US $95.4. In an estimated comparison, “LLM-assisted manual” and “LLM” methods were time- and cost-efficient compared to the “full manual” method. Conclusion Developing and facilitating prompts for LLM to derive clinical factors was efficient to extract crucial information from huge medical records. This study demonstrated the potential of the application of natural language processing using LLM model in breast cancer patients. Prompts from the current study can be re-used for other research to collect clinical information.",35d855c49334ef1b8f945f13e9bc84868dab55c9,Semantic Scholar,,somewhat relevant,"The paper discusses Prompt Injection (PI) attacks, which involve using prompts to misguide Large Language Models, connecting the study directly to prompt engineering." +1340,when do programofthoughts work for reasoning,"['Zhen Bi', 'Ningyu Zhang', 'Yinuo Jiang', 'Shumin Deng', 'Guozhou Zheng', 'Huajun Chen']",https://arxiv.org/pdf/2308.15452,2023-08-29,,"In the realm of embodied artificial intelligence, the reasoning capabilities of Large Language Models (LLMs) play a pivotal role. Although there are effective methods like program-of-thought prompting for LLMs which uses programming language to tackle complex reasoning tasks, the specific impact of code data on the improvement of reasoning capabilities remains under-explored. 
To address this gap, we propose complexity-impacted reasoning score (CIRS), which combines structural and logical attributes, to measure the correlation between code and reasoning abilities. Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity by considering the difficulty and the cyclomatic complexity. Through an empirical analysis, we find not all code data of complexity can be learned or understood by LLMs. Optimal level of complexity is critical to the improvement of reasoning abilities by program-aided prompting. Then we design an auto-synthesizing and stratifying algorithm, and apply it to instruction generation for mathematical reasoning and code data filtering for code generation tasks. Extensive results demonstrates the effectiveness of our proposed approach. Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.",412fe1f135cb20c952962133ca1e534a71bfd27f,Semantic Scholar,,highly relevant,"The paper discusses the design of prompts to improve the reasoning steps generated by Large Language Models, which is directly related to prompt engineering." +1341,sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation,"['Chen Dun', 'Mirian Hipolito Garcia', 'Guoqing Zheng', 'A. Awadallah', 'Anastasios Kyrillidis', 'Robert Sim']",https://arxiv.org/pdf/2310.02842,2023-10-04,,"Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new -- but often individual -- downstream tasks. Thus, how one would expand prompt tuning to handle -- concomitantly -- heterogeneous tasks and data distributions is a widely open question. To address this gap, we suggest the use of \emph{Mixture of Prompts}, or MoPs, associated with smart gating functionality: the latter -- whose design is one of the contributions of this paper -- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task. Additionally, MoPs are empirically agnostic to any model compression technique applied -- for efficiency reasons -- as well as instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training""interference""in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations. As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario.",45ee010607cad91728ae7fbad6cce3d805b93526,Semantic Scholar,,highly relevant,"The abstract explicitly mentions the use of 'engineered prompts' with a Large Language Model to simulate human participant responses, making this paper relevant to the topic of prompt engineering." 
+1342,prompt sapper llmempowered software engineering infrastructure for ainative services,"['Zhenchang Xing', 'Qing Huang', 'Yu Cheng', 'Liming Zhu', 'Qinghua Lu', 'Xiwei Xu']",http://arxiv.org/pdf/2306.02230,2023-06-04,,"Foundation models, such as GPT-4, DALL-E have brought unprecedented AI""operating system""effect and new forms of human-AI interaction, sparking a wave of innovation in AI-native services, where natural language prompts serve as executable""code""directly (prompt as executable code), eliminating the need for programming language as an intermediary and opening up the door to personal AI. Prompt Sapper has emerged in response, committed to support the development of AI-native services by AI chain engineering. It creates a large language model (LLM) empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence, unleashing the AI innovation potential of every individual, and forging a future where everyone can be a master of AI innovation. This article will introduce the R\&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.",486a8c8655b81c7f87ff257141466ec1186d4aea,Semantic Scholar,,highly relevant,"The paper describes using prompts in the form of QA to leverage LLMs for zero-shot task-oriented parsing, which is an example of utilizing hard prefix prompts." +1343,actiongpt leveraging largescale language models for improved and generalized zero shot action generation,"['Sai Shashank Kalakonda', 'Shubh Maheshwari', 'Ravi Kiran Sarvadevabhatla']",http://arxiv.org/pdf/2211.15603,,,"We introduce Action-GPT, a plug and play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. Our experiments show qualitative and quantitative improvement in the quality of synthesized motions produced by recent text-to-motion models. Code, pretrained models and sample videos will be made available at https://actiongpt.github.io .",488a27aacfebfef0071017bdc6407d7d515e2e2d,Semantic Scholar,,highly relevant,"The paper presents an approach called PBT-GPT, which involves prompting a large language model for property-based test generation, indicating relevance to prompt engineering techniques." +1344,human emotion knowledge representation emerges in large language model and supports discrete emotion inference,"['Ming Li', 'Yusheng Su', 'Hsiu-Yuan Huang', 'Jiali Cheng', 'Xin Hu', 'Xinmiao Zhang', 'Huadong Wang', 'Yujia Qin', 'Xiaozhi Wang', 'Zhi-Yun Liu', 'Dan Zhang']",https://arxiv.org/pdf/2302.09582,,,"How humans infer discrete emotions is a fundamental research question in the field of psychology. While conceptual knowledge about emotions (emotion knowledge) has been suggested to be essential for emotion inference, evidence to date is mostly indirect and inconclusive. As the large language models (LLMs) have been shown to support effective representations of various human conceptual knowledge, the present study further employed artificial neurons in LLMs to investigate the mechanism of human emotion inference. 
With artificial neurons activated by prompts, the LLM (RoBERTa) demonstrated a similar conceptual structure of 27 discrete emotions as that of human behaviors. Furthermore, the LLM-based conceptual structure revealed a human-like reliance on 14 underlying conceptual attributes of emotions for emotion inference. Most importantly, by manipulating attribute-specific neurons, we found that the corresponding LLM's emotion inference performance deteriorated, and the performance deterioration was correlated to the effectiveness of representations of the conceptual attributes on the human side. Our findings provide direct evidence for the emergence of emotion knowledge representation in large language models and suggest its casual support for discrete emotion inference. # These authors contributed equally: liming16@tsinghua.org.cn, yushengsu.thu@gmail.com * Corresponding authors: {liuzy, dzhang}@tsinghua.edu.cn The source code can be obtained from https://github.com/thunlp/Model_Emotion.",4a8fe7ecf225e5bada08642fcd77d3cbb322b967,Semantic Scholar,,highly relevant,"The abstract specifies a novel prompting framework called 'Deliberate then Generate' (DTG) for text generation, which directly relates to the design and use of prompts." +1345,what do llms know about financial markets a case study on reddit market sentiment analysis,"['Xiang Deng', 'Vasilisa Bashlovkina', 'Feng Han', 'Simon Baumgartner', 'Michael Bendersky']",http://arxiv.org/pdf/2212.11311,2022-12-21,,"Market sentiment analysis on social media content requires knowledge of both financial markets and social media jargon, which makes it a challenging task for human raters. The resulting lack of high-quality labeled data stands in the way of conventional supervised learning methods. Instead, we approach this problem using semi-supervised learning with a large language model (LLM). Our pipeline generates weak financial sentiment labels for Reddit posts with an LLM and then uses that data to train a small model that can be served in production. We find that prompting the LLM to produce Chain-of-Thought summaries and forcing it through several reasoning paths helps generate more stable and accurate labels, while using a regression loss further improves distillation quality. With only a handful of prompts, the final model performs on par with existing supervised models. Though production applications of our model are limited by ethical considerations, the model’s competitive performance points to the great potential of using LLMs for tasks that otherwise require skill-intensive annotation.",52136f813243ac3de8e277906112a41590a376d4,Semantic Scholar,,somewhat relevant,"The model mentioned in the paper is trained using positive and negative pairs sourced through prompting a LLM, which indicates that prompt engineering is an integral part of the research methodology." +1346,understanding the effectiveness of very large language models on dialog evaluation,"['Jessica Huynh', 'Cathy Jiao', 'Prakhar Gupta', 'Shikib Mehri', 'Payal Bajaj', 'Vishrav Chaudhary', 'M. Eskénazi']",http://arxiv.org/pdf/2301.12004,2023-01-27,,"Language models have steadily increased in size over the past few years. They achieve a high level of performance on various natural language processing (NLP) tasks such as question answering and summarization. Large language models (LLMs) have been used for generation and can now output human-like text. 
Due to this, there are other downstream tasks in the realm of dialog that can now harness the LLMs' language understanding capabilities. Dialog evaluation is one task that this paper will explore. It concentrates on prompting with LLMs: BLOOM, OPT, GPT-3, Flan-T5, InstructDial and TNLGv2. The paper shows that the choice of datasets used for training a model contributes to how well it performs on a task as well as on how the prompt should be structured. Specifically, the more diverse and relevant the group of datasets that a model is trained on, the better dialog evaluation performs. This paper also investigates how the number of examples in the prompt and the type of example selection used affect the model's performance.",5882dd04d95c9c88cdec389059fcf44d56cbb789,Semantic Scholar,,highly relevant,"The paper discusses a novel prompting method, Cue-CoT, which involves an intermediate reasoning step and compares its performance to standard prompting methods." +1347,planandsolve prompting improving zeroshot chainofthought reasoning by large language models,"['Lei Wang', 'Wanyu Xu', 'Yihuai Lan', 'Zhiqiang Hu', 'Yunshi Lan', 'R. Lee', 'Ee-Peng Lim']",http://arxiv.org/pdf/2305.04091,2023-05-06,,"Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, Few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual efforts, Zero-shot-CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan. To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting. We evaluate our proposed prompting strategy on ten datasets across three reasoning problems. The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.",62176de125738e3b95850d1227bac81fd646b78e,Semantic Scholar,,highly relevant,"The paper presents empirical work on using dynamic prompting to improve the performance of compressed language models, directly involving prompting strategies post-training." +1348,chainofthought prompting for responding to indepth dialogue questions with llm,"['Hongru Wang', 'Rui Wang', 'Fei Mi', 'Zezhong Wang', 'Rui-Lan Xu', 'Kam-Fai Wong']",http://arxiv.org/pdf/2305.11792,,,"The way and content in which users ask questions can provide insight into their current status, including their personality, emotions, and psychology. 
Instead of directly prompting the large language models (LLMs), we explore how chain-of-thought prompting helps in this scenario to perform reasoning and planning according to user status, aiming to provide a more personalized and engaging experience for the user query. To this end, we first construct a benchmark of 6 dialogue or question-answering datasets in both English and Chinese, covering 3 different aspects of user status (including personality, emotion, and psychology). Then we prompt the LLMs to generate the response regarding the user status as intermediate reasoning processing. We propose a novel demonstration selection strategy using the semantic similarity of intermediate reasoning instead of test queries. To evaluate the effectiveness and robustness of our approach, we conduct extensive experiments with 7 LLMs under zero-shot and one-shot settings. The experimental results show that our approach consistently outperforms standard prompting in terms of both helpfulness and acceptness across all datasets, regardless of the LLMs used. The code and dataset can be found at https://github.com/ruleGreen/Dialogue_CoT.git.",70916fbeb446ab7dc811ab74b193365d789bf1eb,Semantic Scholar,,somewhat relevant,"The paper discusses generating music from text prompts using diffusion models and specifies the use of text prompts to condition the model output, which aligns with prompt engineering." +1349,annollm making large language models to be better crowdsourced annotators,"['Xingwei He', 'Zheng-Wen Lin', 'Yeyun Gong', 'Alex Jin', 'Hang Zhang', 'Chen Lin', 'Jian Jiao', 'S. Yiu', 'Nan Duan', 'Weizhu Chen']",http://arxiv.org/pdf/2303.16854,2023-03-30,,"Many natural language processing (NLP) tasks rely on labeled data to train machine learning models to achieve high performance. However, data annotation can be a time-consuming and expensive process, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator by providing them with sufficient guidance and demonstrated examples. To make LLMs to be better annotators, we propose a two-step approach, 'explain-then-annotate'. To be more precise, we begin by creating prompts for every demonstrated example, which we subsequently utilize to prompt a LLM to provide an explanation for why the specific ground truth answer/label was chosen for that particular example. Following this, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data. We conduct experiments on three tasks, including user input and keyword relevance assessment, BoolQ and WiC. The annotation results from GPT-3.5 surpasses those from crowdsourced annotation for user input and keyword relevance assessment. Additionally, for the other two tasks, GPT-3.5 achieves results that are comparable to those obtained through crowdsourced annotation.",70da4fb798a86cbe8cad96c27ced0415885bbd9d,Semantic Scholar,,highly relevant,"The paper discusses the use of carefully designed prompting strategies to enhance LLMs' contextual faithfulness, which is directly related to prompt engineering." +1350,the student becomes the master outperforming gpt3 on scientific factual error correction,"['D.
Ashok', 'Atharva Kulkarni', 'Hai Pham', ""Barnab'as P'oczos""]",https://aclanthology.org/2023.findings-emnlp.451.pdf,2023-05-24,,"Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like scientific claims, where good verification models do not always exist. In this work, we introduce SciFix, a scientific claim correction system that does not require a verifier but can outperform existing methods by a considerable margin -- achieving correction accuracy of 84% on the SciFact dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method outperforms the very LLM that was used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5 achieving 58%, 61%, and 64% on the respective datasets, a consistently lower correction accuracy, despite using nearly 800 times as many parameters as our model.",716178841e169f5c02a1fd5da241825699501248,Semantic Scholar,,somewhat relevant,"The paper describes a method that includes the step of prompting a large language model to generate out-of-distribution examples, which is related to prompt engineering." +1351,enhancing small medical learners with privacypreserving contextual prompting,"['Xinlu Zhang', 'SHIYANG LI', 'Xianjun Yang', 'Chenxin Tian', 'Yao Qin', 'Linda Petzold']",http://arxiv.org/pdf/2305.12723,2023-05-22,,"Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.",74b94891f8f7ac8d73d9df817b6720e1cb792bcc,Semantic Scholar,,somewhat relevant,"The paper focuses on evaluating the capability of NLP tools in generating policy briefings given brief prompts, indicating relevance to the use of prompts in NLP applications." +1352,corrpus codebased structured prompting for neurosymbolic story understanding,"['Yi Dong', 'Lara J. 
Martin', 'Chris Callison-Burch']",https://aclanthology.org/2023.findings-acl.832.pdf,2022-12-21,,"Story generation and understanding -- as with all NLG/NLU tasks -- has seen a surge in neurosymbolic work. Researchers have recognized that, while large language models (LLMs) have tremendous utility, they can be augmented with symbolic means to be even better and to make up for any flaws that the neural networks might have. However, symbolic methods are extremely costly in terms of the amount of time and expertise needed to create them. In this work, we capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking the state of stories and aiding in story understanding. We show that our CoRRPUS system and abstracted prompting procedures can beat current state-of-the-art structured LLM techniques on pre-existing story understanding tasks (bAbI Task 2 and Re^3) with minimal hand engineering. We hope that this work can help highlight the importance of symbolic representations and specialized prompting for LLMs as these models require some guidance for performing reasoning tasks properly.",76f54657eb0893a0b203da57dcf0b4fffeebfc2c,Semantic Scholar,,highly relevant,"The paper focuses on Chain-of-Thought prompting, which is a technique relevant to prompt engineering as it explores how different prompts affect LLM performance." +1353,selfcheckgpt zeroresource blackbox hallucination detection for generative large language models,"['Potsawee Manakul', 'Adian Liusie', 'M. Gales']",https://arxiv.org/pdf/2303.08896,2023-03-16,,"Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to the output probability distribution (which may not be available for systems such as ChatGPT) or external databases that are interfaced via separate, often complex, modules. In this work, we propose""SelfCheckGPT"", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database. SelfCheckGPT leverages the simple idea that if an LLM has knowledge of a given concept, sampled responses are likely to be similar and contain consistent facts. However, for hallucinated facts, stochastically sampled responses are likely to diverge and contradict one another. We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset, and manually annotate the factuality of the generated passages. We demonstrate that SelfCheckGPT can: i) detect non-factual and factual sentences; and ii) rank passages in terms of factuality. We compare our approach to several baselines and show that our approach has considerably higher AUC-PR scores in sentence-level hallucination detection and higher correlation scores in passage-level factuality assessment compared to grey-box methods.",7c1707db9aafd209aa93db3251e7ebd593d55876,Semantic Scholar,,highly relevant,"The paper discusses prompt-tuning large language models using small labeled datasets, which relates to the topic of prompt engineering with hard prefix prompts." 
+1354,can large language models truly understand prompts a case study with negated prompts,"['Joel Jang', 'Seonghyeon Ye', 'Minjoon Seo']",http://arxiv.org/pdf/2209.12711,2022-09-26,,"Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, but instead shows an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT&GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap between the human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches of developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms",7ce0c89a452e3c2917b63847495533865697c79c,Semantic Scholar,,highly relevant,"The paper discusses the use of a hierarchical prompting approach to improve the performance of LLMs, indicating a focus on prompt engineering techniques." +1355,the student becomes the master matching gpt3 on scientific factual error correction,"['D. Ashok', 'Atharva Kulkarni', 'Hai Pham', 'B. Póczos']",https://arxiv.org/pdf/2305.14707,,,"Due to the prohibitively high cost of creating error correction datasets, most Factual Claim Correction methods rely on a powerful verification model to guide the correction process. This leads to a significant drop in performance in domains like Scientific Claim Correction, where good verification models do not always exist. In this work, we introduce a claim correction system that makes no domain assumptions and does not require a verifier but is able to outperform existing methods by an order of magnitude — achieving 94% correction accuracy on the SciFact dataset, and 62.5% on the SciFact-Open dataset, compared to the next best methods 0.5% and 1.50% respectively. Our method leverages the power of prompting with LLMs during training to create a richly annotated dataset that can be used for fully supervised training and regularization. We additionally use a claim-aware decoding procedure to improve the quality of corrected claims. Our method is competitive with the very LLM that was used to generate the annotated dataset — with GPT3.5 achieving 89.5% and 60% correction accuracy on SciFact and SciFact-Open, despite using 1250 times as many parameters as our model.",80ae1347b2dda02748f8f09da8a738121f5edfb5,Semantic Scholar,,somewhat relevant,"The paper mentions prompting a large language model with data for AI-aided brainstorming, which is related to the use of prompts and their engineering." +1356,more than you've asked for a comprehensive analysis of novel prompt injection threats to applicationintegrated large language models,"['Kai Greshake', 'Sahar Abdelnabi', 'Shailesh Mishra', 'C. Endres', 'Thorsten Holz', 'Mario Fritz']",http://arxiv.org/pdf/2302.12173,,,"We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs).
They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations",8fdd34153d1035d09dd4a6efa9cb0c91d23d0045,Semantic Scholar,,highly relevant,"The paper discusses Prompt Injection (PI) attacks, which involve using prompts to misguide Large Language Models, connecting the study directly to prompt engineering." +1357,exploring the path from instructions to rewards with large language models in instancebased learning,"['Chase McDonald', 'Tyler Malloy', 'Thuy Ngoc Nguyen', 'Cleotilde Gonzalez']",https://ojs.aaai.org/index.php/AAAI-SS/article/download/27697/27470,2024-01-22,,"A prominent method to model human learning is through experiential learning, where decisions are influenced by the outcomes observed in previous actions. The decisions-from-experience approach often excludes other forms of learning in humans, such as learning from descriptive information. In humans, descriptive information can enhance learning by providing a denser signal, achieved through understanding the relationship between intermediate decisions and their future outcomes, instead of relying solely on observed outcomes. To account for experiential and descriptive information, we propose the use of large language models (LLMs) to convert descriptive information into dense signals that can be used by computational models that learn from experience. Building on past work in cognitive modeling, we utilize task instructions and prompt an LLM to define and quantify the critical actions an agent must take to succeed in the task. In an initial experiment, we test this approach using an Instance-Based Learning cognitive model of experiential decisions in a gridworld task.
We demonstrate how the LLM can be prompted to provide a series of actions and relative values given the task instructions, then show how these values can be used in place of sparse outcome signals to improve the model’s learning of the task significantly.",92a55a027f77312492eaf379aadcf290d1094828,Semantic Scholar,,somewhat relevant,"The paper is somewhat relevant as it discusses the design of a new prompt learning mechanism for aspect extraction, although it seems to emphasize the overall framework for recommendations" +1358,concise and organized perception facilitates large language models for deductive reasoning,"['Shaotian Yan', 'Chen Shen', 'Junjie Liu', 'Jieping Ye']",https://arxiv.org/pdf/2310.03309,2023-10-05,,"Exploiting large language models (LLMs) to tackle deductive reasoning has garnered growing attention. It still remains highly challenging to achieve satisfactory results in complex deductive problems, characterized by plenty of premises (i.e., facts or rules) entailing intricate relationships among entities and requiring multi-hop reasoning. One intuitive solution is to decompose the original task into smaller sub-tasks, and then chain the multiple casual reasoning steps together in a forward (e.g., Selection-Inference) or backward (e.g., LAMBADA) direction. However, these techniques inevitably necessitate a large number of overall stages, leading to computationally expensive operations and a higher possibility of making misleading steps. In addition to stage-by-stage decomposition, we draw inspiration from another aspect of human problem-solving. Humans tend to distill the most relevant information and organize their thoughts systematically (e.g., creating mind maps), which assists them in answering questions or drawing conclusions precisely and quickly. In light of this, we propose a novel reasoning approach named Concise and Organized Perception (COP). COP carefully analyzes the given statements to efficiently identify the most pertinent information while eliminating redundancy. It then prompts the LLMs in a more organized form that adapts to the model's inference process. By perceiving concise and organized proofs, the deductive reasoning abilities of LLMs can be better elicited, and the risk of acquiring errors caused by excessive reasoning stages is mitigated. Furthermore, our approach can be combined with the aforementioned ones to further boost their performance. Extensive experimental results on three popular deductive benchmarks (i.e., ProofWriter, PrOntoQA and PrOntoQA-OOD) show that COP significantly outperforms previous state-of-the-art methods.",96e265e5de378f89a162981cd1c3eafa7b6f1d30,Semantic Scholar,,highly relevant,"The paper is highly relevant as it specifically investigates the use of in-context prompting, which falls under the umbrella of prompt engineering." +1359,breaking language barriers with a leap learning strategies for polyglot llms,"['A. Nambi', 'Vaibhav Balloli', 'M. Ranjit', 'T. Ganu', 'Kabir Ahuja', 'Sunayana Sitaram', 'Kalika Bali']",http://arxiv.org/pdf/2305.17740,2023-05-28,,"Large language models (LLMs) are at the forefront of transforming numerous domains globally. However, their inclusivity and effectiveness remain limited for non-Latin scripts and low-resource languages. This paper tackles the imperative challenge of enhancing the multilingual performance of LLMs, specifically focusing on Generative models. 
Through systematic investigation and evaluation of diverse languages using popular question-answering (QA) datasets, we present novel techniques that unlock the true potential of LLMs in a polyglot landscape. Our approach encompasses three key strategies that yield remarkable improvements in multilingual proficiency. First, by meticulously optimizing prompts tailored for polyglot LLMs, we unlock their latent capabilities, resulting in substantial performance boosts across languages. Second, we introduce a new hybrid approach that synergizes GPT generation with multilingual embeddings and achieves significant multilingual performance improvement on critical tasks like QA and retrieval. Finally, to further propel the performance of polyglot LLMs, we introduce a novel learning algorithm that dynamically selects the optimal prompt strategy, LLM model, and embeddings per query. This dynamic adaptation maximizes the efficacy of LLMs across languages, outperforming best static and random strategies. Our results show substantial advancements in multilingual understanding and generation across a diverse range of languages.",9b71c89686334ba4f1247aa18990740a94e25cc3,Semantic Scholar,,highly relevant,"The paper describes a zero-shot Dynamic Strategy Chain (DSC) prompting method which is used to generate mental health counseling strategies, indicating it focuses on post-training prompting techniques, therefore relevant to the topic of prompt engineering." +1360,boosting language models reasoning with chainofknowledge prompting,"['J. Wang', 'Qiushi Sun', 'Nuo Chen', 'Xiang Lorraine Li', 'Ming Gao']",https://arxiv.org/pdf/2306.06427,2023-06-10,,"Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks, which aims at designing a simple prompt like ``Let's think step by step'' or multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. However, the generated rationales often come with mistakes, making unfactual and unfaithful reasoning chains. To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structure triple. This is inspired by our human behaviors, i.e., we can draw a mind map or knowledge map as the reasoning evidence in the brain before answering a complex question. Benefiting from CoK, we additionally introduce a F^2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For the unreliable response, the wrong evidence can be indicated to prompt the LLM to rethink. Extensive experiments demonstrate that our method can further improve the performance of commonsense, factual, symbolic, and arithmetic reasoning tasks.",9efa81ec4954b0859c47dad8f42edfaf8bced69b,Semantic Scholar,,highly relevant,"The paper proposes a novel method called Graph Neural Prompting to assist pre-trained LLMs, which indicates that it is related to prompt engineering." +1361,zerotop zeroshot taskoriented semantic parsing using large language models,"['Dheeraj Mekala', 'J. Wolfe', 'Subhro Roy']",http://arxiv.org/pdf/2212.10815,2022-12-21,,"We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. 
Language models are generally trained on the publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.",b8d06dd769f89d08bdd9997d7bd363c89ede845b,Semantic Scholar,,somewhat relevant,"The paper introduces a benchmark for evaluating LLMs on Inductive Instructions and mentions the use of Self-Critique prompting, which is a prompt engineering technique; however, it does not specifically indicate whether these are 'hard prefix prompts.'" +1362,can large language models write good propertybased tests,"['Vasudev Vikram', 'Caroline Lemieux', 'Rohan Padhye']",https://arxiv.org/pdf/2307.04346,2023-07-10,,"Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for property-based tests. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we explore the potential of using LLMs to synthesize property-based tests. We call our approach PBT-GPT, and propose three different strategies of prompting the LLM for PBT. We characterize various failure modes of PBT-GPT and detail an evaluation methodology for automatically synthesized property-based tests. PBT-GPT achieves promising results in our preliminary studies on sample Python library APIs in $\texttt{numpy}$, $\texttt{networkx}$, and $\texttt{datetime}$.",c1996c3d4f289e613d4a44d04bb1c1c0fca80460,Semantic Scholar,,highly relevant,"The paper describes the development and application of a specific prompt designed to breakdown complex recipes, which demonstrates direct engagement with prompt engineering." +1363,large language models as batteriesincluded zeroshot esco skills matchers,"['Benjamin Clavié', ""Guillaume Souli'e""]",https://arxiv.org/pdf/2307.03539,2023-07-07,,"Understanding labour market dynamics requires accurately identifying the skills required for and possessed by the workforce. Automation techniques are increasingly being developed to support this effort. However, automatically extracting skills from job postings is challenging due to the vast number of existing skills. The ESCO (European Skills, Competences, Qualifications and Occupations) framework provides a useful reference, listing over 13,000 individual skills. 
However, skills extraction remains difficult and accurately matching job posts to the ESCO taxonomy is an open problem. In this work, we propose an end-to-end zero-shot system for skills extraction from job descriptions based on large language models (LLMs). We generate synthetic training data for the entirety of ESCO skills and train a classifier to extract skill mentions from job posts. We also employ a similarity retriever to generate skill candidates which are then re-ranked using a second LLM. Using synthetic data achieves an RP@10 score 10 points higher than previous distant supervision approaches. Adding GPT-4 re-ranking improves RP@10 by over 22 points over previous methods. We also show that Framing the task as mock programming when prompting the LLM can lead to better performance than natural language prompts, especially with weaker LLMs. We demonstrate the potential of integrating large language models at both ends of skills matching pipelines. Our approach requires no human annotations and achieve extremely promising results on skills extraction against ESCO.",c4f9f0cc8c138047a61bdb11b1a352e3d1aed035,Semantic Scholar,,highly relevant,"The paper introduces an approach using multimodal LLMs through prompting them with various types of prompts (image OCR, brief captions, and detailed descriptions) for fashion logo embedding, which is relevant to the topic of prompt engineering." +1364,deliberate then generate enhanced prompting framework for text generation,"['Bei Li', 'Rui Wang', 'Junliang Guo', 'Kaitao Song', 'Xuejiao Tan', 'Hany Hassan', 'Arul Menezes', 'Tong Xiao', 'Jiang Bian', 'Jingbo Zhu']",http://arxiv.org/pdf/2305.19835,2023-05-31,,"Large language models (LLMs) have shown remarkable success across a wide range of natural language generation tasks, where proper prompt designs make great impacts. While existing prompting methods are normally restricted to providing correct information, in this paper, we encourage the model to deliberate by proposing a novel Deliberate then Generate (DTG) prompting framework, which consists of error detection instructions and candidates that may contain errors. DTG is a simple yet effective technique that can be applied to various text generation tasks with minimal modifications. We conduct extensive experiments on 20+ datasets across 7 text generation tasks, including summarization, translation, dialogue, and more. We show that DTG consistently outperforms existing prompting methods and achieves state-of-the-art performance on multiple text generation tasks. We also provide in-depth analyses to reveal the underlying mechanisms of DTG, which may inspire future research on prompting for LLMs.",c85c90ef9e9a71efe031c3f7d6e34561f91168fe,Semantic Scholar,,highly relevant,"The paper discusses the effect of including explanations in the prompt for improving in-context learning in large language models, which is a form of prompt engineering." +1365,an empirical study on using large language models to analyze software supply chain security failures,"['Tanmay Singla', 'Dharun Anandayuvaraj', 'Kelechi G. Kalu', 'Taylor R. Schorlemmer', 'James C. Davis']",https://dl.acm.org/doi/pdf/10.1145/3605770.3625214,2023-08-09,,"As we increasingly depend on software systems, the consequences of breaches in the software supply chain become more severe. High-profile cyber attacks like SolarWinds and ShadowHammer have resulted in significant financial and data losses, underlining the need for stronger cybersecurity. 
One way to prevent future breaches is by studying past failures. However, traditional methods of analyzing past failures require manually reading and summarizing reports about them. Automated support could reduce costs and allow analysis of more failures. Natural Language Processing (NLP) techniques such as Large Language Models (LLMs) could be leveraged to assist the analysis of failures. In this study, we assessed the ability of Large Language Models (LLMs) to analyze historical software supply chain breaches. We used LLMs to replicate the manual analysis of 69 software supply chain security failures performed by members of the Cloud Native Computing Foundation (CNCF). We developed prompts for LLMs to categorize these by four dimensions: type of compromise, intent, nature, and impact. GPT 3.5's categorizations had an average accuracy of 68% and Bard's had an accuracy of 58% over these dimensions. We report that LLMs effectively characterize software supply chain failures when the source articles are detailed enough for consensus among manual analysts, but cannot yet replace human analysts. Future work can improve LLM performance in this context, and study a broader range of articles and failures.",c91f6eb320c70e2f64b6fb935494978a8699f06a,Semantic Scholar,,somewhat relevant,"The paper describes a method for generating prompts to bypass safety filters in text-to-image models, which indicates the use of prompt engineering techniques." +1366,actiongpt leveraging largescale language models for improved and generalized action generation,"['Sai Shashank Kalakonda', 'Shubh Maheshwari', 'Ravi Kiran Sarvadevabhatla']",https://arxiv.org/pdf/2211.15603,2022-11-28,,"We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. We introduce a generic approach compatible with stochastic (e.g. VAE-based) and deterministic (e.g. MotionCLIP) text-to-motion models. In addition, the approach enables multiple text descriptions to be utilized. Our experiments show (i) noticeable qualitative and quantitative improvement in the quality of synthesized motions, (ii) benefits of utilizing multiple LLM-generated descriptions, (iii) suitability of the prompt function, and (iv) zero-shot generation capabilities of the proposed approach. Code and pretrained models are available at https://actiongpt.github.io.",cb2954127a7fce8ab84486765392ce95dcdd8175,Semantic Scholar,,highly relevant,"The paper discusses improving information retrieval for the preparation of prompts for large language models, which is directly related to prompt engineering." +1367,retrieving texts based on abstract descriptions,"['Shauli Ravfogel', 'Valentina Pyatkin', 'Amir D. N. Cohen', 'Avshalom Manevich', 'Yoav Goldberg']",http://arxiv.org/pdf/2305.12517,2023-05-21,,"While instruction-tuned Large Language Models (LLMs) excel at extracting information from text, they are not suitable for locating texts conforming to a given description in a large document collection (semantic retrieval). 
Similarity search over embedding vectors does allow to perform retrieval by query, but the similarity reflected in the embedding is ill-defined and non-consistent, and is sub-optimal for many use cases. What, then, is a good query representation for effective retrieval? We identify the well defined and consistent task of retrieving sentences based on abstract descriptions of their content. We demonstrate the inadequacy of current text embeddings and propose an alternative model that significantly improves when used in standard nearest neighbor search. The model is trained using positive and negative pairs sourced through prompting a LLM. While it is easy to source the training material from an LLM, the retrieval task cannot be performed by the LLM directly. This demonstrates that data from LLMs can be used not only for distilling more efficient specialized models than the original LLM, but also for creating new capabilities not immediately possible using the original model.",d0aec52375fd60c7fe9542a153706665500517c7,Semantic Scholar,,somewhat relevant,"The abstract discusses the use of a prompting-based approach to generate natural language reasoning steps for QA, which implies a relevance to prompt engineering techniques." +1368,cuecot chainofthought prompting for responding to indepth dialogue questions with llms,"['Hongru Wang', 'Rui Wang', 'Fei Mi', 'Yang Deng', 'Zezhong Wang', 'Bin Liang', 'Ruifeng Xu', 'Kam-Fai Wong']",https://aclanthology.org/2023.findings-emnlp.806.pdf,2023-05-19,,"Large Language Models (LLMs), such as \texttt{ChatGPT}, greatly empower dialogue systems with strong language understanding and generation capabilities. However, most of the previous works prompt the LLMs to directly generate a response based on the dialogue context, overlooking the underlying linguistic cues about the user status exhibited in the context. Such in-depth dialogue scenarios are challenging for existing LLMs to figure out the user's hidden needs and respond satisfactorily through a single-step inference. To this end, we propose a novel linguistic cue-based chain-of-thoughts (\textit{Cue}-CoT), which enhances the LLMs inference with an intermediate reasoning step to find cues exhibited in the dialogue, aiming to provide a more personalized and engaging response. To evaluate the approach, we build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English, targeting 3 major linguistic cues during the conversation: \textit{personality}, \textit{emotion}, and \textit{psychology}. We conduct extensive experiments on the proposed benchmark with 5 LLMs under both zero-shot and one-shot settings. Empirical results demonstrate our proposed \textit{Cue}-CoT method outperforms standard prompting methods in terms of both \textit{helpfulness} and \textit{acceptability} on all datasets.",d0c69c309fbf1233b6351cd57484557c16f28427,Semantic Scholar,,somewhat relevant,"The paper discusses using an LLM to generate a declarative task specification for reasoning tasks, demonstrating an instance of prompt engineering, but it does not specify if the prompts are hard, nor prefix in nature." +1369,soft prompt tuning for augmenting dense retrieval with large language models,"['Zhiyuan Peng', 'Xuyang Wu', 'Yihan Fang']",https://arxiv.org/pdf/2307.08303,2023-07-17,,"Dense retrieval (DR) converts queries and documents into dense embeddings and measures the similarity between queries and documents in vector space. 
One of the challenges in DR is the lack of domain-specific training data. While DR models can learn from large-scale public datasets like MS MARCO through transfer learning, evidence shows that not all DR models and domains can benefit from transfer learning equally. Recently, some researchers have resorted to large language models (LLMs) to improve the zero-shot and few-shot DR models. However, the hard prompts or human-written prompts utilized in these works cannot guarantee the good quality of generated weak queries. To tackle this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task, we leverage soft prompt-tuning to optimize a task-specific soft prompt on limited ground truth data and then prompt the LLMs to tag unlabeled documents with weak queries, yielding enough weak document-query pairs to train task-specific dense retrievers. We design a filter to select high-quality example document-query pairs in the prompt to further improve the quality of weak tagged queries. To the best of our knowledge, there is no prior work utilizing soft prompt tuning to augment DR models. The experiments demonstrate that SPTAR outperforms the unsupervised baselines BM25 and the recently proposed LLMs-based augmentation method for DR.",d44031f253668c61ac6d68b95bbe9cac57730d51,Semantic Scholar,,somewhat relevant,"The paper explores the use of different types of prompts (diegetic and non-diegetic) with LLMs, which relates to how users engineer prompts to interact with and guide the model's generation which is relevant to prompt engineering." +1370,on the planning abilities of large language models a critical investigation,"['Karthik Valmeekam', 'Matthew Marquez', 'S. Sreedharan', 'Subbarao Kambhampati']",http://arxiv.org/pdf/2305.15771,2023-05-25,,"Intrigued by the claims of emergent reasoning capabilities in LLMs trained on general web corpora, in this paper, we set out to investigate their planning capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs as a source of heuristic guidance for other agents (AI planners) in their planning tasks. We conduct a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluate LLMs in two distinct modes: autonomous and heuristic. Our findings reveal that LLMs' ability to generate executable plans autonomously is rather limited, with the best model (GPT-4) having an average success rate of ~12% across the domains. However, the results in the heuristic mode show more promise. In the heuristic mode, we demonstrate that LLM-generated plans can improve the search process for underlying sound planners and additionally show that external verifiers can help provide feedback on the generated plans and back-prompt the LLM for better plan generation.",dedfe929d182cc3537a9ed765d589b4735ce062a,Semantic Scholar,,somewhat relevant,"The study uses augmented prompts optimized using a genetic algorithm, which suggests a focus on prompt engineering for classification tasks." +1371,conal anticipating outliers with large language models,"['Albert Xu', 'Xiang Ren', 'Robin Jia']",http://arxiv.org/pdf/2211.15718,,,"In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. 
Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on OOD examples. To remedy this overconfidence, we introduce Contrastive Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate relevant novel labels, then generate examples from each novel class matching the task format. Second, we train our classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CoNAL, classifiers improve in their ability to detect and abstain on OOD examples over prior methods by an average of 2.3% AUAC and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.1",19da40fd01c711fb2b3b0b19b3956b86b75f575d,Semantic Scholar,,highly relevant,"The abstract clearly mentions the use of prompt engineering to generate awareness messages, which indicates that the paper is focused on the application of prompt engineering techniques." +1372,scalable approach to medical wearable postmarket surveillance,"['Richard M. Yoo', 'BS BenT.Viggiano', 'Krishna N. Pundi', 'Jason A. Fries', 'Aydin Zahedivash MD Mba', 'MS TanyaPodchiyska', 'Natasha Din', 'Mbbs Mas', 'N. H. S. Mbbs']",https://www.medrxiv.org/content/medrxiv/early/2023/11/15/2023.11.14.23298488.full.pdf,2023-11-15,,"Objective We sought to develop a weak supervision-based approach to demonstrate feasibility of post-market surveillance of wearable devices that render AF pre-diagnosis. Materials and Methods Two approaches were evaluated to reduce clinical note labeling overhead for creating a training set for a classifier: one using programmatic codes, and the other using prompts to large language models (LLMs). Probabilistically labeled notes were then used to fine-tune a classifier, which identified patients with AF pre-diagnosis mentions in a note. A retrospective cohort study was conducted, where the baseline characteristics and subsequent care patterns of patients identified by the classifier were compared against those who did not receive pre-diagnosis. Results Label model derived from prompt-based labeling heuristics using LLMs (precision = 0.67, recall = 0.83, F1 = 0.74) nearly achieved the performance of code-based heuristics (precision = 0.84, recall = 0.72, F1 = 0.77), while cutting down the cost to create a labeled training set. The classifier learned on the labeled notes accurately identified patients with AF pre-diagnosis (precision = 0.85, recall = 0.81, F1 = 0.83). Those patients who received pre-diagnosis exhibited different demographic and comorbidity characteristics, and were enriched for anticoagulation and eventual diagnosis of AF. At the index diagnosis, existence of pre-diagnosis did not stratify patients on clinical characteristics, but did correlate with anticoagulant prescription. Discussion and Conclusion Our work establishes the feasibility of an EHR-based surveillance system for wearable devices that render AF pre-diagnosis. 
Further work is necessary to generalize these findings for patient populations at other sites.",216555443355ac615598a99d2949711726a1c36f,Semantic Scholar,,highly relevant,"The abstract mentions 'utilising different prompt engineering techniques' to improve performance, which indicates that the paper involves research directly related to prompt engineering." +1373,"the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis","['Mehrdad Safaei', 'Justin Longo']",https://dl.acm.org/doi/pdf/10.1145/3604570,2023-08-18,,"Policy advising in government centers on the analysis of public problems and the developing of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information in content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing public policy relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP generated; human generated; and NLP generated / human edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically-generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically-generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis.",22b39e38e2fd52591ca23904b474eb19dc17b610,Semantic Scholar,,highly relevant,"The paper discusses the use of prompt modifiers in the context of text-based generative art, which is a clear example of prompt engineering, even if not specified as 'hard prefix' prompting." +1374,xparade crosslingual textual entailment and information divergence across paragraphs,"['Juan Diego Rodriguez', 'Katrin Erk', 'Greg Durrett']",https://arxiv.org/pdf/2309.08873,2023-09-16,,"Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking. This problem becomes more complex when those two pieces of text are in different languages. Here, we introduce X-PARADE (Cross-lingual Paragraph-level Analysis of Divergences and Entailments), the first cross-lingual dataset of paragraph-level information divergences. 
Annotators label a paragraph in a target language at the span level and evaluate it with respect to a corresponding paragraph in a source language, indicating whether a given piece of information is the same, new, or new but can be inferred. This last notion establishes a link with cross-language NLI. Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild. Armed with our dataset, we investigate a diverse set of approaches for this problem, including classic token alignment from machine translation, textual entailment methods that localize their decisions, and prompting of large language models. Our results show that these methods vary in their capability to handle inferable information, but they all fall short of human performance.",300b01dc726fe8acbededd805501811d427920bd,Semantic Scholar,,highly relevant,The paper explicitly mentions the use of 'prompt engineering' in the context of Visual-WSD with techniques that compare 'Simple prompt-based' methods and 'Generated prompt-based' methods. +1375,stress testing chainofthought prompting for large language models,"['Aayush Mishra', 'Karan Thakkar']",https://arxiv.org/pdf/2309.16621,2023-09-28,,"This report examines the effectiveness of Chain-of-Thought (CoT) prompting in improving the multi-step reasoning abilities of large language models (LLMs). Inspired by previous studies \cite{Min2022RethinkingWork}, we analyze the impact of three types of CoT prompt perturbations, namely CoT order, CoT values, and CoT operators on the performance of GPT-3 on various tasks. Our findings show that incorrect CoT prompting leads to poor performance on accuracy metrics. Correct values in the CoT is crucial for predicting correct answers. Moreover, incorrect demonstrations, where the CoT operators or the CoT order are wrong, do not affect the performance as drastically when compared to the value based perturbations. This research deepens our understanding of CoT prompting and opens some new questions regarding the capability of LLMs to learn reasoning in context.",31ae42394959fb1a336886379a5527bec5c9c9c4,Semantic Scholar,,highly relevant,"The paper describes using a 'bespoke, zero-shot prompt' with GPT-4 for extracting information from ecological literature, indicating that prompt engineering is a significant focus of the research." +1376,towards agile text classifiers for everyone,"['Maximilian Mozes', 'Jessica Hoffmann', 'K. Tomanek', 'Muhamed Kouate', 'Nithum Thain', 'Ann Yuan', 'Tolga Bolukbasi', 'Lucas Dixon']",http://arxiv.org/pdf/2302.06541,2023-02-13,,"Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve from iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with 7 datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. 
Instead of collecting millions of examples to attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets, created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted in the time-span of a day.",335303a513e376b120212337c154cb91fa3689db,Semantic Scholar,,highly relevant,"The abstract mentions the use of different prompting approaches including zero-shot and few-shot to evaluate GPT-4, relevant to prompt engineering techniques." +1377,interacting with large language models a case study on aiaided brainstorming for guesstimation problems,"['Vildan Salikutluk', 'Dorothea Koert', 'F. Jäkel']",https://ebooks.iospress.nl/pdf/doi/10.3233/FAIA230081,,,". Designing cooperative AI-systems that do not automate tasks but rather aid human cognition is challenging and requires human-centered design approaches. Here, we introduce AI-aided brainstorming for solving guesstimation problems, i.e. estimating quantities from incomplete information, as a testbed for human-AI interaction with large language models (LLMs). In a think-aloud study, we found that humans decompose guesstimation questions into sub-questions and often replace them with semantically related ones. If they fail to brainstorm related questions, they often get stuck and do not find a solution. Therefore, to support this brainstorming process, we prompted a large language model (GPT-3) with successful replacements from our think-aloud data. In follow-up studies, we tested whether the availability of this tool improves participants’ answers. While the tool successfully produced human-like suggestions, participants were reluctant to use it. From our findings, we conclude that for human-AI interaction with LLMs to be successful AI-systems must complement rather than mimic a user’s associations.",4f9e7eb2f009e30f15eca18f4e540915b637b603,Semantic Scholar,,highly relevant,"The paper clearly focuses on a prompt-based method for few-shot learning, which directly relates to prompt engineering within the scope of natural language processing." +1378,multiscript multimodal script learning for supporting open domain everyday tasks,"['Jingyuan Qi', 'Minqian Liu', 'Ying Shen', 'Zhiyang Xu', 'Lifu Huang']",https://arxiv.org/pdf/2310.04965,2023-10-08,,"Automatically generating scripts (i.e. sequences of key steps described in text) from video demonstrations and reasoning about the subsequent steps are crucial to the modern AI virtual assistants to guide humans to complete everyday tasks, especially unfamiliar ones. However, current methods for generative script learning rely heavily on well-structured preceding steps described in text and/or images or are limited to a certain domain, resulting in a disparity with real-world user scenarios. To address these limitations, we present a new benchmark challenge -- MultiScript, with two new tasks on task-oriented multimodal script learning: (1) multimodal script generation, and (2) subsequent step prediction. For both tasks, the input consists of a target task name and a video illustrating what has been done to complete the target task, and the expected output is (1) a sequence of structured step descriptions in text based on the demonstration video, and (2) a single text description for the subsequent step, respectively. Built from WikiHow, MultiScript covers multimodal scripts in videos and text descriptions for over 6,655 human everyday tasks across 19 diverse domains. 
To establish baseline performance on MultiScript, we propose two knowledge-guided multimodal generative frameworks that incorporate the task-related knowledge prompted from large language models such as Vicuna. Experimental results show that our proposed approaches significantly improve over the competitive baselines.",5ece96203cd1dc9ff3f99867faa451939d86d545,Semantic Scholar,,highly relevant,"The abstract discusses the importance of designing prompts to utilize ChatGPT's capabilities effectively, which relates directly to the topic of prompt engineering." +1379,development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews,"['Y. Kataoka', 'R. So', 'M. Banno', 'J. Kumasawa', 'H. Someko', 'S. Taito', 'T. Terasawa', 'Y. Tsujimoto', 'Y. Tsutsumi', 'Y. Wada', 'T. A. Furukawa']",https://www.medrxiv.org/content/medrxiv/early/2023/10/31/2023.10.31.23297818.full.pdf,2023-11-01,,"Systematic reviews (SRs) are a critical component of evidence-based medicine, but the process of screening titles and abstracts is time-consuming. This study aimed to develop and externally validate a method using large language models to classify abstracts for diagnostic test accuracy (DTA) systematic reviews, thereby reducing the human workload. We used a previously collected dataset for developing DTA abstract classifiers and applied prompt engineering. We developed an optimized meta-prompt for Generative Pre-trained Transformer (GPT)-3.5-turbo and GPT-4 to classify abstracts. In the external validation dataset 1, the prompt with GPT-3.5 turbo showed a sensitivity of 0.988, and a specificity of 0.298. GPT-4 showed a sensitivity of 0.982, and a specificity of 0.677. In the external validation dataset 2, GPT-3.5 turbo showed a sensitivity of 0.919, and a specificity of 0.434. GPT-4 showed a sensitivity of 0.806, and a specificity of 0.740. If we included eligible studies from among the references of the identified studies, GPT-3.5 turbo had no critical misses, while GPT-4 had some misses. Our study indicates that GPT-3.5 turbo can be effectively used to classify abstracts for DTA systematic reviews. Further studies using other dataset are warranted to confirm our results. Additionally, we encourage the use of our framework and publicly available dataset for further exploration of more effective classifiers using other LLMs and prompts (https://github.com/youkiti/ARE/).",6384921f1bd1059c6b4c37ac3c4e4f19e45d40c1,Semantic Scholar,,highly relevant,"The paper is focused on refining prompts automatically using logical constraints for LLMs in drug discovery, which is closely related to prompt engineering." +1380,the art of socratic questioning recursive thinking with large language models,"['Jingyuan Qi', 'Zhiyang Xu', 'Ying Shen', 'Minqian Liu', 'dingnan jin', 'Qifan Wang', 'Lifu Huang']",https://aclanthology.org/2023.emnlp-main.255.pdf,2023-05-24,,"Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate steps. However, confined by its inherent single-pass and sequential generation process, CoT heavily relies on the initial decisions, causing errors in early steps to accumulate and impact the final answers. In contrast, humans adopt recursive thinking when tackling complex reasoning problems, i.e., iteratively breaking the original problem into approachable sub-problems and aggregating their answers to resolve the original one. 
Inspired by the human cognitive process, we propose SOCRATIC QUESTIONING, a divide-and-conquer style algorithm that mimics the recursive thinking process. Specifically, SOCRATIC QUESTIONING leverages large language models to raise and answer sub-questions until collecting enough information to tackle the original question. Unlike CoT, SOCRATIC QUESTIONING explicitly navigates the thinking space, stimulates effective recursive thinking, and is more robust towards errors in the thinking process. Extensive experiments on several complex reasoning tasks, including MMLU, MATH, LogiQA, and visual question-answering demonstrate significant performance improvements over the state-of-the-art prompting methods, such as CoT, and Tree-of-Thought. The qualitative analysis clearly shows that the intermediate reasoning steps elicited by SOCRATIC QUESTIONING are similar to humans' recursively thinking process of complex reasoning problems.",69335077fcacbff7a7cf25697da1949e6bdfa968,Semantic Scholar,,somewhat relevant,"The abstract mentions 'prompts engineering' in the context of interactive learning, which suggests relevance to the concept of prompt engineering." +1381,langrasp using large language models for semantic object grasping,"['Reihaneh Mirjalili', 'Michael Krawez', 'Simone Silenzi', 'Yannik Blei', 'Wolfram Burgard']",https://arxiv.org/pdf/2310.05239,2023-10-08,,"In this paper, we propose LAN-grasp, a novel approach towards more appropriate semantic grasping. We use foundation models to provide the robot with a deeper understanding of the objects, the right place to grasp an object, or even the parts to avoid. This allows our robot to grasp and utilize objects in a more meaningful and safe manner. We leverage the combination of a Large Language Model, a Vision Language Model, and a traditional grasp planner to generate grasps demonstrating a deeper semantic understanding of the objects. We first prompt the Large Language Model about which object part is appropriate for grasping. Next, the Vision Language Model identifies the corresponding part in the object image. Finally, we generate grasp proposals in the region proposed by the Vision Language Model. Building on foundation models provides us with a zero-shot grasp method that can handle a wide range of objects without the need for further training or fine-tuning. We evaluated our method in real-world experiments on a custom object data set. We present the results of a survey that asks the participants to choose an object part appropriate for grasping. The results show that the grasps generated by our method are consistently ranked higher by the participants than those generated by a conventional grasping planner and a recent semantic grasping approach.",894b2fe365642d350e0d688c33ba65124b1c2979,Semantic Scholar,,highly relevant,"The paper focuses on the practice of prompt engineering within the context of text-based generative art, thereby making it relevant to the topic of prompt engineering." +1382,prompt tuning large language models on personalized aspect extraction for recommendations,"['Pan Li', 'Yuyan Wang', 'Ed H. Chi', 'Minmin Chen']",http://arxiv.org/pdf/2306.01475,2023-06-02,,"Existing aspect extraction methods mostly rely on explicit or ground truth aspect information, or using data mining or machine learning approaches to extract aspects from implicit user feedback such as user reviews. It however remains under-explored how the extracted aspects can help generate more meaningful recommendations to the users. 
Meanwhile, existing research on aspect-based recommendations often relies on separate aspect extraction models or assumes the aspects are given, without accounting for the fact the optimal set of aspects could be dependent on the recommendation task at hand. In this work, we propose to combine aspect extraction together with aspect-based recommendations in an end-to-end manner, achieving the two goals together in a single framework. For the aspect extraction component, we leverage the recent advances in large language models and design a new prompt learning mechanism to generate aspects for the end recommendation task. For the aspect-based recommendation component, the extracted aspects are concatenated with the usual user and item features used by the recommendation model. The recommendation task mediates the learning of the user embeddings and item embeddings, which are used as soft prompts to generate aspects. Therefore, the extracted aspects are personalized and contextualized by the recommendation task. We showcase the effectiveness of our proposed method through extensive experiments on three industrial datasets, where our proposed framework significantly outperforms state-of-the-art baselines in both the personalized aspect extraction and aspect-based recommendation tasks. In particular, we demonstrate that it is necessary and beneficial to combine the learning of aspect extraction and aspect-based recommendation together. We also conduct extensive ablation studies to understand the contribution of each design component in our framework.",8a4320fd903677a3ea2bf606a6537b59885b1108,Semantic Scholar,,somewhat relevant,"The methodology involved a simple, standardized prompting strategy, which implies relevance to prompt engineering, despite not exploring more advanced techniques." +1383,automatic chain of thought prompting in large language models,"['Zhuosheng Zhang', 'Aston Zhang', 'Mu Li', 'Alexander J. Smola']",http://arxiv.org/pdf/2210.03493,2022-10-07,,"Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like""Let's think step by step""to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the""Let's think step by step""prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. 
Code is available at https://github.com/amazon-research/auto-cot",90350aa626bed47b02d0c162462e5b0ca82be6b2,Semantic Scholar,,somewhat relevant,"The paper discusses the role of prompt engineering in conjunction with a cognitive-agent approach for robotic task learning, indicating relevance to prompting techniques." +1384,harnessing the power of adversarial prompting and large language models for robust hypothesis generation in astronomy,"['I. Ciucă', 'Y. Ting', 'S. Kruk', 'K. Iyer']",http://arxiv.org/pdf/2306.11648,2023-06-20,,"This study investigates the application of Large Language Models (LLMs), specifically GPT-4, within Astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domain-specific literature. Our findings point towards a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.",91099bbb96133c70db091041900ecff502a5e3a8,Semantic Scholar,,somewhat relevant,"The paper discusses the use of prompt engineering for generating synthetic datasets using Stable Diffusion, which is relevant to the topic of hard prefix prompts in the context of image generation." +1385,dynamic strategy chain dynamic zeroshot cot for long mental health support generation,"['Qi Chen', 'Dexi Liu']",https://arxiv.org/pdf/2308.10444,2023-08-21,,"Long counseling Text Generation for Mental health support (LTGM), an innovative and challenging task, aims to provide help-seekers with mental health support through a comprehensive and more acceptable response. The combination of chain-of-thought (CoT) prompting and Large Language Models (LLMs) is employed and get the SOTA performance on various NLP tasks, especially on text generation tasks. Zero-shot CoT prompting is one of the most common methods in CoT prompting. However, in the LTGM task, Zero-shot CoT prompting can not simulate a counselor or provide personalized strategies without effective mental health counseling strategy prompts. To tackle this challenge, we propose a zero-shot Dynamic Strategy Chain (DSC) prompting method. Firstly, we utilize GPT2 to learn the responses written by mental health counselors and dynamically generate mental health counseling strategies tailored to the help-seekers' needs. Secondly, the Zero-shot DSC prompting is constructed according to mental health counseling strategies and the help-seekers' post. Finally, the Zero-shot DSC prompting is employed to guide LLMs in generating more human-like responses for the help-seekers. Both automatic and manual evaluations demonstrate that Zero-shot DSC prompting can deliver more human-like responses than CoT prompting methods on LTGM tasks.",96599abdbac3106b89f3d8dd3b26fe9c38a7624f,Semantic Scholar,,somewhat relevant,"The abstract discusses zero-shot and few-shot learning with pretrained language models, which implies the use of prompting, but does not specifically mention hard prefix prompting or prompt engineering." 
+1386,spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization,"['Yu-Neng Chuang', 'Ruixiang Tang', 'Xiaoqian Jiang', 'Xia Hu']",https://arxiv.org/pdf/2303.13035,,,"Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes. These records are crucial for enabling healthcare providers to make well-informed decisions regarding patient care. Summarizing clinical notes further assists healthcare professionals in pinpointing potential health risks and making better-informed decisions. This process contributes to reducing errors and enhancing patient outcomes by ensuring providers have access to the most pertinent and current patient data. Recent research has shown that incorporating prompts with large language models (LLMs) substantially boosts the efficacy of summarization tasks. However, we show that this approach also leads to increased output variance, resulting in notably divergent outputs even when prompts share similar meanings. To tackle this challenge, we introduce a model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization. Experimental findings on multiple clinical note tasks and LLMs indicate that our method not only bolsters performance but also effectively curbs variance for various LLMs, providing a more uniform and dependable solution for summarizing vital medical information.",b378e54c88d241aa917131beb65c96be3730f40c,Semantic Scholar,,somewhat relevant,"The paper discusses using a 'visual-prompt-tuned foundational model' for mapping underwater vegetation, which implies use of a form of prompt without specifying hard prefix prompts." +1387,cotever chain of thought prompting annotation toolkit for explanation verification,"['Seungone Kim', 'Se June Joo', 'Yul Jang', 'Hyungjoo Chae', 'Jinyoung Yeo']",http://arxiv.org/pdf/2303.03628,2023-03-07,,"Chain-of-thought (CoT) prompting enables large language models (LLMs) to solve complex reasoning tasks by generating an explanation before the final prediction. Despite it’s promising ability, a critical downside of CoT prompting is that the performance is greatly affected by the factuality of the generated explanation. To improve the correctness of the explanations, fine-tuning language models with explanation data is needed. However, there exists only a few datasets that can be used for such approaches, and no data collection tool for building them. Thus, we introduce CoTEVer, a tool-kit for annotating the factual correctness of generated explanations and collecting revision data of wrong explanations. Furthermore, we suggest several use cases where the data collected with CoTEVer can be utilized for enhancing the faithfulness of explanations. Our toolkit is publicly available at https://github.com/SeungoneKim/CoTEVer.",b9d75f361b5310c6ddcddfe7858bb0416eb78de4,Semantic Scholar,,highly relevant,The abstract explicitly mentions the use of an adaptive prompt-based learning method for few-shot sentiment analysis. +1388,embedding democratic values into social media ais via societal objective functions,"['Chenyan Jia', 'Michelle S. Lam', 'Minh Chau Mai', 'Jeffrey T. Hancock', 'Michael S. 
Bernstein']",https://arxiv.org/pdf/2307.13912,2023-07-26,,"Can we design artificial intelligence (AI) systems that rank our social media feeds to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models, however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that the feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.",c4561fd08636b5f5f6b9f3f6d89f3cee39e678b0,Semantic Scholar,,highly relevant,"The paper demonstrates the use of prompt-based methods for sentiment analysis, which directly relates to the topic of prompt engineering." +1389,large language models as sous chefs revising recipes with gpt3,"['Alyssa Hwang', 'B. Li', 'Zhaoyi Hou', 'D. Roth']",http://arxiv.org/pdf/2306.13986,2023-06-24,,"With their remarkably improved text generation and prompting capabilities, large language models can adapt existing written information into forms that are easier to use and understand. In our work, we focus on recipes as an example of complex, diverse, and widely used instructions. We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps. We apply this prompt to recipes from various world cuisines, and experiment with several large language models (LLMs), finding best results with GPT-3.5. We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions. We find that annotators usually prefer the revision over the original, demonstrating a promising application of LLMs in serving as digital sous chefs for recipes and beyond. We release our prompt, code, and MTurk template for public use.",ca60126b2b534a3f1cd8007ba84fdbd163968770,Semantic Scholar,,highly relevant,"The paper mentions a 'Prompt based method' as one of its approaches to fact verification, which directly indicates relevance to prompt engineering." 
+1390,fashionlogo prompting multimodal large language models for fashion logo embeddings,"['Yulin Su', 'Min Yang', 'Minghui Qiu', 'Jing Wang', 'Tao Wang']",https://arxiv.org/pdf/2308.09012,2023-08-17,,"Logo embedding plays a crucial role in various e-commerce applications by facilitating image retrieval or recognition, such as intellectual property protection and product search. However, current methods treat logo embedding as a purely visual problem, which may limit their performance in real-world scenarios. A notable issue is that the textual knowledge embedded in logo images has not been adequately explored. Therefore, we propose a novel approach that leverages textual knowledge as an auxiliary to improve the robustness of logo embedding. The emerging Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in both visual and textual understanding and could become valuable visual assistants in understanding logo images. Inspired by this observation, our proposed method, FashionLOGO, aims to utilize MLLMs to enhance fashion logo embedding. We explore how MLLMs can improve logo embedding by prompting them to generate explicit textual knowledge through three types of prompts, including image OCR, brief captions, and detailed descriptions prompts, in a zero-shot setting. We adopt a cross-attention transformer to enable image embedding queries to learn supplementary knowledge from textual embeddings automatically. To reduce computational costs, we only use the image embedding model in the inference stage, similar to traditional inference pipelines. Our extensive experiments on three real-world datasets demonstrate that FashionLOGO learns generalized and robust logo embeddings, achieving state-of-the-art performance in all benchmark datasets. Furthermore, we conduct comprehensive ablation studies to demonstrate the performance improvements resulting from the introduction of MLLMs.",d53945d4afb4528590d79e20de52883d29037e86,Semantic Scholar,,highly relevant,"The paper proposes a prompt-based Chinese text classification framework and includes discussion of prompt-based fine-tuning, automating prompt generation, and using prompts in a few-shot learning context." +1391,systematic rectification of language models via deadend analysis,"['Mengyao Cao', 'Mehdi Fatemi', 'J. Cheung', 'S. Shabanian']",http://arxiv.org/pdf/2302.14003,2023-02-27,,"With adversarial or otherwise normal prompts, existing large language models (LLM) can be pushed to generate toxic discourses. One way to reduce the risk of LLMs generating undesired discourses is to alter the training of the LLM. This can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited as they dismiss future tokens and the overall meaning of the complete discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic. That is, at each point, we advise against token selections proportional to how likely a finished text from this point will be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. 
Importantly, our method does not require access to the internal representations of the LLM, but only the token probability distribution at each decoding step. This is crucial as many LLMs today are hosted in servers and only accessible through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques in terms of both the overall language and detoxification performance.",da5fcb26c830663b79c9aa1c550ae62e7725fcad,Semantic Scholar,,highly relevant,"The paper explicitly mentions the use of prompts to steer models' generation, which aligns with the topic of prompt engineering." +1392,the unreliability of explanations in fewshot incontext learning,"['Xi Ye', 'Greg Durrett']",http://arxiv.org/pdf/2205.03401,,,"How can prompting a large language model like GPT-3 with explanations improve in-context learning? We focus specifically on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. Including explanations in the prompt and having the model generate them does not consistently improve performance in the settings we study, contrary to recent results on symbolic reasoning tasks (Nye et al., 2021; Wei et al., 2022). Despite careful prompting, explanations generated by GPT-3 may not even be factually grounded in the input, even on simple tasks with straightforward extractive explanations. However, these flawed explanations can still be useful as a way to verify GPT-3’s predictions post-hoc. Through analysis in three settings, we show that explanations judged as good by humans—those that are logically consistent with the input and the prediction—usually indicate more accurate predictions. Following these observations, we present a framework for calibrating model predictions based on the reliability of the explanations. Our framework trains calibrators using automatically extracted scores that approximately assess the reliability of explanations, which helps improve performance across three different datasets",de04aa282f8055cebe86966c592bf37af6aecc99,Semantic Scholar,,somewhat relevant,"The abstract mentions experimenting with various neural models using 'few-shot prompting', which implies the use of prompt engineering techniques within their methodology." +1393,surrogateprompt bypassing the safety filter of texttoimage models via substitution,"['Zhongjie Ba', 'Jieming Zhong', 'Jiachen Lei', 'Pengyu Cheng', 'Qinglong Wang', 'Zhan Qin', 'Zhibo Wang', 'Kui Ren']",https://arxiv.org/pdf/2309.14122,2023-09-25,,"Advanced text-to-image models such as DALL-E 2 and Midjourney possess the capacity to generate highly realistic images, raising significant concerns regarding the potential proliferation of unsafe content. This includes adult, violent, or deceptive imagery of political figures. Despite claims of rigorous safety mechanisms implemented in these models to restrict the generation of not-safe-for-work (NSFW) content, we successfully devise and exhibit the first prompt attacks on Midjourney, resulting in the production of abundant photorealistic NSFW images. We reveal the fundamental principles of such prompt attacks and suggest strategically substituting high-risk sections within a suspect prompt to evade closed-source safety measures. Our novel framework, SurrogatePrompt, systematically generates attack prompts, utilizing large language models, image-to-text, and image-to-image modules to automate attack prompt creation at scale. 
Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios. Both subjective and objective assessments validate that the images generated from our attack prompts present considerable safety hazards.",e1decb86f2a6aba8682d2fc4e427424b0b49e0d0,Semantic Scholar,,highly relevant,"The paper discusses the improvement of 'promptability' through continued pretraining and evaluates methods to optimize the usage of prompts in zero- and few-shot settings, which is directly related to the topic of prompt engineering." +1394,augmented embeddings for custom retrievals,"['Anirudh Khatry', 'Yasharth Bajpai', 'Priyanshu Gupta', 'Sumit Gulwani', 'Ashish Tiwari']",https://arxiv.org/pdf/2310.05380,2023-10-09,,"Information retrieval involves selecting artifacts from a corpus that are most relevant to a given search query. The flavor of retrieval typically used in classical applications can be termed as homogeneous and relaxed, where queries and corpus elements are both natural language (NL) utterances (homogeneous) and the goal is to pick most relevant elements from the corpus in the Top-K, where K is large, such as 10, 25, 50 or even 100 (relaxed). Recently, retrieval is being used extensively in preparing prompts for large language models (LLMs) to enable LLMs to perform targeted tasks. These new applications of retrieval are often heterogeneous and strict -- the queries and the corpus contain different kinds of entities, such as NL and code, and there is a need for improving retrieval at Top-K for small values of K, such as K=1 or 3 or 5. Current dense retrieval techniques based on pretrained embeddings provide a general-purpose and powerful approach for retrieval, but they are oblivious to task-specific notions of similarity of heterogeneous artifacts. We introduce Adapted Dense Retrieval, a mechanism to transform embeddings to enable improved task-specific, heterogeneous and strict retrieval. Adapted Dense Retrieval works by learning a low-rank residual adaptation of the pretrained black-box embedding. We empirically validate our approach by showing improvements over the state-of-the-art general-purpose embeddings-based baseline.",e4c466cf3df4887e0121561be90e0bac78d3e1cb,Semantic Scholar,,somewhat relevant,"The paper focuses on a defense method against adversarial attacks using a text-guided diffusion model and mentions the use of a few-shot prompt-tuning algorithm, indicating some relevance to prompt engineering." +1395,"tryage realtime, intelligent routing of user prompts to large language models","['S. Hari', 'Matt Thomson']",https://arxiv.org/pdf/2308.11601,2023-08-22,,"The introduction of the transformer architecture and the self-attention mechanism has led to an explosive production of language models trained on specific downstream tasks and data domains. With over 200, 000 models in the Hugging Face ecosystem, users grapple with selecting and optimizing models to suit multifaceted workflows and data domains while addressing computational, security, and recency concerns. There is an urgent need for machine learning frameworks that can eliminate the burden of model selection and customization and unleash the incredible power of the vast emerging model library for end users. 
Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library based on analysis of individual input prompts. Inspired by the thalamic router in the brain, Tryage employs a perceptive router to predict down-stream model performance on prompts and, then, makes a routing decision using an objective function that integrates performance predictions with user goals and constraints that are incorporated through flags (e.g., model size, model recency). Tryage allows users to explore a Pareto front and automatically trade-off between task accuracy and secondary goals including minimization of model size, recency, security, verbosity, and readability. Across heterogeneous data sets that include code, text, clinical data, and patents, the Tryage framework surpasses Gorilla and GPT3.5 turbo in dynamic model selection identifying the optimal model with an accuracy of 50.9% , compared to 23.6% by GPT 3.5 Turbo and 10.8% by Gorilla. Conceptually, Tryage demonstrates how routing models can be applied to program and control the behavior of multi-model LLM systems to maximize efficient use of the expanding and evolving language model ecosystem.",ee025d7030d4767062af2bcd32a4d586737d30bf,Semantic Scholar,,highly relevant,The abstract indicates the paper discusses prompt engineering by testing GPT-3.5 and 4 with different prompt types and formats for data-to-text generation in under-resourced languages. +1396,distractor generation for multiplechoice questions with predictive prompting and large language models,"['Semere Kiros Bitew', 'Johannes Deleu', 'Chris Develder', 'Thomas Demeester']",https://arxiv.org/pdf/2307.16338,2023-07-30,,"Large Language Models (LLMs) such as ChatGPT have demonstrated remarkable performance across various tasks and have garnered significant attention from both researchers and practitioners. However, in an educational context, we still observe a performance gap in generating distractors -- i.e., plausible yet incorrect answers -- with LLMs for multiple-choice questions (MCQs). In this study, we propose a strategy for guiding LLMs such as ChatGPT, in generating relevant distractors by prompting them with question items automatically retrieved from a question bank as well-chosen in-context examples. We evaluate our LLM-based solutions using a quantitative assessment on an existing test set, as well as through quality annotations by human experts, i.e., teachers. We found that on average 53% of the generated distractors presented to the teachers were rated as high-quality, i.e., suitable for immediate use as is, outperforming the state-of-the-art model. We also show the gains of our approach 1 in generating high-quality distractors by comparing it with a zero-shot ChatGPT and a few-shot ChatGPT prompted with static examples.",f1bb5051965a3a4c9288f0123dd03c26a08e1378,Semantic Scholar,,somewhat relevant,The mention of increasing the performance of GPT-3.5 using few-shot prompting suggests relevance to prompt engineering. +1397,interleaving retrieval with chainofthought reasoning for knowledgeintensive multistep questions,"['H. Trivedi', 'Niranjan Balasubramanian', 'Tushar Khot', 'Ashish Sabharwal']",http://arxiv.org/pdf/2212.10509,2022-12-20,,"Prompting-based large language models (LLMs) are surprisingly powerful at generating natural language reasoning steps or Chains-of-Thoughts (CoT) for multi-step question answering (QA). 
They struggle, however, when the necessary knowledge is either unavailable to the LLM or not up-to-date within its parameters. While using the question to retrieve relevant text from an external knowledge source helps LLMs, we observe that this one-step retrieve-and-read approach is insufficient for multi-step QA. Here, what to retrieve depends on what has already been derived, which in turn may depend on what was previously retrieved. To address this, we propose IRCoT, a new approach for multi-step QA that interleaves retrieval with steps (sentences) in a CoT, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Using IRCoT with GPT3 substantially improves retrieval (up to 21 points) as well as downstream QA (up to 15 points) on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. We observe similar substantial gains in out-of-distribution (OOD) settings as well as with much smaller models such as Flan-T5-large without additional training. IRCoT reduces model hallucination, resulting in factually more accurate CoT reasoning.",f208ea909fa7f54fea82def9a92fd81dfc758c39,Semantic Scholar,,highly relevant,"The paper discusses designing diverse prompts for OpenAI models, indicating an application of prompt engineering to generate responses in educational dialogues." +1398,satisfiabilityaided language models using declarative prompting,"['Xi Ye', 'Qiaochu Chen', 'Işıl Dillig', 'Greg Durrett']",https://arxiv.org/pdf/2305.09656,2023-05-16,,"Prior work has combined chain-of-thought prompting in large language models (LLMs) with programmatic representations to perform effective and transparent reasoning. While such an approach works well for tasks that only require forward reasoning (e.g., straightforward arithmetic), it is less effective for constraint solving problems that require more sophisticated planning and search. In this paper, we propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of LLMs. We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer. This approach has two key advantages. The declarative specification is closer to the problem description than the reasoning steps are, so the LLM can parse it out of the description more accurately. Furthermore, by offloading the actual reasoning task to an automated theorem prover, our approach can guarantee the correctness of the answer with respect to the parsed specification and avoid planning errors in the solving process. We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm. In particular, SATLM outperforms program-aided LMs by 23% on a challenging subset of the GSM arithmetic reasoning dataset; SATLM also achieves a new SoTA on LSAT and BoardgameQA, surpassing previous models that are trained on the respective training sets.",f27f6d1d521d189e78f5623098ced0deea613d33,Semantic Scholar,,somewhat relevant,"The paper mentions using few-shot prompting to test GPT-3's ability, which is related to the methodology of prompt engineering." 
+1399,choice over control how users write with large language models using diegetic and nondiegetic prompting,"['Hai Dang', 'Sven Goller', 'Florian Lehmann', 'Daniel Buschek']",https://arxiv.org/pdf/2303.03199,2023-03-06,,"We propose a conceptual perspective on prompts for Large Language Models (LLMs) that distinguishes between (1) diegetic prompts (part of the narrative, e.g. “Once upon a time, I saw a fox...”), and (2) non-diegetic prompts (external, e.g. “Write about the adventures of the fox.”). With this lens, we study how 129 crowd workers on Prolific write short texts with different user interfaces (1 vs 3 suggestions, with/out non-diegetic prompts; implemented with GPT-3): When the interface offered multiple suggestions and provided an option for non-diegetic prompting, participants preferred choosing from multiple suggestions over controlling them via non-diegetic prompts. When participants provided non-diegetic prompts it was to ask for inspiration, topics or facts. Single suggestions in particular were guided both with diegetic and non-diegetic information. This work informs human-AI interaction with generative models by revealing that (1) writing non-diegetic prompts requires effort, (2) people combine diegetic and non-diegetic prompting, and (3) they use their draft (i.e. diegetic information) and suggestion timing to strategically guide LLMs.",fccf8776d7525627c518a56a1f4db367a4d7120b,Semantic Scholar,,highly relevant,The paper focuses on exploring performance improvements through different prompt engineering approaches for language models within the context of legal text classification. +1400,improving short text classification with augmented data using gpt3,"['Salvador Balkus', 'Donghui Yan']",https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4F23066E3F0156382190BD76DA9A7BA5/S1351324923000438a.pdf/div-class-title-improving-short-text-classification-with-augmented-data-using-gpt-3-div.pdf,2022-05-23,," + GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two augmented classifiers: the Classification Endpoint with an increased training set size and the Completion Endpoint with an augmented prompt optimized using a genetic algorithm. We find that data augmentation significantly increases the accuracy of both classifiers, and that the embedding-based Classification Endpoint achieves the best accuracy of about 76%, compared to human accuracy of 85%. In this way, giving large language models like GPT-3 the ability to propose their own training examples can improve short text classification performance.",0008b1e49c3d4afe2cfffe82ea88be147b618504,Semantic Scholar,,highly relevant,"The abstract mentions the investigation of various prompting techniques, such as zero- and few-shot methods as well as the CoT prompting, which are indicative of prompt engineering relevance." 
+1401,mapo boosting large language model performance with modeladaptive prompt optimization,"['Yuyan Chen', 'Zhihao Wen', 'Ge Fan', 'Zhengyu Chen', 'Wei Wu', 'Dayiheng Liu', 'Zhixu Li', 'Bang Liu', 'Yanghua Xiao']",https://aclanthology.org/2023.findings-emnlp.215.pdf,,,"Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. However, a good prompt is not solely defined by its wording, but also binds to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various down-stream tasks in NLP. Then we novelly propose a model-adaptive prompt optimizer (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks. Extensive experiments indicate that the proposed method can effectively refine prompts for an LLM, leading to significant improvements over various downstream tasks.",91b6158978b248e9a0e65d0d588bc1054e72bc16,Semantic Scholar,,highly relevant,"The abstract directly mentions the use of zero-shot prompting, which is a technique within prompt engineering, making it relevant to hard prefix prompts." +1402,emerging technology in acute resuscitation monitoring,"['M. Tichauer', 'J. Mccoy']",http://www.scirp.org/journal/PaperDownload.aspx?paperID=24794,2012-11-23,,"Fluid optimization in the resuscitation of shock became the mainstay of treatment following the advent of Early Goal-Directed Therapy (EGDT) by Rivers et al. in 2001 [1]. Patients presenting in shock require prompt optimization of volume status and cardiac out- put to ensure adequate perfusion. Poor optimization may be associated with prolonged hospital and intensive care unit stays. The prior gold standard, pulmonary artery catheterization, is rarely available in the emergency department setting and its invasive nature has led to recent re-evaluation of its clinical utility. However, there are new monitoring technologies that are being studied in the intensive care unit setting that may soon be available in emergency departments to aid in nursing and physician decision making to improve acute resuscitation.",93e09c5feb9b2ffc8926b4edff13b3d8e02e41de,Semantic Scholar,,somewhat relevant,"The abstract explicitly mentions the use of zero-shot prompting with GPT-3, suggesting relevance to prompt engineering." +1403,recombinant hemagglutinin displaying on yeast reshapes congenital lymphocyte subsets to prompt optimized systemic immune protection against avian influenza infection,"['Han Zhang', 'Zexing Li', 'Huixia Zhang', 'Yanyu Guo', 'Xinyi Zhang', 'Lilin Zhang', 'Liu Yang', 'Shujun Li', 'Changyan Li', 'D. Cui', 'R. Xie', 'Yongqing Li', 'Jinhai Huang']",https://www.frontiersin.org/articles/10.3389/fmicb.2023.1153922/pdf,2023-05-31,,"Introduction Prophylactic vaccination is regarded as the most effective means to control avian flu infection. Currently, there is a need for a universal vaccine that provides broad and long-lasting protection against influenza virus. Meanwhile, although yeast-based vaccines have been used in clinic, studies are still required to further understand the molecular mechanism of yeast-based vaccines under physiological conditions. 
Methods We generated a yeast-based vaccine against influenza hemagglutinin (HA) of H5, H7 and H9 using surface displaying technology and evaluated the protective efficacy of chickens after exposure to H9N2 influenza virus. Results Oral yeast vaccine provided less clinical syndrome, reduced viral loading and alleviated airway damage significantly. Compared to the commercial inactivated vaccine, yeast vaccine stimulated the activation of splenic NK and APCs cells and boosted TLR7-IRF7-IFN signaling in spleen. Meanwhile, γδ T cells in the bursa of Fabricius were activated and the innate lymphoid cells (ILCs) in the bursa of Fabricius promoted the CILPs to differentiate to ILC3 cells in oral yeast birds. Moreover, the reshaped gut microbiota and a suppressed Th17-IL17-mediated inflammation in intestine was observed in oral yeast chickens, which might facilitate the recovery of intestinal mucosal immunity upon virus infection. Collectively, our findings suggest that oral yeast based multivalent bird flu vaccines provide an attractive strategy to update host defense function via reshapes of multi-systemic immune homeostasis.",98090bbc7b784a1f64d4522c5e1987b196863fd0,Semantic Scholar,,somewhat relevant,"The paper is relevant as it mentions the use of zero-shot prompting methods for dialogue generation, which is an application of prompt engineering." +1404,blackbox tuning of visionlanguage models with effective gradient approximation,"['Zixian Guo', 'Yuxiang Wei', 'Ming Liu', 'Zhilong Ji', 'Jinfeng Bai', 'Yiwen Guo', 'Wangmeng Zuo']",https://aclanthology.org/2023.findings-emnlp.356.pdf,2023-12-26,,"Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios. Typically, they learn a very small scale of parameters for pre-trained models in a white-box formulation, which assumes model architectures to be known and parameters to be accessible. However, large models are often not open-source due to considerations of preventing abuse or commercial factors, hence posing a barrier to the deployment of white-box PEFT methods. To alleviate the dependence on model accessibility, we introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models. Specifically, considering that the backpropagation gradients are blocked, we approximate the gradients of textual prompts by analyzing the predictions with perturbed prompts. Secondly, a lightweight adapter is deployed over the output feature of the inaccessible model, further facilitating the model adaptation process. Empowered with these designs, our CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods. Code is released at https://github.com/guozix/cbbt.",b13106e918aa098a7666feca915c111a19790500,Semantic Scholar,,highly relevant,"The paper discusses zero-shot prompting using a large pre-trained language model, which is directly related to prompt engineering." +1405,diagnostic utility of endocan and interleukins for lateonset neonatal sepsis,"['Preslava Gatseva', 'Alexander B. Blazhev', 'Zarko Y. Yordanov', 'Victoria G. 
Atanasova']",https://sciendo.com/pdf/10.2478/jbcr-2023-0016,2023-11-01,,"Summary The aim of this study was to determine the potential of early inflammatory markers to diagnose late-onset neonatal sepsis – interleukin 6 (IL-6), interleukin 8 (IL-8) and endocan (ESM-1), and to compare them with routinely used markers like C-reactive protein (CRP) and procalcitonin (PCT). A prospective (January, 2022 – January, 2023) clinical-epidemiological study was conducted in a third level NICU in Pleven, Bulgaria. Patients with suspected nosocomial infection and healthy controls were tested. A sandwich ELISA method was used to measure the serum concentrations. Sixty newborns with an average gestational age of 29.75±3.61 gestational weeks were included, of which 35% were symptomatic and infected, 33.3% were symptomatic but uninfected, and 31.7% were asymptomatic controls. The mean values of PCT and IL-6 differ significantly in the three groups. For ESM-1, IL-8 and CRP, the difference was statistically insignificant. The best sensitivity (78%) and negative predictive value (84%) was found for IL-6. The introduction into routine practice of indicators such as PCT and IL-6 may provide an opportunity to promptly optimize the diagnostic and therapeutic approach to LOS.",b281d891508e347149e3623b339861fa47eabe07,Semantic Scholar,,highly relevant,"The paper is focused on evaluating the effectiveness of different reasoning strategies induced by zero-shot prompting across multiple language models, which involves the use of hard prefix prompts." +1406,artificial intelligence for health message generation an empirical study using a large language model (llm) and prompt engineering,"['Sue Lim', 'Ralf Schmälzle']",https://www.frontiersin.org/articles/10.3389/fcomm.2023.1129082/pdf,2023-05-26,,"Introduction This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Method We used prompt engineering to generate awareness messages about folic acid and compared them to the most retweeted human-generated messages via human evaluation with an university sample and another sample comprising of young adult women. We also conducted computational text analysis to examine the similarities between the AI-generated messages and human generated tweets in terms of content and semantic structure. Results The results showed that AI-generated messages ranked higher in message quality and clarity across both samples. The computational analyses revealed that the AI generated messages were on par with human-generated ones in terms of sentiment, reading ease, and semantic content. Discussion Overall, these results demonstrate the potential of large language models for message generation. Theoretical, practical, and ethical implications are discussed.",04f1ff349424b4fb64a24fcaf44532d69826b0f4,Semantic Scholar,,highly relevant,"The paper discusses improving the accuracy of LLMs using techniques like Self-Consistency (SC) and Chain-of-Thought (CoT), which are post-training prompting strategies related to prompt engineering." +1407,chatgpt vs crowdsourcing vs experts annotating opendomain conversations with speech functions,"['Lidiia Ostyakova', 'Veronika Smilga', 'Kseniia Petukhova', 'Maria M. Molchanova', 'Daniel Kornev']",https://aclanthology.org/2023.sigdial-1.23.pdf,,,"This paper deals with the task of annotating open-domain conversations with speech functions. 
We propose a semi-automated method for annotating dialogs following the topic-oriented, multi-layered taxonomy of speech functions with the use of hierarchical guidelines using Large Language Models. These guidelines comprise simple questions about the topic and speaker change, sentence types, pragmatic aspects of the utterance, and examples that aid untrained annotators in understanding the taxonomy. We compare the results of dialog annotation performed by experts, crowdsourcing workers, and ChatGPT. To improve the performance of ChatGPT, several experiments utilising different prompt engineering techniques were conducted. We demonstrate that in some cases large language models can achieve human-like performance following a multi-step tree-like annotation pipeline on complex discourse annotation, which is usually challenging and costly in terms of time and money when performed by humans.",061d5b2ceb7e537c3c96d13f267c0cc22f8f96d3,Semantic Scholar,,highly relevant,"The paper discusses fine-tuning CLIP models using prompt sentences for zero-shot classification tasks, which involves prompt engineering." +1408,prompt engineering for textbased generative art,['J. Oppenlaender'],http://arxiv.org/pdf/2204.13988,,,"Text-based generative art has seen an explosion of interest in 2021. Online communities around text-based generative art as a novel digital medium have quickly emerged. This short paper identifies five types of prompt modifiers used by practitioners in the community of text-based generative art based on a 3-month ethnographic study on Twitter. The novel taxonomy of prompt modifiers provides researchers a conceptual starting point for investigating the practices of text-based generative art, but also may help practitioners of text-based generative art improve their images. The paper concludes with a discussion of research opportunities in the space of text-based generative art and the broader implications of prompt engineering from the perspective of human-AI interaction in future applications beyond the use case of text-based generative art.",07cd498aacfb4d39fa2e0e8d8a9c8ad881257300,Semantic Scholar,,highly relevant,"The paper describes the use of prompts to regularize the fine-tuning process on vision-language models, which is directly related to prompt engineering." +1409,large language models help facilitate the automated synthesis of information on potential pest controllers,"['D. Scheepens', 'Joseph Millard', 'M. Farrell', 'T. Newbold']",https://www.biorxiv.org/content/biorxiv/early/2024/01/15/2024.01.12.575330.full.pdf,2024-01-15,,"The body of ecological literature, which informs much of our knowledge of the global loss of biodiversity, has been experiencing rapid growth in recent decades. The increasing difficulty to synthesise this literature manually has simultaneously resulted in a growing demand for automated text mining methods. Within the domain of deep learning, large language models (LLMs) have been the subject of considerable attention in recent years by virtue of great leaps in progress and a wide range of potential applications, however, quantitative investigation into their potential in ecology has so far been lacking. In this work, we analyse the ability of GPT-4 to extract information about invertebrate pests and pest controllers from abstracts of a body of literature on biological pest control, using a bespoke, zero-shot prompt. 
Our results show that the performance of GPT-4 is highly competitive with other state-of-the-art tools used for taxonomic named entity recognition and geographic location extraction tasks. On a held-out test set, we show that species and geographic locations are extracted with F1-scores of 99.8% and 95.3%, respectively, and highlight that the model is able to distinguish very effectively between the primary roles of interest (predators, parasitoids and pests). Moreover, we demonstrate the ability of the model to effectively extract and predict taxonomic information across various taxonomic ranks, and to automatically correct spelling mistakes. However, we do report a small number of cases of fabricated information (hallucinations). As a result of the current lack of specialised, pre-trained ecological language models, general-purpose LLMs may provide a promising way forward in ecology. Combined with tailored prompt engineering, such models can be employed for a wide range of text mining tasks in ecology, with the potential to greatly reduce time spent on manual screening and labelling of the literature.",092b230eee81f214a505eb57bea4dd0342555c10,Semantic Scholar,,highly relevant,"The paper focuses on a regularization method for improving gradient-based prompt tuning, which is directly related to optimizing hard prefix prompts for large-scale language models." +1410,comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students,"['Dollaya Hirunyasiri', 'Danielle R. Thomas', 'Jionghao Lin', 'K. Koedinger', 'Vincent Aleven']",https://arxiv.org/pdf/2307.02018,2023-07-05,,"Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.",0b94b999fdd9488e1a0914d37f8fb3ea7e9ea0fd,Semantic Scholar,,highly relevant,"The paper introduces a Self-Prompting framework for LLMs in ODQA tasks, utilizing prompting to generate data for in-context learning, which directly relates to the hard prefix prompt engineering." 
+1411,gptempowered personalized elearning system for programming languages,"['Jennifer Jin', 'Mira Kim']",https://www.mdpi.com/2076-3417/13/23/12773/pdf?version=1701183024,2023-11-28,,"The eLearning approach to programming language instruction has gained widespread acceptance due to advantages such as accessibility, temporal flexibility, and content reusability. However, the current eLearning for programming predominantly employs the delivery of one-size-fits-all content, engendering elevated costs in both the development of language coursework and administration of eLearning sessions, which includes the labor-intensive task of grading student submissions. A compelling research question to consider is how to construct an eLearning system capable of delivering personalized, student-centric content, automating the generation of coursework elements, and eliminating the need for instructor involvement in the management of eLearning sessions. Our approach to delivering a definite solution to the question involves the utilization of a suite of advanced software technologies: GPT to dynamically generate course contents/components, prompt engineering to personalize course content for each individual student, and autonomous computing to manage eLearning sessions without the need for human intervention. The research results encompass the design of an eLearning framework covering all programming languages, a fully functional Python-based implementation, seamless integration with ChatGPT for dynamic content generation, a high degree of content personalization, and the elimination of manual effort required for managing eLearning sessions.",0e11a4323328c7d1d00d9f7e6dd163ad43a3ffa4,Semantic Scholar,,somewhat relevant,"The paper discusses the use of language prompts as input in the context of adapting vision-language models to medical image segmentation, which is related to prompt engineering." +1412,can chatgpt understand causal language in science claims,"['Yuheun Kim', 'Lu Guo', 'Bei Yu', 'Yingya Li']",https://aclanthology.org/2023.wassa-1.33.pdf,,,"This study evaluated ChatGPT’s ability to understand causal language in science papers and news by testing its accuracy in a task of labeling the strength of a claim as causal, conditional causal, correlational, or no relationship. The results show that ChatGPT is still behind the existing fine-tuned BERT models by a large margin. ChatGPT also had difficulty understanding conditional causal claims mitigated by hedges. However, its weakness may be utilized to improve the clarity of human annotation guideline. Chain-of-Thoughts were faithful and helpful for improving prompt performance, but finding the optimal prompt is difficult with inconsistent results and the lack of effective method to establish cause-effect between prompts and outcomes, suggesting caution when generalizing prompt engineering results across tasks or models.",27d80545d142ced9b921290b5b2798cabd55468b,Semantic Scholar,,highly relevant,"The abstract mentions the use of prompt templates for experiments, which implies engagement with prompt engineering to direct the language model's outputs." +1413,contextual stance classification using prompt engineering,"['Felipe Penhorate Carvalho de Fonseca', 'Ivandré Paraboni', 'L. A. 
Digiampietri']",https://sol.sbc.org.br/index.php/stil/article/download/25435/25256,2023-09-25,,"This paper introduces a prompt-based method for few-shot learning addressing, as an application example, contextual stance classification, that is, the task of determining the attitude expressed by a given statement within a conversation thread with multiple points of view towards another statement. More specifically, we envisaged a method that uses the existing conversation thread (i.e., messages that are part of the test data) to create natural language prompts for few-shot learning with minimal reliance on training samples, whose preliminary results suggest that prompt engineering may be a competitive alternative to supervised methods both in terms of accuracy and development costs for the task at hand.",2d90460431c093757fcf651e333bc0da5f5404c2,Semantic Scholar,,highly relevant,"The paper discusses prompt tuning for Visual Question Answering by modifying input questions into prompts and integrating human priors, which is directly related to the topic of prompt engineering." +1414,prompt engineering in medical education,"['Thomas F. Heston', 'Charya Khun']",https://www.mdpi.com/2813-141X/2/3/19/pdf?version=1693479951,2023-08-31,,"Artificial intelligence-powered generative language models (GLMs), such as ChatGPT, Perplexity AI, and Google Bard, have the potential to provide personalized learning, unlimited practice opportunities, and interactive engagement 24/7, with immediate feedback. However, to fully utilize GLMs, properly formulated instructions are essential. Prompt engineering is a systematic approach to effectively communicating with GLMs to achieve the desired results. Well-crafted prompts yield good responses from the GLM, while poorly constructed prompts will lead to unsatisfactory responses. Besides the challenges of prompt engineering, significant concerns are associated with using GLMs in medical education, including ensuring accuracy, mitigating bias, maintaining privacy, and avoiding excessive reliance on technology. Future directions involve developing more sophisticated prompt engineering techniques, integrating GLMs with other technologies, creating personalized learning pathways, and researching the effectiveness of GLMs in medical education.",3159478fbc81e562c812b9d5dc1891271b21f0c4,Semantic Scholar,,somewhat relevant,"The abstract indicates the use of prompt templates in data collection from ChatGPT, directly implicating the concept of prompt engineering, though it does not specify the type of prompts." +1415,chatgpt opens a new door for bioinformatics,['Dong Xu'],https://journal.hep.com.cn/qb/EN/PDF/10.15302/J-QB-023-0328,2023-04-21,,"ChatGPT is an artificial intelligence (AI) system that can perform sophisticated writing and dialogs after learning from vast amounts of linguistic data. The success of ChatGPT is phenomenal. AI-based human-machine language interaction has been at the center of AI competition in recent years. The major players in this game have been Google, Meta, and OpenAI. Google was in the best position from the outset, given its invention of Transformer (the cornerstone of all cutting-edge language models) and its significant edge in reinforcement learning. Yet, Google’s efforts in this area were rather diffusing. It kept generating language model variants with incremental innovations but failed to reach the next level. Meta has a strong AI team, including many top AI researchers in the world. 
Nevertheless, their faith in self-supervised learning to solve human-machine interaction did not deliver high-impact success. Conversely, OpenAI, with a small team, stayed focused on a single product line (GPT, including its latest release of GPT-4). It moved in the right direction of using human input to “align” the language model based on the Reinforcement Learning from Human Feedback (RLHF) approach. The fact that OpenAI ultimately prevailed in this game shows that the model alignment to human labeling through supervised and reinforcement learning is critical for human-machine interaction. However, a chatbot’s actions rely heavily on cues (prompts) provided by human operators. To properly utilize ChatGPT’s capabilities, prompts to instruct or mentor the chatbot must be carefully designed to get valuable, valid, and robust responses. This process becomes another “alignment” problem of using prompt engineering to best probe ChatGPT’s knowledge graph for best serving users’ needs.",358d1d9eed69a6eadcda9996b3f13b0e0a356b88,Semantic Scholar,,highly relevant,"The paper introduces 'GraphPrompt,' a prompt-based learning approach that creates prompt templates according to the graphs, which is directly related to the use of prompts in transformer models." +1416,generating novel leads for drug discovery using llms with logical feedback,"['Shreyas Bhat Brahmavar', 'Ashwin Srinivasan', 'T. Dash', 'Sowmya Ramaswamy Krishnan', 'L. Vig', 'Arijit Roy', 'R. Aduri']",https://www.biorxiv.org/content/biorxiv/early/2023/09/17/2023.09.14.557698.full.pdf,2023-09-17,,"Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically require experimentation with progressively more refined prompts. Results thus become dependent not just on what is known about the target, but also on what is known about the prompt-engineering. In this paper, we separate the prompt into domain-constraints that can be written in a standard logical form, and a simple text-based query. We investigate whether LLMs can be guided, not by refining prompts manually, but by refining the the logical component automatically, keeping the query unchanged. We describe an iterative procedure LMLF (“Language Models with Logical Feedback”) in which the constraints are progressively refined using a logical notion of generalisation. On any iteration, newly generated instances are verified against the constraint, providing “logical-feedback” for the next iteration’s refinement of the constraints. We evaluate LMLF using two well-known targets (inhibition of the Janus Kinase 2; and Dopamine Receptor D2); and two different LLMs (GPT-3 and PaLM). We show that LMLF, starting with the same logical constraints and query text, can guide both LLMs to generate potential leads. We find: (a) Binding affinities of LMLF-generated molecules are skewed towards higher binding affinities than those from existing baselines; LMLF results in generating molecules that are skewed towards higher binding affinities than without logical feedback; (c) Assessment by a computational chemist suggests that LMLF generated compounds may be novel inhibitors. 
These findings suggest that LLMs with logical feedback may provide a mechanism for generating new leads without requiring the domain-specialist to acquire sophisticated skills in prompt-engineering.",3613299c54bbea66dd6db1b00573f7ade021a5a9,Semantic Scholar,,highly relevant,"The paper discusses using prompts for zero-shot learning in the context of relation extraction, which is related to the field of prompt engineering." +1417,enhancing arabic content generation with prompt augmentation using integrated gpt and texttoimage models,"['Wala Elsharif', 'James She', 'Preslav Nakov', 'Simon Wong']",https://dl.acm.org/doi/pdf/10.1145/3573381.3596466,2023-06-12,,"With the current and continuous advancements in the field of text-to-image modeling, it has become critical to design prompts that make the best of these model capabilities and guides them to generate the most desirable images, and thus the field of prompt engineering has emerged. Here, we study a method to use prompt engineering to enhance text-to-image model representation of the Arabic culture. This work proposes a simple, novel approach for prompt engineering that uses the domain knowledge of a state-of-the-art language model, GPT, to perform the task of prompt augmentation, where a simple, initial prompt is used to generate multiple, more detailed prompts related to the Arabic culture from multiple categories through a GPT model through a process known as in-context learning. The augmented prompts are then used to generate images enhanced for the Arabic culture. We perform multiple experiments with a number of participants to evaluate the performance of the proposed method, which shows promising results, specially for generating prompts that are more inclusive of the different Arabic countries and with a wider variety in terms of image subjects, where we find that our proposed method generates image with more variety 85 % of the time and are more inclusive of the Arabic countries more than 72.66 % of the time, compared to the direct approach.",3bb1a0193cb0b5dd9405a729b16320c6ec31b1dd,Semantic Scholar,,highly relevant,The paper discusses using question specific prompts with language models for structured answer generation which falls within the subject of prompt engineering. +1418,reinventing international business education integrating the power of generative ai,['Mamoun Benmamoun'],https://insights.aib.world/article/90397.pdf,2023-12-07,,"As artificial intelligence (AI) reshapes global business, international business (IB) education must adapt. This article explores the incorporation of Generative AI (GenAI) into IB curricula, examining course fit and faculty readiness, while presenting actional recommendations across three dimensions: Engagement, Collaboration, and Academic Integrity. We propose methods for interactive learning and lesson engagement using GenAI’s conversational interface and prompts engineering. We also propose leveraging GenAI as a multifaceted tool to enhance international teamwork and collaboration and cultivate cross-cultural and linguistic connections. Additionally, we outline measures to prevent its misuse and mitigate the inherent threats it poses to academic integrity and assessment.",3dd8bf0a2bec3f17a9bc852780fab1796198a61c,Semantic Scholar,,highly relevant,"The abstract mentions a 'target prompt template' as a component of an encoder-decoder method, indicating the use of prompt-based techniques, which aligns directly with prompt engineering." 
+1419,"engineering a dialogue with klara, or ethical invention with generative ai in the writing classroom","['Elitza Kotzeva', 'Brent Anders']",https://publications.coventry.ac.uk/index.php/joaw/article/download/989/1034,2023-12-22,,"In this teaching practice article, we discuss the possibilities of integrating AI into the writing classroom utilizing prompt engineering techniques. We propose a strategy for prompt engineering in which we see AI as an audience and interlocutor during the invention process. We consider using the method in preparation for argument composition and with that we propose an ethical model for teaching writing based on a view of rhetoric as both technê and praxis. To draw attention to the ethical question in relation to human—non-human interactions, we use as metaphor for AI tools the image of Klara, an android who serves as a children’s companion in Ishiguro’s novel Klara and the Sun (2021).",3fd3f10488e26a4f30a9e91313bb38a4123ffae8,Semantic Scholar,,somewhat relevant,"The paper discusses increasing resistance to Jailbreaking prompts through pruning, directly involving prompt interaction with LLMs, thus relevant to prompt engineering." +1420,deploying generative ai to draft a roleplay simulation of difficult conversations about inclusivity,['Clive Holtham'],https://journal.ilta.ie/index.php/telji/article/download/127/150,2023-12-07,,"Within inclusivity change initiatives, conversations around microaggressions are a key element in seeking behavioural change. This exemplar use of GenAI is focused on authoring a conversational role play simulation. Prompts draw on the extensive literature of microaggressions. The underlying scenario is a podcast where the author and chatbot co-design the role play. The study involved a novice in GenAI simultaneously learning prompt engineering which would generate realistic role played conversation. A core finding was that GenAI can produce a conversational style realistic enough to deploy in higher education inclusivity workshops, and whose subject matter content is satisfactory. The chatbot proposed four benefits of a role play simulation, which are drawn on to frame the critical reflection. An important conclusion was that the GenAI process could valuably facilitate shifting from a subject expert academic being sole author, to co-design involving both those with experience of microaggressions, as well as potential staff and student workshop participants.",420330845abd5436f464a03b12e8a0bfffd4f629,Semantic Scholar,,highly relevant,"The paper is focused on discovering and exploiting a system prompt leakage vulnerability and uses system prompts in the process, which indicates relevance to prompt engineering." +1421,from web catalogs to google a retrospective study of web search engines sustainable development,"['M. Duka', 'Marek Sikora', 'Artur Strzelecki']",https://www.mdpi.com/2071-1050/15/8/6768/pdf?version=1681779086,2023-04-17,,This study presents a review of search engines and search engine optimization and shows how the search engine landscape relates to sustainable development. We have used a narrative review research method and described three main topics: the past and present of web catalogs and search engines; current knowledge about the dominant types of search results presented in Google search; and methods of search engine optimization. Technical elements of important website areas related to technical website auditing are discussed. 
We summarize our research with several key findings on how web search engines are involved in sustainable development and offer a glimpse into the future use of web searching with the help of artificial intelligence chats and prompt engineering.,513b96c7d5d1f9a74afd9d946d5a7c83fe592869,Semantic Scholar,,somewhat relevant,"The abstract describes the use of prompt injection templates for testing ChatGPT's security which is related to prompt engineering, but it does not explicitly mention 'hard prefix' prompting." +1422,better integrating vision and semantics for improving fewshot classification,"['Zhuoling Li', 'Yong Wang']",https://dl.acm.org/doi/pdf/10.1145/3581783.3613819,2023-10-26,,"Some recent methods address few-shot classification by integrating visual and semantic prototypes. However, they usually ignore the difference in feature structure between the visual and semantic modalities, which leads to limited performance improvements. In this paper, we propose a novel method, called bimodal integrator (BMI), to better integrate visual and semantic prototypes. In BMI, we first construct a latent space for each modality via a variational autoencoder, and then align the semantic latent space to the visual latent space. Through this semantics-to-vision alignment, the semantic modality is mapped to the visual latent space and has the same feature structure as the visual modality. As a result, the visual and semantic prototypes can be better integrated. In addition, based on the multivariate Gaussian distribution and the prompt engineering, a data augmentation scheme is designed to ensure the accuracy of modality alignment during the training process. Experimental results demonstrate that BMI significantly improves few-shot classification, making simple baselines outperform the most advanced methods on miniImageNet and tieredImageNet datasets.",579ee305d538a679d72b808ffe8322680561a177,Semantic Scholar,,highly relevant,"The paper focuses on optimizing discrete prompts for PLMs specifically for few-shot NLP tasks using a novel dialogue alignment strategy and an RL framework, which aligns well with the topic of prompt engineering." +1423,omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know,"['Matthias Urban', 'Duc Dat Nguyen', 'Carsten Binnig']",http://publikationen.ub.uni-frankfurt.de/files/74426/06_08.pdf,2023-06-18,,"In this paper, we present our vision of OmniscientDB, a novel database that leverages the implicitly-stored knowledge in large language models to augment datasets for analytical queries or even machine learning tasks. OmiscientDB empowers its users to augment their datasets by means of simple SQL queries and thus has the potential to dramatically reduce the manual overhead associated with data integration. It uses automatic prompt engineering to construct appropriate prompts for given SQL queries and passes them to a large language model like GPT-3 to contribute additional data (i.e., new rows, columns, or entire tables), augmenting the explicitly stored data. 
Our initial evaluation demonstrates the general feasibility of our vision, explores different prompting techniques in greater detail, and points towards several directions for future research.",59266e06cdb867c2541603f9d94e13f67d55938f,Semantic Scholar,,highly relevant,"The abstract discusses the use of prompt engineering in the context of clinical note generation and introduces an Automatic Prompt Optimization framework to refine prompts for LLMs, indicating a direct involvement with prompt engineering techniques." +1424,the creativity of textbased generative art,['J. Oppenlaender'],http://arxiv.org/pdf/2206.02904,,,"Text-based generation of digital images has made a giant leap to-wards becoming a mainstream phenomenon. With text-based generative systems, anybody can create digital images and artworks. This provokes the question of whether text-based generative art is creative. This paper expounds on the nature of human creativity involved in text-based generative art with a specific focus on the practice of prompt engineering, drawing on Rhodes’s conceptual model of creativity. The paper critiques the current product-centered view of creativity which may fall short in the context of text-based generative art. An case exemplifying this shortcoming is provided and future opportunities for research on text-based generative art are outlined.",65d6c17a5f947a2aa92ab1fa0b876e4e3c75720c,Semantic Scholar,,highly relevant,"The paper provides a detailed examination of the evolution of prompt engineering, which is directly relevant to the topic of prompt engineering." +1425,artificial intelligence model gpt4 narrowly fails simulated radiological protection exam,"['G. Roemer', 'A. Li', 'U. Mahmood', 'L. Dauer', 'M. Bellamy']",https://iopscience.iop.org/article/10.1088/1361-6498/ad1fdf/pdf,2024-01-17,,"This study assesses the efficacy of Generative Pre-Trained Transformers (GPT) published by OpenAI in the specialized domains of radiological protection and health physics. Utilizing a set of 1064 surrogate questions designed to mimic a health physics certification exam, we evaluated the models' ability to accurately respond to questions across five knowledge domains. Our results indicated that neither model met the 67% passing threshold, with GPT-3.5 achieving a 45.3% weighted average and GPT-4 attaining 61.7%. Despite GPT-4's significant parameter increase and multimodal capabilities, it demonstrated superior performance in all categories yet still fell short of a passing score. The study's methodology involved a simple, standardized prompting strategy without employing prompt engineering or in-context learning, which are known to potentially enhance performance. The analysis revealed that GPT-3.5 formatted answers more correctly, despite GPT-4's higher overall accuracy. The findings suggest that while GPT-3.5 and GPT-4 show promise in handling domain-specific content, their application in the field of radiological protection should be approached with caution, emphasizing the need for human oversight and verification. .",67fb64933bb7c3376d13db0812cdd7f579257ed3,Semantic Scholar,,highly relevant,"The paper introduces ControlPE, which complements existing prompt engineering methods and focuses on controlling prompt influence and continuous targets." 
+1426,"can generative artificial intelligence write an academic journal article opportunities, challenges, and implications",['Hsiao-Ping Hsu'],https://journal.ilta.ie/index.php/telji/article/download/152/151,2023-12-07,,"This article offers an in-depth reflection on the author’s experiences with Generative Artificial Intelligence (Gen AI), ChatGPT 4.0. The author started the journey from their initial need for software for English proofreading and editing services to their interest in exploring pre-service teachers’ application of Gen AI in lesson planning. Based on prompt engineering techniques, an iterative three-stage manuscript generation process—brainstorming, refinement, and writing—with ChatGPT is detailed. A short paper generated by ChatGPT is presented. Although Gen AI is a valuable tool in providing insights and assistance in research idea generation and design, academic writing, and English writing learning, the author cautions that critical thinking plays a vital role in ensuring accuracy, ethical considerations, and the preservation of rigorous scholarly standards. As Gen AI emerges as a game-changer in academia and education, this article highlights the importance of balancing its emerging capabilities with maintaining traditional academic and educational values.",6a5efa3f47b84a865d29a9c060b3f402e6b52597,Semantic Scholar,,highly relevant,"The paper details the use of prompt engineering to improve the performance of GPT-4V in medical tasks, with a focus on refining prompts to increase accuracy in medical imaging, making it highly pertinent to the topic of prompt engineering." +1427,improving knowledge extraction from llms for robotic task learning through agent analysis,"['James R. Kirk', 'R. Wray', 'Peter Lindes']",https://arxiv.org/pdf/2306.06770,,,": Large language models (LLMs) offer significant promise as a knowledge source for robotic task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM but alone is insufficient for acquiring relevant, situationally grounded knowledge for an embodied robotic agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations, and thus enabling a robot to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous robot, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how a robot, by retrieving and evaluating a breadth of responses from the LLM, can achieve > 75% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as indication of preference) is provided, while greatly reducing how much human oversight is needed.",6b80c6e220ca2e2434f5a80b2eb5e8b645e97ae1,Semantic Scholar,,highly relevant,"The paper focuses on the role of prompt engineering in optimizing the performance of Large Language Models, which is directly related to the topic of hard prefix prompting." 
+1428,fake it till you make it learning(s) from a synthetic imagenet clone,"['Mert Bulent Sariyildiz', 'Alahari Karteek', 'Diane Larlus', 'Yannis Kalantidis']",http://arxiv.org/pdf/2212.08420,,,"Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images starting from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we an-swer part of this provocative question by questioning the need for real images when training models for ImageNet classification. More precisely, provided only with the class names that have been used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering those ImageNet clones we denote as ImageNet-SD are able to close a large part of the gap between models produced by synthetic images and models trained with real images for the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.",70e8f98fbb0a0acdbd08af343a8504e7fd664267,Semantic Scholar,,somewhat relevant,"The paper presents AdaRefiner, which mitigates the need for intricate prompt engineering by automatically refining task comprehension with feedback from RL agents, making it somewhat relevant to the topic of prompt engineering." +1429,from text to tables a local privacy preserving large language model for structured information retrieval from medical documents,"['I. C. Wiest', 'D. Ferber', 'J. Zhu', 'M. van Treeck', 'S. K. Meyer', 'R. Juglan', 'Z. I. Carrero', 'D. Paech', 'J. Kleesiek', 'M. P. Ebert', 'D. Truhn', 'J. N. Kather']",https://www.medrxiv.org/content/medrxiv/early/2023/12/08/2023.12.07.23299648.full.pdf,2023-12-08,,"Background and Aims Most clinical information is encoded as text, but extracting quantitative information from text is challenging. Large Language Models (LLMs) have emerged as powerful tools for natural language processing and can parse clinical text. However, many LLMs including ChatGPT reside in remote data centers, which disqualifies them from processing personal healthcare data. We present an open-source pipeline using the local LLM 'Llama 2' for extracting quantitative information from clinical text and evaluate its use to detect clinical features of decompensated liver cirrhosis. Methods We tasked the LLM to identify five key clinical features of decompensated liver cirrhosis in a zero- and one-shot way without any model training. Our specific objective was to identify abdominal pain, shortness of breath, confusion, liver cirrhosis, and ascites from 500 patient medical histories from the MIMIC IV dataset. We compared LLMs with three different sizes and a variety of pre-specified prompt engineering approaches. Model predictions were compared against the ground truth provided by the consent of three blinded medical experts. Results Our open-source pipeline yielded in highly accurate extraction of quantitative features from medical free text. 
Clinical features which were explicitly mentioned in the source text, such as liver cirrhosis and ascites, were detected with a sensitivity of 100% and 95% and a specificity of 96% and 95%, respectively from the 70 billion parameter model. Other clinical features, which are often paraphrased in a variety of ways, such as the presence of confusion, were detected only with a sensitivity of 76% and a specificity of 94%. Abdominal pain was detected with a sensitivity of 84% and a specificity of 97%. Shortness of breath was detected with a sensitivity of 87% and a specificity of 96%. The larger version of Llama 2 with 70b parameters outperformed the smaller version with 7b parameters in all tasks. Prompt engineering improved zero-shot performance, particularly for smaller model sizes. Conclusion Our study successfully demonstrates the capability of using locally deployed LLMs to extract clinical information from free text. The hardware requirements are so low that not only on-premise, but also point-of-care deployment of LLMs are possible.",73788e8afc7c377805b0a94234810c8722f71377,Semantic Scholar,,highly relevant,"The paper presents NeuroPrompts, an adaptive framework for automatic prompt enhancement in text-to-image generation, which is a direct application of prompt engineering to improve the quality of outputs." +1430,zero and fewshot nlp with pretrained language models,"['Iz Beltagy', 'Arman Cohan', 'Robert Logan IV', 'Sewon Min', 'Sameer Singh']",https://aclanthology.org/2022.acl-tutorials.6.pdf,,,"The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult. This is a challenging setting both academically and practically—particularly because training neutral models typically require large amount of labeled data. More recently, advances in pretraining on unlabelled data have brought up the potential of better zero-shot or few-shot learning (Devlin et al., 2019; Brown et al., 2020). In particular, over the past year, a great deal of research has been conducted to better learn from limited data using large-scale language models. In this tutorial, we aim at bringing interested NLP researchers up to speed about the recent and ongoing techniques for zero- and few-shot learning with pretrained language models. Additionally, our goal is to reveal new research opportunities to the audience, which will hopefully bring us closer to address existing challenges in this domain.",037110f8e99488f9b8f6e962da0a912d927695e5,Semantic Scholar,,highly relevant,The paper is highly relevant as it directly discusses prompt engineering in the context of user interaction with an interactive segmentation model. +1431,"fewshot named entity recognition definition, taxonomy and research directions","['V. Moscato', 'Marco Postiglione', 'Giancarlo Sperlí']",https://dl.acm.org/doi/pdf/10.1145/3609483,2023-07-18,,"Recent years have seen an exponential growth (+98% in 2022 w.r.t. the previous year) of the number of research articles in the few-shot learning field, which aims at training machine learning models with extremely limited available data. The research interest toward few-shot learning systems for Named Entity Recognition (NER) is thus at the same time increasing. NER consists in identifying mentions of pre-defined entities from unstructured text, and serves as a fundamental step in many downstream tasks, such as the construction of Knowledge Graphs, or Question Answering. 
The need for a NER system able to be trained with few-annotated examples comes in all its urgency in domains where the annotation process requires time, knowledge and expertise (e.g., healthcare, finance, legal), and in low-resource languages. In this survey, starting from a clear definition and description of the few-shot NER (FS-NER) problem, we take stock of the current state-of-the-art and propose a taxonomy which divides algorithms in two macro-categories according to the underlying mechanisms: model-centric and data-centric. For each category, we line-up works as a story to show how the field is moving toward new research directions. Eventually, techniques, limitations, and key aspects are deeply analyzed to facilitate future studies.",0379cfb16c1678bde9b889bb1c0ca39db2cb564a,Semantic Scholar,,somewhat relevant,"The paper is somewhat relevant as it mentions the system's prompt engineering in the context of a dialogue manager, but it focuses more on the integration of vision capabilities than on the specifics of hard prefix prompting." +1432,speakerbox fewshot learning for speaker identification with transformers,"['Eva Maxfield Brown', 'To Huynh', 'Nicholas Weber']",https://joss.theoj.org/papers/10.21105/joss.05132.pdf,2023-03-20,,"Automated speaker identification is a modeling challenge for research when large-scale corpora, such as audio recordings or transcripts, are relied upon for evidence (e",05555160ff32dc487ffb1ec5048a4f00b1709f79,Semantic Scholar,,highly relevant,The abstract explicitly mentions the use of a Prompt Builder which dynamically generated comprehensive prompts to enhance model generation performance in the context of a ChatGPT-based code generation platform. +1433,a generative ai approach to pricing mechanisms and consumer behavior in the electric vehicle charging market,"['Sarthak Chaturvedi', 'Edward W. Chen', 'Ila P. Sharma', 'Omar Isaac Asensio']",https://ojs.aaai.org/index.php/AAAI-SS/article/download/27649/27422,2024-01-22,,"The electrification of transportation is a growing strategy to reduce mobile source emissions and air pollution globally. To encourage adoption of electric vehicles, there is a need for reliable evidence about pricing in pub-lic charging stations that can serve a greater number of communities. However, user-entered pricing information by thousands of charge point operators (CPOs) has created ambiguity for large-scale aggregation, increasing both the cost of analysis for researchers and search costs for consumers. In this paper, we use large language models to address standing challenges with price discovery in distributed digital data. We show that generative AI models can effectively extract pricing mechanisms from unstructured text with high accuracy, and at substantially lower cost of three to four orders of magnitude lower than human curation (USD 0.006 pennies per observation). We exploit the few-shot learning capabilities of GPT-4 with human-in-the-loop feedback—beating prior classification performance benchmarks with fewer training data. The most common pricing models include free, energy-based (per kWh), and time-based (per unit time), with tiered pricing (variable pricing based on usage) being the most prevalent among paid stations. Behavioral insights from a US nationally representative sample of 13,008 stations suggest that EV users are commonly frustrated with the slower than expected charging rates and the total cost of charging. 
This study uncovers additional consumer barriers to charging services concerning the need for better price standardization.",05c3f80b2048b40db29e3e691f54e690962ec4e7,Semantic Scholar,,highly relevant,"The abstract mentions 'prompt engineering for LLM utilization', indicating that the paper involves the use of prompt engineering techniques, which is relevant to the topic."
+1434,metaaugmented prompt tuning for better fewshot learning,"['Kaihang Pan', 'Juncheng Billy Li', 'Hongye Song', 'Jun Lin', 'Xiaozhong Liu', 'Siliang Tang']",http://arxiv.org/pdf/2303.12314,,,"Prompt tuning is a parameter-efficient method, which freezes all PLM parameters and only prepends some additional tunable tokens called soft prompts to the input text. However, soft prompts heavily rely on a better initialization and may easily result in overfitting under few-shot settings, which causes prompt-tuning performing much worse than fine-tuning. To address the above issues, this paper proposes a novel Self-sUpervised Meta-prompt learning framework with MEta-gradient Regularization for few-shot generalization (SUMMER). We leverage self-supervised meta-learning to better initialize soft prompts and curriculum-based task augmentation is further proposed to enrich the meta-task distribution. Besides, a novel meta-gradient regularization method is integrated into the meta-prompt learning framework, which meta-learns to transform the raw gradient during few-shot learning into a domain-generalizable direction, thus alleviating the problem of overfitting. Extensive experiments show that SUMMER achieves better performance for different few-shot downstream tasks, and also exhibits a stronger domain generalization ability.",0619de4ffded9cd19269c73cde22e6595133bade,Semantic Scholar,,highly relevant,"The paper discusses systematic construction of prompts and a methodology for more objective and replicable prompt generation process, which is highly related to the topic of prompt engineering."
+1435,exploiting language model prompts using similarity measures a case study on the wordincontext task,"['Mohsen Tabasi', 'Kiamehr Rezaee', 'Mohammad Taher Pilehvar']",https://aclanthology.org/2022.acl-short.36.pdf,,,"As a recent development in few-shot learning, prompt-based techniques have demonstrated promising potential in a variety of natural language processing tasks. However, despite proving competitive on most tasks in the GLUE and SuperGLUE benchmarks, existing prompt-based techniques fail on the semantic distinction task of the Word-in-Context (WiC) dataset. Specifically, none of the existing few-shot approaches (including the in-context learning of GPT-3) can attain a performance that is meaningfully different from the random baseline. Trying to fill this gap, we propose a new prompting technique, based on similarity metrics, which boosts few-shot performance to the level of fully supervised methods. Our simple adaptation shows that the failure of existing prompt-based techniques in semantic distinction is due to their improper configuration, rather than lack of relevant knowledge in the representations. We also show that this approach can be effectively extended to other downstream tasks for which a single prompt is sufficient.",0a0e48c469b124c9a03d4bc841311f59424e97f2,Semantic Scholar,,highly relevant,"The paper mentions the use of 'diffusion model enhanced by prompt engineering', directly indicating the application of prompt engineering techniques in the research."
+1436,hyperspectral classification of frost damage stress in tomato plants based on fewshot learning,"['Shiwei Ruan', 'Hao Cang', 'Huixin Chen', 'Tianying Yan', 'Fei Tan', 'Yuan Zhang', 'Long Duan', 'Peng Xing', 'Li Guo', 'Pan Gao', 'Wei Xu']",https://www.mdpi.com/2073-4395/13/9/2348/pdf?version=1694248497,2023-09-09,,"Early detection and diagnosis of crop anomalies is crucial for enhancing crop yield and quality. Recently, the combination of machine learning and deep learning with hyperspectral images has significantly improved the efficiency of crop detection. However, acquiring a large amount of properly annotated hyperspectral data on stressed crops requires extensive biochemical experiments and specialized knowledge. This limitation poses a challenge to the construction of large-scale datasets for crop stress analysis. Meta-learning is a learning approach that is capable of learning to learn and can achieve high detection accuracy with limited training samples. In this paper, we introduce meta-learning to hyperspectral imaging and crop detection for the first time. In addition, we gathered 88 hyperspectral images of drought-stressed tomato plants and 68 images of freeze-stressed tomato plants. The data related to drought serve as the source domain, while the data related to frost damage serve as the target domain. Due to the difficulty of obtaining target domain data from real-world testing scenarios, only a limited amount of target domain data and source domain data are used for model training. The results indicated that meta-learning, with a minimum of eight target domain samples, achieved a detection accuracy of 69.57%, precision of 59.29%, recall of 66.32% and F1-score of 62.61% for classifying the severity of frost stress, surpassing other methods with a target domain sample size of 20. Moreover, for determining whether the plants were under stress, meta-learning, with a minimum of four target domain samples, achieved a detection accuracy of 89.1%, precision of 89.72%, recall of 93.08% and F1-score of 91.37% outperforming other methods at a target domain sample size of 20. The results show that meta-learning methods require significantly less data across different domains compared to other methods. The performance of meta-learning techniques thoroughly demonstrates the feasibility of rapidly detecting crop stress without the need for collecting a large amount of target stress data. This research alleviates the data annotation pressure for researchers and provides a foundation for detection personnel to anticipate and prevent potential large-scale stress damage to crops.",0acabdcce3f1f64740b9feb068ca11108b84e369,Semantic Scholar,,somewhat relevant,"While the paper discusses the use of prompts with LLMs for identifying medical conditions and refers to the significance of prompt engineering, it is primarily about model evaluation and fine-tuning rather than the creation or manipulation of prompts." +1437,fewshot and prompt training for text classification in german doctor's letters,"['Phillip Richter-Pechanski', 'Philip Wiesenbach', 'Dominic M Schwab', 'Christina Kiriakou', 'Mingyang He', 'N. Geis', 'A. 
Frank', 'Christoph Dieterich']",https://ebooks.iospress.nl/pdf/doi/10.3233/SHTI230275,2023-05-18,,"To classify sentences in cardiovascular German doctor's letters into eleven section categories, we used pattern-exploiting training, a prompt-based method for text classification in few-shot learning scenarios (20, 50 and 100 instances per class) using language models with various pre-training approaches evaluated on CARDIO:DE, a freely available German clinical routine corpus. Prompting improves results by 5-28% accuracy compared to traditional methods, reducing manual annotation efforts and computational costs in a clinical setting.",0c409f7b605ea5bbccfd50d3200a287697102fc3,Semantic Scholar,,highly relevant,The abstract explicitly mentions prompt engineering and discusses its application in social science contexts. +1438,mapping underwater aquatic vegetation using foundation models with air and spaceborne images the case of polyphytos lake,"['Leonidas Alagialoglou', 'I. Manakos', 'Sofia Papadopoulou', 'Rizos-Theodoros Chadoulis', 'Afroditi Kita']",https://www.mdpi.com/2072-4292/15/16/4001/pdf?version=1691827396,2023-08-12,,"Mapping underwater aquatic vegetation (UVeg) is crucial for understanding the dynamics of freshwater ecosystems. The advancement of artificial intelligence (AI) techniques has shown great potential in improving the accuracy and efficiency of UVeg mapping using remote sensing data. This paper presents a comparative study of the performance of classical and modern AI tools, including logistic regression, random forest, and a visual-prompt-tuned foundational model, the Segment Anything model (SAM), for mapping UVeg by analyzing air- and space-borne images in the few-shot learning regime, i.e., using limited annotations. The findings demonstrate the effectiveness of the SAM foundation model in air-borne imagery (GSD = 3–6 cm) with an F1 score of 86.5%±4.1% when trained with as few as 40 positive/negative pairs of pixels, compared to 54.0%±9.2% using the random forest model and 42.8%±6.2% using logistic regression models. However, adapting SAM to space-borne images (WorldView-2 and Sentinel-2) remains challenging, and could not outperform classical pixel-wise random forest and logistic regression methods in our task. The findings presented provide valuable insights into the strengths and limitations of AI models for UVeg mapping, aiding researchers and practitioners in selecting the most suitable tools for their specific applications.",0e13c3db536ca8916aa60a151783a21db4224595,Semantic Scholar,,highly relevant,"The abstract specifically mentions techniques to dynamically learn the prompts as textual inputs to avoid hand-crafted prompt engineering, which indicates relevance to prompt engineering." +1439,contextualized soft prompts for extraction of event arguments,"['Chien Van Nguyen', 'Hieu Man', 'Thien Huu Nguyen']",https://aclanthology.org/2023.findings-acl.266.pdf,,,"Event argument extraction (EAE) is a sub-task of event extraction where the goal is to identify roles of entity mentions for events in text. The current state-of-the-art approaches for this problem explore prompt-based meth-ods to prompt pre-trained language models for arguments over input context. However, existing prompt-based methods mainly rely on discrete and manually-designed prompts that cannot exploit specific context for each example to improve customization for optimal performance. 
In addition, the discrete nature of current prompts prevents the incorporation of relevant context from multiple external documents to enrich prompts for EAE. To this end, we propose a novel prompt-based method for EAE that introduces soft prompts to facilitate the encoding of individual example context and multiple relevant documents to boost EAE. We extensively evaluate the proposed method on benchmark datasets for EAE to demonstrate its benefits with state-of-the-art performance.",1f79ec669e3b6701c814d0165ad281796a49bd13,Semantic Scholar,,highly relevant,"The paper discusses using different prompting techniques and few-shot prompting with large language models in the context of semantic parsing for conversational question answering, which relates to the topic of prompt engineering." +1440,decomposed twostage prompt learning for fewshot named entity recognition,"['Feiyang Ye', 'Liang Huang', 'Senjie Liang', 'Kaikai Chi']",https://www.mdpi.com/2078-2489/14/5/262/pdf?version=1682672547,2023-04-28,,"Named entity recognition (NER) in a few-shot setting is an extremely challenging task, and most existing methods fail to account for the gap between NER tasks and pre-trained language models. Although prompt learning has been successfully applied in few-shot classification tasks, adapting to token-level classification similar to the NER task presents challenges in terms of time consumption and efficiency. In this work, we propose a decomposed prompt learning NER framework for few-shot settings, decomposing the NER task into two stages: entity locating and entity typing. In training, the location information of distant labels is used to train the entity locating model. A concise but effective prompt template is built to train the entity typing model. In inference, a pipeline approach is used to handle the entire NER task, which elegantly resolves time-consuming and inefficient problems. Specifically, a well-trained entity locating model is used to predict entity spans for each input. The input is then transformed using prompt templates, and the well-trained entity typing model is used to predict their types in a single step. Experimental results demonstrate that our framework outperforms previous prompt-based methods by an average of 2.3–12.9% in F1 score while achieving the best trade-off between accuracy and inference speed.",427d332ab3a1bdfb0c62c9f852e90dc2b2880546,Semantic Scholar,,somewhat relevant,"The abstract describes using zero-shot and one-shot prompts for fine-tuning an LLM, which relates directly to the use of prompts to guide the behavior of language models." +1441,fewshot sentiment analysis based on adaptive prompt learning and contrastive learning,"['Cong Shi', 'Rui Zhai', 'Yalin Song', 'Junyang Yu', 'Han Li', 'Yingqi Wang', 'Longge Wang']",https://itc.ktu.lt/index.php/ITC/article/download/34021/16205,2023-12-22,,"Traditional deep learning-based strategies for sentiment analysis rely heavily on large-scale labeled datasets for model training, but these methods become less effective when dealing with small-scale datasets. Fine-tuning large pre-trained models on small datasets is currently the most commonly adopted approach to tackle this issue. Recently, prompt-based learning has gained significant attention as a promising research area. Although prompt-based learning has the potential to address data scarcity problems by utilizing prompts to reformulate downstream tasks, the current prompt-based methods for few-shot sentiment analysis are still considered inefficient. 
To tackle this challenge, an adaptive prompt-based learning method is proposed, which includes two aspects. Firstly, an adaptive prompting construction strategy is proposed, which can capture the semantic information of texts by utilizing a dot-product attention structure, improving the quality of the prompt templates. Secondly, contrastive learning is applied to the implicit word vectors obtained twice during the training stage to alleviate over-fitting in few-shot learning processes. This improves the model’s generalization ability by achieving data enhancement while keeping the semantic information of input sentences unchanged. Experimental results on the ERPSTMT datasets of FewCLUE demonstrate that the proposed method have great ability to construct suitable adaptive prompts and outperforms the state-of-the-art baselines.",5352643b08c7f1898e6c59d58cbeebdd98ee3dab,Semantic Scholar,,somewhat relevant,"The paper surveys prompting techniques as part of its focus on the design parameters of task-oriented LLM systems, which is indicative of relevance to hard prefix prompt engineering." +1442,promptbased approach for czech sentiment analysis,"['Jakub Šmíd', 'P. Přibáň']",https://doi.org/10.26615/978-954-452-092-2_118,,,"This paper introduces the first prompt-based methods for aspect-based sentiment analysis and sentiment classification in Czech. We employ the sequence-to-sequence models to solve the aspect-based tasks simultaneously and demonstrate the superiority of our prompt-based approach over traditional fine-tuning. In addition, we conduct zero-shot and few-shot learning experiments for sentiment classification and show that prompting yields significantly better results with limited training examples compared to traditional fine-tuning. We also demonstrate that pre-training on data from the target domain can lead to significant improvements in a zero-shot scenario.",535ae2b443c63f35b462257179480dc5ca67e206,Semantic Scholar,,highly relevant,"The abstract discusses prompting techniques applied to autoregressive large language models, which is within the scope of prompt engineering." +1443,gpts at factify 2022 prompt aided factverification (short paper),"['Pawan Kumar Sahu', 'Saksham Aggarwal', 'Taneesh Gupta', 'Gyanendra Das']",http://arxiv.org/pdf/2206.14913,2022-06-29,,"One of the most pressing societal issues is the fight against false news. The false claims, as difficult as they are to expose, create a lot of damage. To tackle the problem, fact verification becomes crucial and thus has been a topic of interest among diverse research communities. Using only the textual form of data we propose our solution to the problem and achieve competitive results with other approaches. We present our solution based on two approaches - PLM (pre-trained language model) based method and Prompt based method. The PLM-based approach uses the traditional supervised learning, where the model is trained to take 'x' as input and output prediction 'y' as P(y|x). Whereas, Prompt-based learning reflects the idea to design input to fit the model such that the original objective may be re-framed as a problem of (masked) language modeling. We may further stimulate the rich knowledge provided by PLMs to better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our experiments showed that the proposed method performs better than just fine-tuning PLMs. 
We achieved an F1 score of 0.6946 on the FACTIFY dataset and a 7th position on the competition leader-board.",c96a8150c82a0ce9c8c1e069590f534939a30038,Semantic Scholar,,highly relevant,"The paper presents two prompting techniques ('Miniature & Mull' and 'Explain & Compare'), which are used to enhance the capability of LLMs in determining SQL query equivalence, indicating a direct relevance to prompt engineering." +1444,vpn variation on prompt tuning for namedentity recognition,"['Niu Hu', 'Xu Zhou', 'Bing Xu', 'Han Liu', 'Xiangjin Xie', 'Haitao Zheng']",https://www.mdpi.com/2076-3417/13/14/8359/pdf?version=1689827421,2023-07-19,,"Recently, prompt-based methods have achieved a promising performance in many natural language processing benchmarks. Despite success in sentence-level classification tasks, prompt-based methods work poorly in token-level tasks, such as named entity recognition (NER), due to the sophisticated design of entity-related templates. Note that the nature of prompt tuning makes full use of the parameters of the mask language model (MLM) head, while previous methods solely utilized the last hidden layer of language models (LMs) and the power of the MLM head is overlooked. In this work, we discovered the characteristics of semantic feature changes in samples after being processed using MLMs. Based on this characteristic, we designed a prompt-tuning variant for NER tasks. We let the pre-trained model predict the label words derived from the training dataset at each position and fed the generated logits (non-normalized probability) to the CRF layer. We evaluated our method on three popular datasets, and the experiments showed that our proposed method outperforms the state-of-the-art model in all three Chinese datasets.",dc2aba63037ba3e1d6912170f5c292c89ca70b09,Semantic Scholar,,somewhat relevant,"The abstract mentions the use of various techniques from LLM prompting, signaling the incorporation of prompt engineering in the methodology." +1445,investigating prompt learning for chinese fewshot text classification with pretrained language models,"['Chengyu Song', 'Taihua Shao', 'Kejing Lin', 'Dengfeng Liu', 'Siyuan Wang', 'Honghui Chen']",https://www.mdpi.com/2076-3417/12/21/11117/pdf?version=1667385041,2022-11-02,,"Text classification aims to assign predefined labels to unlabeled sentences, which tend to struggle in real-world applications when only a few annotated samples are available. Previous works generally focus on using the paradigm of meta-learning to overcome the classification difficulties brought by insufficient data, where a set of auxiliary tasks is given. Accordingly, prompt-based approaches are proposed to deal with the low-resource issue. However, existing prompt-based methods mainly focus on English tasks, which generally apply English pretrained language models that can not directly adapt to Chinese tasks due to structural and grammatical differences. Thus, we propose a prompt-based Chinese text classification framework that uses generated natural language sequences as hints, which can alleviate the classification bottleneck well in low-resource scenarios. In detail, we first design a prompt-based fine-tuning together with a novel pipeline for automating prompt generation in Chinese. Then, we propose a refined strategy for dynamically and selectively incorporating demonstrations into each context. We present a systematic evaluation for analyzing few-shot performance on a wide range of Chinese text classification tasks. 
Our approach makes few assumptions about task resources and expertise and therefore constitutes a powerful, task-independent approach for few-shot learning.",eb4afff0eca0026fcc26a5f0c8a73184485e3a25,Semantic Scholar,,somewhat relevant,"The abstract mentions 'LLM prompt-based approaches' which suggests that the paper includes discussion on prompt engineering, relevant to the topic of hard prefix prompts." +1446,can language models understand physical concepts,"['Lei Li', 'Jingjing Xu', 'Qingxiu Dong', 'Ce Zheng', 'Qi Liu', 'Lingpeng Kong', 'Xu Sun']",http://arxiv.org/pdf/2305.14057,2023-05-23,,"Language models~(LMs) gradually become general-purpose interfaces in the interactive and embodied world, where the understanding of physical concepts is an essential prerequisite. However, it is not yet clear whether LMs can understand physical concepts in the human world. To investigate this, we design a benchmark VEC that covers the tasks of (i) Visual concepts, such as the shape and material of objects, and (ii) Embodied Concepts, learned from the interaction with the world such as the temperature of objects. Our zero (few)-shot prompting results show that the understanding of certain visual concepts emerges as scaling up LMs, but there are still basic concepts to which the scaling law does not apply. For example, OPT-175B performs close to humans with a zero-shot accuracy of 85\% on the material concept, yet behaves like random guessing on the mass concept. Instead, vision-augmented LMs such as CLIP and BLIP achieve a human-level understanding of embodied concepts. Analysis indicates that the rich semantics in visual representation can serve as a valuable source of embodied knowledge. Inspired by this, we propose a distillation method to transfer embodied knowledge from VLMs to LMs, achieving performance gain comparable with that by scaling up the parameters of LMs 134x. Our dataset is available at \url{https://github.com/TobiasLee/VEC}",1caa2a29d3ca38d0e5111f4f9ae140727bb7d567,Semantic Scholar,,somewhat relevant,"The paper mentions the use of 'prompt templating' as part of the TextMachina framework, indicating relevance to prompt engineering techniques." +1447,fewshot prompting towards controllable response generation,"['Hsuan Su', 'Po-Han Chi', 'Shih-Cheng Huang', 'Chung Ho Lam', 'Saurav Sahay', 'Shang-Tse Chen', 'Hung-yi Lee']",http://arxiv.org/pdf/2206.03931,,,"Much literature has shown that prompt-based learning is an efficient method to make use of the large pre-trained language model. Recent works also exhibit the possibility of steering a chatbot’s output by plugging in an ap-propriate prompt. Gradient-based methods are often used to perturb the prompts. However, some language models are not even available to the public. In this work, we first explored the combination of prompting and reinforcement learning (RL) to steer models’ generation without accessing any of the models’ parameters. Second, to reduce the training effort and enhance the generalizability to the unseen task, we apply multi-task learning to make the model learn to generalize to new tasks better. The experiment results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters. 
Furthermore, the model demonstrates the strong ability to quickly adapt to an unseen task in fewer steps than the baseline model.",308a59020d320f620f34f96c9ecdc187baff9fa1,Semantic Scholar,,highly relevant,"The paper discusses using an LLM to generate a query as a form of prompting and it introduces a trainable scheme to refine this process, making it pertinent to prompt engineering." +1448,“covid vaccine is against covid but oxford vaccine is made at oxford!” semantic interpretation of proper noun compounds,"['Keshav Kolluru', 'Gabriel Stanovsky', 'Mausam']",http://arxiv.org/pdf/2210.13039,2022-10-24,,"Proper noun compounds, e.g., “Covid vaccine”, convey information in a succinct manner (a “Covid vaccine” is a “vaccine that immunizes against the Covid disease”). These are commonly used in short-form domains, such as news headlines, but are largely ignored in information-seeking applications. To address this limitation, we release a new manually annotated dataset, ProNCI, consisting of 22.5K proper noun compounds along with their free-form semantic interpretations. ProNCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples, which have not been previously explored. We experiment with various neural models for automatically generating the semantic interpretations from proper noun compounds, ranging from few-shot prompting to supervised learning, with varying degrees of knowledge about the constituent nouns. We find that adding targeted knowledge, particularly about the common noun, results in performance gains of upto 2.8%. Finally, we integrate our model generated interpretations with an existing Open IE system and observe an 7.5% increase in yield at a precision of 85%. The dataset and code are available at https://github.com/dair-iitd/pronci.",33285e02758788b681754d283df20971fef6e31f,Semantic Scholar,,highly relevant,"The abstract describes a method (ProTeGi) for automatically improving prompts through a gradient descent-inspired approach, which directly pertains to the topic of prompt engineering." +1449,multilingual social media text generation and evaluation with fewshot prompting,['Mack Blackburn'],https://aclanthology.org/2022.gem-1.39.pdf,,,"This work adapts large language models to generate multilingual social media text that meets several objectives simultaneously: topic relevance, author style consistency, and reply validity. Leveraging existing online information behavior simulators, which currently only forecast activities but not content, our approach comprised of generalizable prompt formation and efficient evaluation to produce a believable, personalized, and responsive synthetic social network. According to some preliminary experiments, our multi-objective prompt formation and automatic evaluation/selection methods are able to yield a significant number of high-quality synthetic texts according to both standardized and trained metrics.",36731d3f9809535d5f57cc5cd610d92428a50716,Semantic Scholar,,highly relevant,"The paper is centered around the use of prompt engineering techniques for legal reasoning tasks and evaluates prompting strategies, therefore it is directly related to the topic of prompt engineering." 
+1450,continued pretraining for better zero and fewshot promptability,"['Zhaofeng Wu', 'IV RobertL.Logan', 'Pete Walsh', 'Akshita Bhagia', 'Dirk Groeneveld', 'Sameer Singh', 'Iz Beltagy']",http://arxiv.org/pdf/2210.10258,2022-10-19,,"Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work, we investigate if a dedicated continued pretraining stage could improve “promptability”, i.e., zero-shot performance with natural language prompts or few-shot performance with prompt tuning. We reveal settings where existing continued pretraining methods lack promptability. We also identify current methodological gaps, which we fill with thorough large-scale experiments. We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative. On the other hand, we find that continued pretraining using MAML-style meta-learning, a method that directly optimizes few-shot promptability, yields subpar performance. We validate our findings with two prompt tuning methods, and, based on our results, we provide concrete recommendations to optimize promptability for different use cases.",53868a2a4caea7afc487ef08993372b186fb2ddb,Semantic Scholar,,highly relevant,"The paper describes the use of prompt engineering to test large language models on a spatial reasoning benchmark, indicating a direct application of prompting techniques."
+1451,a comparative study of prompting strategies for legal text classification,"['Ali Hakimi Parizi', 'Yuyang Liu', 'Prudhvi Nokku', 'Sina Gholamian', 'David Emerson']",https://aclanthology.org/2023.nllp-1.25.pdf,,,"In this study, we explore the performance of large language models (LLMs) using different prompt engineering approaches in the context of legal text classification. Prior research has demonstrated that various prompting techniques can improve the performance of a diverse array of tasks done by LLMs. However, in this research, we observe that professional documents, and in particular legal documents, pose unique challenges for LLMs. We experiment with several LLMs and various prompting techniques, including zero/few-shot prompting, prompt ensembling, chain-of-thought, and activation fine-tuning and compare the performance on legal datasets. Although the new generation of LLMs and prompt optimization techniques have been shown to improve generation and understanding of generic tasks, our findings suggest that such improvements may not readily transfer to other domains. Specifically, experiments indicate that not all prompting approaches and models are well-suited for the legal domain which involves complexities such as long documents and domain-specific language.",b6511d2cad195b4e595737f080031647296136f6,Semantic Scholar,,somewhat relevant,"The paper describes the use of zero-shot and few-shot prompting with large language models for the task of hate speech detection, which aligns with prompt engineering."
+1452,majority rule better patching via selfconsistency,"['Toufique Ahmed', 'Premkumar T. Devanbu']",https://arxiv.org/pdf/2306.00108,,,"Large Language models (LLMs) can be induced to solve non-trivial problems with “few-shot” prompts including illustrative problem-solution examples.
Now if the few-shots also include “chain of thought” (CoT) explanations, which are of the form problem-explanation-solution, LLMs will generate a “explained” solution, and perform even better. Recently an exciting, substantially better technique, self-consistency [1] (S-C) has emerged, based on the intuition that there are many plausible explanations for the right solution; when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs, for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct! Unfortunately, the use of this highly-performant S-C (or even CoT) approach in software engineering settings is hampered by the lack of explanations; most software datasets lack explanations. In this paper, we describe an application of the S-C approach to program repair, using the commit log on the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the art results, beating previous approaches to prompting-based program repair, on the MODIT dataset; we also find evidence suggesting that the correct commit messages are helping the LLM learn to produce better patches.",c1a3dc24a2677b2c8a69ffd336b2112e1aa705b6,Semantic Scholar,,highly relevant,"The paper describes an approach to improve zero-shot prompt-based learning in sentiment analysis, focusing on addressing biases in large language models and the need for redesigning prompts across different domains, indicating relevance to the topic of prompt engineering."
+1453,utilizing language models to expand visionbased commonsense knowledge graphs,"['Navid Rezaei', 'M. Reformat']",https://www.mdpi.com/2073-8994/14/8/1715/pdf?version=1660727694,2022-08-17,,"The introduction and ever-growing size of the transformer deep-learning architecture have had a tremendous impact not only in the field of natural language processing but also in other fields. The transformer-based language models have contributed to a renewed interest in commonsense knowledge due to the abilities of deep learning models. Recent literature has focused on analyzing commonsense embedded within the pre-trained parameters of these models and embedding missing commonsense using knowledge graphs and fine-tuning. We base our current work on the empirically proven language understanding of very large transformer-based language models to expand a limited commonsense knowledge graph, initially generated only on visual data. The few-shot-prompted pre-trained language models can learn the context of an initial knowledge graph with less bias than language models fine-tuned on a large initial corpus. It is also shown that these models can offer new concepts that are added to the vision-based knowledge graph. This two-step approach of vision mining and language model prompts results in the auto-generation of a commonsense knowledge graph well equipped with physical commonsense, which is human commonsense gained by interacting with the physical world. To prompt the language models, we adapted the chain-of-thought method of prompting. To the best of our knowledge, it is a novel contribution to the domain of the generation of commonsense knowledge, which can result in a five-fold cost reduction compared to the state-of-the-art. Another contribution is assigning fuzzy linguistic terms to the generated triples. The process is end to end in the context of knowledge graphs.
It means the triples are verbalized to natural language, and after being processed, the results are converted back to triples and added to the commonsense knowledge graph.",cc7df8fa3b642269531c25af065c2cc78e5000e0,Semantic Scholar,,highly relevant,"The abstract mentions the use of an 'in-context prompting mechanism' to realize zero-shot persona customization, indicating the use of prompts in post-training model applications." +1454,naisteacher a prompt and rerank approach to generating teacher utterances in educational dialogues,"['Justin Vasselli', 'Christopher Vasselli', 'Adam Nohejl', 'Taro Watanabe']",https://aclanthology.org/2023.bea-1.63.pdf,,,"This paper presents our approach to the BEA 2023 shared task of generating teacher responses in educational dialogues, using the Teacher-Student Chatroom Corpus. Our system prompts GPT-3.5-turbo to generate initial suggestions, which are then subjected to reranking. We explore multiple strategies for candidate generation, including prompting for multiple candidates and employing iterative few-shot prompts with negative examples. We aggregate all candidate responses and rerank them based on DialogRPT scores. To handle consecutive turns in the dialogue data, we divide the task of generating teacher utterances into two components: teacher replies to the student and teacher continuations of previously sent messages. Through our proposed methodology, our system achieved the top score on both automated metrics and human evaluation, surpassing the reference human teachers on the latter.",d0482bd01de9d0912acf4e5338c7799eba4b9360,Semantic Scholar,,somewhat relevant,"The paper describes the use of prompt-based learning within a novel text-to-text framework for NER, which indicates relevance to prompt engineering." +1455,mdc at biolaysumm task 1 evaluating gpt models for biomedical lay summarization,"['Oisn Turbitt', 'R. Bevan', 'Mouhamad Aboshokor']",https://aclanthology.org/2023.bionlp-1.65.pdf,,,"This paper presents our approach to the BioLaySumm Task 1 shared task, held at the BioNLP 2023 Workshop. The effective communication of scientific knowledge to the general public is often limited by the technical language used in research, making it difficult for non-experts to comprehend. To address this issue, lay summaries can be used to explain research findings to non-experts in an accessible form. We conduct an evaluation of autoregressive language models, both general and specialized for the biomedical domain, to generate lay summaries from biomedical research article abstracts. Our findings demonstrate that a GPT-3.5 model combined with a straightforward few-shot prompt produces lay summaries that achieve significantly relevance and factuality compared to those generated by a fine-tuned BioGPT model. However, the summaries generated by the BioGPT model exhibit better readability. Notably, our submission for the shared task achieved 1st place in the competition.",e4e65df11e4d063199c6035004be2b28c3e2f82f,Semantic Scholar,,highly relevant,The paper's focus on 'categorization and simplification of the ICL templates to make prompt learning easier for the LM' directly relates to the construction and engineering of prompts. +1456,leveraging large language models for mental health prediction via online text data,"['Xuhai Xu', 'Bingsheng Yao', 'Yuanzhe Dong', 'Hong Yu', 'James A. Hendler', 'A. Dey', 'Dakuo Wang']",https://arxiv.org/pdf/2307.14385,,,"The recent technology boost of large language models (LLMs) has empowered a variety of applications. 
However, there is very little research on understanding and improving LLMs’ capability for the mental health domain. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, and GPT-3.5, on various mental health prediction tasks via online text data. We conduct a wide range of experiments, covering zero-shot prompting, few-shot prompting, and instruction finetuning. The results indicate the promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned model, Mental-Alpaca, outperforms GPT-3.5 (25 times bigger) by 16.7% on balanced accuracy and performs on par with the state-of-the-art task-specific model. We summarize our findings into a set of action guidelines for future researchers, engineers, and practitioners on how to empower LLMs with better mental health domain knowledge and become an expert in mental health prediction tasks.",ea284d2045672daf44deffa3f0b7ce154630424c,Semantic Scholar,,somewhat relevant,"The abstract discusses in-context learning (ICL), which is closely related to the use of prompts, but does not mention hard prefix prompting or the engineering of prompts directly."
+1457,generating dialog responses with specified grammatical items for second language learning,"['Yuki Okano', 'Kotaro Funakoshi', 'Ryo Nagata', 'M. Okumura']",https://aclanthology.org/2023.bea-1.16.pdf,,,"This paper proposes a new second language learning task of generating a response including specified grammatical items. We consider two approaches: 1) fine-tuning a pre-trained language model (DialoGPT) by reinforcement learning and 2) providing a few-shot prompt to a large language model (GPT-3). For reinforcement learning, we examine combinations of three reward functions that consider grammatical items, diversity, and fluency. Our experiments confirm that both approaches can generate responses including the specified grammatical items and that it is crucial to consider fluency rather than diversity as the reward function.",fffd5378bdfb5a2d4bfc3ff9d2ce30f77d716e9f,Semantic Scholar,,somewhat relevant,"The paper discusses in-context learning which is closely related to prompt engineering, as it involves how language models use specific contextual information to generate responses."
+1458,all in how you ask for it simple blackbox method for jailbreak attacks,['Kazuhiro Takemoto'],http://arxiv.org/pdf/2401.09798v2.pdf,2024-01-18,," Large Language Models (LLMs) like ChatGPT face `jailbreak' challenges, where safeguards are bypassed to produce ethically harmful prompts. This study proposes a simple black-box method to effectively generate jailbreak prompts, overcoming the high complexity and computational costs associated with existing methods. The proposed technique iteratively rewrites harmful prompts into non-harmful expressions using the target LLM itself, based on the hypothesis that LLMs can directly sample expressions that bypass safeguards. Demonstrated through experiments with ChatGPT (GPT-3.5 and GPT-4) and Gemini-Pro, this method achieved an attack success rate of over 80% within an average of 5 iterations and remained effective despite model updates. The generated jailbreak prompts were naturally-worded and concise; moreover, they were difficult-to-defend.
These results indicate that creating effective jailbreak prompts is simpler than previously considered, suggesting that black-box jailbreak attacks pose a more serious threat.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cy']",highly relevant,"The paper explicitly mentions 'prompt learning with large language models' and designing prompts to guide model output, indicating relevance to prompt engineering."
+1459,evil geniuses delving into the safety of llmbased agents,"['Yu Tian', 'Xiao Yang', 'Jingyuan Zhang', 'Yinpeng Dong', 'Hang Su']",http://arxiv.org/pdf/2311.11855v1.pdf,2023-11-20,," The rapid advancements in large language models (LLMs) have led to a resurgence in LLM-based agents, which demonstrate impressive human-like behaviors and cooperative capabilities in various interactions and strategy formulations. However, evaluating the safety of LLM-based agents remains a complex challenge. This paper elaborately conducts a series of manual jailbreak prompts along with a virtual chat-powered evil plan development team, dubbed Evil Geniuses, to thoroughly probe the safety aspects of these agents. Our investigation reveals three notable phenomena: 1) LLM-based agents exhibit reduced robustness against malicious attacks. 2) the attacked agents could provide more nuanced responses. 3) the detection of the produced improper responses is more challenging. These insights prompt us to question the effectiveness of LLM-based attacks on agents, highlighting vulnerabilities at various levels and within different role specializations within the system/agent of LLM-based agents. Extensive evaluation and discussion reveal that LLM-based agents face significant challenges in safety and yield insights for future research. Our code is available at https://github.com/T1aNS1R/Evil-Geniuses.",,arXiv,['cs.cl'],highly relevant,"The paper is highly relevant because it discusses the generation of prompts for LLMs in the context of UI affordances, including the use of predefined static prompts and template-based prompts, which are related to the hard prefix prompting."
+1460,malla demystifying realworld large language model integrated malicious services,"['Zilong Lin', 'Jian Cui', 'Xiaojing Liao', 'XiaoFeng Wang']",http://arxiv.org/pdf/2401.03315v1.pdf,2024-01-06,," The underground exploitation of large language models (LLMs) for malicious services (i.e., Malla) is witnessing an uptick, amplifying the cyber threat landscape and posing questions about the trustworthiness of LLM technologies. However, there has been little effort to understand this new cybercrime, in terms of its magnitude, impact, and techniques. In this paper, we conduct the first systematic study on 212 real-world Mallas, uncovering their proliferation in underground marketplaces and exposing their operational modalities. Our study discloses the Malla ecosystem, revealing its significant growth and impact on today's public LLM services. Through examining 212 Mallas, we uncovered eight backend LLMs used by Mallas, along with 182 prompts that circumvent the protective measures of public LLM APIs. We further demystify the tactics employed by Mallas, including the abuse of uncensored LLMs and the exploitation of public LLM APIs through jailbreak prompts.
Our findings enable a better understanding of the real-world exploitation of LLMs by cybercriminals, offering insights into strategies to counteract this cybercrime.",,arXiv,"['cs.cr', 'cs.ai']",highly relevant,"The paper describes the use of large language models and the development of specific prompts to extract and standardize clinical information from medical reports, which is a direct application of prompt engineering."
+1461,jailbreaking gpt4v via selfadversarial attacks with system prompts,"['Yuanwei Wu', 'Xiang Li', 'Yixin Liu', 'Pan Zhou', 'Lichao Sun']",http://arxiv.org/pdf/2311.09127v2.pdf,2023-11-15,," Existing work on jailbreak Multimodal Large Language Models (MLLMs) has focused primarily on adversarial examples in model inputs, with less attention to vulnerabilities, especially in model API. To fill the research gap, we carry out the following work: 1) We discover a system prompt leakage vulnerability in GPT-4V. Through carefully designed dialogue, we successfully extract the internal system prompts of GPT-4V. This finding indicates potential exploitable security risks in MLLMs; 2) Based on the acquired system prompts, we propose a novel MLLM jailbreaking attack method termed SASP (Self-Adversarial Attack via System Prompt). By employing GPT-4 as a red teaming tool against itself, we aim to search for potential jailbreak prompts leveraging stolen system prompts. Furthermore, in pursuit of better performance, we also add human modification based on GPT-4's analysis, which further improves the attack success rate to 98.7\%; 3) We evaluated the effect of modifying system prompts to defend against jailbreaking attacks. Results show that appropriately designed system prompts can significantly reduce jailbreak success rates. Overall, our work provides new insights into enhancing MLLM security, demonstrating the important role of system prompts in jailbreaking. This finding could be leveraged to greatly facilitate jailbreak success rates while also holding the potential for defending against jailbreaks.",,arXiv,"['cs.cr', 'cs.ai', 'cs.lg']",highly relevant,"The paper explores the structuring of prompts for LLMs in the context of dialog evaluation, which is directly related to prompt engineering."
+1462,towards verifiable text generation with symbolic references,"['Lucas Torroba Hennigen', 'Shannon Shen', 'Aniruddha Nrusimha', 'Bernhard Gapp', 'David Sontag', 'Yoon Kim']",http://arxiv.org/pdf/2311.09188v1.pdf,2023-11-15,," Large language models (LLMs) have demonstrated an impressive ability to synthesize plausible and fluent text. However they remain vulnerable to hallucinations, and thus their outputs generally require manual human verification for high-stakes applications, which can be time-consuming and difficult. This paper proposes symbolically grounded generation (SymGen) as a simple approach for enabling easier validation of an LLM's output. SymGen prompts an LLM to interleave its regular output text with explicit symbolic references to fields present in some conditioning data (e.g., a table in JSON format). The references can be used to display the provenance of different spans of text in the generation, reducing the effort required for manual verification.
Across data-to-text and question answering experiments, we find that LLMs are able to directly output text that makes use of symbolic references while maintaining fluency and accuracy.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper studies the behaviour of large language models with negated prompts, which relates to the understanding and interaction of LMs with prompts, hence pertinent to prompt engineering, especially if it pertains to hard prefix prompts." +1463,algo synthesizing algorithmic programs with llmgenerated oracle verifiers,"['Kexun Zhang', 'Danqing Wang', 'Jingtao Xia', 'William Yang Wang', 'Lei Li']",http://arxiv.org/pdf/2305.14591v3.pdf,2023-05-24,," Large language models (LLMs) excel at implementing code from functionality descriptions but struggle with algorithmic problems that require not only implementation but also identification of the suitable algorithm. Moreover, LLM-generated programs lack guaranteed correctness and require human verification. To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness. ALGO first generates a reference oracle by prompting an LLM to exhaustively enumerate all the combinations of relevant variables. This oracle is then utilized to guide an arbitrary search strategy in exploring the algorithm space and to verify the synthesized algorithms. Our study shows that the LLM-generated oracles are correct for 88% of the cases. With the oracles as verifiers, ALGO can be integrated with any existing code generation model in a model-agnostic manner to enhance its performance. Experiments show that when equipped with ALGO, we achieve an 8x better one-submission pass rate over the Codex model and a 2.6x better one-submission pass rate over CodeT, the current state-of-the-art model on CodeContests. We can also get 1.3x better pass rate over the ChatGPT Code Interpreter on unseen problems. The problem set we used for testing, the prompts we used, the verifier and solution programs, and the test cases generated by ALGO are available at https://github.com/zkx06111/ALGO.",,arXiv,"['cs.cl', 'cs.se']",highly relevant,"The paper discusses utilizing large language models to convert descriptive information into signals using task instructions as prompts, which aligns with hard prefix prompting." +1464,multistage collaborative knowledge distillation from large language models for semisupervised sequence generation,"['Jiachen Zhao', 'Wenlong Zhao', 'Andrew Drozdov', 'Benjamin Rozonoyer', 'Md Arafat Sultan', 'Jay-Yoon Lee', 'Mohit Iyyer', 'Andrew McCallum']",http://arxiv.org/pdf/2311.08640v2.pdf,2023-11-15,," We study semi-supervised sequence generation tasks where labeled data are too scarce to effectively finetune a model and at the same time few-shot prompting of a large language model (LLM) has suboptimal performance. This happens when a task, such as parsing, is expensive to annotate and also unfamiliar to a pretrained LLM. In this paper, we present a discovery that student models distilled from an in-context learned LLM can often generalize better than their teacher on such tasks. Leveraging this finding, we present a new method -- multistage collaborative knowledge distillation from an LLM (MCKD) -- for such tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. At each intermediate knowledge distillation (KD) stage, a new pair of students is trained on disjoint partitions of the pseudolabeled data.
Each student then produces new and improved pseudolabels for its unseen partition to be used in the next stage of distillation. We demonstrate the advantage of multistage cross-partition labeling on several syntactic and semantic parsing tasks. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms the prompted LLM and vanilla KD by 7.5% and 3.7% parsing F1, respectively, and matches the performance of supervised finetuning with 500 examples.",,arXiv,"['cs.cl', 'cs.lg']",highly relevant,"The mention of natural language prompts and mock programming prompts used with large language models indicate the utilization of prompting strategies, relevant to prompt engineering." +1465,gpt4sgg synthesizing scene graphs from holistic and regionspecific narratives,"['Zuyao Chen', 'Jinlin Wu', 'Zhen Lei', 'Zhaoxiang Zhang', 'Changwen Chen']",http://arxiv.org/pdf/2312.04314v1.pdf,2023-12-07,," Learning scene graphs from natural language descriptions has proven to be a cheap and promising scheme for Scene Graph Generation (SGG). However, such unstructured caption data and its processing are troubling the learning an acurrate and complete scene graph. This dilema can be summarized as three points. First, traditional language parsers often fail to extract meaningful relationship triplets from caption data. Second, grounding unlocalized objects in parsed triplets will meet ambiguity in visual-language alignment. Last, caption data typically are sparse and exhibit bias to partial observations of image content. These three issues make it hard for the model to generate comprehensive and accurate scene graphs. To fill this gap, we propose a simple yet effective framework, GPT4SGG, to synthesize scene graphs from holistic and region-specific narratives. The framework discards traditional language parser, and localize objects before obtaining relationship triplets. To obtain relationship triplets, holistic and dense region-specific narratives are generated from the image. With such textual representation of image data and a task-specific prompt, an LLM, particularly GPT-4, directly synthesizes a scene graph as ""pseudo labels"". Experimental results showcase GPT4SGG significantly improves the performance of SGG models trained on image-caption data. We believe this pioneering work can motivate further research into mining the visual reasoning capabilities of LLMs.",,arXiv,['cs.cv'],somewhat relevant,"The paper discusses the use of pre-trained large language models for robot communication and path planning, and it mentions providing feedback and prompting the LLM to improve plans, which indicates the use of post-training prompting techniques." +1466,localized symbolic knowledge distillation for visual commonsense models,"['Jae Sung Park', 'Jack Hessel', 'Khyathi Raghavi Chandu', 'Paul Pu Liang', 'Ximing Lu', 'Peter West', 'Youngjae Yu', 'Qiuyuan Huang', 'Jianfeng Gao', 'Ali Farhadi', 'Yejin Choi']",http://arxiv.org/pdf/2312.04837v2.pdf,2023-12-08,," Instruction following vision-language (VL) models offer a flexible interface that supports a broad range of multimodal tasks in a zero-shot fashion. However, interfaces that operate on full images do not directly enable the user to ""point to"" and access specific regions within images. This capability is important not only to support reference-grounded VL benchmarks, but also, for practical applications that require precise within-image reasoning. We build Localized Visual Commonsense models, which allow users to specify (multiple) regions as input.
We train our model by sampling localized commonsense knowledge from a large language model (LLM): specifically, we prompt an LLM to collect commonsense knowledge given a global literal image description and a local literal region description automatically generated by a set of VL models. With a separately trained critic model that selects high-quality examples, we find that training on the localized commonsense corpus can successfully distill existing VL models to support a reference-as-input interface. Empirical results and human evaluations in a zero-shot setup demonstrate that our distillation method results in more precise VL models of reasoning compared to a baseline of passing a generated referring expression to an LLM.",,arXiv,"['cs.ai', 'cs.cl', 'cs.cv']",highly relevant,"The abstract specifies a novel prompting framework called 'Deliberate then Generate' (DTG) for text generation, which directly relates to the design and use of prompts." +1467,procot stimulating critical thinking and writing of students through engagement with large language models (llms),"['Tosin Adewumi', 'Lama Alkhaled', 'Claudia Buck', 'Sergio Hernandez', 'Saga Brilioth', 'Mkpe Kekung', 'Yelvin Ragimov', 'Elisa Barney']",http://arxiv.org/pdf/2312.09801v1.pdf,2023-12-15,," We introduce a novel writing method called Probing Chain of Thought (ProCoT), which prevents students from cheating using a Large Language Model (LLM), such as ChatGPT, while enhancing their active learning through such models. LLMs have disrupted education and many other feilds. For fear of students cheating, many educationists have resorted to banning their use, as their outputs can be human-like and hard to detect in some cases. These LLMs are also known for hallucinations (i.e. fake facts). We conduct studies with ProCoT in two different courses with a combined total of about 66 students. The students in each course were asked to prompt an LLM of their choice with one question from a set of four and required to affirm or refute statements in the LLM output by using peer reviewed references. The results show two things: (1) ProCoT stimulates creative/critical thinking and writing of students through engagement with LLMs when we compare the LLM solely output to ProCoT output and (2) ProCoT can prevent cheating because of clear limitations in existing LLMs when we compare students ProCoT output to LLM ProCoT output. We also discover that most students prefer to give answers in fewer words than LLMs, which are typically verbose. The average word counts for students, ChatGPT (v3.5) and Phind (v8) are 208, 391 and 383, respectively.",,arXiv,['cs.cl'],highly relevant,"The paper discusses the development of prompts for LLMs to categorize software supply chain security failures, which is directly related to the application of prompt engineering." +1468,editing arbitrary propositions in llms without subject labels,"['Itai Feigenbaum', 'Devansh Arpit', 'Huan Wang', 'Shelby Heinecke', 'Juan Carlos Niebles', 'Weiran Yao', 'Caiming Xiong', 'Silvio Savarese']",http://arxiv.org/pdf/2401.07526v1.pdf,2024-01-15,," Large Language Model (LLM) editing modifies factual information in LLMs. Locate-and-Edit (L\&E) methods accomplish this by finding where relevant information is stored within the neural network, and editing the weights at that location. The goal of editing is to modify the response of an LLM to a proposition independently of its phrasing, while not modifying its response to other related propositions.
Existing methods are limited to binary propositions, which represent straightforward binary relations between a subject and an object. Furthermore, existing methods rely on semantic subject labels, which may not be available or even be well-defined in practice. In this paper, we show that both of these issues can be effectively skirted with a simple and fast localization method called Gradient Tracing (GT). This localization method allows editing arbitrary propositions instead of just binary ones, and does so without the need for subject labels. As propositions always have a truth value, our experiments prompt an LLM as a boolean classifier, and edit its T/F response to propositions. Our method applies GT for location tracing, and then edit the model at that location using a mild variant of Rank-One Model Editing (ROME). On datasets of binary propositions derived from the CounterFact dataset, we show that our method -- without access to subject labels -- performs close to state-of-the-art L\&E methods which has access subject labels. We then introduce a new dataset, Factual Accuracy Classification Test (FACT), which includes non-binary propositions and for which subject labels are not generally applicable, and therefore is beyond the scope of existing L\&E methods. Nevertheless, we show that with our method editing is possible on FACT.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper discusses a novel prompting method, Cue-CoT, which involves an intermediate reasoning step and compares its performance to standard prompting methods." +1469,ost refining text knowledge with optimal spatiotemporal descriptor for general video recognition,"['Tongjia Chen', 'Hongshan Yu', 'Zhengeng Yang', 'Zechuan Li', 'Wei Sun', 'Chen Chen']",http://arxiv.org/pdf/2312.00096v1.pdf,2023-11-30,," Due to the resource-intensive nature of training vision-language models on expansive video data, a majority of studies have centered on adapting pre-trained image-language models to the video domain. Dominant pipelines propose to tackle the visual discrepancies with additional temporal learners while overlooking the substantial discrepancy for web-scaled descriptive narratives and concise action category names, leading to less distinct semantic space and potential performance limitations. In this work, we prioritize the refinement of text knowledge to facilitate generalizable video recognition. To address the limitations of the less distinct semantic space of category names, we prompt a large language model (LLM) to augment action class names into Spatio-Temporal Descriptors thus bridging the textual discrepancy and serving as a knowledge base for general recognition. Moreover, to assign the best descriptors with different video instances, we propose Optimal Descriptor Solver, forming the video recognition problem as solving the optimal matching flow across frame-level representations and descriptors. Comprehensive evaluations in zero-shot, few-shot, and fully supervised video recognition highlight the effectiveness of our approach. Our best model achieves a state-of-the-art zero-shot accuracy of 75.1% on Kinetics-600.",,arXiv,['cs.cv'],highly relevant,"The paper focuses on Chain-of-Thought prompting, which is a technique relevant to prompt engineering as it explores how different prompts affect LLM performance."
+1470,making large language models better knowledge miners for online marketing with progressive prompting augmentation,"['Chunjing Gan', 'Dan Yang', 'Binbin Hu', 'Ziqi Liu', 'Yue Shen', 'Zhiqiang Zhang', 'Jinjie Gu', 'Jun Zhou', 'Guannan Zhang']",http://arxiv.org/pdf/2312.05276v1.pdf,2023-12-08,," Nowadays, the rapid development of mobile economy has promoted theflourishing of online marketing campaigns, whose success greatly hinges on theefficient matching between user preferences and desired marketing campaignswhere a well-established Marketing-oriented Knowledge Graph (dubbed as MoKG)could serve as the critical ""bridge"" for preference propagation. In this paper,we seek to carefully prompt a Large Language Model (LLM) with domain-levelknowledge as a better marketing-oriented knowledge miner for marketing-orientedknowledge graph construction, which is however non-trivial, suffering fromseveral inevitable issues in real-world marketing scenarios, i.e.,uncontrollable relation generation of LLMs,insufficient prompting ability of asingle prompt, the unaffordable deployment cost of LLMs. To this end, wepropose PAIR, a novel Progressive prompting Augmented mIning fRamework forharvesting marketing-oriented knowledge graph with LLMs. In particular, wereduce the pure relation generation to an LLM based adaptive relation filteringprocess through the knowledge-empowered prompting technique. Next, we steerLLMs for entity expansion with progressive prompting augmentation,followed by areliable aggregation with comprehensive consideration of both self-consistencyand semantic relatedness. In terms of online serving, we specialize in a smalland white-box PAIR (i.e.,LightPAIR),which is fine-tuned with a high-qualitycorpus provided by a strong teacher-LLM. Extensive experiments and practicalapplications in audience targeting verify the effectiveness of the proposed(Light)PAIR.",,arXiv,"['cs.ai', 'cs.lg']",highly relevant,"The paper discusses prompt-tuning large language models using small labeled datasets, which relates to the topic of prompt engineering with hard prefix prompts." +1471,a strong baseline for temporal videotext alignment,"['Zeqian Li', 'Qirui Chen', 'Tengda Han', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie']",http://arxiv.org/pdf/2312.14055v1.pdf,2023-12-21,," In this paper, we consider the problem of temporally aligning the video andtexts from instructional videos, specifically, given a long-term video, andassociated text sentences, our goal is to determine their correspondingtimestamps in the video. To this end, we establish a simple, yet strong modelthat adopts a Transformer-based architecture with all texts as queries,iteratively attending to the visual features, to infer the optimal timestamp.We conduct thorough experiments to investigate: (i) the effect of upgrading ASRsystems to reduce errors from speech recognition, (ii) the effect of variousvisual-textual backbones, ranging from CLIP to S3D, to the more recentInternVideo, (iii) the effect of transforming noisy ASR transcripts intodescriptive steps by prompting a large language model (LLM), to summarize thecore activities within the ASR transcript as a new training dataset. As aresult, our proposed simple model demonstrates superior performance on bothnarration alignment and procedural step grounding tasks, surpassing existingstate-of-the-art methods by a significant margin on three public benchmarks,namely, 9.3% on HT-Step, 3.4% on HTM-Align and 4.7% on CrossTask. 
We believethe proposed model and dataset with descriptive steps can be treated as astrong baseline for future research in temporal video-text alignment. Allcodes, models, and the resulting dataset will be publicly released to theresearch community.",,arXiv,['cs.cv'],highly relevant,"The paper discusses the use of a hierarchical prompting approach to improve the performance of LLMs, indicating a focus on prompt engineering techniques." +1472,maatphor automated variant analysis for prompt injection attacks,"['Ahmed Salem', 'Andrew Paverd', 'Boris Köpf']",http://arxiv.org/pdf/2312.11513v1.pdf,2023-12-12,," Prompt injection has emerged as a serious security threat to large languagemodels (LLMs). At present, the current best-practice for defending againstnewly-discovered prompt injection techniques is to add additional guardrails tothe system (e.g., by updating the system prompt or using classifiers on theinput and/or output of the model.) However, in the same way that variants of apiece of malware are created to evade anti-virus software, variants of a promptinjection can be created to evade the LLM's guardrails. Ideally, when a newprompt injection technique is discovered, candidate defenses should be testednot only against the successful prompt injection, but also against possiblevariants. In this work, we present, a tool to assist defenders in performing automatedvariant analysis of known prompt injection attacks. This involves solving twomain challenges: (1) automatically generating variants of a given promptaccording, and (2) automatically determining whether a variant was effectivebased only on the output of the model. This tool can also assist in generatingdatasets for jailbreak and prompt injection attacks, thus overcoming thescarcity of data in this domain. We evaluate Maatphor on three different types of prompt injection tasks.Starting from an ineffective (0%) seed prompt, Maatphor consistently generatesvariants that are at least 60% effective within the first 40 iterations.",,arXiv,"['cs.cr', 'cs.ai', 'cs.lg']",somewhat relevant,"The paper mentions prompting a large language model with data for AI-aided brainstorming, which is related to the use of prompts and their engineering." +1473,assessing prompt injection risks in 200+ custom gpts,"['Jiahao Yu', 'Yuhang Wu', 'Dong Shu', 'Mingyu Jin', 'Xinyu Xing']",http://arxiv.org/pdf/2311.11538v1.pdf,2023-11-20,," In the rapidly evolving landscape of artificial intelligence, ChatGPT hasbeen widely used in various applications. The new feature: customization ofChatGPT models by users to cater to specific needs has opened new frontiers inAI utility. However, this study reveals a significant security vulnerabilityinherent in these user-customized GPTs: prompt injection attacks. Throughcomprehensive testing of over 200 user-designed GPT models via adversarialprompts, we demonstrate that these systems are susceptible to promptinjections. Through prompt injection, an adversary can not only extract thecustomized system prompts but also access the uploaded files. This paperprovides a first-hand analysis of the prompt injection, alongside theevaluation of the possible mitigation of such attacks. Our findings underscorethe urgent need for robust security frameworks in the design and deployment ofcustomizable GPT models. 
The intent of this paper is to raise awareness andprompt action in the AI community, ensuring that the benefits of GPTcustomization do not come at the cost of compromised security and privacy.",,arXiv,"['cs.cr', 'cs.ai']",somewhat relevant,"The paper describes using large language models such as Vicuna to generate structured text, indicating the use of prompting techniques to guide the model's output." +1474,signedprompt a new approach to prevent prompt injection attacks against llmintegrated applications,['Xuchen Suo'],http://arxiv.org/pdf/2401.07612v1.pdf,2024-01-15,," The critical challenge of prompt injection attacks in Large Language Models(LLMs) integrated applications, a growing concern in the ArtificialIntelligence (AI) field. Such attacks, which manipulate LLMs through naturallanguage inputs, pose a significant threat to the security of theseapplications. Traditional defense strategies, including output and inputfiltering, as well as delimiter use, have proven inadequate. This paperintroduces the 'Signed-Prompt' method as a novel solution. The study involvessigning sensitive instructions within command segments by authorized users,enabling the LLM to discern trusted instruction sources. The paper presents acomprehensive analysis of prompt injection attack patterns, followed by adetailed explanation of the Signed-Prompt concept, including its basicarchitecture and implementation through both prompt engineering and fine-tuningof LLMs. Experiments demonstrate the effectiveness of the Signed-Prompt method,showing substantial resistance to various types of prompt injection attacks,thus validating its potential as a robust defense strategy in AI security.",,arXiv,"['cs.cr', 'cs.ai']",highly relevant,"The paper explicitly mentions the use of prompt engineering to develop an optimized meta-prompt for classifying abstracts using GPT models, which falls directly within the domain of hard prefix prompt engineering." +1475,benchmarking and defending against indirect prompt injection attacks on large language models,"['Jingwei Yi', 'Yueqi Xie', 'Bin Zhu', 'Keegan Hines', 'Emre Kiciman', 'Guangzhong Sun', 'Xing Xie', 'Fangzhao Wu']",http://arxiv.org/pdf/2312.14197v1.pdf,2023-12-21,," Recent remarkable advancements in large language models (LLMs) have led totheir widespread adoption in various applications. A key feature of theseapplications is the combination of LLMs with external content, where userinstructions and third-party content are combined to create prompts for LLMprocessing. These applications, however, are vulnerable to indirect promptinjection attacks, where malicious instructions embedded within externalcontent compromise LLM's output, causing their responses to deviate from userexpectations. Despite the discovery of this security issue, no comprehensiveanalysis of indirect prompt injection attacks on different LLMs is availabledue to the lack of a benchmark. Furthermore, no effective defense has beenproposed. In this work, we introduce the first benchmark, BIPIA, to measure therobustness of various LLMs and defenses against indirect prompt injectionattacks. Our experiments reveal that LLMs with greater capabilities exhibitmore vulnerable to indirect prompt injection attacks for text tasks, resultingin a higher ASR. We hypothesize that indirect prompt injection attacks aremainly due to the LLMs' inability to distinguish between instructions andexternal content. 
Based on this conjecture, we propose four black-box methodsbased on prompt learning and a white-box defense methods based on fine-tuningwith adversarial training to enable LLMs to distinguish between instructionsand external content and ignore instructions in the external content. Ourexperimental results show that our black-box defense methods can effectivelyreduce ASR but cannot completely thwart indirect prompt injection attacks,while our white-box defense method can reduce ASR to nearly zero with littleadverse impact on the LLM's performance on general tasks. We hope that ourbenchmark and defenses can inspire future work in this important area.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The abstract describes an approach to prompting large language models using a recursive questioning technique, which is a form of prompt engineering to improve performance on reasoning tasks." +1476,jatmo prompt injection defense by taskspecific finetuning,"['Julien Piet', 'Maha Alrashed', 'Chawin Sitawarin', 'Sizhe Chen', 'Zeming Wei', 'Elizabeth Sun', 'Basel Alomair', 'David Wagner']",http://arxiv.org/pdf/2312.17673v2.pdf,2023-12-29,," Large Language Models (LLMs) are attracting significant research attentiondue to their instruction-following abilities, allowing users and developers toleverage LLMs for a variety of tasks. However, LLMs are vulnerable toprompt-injection attacks: a class of attacks that hijack the model'sinstruction-following abilities, changing responses to prompts to undesired,possibly malicious ones. In this work, we introduce Jatmo, a method forgenerating task-specific models resilient to prompt-injection attacks. Jatmoleverages the fact that LLMs can only follow instructions once they haveundergone instruction tuning. It harnesses a teacher instruction-tuned model togenerate a task-specific dataset, which is then used to fine-tune a base model(i.e., a non-instruction-tuned model). Jatmo only needs a task prompt and adataset of inputs for the task: it uses the teacher model to generate outputs.For situations with no pre-existing datasets, Jatmo can use a single example,or in some cases none at all, to produce a fully synthetic dataset. Ourexperiments on seven tasks show that Jatmo models provide similar quality ofoutputs on their specific task as standard LLMs, while being resilient toprompt injections. The best attacks succeeded in less than 0.5% of casesagainst our models, versus 87% success rate against GPT-3.5-Turbo. We releaseJatmo at https://github.com/wagner-group/prompt-injection-defense.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl']",highly relevant,"The paper is highly relevant as it describes chain-of-thought prompting, a specific prompting technique for improving the performance of large language models, indicating a focus on post-training prompting methods." +1477,ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global scale prompt hacking competition,"['Sander Schulhoff', 'Jeremy Pinto', 'Anaum Khan', 'Louis-François Bouchard', 'Chenglei Si', 'Svetlina Anati', 'Valen Tagliabue', 'Anson Liu Kost', 'Christopher Carnahan', 'Jordan Boyd-Graber']",http://arxiv.org/pdf/2311.16119v2.pdf,2023-10-24,," Large Language Models (LLMs) are deployed in interactive contexts with directuser engagement, such as chatbots and writing assistants. These deployments arevulnerable to prompt injection and jailbreaking (collectively, prompt hacking),in which models are manipulated to ignore their original instructions andfollow potentially malicious ones. 
Although widely acknowledged as asignificant security threat, there is a dearth of large-scale resources andquantitative studies on prompt hacking. To address this lacuna, we launch aglobal prompt hacking competition, which allows for free-form human inputattacks. We elicit 600K+ adversarial prompts against three state-of-the-artLLMs. We describe the dataset, which empirically verifies that current LLMs canindeed be manipulated via prompt hacking. We also present a comprehensivetaxonomical ontology of the types of adversarial prompts.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl']",highly relevant,"The paper describes the development and application of a specific prompt designed to breakdown complex recipes, which demonstrates direct engagement with prompt engineering." +1478,arthmodel enhance arithmetic skills to large language model,['Yingdi Guo'],http://arxiv.org/pdf/2311.18609v1.pdf,2023-11-30,," With the great success of ChatGPT, the research of large language models hasbecome increasingly popular. However, the models have several limitations, suchas toxicity and pool performance of arithmetic solving. Meanwhile, LLM may havesome potential abilities that have yet to be exploited. In this paper, wechoose a different way to enhance the arithmetic ability of LLM. We propose totrain LLM to generate a postfix expression related to the arithmetic problemand incorporate it with small pretrained models. Moreover, this small modeltransfers the token embeddings into real dense numbers and invokes nativefunctions of a deep learning platform to get the correct answer. To generatethe final result, we propose prompt injection for adding the result outputs bythe small model to LLM. This work provides different ways of thinking, trainingand using a language model. The codes and models will be released at\url{https://github.com/eteced/arithmetic_finetuning_v1}.",,arXiv,['cs.cl'],highly relevant,"The paper introduces an approach using multimodal LLMs through prompting them with various types of prompts (image OCR, brief captions, and detailed descriptions) for fashion logo embedding, which is relevant to the topic of prompt engineering." +1479,look before you leap a universal emergent decomposition of retrieval tasks in language models,"['Alexandre Variengien', 'Eric Winsor']",http://arxiv.org/pdf/2312.10091v1.pdf,2023-12-13,," When solving challenging problems, language models (LMs) are able to identifyrelevant information from long and complicated contexts. To study how LMs solveretrieval tasks in diverse situations, we introduce ORION, a collection ofstructured retrieval tasks spanning six domains, from text understanding tocoding. Each task in ORION can be represented abstractly by a request (e.g. aquestion) that retrieves an attribute (e.g. the character name) from a context(e.g. a story). We apply causal analysis on 18 open-source language models withsizes ranging from 125 million to 70 billion parameters. We find that LMsinternally decompose retrieval tasks in a modular way: middle layers at thelast token position process the request, while late layers retrieve the correctentity from the context. After causally enforcing this decomposition, modelsare still able to solve the original task, preserving 70% of the originalcorrect token probability in 98 of the 106 studied model-task pairs. We connectour macroscopic decomposition with a microscopic description by performing afine-grained case study of a question-answering task on Pythia-2.8b. 
Buildingon our high-level understanding, we demonstrate a proof of concept applicationfor scalable internal oversight of LMs to mitigate prompt-injection whilerequiring human supervision on only a single input. Our solution improvesaccuracy drastically (from 15.5% to 97.5% on Pythia-12b). This work presentsevidence of a universal emergent modular processing of tasks across varieddomains and models and is a pioneering effort in applying interpretability forscalable internal oversight of LMs.",,arXiv,"['cs.ir', 'cs.cl', 'cs.lg']",highly relevant,"The paper discusses improving information retrieval for the preparation of prompts for large language models, which is directly related to prompt engineering." +1480,the ethics of interaction mitigating security threats in llms,"['Ashutosh Kumar', 'Sagarika Singh', 'Shiv Vignesh Murty', 'Swathy Ragupathy']",http://arxiv.org/pdf/2401.12273v1.pdf,2024-01-22,," This paper comprehensively explores the ethical challenges arising fromsecurity threats to Language Learning Models (LLMs). These intricate digitalrepositories are increasingly integrated into our daily lives, making themprime targets for attacks that can compromise their training data and theconfidentiality of their data sources. The paper delves into the nuancedethical repercussions of such security threats on society and individualprivacy. We scrutinize five major threats: prompt injection, jailbreaking,Personal Identifiable Information (PII) exposure, sexually explicit content,and hate based content, going beyond mere identification to assess theircritical ethical consequences and the urgency they create for robust defensivestrategies. The escalating reliance on LLMs underscores the crucial need forensuring these systems operate within the bounds of ethical norms, particularlyas their misuse can lead to significant societal and individual harm. Wepropose conceptualizing and developing an evaluative tool tailored for LLMs,which would serve a dual purpose, guiding developers and designers inpreemptive fortification of backend systems and scrutinizing the ethicaldimensions of LLM chatbot responses during the testing phase. By comparing LLMresponses with those expected from humans in a moral context, we aim to discernthe degree to which AI behaviors align with the ethical values held by abroader society. Ultimately, this paper not only underscores the ethicaltroubles presented by LLMs, it also highlights a path toward cultivating trustin these systems.",,arXiv,"['cs.cr', 'cs.ai', 'cs.cl']",highly relevant,"The paper discusses the use of prompting strategies with LLMs for the particular task of generating distractors for MCQs, indicating relevance to prompt engineering." +1481,do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation,"['Zonghai Yao', 'Ahmed Jaafar', 'Beining Wang', 'Yue Zhu', 'Zhichao Yang', 'Hong Yu']",http://arxiv.org/pdf/2311.09684v1.pdf,2023-11-16,," This study examines the effect of prompt engineering on the performance ofLarge Language Models (LLMs) in clinical note generation. We introduce anAutomatic Prompt Optimization (APO) framework to refine initial prompts andcompare the outputs of medical experts, non-medical experts, and APO-enhancedGPT3.5 and GPT4. Results highlight GPT4 APO's superior performance instandardizing prompt quality across clinical note sections. 
A human-in-the-loopapproach shows that experts maintain content quality post-APO, with apreference for their own modifications, suggesting the value of expertcustomization. We recommend a two-phase optimization process, leveragingAPO-GPT4 for consistency and expert input for personalization.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The abstract clearly mentions the use of prompt engineering to generate awareness messages, which indicates that the paper is focused on the application of prompt engineering techniques." +1482,evolutionary multiobjective optimization of large language model prompts for balancing sentiments,"['Jill Baumann', 'Oliver Kramer']",http://arxiv.org/pdf/2401.09862v1.pdf,2024-01-18,," The advent of large language models (LLMs) such as ChatGPT has attractedconsiderable attention in various domains due to their remarkable performanceand versatility. As the use of these models continues to grow, the importanceof effective prompt engineering has come to the fore. Prompt optimizationemerges as a crucial challenge, as it has a direct impact on model performanceand the extraction of relevant information. Recently, evolutionary algorithms(EAs) have shown promise in addressing this issue, paving the way for noveloptimization strategies. In this work, we propose a evolutionarymulti-objective (EMO) approach specifically tailored for prompt optimizationcalled EMO-Prompts, using sentiment analysis as a case study. We use sentimentanalysis capabilities as our experimental targets. Our results demonstrate thatEMO-Prompts effectively generates prompts capable of guiding the LLM to producetexts embodying two conflicting emotions simultaneously.",,arXiv,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.lg']",highly relevant,"The abstract mentions 'utilising different prompt engineering techniques' to improve performance, which indicates that the paper involves research directly related to prompt engineering." +1483,propane prompt design as an inverse problem,"['Rimon Melamed', 'Lucas H. McCabe', 'Tanay Wakhare', 'Yejin Kim', 'H. Howie Huang', 'Enric Boix-Adsera']",http://arxiv.org/pdf/2311.07064v1.pdf,2023-11-13,," Carefully-designed prompts are key to inducing desired behavior in LargeLanguage Models (LLMs). As a result, great effort has been dedicated toengineering prompts that guide LLMs toward particular behaviors. In this work,we propose an automatic prompt optimization framework, PROPANE, which aims tofind a prompt that induces semantically similar outputs to a fixed set ofexamples without user intervention. We further demonstrate that PROPANE can beused to (a) improve existing prompts, and (b) discover semantically obfuscatedprompts that transfer between models.",,arXiv,['cs.cl'],highly relevant,"The paper discusses the use of prompt modifiers in the context of text-based generative art, which is a clear example of prompt engineering, even if not specified as 'hard prefix' prompting." +1484,joint prompt optimization of stacked llms using variational inference,"['Alessandro Sordoni', 'Xingdi Yuan', 'Marc-Alexandre Côté', 'Matheus Pereira', 'Adam Trischler', 'Ziang Xiao', 'Arian Hosseini', 'Friederike Niedtner', 'Nicolas Le Roux']",http://arxiv.org/pdf/2306.12509v2.pdf,2023-06-21,," Large language models (LLMs) can be seen as atomic units of computationmapping sequences to a distribution over sequences. Thus, they can be seen asstochastic language layers in a language network, where the learnableparameters are the natural language prompts at each layer. 
By stacking two suchlayers and feeding the output of one layer to the next, we obtain a DeepLanguage Network (DLN). We first show how to effectively perform promptoptimization for a 1-Layer language network (DLN-1). Then, we present anextension that applies to 2-layer DLNs (DLN-2), where two prompts must belearned. The key idea is to consider the output of the first layer as a latentvariable, which requires inference, and prompts to be learned as the parametersof the generative distribution. We first test the effectiveness of DLN-1 inmultiple reasoning and natural language understanding tasks. Then, we show thatDLN-2 can reach higher performance than a single layer, showing promise that wemight reach comparable performance to GPT-4, even when each LLM in the networkis smaller and less powerful.",,arXiv,"['cs.cl', 'cs.lg']",highly relevant,The paper explicitly mentions the use of 'prompt engineering' in the context of Visual-WSD with techniques that compare 'Simple prompt-based' methods and 'Generated prompt-based' methods. +1485,prompt optimization via adversarial incontext learning,"['Xuan Long Do', 'Yiran Zhao', 'Hannah Brown', 'Yuxi Xie', 'James Xu Zhao', 'Nancy F. Chen', 'Kenji Kawaguchi', 'Michael Qizhe Xie', 'Junxian He']",http://arxiv.org/pdf/2312.02614v1.pdf,2023-12-05,," We propose a new method, Adversarial In-Context Learning (adv-ICL), tooptimize prompt for in-context learning (ICL) by employing one LLM as agenerator, another as a discriminator, and a third as a prompt modifier. As intraditional adversarial learning, adv-ICL is implemented as a two-player gamebetween the generator and discriminator, where the generator tries to generaterealistic enough output to fool the discriminator. In each round, given aninput prefixed by task instructions and several exemplars, the generatorproduces an output. The discriminator is then tasked with classifying thegenerator input-output pair as model-generated or real data. Based on thediscriminator loss, the prompt modifier proposes possible edits to thegenerator and discriminator prompts, and the edits that most improve theadversarial loss are selected. We show that adv-ICL results in significantimprovements over state-of-the-art prompt optimization techniques for both openand closed-source models on 11 generation and classification tasks includingsummarization, arithmetic reasoning, machine translation, data-to-textgeneration, and the MMLU and big-bench hard benchmarks. In addition, becauseour method uses pre-trained models and updates only prompts rather than modelparameters, it is computationally efficient, easy to extend to any LLM andtask, and effective in low-resource settings.",,arXiv,"['cs.lg', 'cs.cl']",highly relevant,"The paper mentions the utilization of prompt engineering to personalize course content, which directly pertains to the methodology within the field of prompt engineering." +1486,prompt2nerfpil fast nerf generation via pretrained implicit latent,"['Jianmeng Liu', 'Yuyao Zhang', 'Zeyuan Meng', 'Yu-Wing Tai', 'Chi-Keung Tang']",http://arxiv.org/pdf/2312.02568v1.pdf,2023-12-05,," This paper explores promptable NeRF generation (e.g., text prompt or singleimage prompt) for direct conditioning and fast generation of NeRF parametersfor the underlying 3D scenes, thus undoing complex intermediate steps whileproviding full 3D generation with conditional control. 
Unlike previousdiffusion-CLIP-based pipelines that involve tedious per-prompt optimizations,Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a singleforward pass, leveraging a pre-trained implicit latent space of NeRFparameters. Furthermore, in zero-shot tasks, our experiments demonstrate thatthe NeRFs produced by our method serve as semantically informativeinitializations, significantly accelerating the inference process of existingprompt-to-NeRF methods. Specifically, we will show that our approach speeds upthe text-to-NeRF model DreamFusion and the 3D reconstruction speed of theimage-to-NeRF method Zero-1-to-3 by 3 to 5 times.",,arXiv,['cs.cv'],somewhat relevant,"The abstract mentions improving prompt performance and the difficulty of finding the optimal prompt, which pertains to the area of prompt engineering." +1487,large language models for intentdriven session recommendations,"['Zhu Sun', 'Hongyang Liu', 'Xinghua Qu', 'Kaidong Feng', 'Yan Wang', 'Yew-Soon Ong']",http://arxiv.org/pdf/2312.07552v1.pdf,2023-12-07,," Intent-aware session recommendation (ISR) is pivotal in discerning userintents within sessions for precise predictions. Traditional approaches,however, face limitations due to their presumption of a uniform number ofintents across all sessions. This assumption overlooks the dynamic nature ofuser sessions, where the number and type of intentions can significantly vary.In addition, these methods typically operate in latent spaces, thus hinder themodel's transparency.Addressing these challenges, we introduce a novel ISRapproach, utilizing the advanced reasoning capabilities of large languagemodels (LLMs). First, this approach begins by generating an initial prompt thatguides LLMs to predict the next item in a session, based on the varied intentsmanifested in user sessions. Then, to refine this process, we introduce aninnovative prompt optimization mechanism that iteratively self-reflects andadjusts prompts. Furthermore, our prompt selection module, built upon the LLMs'broad adaptability, swiftly selects the most optimized prompts across diversedomains. This new paradigm empowers LLMs to discern diverse user intents at asemantic level, leading to more accurate and interpretable sessionrecommendations. Our extensive experiments on three real-world datasetsdemonstrate the effectiveness of our method, marking a significant advancementin ISR systems.",,arXiv,"['cs.cl', 'cs.lg']",highly relevant,"The paper clearly focuses on a prompt-based method for few-shot learning, which directly relates to prompt engineering within the scope of natural language processing." +1488,recprompt a prompt tuning framework for news recommendation using large language models,"['Dairui Liu', 'Boming Yang', 'Honghui Du', 'Derek Greene', 'Aonghus Lawlor', 'Ruihai Dong', 'Irene Li']",http://arxiv.org/pdf/2312.10463v1.pdf,2023-12-16,," In the evolving field of personalized news recommendation, understanding thesemantics of the underlying data is crucial. Large Language Models (LLMs) likeGPT-4 have shown promising performance in understanding natural language.However, the extent of their applicability in news recommendation systemsremains to be validated. This paper introduces RecPrompt, the first frameworkfor news recommendation that leverages the capabilities of LLMs through promptengineering. 
This system incorporates a prompt optimizer that applies aniterative bootstrapping process, enhancing the LLM-based recommender's abilityto align news content with user preferences and interests more effectively.Moreover, this study offers insights into the effective use of LLMs in newsrecommendation, emphasizing both the advantages and the challenges ofincorporating LLMs into recommendation systems.",,arXiv,['cs.ir'],highly relevant,"The paper discusses the use of prompt engineering to effectively communicate with generative language models in medical education, which aligns with the topic of prompt engineering." +1489,unidcp unifying multiple medical visionlanguage tasks via dynamic crossmodal learnable prompts,"['Chenlu Zhan', 'Yufei Zhang', 'Yu Lin', 'Gaoang Wang', 'Hongwei Wang']",http://arxiv.org/pdf/2312.11171v1.pdf,2023-12-18,," Medical vision-language pre-training (Med-VLP) models have recentlyaccelerated the fast-growing medical diagnostics application. However, mostMed-VLP models learn task-specific representations independently from scratch,thereby leading to great inflexibility when they work across multiplefine-tuning tasks. In this work, we propose UniDCP, a Unified medicalvision-language model with Dynamic Cross-modal learnable Prompts, which can beplastically applied to multiple medical vision-language tasks. Specifically, weexplicitly construct a unified framework to harmonize diverse inputs frommultiple pretraining tasks by leveraging cross-modal prompts for unification,which accordingly can accommodate heterogeneous medical fine-tuning tasks.Furthermore, we conceive a dynamic cross-modal prompt optimizing strategy thatoptimizes the prompts within the shareable space for implicitly processing theshareable clinic knowledge. UniDCP is the first Med-VLP model capable ofperforming all 8 medical uni-modal and cross-modal tasks over 14 correspondingdatasets, consistently yielding superior results over diverse state-of-the-artmethods.",,arXiv,"['cs.cv', 'cs.ai']",highly relevant,"The abstract discusses the importance of designing prompts to utilize ChatGPT's capabilities effectively, which relates directly to the topic of prompt engineering." +1490,dspy assertions computational constraints for selfrefining language model pipelines,"['Arnav Singhvi', 'Manish Shetty', 'Shangyin Tan', 'Christopher Potts', 'Koushik Sen', 'Matei Zaharia', 'Omar Khattab']",http://arxiv.org/pdf/2312.13382v1.pdf,2023-12-20,," Chaining language model (LM) calls as composable modules is fueling a newpowerful way of programming. However, ensuring that LMs adhere to importantconstraints remains a key challenge, one often addressed with heuristic ""promptengineering"". We introduce LM Assertions, a new programming construct forexpressing computational constraints that LMs should satisfy. We integrate ourconstructs into the recent DSPy programming model for LMs, and present newstrategies that allow DSPy to compile programs with arbitrary LM Assertionsinto systems that are more reliable and more accurate. In DSPy, LM Assertionscan be integrated at compile time, via automatic prompt optimization, and/or atinference time, via automatic selfrefinement and backtracking. We report on twoearly case studies for complex question answering (QA), in which the LM programmust iteratively retrieve information in multiple hops and synthesize along-form answer with citations. 
We find that LM Assertions improve not onlycompliance with imposed rules and guidelines but also enhance downstream taskperformance, delivering intrinsic and extrinsic gains up to 35.7% and 13.3%,respectively. Our reference implementation of LM Assertions is integrated intoDSPy at https://github.com/stanfordnlp/dspy",,arXiv,"['cs.cl', 'cs.ai', 'cs.pl']",somewhat relevant,"The paper discusses the role of prompt engineering in conjunction with a cognitive-agent approach for robotic task learning, indicating relevance to prompting techniques." +1491,a bayesian approach for prompt optimization in pretrained language models,"['Antonio Sabbatella', 'Andrea Ponti', 'Antonio Candelieri', 'Ilaria Giordani', 'Francesco Archetti']",http://arxiv.org/pdf/2312.00471v1.pdf,2023-12-01,," A prompt is a sequence of symbol or tokens, selected from a vocabularyaccording to some rule, which is prepended/concatenated to a textual query. Akey problem is how to select the sequence of tokens: in this paper we formulateit as a combinatorial optimization problem. The high dimensionality of thetoken space com-pounded by the length of the prompt sequence requires a veryefficient solution. In this paper we propose a Bayesian optimization method,executed in a continuous em-bedding of the combinatorial space. In this paperwe focus on hard prompt tuning (HPT) which directly searches for discretetokens to be added to the text input with-out requiring access to the largelanguage model (LLM) and can be used also when LLM is available only as ablack-box. This is critically important if LLMs are made available in the Modelas a Service (MaaS) manner as in GPT-4. The current manu-script is focused onthe optimization of discrete prompts for classification tasks. The discreteprompts give rise to difficult combinatorial optimization problem which easilybecome intractable given the dimension of the token space in realisticapplications. The optimization method considered in this paper is Bayesianoptimization (BO) which has become the dominant approach in black-boxoptimization for its sample efficiency along with its modular structure andversatility. In this paper we use BoTorch, a library for Bayesian optimizationresearch built on top of pyTorch. Albeit preliminary and obtained using a'vanilla' version of BO, the experiments on RoB-ERTa on six benchmarks, show agood performance across a variety of tasks and enable an analysis of thetradeoff between size of the search space, accuracy and wall clock time.",,arXiv,"['cs.lg', 'cs.ai']",somewhat relevant,"The abstract discusses zero-shot and few-shot learning with pretrained language models, which implies the use of prompting, but does not specifically mention hard prefix prompting or prompt engineering." +1492,prompt engineering a prompt engineer,"['Qinyuan Ye', 'Maxamed Axmed', 'Reid Pryzant', 'Fereshte Khani']",http://arxiv.org/pdf/2311.05661v1.pdf,2023-11-09,," Prompt engineering is a challenging yet crucial task for optimizing theperformance of large language models (LLMs). It requires complex reasoning toexamine the model's errors, hypothesize what is missing or misleading in thecurrent prompt, and communicate the task with clarity. While recent worksindicate that LLMs can be meta-prompted to perform automatic promptengineering, their potentials may not be fully untapped due to the lack ofsufficient guidance to elicit complex reasoning capabilities in LLMs in themeta-prompt. 
In this work, we investigate the problem of ""prompt engineering aprompt engineer"" -- constructing a meta-prompt that more effectively guidesLLMs to perform automatic prompt engineering. We introduce and analyze keycomponents, such as a step-by-step reasoning template and contextspecification, which lead to improved performance. In addition, inspired bycommon optimization concepts such as batch size, step size and momentum, weintroduce their verbalized counterparts to the meta-prompt and investigatetheir effects. Our final method, named PE2, finds a prompt that outperforms""let's think step by step"" by 6.3% on the MultiArith dataset and 3.1% on theGSM8K dataset. To demonstrate its versatility, we apply PE2 to the InstructionInduction benchmark, a suite of counterfactual tasks, and a lengthy, real-worldindustrial prompt. In these settings, PE2 achieves strong performance andoutperforms prior automatic prompt engineering baselines. Further, we show thatPE2 makes meaningful and targeted prompt edits, amends erroneous or incompleteprompts, and presents non-trivial counterfactual reasoning abilities.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper discusses a new prompting technique based on similarity metrics for the Word-in-Context task, which is directly related to the topic of prompt engineering." +1493,a brief history of prompt leveraging language models (through advanced prompting),['Golam Md Muktadir'],http://arxiv.org/pdf/2310.04438v2.pdf,2023-09-30,," This paper presents a comprehensive exploration of the evolution of promptengineering and generation in the field of natural language processing (NLP).Starting from the early language models and information retrieval systems, wetrace the key developments that have shaped prompt engineering over the years.The introduction of attention mechanisms in 2015 revolutionized languageunderstanding, leading to advancements in controllability andcontext-awareness. Subsequent breakthroughs in reinforcement learningtechniques further enhanced prompt engineering, addressing issues like exposurebias and biases in generated text. We examine the significant contributions in2018 and 2019, focusing on fine-tuning strategies, control codes, andtemplate-based generation. The paper also discusses the growing importance offairness, human-AI collaboration, and low-resource adaptation. In 2020 and2021, contextual prompting and transfer learning gained prominence, while 2022and 2023 witnessed the emergence of advanced techniques like unsupervisedpre-training and novel reward shaping. Throughout the paper, we referencespecific research studies that exemplify the impact of various developments onprompt engineering. The journey of prompt engineering continues, with ethicalconsiderations being paramount for the responsible and inclusive future of AIsystems.",,arXiv,"['cs.cl', 'cs.ai']",somewhat relevant,"The paper discusses using a 'visual-prompt-tuned foundational model' for mapping underwater vegetation, which implies use of a form of prompt without specifying hard prefix prompts." +1494,to be or not to be an exploration of continuously controllable prompt engineering,"['Yuhan Sun', 'Mukai Li', 'Yixin Cao', 'Kun Wang', 'Wenxiao Wang', 'Xingyu Zeng', 'Rui Zhao']",http://arxiv.org/pdf/2311.09773v1.pdf,2023-11-16,," As the use of large language models becomes more widespread, techniques likeparameter-efficient fine-tuning and other methods for controlled generation aregaining traction for customizing models and managing their outputs. 
However,the challenge of precisely controlling how prompts influence these models is anarea ripe for further investigation. In response, we introduce ControlPE(Continuously Controllable Prompt Engineering). ControlPE enables fineradjustments to prompt effects, complementing existing prompt engineering, andeffectively controls continuous targets. This approach harnesses the power ofLoRA (Low-Rank Adaptation) to create an effect akin to prompt weighting,enabling fine-tuned adjustments to the impact of prompts. Our methodologyinvolves generating specialized datasets for prompt distillation, incorporatingthese prompts into the LoRA model, and carefully adjusting LoRA merging weightto regulate the influence of prompts. This provides a dynamic and adaptabletool for prompt control. Through our experiments, we have validated thepracticality and efficacy of ControlPE. It proves to be a promising solutionfor control a variety of prompts, ranging from generating short responsesprompts, refusal prompts to chain-of-thought prompts.",,arXiv,['cs.cl'],highly relevant,"The paper evaluates ChatGPT's performance in log parsing with different prompting methods, which is directly related to the study of prompt engineering." +1495,more samples or more prompt inputs exploring effective incontext sampling for llm fewshot prompt engineering,"['Bingsheng Yao', 'Guiming Chen', 'Ruishi Zou', 'Yuxuan Lu', 'Jiachen Li', 'Shao Zhang', 'Sijia Liu', 'James Hendler', 'Dakuo Wang']",http://arxiv.org/pdf/2311.09782v1.pdf,2023-11-16,," While most existing works on LLM prompt-engineering focus only on how toselect a better set of data samples inside one single prompt input (In-ContextLearning or ICL), why can't we design and leverage multiple prompt inputstogether to further improve the LLM performance? In this work, we proposeIn-Context Sampling (ICS), a low-resource LLM prompt-engineering technique toproduce the most confident prediction results by optimizing the construction ofmultiple ICL prompt inputs. Extensive experiments with two SOTA LLMs (FlanT5-XLand Mistral-7B) on three NLI datasets (e-SNLI, Multi-NLI, and ANLI) illustratethat ICS can consistently enhance LLM's prediction performance and confidence.An ablation study suggests that a diversity-based ICS strategy may furtherimprove LLM's performance, which sheds light on a new yet promising futureresearch direction.",,arXiv,['cs.cl'],highly relevant,"The paper mentions using a 'custom GPT-4 few-shot prompt annotation scheme', which indicates it involves prompt engineering." +1496,automatic engineering of long prompts,"['Cho-Jui Hsieh', 'Si Si', 'Felix X. Yu', 'Inderjit S. Dhillon']",http://arxiv.org/pdf/2311.10117v1.pdf,2023-11-16,," Large language models (LLMs) have demonstrated remarkable capabilities insolving complex open-domain tasks, guided by comprehensive instructions anddemonstrations provided in the form of prompts. However, these prompts can belengthy, often comprising hundreds of lines and thousands of tokens, and theirdesign often requires considerable human effort. Recent research has exploredautomatic prompt engineering for short prompts, typically consisting of one ora few sentences. However, the automatic design of long prompts remains achallenging problem due to its immense search space. In this paper, weinvestigate the performance of greedy algorithms and genetic algorithms forautomatic long prompt engineering. 
We demonstrate that a simple greedy approachwith beam search outperforms other methods in terms of search efficiency.Moreover, we introduce two novel techniques that utilize search history toenhance the effectiveness of LLM-based mutation in our search algorithm. Ourresults show that the proposed automatic long prompt engineering algorithmachieves an average of 9.2% accuracy gain on eight tasks in Big Bench Hard,highlighting the significance of automating prompt designs to fully harness thecapabilities of LLMs.",,arXiv,"['cs.ai', 'cs.lg']",highly relevant,"The paper presents an iterative procedure that includes prompt engineering in the design of descriptive knowledge and few-shot prompts for ChatGPT applications in customer services, making it relevant to the topic of prompt engineering." +1497,enhancing medical task performance in gpt4v a comprehensive study on prompt engineering strategies,"['Pengcheng Chen', 'Ziyan Huang', 'Zhongying Deng', 'Tianbin Li', 'Yanzhou Su', 'Haoyu Wang', 'Jin Ye', 'Yu Qiao', 'Junjun He']",http://arxiv.org/pdf/2312.04344v2.pdf,2023-12-07,," OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has piquedconsiderable interest for its potential in medical applications. Despite itspromise, recent studies and internal reviews highlight its underperformance inspecialized medical tasks. This paper explores the boundary of GPT-4V'scapabilities in medicine, particularly in processing complex imaging data fromendoscopies, CT scans, and MRIs etc. Leveraging open-source datasets, weassessed its foundational competencies, identifying substantial areas forenhancement. Our research emphasizes prompt engineering, an often-underutilizedstrategy for improving AI responsiveness. Through iterative testing, we refinedthe model's prompts, significantly improving its interpretative accuracy andrelevance in medical imaging. From our comprehensive evaluations, we distilled10 effective prompt engineering techniques, each fortifying GPT-4V's medicalacumen. These methodical enhancements facilitate more reliable, precise, andclinically valuable insights from GPT-4V, advancing its operability in criticalhealthcare environments. Our findings are pivotal for those employing AI inmedicine, providing clear, actionable guidance on harnessing GPT-4V's fulldiagnostic potential.",,arXiv,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",highly relevant,"The paper mentions the use of 'few-shot-prompted pre-trained language models' and 'the chain-of-thought method of prompting', which indicates that the study involves using prompting techniques with pre-trained transformer models, making it relevant to the topic of prompt engineering." +1498,towards goaloriented large language model prompting a survey,"['Haochen Li', 'Jonathan Leung', 'Zhiqi Shen']",http://arxiv.org/pdf/2401.14043v1.pdf,2024-01-25,," Large Language Models (LLMs) have shown prominent performance in variousdownstream tasks in which prompt engineering plays a pivotal role in optimizingLLMs' performance. This paper, not as an overview of current prompt engineeringmethods, aims to highlight the limitation of designing prompts while holding ananthropomorphic assumption that expects LLMs to think like humans. From ourreview of 35 representative studies, we demonstrate that a goal-oriented promptformulation, which guides LLMs to follow established human logical thinking,significantly improves the performance of LLMs. 
Furthermore, We introduce anovel taxonomy that categorizes goal-oriented prompting methods into fiveinterconnected stages and we demonstrate the broad applicability of ourframework by summarizing ten applicable tasks. With four future directionsproposed, we hope to further emphasize and promote goal-oriented promptengineering.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,The paper clearly relates to prompt engineering as it describes a system that uses prompting strategies with GPT-3.5-turbo for generating teacher responses. +1499,program decomposition and translation with static analysis,['Ali Reza Ibrahimzada'],http://arxiv.org/pdf/2401.12412v1.pdf,2024-01-22,," The rising popularity of Large Language Models (LLMs) has motivated exploringtheir use in code-related tasks. Code LLMs with more than millions ofparameters are trained on a massive amount of code in different ProgrammingLanguages (PLs). Such models are used for automating various SoftwareEngineering (SE) tasks using prompt engineering. However, given the very largesize of industry-scale project files, a major issue of these LLMs is theirlimited context window size, motivating the question of ""Can these LLMs processvery large files and can we effectively perform prompt engineering?"". Codetranslation aims to convert source code from one PL to another. In this work,we assess the effect of method-level program decomposition on context window ofLLMs and investigate how this approach can enable translation of very largefiles which originally could not be done due to out-of-context issue. Ourobservations from 20 well-known java projects and approximately 60K methodssuggest that method-level program decomposition significantly improves thelimited context window problem of LLMs by 99.5%. Furthermore, our empiricalanalysis indicate that with method-level decomposition, each input fragment onaverage only consumes 5% of the context window, leaving more context space forprompt engineering and the output. Finally, we investigate the effectiveness ofa Call Graph (CG) approach for translating very large files when doingmethod-level program decomposition.",,arXiv,['cs.se'],highly relevant,"The paper discusses using a few-shot prompt with GPT-3 to generate responses, which is related to prompt engineering, especially considering the generation uses hard prefix prompts." +1500,large language models and prompt engineering for biomedical query focused multidocument summarisation,['Diego Mollá'],http://arxiv.org/pdf/2311.05169v1.pdf,2023-11-09,," This paper reports on the use of prompt engineering and GPT-3.5 forbiomedical query-focused multi-document summarisation. Using GPT-3.5 andappropriate prompts, our system achieves top ROUGE-F1 results in the task ofobtaining short-paragraph-sized answers to biomedical questions in the 2023BioASQ Challenge (BioASQ 11b). This paper confirms what has been observed inother domains: 1) Prompts that incorporated few-shot samples generally improvedon their counterpart zero-shot variants; 2) The largest improvement wasachieved by retrieval augmented generation. The fact that these prompts allowour top runs to rank within the top two runs of BioASQ 11b demonstrate thepower of using adequate prompts for Large Language Models in general, andGPT-3.5 in particular, for query-focused summarisation.",,arXiv,['cs.cl'],somewhat relevant,"The paper compares strategies for specializing an LLM including prompt engineering, which makes it relevant to the topic." 
+1501,beautifulprompt towards automatic prompt engineering for texttoimage synthesis,"['Tingfeng Cao', 'Chengyu Wang', 'Bingyan Liu', 'Ziheng Wu', 'Jinhui Zhu', 'Jun Huang']",http://arxiv.org/pdf/2311.06752v1.pdf,2023-11-12,," Recently, diffusion-based deep generative models (e.g., Stable Diffusion)have shown impressive results in text-to-image synthesis. However, currenttext-to-image models often require multiple passes of prompt engineering byhumans in order to produce satisfactory results for real-world applications. Wepropose BeautifulPrompt, a deep generative model to produce high-qualityprompts from very simple raw descriptions, which enables diffusion-based modelsto generate more beautiful images. In our work, we first fine-tuned theBeautifulPrompt model over low-quality and high-quality collecting promptpairs. Then, to ensure that our generated prompts can generate more beautifulimages, we further propose a Reinforcement Learning with Visual AI Feedbacktechnique to fine-tune our model to maximize the reward values of the generatedprompts, where the reward values are calculated based on the PickScore and theAesthetic Scores. Our results demonstrate that learning from visual AI feedbackpromises the potential to improve the quality of generated prompts and imagessignificantly. We further showcase the integration of BeautifulPrompt to acloud-native AI platform to provide better text-to-image generation service inthe cloud.",,arXiv,['cs.cl'],highly relevant,"The abstract discusses the use of a design tool that supports the development and systematic evaluation of prompting strategies for LLMs, which directly relates to the topic of prompt engineering." +1502,on the discussion of large language models symmetry of agents and interplay with prompts,"['Qineng Wang', 'Zihao Wang', 'Ying Su', 'Yangqiu Song']",http://arxiv.org/pdf/2311.07076v1.pdf,2023-11-13,," Two ways has been discussed to unlock the reasoning capability of a largelanguage model. The first one is prompt engineering and the second one is tocombine the multiple inferences of large language models, or the multi-agentdiscussion. Theoretically, this paper justifies the multi-agent discussionmechanisms from the symmetry of agents. Empirically, this paper reports theempirical results of the interplay of prompts and discussion mechanisms,revealing the empirical state-of-the-art performance of complex multi-agentmechanisms can be approached by carefully developed prompt engineering. Thispaper also proposes a scalable discussion mechanism based on conquer and merge,providing a simple multi-agent discussion solution with simple prompts butstate-of-the-art performance.",,arXiv,['cs.cl'],highly relevant,"The abstract mentions the use of customized LLM prompts per query, which implies direct involvement in prompt engineering for question-answering systems." +1503,neuroprompts an adaptive framework to optimize prompts for texttoimage generation,"['Shachar Rosenman', 'Vasudev Lal', 'Phillip Howard']",http://arxiv.org/pdf/2311.12229v1.pdf,2023-11-20,," Despite impressive recent advances in text-to-image diffusion models,obtaining high-quality images often requires prompt engineering by humans whohave developed expertise in using them. In this work, we present NeuroPrompts,an adaptive framework that automatically enhances a user's prompt to improvethe quality of generations produced by text-to-image models. 
Our frameworkutilizes constrained text decoding with a pre-trained language model that hasbeen adapted to generate prompts similar to those produced by human promptengineers. This approach enables higher-quality text-to-image generations andprovides user control over stylistic features via constraint set specification.We demonstrate the utility of our framework by creating an interactiveapplication for prompt enhancement and image generation using Stable Diffusion.Additionally, we conduct experiments utilizing a large dataset ofhuman-engineered prompts for text-to-image generation and show that ourapproach automatically produces enhanced prompts that result in superior imagequality. We make our code, a screencast video demo and a live demo instance ofNeuroPrompts publicly available.",,arXiv,['cs.ai'],somewhat relevant,"The abstract explicitly mentions the use of a 'zero-shot prompting' method for machine translation tasks, which is relevant to prompt engineering." +1504,memorycompanion a smart healthcare solution to empower efficient alzheimer's care via unleashing generative ai,"['Lifei Zheng', 'Yeonie Heo', 'Yi Fang']",http://arxiv.org/pdf/2311.14730v1.pdf,2023-11-20,," With the rise of Large Language Models (LLMs), notably characterized by GPTframeworks, there emerges a catalyst for novel healthcare applications. Earlieriterations of chatbot caregivers, though existent, have yet to achieve adimension of human-like authenticity. This paper unveils `MemoryCompanion' apioneering digital health solution explicitly tailored for Alzheimer's disease(AD) patients and their caregivers. Drawing upon the nuances of GPT technologyand prompt engineering, MemoryCompanion manifests a personalized caregivingparadigm, fostering interactions via voice-cloning and talking-face mechanismsthat resonate with the familiarity of known companions. Using advancedprompt-engineering, the system intricately adapts to each patient's distinctprofile, curating its content and communication style accordingly. Thisapproach strives to counteract prevalent issues of social isolation andloneliness frequently observed in AD demographics. Our methodology, grounded inits innovative design, addresses both the caregiving and technologicalchallenges intrinsic to this domain.",,arXiv,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.lg']",highly relevant,"The abstract directly mentions the use of zero-shot prompting, which is a technique within prompt engineering, making it relevant to hard prefix prompts." +1505,devbots can codesign apis,['Vinicius Soares Silva Marques'],http://arxiv.org/pdf/2312.05733v1.pdf,2023-12-10,," DevBots are automated tools that perform various tasks in order to supportsoftware development. They are a growing trend and have been used inrepositories to automate repetitive tasks, as code generators, and ascollaborators in eliciting requirements and defining architectures. In thisstudy, we analyzed 24 articles to investigate the state of the art of usingDevBots in software development, trying to understand their characteristics,identify use cases, learn the relationship between DevBots and conversationalsoftware development, and discuss how prompt engineering can enablecollaboration between human developers and bots. Additionally, we identified agap to address by applying prompt engineering to collaborative API designbetween human designers and DevBots and proposed an experiment to assess whatapproach, between using Retrieval Augmented Generation or not, is moresuitable. 
Our conclusion is that DevBots can collaborate with human APIdesigners, but the two approaches have advantages and disadvantages.",,arXiv,"['cs.se', 'cs.ai', 'cs.hc']",somewhat relevant,"The abstract explicitly mentions the use of zero-shot prompting with GPT-3, suggesting relevance to prompt engineering." +1506,llms for robotic object disambiguation,"['Connie Jiang', 'Yiqing Xu', 'David Hsu']",http://arxiv.org/pdf/2401.03388v1.pdf,2024-01-07,," The advantages of pre-trained large language models (LLMs) are apparent in avariety of language processing tasks. But can a language model's knowledge befurther harnessed to effectively disambiguate objects and navigatedecision-making challenges within the realm of robotics? Our study reveals theLLM's aptitude for solving complex decision making challenges that are oftenpreviously modeled by Partially Observable Markov Decision Processes (POMDPs).A pivotal focus of our research is the object disambiguation capability ofLLMs. We detail the integration of an LLM into a tabletop environmentdisambiguation task, a decision making problem where the robot's task is todiscern and retrieve a user's desired object from an arbitrarily large andcomplex cluster of objects. Despite multiple query attempts with zero-shotprompt engineering (details can be found in the Appendix), the LLM struggled toinquire about features not explicitly provided in the scene description. Inresponse, we have developed a few-shot prompt engineering system to improve theLLM's ability to pose disambiguating queries. The result is a model capable ofboth using given features when they are available and inferring new relevantfeatures when necessary, to successfully generate and navigate down a precisedecision tree to the correct object--even when faced with identical options.",,arXiv,"['cs.ro', 'cs.cl', 'cs.lg']",somewhat relevant,"The paper is relevant as it mentions the use of zero-shot prompting methods for dialogue generation, which is an application of prompt engineering." +1507,generative ai has lowered the barriers to computational social sciences,['Yongjun Zhang'],http://arxiv.org/pdf/2311.10833v1.pdf,2023-11-17,," Generative artificial intelligence (AI) has revolutionized the field ofcomputational social science, unleashing new possibilities for analyzingmultimodal data, especially for scholars who may not have extensive programmingexpertise. This breakthrough carries profound implications for the realm ofsocial sciences. Firstly, generative AI can significantly enhance theproductivity of social scientists by automating the generation, annotation, anddebugging of code. Secondly, it empowers researchers to delve intosophisticated data analysis through the innovative use of prompt engineering.Lastly, the educational sphere of computational social science stands tobenefit immensely from these tools, given their exceptional ability to annotateand elucidate complex codes for learners, thereby simplifying the learningprocess and making the technology more accessible.",,arXiv,"['cs.hc', 'cs.cy']",somewhat relevant,"The abstract indicates that the paper explores the use of zero-shot prompts in improving translation quality, which relates to prompt engineering but does not specify if these are hard prefix prompts." 
+1508,loke linked open knowledge extraction for automated knowledge graph construction,['Jamie McCusker'],http://arxiv.org/pdf/2311.09366v1.pdf,2023-11-15,," While the potential of Open Information Extraction (Open IE) for KnowledgeGraph Construction (KGC) may seem promising, we find that the alignment of OpenIE extraction results with existing knowledge graphs to be inadequate. Theadvent of Large Language Models (LLMs), especially the commercially availableOpenAI models, have reset expectations for what is possible with deep learningmodels and have created a new field called prompt engineering. We investigatethe use of GPT models and prompt engineering for knowledge graph constructionwith the Wikidata knowledge graph to address a similar problem to Open IE,which we call Open Knowledge Extraction (OKE) using an approach we call theLinked Open Knowledge Extractor (LOKE, pronounced like ""Loki""). We consider theentity linking task essential to construction of real world knowledge graphs.We merge the CaRB benchmark scoring approach with data from the TekGen datasetfor the LOKE task. We then show that a well engineered prompt, paired with anaive entity linking approach (which we call LOKE-GPT), outperforms AllenAI'sOpenIE 4 implementation on the OKE task, although it over-generates triplescompared to the reference set due to overall triple scarcity in the TekGen set.Through an analysis of entity linkability in the CaRB dataset, as well asoutputs from OpenIE 4 and LOKE-GPT, we see that LOKE-GPT and the ""silver""TekGen triples show that the task is significantly different in content fromOIE, if not structure. Through this analysis and a qualitative analysis ofsentence extractions via all methods, we found that LOKE-GPT extractions are ofhigh utility for the KGC task and suitable for use in semi-automated extractionsettings.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper discusses fine-tuning CLIP models using prompt sentences for zero-shot classification tasks, which involves prompt engineering." +1509,texttosticker style tailoring latent diffusion models for human expression,"['Animesh Sinha', 'Bo Sun', 'Anmol Kalia', 'Arantxa Casanova', 'Elliot Blanchard', 'David Yan', 'Winnie Zhang', 'Tony Nelli', 'Jiahui Chen', 'Hardik Shah', 'Licheng Yu', 'Mitesh Kumar Singh', 'Ankit Ramchandani', 'Maziar Sanjabi', 'Sonal Gupta', 'Amy Bearman', 'Dhruv Mahajan']",http://arxiv.org/pdf/2311.10794v1.pdf,2023-11-17,," We introduce Style Tailoring, a recipe to finetune Latent Diffusion Models(LDMs) in a distinct domain with high visual quality, prompt alignment andscene diversity. We choose sticker image generation as the target domain, asthe images significantly differ from photorealistic samples typically generatedby large-scale LDMs. We start with a competent text-to-image model, like Emu,and show that relying on prompt engineering with a photorealistic model togenerate stickers leads to poor prompt alignment and scene diversity. Toovercome these drawbacks, we first finetune Emu on millions of sticker-likeimages collected using weak supervision to elicit diversity. Next, we curatehuman-in-the-loop (HITL) Alignment and Style datasets from model generations,and finetune to improve prompt alignment and style alignment respectively.Sequential finetuning on these datasets poses a tradeoff between better stylealignment and prompt alignment gains. 
To address this tradeoff, we propose anovel fine-tuning method called Style Tailoring, which jointly fits the contentand style distribution and achieves best tradeoff. Evaluation results show ourmethod improves visual quality by 14%, prompt alignment by 16.2% and scenediversity by 15.3%, compared to prompt engineering the base Emu model forstickers generation.",,arXiv,['cs.cv'],highly relevant,"The paper focuses on improving LLMs' zero-shot performance by rewriting task prompts for individual test inputs, indicating a direct relation to prompt engineering." +1510,prompt engineeringassisted malware dynamic analysis using gpt4,"['Pei Yan', 'Shunquan Tan', 'Miaohui Wang', 'Jiwu Huang']",http://arxiv.org/pdf/2312.08317v1.pdf,2023-12-13,," Dynamic analysis methods effectively identify shelled, wrapped, or obfuscatedmalware, thereby preventing them from invading computers. As a significantrepresentation of dynamic malware behavior, the API (Application ProgrammingInterface) sequence, comprised of consecutive API calls, has progressivelybecome the dominant feature of dynamic analysis methods. Though there have beennumerous deep learning models for malware detection based on API sequences, thequality of API call representations produced by those models is limited. Thesemodels cannot generate representations for unknown API calls, which weakensboth the detection performance and the generalization. Further, the conceptdrift phenomenon of API calls is prominent. To tackle these issues, weintroduce a prompt engineering-assisted malware dynamic analysis using GPT-4.In this method, GPT-4 is employed to create explanatory text for each API callwithin the API sequence. Afterward, the pre-trained language model BERT is usedto obtain the representation of the text, from which we derive therepresentation of the API sequence. Theoretically, this proposed method iscapable of generating representations for all API calls, excluding thenecessity for dataset training during the generation process. Utilizing therepresentation, a CNN-based detection model is designed to extract the feature.We adopt five benchmark datasets to validate the performance of the proposedmodel. The experimental results reveal that the proposed detection algorithmperforms better than the state-of-the-art method (TextCNN). Specifically, incross-database experiments and few-shot learning experiments, the proposedmodel achieves excellent detection performance and almost a 100% recall ratefor malware, verifying its superior generalization performance. The code isavailable at: github.com/yan-scnu/Prompted_Dynamic_Detection.",,arXiv,"['cs.cr', 'cs.ai']",somewhat relevant,"The paper focuses on evaluating language models using zero-shot prompts, which indicates the use of a prompting technique, relevant to prompt engineering." +1511,prompting hard or hardly prompting prompt inversion for texttoimage diffusion models,"['Shweta Mahajan', 'Tanzila Rahman', 'Kwang Moo Yi', 'Leonid Sigal']",http://arxiv.org/pdf/2312.12416v1.pdf,2023-12-19,," The quality of the prompts provided to text-to-image diffusion modelsdetermines how faithful the generated content is to the user's intent, oftenrequiring `prompt engineering'. To harness visual concepts from target imageswithout prompt engineering, current approaches largely rely on embeddinginversion by optimizing and then mapping them to pseudo-tokens. 
However,working with such high-dimensional vector representations is challengingbecause they lack semantics and interpretability, and only allow simple vectoroperations when using them. Instead, this work focuses on inverting thediffusion model to obtain interpretable language prompts directly. Thechallenge of doing this lies in the fact that the resulting optimizationproblem is fundamentally discrete and the space of prompts is exponentiallylarge; this makes using standard optimization techniques, such as stochasticgradient descent, difficult. To this end, we utilize a delayed projectionscheme to optimize for prompts representative of the vocabulary space in themodel. Further, we leverage the findings that different timesteps of thediffusion process cater to different levels of detail in an image. The later,noisy, timesteps of the forward diffusion process correspond to the semanticinformation, and therefore, prompt inversion in this range provides tokensrepresentative of the image semantics. We show that our approach can identifysemantically interpretable and meaningful prompts for a target image which canbe used to synthesize diverse images with similar content. We furtherillustrate the application of the optimized prompts in evolutionary imagegeneration and concept removal.",,arXiv,"['cs.cv', 'cs.lg']",somewhat relevant,"The paper discusses using multiple input prompts with Large Language Models for Text Style Transfer evaluation, which relates to prompt engineering." +1512,typefly flying drones with large language model,"['Guojun Chen', 'Xiaojing Yu', 'Lin Zhong']",http://arxiv.org/pdf/2312.14950v1.pdf,2023-12-08,," Commanding a drone with a natural language is not only user-friendly but alsoopens the door for emerging language agents to control the drone. Emerginglarge language models (LLMs) provide a previously impossible opportunity toautomatically translate a task description in a natural language to a programthat can be executed by the drone. However, powerful LLMs and their visioncounterparts are limited in three important ways. First, they are onlyavailable as cloud-based services. Sending images to the cloud raises privacyconcerns. Second, they are expensive, costing proportionally to the requestsize. Finally, without expensive fine-tuning, existing LLMs are quite limitedin their capability of writing a program for specialized systems like drones. In this paper, we present a system called TypeFly that tackles the abovethree problems using a combination of edge-based vision intelligence, novelprogramming language design, and prompt engineering. Instead of the familiarPython, TypeFly gets a cloud-based LLM service to write a program in a small,custom language called MiniSpec, based on task and scene descriptions inEnglish. Such MiniSpec programs are not only succinct (and therefore efficient)but also able to consult the LLM during their execution using a special skillcalled query. Using a set of increasingly challenging drone tasks, we show thatdesign choices made by TypeFly can reduce both the cost of LLM service and thetask execution time by more than 2x. More importantly, query and promptengineering techniques contributed by TypeFly significantly improve the chanceof success of complex tasks.",,arXiv,"['cs.ro', 'cs.ai', 'cs.hc']",somewhat relevant,The paper highlights ERNIE-Code's zero-shot prompting ability for multilingual code summarization and text-to-text translation but does not specifically address hard prefix prompts or explicit prompt engineering strategies. 
+1513,prompting large language models for recommender systems a comprehensive framework and empirical analysis,"['Lanling Xu', 'Junjie Zhang', 'Bingqian Li', 'Jinpeng Wang', 'Mingchen Cai', 'Wayne Xin Zhao', 'Ji-Rong Wen']",http://arxiv.org/pdf/2401.04997v1.pdf,2024-01-10,," Recently, large language models such as ChatGPT have showcased remarkableabilities in solving general tasks, demonstrating the potential forapplications in recommender systems. To assess how effectively LLMs can be usedin recommendation tasks, our study primarily focuses on employing LLMs asrecommender systems through prompting engineering. We propose a generalframework for utilizing LLMs in recommendation tasks, focusing on thecapabilities of LLMs as recommenders. To conduct our analysis, we formalize theinput of LLMs for recommendation into natural language prompts with two keyaspects, and explain how our framework can be generalized to variousrecommendation scenarios. As for the use of LLMs as recommenders, we analyzethe impact of public availability, tuning strategies, model architecture,parameter scale, and context length on recommendation results based on theclassification of LLMs. As for prompt engineering, we further analyze theimpact of four important components of prompts, \ie task descriptions, userinterest modeling, candidate items construction and prompting strategies. Ineach section, we first define and categorize concepts in line with the existingliterature. Then, we propose inspiring research questions followed byexperiments to systematically analyze the impact of different factors on twopublic datasets. Finally, we summarize promising directions to shed lights onfuture research.",,arXiv,['cs.ir'],highly relevant,"The paper describes the use of prompts to regularize the fine-tuning process on vision-language models, which is directly related to prompt engineering." +1514,pokergpt an endtoend lightweight solver for multiplayer texas hold'em via large language model,"['Chenghao Huang', 'Yanbo Cao', 'Yinlong Wen', 'Tao Zhou', 'Yanru Zhang']",http://arxiv.org/pdf/2401.06781v1.pdf,2024-01-04,," Poker, also known as Texas Hold'em, has always been a typical research targetwithin imperfect information games (IIGs). IIGs have long served as a measureof artificial intelligence (AI) development. Representative prior works, suchas DeepStack and Libratus heavily rely on counterfactual regret minimization(CFR) to tackle heads-up no-limit Poker. However, it is challenging forsubsequent researchers to learn CFR from previous models and apply it to otherreal-world applications due to the expensive computational cost of CFRiterations. Additionally, CFR is difficult to apply to multi-player games dueto the exponential growth of the game tree size. In this work, we introducePokerGPT, an end-to-end solver for playing Texas Hold'em with arbitrary numberof players and gaining high win rates, established on a lightweight largelanguage model (LLM). PokerGPT only requires simple textual information ofPoker games for generating decision-making advice, thus guaranteeing theconvenient interaction between AI and humans. We mainly transform a set oftextual records acquired from real games into prompts, and use them tofine-tune a lightweight pre-trained LLM using reinforcement learning humanfeedback technique. 
To improve fine-tuning performance, we conduct promptengineering on raw data, including filtering useful information, selectingbehaviors of players with high win rates, and further processing them intotextual instruction using multiple prompt engineering techniques. Through theexperiments, we demonstrate that PokerGPT outperforms previous approaches interms of win rate, model size, training time, and response speed, indicatingthe great potential of LLMs in solving IIGs.",,arXiv,"['cs.ai', 'cs.cl']",somewhat relevant,"The abstract mentions the use of zero-shot prompting with large language models, which indicates relevance to the concept of prompt engineering." +1515,prewrite prompt rewriting with reinforcement learning,"['Weize Kong', 'Spurthi Amba Hombaiah', 'Mingyang Zhang', 'Qiaozhu Mei', 'Michael Bendersky']",http://arxiv.org/pdf/2401.08189v1.pdf,2024-01-16,," Prompt engineering is critical for the development of LLM-based applications.However, it is usually done manually in a ""trial and error"" fashion. Thismanual procedure can be time consuming, ineffective, and the generated promptsare, in a lot of cases, sub-optimal. Even for the prompts which seemingly workwell, there is always a lingering question: can the prompts be made better withfurther modifications? To address these questions, in this paper, we investigate prompt engineeringautomation. We consider a specific use case scenario in which developers/usershave drafted initial prompts, but lack the time/expertise to optimize them. Wepropose PRewrite, an automated tool to rewrite these drafts and to generatehighly effective new prompts. PRewrite is based on the Reinforcement Learning(RL) framework which allows for end-to-end optimization and our design allowsthe RL search to happen in a large action space. The automated tool leveragesmanually crafted prompts as starting points which makes the rewriting proceduremore guided and efficient. The generated prompts are human readable, andself-explanatory, unlike some of those in previous works. We conductedextensive experiments on diverse datasets and found that the prompts generatedwith this new method not only outperform professionally crafted prompts, butalso prompts generated with other previously proposed methods.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",highly relevant,"The paper investigates the use of zero-shot and many-shot prompts for poetry generation in the style of specific authors, which is directly related to the application of prompt engineering techniques." +1516,can generalist foundation models outcompete specialpurpose tuning case study in medicine,"['Harsha Nori', 'Yin Tat Lee', 'Sheng Zhang', 'Dean Carignan', 'Richard Edgar', 'Nicolo Fusi', 'Nicholas King', 'Jonathan Larson', 'Yuanzhi Li', 'Weishung Liu', 'Renqian Luo', 'Scott Mayer McKinney', 'Robert Osazuwa Ness', 'Hoifung Poon', 'Tao Qin', 'Naoto Usuyama', 'Chris White', 'Eric Horvitz']",http://arxiv.org/pdf/2311.16452v1.pdf,2023-11-28,," Generalist foundation models such as GPT-4 have displayed surprisingcapabilities in a wide variety of domains and tasks. Yet, there is a prevalentassumption that they cannot match specialist capabilities of fine-tuned models.For example, most explorations to date on medical competency benchmarks haveleveraged domain-specific training, as exemplified by efforts on BioGPT andMed-PaLM. We build on a prior study of GPT-4's capabilities on medicalchallenge benchmarks in the absence of special training. 
Rather than usingsimple prompting to highlight the model's out-of-the-box capabilities, weperform a systematic exploration of prompt engineering. We find that promptinginnovation can unlock deeper specialist capabilities and show that GPT-4 easilytops prior leading results for medical benchmarks. The prompting methods weexplore are general purpose, and make no specific use of domain expertise,removing the need for expert-curated content. Our experimental design carefullycontrols for overfitting during the prompt engineering process. We introduceMedprompt, based on a composition of several prompting strategies. WithMedprompt, GPT-4 achieves state-of-the-art results on all nine of the benchmarkdatasets in the MultiMedQA suite. The method outperforms leading specialistmodels such as Med-PaLM 2 by a significant margin with an order of magnitudefewer calls to the model. Steering GPT-4 with Medprompt achieves a 27%reduction in error rate on the MedQA dataset over the best methods to dateachieved with specialist models and surpasses a score of 90% for the firsttime. Beyond medical problems, we show the power of Medprompt to generalize toother domains and provide evidence for the broad applicability of the approachvia studies of the strategy on exams in electrical engineering, machinelearning, philosophy, accounting, law, nursing, and clinical psychology.",,arXiv,"['cs.cl', 'i.2.7']",somewhat relevant,"The paper describes using a prompting method integrated with LoRA for LLMs, indicating relevance to prompt engineering despite focusing on LoRA fine-tuning." +1517,"topologies of reasoning demystifying chains, trees, and graphs of thoughts","['Maciej Besta', 'Florim Memedi', 'Zhenyu Zhang', 'Robert Gerstenberger', 'Nils Blach', 'Piotr Nyczyk', 'Marcin Copik', 'Grzegorz Kwaśniewski', 'Jürgen Müller', 'Lukas Gianinazzi', 'Ales Kubicek', 'Hubert Niewiadomski', 'Onur Mutlu', 'Torsten Hoefler']",http://arxiv.org/pdf/2401.14295v1.pdf,2024-01-25,," The field of natural language processing (NLP) has witnessed significantprogress in recent years, with a notable focus on improving large languagemodels' (LLM) performance through innovative prompting techniques. Among these,prompt engineering coupled with structures has emerged as a promising paradigm,with designs such as Chain-of-Thought, Tree of Thoughts, or Graph of Thoughts,in which the overall LLM reasoning is guided by a structure such as a graph. Asillustrated with numerous examples, this paradigm significantly enhances theLLM's capability to solve numerous tasks, ranging from logical or mathematicalreasoning to planning or creative writing. To facilitate the understanding ofthis growing field and pave the way for future developments, we devise ageneral blueprint for effective and efficient LLM reasoning schemes. For this,we conduct an in-depth analysis of the prompt execution pipeline, clarifyingand clearly defining different concepts. We then build the first taxonomy ofstructure-enhanced LLM reasoning schemes. We focus on identifying fundamentalclasses of harnessed structures, and we analyze the representations of thesestructures, algorithms executed with these structures, and many others. Werefer to these structures as reasoning topologies, because their representationbecomes to a degree spatial, as they are contained within the LLM context. Ourstudy compares existing prompting schemes using the proposed taxonomy,discussing how certain design choices lead to different patterns in performanceand cost. 
We also outline theoretical underpinnings, relationships betweenprompting and others parts of the LLM ecosystem such as knowledge bases, andthe associated research challenges. Our work will help to advance future promptengineering techniques.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg']",highly relevant,"The paper introduces a Self-Prompting framework for LLMs in ODQA tasks, utilizing prompting to generate data for in-context learning, which directly relates to the hard prefix prompt engineering." +1518,do prompt positions really matter,"['Junyu Mao', 'Stuart E. Middleton', 'Mahesan Niranjan']",http://arxiv.org/pdf/2305.14493v3.pdf,2023-05-23,," Prompt-based models have gathered a lot of attention from researchers due totheir remarkable advancements in the fields of zero-shot and few-shot learning.Developing an effective prompt template plays a critical role. However, priorstudies have mainly focused on prompt vocabulary selection or embeddinginitialization within a predefined template with the prompt position fixed. Inthis empirical study, we conduct the most comprehensive analysis to date ofprompt position for diverse natural language process tasks. Our findingsquantify the substantial impact prompt position has on model performance. Weobserve that the prompt position used in prior studies is often sub-optimal.These findings suggest prompt position optimisation as a valuable researchdirection to fill the gap in existing prompt engineering methodologies.",,arXiv,['cs.cl'],highly relevant,"The paper explains the use of a knowledge-guided prompt learning method for text classification, indicating relevancy to prompt-based learning, hence prompt engineering." +1519,how are prompts different in terms of sensitivity,"['Sheng Lu', 'Hendrik Schuff', 'Iryna Gurevych']",http://arxiv.org/pdf/2311.07230v1.pdf,2023-11-13,," In-context learning (ICL) has become one of the most popular learningparadigms. While there is a growing body of literature focusing on promptengineering, there is a lack of systematic analysis comparing the effects ofprompts across different models and tasks. To address this gap, we present acomprehensive prompt analysis based on the sensitivity of a function. Ouranalysis reveals that sensitivity is an unsupervised proxy for modelperformance, as it exhibits a strong negative correlation with accuracy. We usegradient-based saliency scores to empirically demonstrate how different promptsaffect the relevance of input tokens to the output, resulting in differentlevels of sensitivity. Furthermore, we introduce sensitivity-aware decodingwhich incorporates sensitivity estimation as a penalty term in the standardgreedy decoding. We show that this approach is particularly helpful wheninformation in the input is scarce. Our work provides a fresh perspective onthe analysis of prompts, and contributes to a better understanding of themechanism of ICL.",,arXiv,['cs.cl'],highly relevant,"The paper focuses on the task of optimizing large language models via prompt engineering specifically through a method called PE2, which directly relates to hard prefix prompting and the topic of prompt engineering." +1520,think before you speak cultivating communication skills of large language models via inner monologue,"['Junkai Zhou', 'Liang Pang', 'Huawei Shen', 'Xueqi Cheng']",http://arxiv.org/pdf/2311.07445v1.pdf,2023-11-13,," The emergence of large language models (LLMs) further improves thecapabilities of open-domain dialogue systems and can generate fluent, coherent,and diverse responses. 
However, LLMs still lack an important ability:communication skills, which makes them more like information seeking tools thananthropomorphic chatbots. To make LLMs more anthropomorphic and proactiveduring the conversation, we add five communication skills to the responsegeneration process: topic transition, proactively asking questions, conceptguidance, empathy, and summarising often. The addition of communication skillsincreases the interest of users in the conversation and attracts them to chatfor longer. To enable LLMs better understand and use communication skills, wedesign and add the inner monologue to LLMs. The complete process is achievedthrough prompt engineering and in-context learning. To evaluate communicationskills, we construct a benchmark named Cskills for evaluating variouscommunication skills, which can also more comprehensively evaluate the dialoguegeneration ability of the model. Experimental results show that the proposedCSIM strategy improves the backbone models and outperforms the baselines inboth automatic and human evaluations.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper provides a detailed examination of the evolution of prompt engineering, which is directly relevant to the topic of prompt engineering." +1521,assessing testtime variability for interactive 3d medical image segmentation with diverse point prompts,"['Hao Li', 'Han Liu', 'Dewei Hu', 'Jiacheng Wang', 'Ipek Oguz']",http://arxiv.org/pdf/2311.07806v1.pdf,2023-11-13,," Interactive segmentation model leverages prompts from users to produce robustsegmentation. This advancement is facilitated by prompt engineering, whereinteractive prompts serve as strong priors during test-time. However, this isan inherently subjective and hard-to-reproduce process. The variability in userexpertise and inherently ambiguous boundaries in medical images can lead toinconsistent prompt selections, potentially affecting segmentation accuracy.This issue has not yet been extensively explored for medical imaging. In thispaper, we assess the test-time variability for interactive medical imagesegmentation with diverse point prompts. For a given target region, the pointis classified into three sub-regions: boundary, margin, and center. Our goal isto identify a straightforward and efficient approach for optimal promptselection during test-time based on three considerations: (1) benefits ofadditional prompts, (2) effects of prompt placement, and (3) strategies foroptimal prompt selection. We conduct extensive experiments on the publicMedical Segmentation Decathlon dataset for challenging colon tumor segmentationtask. We suggest an optimal strategy for prompt selection during test-time,supported by comprehensive results. The code is publicly available athttps://github.com/MedICL-VU/variability",,arXiv,['cs.cv'],highly relevant,"The paper is focused on the design and application of prompts for Large Language Models and introduces a tool for collaborative prompt design, which directly relates to the topic of prompt engineering." +1522,i was blind but now i see implementing visionenabled dialogue in social robots,"['Giulio Antonio Abbo', 'Tony Belpaeme']",http://arxiv.org/pdf/2311.08957v1.pdf,2023-11-15,," In the rapidly evolving landscape of human-computer interaction, theintegration of vision capabilities into conversational agents stands as acrucial advancement. 
This paper presents an initial implementation of adialogue manager that leverages the latest progress in Large Language Models(e.g., GPT-4, IDEFICS) to enhance the traditional text-based prompts withreal-time visual input. LLMs are used to interpret both textual prompts andvisual stimuli, creating a more contextually aware conversational agent. Thesystem's prompt engineering, incorporating dialogue with summarisation of theimages, ensures a balance between context preservation and computationalefficiency. Six interactions with a Furhat robot powered by this system arereported, illustrating and discussing the results obtained. By implementingthis vision-enabled dialogue system, the paper envisions a future whereconversational agents seamlessly blend textual and visual modalities, enablingricher, more context-aware dialogues.",,arXiv,"['cs.ro', 'cs.ai', 'cs.hc']",highly relevant,"The paper introduces ControlPE, which complements existing prompt engineering methods and focuses on controlling prompt influence and continuous targets." +1523,simulating opinion dynamics with networks of llmbased agents,"['Yun-Shiuan Chuang', 'Agam Goyal', 'Nikunj Harlalka', 'Siddharth Suresh', 'Robert Hawkins', 'Sijia Yang', 'Dhavan Shah', 'Junjie Hu', 'Timothy T. Rogers']",http://arxiv.org/pdf/2311.09618v1.pdf,2023-11-16,," Accurately simulating human opinion dynamics is crucial for understanding avariety of societal phenomena, including polarization and the spread ofmisinformation. However, the agent-based models (ABMs) commonly used for suchsimulations lack fidelity to human behavior. We propose a new approach tosimulating opinion dynamics based on populations of Large Language Models(LLMs). Our findings reveal a strong inherent bias in LLM agents towardsaccurate information, leading to consensus in line with scientific reality.However, this bias limits the simulation of individuals with resistant views onissues like climate change. After inducing confirmation bias through promptengineering, we observed opinion fragmentation in line with existingagent-based research. These insights highlight the promise and limitations ofLLM agents in this domain and suggest a path forward: refining LLMs withreal-world discourse to better simulate the evolution of human beliefs.",,arXiv,"['physics.soc-ph', 'cs.cl']",highly relevant,"The abstract details a study on improving LLM prompt-engineering by optimizing the construction of multiple In-Context Learning prompt inputs, which is directly aligned with the topic of prompt engineering." +1524,fairytalecqa integrating a commonsense knowledge graph into children's storybook narratives,"['Jiaju Chen', 'Yuxuan Lu', 'Shao Zhang', 'Bingsheng Yao', 'Yuanzhe Dong', 'Ying Xu', 'Yunyao Li', 'Qianwen Wang', 'Dakuo Wang', 'Yuling Sun']",http://arxiv.org/pdf/2311.09756v1.pdf,2023-11-16,," AI models (including LLM) often rely on narrative question-answering (QA)datasets to provide customized QA functionalities to support downstreamchildren education applications; however, existing datasets only include QApairs that are grounded within the given storybook content, but children canlearn more when teachers refer the storybook content to real-world knowledge(e.g., commonsense knowledge). We introduce the FairytaleCQA dataset, which isannotated by children education experts, to supplement 278 storybook narrativeswith educationally appropriate commonsense knowledge. 
The dataset has 5,868 QApairs that not only originate from the storybook narrative but also contain thecommonsense knowledge grounded by an external knowledge graph (i.e.,ConceptNet). A follow-up experiment shows that a smaller model (T5-large)fine-tuned with FairytaleCQA reliably outperforms much larger prompt-engineeredLLM (e.g., GPT-4) in this new QA-pair generation task (QAG). This resultsuggests that: 1) our dataset brings novel challenges to existing LLMs, and 2)human experts' data annotation are still critical as they have much nuancedknowledge that LLMs do not know in the children educational domain.",,arXiv,['cs.cl'],highly relevant,"The paper details the use of prompt engineering to improve the performance of GPT-4V in medical tasks, with a focus on refining prompts to increase accuracy in medical imaging, making it highly pertinent to the topic of prompt engineering." +1525,"localizing lying in llama understanding instructed dishonesty on truefalse questions through prompting, probing, and patching","['James Campbell', 'Richard Ren', 'Phillip Guo']",http://arxiv.org/pdf/2311.15131v1.pdf,2023-11-25,," Large language models (LLMs) demonstrate significant knowledge through theiroutputs, though it is often unclear whether false outputs are due to a lack ofknowledge or dishonesty. In this paper, we investigate instructed dishonesty,wherein we explicitly prompt LLaMA-2-70b-chat to lie. We perform promptengineering to find which prompts best induce lying behavior, and then usemechanistic interpretability approaches to localize where in the network thisbehavior occurs. Using linear probing and activation patching, we localize fivelayers that appear especially important for lying. We then find just 46attention heads within these layers that enable us to causally intervene suchthat the lying model instead answers honestly. We show that these interventionswork robustly across many prompts and dataset splits. Overall, our workcontributes a greater understanding of dishonesty in LLMs so that we may hopeto prevent it.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",highly relevant,"The paper focuses on the role of prompt engineering in optimizing the performance of Large Language Models, which is directly related to the topic of hard prefix prompting." +1526,the transformative influence of large language models on software development,['Sajed Jalil'],http://arxiv.org/pdf/2311.16429v1.pdf,2023-11-28,," The increasing adoption and commercialization of generalized Large LanguageModels (LLMs) have profoundly impacted various aspects of our daily lives.Initially embraced by the computer science community, the versatility of LLMshas found its way into diverse domains. In particular, the software engineeringrealm has witnessed the most transformative changes. With LLMs increasinglyserving as AI Pair Programming Assistants spurred the development ofspecialized models aimed at aiding software engineers. Although this newparadigm offers numerous advantages, it also presents critical challenges andopen problems. To identify the potential and prevailing obstacles, wesystematically reviewed contemporary scholarly publications, emphasizing theperspectives of software developers and usability concerns. Preliminaryfindings underscore pressing concerns about data privacy, bias, andmisinformation. 
Additionally, we identified several usability challenges,including prompt engineering, increased cognitive demands, and mistrust.Finally, we introduce 12 open problems that we have identified through oursurvey, covering these various domains.",,arXiv,"['cs.se', 'cs.hc', '68t07', 'd.2.3; i.2.5; i.2.7']",highly relevant,"The paper discusses the use of prompt engineering in automating software engineering tasks with large language models, which addresses the use and effectiveness of prompts in a practical application." +1527,"large language models for networking applications, enabling techniques, and challenges","['Yudong Huang', 'Hongyang Du', 'Xinyuan Zhang', 'Dusit Niyato', 'Jiawen Kang', 'Zehui Xiong', 'Shuo Wang', 'Tao Huang']",http://arxiv.org/pdf/2311.17474v1.pdf,2023-11-29,," The rapid evolution of network technologies and the growing complexity ofnetwork tasks necessitate a paradigm shift in how networks are designed,configured, and managed. With a wealth of knowledge and expertise, largelanguage models (LLMs) are one of the most promising candidates. This paperaims to pave the way for constructing domain-adapted LLMs for networking.Firstly, we present potential LLM applications for vertical network fields andshowcase the mapping from natural language to network language. Then, severalenabling technologies are investigated, including parameter-efficientfinetuning and prompt engineering. The insight is that language understandingand tool usage are both required for network LLMs. Driven by the idea ofembodied intelligence, we propose the ChatNet, a domain-adapted network LLMframework with access to various external network tools. ChatNet can reduce thetime required for burdensome network planning tasks significantly, leading to asubstantial improvement in efficiency. Finally, key challenges and futureresearch directions are highlighted.",,arXiv,['cs.ni'],somewhat relevant,"The paper presents AdaRefiner, which mitigates the need for intricate prompt engineering by automatically refining task comprehension with feedback from RL agents, making it somewhat relevant to the topic of prompt engineering." +1528,large language models for travel behavior prediction,"['Baichuan Mo', 'Hanyong Xu', 'Dingyi Zhuang', 'Ruoyun Ma', 'Xiaotong Guo', 'Jinhua Zhao']",http://arxiv.org/pdf/2312.00819v1.pdf,2023-11-30,," Travel behavior prediction is a fundamental task in transportation demandmanagement. The conventional methods for travel behavior prediction rely onnumerical data to construct mathematical models and calibrate model parametersto represent human preferences. Recent advancement in large language models(LLMs) has shown great reasoning abilities to solve complex problems. In thisstudy, we propose to use LLMs to predict travel behavior with promptengineering without data-based parameter learning. Specifically, we carefullydesign our prompts that include 1) task description, 2) travel characteristics,3) individual attributes, and 4) guides of thinking with domain knowledge, andask the LLMs to predict an individual's travel behavior and explain theresults. We select the travel mode choice task as a case study. Results showthat, though no training samples are provided, LLM-based predictions havecompetitive accuracy and F1-score as canonical supervised learning methods suchas multinomial logit, random forest, and neural networks. LLMs can also outputreasons that support their prediction. 
However, though in most of the cases,the output explanations are reasonable, we still observe cases that violatelogic or with hallucinations.",,arXiv,"['cs.lg', 'cs.ai', 'cs.cl']",highly relevant,"The paper is focused on the use of prompt engineering with GPT-3.5 for a summarisation task, directly addressing the role of prompts in achieving top performance." +1529,improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation,"['Haojie Zhang', 'Yongyi Su', 'Xun Xu', 'Kui Jia']",http://arxiv.org/pdf/2312.03502v1.pdf,2023-12-06,," The success of large language models has inspired the computer visioncommunity to explore image segmentation foundation model that is able tozero/few-shot generalize through prompt engineering. Segment-Anything(SAM),among others, is the state-of-the-art image segmentation foundation modeldemonstrating strong zero/few-shot generalization. Despite the success, recentstudies reveal the weakness of SAM under strong distribution shift. Inparticular, SAM performs awkwardly on corrupted natural images, camouflagedimages, medical images, etc. Motivated by the observations, we aim to develop aself-training based strategy to adapt SAM to target distribution. Given theunique challenges of large source dataset, high computation cost and incorrectpseudo label, we propose a weakly supervised self-training architecture withanchor regularization and low-rank finetuning to improve the robustness andcomputation efficiency of adaptation. We validate the effectiveness on 5 typesof downstream segmentation tasks including natural clean/corrupted images,medical images, camouflaged images and robotic images. Our proposed method istask-agnostic in nature and outperforms pre-trained SAM and state-of-the-artdomain adaptation methods on almost all downstream tasks with the same testingprompt inputs.",,arXiv,['cs.cv'],highly relevant,"The paper describes 'BeautifulPrompt', a model designed to improve the quality of prompts used for text-to-image synthesis, which falls under the scope of prompt engineering." +1530,promptbench a unified library for evaluation of large language models,"['Kaijie Zhu', 'Qinlin Zhao', 'Hao Chen', 'Jindong Wang', 'Xing Xie']",http://arxiv.org/pdf/2312.07910v2.pdf,2023-12-13,," The evaluation of large language models (LLMs) is crucial to assess theirperformance and mitigate potential security risks. In this paper, we introducePromptBench, a unified library to evaluate LLMs. It consists of several keycomponents that are easily used and extended by researchers: promptconstruction, prompt engineering, dataset and model loading, adversarial promptattack, dynamic evaluation protocols, and analysis tools. PromptBench isdesigned to be an open, general, and flexible codebase for research purposesthat can facilitate original study in creating new benchmarks, deployingdownstream applications, and designing new evaluation protocols. The code isavailable at: https://github.com/microsoft/promptbench and will be continuouslysupported.",,arXiv,"['cs.ai', 'cs.cl', 'cs.lg']",highly relevant,The paper discusses the use of prompt engineering to unlock reasoning capabilities of a large language model and compares it to multi-agent discussion mechanisms. 
+1531,chatsos llmbased knowledge q&a system for safety engineering,"['Haiyang Tang', 'Zhenyi Liu', 'Dongping Chen', 'Qingzhao Chu']",http://arxiv.org/pdf/2312.08629v1.pdf,2023-12-14,," Recent advancements in large language models (LLMs) have notably propellednatural language processing (NLP) capabilities, demonstrating significantpotential in safety engineering applications. Despite these advancements, LLMsface constraints in processing specialized tasks, attributed to factors such ascorpus size, input processing limitations, and privacy concerns. Obtaininguseful information from reliable sources in a limited time is crucial for LLM.Addressing this, our study introduces an LLM-based Q&A system for safetyengineering, enhancing the comprehension and response accuracy of the model. Weemployed prompt engineering to incorporate external knowledge databases, thusenriching the LLM with up-to-date and reliable information. The system analyzeshistorical incident reports through statistical methods, utilizes vectorembedding to construct a vector database, and offers an efficientsimilarity-based search functionality. Our findings indicate that theintegration of external knowledge significantly augments the capabilities ofLLM for in-depth problem analysis and autonomous task assignment. Iteffectively summarizes accident reports and provides pertinent recommendations.This integration approach not only expands LLM applications in safetyengineering but also sets a precedent for future developments towardsautomation and intelligent systems.",,arXiv,['cs.ai'],highly relevant,"The paper presents NeuroPrompts, an adaptive framework for automatic prompt enhancement in text-to-image generation, which is a direct application of prompt engineering to improve the quality of outputs." +1532,learning interpretable queries for explainable image classification with information pursuit,"['Stefan Kolek', 'Aditya Chattopadhyay', 'Kwan Ho Ryan Chan', 'Hector Andrade-Loarca', 'Gitta Kutyniok', 'Réne Vidal']",http://arxiv.org/pdf/2312.11548v1.pdf,2023-12-16,," Information Pursuit (IP) is an explainable prediction algorithm that greedilyselects a sequence of interpretable queries about the data in order ofinformation gain, updating its posterior at each step based on observedquery-answer pairs. The standard paradigm uses hand-crafted dictionaries ofpotential data queries curated by a domain expert or a large language modelafter a human prompt. However, in practice, hand-crafted dictionaries arelimited by the expertise of the curator and the heuristics of promptengineering. This paper introduces a novel approach: learning a dictionary ofinterpretable queries directly from the dataset. Our query dictionary learningproblem is formulated as an optimization problem by augmenting IP's variationalformulation with learnable dictionary parameters. To formulate learnable andinterpretable queries, we leverage the latent space of large vision andlanguage models like CLIP. To solve the optimization problem, we propose a newquery dictionary learning algorithm inspired by classical sparse dictionarylearning. Our experiments demonstrate that learned dictionaries significantlyoutperform hand-crafted dictionaries generated with large language models.",,arXiv,['cs.cv'],somewhat relevant,"The abstract indicates the study discusses how prompt engineering can enable collaboration between human developers and DevBots, and proposes an experiment related to prompt engineering for API design." 
+1533,climate change from large language models,"['Hongyin Zhu', 'Prayag Tiwari']",http://arxiv.org/pdf/2312.11985v2.pdf,2023-12-19,," Climate change presents significant challenges to the global community, andit is imperative to raise widespread awareness of the climate crisis andeducate users about low-carbon living. Artificial intelligence, particularlylarge language models (LLMs), have emerged as powerful tools in mitigating theclimate crisis, leveraging their extensive knowledge, broad user base, andnatural language interaction capabilities. However, despite the growing body ofresearch on climate change, there is a lack of comprehensive assessments ofclimate crisis knowledge within LLMs. This paper aims to resolve this gap byproposing an automatic evaluation framework. We employ a hybrid approach todata acquisition that combines data synthesis and manual collection to compilea diverse set of questions related to the climate crisis. These questions covervarious aspects of climate change, including its causes, impacts, mitigationstrategies, and adaptation measures. We then evaluate the model knowledgethrough prompt engineering based on the collected questions and generatedanswers. We propose a set of comprehensive metrics to evaluate the climatecrisis knowledge, incorporating indicators from 10 different perspectives.Experimental results show that our method is effective in evaluating theknowledge of LLMs regarding the climate crisis. We evaluate severalstate-of-the-art LLMs and find that their knowledge falls short in terms oftimeliness.",,arXiv,"['cs.cl', 'cs.cy']",highly relevant,"The paper describes a prompt engineering method (SSP) designed to improve image generation quality by appending camera descriptions to prompts in text-to-image synthesis, which aligns with prompt engineering concepts." +1534,a novel approach for rapid development based on chatgpt and prompt engineering,"['Youjia Li', 'Jianjun Shi', 'Zheng Zhang']",http://arxiv.org/pdf/2312.13115v2.pdf,2023-12-20,," Code generation stands as a powerful technique in modern softwaredevelopment, improving development efficiency, reducing errors, and fosteringstandardization and consistency. Recently, ChatGPT has exhibited immensepotential in automatic code generation. However, existing researches on codegeneration lack guidance for practical software development process. In thisstudy, we utilized ChatGPT to develop a web-based code generation platformconsisting of key components: User Interface, Prompt Builder and BackendService. Specifically, Prompt Builder dynamically generated comprehensiveprompts to enhance model generation performance. We conducted experiments on 2datasets, evaluating the generated code through 8 widely used metrics.Theresults demonstrate that (1) Our Prompt Builder is effective, resulting in a65.06% improvement in EM, a 38.45% improvement in BLEU, a 15.70% improvement inCodeBLEU, and a 50.64% improvement in Pass@1. (2) In real developmentscenarios, 98.5% of test cases can be validated through manual validation,highlighting the genuine assistance provided by the ChatGPT-based codegeneration approach.",,arXiv,['cs.se'],highly relevant,"The paper explicitly mentions the use of 'zero-shot prompt engineering' and the development of a 'few-shot prompt engineering system', indicating that it explores prompt engineering techniques in the context of LLMs." 
+1535,automated devops pipeline generation for code repositories using large language models,"['Deep Mehta', 'Kartik Rawool', 'Subodh Gujar', 'Bowen Xu']",http://arxiv.org/pdf/2312.13225v1.pdf,2023-12-20,," Automating software development processes through the orchestration of GitHubAction workflows has revolutionized the efficiency and agility of softwaredelivery pipelines. This paper presents a detailed investigation into the useof Large Language Models (LLMs) specifically, GPT 3.5 and GPT 4 to generate andevaluate GitHub Action workflows for DevOps tasks. Our methodology involvesdata collection from public GitHub repositories, prompt engineering for LLMutilization, and evaluation metrics encompassing exact match scores, BLEUscores, and a novel DevOps Aware score. The research scrutinizes theproficiency of GPT 3.5 and GPT 4 in generating GitHub workflows, whileassessing the influence of various prompt elements in constructing the mostefficient pipeline. Results indicate substantial advancements in GPT 4,particularly in DevOps awareness and syntax correctness. The researchintroduces a GitHub App built on Probot, empowering users to automate workflowgeneration within GitHub ecosystem. This study contributes insights into theevolving landscape of AI-driven automation in DevOps practices.",,arXiv,['cs.se'],somewhat relevant,"The abstract explicitly mentions the use of prompt engineering to enable sophisticated data analysis, which indicates relevance to the topic of hard prefix prompt engineering." +1536,evaluation is all you need prompting generative large language models for annotation tasks in the social sciences a primer using open models,"['Maximilian Weber', 'Merle Reichardt']",http://arxiv.org/pdf/2401.00284v1.pdf,2023-12-30,," This paper explores the use of open generative Large Language Models (LLMs)for annotation tasks in the social sciences. The study highlights thechallenges associated with proprietary models, such as limited reproducibilityand privacy concerns, and advocates for the adoption of open (source) modelsthat can be operated on independent devices. Two examples of annotation tasks,sentiment analysis in tweets and identification of leisure activities inchildhood aspirational essays are provided. The study evaluates the performanceof different prompting strategies and models (neural-chat-7b-v3-2,Starling-LM-7B-alpha, openchat_3.5, zephyr-7b-alpha and zephyr-7b-beta). Theresults indicate the need for careful validation and tailored promptengineering. The study highlights the advantages of open models for dataprivacy and reproducibility.",,arXiv,['cs.cl'],highly relevant,"The paper discusses the use of GPT models and prompt engineering for knowledge graph construction, which indicates direct involvement with prompt engineering techniques." +1537,iot in the era of generative ai vision and challenges,"['Xin Wang', 'Zhongwei Wan', 'Arvin Hekmati', 'Mingyu Zong', 'Samiul Alam', 'Mi Zhang', 'Bhaskar Krishnamachari']",http://arxiv.org/pdf/2401.01923v2.pdf,2024-01-03,," Equipped with sensing, networking, and computing capabilities, Internet ofThings (IoT) such as smartphones, wearables, smart speakers, and householdrobots have been seamlessly weaved into our daily lives. Recent advancements inGenerative AI exemplified by GPT, LLaMA, DALL-E, and Stable Difussion holdimmense promise to push IoT to the next level. 
In this article, we share ourvision and views on the benefits that Generative AI brings to IoT, and discusssome of the most important applications of Generative AI in IoT-relateddomains. Fully harnessing Generative AI in IoT is a complex challenge. Weidentify some of the most critical challenges including high resource demandsof the Generative AI models, prompt engineering, on-device inference,offloading, on-device fine-tuning, federated learning, security, as well asdevelopment tools and benchmarks, and discuss current gaps as well as promisingopportunities on enabling Generative AI for IoT. We hope this article caninspire new research on IoT in the era of Generative AI.",,arXiv,"['cs.dc', 'cs.lg', 'cs.ni']",somewhat relevant,"While the abstract discusses the use of prompt engineering, it is primarily in the context of finetuning and improving Latent Diffusion Models for generating sticker images, not focusing on hard prefix prompting specifically." +1538,chatgpt for conversational recommendation refining recommendations by reprompting with feedback,"['Kyle Dylan Spurlock', 'Cagla Acun', 'Esin Saka', 'Olfa Nasraoui']",http://arxiv.org/pdf/2401.03605v1.pdf,2024-01-07,," Recommendation algorithms have been pivotal in handling the overwhelmingvolume of online content. However, these algorithms seldom consider direct userinput, resulting in superficial interaction between them. Efforts have beenmade to include the user directly in the recommendation process throughconversation, but these systems too have had limited interactivity. Recently,Large Language Models (LLMs) like ChatGPT have gained popularity due to theirease of use and their ability to adapt dynamically to various tasks whileresponding to feedback. In this paper, we investigate the effectiveness ofChatGPT as a top-n conversational recommendation system. We build a rigorouspipeline around ChatGPT to simulate how a user might realistically probe themodel for recommendations: by first instructing and then reprompting withfeedback to refine a set of recommendations. We further explore the effect ofpopularity bias in ChatGPT's recommendations, and compare its performance tobaseline models. We find that reprompting ChatGPT with feedback is an effectivestrategy to improve recommendation relevancy, and that popularity bias can bemitigated through prompt engineering.",,arXiv,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg', 'i.2.7; h.3.3']",somewhat relevant,"The abstract describes using GPT-4 for creating explanatory text for API calls, which is an application of prompt engineering to improve malware detection, but it does not specifically mention hard prefix prompting." +1539,from prompt engineering to prompt science with human in the loop,['Chirag Shah'],http://arxiv.org/pdf/2401.04122v1.pdf,2024-01-01,," As LLMs make their way into many aspects of our lives, one place thatwarrants increased scrutiny with LLM usage is scientific research. Using LLMsfor generating or analyzing data for research purposes is gaining popularity.But when such application is marred with ad-hoc decisions and engineeringsolutions, we need to be concerned about how it may affect that research, itsfindings, or any future works based on that research. We need a more scientificapproach to using LLMs in our research. 
While there are several active effortsto support more systematic construction of prompts, they are often focused moreon achieving desirable outcomes rather than producing replicable andgeneralizable knowledge with sufficient transparency, objectivity, or rigor.This article presents a new methodology inspired by codebook constructionthrough qualitative methods to address that. Using humans in the loop and amulti-phase verification processes, this methodology lays a foundation for moresystematic, objective, and trustworthy way of applying LLMs for analyzing data.Specifically, we show how a set of researchers can work through a rigorousprocess of labeling, deliberating, and documenting to remove subjectivity andbring transparency and replicability to prompt generation process. A set ofexperiments are presented to show how this methodology can be put in practice.",,arXiv,"['cs.hc', 'cs.ai']",somewhat relevant,"The paper discusses enhancing the interpretability and semantics of prompts for text-to-image diffusion models, focusing on prompt inversion rather than expanding prompt engineering techniques for post-training model interaction." +1540,can chatgpt rival neural machine translation a comparative study,"['Zhaokun Jiang', 'Ziyin Zhang']",http://arxiv.org/pdf/2401.05176v1.pdf,2024-01-10,," Inspired by the increasing interest in leveraging large language models fortranslation, this paper evaluates the capabilities of large language models(LLMs) represented by ChatGPT in comparison to the mainstream neural machinetranslation (NMT) engines in translating Chinese diplomatic texts into English.Specifically, we examine the translation quality of ChatGPT and NMT engines asmeasured by four automated metrics and human evaluation based on anerror-typology and six analytic rubrics. Our findings show that automatedmetrics yield similar results for ChatGPT under different prompts and NMTsystems, while human annotators tend to assign noticeably higher scores toChatGPT when it is provided an example or contextual information about thetranslation task. Pairwise correlation between automated metrics and dimensionsof human evaluation produces weak and non-significant results, suggesting thedivergence between the two methods of translation quality assessment. Thesefindings provide valuable insights into the potential of ChatGPT as a capablemachine translator, and the influence of prompt engineering on its performance.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,The paper explicitly mentions the use of prompt engineering in the context of programming language design and interaction with a large language model to control a drone. +1541,the benefits of a concise chain of thought on problemsolving in large language models,"['Matthew Renze', 'Erhan Guven']",http://arxiv.org/pdf/2401.05618v1.pdf,2024-01-11,," In this paper, we introduce Concise Chain-of-Thought (CCoT) prompting. Wecompared standard CoT and CCoT prompts to see how conciseness impacts responselength and correct-answer accuracy. We evaluated this using GPT-3.5 and GPT-4with a multiple-choice question-and-answer (MCQA) benchmark. CCoT reducedaverage response length by 48.70% for both GPT-3.5 and GPT-4 while having anegligible impact on problem-solving performance. However, on math problems,GPT-3.5 with CCoT incurs a performance penalty of 27.69%. Overall, CCoT leadsto an average per-token cost reduction of 22.67%. 
These results have practical implications for AI systems engineers using LLMs to solve real-world problems with CoT prompt-engineering techniques. In addition, these results provide more general insight for AI researchers studying the emergent behavior of step-by-step reasoning in LLMs.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper focuses on utilizing LLMs in recommendation tasks via prompting engineering and specifically analyzes the impact of prompting strategies, which falls directly within the scope of prompt engineering."
1542,seek for incantations towards accurate texttoimage diffusion synthesis through prompt engineering,"['Chang Yu', 'Junran Peng', 'Xiangyu Zhu', 'Zhaoxiang Zhang', 'Qi Tian', 'Zhen Lei']",http://arxiv.org/pdf/2401.06345v1.pdf,2024-01-12,," The text-to-image synthesis by diffusion models has recently shown remarkable performance in generating high-quality images. Although performs well for simple texts, the models may get confused when faced with complex texts that contain multiple objects or spatial relationships. To get the desired images, a feasible way is to manually adjust the textual descriptions, i.e., narrating the texts or adding some words, which is labor-consuming. In this paper, we propose a framework to learn the proper textual descriptions for diffusion models through prompt learning. By utilizing the quality guidance and the semantic guidance derived from the pre-trained diffusion model, our method can effectively learn the prompts to improve the matches between the input text and the generated images. Extensive experiments and analyses have validated the effectiveness of the proposed method.",,arXiv,['cs.cv'],highly relevant,"The abstract mentions the use of prompt engineering techniques to fine-tune a pre-trained language model, which is directly relevant to the topic of prompt engineering."
1543,open the pandora's box of llms jailbreaking llms through representation engineering,"['Tianlong Li', 'Xiaoqing Zheng', 'Xuanjing Huang']",http://arxiv.org/pdf/2401.06824v1.pdf,2024-01-12,," Getting large language models (LLMs) to refuse to answer hostile toxicity questions is a core issue under the theme of LLMs security. Previous approaches have used prompts engineering to jailbreak LLMs and answer some toxicity questions. These approaches can easily fail after the model manufacturer makes additional fine-tuning to the model. To promote the further understanding of model jailbreaking by researchers, we are inspired by Representation Engineering to propose a jailbreaking method that does not require elaborate construction prompts, is not affected by model fine-tuning, and can be widely applied to any open-source LLMs in a pluggable manner. We have evaluated this method on multiple mainstream LLMs on carefully supplemented toxicity datasets, and the experimental results demonstrate the significant effectiveness of our approach. After being surprised by some interesting jailbreaking cases, we did extensive in-depth research to explore the techniques behind this method.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper discusses the development of an automated tool, PRewrite, for optimizing manually drafted initial prompts in the context of prompt engineering, indicating its focus on post-training prompting techniques."
+1544,code generation with alphacodium from prompt engineering to flow engineering,"['Tal Ridnik', 'Dedy Kredo', 'Itamar Friedman']",http://arxiv.org/pdf/2401.08500v1.pdf,2024-01-16,," Code generation problems differ from common natural language problems - theyrequire matching the exact syntax of the target language, identifying happypaths and edge cases, paying attention to numerous small details in the problemspec, and addressing other code-specific issues and requirements. Hence, manyof the optimizations and tricks that have been successful in natural languagegeneration may not be effective for code tasks. In this work, we propose a newapproach to code generation by LLMs, which we call AlphaCodium - a test-based,multi-stage, code-oriented iterative flow, that improves the performances ofLLMs on code problems. We tested AlphaCodium on a challenging code generationdataset called CodeContests, which includes competitive programming problemsfrom platforms such as Codeforces. The proposed flow consistently andsignificantly improves results. On the validation set, for example, GPT-4accuracy (pass@5) increased from 19% with a single well-designed direct promptto 44% with the AlphaCodium flow. Many of the principles and best practicesacquired in this work, we believe, are broadly applicable to general codegeneration tasks. Full implementation is available at:https://github.com/Codium-ai/AlphaCodium",,arXiv,"['cs.lg', 'cs.cl', 'cs.se']",highly relevant,"The abstract explicitly mentions the use of prompt engineering and few-shot learning techniques to iteratively train GPT-3.5 for providing feedback, indicating direct relevance to the topic of prompt engineering." +1545,icbellm high quality international events data with open source large language models on consumer hardware,"['Rex W. Douglass', 'Thomas Leo Scherer', 'J. Andrés Gannon', 'Erik Gartzke']",http://arxiv.org/pdf/2401.10558v1.pdf,2024-01-19,," The International Crises Behavior Events (ICBe) ontology provides highcoverage over the thoughts, communications, and actions that constituteinternational relations. A major disadvantage of that level of detail is thatit requires large human capital costs to apply it manually to new texts.Whether such an ontolgy is practical for international relations research givenlimited human and financial resources is a pressing concern. We introduce aworking proof of concept showing that ICBe codings can be reliably extractedfrom new texts using the current generation of open source large languagemodels (LLM) running on consumer grade computer hardware. Our solution requiresno finetuning and only limited prompt engineering. We detail our solution andpresent benchmarks against the original ICBe codings. We conclude by discussingthe implications of very high quality event coding of any text being withinreach of individual researchers with limited resources.",,arXiv,['stat.ap'],highly relevant,"The abstract explicitly mentions the improvement of large language models' performance through innovative prompting techniques and discusses the use of prompt engineering coupled with structures, which are highly relevant to the topic of hard prefix prompting." 
+1546,incontext learning for extreme multilabel classification,"[""Karel D'Oosterlinck"", 'Omar Khattab', 'François Remy', 'Thomas Demeester', 'Chris Develder', 'Christopher Potts']",http://arxiv.org/pdf/2401.12178v1.pdf,2024-01-22,," Multi-label classification problems with thousands of classes are hard tosolve with in-context learning alone, as language models (LMs) might lack priorknowledge about the precise classes or how to assign them, and it is generallyinfeasible to demonstrate every class in a prompt. We propose a generalprogram, $\texttt{Infer--Retrieve--Rank}$, that defines multi-step interactionsbetween LMs and retrievers to efficiently tackle such problems. We implementthis program using the $\texttt{DSPy}$ programming model, which specifiesin-context systems in a declarative manner, and use $\texttt{DSPy}$ optimizersto tune it towards specific datasets by bootstrapping only tens of few-shotexamples. Our primary extreme classification program, optimized separately foreach task, attains state-of-the-art results across three benchmarks (HOUSE,TECH, TECHWOLF). We apply the same program to a benchmark with vastly differentcharacteristics and attain competitive performance as well (BioDEX). Unlikeprior work, our proposed solution requires no finetuning, is easily applicableto new tasks, alleviates prompt engineering, and requires only tens of labeledexamples. Our code is public at https://github.com/KarelDO/xmc.dspy.",,arXiv,"['cs.cl', 'cs.ai']",highly relevant,"The paper examines the impact of prompt position on model performance, which is a significant aspect of prompt engineering." +1547,a generalpurpose ai avatar in healthcare,"['Nicholas Yan', 'Gil Alterovitz']",http://arxiv.org/pdf/2401.12981v1.pdf,2024-01-10,," Recent advancements in machine learning and natural language processing haveled to the rapid development of artificial intelligence (AI) as a valuable toolin the healthcare industry. Using large language models (LLMs) asconversational agents or chatbots has the potential to assist doctors indiagnosing patients, detecting early symptoms of diseases, and providing healthadvice to patients. This paper focuses on the role of chatbots in healthcareand explores the use of avatars to make AI interactions more appealing topatients. A framework of a general-purpose AI avatar application isdemonstrated by using a three-category prompt dictionary and prompt improvementmechanism. A two-phase approach is suggested to fine-tune a general-purpose AIlanguage model and create different AI avatars to discuss medical issues withusers. Prompt engineering enhances the chatbot's conversational abilities andpersonality traits, fostering a more human-like interaction with patients.Ultimately, the injection of personality into the chatbot could potentiallyincrease patient engagement. Future directions for research includeinvestigating ways to improve chatbots' understanding of context and ensuringthe accuracy of their outputs through fine-tuning with specialized medical datasets.",,arXiv,['cs.cl'],highly relevant,"The paper presents a systematic analysis comparing the effects of prompts which indicates it focuses on aspects of prompt engineering, and thus is relevant to the topic." 
+1548,gptvoicetasker llmpowered virtual assistant for smartphone,"['Minh Duc Vu', 'Han Wang', 'Zhuang Li', 'Jieshan Chen', 'Shengdong Zhao', 'Zhenchang Xing', 'Chunyang Chen']",http://arxiv.org/pdf/2401.14268v1.pdf,2024-01-25,," Virtual assistants have the potential to play an important role in helpingusers achieves different tasks. However, these systems face challenges in theirreal-world usability, characterized by inefficiency and struggles in graspinguser intentions. Leveraging recent advances in Large Language Models (LLMs), weintroduce GptVoiceTasker, a virtual assistant poised to enhance userexperiences and task efficiency on mobile devices. GptVoiceTasker excels atintelligently deciphering user commands and executing relevant deviceinteractions to streamline task completion. The system continually learns fromhistorical user commands to automate subsequent usages, further enhancingexecution efficiency. Our experiments affirm GptVoiceTasker's exceptionalcommand interpretation abilities and the precision of its task automationmodule. In our user study, GptVoiceTasker boosted task efficiency in real-worldscenarios by 34.85%, accompanied by positive participant feedback. We madeGptVoiceTasker open-source, inviting further research into LLMs utilization fordiverse tasks through prompt engineering and leveraging user usage data toimprove efficiency.",,arXiv,['cs.hc'],highly relevant,"The abstract mentions the use of 'prompt engineering' as a method for improving large language models' communication skills, indicating it discusses the application of prompts in model training or utilization." +1549,can large language models reason about medical questions,"['Valentin Liévin', 'Christoffer Egeberg Hother', 'Andreas Geert Motzfeldt', 'Ole Winther']",http://arxiv.org/pdf/2207.08143v4.pdf,2022-07-17,," Although large language models (LLMs) often produce impressive outputs, itremains unclear how they perform in real-world scenarios requiring strongreasoning skills and expert domain knowledge. We set out to investigate whetherclose- and open-source models (GPT-3.5, LLama-2, etc.) can be applied to answerand reason about difficult real-world-based questions. We focus on threepopular medical benchmarks (MedQA-USMLE, MedMCQA, and PubMedQA) and multipleprompting scenarios: Chain-of-Thought (CoT, think step-by-step), few-shot andretrieval augmentation. Based on an expert annotation of the generated CoTs, wefound that InstructGPT can often read, reason and recall expert knowledge.Last, by leveraging advances in prompt engineering (few-shot and ensemblemethods), we demonstrated that GPT-3.5 not only yields calibrated predictivedistributions, but also reaches the passing score on three datasets:MedQA-USMLE 60.2%, MedMCQA 62.7% and PubMedQA 78.2%. Open-source models areclosing the gap: Llama-2 70B also passed the MedQA-USMLE with 62.5% accuracy.",,arXiv,"['cs.cl', 'cs.ai', 'cs.lg', 'i.2.1; i.2.7']",highly relevant,"The paper introduces a unified library called PromptBench that involves prompt construction and prompt engineering, directly addressing the topic of interest." +1550,spear phishing with large language models,['Julian Hazell'],http://arxiv.org/pdf/2305.06972v3.pdf,2023-05-11,," Recent progress in artificial intelligence (AI), particularly in the domainof large language models (LLMs), has resulted in powerful and versatiledual-use systems. This intelligence can be put towards a wide variety ofbeneficial tasks, yet it can also be used to cause harm. 
This study exploresone such harm by examining how LLMs can be used for spear phishing, a form ofcybercrime that involves manipulating targets into divulging sensitiveinformation. I first explore LLMs' ability to assist with the reconnaissanceand message generation stages of a spear phishing attack, where I find thatLLMs are capable of assisting with the email generation phase of a spearphishing attack. To explore how LLMs could potentially be harnessed to scalespear phishing campaigns, I then create unique spear phishing messages for over600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 models. Myfindings provide some evidence that these messages are not only realistic butalso cost-effective, with each email costing only a fraction of a cent togenerate. Next, I demonstrate how basic prompt engineering can circumventsafeguards installed in LLMs, highlighting the need for further research intorobust interventions that can help prevent models from being misused. Tofurther address these evolving risks, I explore two potential solutions:structured access schemes, such as application programming interfaces, andLLM-based defensive systems.",,arXiv,"['cs.cy', 'cs.ai', 'cs.cr']",highly relevant,"The paper discusses the development of a foundation model for cell segmentation that uses a prompt engineering approach, which is directly related to the topic of prompt engineering." +1551,"software testing with large language models survey, landscape, and vision","['Junjie Wang', 'Yuchao Huang', 'Chunyang Chen', 'Zhe Liu', 'Song Wang', 'Qing Wang']",http://arxiv.org/pdf/2307.07221v2.pdf,2023-07-14,," Pre-trained large language models (LLMs) have recently emerged as abreakthrough technology in natural language processing and artificialintelligence, with the ability to handle large-scale datasets and exhibitremarkable performance across a wide range of tasks. Meanwhile, softwaretesting is a crucial undertaking that serves as a cornerstone for ensuring thequality and reliability of software products. As the scope and complexity ofsoftware systems continue to grow, the need for more effective software testingtechniques becomes increasingly urgent, making it an area ripe for innovativeapproaches such as the use of LLMs. This paper provides a comprehensive reviewof the utilization of LLMs in software testing. It analyzes 102 relevantstudies that have used LLMs for software testing, from both the softwaretesting and LLMs perspectives. The paper presents a detailed discussion of thesoftware testing tasks for which LLMs are commonly used, among which test casepreparation and program repair are the most representative. It also analyzesthe commonly used LLMs, the types of prompt engineering that are employed, aswell as the accompanied techniques with these LLMs. It also summarizes the keychallenges and potential opportunities in this direction. This work can serveas a roadmap for future research in this area, highlighting potential avenuesfor exploration, and identifying gaps in our current understanding of the useof LLMs in software testing.",,arXiv,['cs.se'],highly relevant,"The abstract explicitly mentions the use of 'prompt engineering' for improving reasoning capabilities in large language models, which indicates direct relevance to the topic of prompt engineering." 
+1552,"understanding users' dissatisfaction with chatgpt responses types, resolving tactics, and the effect of knowledge level","['Yoonsu Kim', 'Jueon Lee', 'Seoyoung Kim', 'Jaehyuk Park', 'Juho Kim']",http://arxiv.org/pdf/2311.07434v1.pdf,2023-11-13,," Large language models (LLMs) with chat-based capabilities, such as ChatGPT,are widely used in various workflows. However, due to a limited understandingof these large-scale models, users struggle to use this technology andexperience different kinds of dissatisfaction. Researchers have introducedseveral methods such as prompt engineering to improve model responses. However,they focus on crafting one prompt, and little has been investigated on how todeal with the dissatisfaction the user encountered during the conversation.Therefore, with ChatGPT as the case study, we examine end users'dissatisfaction along with their strategies to address the dissatisfaction.After organizing users' dissatisfaction with LLM into seven categories based ona literature review, we collected 511 instances of dissatisfactory ChatGPTresponses from 107 users and their detailed recollections of dissatisfiedexperiences, which we release as a publicly accessible dataset. Our analysisreveals that users most frequently experience dissatisfaction when ChatGPTfails to grasp their intentions, while they rate the severity ofdissatisfaction the highest with dissatisfaction related to accuracy. We alsoidentified four tactics users employ to address their dissatisfaction and theireffectiveness. We found that users often do not use any tactics to addresstheir dissatisfaction, and even when using tactics, 72% of dissatisfactionremained unresolved. Moreover, we found that users with low knowledge regardingLLMs tend to face more dissatisfaction on accuracy while they often put minimaleffort in addressing dissatisfaction. Based on these findings, we proposedesign implications for minimizing user dissatisfaction and enhancing theusability of chat-based LLM services.",,arXiv,['cs.hc'],somewhat relevant,"The abstract discusses optimizing Chain-of-Thoughts (CoT) prompts, which implies some form of prompt engineering, but does not specifically mention the use of hard prefix prompts." +1553,a foundation model for cell segmentation,"['Uriah Israel', 'Markus Marks', 'Rohit Dilip', 'Qilin Li', 'Morgan Schwartz', 'Elora Pradhan', 'Edward Pao', 'Shenyi Li', 'Alexander Pearson-Goulart', 'Pietro Perona', 'Georgia Gkioxari', 'Ross Barnowski', 'Yisong Yue', 'David Van Valen']",http://arxiv.org/pdf/2311.11004v1.pdf,2023-11-18,," Cells are the fundamental unit of biological organization, and identifyingthem in imaging data - cell segmentation - is a critical task for variouscellular imaging experiments. While deep learning methods have led tosubstantial progress on this problem, models that have seen wide use arespecialist models that work well for specific domains. Methods that havelearned the general notion of ""what is a cell"" and can identify them acrossdifferent domains of cellular imaging data have proven elusive. In this work,we present CellSAM, a foundation model for cell segmentation that generalizesacross diverse cellular imaging data. CellSAM builds on top of the SegmentAnything Model (SAM) by developing a prompt engineering approach to maskgeneration. We train an object detector, CellFinder, to automatically detectcells and prompt SAM to generate segmentations. 
We show that this approachallows a single model to achieve state-of-the-art performance for segmentingimages of mammalian cells (in tissues and cell culture), yeast, and bacteriacollected with various imaging modalities. To enable accessibility, weintegrate CellSAM into DeepCell Label to further accelerate human-in-the-looplabeling strategies for cellular imaging data. A deployed version of CellSAM isavailable at https://label-dev.deepcell.org/.",,arXiv,['q-bio.qm'],highly relevant,"The paper discusses improving pre-trained Vision-Language models with prompt learning, specifically adding a learnable context vector, which is relevant to hard prefix prompting." +1554,large language modeldriven classroom flipping empowering studentcentric peer questioning with flipped interaction,['Chee Wei Tan'],http://arxiv.org/pdf/2311.14708v1.pdf,2023-11-14,," Reciprocal questioning is essential for effective teaching and learning,fostering active engagement and deeper understanding through collaborativeinteractions, especially in large classrooms. Can large language model (LLM),such as OpenAI's GPT (Generative Pre-trained Transformer) series, assist inthis? This paper investigates a pedagogical approach of classroom flippingbased on flipped interaction in LLMs. Flipped interaction involves usinglanguage models to prioritize generating questions instead of answers toprompts. We demonstrate how traditional classroom flipping techniques,including Peer Instruction and Just-in-Time Teaching (JiTT), can be enhancedthrough flipped interaction techniques, creating student-centric questions forhybrid teaching. In particular, we propose a workflow to integrate promptengineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and aquiz-prompt-discuss routine to empower students to self-regulate their learningcapacity and enable teachers to swiftly personalize training pathways. Wedevelop an LLM-driven chatbot software that digitizes various elements ofclassroom flipping and facilitates the assessment of students using theseroutines to deliver peer-generated questions. We have applied our LLM-drivenchatbot software for teaching both undergraduate and graduate students from2020 to 2022, effectively useful for bridging the gap between teachers andstudents in remote teaching during the COVID-19 pandemic years. In particular,LLM-driven classroom flipping can be particularly beneficial in large classsettings to optimize teaching pace and enable engaging classroom experiences.",,arXiv,"['cs.cy', 'cs.ai', 'cs.cl', 'cs.hc']",highly relevant,"The paper details the evaluation of instruction-tuned language models on coreference resolution benchmarks using a prompt-based approach, which falls within the scope of prompt engineering." +1555,semanticaware frameevent fusion based pattern recognition via large visionlanguage models,"['Dong Li', 'Jiandong Jin', 'Yuhao Zhang', 'Yanlin Zhong', 'Yaoyang Wu', 'Lan Chen', 'Xiao Wang', 'Bin Luo']",http://arxiv.org/pdf/2311.18592v1.pdf,2023-11-30,," Pattern recognition through the fusion of RGB frames and Event streams hasemerged as a novel research area in recent years. Current methods typicallyemploy backbone networks to individually extract the features of RGB frames andevent streams, and subsequently fuse these features for pattern recognition.However, we posit that these methods may suffer from key issues like sematicgaps and small-scale backbone networks. 
In this study, we introduce a novelpattern recognition framework that consolidates the semantic labels, RGBframes, and event streams, leveraging pre-trained large-scale vision-languagemodels. Specifically, given the input RGB frames, event streams, and all thepredefined semantic labels, we employ a pre-trained large-scale vision model(CLIP vision encoder) to extract the RGB and event features. To handle thesemantic labels, we initially convert them into language descriptions throughprompt engineering, and then obtain the semantic features using the pre-trainedlarge-scale language model (CLIP text encoder). Subsequently, we integrate theRGB/Event features and semantic features using multimodal Transformer networks.The resulting frame and event tokens are further amplified using self-attentionlayers. Concurrently, we propose to enhance the interactions between texttokens and RGB/Event tokens via cross-attention. Finally, we consolidate allthree modalities using self-attention and feed-forward layers for recognition.Comprehensive experiments on the HARDVS and PokerEvent datasets fullysubstantiate the efficacy of our proposed SAFE model. The source code will bemade available at https://github.com/Event-AHU/SAFE_LargeVLM.",,arXiv,"['cs.cv', 'cs.ai']",somewhat relevant,"The abstract mentions the use of 'generic and custom prompts' in evaluating model performance, which is closely related to the study of prompt engineering, especially if custom prompts imply using hard prefixes." +1556,applying large language models and chainofthought for automatic scoring,"['Gyeong-Geon Lee', 'Ehsan Latif', 'Xuansheng Wu', 'Ninghao Liu', 'Xiaoming Zhai']",http://arxiv.org/pdf/2312.03748v1.pdf,2023-11-30,," This study investigates the application of large language models (LLMs),specifically GPT-3.5 and GPT-4, with Chain-of-Though (CoT)in the automaticscoring of student-written responses to science assessments. We focused onovercoming the challenges of accessibility, technical complexity, and lack ofexplainability that have previously limited the use of automatic assessmenttools among researchers and educators. We used a testing dataset comprising sixassessment tasks (three binomial and three trinomial) with 1,650 studentresponses. We employed six prompt engineering strategies, combining zero-shotor few-shot learning with CoT, either alone or alongside item stem and scoringrubrics. Results indicated that few-shot (acc = .67) outperformed zero-shotlearning (acc = .60), with 12.6\% increase. CoT, when used without item stemand scoring rubrics, did not significantly affect scoring accuracy (acc = .60).However, CoT prompting paired with contextual item stems and rubrics proved tobe a significant contributor to scoring accuracy (13.44\% increase forzero-shot; 3.7\% increase for few-shot). Using a novel approach PPEAS, we founda more balanced accuracy across different proficiency categories, highlightingthe importance of domain-specific reasoning in enhancing the effectiveness ofLLMs in scoring tasks. Additionally, we also found that GPT-4 demonstratedsuperior performance over GPT-3.5 in various scoring tasks, showing 8.64\%difference. The study revealed that the single-call strategy with GPT-4,particularly using greedy sampling, outperformed other approaches, includingensemble voting strategies. 
This study demonstrates the potential of LLMs infacilitating automatic scoring, emphasizing that CoT enhances accuracy,particularly when used with item stem and scoring rubrics.",,arXiv,"['cs.cl', 'cs.ai']",somewhat relevant,The paper discusses the evaluation of a prompt-based tool learning method among others to address hallucination issues in LLMs. +1557,seeavatar photorealistic textto3d avatar generation with constrained geometry and appearance,"['Yuanyou Xu', 'Zongxin Yang', 'Yi Yang']",http://arxiv.org/pdf/2312.08889v2.pdf,2023-12-13,," Powered by large-scale text-to-image generation models, text-to-3D avatargeneration has made promising progress. However, most methods fail to producephotorealistic results, limited by imprecise geometry and low-qualityappearance. Towards more practical avatar generation, we present SEEAvatar, amethod for generating photorealistic 3D avatars from text with SElf-Evolvingconstraints for decoupled geometry and appearance. For geometry, we propose toconstrain the optimized avatar in a decent global shape with a template avatar.The template avatar is initialized with human prior and can be updated by theoptimized avatar periodically as an evolving template, which enables moreflexible shape generation. Besides, the geometry is also constrained by thestatic human prior in local parts like face and hands to maintain the delicatestructures. For appearance generation, we use diffusion model enhanced byprompt engineering to guide a physically based rendering pipeline to generaterealistic textures. The lightness constraint is applied on the albedo textureto suppress incorrect lighting effect. Experiments show that our methodoutperforms previous methods on both global and local geometry and appearancequality by a large margin. Since our method can produce high-quality meshes andtextures, such assets can be directly applied in classic graphics pipeline forrealistic rendering under any lighting condition. Project page at:https://yoxu515.github.io/SEEAvatar/.",,arXiv,['cs.cv'],somewhat relevant,"The paper discusses parameter-efficient approaches like prompt-tuning, which is related to the topic of prompt engineering." +1558,promptbased distribution alignment for unsupervised domain adaptation,"['Shuanghao Bai', 'Min Zhang', 'Wanqi Zhou', 'Siteng Huang', 'Zhirong Luan', 'Donglin Wang', 'Badong Chen']",http://arxiv.org/pdf/2312.09553v2.pdf,2023-12-15,," Recently, despite the unprecedented success of large pre-trainedvisual-language models (VLMs) on a wide range of downstream tasks, thereal-world unsupervised domain adaptation (UDA) problem is still not wellexplored. Therefore, in this paper, we first experimentally demonstrate thatthe unsupervised-trained VLMs can significantly reduce the distributiondiscrepancy between source and target domains, thereby improving theperformance of UDA. However, a major challenge for directly deploying suchmodels on downstream UDA tasks is prompt engineering, which requires aligningthe domain knowledge of source and target domains, since the performance of UDAis severely influenced by a good domain-invariant representation. We furtherpropose a Prompt-based Distribution Alignment (PDA) method to incorporate thedomain knowledge into prompt learning. Specifically, PDA employs a two-branchprompt-tuning paradigm, namely base branch and alignment branch. The basebranch focuses on integrating class-related representation into prompts,ensuring discrimination among different classes. 
To further minimize domaindiscrepancy, for the alignment branch, we construct feature banks for both thesource and target domains and propose image-guided feature tuning (IFT) to makethe input attend to feature banks, which effectively integrates self-enhancedand cross-domain features into the model. In this way, these two branches canbe mutually promoted to enhance the adaptation of VLMs for UDA. We conductextensive experiments on three benchmarks to demonstrate that our proposed PDAachieves state-of-the-art performance. The code is available athttps://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.",,arXiv,['cs.cv'],highly relevant,"The paper focuses on few-shot prompting and discusses a novel technique to guide large language models, which is directly related to prompt engineering." +1559,knnicl compositional taskoriented parsing generalization with nearest neighbor incontext learning,"['Wenting Zhao', 'Ye Liu', 'Yao Wan', 'Yibo Wang', 'Qingyang Wu', 'Zhongfen Deng', 'Jiangshu Du', 'Shuaiqi Liu', 'Yunlong Xu', 'Philip S. Yu']",http://arxiv.org/pdf/2312.10771v1.pdf,2023-12-17,," Task-Oriented Parsing (TOP) enables conversational assistants to interpretuser commands expressed in natural language, transforming them into structuredoutputs that combine elements of both natural language and intent/slot tags.Recently, Large Language Models (LLMs) have achieved impressive performance insynthesizing computer programs based on a natural language prompt, mitigatingthe gap between natural language and structured programs. Our paper focuses onharnessing the capabilities of LLMs for semantic parsing tasks, addressing thefollowing three key research questions: 1) How can LLMs be effectively utilizedfor semantic parsing tasks? 2) What defines an effective prompt? and 3) How canLLM overcome the length constraint and streamline prompt design by includingall examples as prompts? We introduce k Nearest Neighbor In-ContextLearning(kNN-ICL), which simplifies prompt engineering by allowing it to bebuilt on top of any design strategy while providing access to all demoexamples. Extensive experiments show that: 1)Simple ICL without kNN search canachieve a comparable performance with strong supervised models on the TOPtasks, and 2) kNN-ICL significantly improves the comprehension of complexrequests by seamlessly integrating ICL with a nearest-neighbor approach.Notably, this enhancement is achieved without the need for additional data orspecialized prompts.",,arXiv,['cs.cl'],somewhat relevant,"The paper introduces a language-grounded visual prompting method, which is related to the concept of prompt engineering in the context of adapting vision-language models for specific tasks." +1560,a reliable knowledge processing framework for combustion science using foundation models,"['Vansh Sharma', 'Venkat Raman']",http://arxiv.org/pdf/2401.00544v2.pdf,2023-12-31,," This research explores the integration of large language models (LLMs) intoscientific data assimilation, focusing on combustion science as a case study.Leveraging foundational models integrated with Retrieval-Augmented Generation(RAG) framework, the study introduces an approach to process diverse combustionresearch data, spanning experimental studies, simulations, and literature. Themultifaceted nature of combustion research emphasizes the critical role ofknowledge processing in navigating and extracting valuable information from avast and diverse pool of sources. 
The developed approach minimizescomputational and economic expenses while optimizing data privacy and accuracy.It incorporates prompt engineering and offline open-source LLMs, offering userautonomy in selecting base models. The study provides a thorough examination oftext segmentation strategies, conducts comparative studies between LLMs, andexplores various optimized prompts to demonstrate the effectiveness of theframework. By incorporating an external database, the framework outperforms aconventional LLM in generating accurate responses and constructing robustarguments. Additionally, the study delves into the investigation of optimizedprompt templates for the purpose of efficient extraction of scientificliterature. The research addresses concerns related to hallucinations and falseresearch articles by introducing a custom workflow developed with a detectionalgorithm to filter out inaccuracies. Despite identified areas for improvement,the framework consistently delivers accurate domain-specific responses withminimal human oversight. The prompt-agnostic approach introduced holds promisefor future deliberations. The study underscores the significance of integratingLLMs and knowledge processing techniques in scientific research, providing afoundation for advancements in data assimilation and utilization.",,arXiv,"['cs.ai', 'cs.lg']",highly relevant,"The abstract mentions evaluating five bias mitigation prompt strategies, including zero-shot, one-shot, few-shot, and two Chain-of-Thought prompts, which are directly related to prompt engineering." +1561,in search of the longtail systematic generation of longtail knowledge via logical rule guided search,"['Huihan Li', 'Yuting Ning', 'Zeyi Liao', 'Siyuan Wang', 'Xiang Lorraine Li', 'Ximing Lu', 'Faeze Brahman', 'Wenting Zhao', 'Yejin Choi', 'Xiang Ren']",http://arxiv.org/pdf/2311.07237v1.pdf,2023-11-13,," Since large language models have approached human-level performance on manytasks, it has become increasingly harder for researchers to find tasks that arestill challenging to the models. Failure cases usually come from the long-taildistribution - data that an oracle language model could assign a probability onthe lower end of its distribution. Current methodology such as promptengineering or crowdsourcing are insufficient for creating long-tail examplesbecause humans are constrained by cognitive bias. We propose aLogic-Induced-Knowledge-Search (LINK) framework for systematically generatinglong-tail knowledge statements. Grounded by a symbolic rule, we search forlong-tail values for each variable of the rule by first prompting a LLM, thenverifying the correctness of the values with a critic, and lastly pushing forthe long-tail distribution with a reranker. With this framework we construct adataset, Logic-Induced-Long-Tail (LINT), consisting of 200 symbolic rules and50K knowledge statements spanning across four domains. Human annotations findthat 84% of the statements in LINT are factually correct. In contrast, ChatGPTand GPT4 struggle with directly generating long-tail statements under theguidance of logic rules, each only getting 56% and 78% of their statementscorrect. Moreover, their ""long-tail"" generations in fact fall into the higherlikelihood range, and thus are not really long-tail. Our findings suggest thatLINK is effective for generating data in the long-tail distribution whileenforcing quality. LINT can be useful for systematically evaluating LLMs'capabilities in the long-tail distribution. 
We challenge the models with asimple entailment classification task using samples from LINT. We find thatChatGPT and GPT4's capability in identifying incorrect knowledge drop by ~3% inthe long-tail distribution compared to head distribution.",,arXiv,"['cs.cl', 'cs.ai']",somewhat relevant,"The abstract mentions the use of a 'prompt design' in the hierarchical casualty extraction model, which indicates relevance to the topic of prompt engineering." +1562,from beginner to expert modeling medical knowledge into general llms,"['Qiang Li', 'Xiaoyan Yang', 'Haowen Wang', 'Qin Wang', 'Lei Liu', 'Junjie Wang', 'Yang Zhang', 'Mingyuan Chu', 'Sen Hu', 'Yicheng Chen', 'Yue Shen', 'Cong Fan', 'Wangshu Zhang', 'Teng Xu', 'Jinjie Gu', 'Jing Zheng', 'Guannan Zhang Ant Group']",http://arxiv.org/pdf/2312.01040v3.pdf,2023-12-02,," Recently, large language model (LLM) based artificial intelligence (AI)systems have demonstrated remarkable capabilities in natural languageunderstanding and generation. However, these models face a significantchallenge when it comes to sensitive applications, such as reasoning overmedical knowledge and answering medical questions in a physician-like manner.Prior studies attempted to overcome this challenge by increasing the model size(>100B) to learn more general medical knowledge, while there is still room forimprovement in LLMs with smaller-scale model sizes (<100B). In this work, westart from a pre-trained general LLM model (AntGLM-10B) and fine-tune it from amedical beginner towards a medical expert (called AntGLM-Med-10B), whichleverages a 3-stage optimization procedure, i.e., general medical knowledgeinjection, medical domain instruction tuning, and specific medical taskadaptation. Our contributions are threefold: (1) We specifically investigatehow to adapt a pre-trained general LLM in medical domain, especially for aspecific medical task. (2) We collect and construct large-scale medicaldatasets for each stage of the optimization process. These datasets encompassvarious data types and tasks, such as question-answering, medical reasoning,multi-choice questions, and medical conversations. (3) Specifically formulti-choice questions in the medical domain, we propose a novelVerification-of-Choice approach for prompting engineering, which significantlyenhances the reasoning ability of LLMs. Remarkably, by combining the aboveapproaches, our AntGLM-Med-10B model can outperform the most of LLMs onPubMedQA, including both general and medical LLMs, even when these LLMs havelarger model size.",,arXiv,['cs.cl'],somewhat relevant,"The paper discusses a novel prompt learning framework, PromptCS, that generates continuous prompts which are not the focus of the systematic review which is on hard prefix prompts, therefore it's only somewhat relevant to the topic of hard prompt engineering." +1563,comma coarticulated multimodal learning,"['Lianyu Hu', 'Liqing Gao', 'Zekang Liu', 'Chi-Man Pun', 'Wei Feng']",http://arxiv.org/pdf/2401.00268v1.pdf,2023-12-30,," Pretrained large-scale vision-language models such as CLIP have demonstratedexcellent generalizability over a series of downstream tasks. However, they aresensitive to the variation of input text prompts and need a selection of prompttemplates to achieve satisfactory performance. Recently, various methods havebeen proposed to dynamically learn the prompts as the textual inputs to avoidthe requirements of laboring hand-crafted prompt engineering in the fine-tuningprocess. We notice that these methods are suboptimal in two aspects. 
First, the prompts of the vision and language branches in these methods are usually separated or uni-directionally correlated. Thus, the prompts of both branches are not fully correlated and may not provide enough guidance to align the representations of both branches. Second, it's observed that most previous methods usually achieve better performance on seen classes but cause performance degeneration on unseen classes compared to CLIP. This is because the essential generic knowledge learned in the pretraining stage is partly forgotten in the fine-tuning process. In this paper, we propose Co-Articulated Multi-Modal Learning (COMMA) to handle the above limitations. Especially, our method considers prompts from both branches to generate the prompts to enhance the representation alignment of both branches. Besides, to alleviate forgetting about the essential knowledge, we minimize the feature discrepancy between the learned prompts and the embeddings of hand-crafted prompts in the pre-trained CLIP in the late transformer layers. We evaluate our method across three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Experimental results demonstrate the superiority of our method by exhibiting a favorable performance boost upon all tasks with high efficiency.",,arXiv,['cs.cv'],somewhat relevant,"The paper describes a time series forecasting model and uses terms like 'resolution prefix tuning', which suggests relevance to the topic of hard prefix prompting in prompt engineering."
+1564,generative large language models are autonomous practitioners of evidencebased medicine,"['Akhil Vaid', 'Joshua Lampert', 'Juhee Lee', 'Ashwin Sawant', 'Donald Apakama', 'Ankit Sakhuja', 'Ali Soroush', 'Denise Lee', 'Isotta Landi', 'Nicole Bussola', 'Ismail Nabeel', 'Robbie Freeman', 'Patricia Kovatch', 'Brendan Carr', 'Benjamin Glicksberg', 'Edgar Argulian', 'Stamatios Lerakis', 'Monica Kraft', 'Alexander Charney', 'Girish Nadkarni']",http://arxiv.org/pdf/2401.02851v1.pdf,2024-01-05,," Background: Evidence-based medicine (EBM) is fundamental to modern clinical practice, requiring clinicians to continually update their knowledge and apply the best clinical evidence in patient care. The practice of EBM faces challenges due to rapid advancements in medical research, leading to information overload for clinicians. The integration of artificial intelligence (AI), specifically Generative Large Language Models (LLMs), offers a promising solution towards managing this complexity. Methods: This study involved the curation of real-world clinical cases across various specialties, converting them into .json files for analysis. LLMs, including proprietary models like ChatGPT 3.5 and 4, Gemini Pro, and open-source models like LLaMA v2 and Mixtral-8x7B, were employed. These models were equipped with tools to retrieve information from case files and make clinical decisions similar to how clinicians must operate in the real world. Model performance was evaluated based on correctness of final answer, judicious use of tools, conformity to guidelines, and resistance to hallucinations. Results: GPT-4 was most capable of autonomous operation in a clinical setting, being generally more effective in ordering relevant investigations and conforming to clinical guidelines. Limitations were observed in terms of model ability to handle complex guidelines and diagnostic nuances. Retrieval Augmented Generation made recommendations more tailored to patients and healthcare systems.
Conclusions: LLMs can be made to function as autonomous practitioners of evidence-based medicine. Their ability to utilize tooling can be harnessed to interact with the infrastructure of a real-world healthcare system and perform the tasks of patient management in a guideline directed manner. Prompt engineering may help to further enhance this potential and transform healthcare for the clinician and the patient.",,arXiv,['cs.ai'],highly relevant,"The paper pertains directly to prompt engineering as it investigates the use of generated prompts for zero-shot learning in large language models."
+1565,llm4plc harnessing large language models for verifiable programming of plcs in industrial control systems,"['Mohamad Fakih', 'Rahul Dharmaji', 'Yasamin Moghaddas', 'Gustavo Quiros Araya', 'Oluwatosin Ogundare', 'Mohammad Abdullah Al Faruque']",http://arxiv.org/pdf/2401.05443v1.pdf,2024-01-08,," Although Large Language Models (LLMs) have established pre-dominance in automated code generation, they are not devoid of shortcomings. The pertinent issues primarily relate to the absence of execution guarantees for generated code, a lack of explainability, and suboptimal support for essential but niche programming languages. State-of-the-art LLMs such as GPT-4 and LLaMa2 fail to produce valid programs for Industrial Control Systems (ICS) operated by Programmable Logic Controllers (PLCs). We propose LLM4PLC, a user-guided iterative pipeline leveraging user feedback and external verification tools including grammar checkers, compilers and SMV verifiers to guide the LLM's generation. We further enhance the generation potential of LLM by employing Prompt Engineering and model fine-tuning through the creation and usage of LoRAs. We validate this system using a FischerTechnik Manufacturing TestBed (MFTB), illustrating how LLMs can evolve from generating structurally flawed code to producing verifiably correct programs for industrial applications. We run a complete test suite on GPT-3.5, GPT-4, Code Llama-7B, a fine-tuned Code Llama-7B model, Code Llama-34B, and a fine-tuned Code Llama-34B model. The proposed pipeline improved the generation success rate from 47% to 72%, and the Survey-of-Experts code quality from 2.25/10 to 7.75/10. To promote open research, we share the complete experimental setup, the LLM Fine-Tuning Weights, and the video demonstrations of the different programs on our dedicated webpage.",,arXiv,"['cs.se', 'cs.ai', 'cs.cl', 'cs.pl', 'd.2.4; i.2.7; i.2.2']",somewhat relevant,"The paper discusses controlled studies comparing modular approaches and prompting-based methods, which are related to prompt engineering."
+1566,impact of large language model assistance on patients reading clinical notes a mixedmethods study,"['Niklas Mannhardt', 'Elizabeth Bondi-Kelly', 'Barbara Lam', ""Chloe O'Connell"", 'Mercy Asiedu', 'Hussein Mozannar', 'Monica Agrawal', 'Alejandro Buendia', 'Tatiana Urman', 'Irbaz B. Riaz', 'Catherine E. Ricciardi', 'Marzyeh Ghassemi', 'David Sontag']",http://arxiv.org/pdf/2401.09637v1.pdf,2024-01-17,," Patients derive numerous benefits from reading their clinical notes, including an increased sense of control over their health and improved understanding of their care plan. However, complex medical concepts and jargon within clinical notes hinder patient comprehension and may lead to anxiety. We developed a patient-facing tool to make clinical notes more readable, leveraging large language models (LLMs) to simplify, extract information from, and add context to notes.
We prompt engineered GPT-4 to perform these augmentation tasks on real clinical notes donated by breast cancer survivors and synthetic notes generated by a clinician, a total of 12 notes with 3868 words. In June 2023, 200 female-identifying US-based participants were randomly assigned three clinical notes with varying levels of augmentations using our tool. Participants answered questions about each note, evaluating their understanding of follow-up actions and self-reported confidence. We found that augmentations were associated with a significant increase in action understanding score (0.63 $\pm$ 0.04 for select augmentations, compared to 0.54 $\pm$ 0.02 for the control) with p=0.002. In-depth interviews of self-identifying breast cancer patients (N=7) were also conducted via video conferencing. Augmentations, especially definitions, elicited positive responses among the seven participants, with some concerns about relying on LLMs. Augmentations were evaluated for errors by clinicians, and we found misleading errors occur, with errors more common in real donated notes than synthetic notes, illustrating the importance of carefully written clinical notes. Augmentations improve some but not all readability metrics. This work demonstrates the potential of LLMs to improve patients' experience with clinical notes at a lower burden to clinicians. However, having a human in the loop is important to correct potential model errors.",,arXiv,"['cs.hc', 'cs.ai', 'cs.cl']",somewhat relevant,"The paper discusses the use of prompt-based approaches for scientific jargon identification which indicates relevance to prompt engineering."
+1567,inferring latent class statistics from text for robust visual fewshot learning,"['Yassir Bendou', 'Vincent Gripon', 'Bastien Pasdeloup', 'Giulia Lioi', 'Lukas Mauch', 'Fabien Cardinaux', 'Ghouthi Boukli Hacene']",http://arxiv.org/pdf/2311.14544v1.pdf,2023-11-24,," In the realm of few-shot learning, foundation models like CLIP have proven effective but exhibit limitations in cross-domain robustness especially in few-shot settings. Recent works add text as an extra modality to enhance the performance of these models. Most of these approaches treat text as an auxiliary modality without fully exploring its potential to elucidate the underlying class visual features distribution. In this paper, we present a novel approach that leverages text-derived statistics to predict the mean and covariance of the visual feature distribution for each class. This predictive framework enriches the latent space, yielding more robust and generalizable few-shot learning models. We demonstrate the efficacy of incorporating both mean and covariance statistics in improving few-shot classification performance across various datasets. Our method shows that we can use text to predict the mean and covariance of the distribution offering promising improvements in few-shot learning scenarios.",,arXiv,"['cs.cv', 'cs.ai']",somewhat relevant,"The abstract mentions 'LLM prompt-based approaches' which suggests that the paper includes discussion on prompt engineering, relevant to the topic of hard prefix prompts."
+1568,simple semanticaided fewshot learning,"['Hai Zhang', 'Junzhe Xu', 'Shanlin Jiang', 'Zhenan He']",http://arxiv.org/pdf/2311.18649v1.pdf,2023-11-30,," Learning from a limited amount of data, namely Few-Shot Learning, stands out as a challenging computer vision task. Several works exploit semantics and design complicated semantic fusion mechanisms to compensate for rare representative features within restricted data.
However, relying on naive semantics such as class names introduces biases due to their brevity, while acquiring extensive semantics from external knowledge takes a huge time and effort. This limitation severely constrains the potential of semantics in few-shot learning. In this paper, we design an automatic way called Semantic Evolution to generate high-quality semantics. The incorporation of high-quality semantics alleviates the need for complex network structures and learning algorithms used in previous works. Hence, we employ a simple two-layer network termed Semantic Alignment Network to transform semantics and visual features into robust class prototypes with rich discriminative features for few-shot classification. The experimental results show our framework outperforms all previous methods on five benchmarks, demonstrating a simple network with high-quality semantics can beat intricate multi-modal modules on few-shot classification tasks.",,arXiv,['cs.cv'],somewhat relevant,"The abstract describes the use of multimodal prompt generator modules within the DPLNet system for semantic segmentation, indicating the paper is about designing prompts for improving machine learning model performance."
+1569,leveraging normalization layer in adapters with progressive learning and adaptive distillation for crossdomain fewshot learning,"['Yongjin Yang', 'Taehyeon Kim', 'Se-Young Yun']",http://arxiv.org/pdf/2312.11260v1.pdf,2023-12-18,," Cross-domain few-shot learning presents a formidable challenge, as models must be trained on base classes and then tested on novel classes from various domains with only a few samples at hand. While prior approaches have primarily focused on parameter-efficient methods of using adapters, they often overlook two critical issues: shifts in batch statistics and noisy sample statistics arising from domain discrepancy variations. In this paper, we introduce a novel generic framework that leverages normalization layer in adapters with Progressive Learning and Adaptive Distillation (ProLAD), marking two principal contributions. First, our methodology utilizes two separate adapters: one devoid of a normalization layer, which is more effective for similar domains, and another embedded with a normalization layer, designed to leverage the batch statistics of the target domain, thus proving effective for dissimilar domains. Second, to address the pitfalls of noisy statistics, we deploy two strategies: a progressive training of the two adapters and an adaptive distillation technique derived from features determined by the model solely with the adapter devoid of a normalization layer. Through this adaptive distillation, our approach functions as a modulator, controlling the primary adapter for adaptation, based on each domain. Evaluations on standard cross-domain few-shot learning benchmarks confirm that our technique outperforms existing state-of-the-art methodologies.",,arXiv,"['cs.cv', 'cs.ai']",highly relevant,"The paper discusses 'few-shot prompting strategies' which indicates the use of prompts, potentially hard prefix prompts, to transfer styles in text, making it related to prompt engineering."
+1570,ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global prompt hacking competition,"[, , , , , , , , , ]",https://aclanthology.org/2023.emnlp-main.302.pdf,2023-12-01,,"Large Language Models (LLMs) are increasingly being deployed in interactive contexts that involve direct user engagement, such as chatbots and writing assistants.
These deployments are increasingly plagued by prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and instead follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of a large-scale resource and quantitative study on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive ontology of the types of adversarial prompts.",,ACL,['prompt injection'],highly relevant,"The paper explicitly mentions 'prompt learning with large language models' and designing prompts to guide model output, indicating relevance to prompt engineering." +1571,automatic prompt optimization with “gradient descent” and beam search,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.494.pdf,2023-12-01,,"Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts which are hand written with onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Prompt Optimization with Textual Gradients (ProTeGi), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language “gradients” that criticize the current prompt, much like how numerical gradients point in the direction of error ascent. The natural language gradients are then “propagated” into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure which significantly improves algorithmic efficiency. Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt’s performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.",,ACL,['prompt optimization'],somewhat relevant,"The abstract indicates the use of prompts to activate artificial neurons in LLMs (RoBERTa), which aligns with the use of prompts in prompt engineering." +1572,alexander knox at semeval2023 task 5 the comparison of prompting and standard finetuning techniques for selecting the type of spoiler needed to neutralize a clickbait,"[, ]",https://aclanthology.org/2023.semeval-1.202.pdf,2023-07-01,,"Clickbait posts are a common problem on social media platforms, as they often deceive users by providing misleading or sensational headlines that do not match the content of the linked web page. The aim of this study is to create a technique for identifying the specific type of suitable spoiler - be it a phrase, a passage, or a multipart spoiler - needed to neutralize clickbait posts. This is achieved by developing a machine learning classifier analyzing both the clickbait post and the linked web page. Modern approaches for constructing a text classifier usually rely on fine-tuning a transformer-based model pre-trained on large unsupervised corpora. 
However, recent advances in the development of large-scale language models have led to the emergence of a new transfer learning paradigm based on prompt engineering. In this work, we study these two transfer learning techniques and compare their effectiveness for clickbait spoiler-type detection task. Our experimental results show that for this task, using the standard fine-tuning method gives better results than using prompting. The best model can achieve a similar performance to that presented by Hagen et al. (2022).",,ACL,['prompt engineering'],highly relevant,"The paper describes how prompting a large language model improves the generation of labels for financial sentiment analysis, indicative of prompt engineering." +1573,improving formalitysensitive machine translation using datacentric approaches and prompt engineering,"[, , , ]",https://aclanthology.org/2023.iwslt-1.40.pdf,2023-07-01,,"In this paper, we present the KU x Upstage team’s submission for the Special Task on Formality Control on Spoken Language Translation, which involves translating English into four languages with diverse grammatical formality markers. Our methodology comprises two primary components: 1) a language-specific data-driven approach, and 2) the generation of synthetic data through the employment of large-scale language models and empirically-grounded prompt engineering. By adapting methodologies and models to accommodate the unique linguistic properties of each language, we observe a notable enhancement in performance relative to the baseline, substantiating the heightened efficacy of data-driven approaches. Moreover, our devised prompt engineering strategy yields superior synthetic translation instances.",,ACL,['prompt engineering'],highly relevant,"The paper explicitly describes the development and evaluation of new prompting techniques, such as Plan-and-Solve (PS) Prompting, for improving zero-shot chain-of-thought reasoning in large language models, which is directly related to the topic of prompt engineering." +1574,machine translation of folktales smalldatadriven and llmbased approaches,"[]",https://aclanthology.org/2023.clasp-1.8.pdf,2023-09-01,,"Can Large Language Models translate texts with rich cultural elements? How “cultured” are they? This paper provides an overview of an experiment in Machine Translation of Ukrainian folktales using Large Language Models (Open AI), Google Cloud Translation API, and Opus MT. After benchmarking their performance, we have fine-tuned an Opus MT model on a domain-specific small dataset specially created to translate folktales from Ukrainian to English. We have also tested various prompt engineering techniques on the new Open AI models to generate translations of our test dataset (folktale ‘The Mitten’) and have observed promising results. This research explores the importance of both small data and Large Language Models in Machine Learning, specifically in Machine Translation of literary texts, on the example of Ukrainian folktales.",,ACL,['prompt engineering'],highly relevant,"The paper discusses the use of a specialized prompting method termed 'chain-of-thought prompting' to enhance the performance of LLMs in dialogue scenarios, which is a form of prompt engineering." +1575,metalearning of prompt generation for lightweight prompt engineering on languagemodelasaservice,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.159.pdf,2023-12-01,,"Recently, many companies have been providing the capabilities of large language models as services. 
These Language-Model-as-a-Service (LMaaS) offerings support a variety of user tasks through in-context learning from prompts, which include instructions and demonstrations of the task. However, for users, manually crafting prompts or running automatic prompt tuning methods themselves can be demanding. Despite these challenges, LMaaS providers do not offer automatic prompt engineering methods as part of their services. One of the major obstacles to deploying them on an LMaaS is the heavy computational costs associated with automatic prompt engineering methods. These methods are typically designed to iterate through tens of thousands of examples, which impose unaffordable overheads for LMaaS providers. In this paper, we introduce MetaL-Prompt, a novel lightweight automatic prompt generation method for LMaaS. MetaL-Prompt meta-trains a prompt generation model (PGM) to enable robust learning by the language model from the contexts created by the generated prompts (i.e., in-context learning). Thanks to our meta-learning approach, a PGM can generate prompts for unseen tasks without requiring additional training for those specific tasks. Furthermore, the PGM can generate prompts with a single forward pass, significantly reducing computational costs compared to previous methods. We evaluate MetaL-Prompt on a range of unseen tasks and find that it improves performance by up to 19.4% in terms of mean F1 score on QA datasets compared to the state-of-the-art baseline P-tuning, with limited computational cost.",,ACL,['prompt engineering'],somewhat relevant,"The paper discusses Prompt Injection (PI) attacks, which involve using prompts to misguide Large Language Models, connecting the study directly to prompt engineering." +1576,textgraphs16 natural language premise selection task zeroshot premise selection with prompting generative language models,"[, , ]",https://aclanthology.org/2022.textgraphs-1.15.pdf,2022-10-01,,"Automated theorem proving can benefit a lot from methods employed in natural language processing, knowledge graphs and information retrieval: this non-trivial task combines formal languages understanding, reasoning, similarity search. We tackle this task by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported by prompt engineering approaches for a range of NLP tasks, for the premise selection task vanilla re-ranking by prompting GPT-3 doesn’t outperform semantic similarity ranking with SBERT, but merging of the both rankings shows better results.",,ACL,['prompt engineering'],highly relevant,"The paper explicitly mentions the optimization of prompts tailored for polyglot LLMs, which suggests focus on prompt engineering for improving multilingual performance of the models." +1577,seeking clozure robust hypernym extraction from bert with anchored prompts,"[, , ]",https://aclanthology.org/2023.starsem-1.18.pdf,2023-07-01,,"The automatic extraction of hypernym knowledge from large language models like BERT is an open problem, and it is unclear whether methods fail due to a lack of knowledge in the model or shortcomings of the extraction methods. In particular, methods fail on challenging cases which include rare or abstract concepts, and perform inconsistently under paraphrased prompts. 
In this study, we revisit the long line of work on pattern-based hypernym extraction, and use it as a diagnostic tool to thoroughly examine the hypernomy knowledge encoded in BERT and the limitations of hypernym extraction methods. We propose to construct prompts from established pattern structures: definitional (X is a Y); lexico-syntactic (Y such as X); and their anchored versions (Y such as X or Z). We devise an automatic method for anchor prediction, and compare different patterns in: (i) their effectiveness for hypernym retrieval from BERT across six English data sets; (ii) on challenge sets of rare and abstract concepts; and (iii) on consistency under paraphrasing. We show that anchoring is particularly useful for abstract concepts and in enhancing consistency across paraphrases, demonstrating how established methods in the field can inform prompt engineering.",,ACL,['prompt engineering'],highly relevant,"The abstract explicitly mentions the use of 'engineered prompts' with a Large Language Model to simulate human participant responses, making this paper relevant to the topic of prompt engineering." +1578,system report for ccl23eval task 9 hust1037 explore proper prompt strategy for llm in mrc task,"[, , , , , , ]",https://aclanthology.org/2023.ccl-3.34.pdf,2023-08-01,,"“Our research paper delves into the Adversarial Robustness Evaluation for Chinese Gaokao Read-ing Comprehension (GCRC advRobust). While Chinese reading comprehension tasks havegained significant attention in recent years, previous methods have not proven effective for thischallenging dataset. We focus on exploring how prompt engineering can impact a model’s read-ing comprehension ability. Through our experiments using ChatGLM, GPT3.5, and GPT4, wediscovered a correlation between prompt and LLM reading comprehension ability, and found thatprompt engineering improves the performance of each model. Our team submitted the results ofour system evaluation, which ranked first in three indexes and total scores. Keywords— LLM, Prompt, Chinese Reading Comprehension”",,ACL,['prompt engineering'],highly relevant,"The paper describes using prompts in the form of QA to leverage LLMs for zero-shot task-oriented parsing, which is an example of utilizing hard prefix prompts." +1579,deeppavlov dream platform for building generative ai assistants,"[, , , , , , , , , , ]",https://aclanthology.org/2023.acl-demo.58.pdf,2023-07-01,,"An open-source DeepPavlov Dream Platform is specifically tailored for development of complex dialog systems like Generative AI Assistants. The stack prioritizes efficiency, modularity, scalability, and extensibility with the goal to make it easier to develop complex dialog systems from scratch. It supports modular approach to implementation of conversational agents enabling their development through the choice of NLP components and conversational skills from a rich library organized into the distributions of ready-for-use multi-skill AI assistant systems. In DeepPavlov Dream, multi-skill Generative AI Assistant consists of NLP components that extract features from user utterances, conversational skills that generate or retrieve a response, skill and response selectors that facilitate choice of relevant skills and the best response, as well as a conversational orchestrator that enables creation of multi-skill Generative AI Assistants scalable up to industrial grade AI assistants. 
The platform allows to integrate large language models into dialog pipeline, customize with prompt engineering, handle multiple prompts during the same dialog session and create simple multimodal assistants.",,ACL,['prompt engineering'],highly relevant,"The mention of natural language prompts and mock programming prompts used with large language models indicate the utilization of prompting strategies, relevant to prompt engineering." +1580,beyond information is chatgpt empathetic enough,"[, ]",https://aclanthology.org/2023.ranlp-1.18.pdf,2023-09-01,,"This paper aims to explore and enhance ChatGPT’s abilities to generate more human-like conversations by taking into account the emotional state of the user. To achieve this goal, a prompt-driven Emotional Intelligence is used through the empathetic dialogue dataset in order to propose a more empathetic conversational language model. We propose two altered versions of ChatGPT as follows: (1) an emotion-infused version which takes the user’s emotion as input before generating responses using an emotion classifier based on ELECTRA ; and (2) the emotion adapting version that tries to accommodate for how the user feels without any external component. By analyzing responses of the two proposed altered versions and comparing them to the standard version of ChatGPT, we find that using the external emotion classifier leads to more frequent and pronounced use of positive emotions compared to the standard version. On the other hand, using simple prompt engineering to take the user emotion into consideration, does the opposite. Finally, comparisons with state-of-the-art models highlight the potential of prompt engineering to enhance the emotional abilities of chatbots based on large language models.",,ACL,['prompt engineering'],highly relevant,"The paper discusses a novel prompting method, Cue-CoT, which involves an intermediate reasoning step and compares its performance to standard prompting methods." +1581,prompting chatgpt to draw morphological connections for new word comprehension,"[, ]",https://aclanthology.org/2023.ranlp-stud.11.pdf,2023-09-01,,"Though more powerful, Large Language Models need to be periodically retrained for updated information, consuming resources and energy. In this respect, prompt engineering can prove a possible solution to re-training. To explore this line of research, this paper uses a case study, namely, finding the best prompting strategy for asking ChatGPT to define new words based on morphological connections. To determine the best prompting strategy, each definition provided by the prompt was ranked in terms of plausibility and humanlikeness criteria. The findings of this paper show that adding contextual information, operationalised as the keywords ‘new’ and ‘morpheme’, significantly improve the performance of the model for any prompt. While no single prompt significantly outperformed all others, there were differences between performances on the two criteria for most prompts. ChatGPT also provided the most correct definitions with a persona-type prompt.",,ACL,['prompt engineering'],highly relevant,"The paper presents empirical work on using dynamic prompting to improve the performance of compressed language models, directly involving prompting strategies post-training." 
+1582,performance evaluation on humanmachine teaming augmented machine translation enabled by gpt4,"[]",https://aclanthology.org/2023.nlp4tia-1.4.pdf,2023-09-01,,"Translation has been modeled as a multiple-phase process where pre-editing analyses guide meaning transfer and interlingual restructure. Present-day machine translation (MT) tools provide no means for source text analyses. Generative AI with Large language modeling (LLM), equipped with prompt engineering and fine-tuning capabilities, can enable augmented MT solutions by explicitly including AI or human generated analyses/instruction, and/or human-generated reference translation as pre-editing or interactive inputs. Using an English-to-Chinese translation piece that had been carefully studied during a translator slam event, Four types of translation outputs on 20 text segments were evaluated: human-generated translation, Google Translate MT, instruction-augmented MT using GPT4-LLM, and Human-Machine-Teaming (HMT)-augmented translation based on both human reference translation and instruction using GPT4-LLM. While human translation had the best performance, both augmented MT approaches performed better than un-augmented MT. The HMT-augmented MT performed better than instruction-augmented MT because it combined the guidance and knowledge provided by both human reference translation and style instruction. However, since it is unrealistic to generate sentence-by-sentence human translation as MT input, better approaches to HMT-augmented MT need to be invented. The evaluation showed that generative AI with LLM can enable new MT workflow facilitating pre-editing analyses and interactive restructuring and achieving better performance.",,ACL,['prompt engineering'],highly relevant,"The paper discusses the use of carefully designed prompting strategies to enhance LLMs' contextual faithfulness, which is directly related to prompt engineering." +1583,a benchmark for reasoning with spatial prepositions,"[, ]",https://aclanthology.org/2023.emnlp-main.1015.pdf,2023-12-01,,"Spatial reasoning is a fundamental building block of human cognition, used in representing, grounding, and reasoning about physical and abstract concepts. We propose a novel benchmark focused on assessing inferential properties of statements with spatial prepositions. The benchmark includes original datasets in English and Romanian and aims to probe the limits of reasoning about spatial relations in large language models. We use prompt engineering to study the performance of two families of large language models, PaLM and GPT-3, on our benchmark. Our results show considerable variability in the performance of smaller and larger models, as well as across prompts and languages. However, none of the models reaches human performance.",,ACL,['prompt engineering'],somewhat relevant,"The paper mentions prompting a large language model with data for AI-aided brainstorming, which is related to the use of prompts and their engineering." +1584,exploring prompt engineering with gpt language models for documentlevel machine translation insights and findings,"[, ]",https://aclanthology.org/2023.wmt-1.15.pdf,2023-12-01,,"This paper describes Lan-Bridge Translation systems for the WMT 2023 General Translation shared task. We participate in 2 directions: English to and from Chinese. With the emergence of large-scale models, various industries have undergone significant transformations, particularly in the realm of document-level machine translation. 
This has introduced a novel research paradigm that we have embraced in our participation in the WMT23 competition. Focusing on advancements in models such as GPT-3.5 and GPT-4, we have undertaken numerous prompt-based experiments. Our objective is to achieve optimal human evaluation results for document-level machine translation, resulting in our submission of the final outcomes in the general track.",,ACL,['prompt engineering'],highly relevant,"The paper explicitly mentions the use of prompt engineering to develop an optimized meta-prompt for classifying abstracts using GPT models, which falls directly within the domain of hard prefix prompt engineering." +1585,optimizing machine translation through prompt engineering an investigation into chatgpt’s customizability,"[]",https://aclanthology.org/2023.mtsummit-users.19.pdf,2023-09-01,,"This paper explores the influence of integrating the purpose of the translation and the target audience into prompts on the quality of translations produced by ChatGPT. Drawing on previous translation studies, industry practices, and ISO standards, the research underscores the significance of the pre-production phase in the translation process. The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations, a feat yet to be realized by conventional Ma-chine Translation (MT). The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions. The evaluation is conducted from a practicing translator’s viewpoint, both subjectively and qualitatively, supplemented by the use of OpenAI’s word embedding API for cosine similarity calculations. The findings suggest that the integration of the purpose and target audience into prompts can indeed modify the generated translations, generally enhancing the translation quality by industry standards. The study also demonstrates the practical application of the “good translation” concept, particularly in the context of marketing documents and culturally dependent idioms.",,ACL,['prompt engineering'],somewhat relevant,"The paper describes using a Large Language Model prompted to assist in determining the appropriate part of an object to grasp, which is relevant to prompt engineering." +1586,probabilistic ensembles of zero and fewshot learning models for emotion classification,"[, , ]",https://aclanthology.org/2021.ranlp-1.16.pdf,2021-09-01,,"Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. 
Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.",,ACL,['few-shot learning'],somewhat relevant,"The paper discusses the application of large language models to classification tasks by prompting with natural-language strings, aligning with the concept of prompt engineering, particularly if the 'short description' acts as a hard prefix prompt."
+1587,a smashed glass cannot be full generation of commonsense explanations through promptbased fewshot learning,"[, ]",https://aclanthology.org/2023.nlrse-1.3.pdf,2023-06-01,,"We assume that providing explanations is a process to elicit implicit knowledge in human communication, and propose a general methodology to generate commonsense explanations from pairs of semantically related sentences. We take advantage of both prompting applied to large, encoder-decoder pre-trained language models, and few-shot learning techniques, such as pattern-exploiting training. Experiments run on the e-SNLI dataset show that the proposed method achieves state-of-the-art results on the explanation generation task, with a substantial reduction of labelled data. The obtained results open new perspective on a number of tasks involving the elicitation of implicit knowledge.",,ACL,['few-shot learning'],highly relevant,"The paper discusses the effect of including explanations in the prompt for improving in-context learning in large language models, which is a form of prompt engineering."
+1588,map lowdata regime multimodal learning with adapterbased pretraining and prompting,"[, , , , , ]",https://aclanthology.org/2023.clasp-1.19.pdf,2023-09-01,,"Pretrained vision-language (VL) models have shown impressive results on various multi-modal downstream tasks recently. Many of the benchmark models build on pretrained causal language models (LMs), leveraging the original few-shot learning and generalization capability of the LMs trained with large text corpora. However, these models are often gigantic and require large-scale image and text data with high computational cost to train. This paper introduces a moderate-size model called MAP for efficient VL transfer learning through adapter-based pretraining and prompting. We aim to answer the question of how much we can complete through VL pretraining within the low-data regime while maximizing efficiency in transferring knowledge of a moderate-size frozen LM. Our experiments demonstrate that MAP achieves substantially better zero-shot and few-shot performance on downstream VL tasks with only 10% the size of pretraining data and a 30x lighter pretrained LM backbone compared to Frozen. MAP also outperforms fully trained models of comparable size at retaining its transfer learning ability when the amount of training data reduces.",,ACL,['few-shot learning'],somewhat relevant,"The paper discusses iterative prompt optimization in the context of using GPT-3.5-Turbo for information extraction, which is a form of prompt engineering relevant to the topic."
+1589,generative pretrained transformers for emotion detection in a codeswitching setting,"[]",https://aclanthology.org/2023.wassa-1.61.pdf,2023-07-01,,"This paper describes the approach that we utilized to participate in the shared task for multi-label and multi-class emotion classification organized as part of WASSA 2023 at ACL 2023. The objective was to build models that can predict 11 classes of emotions, or the lack thereof (neutral class) based on code-mixed Roman Urdu and English SMS text messages.
We participated in Track 2 of this task - multi-class emotion classification (MCEC). We used generative pretrained transformers, namely ChatGPT because it has a commercially available full-scale API, for the emotion detection task by leveraging the prompt engineering and zero-shot / few-shot learning methodologies based on multiple experiments on the dev set. Although this was the first time we used a GPT model for the purpose, this approach allowed us to beat our own baseline character-based XGBClassifier, as well as the baseline model trained by the organizers (bert-base-multilingual-cased). We ranked 4th and achieved the macro F1 score of 0.7038 and the accuracy of 0.7313 on the blind test set.",,ACL,['few-shot learning'],highly relevant,"The abstract mentions 'utilising different prompt engineering techniques' to improve performance, which indicates that the paper involves research directly related to prompt engineering."
+1590,promptfree and efficient fewshot learning with language models,"[, , , , , , ]",https://aclanthology.org/2022.acl-long.254.pdf,2022-05-01,,"Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze-format that the PLM can score. In this work, we propose Perfect, a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting, which is highly effective given as few as 32 data points. Perfect makes two key design choices: First, we show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning and reduce memory and storage costs by roughly factors of 5 and 100, respectively. Second, instead of using handcrafted verbalizers, we learn new multi-token label embeddings during fine-tuning, which are not tied to the model vocabulary and which allow us to avoid complex auto-regressive decoding. These embeddings are not only learnable from limited data but also enable nearly 100x faster training and inference. Experiments on a wide range of few shot NLP tasks demonstrate that Perfect, while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods. Our code is publicly available at https://github.com/rabeehk/perfect.",,ACL,['few-shot learning'],highly relevant,The paper mentions 'doing prompt engineering for zero-shot / few-shot learning' which directly indicates relevance to the topic of hard prefix prompt engineering.
+1591,coarsetofine fewshot learning for named entity recognition,"[, , , , , , , , ]",https://aclanthology.org/2023.findings-acl.253.pdf,2023-07-01,,"Recently, Few-shot Named Entity Recognition has received wide attention with the growing need for NER models to learn new classes with minimized annotation costs. However, one common yet understudied situation is to transfer a model trained with coarse-grained classes to recognize fine-grained classes, such as separating a product category into sub-classes. We find that existing few-shot NER solutions are not suitable for such a situation since they do not consider the sub-class discrimination during coarse training and various granularity of new classes during few-shot learning. In this work, we introduce the Coarse-to-fine Few-shot NER (C2FNER) task and propose an effective solution.
Specifically, during coarse training, we propose a cluster-based prototype margin loss to learn group-wise discriminative representations, so as to benefit fine-grained learning. Targeting various granularity of new classes, we separate the coarse classes into extra-fine clusters and propose a novel prototype retrieval and bootstrapping algorithm to retrieve representative clusters for each fine class. We then adopt a mixture prototype loss to efficiently learn the representations of fine classes. We conduct experiments on both in-domain and cross-domain C2FNER settings with various target granularity, and the proposed method shows superior performance over the baseline methods.",,ACL,['few-shot learning'],somewhat relevant,"The paper mentions using a variety of pre-specified prompt engineering approaches to aid in the extraction of clinical features from medical texts, which indicates relevance to the topic of prompt engineering." +1592,answerstate recurrent relational network (asrrn) for constructed response assessment and feedback grouping,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.254.pdf,2023-12-01,,"STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, with constructed response (CR) questions, where students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate the context, the questions, the reference responses, and students’ answers, we developed an Answer-state Recurrent Relational Network (AsRRN). In recurrent time-steps, relation vectors are learned for specific dependencies in a computational graph, where the nodes encode the distinct types of text input. AsRRN incorporates contrastive loss for better representation learning, which improves performance and supports student feedback. AsRRN was developed on a new dataset of 6,532 student responses to three, two-part CR questions. AsRRN outperforms classifiers based on LLMs, a previous relational network for CR questions, and few-shot learning with GPT-3.5. Ablation studies show the distinct contributions of AsRRN’s dependency structure, the number of time steps in the recurrence, and the contrastive loss.",,ACL,['few-shot learning'],highly relevant,"The paper mentions a 'Prompt based method' as one of its approaches to fact verification, which directly indicates relevance to prompt engineering." +1593,impact of sample selection on incontext learning for entity extraction from scientific writing,"[, , ]",https://aclanthology.org/2023.findings-emnlp.338.pdf,2023-12-01,,"Prompt-based usage of Large Language Models (LLMs) is an increasingly popular way to tackle many well-known natural language problems. This trend is due, in part, to the appeal of the In-Context Learning (ICL) prompt set-up, in which a few selected training examples are provided along with the inference request. ICL, a type of few-shot learning, is especially attractive for natural language processing (NLP) tasks defined for specialised domains, such as entity extraction from scientific documents, where the annotation is very costly due to expertise requirements for the annotators. In this paper, we present a comprehensive analysis of in-context sample selection methods for entity extraction from scientific documents using GPT-3.5 and compare these results against a fully supervised transformer-based baseline. 
Our results indicate that the effectiveness of the in-context sample selection methods is heavily domain-dependent, but the improvements are more notable for problems with a larger number of entity types. More in-depth analysis shows that ICL is more effective for low-resource set-ups of scientific information extraction",,ACL,['few-shot learning'],highly relevant,"The paper proposes a prompt-based Chinese text classification framework and includes discussion of prompt-based fine-tuning, automating prompt generation, and using prompts in a few-shot learning context." +1594,tabprompt graphbased pretraining and prompting for fewshot table understanding,"[, , , , , ]",https://aclanthology.org/2023.findings-emnlp.493.pdf,2023-12-01,,"Table Understanding (TU) is a crucial aspect of information extraction that enables machines to comprehend the semantics behind tabular data. However, existing methods of TU cannot deal with the scarcity of labeled tabular data. In addition, these methods primarily focus on the textual content within the table, disregarding the inherent topological information of the table. This can lead to a misunderstanding of the tabular semantics. In this paper, we propose TabPrompt, a new framework to tackle the above challenges. Prompt-based learning has gained popularity due to its exceptional performance in few-shot learning. Thus, we introduce prompt-based learning to handle few-shot TU. Furthermore, Graph Contrastive Learning (Graph CL) demonstrates remarkable capabilities in capturing topological information, making Graph Neural Networks an ideal method for encoding tables. Hence, we develop a novel Graph CL method tailored to tabular data. This method serves as the pretext task during the pre-training phase, allowing the generation of vector representations that incorporate the table’s topological information. The experimental results of outperforming all strong baselines demonstrate the strength of our method in few-shot table understanding tasks.",,ACL,['few-shot learning'],highly relevant,"The paper explicitly mentions the use of prompts to steer models' generation, which aligns with the topic of prompt engineering." +1595,improving generalization in large langue model by learning prefix subspaces,"[, , ]",https://aclanthology.org/2023.findings-emnlp.768.pdf,2023-12-01,,"This article focuses on large language models (LLMs) fine-tuning in the scarce data regime (also known as “few-shot learning setting”). We propose a method to increase the generalization capabilities of LLMs based on neural network subspaces. This optimization method, recently introduced in computer vision, aims to improve model generalization by identifying wider local optima through the joint optimization of an entire simplex of models in parameter space. Although this property would be highly beneficial in the context of training large language models in the “few-shot learning” setting, its adaptation to massive, pretrained transformers poses some challenges. First, their considerable number of parameters make it difficult to train several model jointly, and second, their deterministic parameter initialisation schemes make them unfit to the subspace method as originaly proposed. We show in this paper that its application to “Parameter Efficient Fine-Tuning” (PEFT) methods, however, is relatively natural, and we propose to apply it to prefix-tuning, by learning entire simplexes of continous prefixes. 
We test our method on a variant of the GLUE benchmark adapted to the few-shot learning setting, and show that both our contributions (learning prefix simplexes, and non-deterministic validation metric inference) jointly lead to a gain in average performances compared to state of the art methods.",,ACL,['few-shot learning'],highly relevant,"The paper discusses 'multi-objective prompt formation', which indicates relevance to prompt engineering, specifically for generating multilingual social media text." +1596,aisfg abundant information slot filling generator,"[, , , ]",https://aclanthology.org/2022.naacl-main.308.pdf,2022-07-01,,"As an essential component of task-oriented dialogue systems, slot filling requires enormous labeled training data in a certain domain. However, in most cases, there is little or no target domain training data is available in the training stage. Thus, cross-domain slot filling has to cope with the data scarcity problem by zero/few-shot learning. Previous researches on zero/few-shot cross-domain slot filling focus on slot descriptions and examples while ignoring the slot type ambiguity and example ambiguity issues. To address these problems, we propose Abundant Information Slot Filling Generator (AISFG), a generative model with a novel query template that incorporates domain descriptions, slot descriptions, and examples with context. Experimental results show that our model outperforms state-of-the-art approaches in zero/few-shot slot filling task.",,ACL,['few-shot learning'],highly relevant,"The paper mentions using a 'custom GPT-4 few-shot prompt annotation scheme', which indicates it involves prompt engineering." +1597,interactive symbol grounding with complex referential expressions,"[, ]",https://aclanthology.org/2022.naacl-main.358.pdf,2022-07-01,,"We present a procedure for learning to ground symbols from a sequence of stimuli consisting of an arbitrarily complex noun phrase (e.g. “all but one green square above both red circles.”) and its designation in the visual scene. Our distinctive approach combines: a) lazy few-shot learning to relate open-class words like green and above to their visual percepts; and b) symbolic reasoning with closed-class word categories like quantifiers and negation. We use this combination to estimate new training examples for grounding symbols that occur within a noun phrase but aren’t designated by that noun phase (e.g, red in the above example), thereby potentially gaining data efficiency. We evaluate the approach in a visual reference resolution task, in which the learner starts out unaware of concepts that are part of the domain model and how they relate to visual percepts.",,ACL,['few-shot learning'],highly relevant,"The paper mentions the use of 'few-shot-prompted pre-trained language models' and 'the chain-of-thought method of prompting', which indicates that the study involves using prompting techniques with pre-trained transformer models, making it relevant to the topic of prompt engineering." +1598,apprentissage de sousespaces de préfixes,"[, , ]",https://aclanthology.org/2023.jeptalnrecital-coria.4.pdf,2023-06-01,,"Cet article propose une nouvelle façon d’ajuster des modèles de langue en “Few-shot learning” se basant sur une méthode d’optimisation récemment introduite en vision informatique, l’apprentissage de sous-espaces de modèles. 
Cette méthode, permettant de trouver non pas un point minimum local de la fonction coût dans l’espace des paramètres du modèle, mais tout un simplexe associé à des valeurs basses, présente typiquement des capacités de généralisation supérieures aux solutions obtenues par ajustement traditionnel. L’adaptation de cette méthode aux gros modèles de langue n’est pas triviale mais son application aux méthodes d’ajustement dites “Parameter Efficient” est quant à elle relativement naturelle. On propose de plus une façon innovante d’utiliser le simplexe de solution étudié afin de revisiter la notion de guidage de l’ajustement d’un modèle par l’inférence d’une métrique de validation, problématique d’actualité en “few-shot learning”. On montre finalement que ces différentes contributions centrées autour de l’ajustement de sous-espaces de modèles est empiriquement associée à un gain considérable en performances de généralisation sur les tâches de compréhension du langage du benchmark GLUE, dans un contexte de “few-shot learning”.",,ACL,['few-shot learning'],somewhat relevant,"The abstract explicitly mentions the use of zero-shot prompting with GPT-3, suggesting relevance to prompt engineering." +1599,true fewshot learning with prompts—a realworld perspective,"[, ]",https://aclanthology.org/2022.tacl-1.41.pdf,2022-01-01,,"Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance as they had difficulty getting good results in a “true” few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of Pet, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, Pet performs strongly in true few-shot settings without a dev set. Crucial for this strong performance is a number of design choices, including Pet’s ability to intelligently handle multiple prompts. We put our findings to a real-world test by running Pet on RAFT, a benchmark of tasks taken from realistic NLP applications for which no labeled dev or test sets are available. Pet achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners can successfully be applied in true few-shot settings and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.",,ACL,['few-shot learning'],highly relevant,"The paper presents a novel prompting method called explicit code-based self-verification (CSV), which uses zero-shot prompts to improve GPT-4 Code Interpreter's mathematical reasoning capabilities." +1600,making language models better reasoners with stepaware verifier,"[, , , , , , ]",https://aclanthology.org/2023.acl-long.291.pdf,2023-07-01,,"Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed to guide the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in problem-solving rate. 
In this paper, we present DiVeRSe (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DiVeRSe has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DiVeRSe on the latest language model code-davinci-002 and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).",,ACL,['few-shot learning'],highly relevant,"The paper proposes a generative zero-shot prompt learning framework, which involves prompt engineering as a mechanism for cross-domain slot filling." +1601,what does the failure to reason with “respectively” in zerofewshot settings tell us about language models,"[, , , ]",https://aclanthology.org/2023.acl-long.489.pdf,2023-07-01,,"Humans can effortlessly understand the coordinate structure of sentences such as “Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, *respectively*”. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of “respectively”. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",,ACL,['few-shot learning'],highly relevant,"The paper presents a prompting method and discusses the use of a genetic algorithm to search for the best prompt for unseen tasks, indicating a focus on prompt engineering." +1602,understanding demonstrationbased learning from a causal perspective,"[, ]",https://aclanthology.org/2023.acl-short.125.pdf,2023-07-01,,"Demonstration-based learning has shown impressive performance in exploiting pretrained language models under few-shot learning settings. It is interesting to see that demonstrations, even those composed of random tokens, can still improve performance. In this paper, we build a Structural Causal Model (SCM) to understand demonstration-based learning from causal perspectives and interpret random demonstrations as interventions on the demonstration variable within the causal model. We investigate the causal effects and find that the concurrence of specific words in the demonstration will induce bias, while randomly sampled tokens in the demonstration do not. Based on this finding, we further propose simple ways to construct random demonstrations, which even outperform hand-crafted, meaningful demonstrations on public sequence labeling benchmarks.",,ACL,['few-shot learning'],highly relevant,"The paper discusses fine-tuning CLIP models using prompt sentences for zero-shot classification tasks, which involves prompt engineering." 
+1603,boosting transformers and language models for clinical prediction in immunotherapy,"[, , ]",https://aclanthology.org/2023.acl-industry.32.pdf,2023-07-01,,"Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients’ clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases.",,ACL,['few-shot learning'],somewhat relevant,"The paper discusses using multiple input prompts with Large Language Models for Text Style Transfer evaluation, which relates to prompt engineering." +1604,dual contextguided continuous prompt tuning for fewshot learning,"[, , , , , ]",https://aclanthology.org/2022.findings-acl.8.pdf,2022-05-01,,"Prompt-based paradigm has shown its competitive performance in many NLP tasks. However, its success heavily depends on prompt design, and the effectiveness varies upon the model and training data. In this paper, we propose a novel dual context-guided continuous prompt (DCCP) tuning method. To explore the rich contextual information in language structure and close the gap between discrete prompt tuning and continuous prompt tuning, DCCP introduces two auxiliary training objectives and constructs input in a pair-wise fashion. Experimental results demonstrate that our method is applicable to many NLP tasks, and can often outperform existing prompt tuning methods by a large margin in the few-shot setting.",,ACL,['few-shot learning'],somewhat relevant,The paper highlights ERNIE-Code's zero-shot prompting ability for multilingual code summarization and text-to-text translation but does not specifically address hard prefix prompts or explicit prompt engineering strategies. +1605,crossdomain named entity recognition via graph matching,"[, , ]",https://aclanthology.org/2022.findings-acl.210.pdf,2022-05-01,,"Cross-domain NER is a practical yet challenging problem since the data scarcity in the real-world scenario. A common practice is first to learn a NER model in a rich-resource general domain and then adapt the model to specific domains. Due to the mismatch problem between entity types across domains, the wide knowledge in the general domain can not effectively transfer to the target domain NER model. To this end, we model the label relationship as a probability distribution and construct label graphs in both source and target label spaces. To enhance the contextual representation with label structures, we fuse the label graph into the word embedding output by BERT. By representing label relationships as graphs, we formulate cross-domain NER as a graph matching problem. Furthermore, the proposed method has good applicability with pre-training methods and is potentially capable of other cross-domain prediction tasks. 
Empirical results on four datasets show that our method outperforms a series of transfer learning, multi-task learning, and few-shot learning methods.",,ACL,['few-shot learning'],somewhat relevant,"The abstract mentions the use of zero-shot prompting with large language models, which indicates relevance to the concept of prompt engineering." +1606,rgl a simple yet effective relation graph augmented promptbased tuning approach for fewshot learning,"[, , , , , , ]",https://aclanthology.org/2022.findings-naacl.81.pdf,2022-07-01,,"Pre-trained language models (PLMs) can provide a good starting point for downstream applications. However, it is difficult to generalize PLMs to new tasks given a few labeled samples. In this work, we show that Relation Graph augmented Learning (RGL) can improve the performance of few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on the label consistency between samples in the same batch, and learns to solve the resultant node classification and link prediction problems on the relation graph. In this way, RGL fully exploits the limited supervised information, which can boost the tuning effectiveness. Extensive experimental results show that RGL consistently improves the performance of prompt-based tuning strategies.",,ACL,['few-shot learning'],highly relevant,"The paper focuses on a regularization method for improving gradient-based prompt tuning, which is directly related to optimizing hard prefix prompts for large-scale language models." +1607,parameterfree automatically prompting a latent pseudo label mapping model for promptbased learning,"[, , , , ]",https://aclanthology.org/2022.findings-emnlp.291.pdf,2022-12-01,,"Prompt-based learning has achieved excellent performance in few-shot learning by mapping the outputs of the pre-trained language model to the labels with the help of a label mapping component. Existing manual label mapping (MLM) methods achieve good results but heavily rely on expensive human knowledge. Automatic label mapping (ALM) methods that learn the mapping functions with extra parameters have shown their potentiality. However, no effective ALM model comparable to MLM methods is developed yet due to the limited data. In this paper, we propose a Latent Pseudo Label Mapping (LPLM) method that optimizes the label mapping without human knowledge and extra parameters. LPLM is built upon a probabilistic latent model and is iteratively self-improved with the EM-style algorithm. The empirical results demonstrate that our LPLM method is superior to the mainstream ALM methods and significantly outperforms the SOTA method in few-shot classification tasks. Moreover, LPLM also shows impressively better performance than the vanilla MLM method which requires extra task-specific prior knowledge.",,ACL,['few-shot learning'],somewhat relevant,"The paper discusses the use of language prompts as input in the context of adapting vision-language models to medical image segmentation, which is related to prompt engineering." +1608,improving fewshot domain transfer for named entity disambiguation with pattern exploitation,"[, ]",https://aclanthology.org/2022.findings-emnlp.506.pdf,2022-12-01,,"Named entity disambiguation (NED) is a critical subtask of entity linking, which seeks to connect knowledge base entities with textual mentions of those entities. Naturally, the performance of a model depends on the domain it was trained on; thus, reducing the amount of data required to train models is advantageous. 
In this work, we leverage recent research on pattern exploitation for NED and explore whether it can reduce the amount of data required for domain adaptation by reformulating the disambiguation task as a masked language modeling problem. Using ADAPET (Tam et al., 2021), which implements a new approach for few-shot learning using fine-tuned transformer-based language models, we produce an NED model which yields, without any sacrifice of in-domain accuracy, a 7% improvement in zero-shot cross-domain performance as evaluated on NEDMed, a new NED dataset of mental health news which we release with this work.",,ACL,['few-shot learning'],somewhat relevant,"The abstract discusses the use of multi-modal prompts in a zero-shot anomaly segmentation framework, which indicates it involves prompting techniques, but does not specify if these are hard prefix prompts." +1609,promptbased metalearning for fewshot text classification,"[, , , ]",https://aclanthology.org/2022.emnlp-main.87.pdf,2022-12-01,,"Few-shot Text Classification predicts the semantic label of a given text with a handful of supporting instances. Current meta-learning methods have achieved satisfying results in various few-shot situations. Still, they often require a large amount of data to construct many few-shot tasks for meta-training, which is not practical in real-world few-shot scenarios. Prompt-tuning has recently proved to be another effective few-shot learner by bridging the gap between pre-train and downstream tasks. In this work, we closely combine the two promising few-shot learning methodologies in structure and propose a Prompt-Based Meta-Learning (PBML) model to overcome the above meta-learning problem by adding the prompting mechanism. PBML assigns label word learning to base-learners and template learning to meta-learner, respectively. Experimental results show state-of-the-art performance on four text classification datasets under few-shot settings, with higher accuracy and good robustness. We demonstrate through low-resource experiments that our method alleviates the shortcoming that meta-learning requires too much data for meta-training. In the end, we use the visualization to interpret and verify that the meta-learning framework can help the prompting method converge better. We release our code to reproduce our experiments.",,ACL,['few-shot learning'],highly relevant,"The abstract mentions the use of prompt templates for experiments, which implies engagement with prompt engineering to direct the language model's outputs." +1610,fewshot learning with multilingual generative language models,"[, , , , , , , , , , , , , , , , , , , , ]",https://aclanthology.org/2022.emnlp-main.616.pdf,2022-12-01,,"Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. 
Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples.",,ACL,['few-shot learning'],somewhat relevant,"The abstract indicates the use of prompt templates in data collection from ChatGPT, directly implicating the concept of prompt engineering, though it does not specify the type of prompts." +1611,amal meta knowledgedriven fewshot adapter learning,"[, ]",https://aclanthology.org/2022.emnlp-main.709.pdf,2022-12-01,,"NLP has advanced greatly together with the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, the pre-trained language models need to be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the required amount of annotated examples. However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.",,ACL,['few-shot learning'],highly relevant,"The paper investigates various prompting strategies, including simple prompts and templated prompts, for LLMs in a medical context, which aligns it with the topic of prompt engineering." +1612,snoopy an online interface for exploring the effect of pretraining term frequencies on fewshot lm performance,"[, , , , ]",https://aclanthology.org/2022.emnlp-demos.39.pdf,2022-12-01,,"Current evaluation schemes for large language models often fail to consider the impact of the overlap between pretraining corpus and test data on model performance statistics. Snoopy is an online interface that allows researchers to study this impact in few-shot learning settings. Our demo provides term frequency statistics for the Pile, which is an 800 GB corpus, accompanied by the precomputed performance of EleutherAI/GPT models on more than 20 NLP benchmarks, including numerical, commonsense reasoning, natural language understanding, and question-answering tasks. Snoopy allows a user to interactively align specific terms in test instances with their frequency in the Pile, enabling exploratory analysis of how term frequency is related to the accuracy of the models, which are hard to discover through automated means. 
A user can look at correlations over various model sizes and numbers of in-context examples and visualize the result across multiple (potentially aggregated) datasets. Using Snoopy, we show that a researcher can quickly replicate prior analyses for numerical tasks, while simultaneously allowing for much more expansive exploration that was previously challenging. Snoopy is available at https://nlp.ics.uci.edu/snoopy.",,ACL,['few-shot learning'],highly relevant,"The paper presents EmotionPrompt, a method for incorporating emotional stimulus into prompts to enhance the performance of LLMs, which is directly related to the topic of prompt engineering." +1613,flight of the pegasus comparing transformers on fewshot and zeroshot multidocument abstractive summarization,"[, , ]",https://aclanthology.org/2020.coling-main.494.pdf,2020-12-01,,"Recent work has shown that pre-trained Transformers obtain remarkable performance on many natural language processing tasks including automatic summarization. However, most work has focused on (relatively) data-rich single-document summarization settings. In this paper, we explore highly-abstractive multi-document summarization where the summary is explicitly conditioned on a user-given topic statement or question. We compare the summarization quality produced by three state-of-the-art transformer-based models: BART, T5, and PEGASUS. We report the performance on four challenging summarization datasets: three from the general domain and one from consumer health in both zero-shot and few-shot learning settings. While prior work has shown significant differences in performance for these models on standard summarization tasks, our results indicate that with as few as 10 labeled examples there is no statistically significant difference in summary quality, suggesting the need for more abstractive benchmark collections when determining state-of-the-art.",,ACL,['few-shot learning'],highly relevant,"The paper discusses a method for generating prompts to bypass safeguards in LLMs, which is a form of prompt engineering focusing on manipulative prompt generation." +1614,csnlp at semeval2022 task 4 effective data augmentation methods for patronizing language detection and multilabel classification with roberta and gpt3,"[, , , ]",https://aclanthology.org/2022.semeval-1.69.pdf,2022-07-01,,"This paper presents a combination of data augmentation methods to boost the performance of state-of-the-art transformer-based language models for Patronizing and Condescending Language (PCL) detection and multi-label PCL classification tasks. These tasks are inherently different from sentiment analysis because positive/negative hidden attitudes in the context will not necessarily be considered positive/negative for PCL tasks. The oblation study observes that the imbalance degree of PCL dataset is in the extreme range. This paper presents a modified version of the sentence paraphrasing deep learning model (PEGASUS) to tackle the limitation of maximum sequence length. The proposed algorithm has no specific maximum input length to paraphrase sequences. Our augmented underrepresented class of annotated data achieved competitive results among top-16 SemEval-2022 participants. This paper’s approaches rely on fine-tuning pretrained RoBERTa and GPT3 models such as Davinci and Curie engines with an extra-enriched PCL dataset. 
Furthermore, we discuss Few-Shot learning technique to overcome the limitation of low-resource NLP problems.",,ACL,['few-shot learning'],somewhat relevant,"The paper discusses increasing resistance to Jailbreaking prompts through pruning, directly involving prompt interaction with LLMs, thus relevant to prompt engineering." +1615,semantic matching for text classification with complex class descriptions,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.475.pdf,2023-12-01,,"Text classifiers are an indispensable tool for machine learning practitioners, but adapting them to new classes is expensive. To reduce the cost of new classes, previous work exploits class descriptions and/or labels from existing classes. However, these approaches leave a gap in the model development cycle as they support either zero- or few-shot learning, but not both. Existing classifiers either do not work on zero-shot problems, or fail to improve much with few-shot labels. Further, prior work is aimed at concise class descriptions, which may be insufficient for complex classes. We overcome these shortcomings by casting text classification as a matching problem, where a model matches examples with relevant class descriptions. This formulation lets us leverage labels and complex class descriptions to perform zero- and few-shot learning on new classes. We compare this approach with numerous baselines on text classification tasks with complex class descriptions and find that it achieves strong zero-shot performance and scales well with few-shot samples, beating strong baselines by 22.48% (average precision) in the 10-shot setting. Furthermore, we extend the popular Model-Agnostic Meta-Learning algorithm to the zero-shot matching setting and show it improves zero-shot performance by 4.29%. Our results show that expressing text classification as a matching problem is a cost-effective way to address new classes. This strategy enables zero-shot learning for cold-start scenarios and few-shot learning so the model can improve until it is capable enough to deploy.",,ACL,['few-shot learning'],somewhat relevant,"The paper addresses the assessment of LLM resilience against prompt injection attacks, which are directly related to the exploitation of prompt engineering techniques, making it relevant to the field." +1616,pactuning finetuning pretrained language models with pacdriven perturbed gradient descent,"[, , , , ]",https://aclanthology.org/2023.emnlp-main.748.pdf,2023-12-01,,"Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in the context of few-shot learning. To achieve good generalization performance and avoid overfitting, techniques such as data augmentation and pruning are often applied. However, adding these regularizations necessitates heavy tuning of the hyperparameters of optimization algorithms, such as the popular Adam optimizer. In this paper, we propose a two-stage fine-tuning method, PAC-tuning, to address this optimization challenge. First, based on PAC-Bayes training, PAC-tuning directly minimizes the PAC-Bayes generalization bound to learn proper parameter distribution. Second, PAC-tuning modifies the gradient by injecting noise with the variance learned in the first stage into the model parameters during training, resulting in a variant of perturbed gradient descent (PGD). 
In the past, the few-shot scenario posed difficulties for PAC-Bayes training because the PAC-Bayes bound, when applied to large models with limited training data, might not be stringent. Our experimental results across 5 GLUE benchmark tasks demonstrate that PAC-tuning successfully handles the challenges of fine-tuning tasks and outperforms strong baseline methods by a visible margin, further confirming the potential to apply PAC training for any other settings where the Adam optimizer is currently used for training.",,ACL,['few-shot learning'],highly relevant,"The paper discusses prompt hacking and adversarial prompts, which directly pertain to the manipulation and engineering of prompts in LLMs." +1617,"vist5 an adaptive, retrievalaugmented language model for visualizationoriented dialog","[, , , , , ]",https://aclanthology.org/2023.emnlp-demo.5.pdf,2023-12-01,,"The advent of large language models has brought about new ways of interacting with data intuitively via natural language. In recent years, a variety of visualization systems have explored the use of natural language to create and modify visualizations through visualization-oriented dialog. However, the majority of these systems rely on tailored dialog agents to analyze domain-specific data and operate domain-specific visualization tools and libraries. This is a major challenge when trying to transfer functionalities between dialog interfaces of different visualization applications. To address this issue, we propose VIST5, a visualization-oriented dialog system that focuses on easy adaptability to an application domain as well as easy transferability of language-controllable visualization library functions between applications. Its architecture is based on a retrieval-augmented T5 language model that leverages few-shot learning capabilities to enable a rapid adaptation of the system.",,ACL,['few-shot learning'],highly relevant,"The paper focuses on optimizing discrete prompts for PLMs specifically for few-shot NLP tasks using a novel dialogue alignment strategy and an RL framework, which aligns well with the topic of prompt engineering." +1618,pcbert parent and child bert for chinese fewshot ner,"[, , , , , , ]",https://aclanthology.org/2022.coling-1.192.pdf,2022-10-01,,"Achieving good performance on few-shot or zero-shot datasets has been a long-term challenge for NER. The conventional semantic transfer approaches on NER will decrease model performance when the semantic distribution is quite different, especially in Chinese few-shot NER. Recently, prompt-tuning has been thoroughly considered for low-resource tasks. But there is no effective prompt-tuning approach for Chinese few-shot NER. In this work, we propose a prompt-based Parent and Child BERT (PCBERT) for Chinese few-shot NER. To train an annotating model on high-resource datasets and then discover more implicit labels on low-resource datasets. We further design a label extension strategy to achieve label transferring from high-resource datasets. We evaluated our model on Weibo and the other three sampling Chinese NER datasets, and the experimental result demonstrates our approach’s effectiveness in few-shot learning.",,ACL,['few-shot learning'],highly relevant,"The paper describes an automatic prompt optimization framework, PROPANE, focusing on engineering prompts for directing LLMs' behaviors, which is central to the topic of prompt engineering." 
+1619,visual prompt tuning for fewshot text classification,"[, , , , , , , ]",https://aclanthology.org/2022.coling-1.492.pdf,2022-10-01,,"Deploying large-scale pre-trained models in the prompt-tuning paradigm has demonstrated promising performance in few-shot learning. Particularly, vision-language pre-training models (VL-PTMs) have been intensively explored in various few-shot downstream tasks. However, most existing works only apply VL-PTMs to visual tasks like image classification, with few attempts being made on language tasks like text classification. In few-shot text classification, a feasible paradigm for deploying VL-PTMs is to align the input samples and their category names via the text encoders. However, it leads to the waste of visual information learned by the image encoders of VL-PTMs. To overcome this drawback, we propose a novel method named Visual Prompt Tuning (VPT). To our best knowledge, this method is the first attempt to deploy VL-PTM in few-shot text classification task. The main idea is to generate the image embeddings w.r.t. category names as visual prompt and then add them to the aligning process. Extensive experiments show that our VPT can achieve significant improvements under both zero-shot and few-shot settings. Importantly, our VPT even outperforms the most recent prompt-tuning methods on five public text classification datasets.",,ACL,['few-shot learning'],highly relevant,"The paper introduces RecPrompt, a framework that explicitly uses prompt engineering for news recommendation systems, applying it to LLMs like GPT-4." +1620,crosslingual fewshot learning on unseen languages,"[, , , , ]",https://aclanthology.org/2022.aacl-main.59.pdf,2022-11-01,,"Large pre-trained language models (LMs) have demonstrated the ability to obtain good performance on downstream tasks with limited examples in cross-lingual settings. However, this was mostly studied for relatively resource-rich languages, where at least enough unlabeled data is available to be included in pre-training a multilingual language model. In this paper, we explore the problem of cross-lingual transfer in unseen languages, where no unlabeled data is available for pre-training a model. We use a downstream sentiment analysis task across 12 languages, including 8 unseen languages, to analyze the effectiveness of several few-shot learning strategies across the three major types of model architectures and their learning dynamics. We also compare strategies for selecting languages for transfer and contrast findings across languages seen in pre-training compared to those that are not. Our findings contribute to the body of knowledge on cross-lingual models for low-resource settings that is paramount to increasing coverage, diversity, and equity in access to NLP technology. We show that, in few-shot learning, linguistically similar and geographically similar languages are useful for cross-lingual adaptation, but taking the context from a mixture of random source languages is surprisingly more effective. We also compare different model architectures and show that the encoder-only model, XLM-R, gives the best downstream task performance.",,ACL,['few-shot learning'],highly relevant,"The paper describes a model that uses dynamic cross-modal learnable prompts for medical vision-language tasks, which indicates relevance to the topic of prompt engineering." 
+1621,dsp discriminative soft prompts for zeroshot entity and relation extraction,"[, , , , , , ]",https://aclanthology.org/2023.findings-acl.339.pdf,2023-07-01,,"Prompt-based methods have shown their efficacy in transferring general knowledge within pre-trained language models (PLMs) for low-resource scenarios. Typically, prompt-based methods convert downstream tasks to cloze-style problems and map all labels to verbalizers.However, when applied to zero-shot entity and relation extraction, vanilla prompt-based methods may struggle with the limited coverage of verbalizers to labels and the slow inference speed. In this work, we propose a novel Discriminate Soft Prompts (DSP) approach to take advantage of the prompt-based methods to strengthen the transmission of general knowledge. Specifically, we develop a discriminative prompt method, which reformulates zero-shot tasks into token discrimination tasks without having to construct verbalizers.Furthermore, to improve the inference speed of the prompt-based methods, we design a soft prompt co-reference strategy, which leverages soft prompts to approximately refer to the vector representation of text tokens. The experimental results show that, our model outperforms baselines on two zero-shot entity recognition datasets with higher inference speed, and obtains a 7.5% average relation F1-score improvement over previous state-of-the-art models on Wiki-ZSL and FewRel.",,ACL,['prompt-based methods'],highly relevant,"The paper describes 'BeautifulPrompt', a model designed to improve the quality of prompts used for text-to-image synthesis, which falls under the scope of prompt engineering." +1622,arggen prompting text generation models for documentlevel eventargument aggregation,"[, , ]",https://aclanthology.org/2022.findings-aacl.37.pdf,2022-11-01,,"Most of the existing discourse-level Information Extraction tasks have been modeled to be extractive in nature. However, we argue that extracting information from larger bodies of discourse-like documents requires more natural language understanding and reasoning capabilities. In our work, we propose the novel task of document-level event argument aggregation which generates consolidated event-arguments at a document-level with minimal loss of information. More specifically, we focus on generating precise document-level information frames in a multilingual setting using prompt-based methods. In this paper, we show the effectiveness of u prompt-based text generation approach to generate document-level argument spans in a low-resource and zero-shot setting. We also release the first of its kind multilingual event argument aggregation dataset that can be leveraged in other related multilingual text generation tasks as well: https://github.com/DebanjanaKar/ArgGen.",,ACL,['prompt-based methods'],highly relevant,"The paper specifically mentions the use of 'advanced prompt-engineering' as part of its methodology, therefore it is focused on the application of prompt engineering within the healthcare context." +1623,kipt knowledgeinjected prompt tuning for event detection,"[, , , , , , ]",https://aclanthology.org/2022.coling-1.169.pdf,2022-10-01,,"Event detection aims to detect events from the text by identifying and classifying event triggers (the most representative words). Most of the existing works rely heavily on complex downstream networks and require sufficient training data. Thus, those models may be structurally redundant and perform poorly when data is scarce. 
Prompt-based models are easy to build and are promising for few-shot tasks. However, current prompt-based methods may suffer from low precision because they have not introduced event-related semantic knowledge (e.g., part of speech, semantic correlation, etc.). To address these problems, this paper proposes a Knowledge-injected Prompt Tuning (KiPT) model. Specifically, the event detection task is formulated into a condition generation task. Then, knowledge-injected prompts are constructed using external knowledge bases, and a prompt tuning strategy is leveraged to optimize the prompts. Extensive experiments indicate that KiPT outperforms strong baselines, especially in few-shot scenarios.",,ACL,['prompt-based methods'],somewhat relevant,"The abstract describes using GPT-4 for creating explanatory text for API calls, which is an application of prompt engineering to improve malware detection, but it does not specifically mention hard prefix prompting." +1624,nspbert a promptbased fewshot learner through an original pretraining task —— next sentence prediction,"[, , , ]",https://aclanthology.org/2022.coling-1.286.pdf,2022-10-01,,"Using prompts to utilize language models to perform various downstream tasks, also known as prompt-based learning or prompt-learning, has lately gained significant success in comparison to the pre-train and fine-tune paradigm. Nonetheless, virtually most prompt-based methods are token-level such as PET based on mask language model (MLM). In this paper, we attempt to accomplish several NLP tasks in the zero-shot and few-shot scenarios using a BERT original pre-training task abandoned by RoBERTa and other models——Next Sentence Prediction (NSP). Unlike token-level techniques, our sentence-level prompt-based method NSP-BERT does not need to fix the length of the prompt or the position to be predicted, allowing it to handle tasks such as entity linking with ease. NSP-BERT can be applied to a variety of tasks based on its properties. We present an NSP-tuning approach with binary cross-entropy loss for single-sentence classification tasks that is competitive compared to PET and EFL. By continuing to train BERT on RoBERTa’s corpus, the model’s performance improved significantly, which indicates that the pre-training corpus is another important determinant of few-shot besides model size and prompt method.",,ACL,['prompt-based methods'],somewhat relevant,"The paper discusses enhancing the interpretability and semantics of prompts for text-to-image diffusion models, focusing on prompt inversion rather than expanding prompt engineering techniques for post-training model interaction." +1625,dissecting incontext learning of translations in gpt3,"[, , ]",https://aclanthology.org/2023.findings-emnlp.61.pdf,2023-12-01,,"Most of the recent work in leveraging Large Language Models (LLMs) such as GPT-3 for Machine Translation (MT) has focused on selecting the few-shot samples for prompting. In this work, we try to better understand the role of demonstration attributes for the in-context learning of translations through perturbations of high-quality, in-domain demonstrations. We find that asymmetric perturbation of the source-target mappings yield vastly different results. We show that the perturbation of the source side has surprisingly little impact, while target perturbation can drastically reduce translation quality, suggesting that it is the output text distribution that provides the most important learning signal during in-context learning of translations. 
We propose a method named Zero-Shot-Context to add this signal automatically in Zero-Shot prompting. We demonstrate that it improves upon the zero-shot translation performance of GPT-3, even making it competitive with few-shot prompted translations.",,ACL,['few-shot prompt'],highly relevant,"The paper includes a focus on performing prompt engineering to find which prompts best induce a specific behavior (lying) in a language model, which is directly related to hard prefix prompting." +1626,what makes chainofthought prompting effective a counterfactual study,"[, , ]",https://aclanthology.org/2023.findings-emnlp.101.pdf,2023-12-01,,"The effectiveness of Chain-of-thought prompting (CoT) has been widely recognized, but the underlying mechanisms behind its success, the reason why it just works for a wide range of tasks, remains an open question. To investigate this, we employ a counterfactual prompting approach, systematically manipulating elements of examples used in a few-shot prompt, and testing the consequences on model behavior. This allows us to understand the relative contributions of prompt elements such as symbols (digits, entities) and patterns (equations, sentence structure) on in-context learning. Our experiments with three different large language models (LLMs) reveal several key findings. First, the specific symbols used in the prompt do not significantly impact the model’s performance. However, consistent patterns in examples and specifying text in style frequently found on the web are crucial. Second, our findings suggest that the necessity of accurate few-shot examples depends on their role in communicating task understanding. We identify tasks where inaccurate few-shot examples hurt and, surprisingly, tasks where they improve performance. Additionally, we find that the intermediate steps in CoT may not necessarily facilitate learning how to solve a task, but instead efficiently convey task understanding (what) to the model. Furthermore, CoT leverages LLMs to fill in missing commonsense information, particularly helping difficult reasoning problems and long-tail questions.",,ACL,['few-shot prompt'],somewhat relevant,"The abstract mentions 'prompt engineering' as one of the usability challenges identified in software development using LLMs, which indicates the topic is discussed within the paper." +1627,legally enforceable hate speech detection for public forums,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.730.pdf,2023-12-01,,"Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). 
With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.",,ACL,['few-shot prompt'],highly relevant,"The abstract mentions the use of prompt engineering to improve a Q&A system for safety engineering, indicating that the paper is relevant to the topic of prompt engineering." +1628,conditioning on dialog acts improves empathy style transfer,"[, , ]",https://aclanthology.org/2023.findings-emnlp.884.pdf,2023-12-01,,"We explore the role of dialog acts in style transfer, specifically empathy style transfer – rewriting a sentence to make it more empathetic without changing its meaning. Specifically, we use two novel few-shot prompting strategies: target prompting, which only uses examples of the target style (unlike traditional prompting with source/target pairs), and dialog-act-conditioned prompting, which first estimates the dialog act of the source sentence and then makes it more empathetic using few-shot examples of the same dialog act. Our study yields two key findings: (1) Target prompting typically improves empathy more effectively while maintaining the same level of semantic similarity; (2) Dialog acts matter. Dialog-act-conditioned prompting enhances empathy while preserving both semantics and the dialog-act type. Different dialog acts benefit differently from different prompting methods, highlighting the need for further investigation of the role of dialog acts in style transfer.",,ACL,['few-shot prompt'],highly relevant,"The paper evaluates the knowledge of LLMs on the climate crisis through prompt engineering based on a set of questions, showing direct relevance to hard prefix prompting." +1629,complex reasoning in natural language,"[, , , , , ]",https://aclanthology.org/2023.acl-tutorials.2.pdf,2023-07-01,,"Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc. A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021) and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area. We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails. This tutorial then reviews recent promising directions for tackling these tasks. 
Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions.",,ACL,['few-shot prompt'],highly relevant,"The paper describes a new approach to code generation with LLMs that includes the use of well-designed direct prompts, indicating relevance to the topic of prompt engineering." +1630,decomt decomposed prompting for machine translation between related languages using large language models,"[, , , , ]",https://aclanthology.org/2023.emnlp-main.279.pdf,2023-12-01,,"This study investigates machine translation between related languages i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for test sentences. This procedure requires the model to learn how to generate translations while simultaneously ensuring that token ordering is maintained to produce a fluent and accurate translation. We propose that for related languages, the task of machine translation can be simplified by leveraging the monotonic alignment characteristic of such languages. We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations. Through automatic and human evaluation conducted on multiple related language pairs across various language families, we demonstrate that our proposed approach of decomposed prompting surpasses multiple established few-shot baseline approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.",,ACL,['few-shot prompt'],highly relevant,"The paper demonstrates techniques to decrease discrimination through careful prompt engineering, which is directly related to the topic of hard prefix prompts." +1631,a study on the effectiveness of large language models for translation with markup,"[, , , ]",https://aclanthology.org/2023.mtsummit-research.13.pdf,2023-09-01,,"In this paper we evaluate the utility of large language models (LLMs) for translation of text with markup in which the most important and challenging aspect is to correctly transfer markup tags while ensuring that the content, both, inside and outside tags is correctly translated. While LLMs have been shown to be effective for plain text translation, their effectiveness for structured document translation is not well understood. To this end, we experiment with BLOOM and BLOOMZ, which are open-source multilingual LLMs, using zero, one and few-shot prompting, and compare with a domain-specific in-house NMT system using a detag-and-project approach for markup tags. We observe that LLMs with in-context learning exhibit poorer translation quality compared to the domain-specific NMT system, however, they are effective in transferring markup tags, especially the large BLOOM model (176 billion parameters). 
This is further confirmed by our human evaluation which also reveals the types of errors of the different tag transfer techniques. While LLM-based approaches come with the risk of losing, hallucinating and corrupting tags, they excel at placing them correctly in the translation.",,ACL,['few-shot prompt'],highly relevant,"The paper discusses the effectiveness of prompts and introduces an approach that simplifies prompt engineering (kNN-ICL), which indicates its relevance to the topic of prompt engineering." +1632,fineprompt unveiling the role of finetuned inductive bias on compositional reasoning in gpt4,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.245.pdf,2023-12-01,,"Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) were proposed to unlock compositional, multi-step reasoning capabilities of LLMs. Despite their success, the prompts demand significant human effort to discover and validate them. Our work draws attention to the idea of transferring task-specific inductive biases from finetuned models to prompts, as a way of improving GPT-4’s compositional reasoning capabilities. To leverage these inductive biases, we formulate prompt templates to ease the transfer of inductive biases. The experimental results on multi-hop question answering and numerical reasoning over text show that our proposed prompt scheme shows competitive zero-shot and few-shot performances compared to existing prompts on complicated reasoning tasks, highlighting the importance of adopting the validated biases of the previous paradigm.",,ACL,['prompting techniques'],somewhat relevant,The paper discusses the evaluation of a prompt-based tool learning method among others to address hallucination issues in LLMs. +1633,trojansql sql injection against natural language interface to database,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.264.pdf,2023-12-01,,"The technology of text-to-SQL has significantly enhanced the efficiency of accessing and manipulating databases. However, limited research has been conducted to study its vulnerabilities emerging from malicious user interaction. By proposing TrojanSQL, a backdoor-based SQL injection framework for text-to-SQL systems, we show how state-of-the-art text-to-SQL parsers can be easily misled to produce harmful SQL statements that can invalidate user queries or compromise sensitive information about the database. The study explores two specific injection attacks, namely boolean-based injection and union-based injection, which use different types of triggers to achieve distinct goals in compromising the parser. Experimental results demonstrate that both medium-sized models based on fine-tuning and LLM-based parsers using prompting techniques are vulnerable to this type of attack, with attack success rates as high as 99% and 89%, respectively. We hope that this study will raise more concerns about the potential security risks of building natural language interfaces to databases.",,ACL,['prompting techniques'],highly relevant,"The paper focuses on few-shot prompting and discusses a novel technique to guide large language models, which is directly related to prompt engineering." 
+1634,locally differentially private document generation using zero shot prompting,"[, , ]",https://aclanthology.org/2023.findings-emnlp.566.pdf,2023-12-01,,"Numerous studies have highlighted the privacy risks associated with large language models. Our research offers a unique perspective by demonstrating that pretrained large language models can effectively contribute to privacy preservation. We propose a locally differentially private mechanism called DP-Prompt, which leverages the power of pretrained large language models and zero-shot prompting to counter author de-anonymization attacks while minimizing the impact on downstream utility. When DP-Prompt is used with a powerful language model like ChatGPT (gpt-3.5), we observe a notable reduction in the success rate of de-anonymization attacks, showing that it surpasses existing approaches by a considerable margin despite its simpler design. For instance, in the case of the IMDB dataset, DP-Prompt (with ChatGPT) perfectly recovers the clean sentiment F1 score while achieving a 46% reduction in author identification F1 score against static attackers and a 26% reduction against adaptive attackers. We conduct extensive experiments across six open-source large language models, ranging up to 7 billion parameters, to analyze various effects of the privacy-utility tradeoff.",,ACL,['zero-shot prompt'],somewhat relevant,"The paper introduces methods related to zero-shot and few-shot learning modalities leveraging textual descriptors, which suggests relevance to the use of prompts, though it's not clear if they specifically discuss hard prefix prompts." +1635,"zeroprompt scaling promptbased pretraining to 1,000 tasks improves zeroshot generalization","[, , , , , , ]",https://aclanthology.org/2022.findings-emnlp.312.pdf,2022-12-01,,"We propose a multitask pretraining approach ZeroPrompt for zero-shot generalization, focusing on task scaling and zero-shot prompting. While previous models are trained on only a few dozen tasks, we scale to 1,000 tasks for the first time using real-world data. This leads to a crucial discovery that task scaling can be an efficient alternative to model scaling; i.e., the model size has less impact on performance with an extremely large number of tasks. Our results show that task scaling can improve training efficiency by 30 times in FLOPs.Empirically, ZeroPrompt substantially improves both the efficiency and the performance of zero-shot learning across a variety of academic and production datasets.",,ACL,['zero-shot prompt'],highly relevant,"The abstract specifically mentions that the study evaluates various prompting methods in the context of Arabic grammatical error correction, indicating relevance to prompt engineering." +1636,globallocal modeling with promptbased knowledge enhancement for emotion inference in conversation,"[, ]",https://aclanthology.org/2023.findings-eacl.158.pdf,2023-05-01,,"The ability to recognize emotions in conversations is necessary and important for the online chatbot to do tasks such as empathetic response generation and emotional support. Present researches mainly focus on recognizing emotions through a speaker’s utterance, while research on emotion inference predicts emotions of addressees through previous utterances. Because of the lack of the addressee’s utterance, emotion inference is more challenging than emotion recognition. 
In this paper, we propose a global-local modeling method based on recurrent neural networks (RNN) and pre-trained language models (PLM) to do emotion inference, which utilizes the sequence modeling ability of RNNs and abundant knowledge from PLMs. Moreover, we take the whole dialogue history as input of PLM to generate knowledge by in-context learning. Experimental results show that our model with knoledge enhancement achieves state-of-the-art performance on all three datasets.",,ACL,['in-context learning'],somewhat relevant,"The abstract describes the use of 'Chain-of-Thought-Few-Shot Prompting (CoT-FSP)' and 'Zero-Shot Prompting (ZSP)' as methods for incorporating guidelines into LLMs, indicating the use of prompting techniques relevant to prompt engineering." +1637,towards zeroshot persona dialogue generation with incontext learning,"[, , , , , ]",https://aclanthology.org/2023.findings-acl.90.pdf,2023-07-01,,"Much work has been done to improve persona consistency by finetuning a pretrained dialogue model on high-quality human-annoated persona datasets. However, these methods still face the challenges of high cost and poor scalability. To this end, we propose a simple-yet-effective approach to significantly improve zero-shot persona consistency via in-context learning. Specifically, we first pre-train a persona-augmented dialogue generation model and then utilize in-context prompting mechanism to realize zero-shot persona customization. Experimental results demonstrate that our method can dramatically improve persona consistency without compromising coherence and informativeness in zero-shot settings.",,ACL,['in-context learning'],highly relevant,"The paper evaluates GPT-4's abstract reasoning abilities using detailed, one-shot prompting, which is relevant to the topic of prompt engineering." +1638,what incontext learning “learns” incontext disentangling task recognition and task learning,"[, , , ]",https://aclanthology.org/2023.findings-acl.527.pdf,2023-07-01,,"Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations – even without ground-truth labels – and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL’s performance consistently improves with more demonstrations in context. Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature.",,ACL,['in-context learning'],highly relevant,"The abstract mentions the use of 'Thread of Thought' (ThoT) as a strategy that integrates with various prompting techniques, indicating relevance to prompt engineering." 
+1639,do large language models know what they don’t know,"[, , , , , ]",https://aclanthology.org/2023.findings-acl.551.pdf,2023-07-01,,"Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks. Current research focuses on enhancing their performance within their existing knowledge. Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend. Therefore, the ability to understand their own limitations on the unknowns, referred to as self-knowledge, is of paramount importance. This study aims to evaluate LLMs’ self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, SelfAware, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovers an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge.",,ACL,['in-context learning'],highly relevant,"The paper discusses OverPrompt, a strategy leveraging in-context learning of LLMs to improve efficiency in zero-shot prompting techniques, which is a direct application of prompt engineering." +1640,incontext learning for fewshot multimodal named entity recognition,"[, , , , , , ]",https://aclanthology.org/2023.findings-emnlp.196.pdf,2023-12-01,,"Thanks in part to the availability of copious annotated resources for some entity categories, existing studies have achieved superior performance in multimodal named entity recognition (MNER). However, in the real-world scenario, it is infeasible to enumerate all entity categories in advance. Therefore, in this paper, we formulate a new few-shot multimodal named entity recognition (FewMNER) task, which aims to effectively locate and identify named entities for a text-image pair only using a small number of labeled examples. Further, we explore the merit of in-context learning (ICL) and propose a novel framework to deal with FewMNER, where three points are taken into account: i.e., converting visual modality, selecting useful examples, and designing an effective task demonstration. Specifically, we first employ an image caption model to convert images into textual descriptions, enabling large language models to absorb information from visual modality. Then, we use the ranking of the sum of similarity rankings from both text and image modalities to select k-nearest examples, which form a demonstration context. Finally, we utilize the MNER definition and the meaning of each entity category as effective instruction. Extensive experimental results demonstrate that our framework outperforms baselines under several few-shot settings.",,ACL,['in-context learning'],somewhat relevant,"The paper mentions using appropriate prompting techniques with GPT-3 to generate informative dialogue, which suggests relevance to prompt engineering." 
+1641,2iner instructive and incontext learning on fewshot named entity recognition,"[, , , , , , ]",https://aclanthology.org/2023.findings-emnlp.259.pdf,2023-12-01,,"Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extracting, to enhance the model’s understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms.",,ACL,['in-context learning'],highly relevant,"The paper describes using the chain-of-thought prompting technique for detecting online hate with large language models, which indicates the use of prompt engineering techniques." +1642,narrative style and the spread of health misinformation on twitter,"[, , , , ]",https://aclanthology.org/2023.findings-emnlp.282.pdf,2023-12-01,,"Using a narrative style is an effective way to communicate health information both on and off social media. Given the amount of misinformation being spread online and its potential negative effects, it is crucial to investigate the interplay between narrative communication style and misinformative health content on user engagement on social media platforms. To explore this in the context of Twitter, we start with previously annotated health misinformation tweets (n ≈15,000) and annotate a subset of the data (n=3,000) for the presence of narrative style. We then use these manually assigned labels to train text classifiers, experimenting with supervised fine-tuning and in-context learning for automatic narrative detection. We use our best model to label remaining portion of the dataset, then statistically analyze the relationship between narrative style, misinformation, and user-level features on engagement, finding that narrative use is connected to increased tweet engagement and can, in some cases, lead to increased engagement with misinformation. Finally, we analyze the general categories of language used in narratives and health misinformation in our dataset.",,ACL,['in-context learning'],somewhat relevant,"The abstract mentions the use of various techniques from LLM prompting, signaling the incorporation of prompt engineering in the methodology." +1643,information extraction from legal wills how well does gpt4 do,"[, , , , , ]",https://aclanthology.org/2023.findings-emnlp.287.pdf,2023-12-01,,"This work presents a manually annotated dataset for Information Extraction (IE) from legal wills, and relevant in-context learning experiments on the dataset. The dataset consists of entities, binary relations between the entities (e.g., relations between testator and beneficiary), and n-ary events (e.g., bequest) extracted from 45 legal wills from two US states. This dataset can serve as a foundation for downstream tasks in the legal domain. 
Another use case of this dataset is evaluating the performance of large language models (LLMs) on this IE task. We evaluated GPT-4 with our dataset to investigate its ability to extract information from legal wills. Our evaluation result demonstrates that the model is capable of handling the task reasonably well. When given instructions and examples as a prompt, GPT-4 shows decent performance for both entity extraction and relation extraction tasks. Nevertheless, the evaluation result also reveals that the model is not perfect. We observed inconsistent outputs (given a prompt) as well as prompt over-generalization.",,ACL,['in-context learning'],somewhat relevant,"The paper discusses an approach to improve inference efficiency for language models using precomputed attention states for overlapping text segments in prompts, but does not specifically relate to the construction or engineering of hard prefix prompts themselves." +1644,enhancing reasoning capabilities by instruction learning and chainofthoughts for implicit discourse relation recognition,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.374.pdf,2023-12-01,,"The aim of implicit discourse relation recognition is to comprehend the sense of connection between two arguments. In this work, we present a classification method that is solely based on generative models. Our proposed approach employs a combination of instruction templates and in-context learning to refine the generative model for effectively addressing the implicit discourse relation recognition task. Furthermore, we utilize Chain-of-Thoughts to partition the inference process into a sequence of three successive stages. This strategy enables us to fully utilize the autoregressive generative model’s potential for knowledge acquisition and inference, ultimately leading to enhanced performance on this natural language understanding task. The results of our experiments, evaluated on benchmark datasets PDTB 2.0, PDTB 3.0, and the CoNLL16 shared task, demonstrate superior performance compared to previous state-of-the-art models.",,ACL,['in-context learning'],somewhat relevant,The paper details a system (SAGE) that utilizes a tree of LLM prompts to interpret user requests and perform actions which is central to prompt engineering techniques. +1645,large language models meet harry potter a dataset for aligning dialogue agents with characters,"[, , , , , , , ]",https://aclanthology.org/2023.findings-emnlp.570.pdf,2023-12-01,,"In recent years, Dialogue-style Large Language Models (LLMs) such as ChatGPT and GPT4 have demonstrated immense potential in constructing open-domain dialogue agents. However, aligning these agents with specific characters or individuals remains a considerable challenge due to the complexities of character representation and the lack of comprehensive annotations. In this paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to advance the study of dialogue agents and character alignment. The dataset encompasses all dialogue sessions (in both English and Chinese) from the Harry Potter series and is annotated with vital background information, including dialogue scenes, speakers, character relationships, and attributes. These extensive annotations may empower LLMs to unlock character-driven dialogue capabilities. Furthermore, it can serve as a universal benchmark for evaluating how well can a LLM aligning with a specific character. We benchmark LLMs on HPD using both fine-tuning and in-context learning settings. 
Evaluation results reveal that although there is substantial room for improvement in generating high-quality, character-aligned responses, the proposed dataset is valuable in guiding models toward responses that better align with the character of Harry Potter.",,ACL,['in-context learning'],somewhat relevant,"The paper discusses the use of prompts to integrate VG instruction with pretrained models, which involves an aspect of prompt engineering, but the focus seems to be more on model benchmarks and evaluations rather than the specifics of engineering hard prefix prompts." +1646,kicgpt large language model with knowledge in context for knowledge graph completion,"[, , , ]",https://aclanthology.org/2023.findings-emnlp.580.pdf,2023-12-01,,"Knowledge Graph Completion (KGC) is crucial for addressing knowledge graph incompleteness and supporting downstream applications. Many models have been proposed for KGC and they can be categorized into two main classes, including triple-based and text-based approaches. Triple-based methods struggle with long-tail entities due to limited structural information and imbalanced distributions of entities. Text-based methods alleviate this issue but require costly training for language models and specific finetuning for knowledge graphs, which limits their efficiency. To alleviate the limitations in the two approaches, in this paper, we propose KICGPT, a framework that integrates a large language model (LLM) and a triple-based KGC retriever, to alleviate the long-tail problem without incurring additional training overhead. In the proposed KICGPT model, we propose an in-context learning strategy called Knowledge Prompt, which encodes structural knowledge into demonstrations to guide LLM. Empirical results on benchmark datasets demonstrate the effectiveness of the proposed KICGPT model with lighter training overhead and no finetuning.",,ACL,['in-context learning'],somewhat relevant,"The paper presents an autoprompting technique for fuzzing which implies the use of prompts, but does not specify if they are hard prefix prompts as defined in the systematic review guidelines." 
+1648,enhancing texttosql capabilities of large language models a study on prompt design strategies,"[, , , , , , , ]",https://aclanthology.org/2023.findings-emnlp.996.pdf,2023-12-01,,"In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions. In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources, and improve Text-to-SQL systems by exploring various prompt design strategies for employing LLMs. We conduct a systematic investigation into different demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task. Our approach involves leveraging the syntactic structure of an example’s SQL query to retrieve demonstrations, and we demonstrate that pursuing both diversity and similarity in demonstration selection leads to enhanced performance. Furthermore, we show that LLMs benefit from database-related knowledge augmentations. Our most effective strategy outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and the best fine-tuned system by 5.1 points on the Spider dataset. These results highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL task, and we present an analysis of the factors contributing to the success of our strategy.",,ACL,['in-context learning'],somewhat relevant,"The paper focuses on the robustness of LLMs to majority label bias in in-context learning, and specifically mentions the impact of instructional prompts on model robustness, indicating an exploration of the use of prompts." +1649,"don’t generate, discriminate a proposal for grounding language models to realworld environments","[, , ]",https://aclanthology.org/2023.acl-long.270.pdf,2023-07-01,,"A key missing capacity of current language models (LMs) is grounding to real-world environments. Most existing work for grounded language understanding uses LMs to directly generate plans that can be executed in the environment to achieve the desired effects. It thereby casts the burden of ensuring grammaticality, faithfulness, and controllability all on the LMs. We propose Pangu, a generic framework for grounded language understanding that capitalizes on the discriminative ability of LMs instead of their generative ability. Pangu consists of a symbolic agent and a neural LM working in a concerted fashion: The agent explores the environment to incrementally construct valid plans, and the LM evaluates the plausibility of the candidate plans to guide the search process. A case study on the challenging problem of knowledge base question answering (KBQA), which features a massive environment, demonstrates the remarkable effectiveness and flexibility of Pangu: A BERT-base LM is sufficient for setting a new record on standard KBQA datasets, and larger LMs further bring substantial gains.Pangu also enables, for the first time, effective few-shot in-context learning for KBQA with large LMs such as Codex.",,ACL,['in-context learning'],somewhat relevant,The paper discusses the construction of Comparable Demonstrations for In-Context Learning which is related to prompt engineering as it involves the selection and editing of demonstrations (prompts) to guide LLMs on downstream tasks. 
+1650,fidicl a fusionindecoder approach for efficient incontext learning,"[, , , , ]",https://aclanthology.org/2023.acl-long.454.pdf,2023-07-01,,"Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated demonstrations are often excessively long and induce additional computation. Inspired by fusion-in-decoder (FiD) models which efficiently aggregate more passages and thus outperforms concatenation-based models in open-domain QA, we hypothesize that similar techniques can be applied to improve the efficiency and end-task performance of ICL. To verify this, we present a comprehensive study on applying three fusion methods—concatenation-based (early fusion), FiD (intermediate), and ensemble-based (late)—to ICL. We adopt a meta-learning setup where a model is first trained to perform ICL on a mixture of tasks using one selected fusion method, then evaluated on held-out tasks for ICL. Results on 11 held-out tasks show that FiD-ICL matches or outperforms the other two fusion methods. Additionally, we show that FiD-ICL (1) is 10x faster at inference time compared to concat-based and ensemble-based ICL, as we can easily pre-compute the representations of in-context examples and reuse them; (2) enables scaling up to meta-training 3B-sized models, which would fail for concat-based ICL.",,ACL,['in-context learning'],somewhat relevant,"The paper investigates in-context learning (ICL) and its robustness in language models, specifically examining syntax sensitivity and the use of chain-of-thought prompting, which is related to prompt engineering techniques." +1651,incontext learning of large language models for controlled dialogue summarization a holistic benchmark and empirical analysis,"[, , , ]",https://aclanthology.org/2023.newsum-1.6.pdf,2023-12-01,,"Large Language Models (LLMs) have shown significant performance in numerous NLP tasks, including summarization and controlled text generation. A notable capability of LLMs is in-context learning (ICL), where the model learns new tasks using input-output pairs in the prompt without any parameter update. However, the performance of LLMs in the context of few-shot abstractive dialogue summarization remains underexplored. This study evaluates various state-of-the-art LLMs on the SAMSum dataset within a few-shot framework. We assess these models in both controlled (entity control, length control, and person-focused planning) and uncontrolled settings, establishing a comprehensive benchmark in few-shot dialogue summarization. Our findings provide insights into summary quality and model controllability, offering a crucial reference for future research in dialogue summarization.",,ACL,['in-context learning'],highly relevant,"The method described involves chain-of-thought prompting and in-context learning with a large language model for few-shot classification and segmentation tasks, which falls under prompt engineering." +1652,empowering conversational agents using semantic incontext learning,"[, ]",https://aclanthology.org/2023.bea-1.62.pdf,2023-07-01,,"Language models are one of the biggest game changers in downstream NLP applications, especially in conversational agents. In spite of their awesome capabilities to generated responses to solve the inquireis, there are still some big challenges to using them. One challenge is how to enable the LLMs to use the private internal data to solve inquires. 
And secondly, how to keep the LLMs updated with newly incoming data without the burden of fine-tuning as it is not only expensive but also not an available option for some commercial LLMs, such as ChatGPT. In this work, we propose Semantic In-Context Learning (S-ICL) to address the aforementioned challenges. Our approach participated in the BEA 2023 shared task and ended up in fourth place in both the development and evaluation phases.",,ACL,['in-context learning'],somewhat relevant,"The abstract mentions using various ways of phrasing the question/prompt for query variations and prompt-chaining to improve VLM predictions, indicating the use of prompts and their engineering as a part of the method, although not specifically 'hard prefix' prompts." +1653,tasklevel thinking steps help large language models for challenging classification task,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.150.pdf,2023-12-01,,"Large language models (LLMs) have shown incredible performance on many tasks such as dialogue generation, commonsense reasoning and question answering. In-context learning (ICL) is an important paradigm for adapting LLMs to the downstream tasks by prompting few demonstrations. However, the distribution of demonstrations can severely affect the performance, especially for challenging classification tasks. In this paper, we propose the concept of task-level thinking steps that can eliminate bias introduced by demonstrations. Further, to help LLMs distinguish confusing classes, we design a progressive revision framework, which can improve the thinking steps by correcting hard demonstrations. Experimental results prove the superiority of our proposed method, achieving best performance on three kinds of challenging classification tasks in the zero-shot and few-shot settings. Besides, with task-level thinking steps, automatically generated chain-of-thoughts (CoTs) bring more competitive performance.",,ACL,['in-context learning'],somewhat relevant,"The paper discusses In-Context Learning (ICL) using prompts as a method for debiasing PLMs without updating model parameters, which is related to prompt engineering." 
It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.",,ACL,['in-context learning'],highly relevant,"The paper introduces a new prompting method, the Heuristic-Driven Link-of-Analogy (HD-LoA) prompting, for enhancing large language models (LLMs) in specific tasks, which falls within the scope of prompt engineering." +1655,representative demonstration selection for incontext learning with twostage determinantal point process,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.331.pdf,2023-12-01,,"Although In-Context Learning has proven effective across a broad array of tasks, its efficiency is noticeably influenced by the selection of demonstrations. Existing methods tend to select different demonstrations for each test instance, which is time-consuming and poses limitations in practical scenarios. Therefore, this study aims to address the challenge of selecting a representative subset of in-context demonstrations that can effectively prompt different test instances in a specific task. We propose that this representative subset should be of high quality and diversity. Our empirical analyses confirm that demonstrations that meet these criteria can indeed bolster model performance. To satisfy these criteria, this paper further introduces a two-stage Determinantal Point Process (DPP) method designed to incorporate both quality and diversity in the process of demonstration selection, thereby obtaining representative in-context demonstrations. Through comprehensive experimentation, we have confirmed the efficacy of our proposed method, paving the way for more practical and effective In-Context Learning.",,ACL,['in-context learning'],somewhat relevant,"The paper mentions the use of Cappy with PromptSource, which implies the use of prompts, but does not specifically discuss prompt engineering or hard prefix prompting techniques." +1656,mt2 towards a multitask machine translation model with translationspecific incontext learning,"[, , , , , ]",https://aclanthology.org/2023.emnlp-main.532.pdf,2023-12-01,,"Sentence-level translation, document-level translation, translation memory, and terminology constrained translation play an important role in machine translation. Most of the previous work uses separate models or methods to solve these tasks, which is not conducive to knowledge transfer of different tasks and increases the complexity of system construction. In this work, we explore the potential of pre-trained language model in machine translation tasks and propose a Multi-Task Machine Translation (MT2) model to integrate these translation tasks. We design a novel translation-specific In-Context Learning (ICL) paradigm for model training, in which all of the translation tasks can be modeled as context-learning tasks that integrate contextual information for performance improvement. Specifically, we propose a retrieval and alignment method to obtain a large scale context-enhancement training data, then we train the model in an in-context learning manner. Furthermore, we adopt two context-dependent training strategies to encourage the model to better understand and utilize contextual information for translation. 
Extensive experiments on translation memory, terminology constrained translation, document-level translation, and few-shot domain-adaptation tasks demonstrate the superior performance of our model, verifying the effectiveness of our proposed approach.",,ACL,['in-context learning'],somewhat relevant,"The paper discusses manipulating the label space for in-context classification with language models and vision-language models, which is related to how prompts affect performance, but does not explicitly mention hard prefix prompts or prompt engineering techniques." +1657,just adjust one prompt enhancing incontext dialogue scoring via constructing the optimal subgraph of demonstrations and prompts,"[, , , , ]",https://aclanthology.org/2023.emnlp-main.590.pdf,2023-12-01,,"The use of modern Large Language Models (LLMs) as chatbots still has some problems such as hallucinations and lack of empathy. Identifying these issues can help improve chatbot performance. The community has been continually iterating on reference-free dialogue evaluation methods based on large language models (LLMs) that can be readily applied. However, many of these LLM-based metrics require selecting specific datasets and developing specialized training tasks for different evaluation dimensions (e.g., coherence, informative). The developing step can be time-consuming and may need to be repeated for new evaluation dimensions. To enable efficient and flexible adaptation to diverse needs of dialogue evaluation, we propose a dimension-agnostic scoring method that leverages the in-context learning (ICL) capability of LLMs to learn from human scoring to the fullest extent. Our method has three key features. To begin with, rather than manual prompt crafting, we propose automatically generating prompts, allowing the LLM to observe human labels and summarize the most suitable prompt. Additionally, since the LLM has a token limit and ICL is sensitive to demonstration variations, we train a selector to finely customize demonstrations and prompts for each dialogue input. Finally, during inference, we propose to request the LLM multiple times with a subgraph of demonstrations and prompts that are diverse and suitable to maximize ICL from various human scoring. We validate the efficacy of our method on five datasets, even with a small amount of annotated data, our method outperforms all strong baselines. Code is available at https://github.com/iamlxb3/EMNLP2023-ADOROR.",,ACL,['in-context learning'],somewhat relevant,"The paper discusses expanding the CoT method which seems to involve a type of prompting technique to improve LLMs response generation, but it does not specifically mention 'hard prefix' prompting or 'prompt engineering' in the context of post-training modifications." +1658,what makes good incontext examples for gpt3,"[, , , , , ]",https://aclanthology.org/2022.deelio-1.10.pdf,2022-05-01,,"GPT-3 has attracted lots of attention due to its superior performance across a wide range of NLP tasks, especially with its in-context learning abilities. Despite its success, we found that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3’s in-context learning capabilities. 
Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically-similar to a test query sample to formulate its corresponding prompt. Intuitively, the examples selected with such a strategy may serve as more informative inputs to unleash GPT-3’s power of text generation. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random selection baseline. Moreover, it is observed that the sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (44.3% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset).",,ACL,['in-context learning'],somewhat relevant,"The abstract mentions the capabilities of LLMs in in-context learning to replicate input-output text generation patterns without further fine-tuning, potentially indicating a relevance to prompt-based methods, but does not directly reference hard prefix prompting or prompt engineering techniques." +1659,codeprompt taskagnostic prefix tuning for program and language generation,"[, ]",https://aclanthology.org/2023.findings-acl.325.pdf,2023-07-01,,"In order to solve the inefficient parameter update and storage issues of fine-tuning in Natural Language Generation (NLG) tasks, prompt-tuning methods have emerged as lightweight alternatives. Furthermore, efforts to reduce the gap between pre-training and fine-tuning have shown successful results in low-resource settings. As large Pre-trained Language Models (PLMs) for Program and Language Generation (PLG) tasks are constantly being developed, prompt tuning methods are necessary for the tasks. However, due to the gap between pre-training and fine-tuning different from PLMs for natural language, a prompt tuning method that reflects the traits of PLM for program language is needed. In this paper, we propose a Task-Agnostic prompt tuning method for the PLG tasks, CodePrompt, that combines Input-Dependent Prompt Template (to bridge the gap between pre-training and fine-tuning of PLMs for program and language) and Corpus-Specific Prefix Tuning (to update the parameters of PLMs for program and language efficiently).Also, we propose a method to provide richer prefix word information for limited prefix lengths. We prove that our method is effective in three PLG tasks, not only in the full-data setting but also in the low-resource setting and cross-domain setting.",,ACL,['prompt template'],somewhat relevant,"The paper presents an approach that uses multimodal prompts (textual and visual) for image restoration, indicating relevance to the subject of prompt engineering." +1660,kul@smm4h’22 template augmented adaptive pretraining for tweet classification,"[, ]",https://aclanthology.org/2022.smm4h-1.41.pdf,2022-10-01,,This paper describes models developed for the Social Media Mining for Health (SMM4H) 2022 shared tasks. Our team participated in the first subtask that classifies tweets with Adverse Drug Effect (ADE) mentions. Our best-performing model comprises of a template augmented task adaptive pre-training and further fine-tuning on target task data. Augmentation with random prompt templates increases the amount of task-specific data to generalize the LM to the target task domain. 
We explore 2 pre-training strategies: Masked language modeling (MLM) and Simple contrastive pre-training (SimSCE) and the impact of adding template augmentations with these pre-training strategies. Our system achieves an F1 score of 0.433 on the test set without using supplementary resources and medical dictionaries.,,ACL,['prompt template'],somewhat relevant,"The paper describes the use of hard prompt templates for inputting data into Pretrained Language Models, which is relevant to the topic of hard prefix prompting." +1661,minichain a small library for coding with large language models,"[]",https://aclanthology.org/2023.emnlp-demo.27.pdf,2023-12-01,,"Programming augmented by large language models (LLMs) opens up many new application areas, but also requires care. LLMs are accurate enough, on average, to replace core functionality, yet make basic mistakes that demonstrate a lack of robustness. An ecosystem of prompting tools, from intelligent agents to new programming languages, have emerged with different solutions for patching LLMs with other tools. In this work, we introduce MiniChain, an opinionated tool for LLM augmented programming, with the design goals of ease-of-use of prototyping, transparency through automatic visualization, and a minimalistic approach to advanced features. The MiniChain library provides core primitives for coding LLM calls, separating out prompt templates, and capturing program structure. The library includes demo implementations of the main applications papers in the area, including chat-bots, code generation, retrieval-based question answering, and complex information extraction. The library is open-source and available at https://github.com/srush/MiniChain, with code demos available at https://srush-minichain.hf.space/, and video demo at https://www.youtube.com/watch?v=VszZ1VnO7sk.",,ACL,['prompt template'],highly relevant,"The abstract mentions the use of 'empirically-grounded prompt engineering' as a part of their methodology, indicating relevance to prompt engineering techniques." +1662,connprompt connectivecloze prompt learning for implicit discourse relation recognition,"[, , , ]",https://aclanthology.org/2022.coling-1.75.pdf,2022-10-01,,"Implicit Discourse Relation Recognition (IDRR) is to detect and classify relation sense between two text segments without an explicit connective. Vanilla pre-train and fine-tuning paradigm builds upon a Pre-trained Language Model (PLM) with a task-specific neural network. However, the task objective functions are often not in accordance with that of the PLM. Furthermore, this paradigm cannot well exploit some linguistic evidence embedded in the pre-training process. The recent pre-train, prompt, and predict paradigm selects appropriate prompts to reformulate downstream tasks, so as to utilizing the PLM itself for prediction. However, for its success applications, prompts, verbalizer as well as model training should still be carefully designed for different tasks. As the first trial of using this new paradigm for IDRR, this paper develops a Connective-cloze Prompt (ConnPrompt) to transform the relation prediction task as a connective-cloze task. Specifically, we design two styles of ConnPrompt template: Insert-cloze Prompt (ICP) and Prefix-cloze Prompt (PCP) and construct an answer space mapping to the relation senses based on the hierarchy sense tags and implicit connectives. Furthermore, we use a multi-prompt ensemble to fuse predictions from different prompting results. 
Experiments on the PDTB corpus show that our method significantly outperforms the state-of-the-art algorithms, even with fewer training data.",,ACL,['prompt template'],highly relevant,"The paper details using various prompt engineering techniques on the new Open AI models to generate translations, which indicates relevance to the topic of prompt engineering." diff --git a/papers/2iner instructive and incontext learning on fewshot named entity recognition.pdf b/papers/2iner instructive and incontext learning on fewshot named entity recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a8ebc357fbd49178cdb23f91786de579d2e5ca1b --- /dev/null +++ b/papers/2iner instructive and incontext learning on fewshot named entity recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2403be7c4600ec48419a8e9d7f5e5d4e802d87ba2041c31f0b207a56958130c4 +size 526503 diff --git a/papers/a bayesian approach for prompt optimization in pretrained language models.pdf b/papers/a bayesian approach for prompt optimization in pretrained language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..946801e314f47657f7c42da421db0768dece199e --- /dev/null +++ b/papers/a bayesian approach for prompt optimization in pretrained language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94db6e16211dc761031578e6a76804bbd48c569494da99763d19300138225e72 +size 351191 diff --git a/papers/a benchmark for learning to translate a new language from one grammar book.pdf b/papers/a benchmark for learning to translate a new language from one grammar book.pdf index b74abf938c3e52b1673904421e6199ccca5d183f..683c4f60696b3a0cb01b0e4729862b100562a88b 100644 Binary files a/papers/a benchmark for learning to translate a new language from one grammar book.pdf and b/papers/a benchmark for learning to translate a new language from one grammar book.pdf differ diff --git a/papers/a benchmark for reasoning with spatial prepositions.pdf b/papers/a benchmark for reasoning with spatial prepositions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cc02a73985ff107b675485f41787eae320e0a610 --- /dev/null +++ b/papers/a benchmark for reasoning with spatial prepositions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b880808649a13389d52f486e03b58a5a0b9c1cf9eb9d40d602382182652c571 +size 163231 diff --git a/papers/a brief history of prompt leveraging language models (through advanced prompting).pdf b/papers/a brief history of prompt leveraging language models (through advanced prompting).pdf new file mode 100644 index 0000000000000000000000000000000000000000..602e8b9514f0ef946ca41a06c9c57fd9fa5e3d5f --- /dev/null +++ b/papers/a brief history of prompt leveraging language models (through advanced prompting).pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff061e4abde411e44210c7b820003e71a326bbd0c7f10442bf71545b9d31aa08 +size 169253 diff --git a/papers/a chat about boring problems studying gptbased text normalization.pdf b/papers/a chat about boring problems studying gptbased text normalization.pdf index 418995390ba343d3d8e6c340c1ddad638e1ac795..d1bb86954b07c81d75ab06000370624ccbf8dd25 100644 Binary files a/papers/a chat about boring problems studying gptbased text normalization.pdf and b/papers/a chat about boring problems studying gptbased text normalization.pdf differ diff --git a/papers/a closer look at incontext learning under distribution shifts.pdf 
b/papers/a closer look at incontext learning under distribution shifts.pdf index 845f29888b0ca9d16a11405c9c9d41c1f8e969f6..f3ec0601241852121b2bff238ec2d04bf4494d9c 100644 Binary files a/papers/a closer look at incontext learning under distribution shifts.pdf and b/papers/a closer look at incontext learning under distribution shifts.pdf differ diff --git a/papers/a communication theory perspective on prompting engineering methods for large language models.pdf b/papers/a communication theory perspective on prompting engineering methods for large language models.pdf index f834ca7aefe668d4f96304e74f0ad49cee81b09c..ff4505fccac0241a951c740395ed0e3ad87e8ba6 100644 Binary files a/papers/a communication theory perspective on prompting engineering methods for large language models.pdf and b/papers/a communication theory perspective on prompting engineering methods for large language models.pdf differ diff --git a/papers/a comparative study of prompting strategies for legal text classification.pdf b/papers/a comparative study of prompting strategies for legal text classification.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bd12f213b156d86edf2d90d281672d60e5a0431f --- /dev/null +++ b/papers/a comparative study of prompting strategies for legal text classification.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb779c152895400c80955b79dca859a0bd5496320152d23db36ed65bb3104b10 +size 526714 diff --git a/papers/a fewshot approach to resume information extraction via prompts.pdf b/papers/a fewshot approach to resume information extraction via prompts.pdf index a3aa0a85109703c2554947a22083622997cc0a1c..fd2710486be4655984ae70d2e9ce631d920d345a 100644 Binary files a/papers/a fewshot approach to resume information extraction via prompts.pdf and b/papers/a fewshot approach to resume information extraction via prompts.pdf differ diff --git a/papers/a foundation model for cell segmentation.pdf b/papers/a foundation model for cell segmentation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dc26c8d34c8f09ee3eb4371c1fb671a6dd87029e --- /dev/null +++ b/papers/a foundation model for cell segmentation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a532f4c40b1a1112ea0217abd497abb0fe9aeb10bc58da7af8c97f7c1d3b01f7 +size 3139860 diff --git a/papers/a generalpurpose ai avatar in healthcare.pdf b/papers/a generalpurpose ai avatar in healthcare.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4efe824ac6d96f42defac7ca237ef4cec52df693 --- /dev/null +++ b/papers/a generalpurpose ai avatar in healthcare.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3174beaddea29fa6197ab7bbdd2e1b74c4e4303c59770df7b76eda3c4eb10e16 +size 432959 diff --git a/papers/a generative ai approach to pricing mechanisms and consumer behavior in the electric vehicle charging market.pdf b/papers/a generative ai approach to pricing mechanisms and consumer behavior in the electric vehicle charging market.pdf new file mode 100644 index 0000000000000000000000000000000000000000..787cda0ae74a4ac1c0cc09cb31354fb3b412d127 --- /dev/null +++ b/papers/a generative ai approach to pricing mechanisms and consumer behavior in the electric vehicle charging market.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29d422d03853c919d3e8c6ca64647c2e9ce407d7cca4f0fc511fcf808aeaeeaf +size 410998 diff --git a/papers/a languageagent approach to formal theoremproving.pdf b/papers/a languageagent 
approach to formal theoremproving.pdf index cd76fcc1fd83ac615fe3ce5ac7df54478f916bb1..5e320ddd4b4a637ad9aa73f174ac3621b5d27a0c 100644 Binary files a/papers/a languageagent approach to formal theoremproving.pdf and b/papers/a languageagent approach to formal theoremproving.pdf differ diff --git a/papers/a latent space theory for emergent abilities in large language models.pdf b/papers/a latent space theory for emergent abilities in large language models.pdf index 06481ee06614a4683ce3f80b42c6f2f22f2e8ce3..ae80e74ce1227ede92a021a0c55c34adc0c338ad 100644 Binary files a/papers/a latent space theory for emergent abilities in large language models.pdf and b/papers/a latent space theory for emergent abilities in large language models.pdf differ diff --git a/papers/a lightweight framework for highquality code generation.pdf b/papers/a lightweight framework for highquality code generation.pdf index 7321790214621c500d7bb47dcdb07b4072485dd9..0efb6d73949f5cb86f2a250ac8a19c60808ad26a 100644 Binary files a/papers/a lightweight framework for highquality code generation.pdf and b/papers/a lightweight framework for highquality code generation.pdf differ diff --git a/papers/a mlllm pairing for better code comment classification.pdf b/papers/a mlllm pairing for better code comment classification.pdf index 8c688039a0d7552c1b67cded1fdcb971f6c1e343..12caf162971e29e9f7aa94f95acf2583f3f478e7 100644 Binary files a/papers/a mlllm pairing for better code comment classification.pdf and b/papers/a mlllm pairing for better code comment classification.pdf differ diff --git a/papers/a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf b/papers/a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf index 10fe2d8fd545db316907d1e9355fea501fe9c0eb..02cd04790c6acc1852e12fdbfc727a4e090bbc4e 100644 --- a/papers/a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf +++ b/papers/a multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3485d720b0a5c3dc4f2b86199237a6cf32971749ad6f6b49ccbab6e7598faf2f -size 2543508 +oid sha256:2a9dce9b97bdd226a1da5c76322dbf622db01341b43db85d0e9f060620f65df8 +size 2085222 diff --git a/papers/a new dataset and empirical study for sentence simplification in chinese.pdf b/papers/a new dataset and empirical study for sentence simplification in chinese.pdf index ee64045e39c9560cded63c8b1d4dc3238022068b..5e7833ddf1f5533a87e201be60ee4dcead83fb77 100644 Binary files a/papers/a new dataset and empirical study for sentence simplification in chinese.pdf and b/papers/a new dataset and empirical study for sentence simplification in chinese.pdf differ diff --git a/papers/a novel approach for rapid development based on chatgpt and prompt engineering.pdf b/papers/a novel approach for rapid development based on chatgpt and prompt engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c0efb4261ec19d09b76e18d7c2daf17d7b1f9613 --- /dev/null +++ b/papers/a novel approach for rapid development based on chatgpt and prompt engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d9c4b67965d4486a28e6487930795ab7f8c750d552778de379c14ecec930ced +size 1159972 diff --git a/papers/a practical survey on zeroshot prompt design for incontext learning.pdf b/papers/a practical survey 
on zeroshot prompt design for incontext learning.pdf index 2818d6d0c11e89f17e8f242408adc345c80573b1..240a398a1ca708c77f322e9c76dfd7bf9f73643b 100644 Binary files a/papers/a practical survey on zeroshot prompt design for incontext learning.pdf and b/papers/a practical survey on zeroshot prompt design for incontext learning.pdf differ diff --git a/papers/a prefrontal cortexinspired architecture for planning in large language models.pdf b/papers/a prefrontal cortexinspired architecture for planning in large language models.pdf index 0bc78c24699684cc7417cc14d2deef7dba91a94a..47b685472b826ee912537ddee3d3c756f37faa48 100644 Binary files a/papers/a prefrontal cortexinspired architecture for planning in large language models.pdf and b/papers/a prefrontal cortexinspired architecture for planning in large language models.pdf differ diff --git a/papers/a prompt pattern catalog to enhance prompt engineering with chatgpt.pdf b/papers/a prompt pattern catalog to enhance prompt engineering with chatgpt.pdf index 8f75756468fc327edb5638f465744edbc19c5278..4e6a343d62c18d6bc52af507571fb4580cd9af1d 100644 Binary files a/papers/a prompt pattern catalog to enhance prompt engineering with chatgpt.pdf and b/papers/a prompt pattern catalog to enhance prompt engineering with chatgpt.pdf differ diff --git a/papers/a promptbased fewshot learning approach to software conflict detection.pdf b/papers/a promptbased fewshot learning approach to software conflict detection.pdf index 99d4cb11576cf34d2ad568f91006e86ab44960bd..e4cf85efab211489eb9050b8b1685793a40d487d 100644 Binary files a/papers/a promptbased fewshot learning approach to software conflict detection.pdf and b/papers/a promptbased fewshot learning approach to software conflict detection.pdf differ diff --git a/papers/a reinforcement learningbased offensive semantics censorship system for chatbots.pdf b/papers/a reinforcement learningbased offensive semantics censorship system for chatbots.pdf index 88ca863ebf5f8dcbc409e5c97924f5549d5889e9..58366178a4217ab7b9d512297993ef61a7b9ddd3 100644 Binary files a/papers/a reinforcement learningbased offensive semantics censorship system for chatbots.pdf and b/papers/a reinforcement learningbased offensive semantics censorship system for chatbots.pdf differ diff --git a/papers/a reliable knowledge processing framework for combustion science using foundation models.pdf b/papers/a reliable knowledge processing framework for combustion science using foundation models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a45b59a1714b031b013ceed80017d0b4736e94f3 --- /dev/null +++ b/papers/a reliable knowledge processing framework for combustion science using foundation models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0137c2632f9e574c12b4b4f7924c9d8fdc3217c401cb1ffdc5adeb57ebf364f5 +size 3737383 diff --git a/papers/a search for prompts generating structured answers from contracts.pdf b/papers/a search for prompts generating structured answers from contracts.pdf index f78ca502d7850a79f8f0474a5f01ff7a8df63fbc..447689cff6f62b840de93a13228bb9bdfe227b2a 100644 Binary files a/papers/a search for prompts generating structured answers from contracts.pdf and b/papers/a search for prompts generating structured answers from contracts.pdf differ diff --git a/papers/a simple baseline for knowledgebased visual question answering.pdf b/papers/a simple baseline for knowledgebased visual question answering.pdf index 
4118ca53a04d81e2b2f272d94932837013fc5992..78ab0c98b5dc28de6ca5162be57172a9df110cfb 100644 Binary files a/papers/a simple baseline for knowledgebased visual question answering.pdf and b/papers/a simple baseline for knowledgebased visual question answering.pdf differ diff --git a/papers/a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf b/papers/a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf index f55dff91c5dca654ad8e48dbbe106fc843fd3a0c..d60c8e62fc00d53810e1145885170b46f0bb5765 100644 Binary files a/papers/a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf and b/papers/a simple zeroshot prompt weighting technique to improve prompt ensembling in textimage models.pdf differ diff --git a/papers/a smashed glass cannot be full generation of commonsense explanations through promptbased fewshot learning.pdf b/papers/a smashed glass cannot be full generation of commonsense explanations through promptbased fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b40598a392af40666eeac6c0260099e9d4f56cfe --- /dev/null +++ b/papers/a smashed glass cannot be full generation of commonsense explanations through promptbased fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef0620892ebe91645d674efd69658822c8fb510a06e4465db0703295f9041954 +size 466787 diff --git a/papers/a strong baseline for temporal videotext alignment.pdf b/papers/a strong baseline for temporal videotext alignment.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a4e9b0dad9097bbddcb5f8504b3ce456351636ac --- /dev/null +++ b/papers/a strong baseline for temporal videotext alignment.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c90d74e0fc46731626121e5a00aae7f6fa00b7e450b839fc0f92f216f0b84e2f +size 3613881 diff --git a/papers/a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems.pdf b/papers/a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems.pdf index 9b96a1c7e69a6e3132deffefb1780c04caa8c13d..df5537b4957bebe4cb8493f23db397b8d89e7731 100644 Binary files a/papers/a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems.pdf and b/papers/a study on promptbased fewshot learning methods for belief state tracking in taskoriented dialog systems.pdf differ diff --git a/papers/a study on the effectiveness of large language models for translation with markup.pdf b/papers/a study on the effectiveness of large language models for translation with markup.pdf new file mode 100644 index 0000000000000000000000000000000000000000..57f65b8130c6e829a68fcff6e84e611692517c08 --- /dev/null +++ b/papers/a study on the effectiveness of large language models for translation with markup.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c1eb369f5e2b40ef195efa9d907fe54344c67c7cb56dac7a74a05ea31832e47 +size 388428 diff --git a/papers/a survey of large language models for autonomous driving.pdf b/papers/a survey of large language models for autonomous driving.pdf deleted file mode 100644 index d6578c1e3230cc769ef3c57ee56ada76f9274616..0000000000000000000000000000000000000000 Binary files a/papers/a survey of large language models for autonomous driving.pdf and /dev/null differ diff --git a/papers/a survey on fewshot knowledge 
graph completion with structural and commonsense knowledge.pdf b/papers/a survey on fewshot knowledge graph completion with structural and commonsense knowledge.pdf index b4effe87153f92770b22f3ca8b585a24bf7d1159..0351524a825ca0a07ddc2980d745ba0329bb3940 100644 Binary files a/papers/a survey on fewshot knowledge graph completion with structural and commonsense knowledge.pdf and b/papers/a survey on fewshot knowledge graph completion with structural and commonsense knowledge.pdf differ diff --git a/papers/a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf b/papers/a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf index ecf785e644e88382c25fe7c78674c08a5923641d..179197e8860c5f9c8a6f60bb895e13e2d4f189c7 100644 Binary files a/papers/a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf and b/papers/a tale of pronouns interpretability informs gender bias mitigation for fairer instructiontuned machine translation.pdf differ diff --git a/papers/a unified framework for multiintent spoken language understanding with prompting.pdf b/papers/a unified framework for multiintent spoken language understanding with prompting.pdf index f466942ccca983d5387de065a2ad6f4dc395b473..d644c3313d7f37508952328077b922e1a7f192d6 100644 Binary files a/papers/a unified framework for multiintent spoken language understanding with prompting.pdf and b/papers/a unified framework for multiintent spoken language understanding with prompting.pdf differ diff --git a/papers/a weak supervision approach for fewshot aspect based sentiment.pdf b/papers/a weak supervision approach for fewshot aspect based sentiment.pdf index 50029a922ee9563cd000fce1f2e13032cb848063..b6089ba3f5a88d1649c46e3ee8ed7acfd71e831b 100644 Binary files a/papers/a weak supervision approach for fewshot aspect based sentiment.pdf and b/papers/a weak supervision approach for fewshot aspect based sentiment.pdf differ diff --git a/papers/a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily.pdf b/papers/a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily.pdf deleted file mode 100644 index 398c06406cb31ff44aaf1d3035288039846f45f7..0000000000000000000000000000000000000000 --- a/papers/a wolf in sheep's clothing generalized nested jailbreak prompts can fool large language models easily.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:31fc7a91bfda1a1b6d066e2a9b70959a2d73eca3f5601ff2fb5041e7442e900b -size 1104225 diff --git a/papers/acecoder utilizing existing code to enhance code generation.pdf b/papers/acecoder utilizing existing code to enhance code generation.pdf index 38ebe844007b8ea38c593198b69a7cf0057f762f..74b82fffc23e3b91201e783231f60325c5f9d94d 100644 Binary files a/papers/acecoder utilizing existing code to enhance code generation.pdf and b/papers/acecoder utilizing existing code to enhance code generation.pdf differ diff --git a/papers/actionclip a new paradigm for video action recognition.pdf b/papers/actionclip a new paradigm for video action recognition.pdf index 38103550e8cea9d7b901c284741f9f0131408481..34d9cb496e08a7a998120d3542b9207d408978ba 100644 Binary files a/papers/actionclip a new paradigm for video action recognition.pdf and b/papers/actionclip a new paradigm for video action recognition.pdf differ diff --git 
a/papers/actiongpt leveraging largescale language models for improved and generalized zero shot action generation.pdf b/papers/actiongpt leveraging largescale language models for improved and generalized zero shot action generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..90801e94cfe0d62e9b9f2b216fb18561d6080ba7 --- /dev/null +++ b/papers/actiongpt leveraging largescale language models for improved and generalized zero shot action generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5b6edcfa0ae147f4483627d320fdf6e7b3cd9690bdfe83bc15bd139751c07d6 +size 9402369 diff --git a/papers/active example selection for incontext learning.pdf b/papers/active example selection for incontext learning.pdf index ac02444723ab42ac8ddcca6edc06b9356a64469c..b5282af6313f9001d5a0b2335214fcb2e095a144 100644 Binary files a/papers/active example selection for incontext learning.pdf and b/papers/active example selection for incontext learning.pdf differ diff --git a/papers/actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf b/papers/actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf index ac5f31af90144beddfc3123d3e6d24b54fe0dea0..2624369c9dd22ba3410bf5a473789c9967c8e1b2 100644 Binary files a/papers/actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf and b/papers/actsql incontext learning for texttosql with automaticallygenerated chainofthought.pdf differ diff --git a/papers/adaplanner adaptive planning from feedback with language models.pdf b/papers/adaplanner adaptive planning from feedback with language models.pdf index ec167644b89802beda81978f6968e5a110c9dc16..3d3b277a4d2d92226addde17d64b20259987f5a9 100644 Binary files a/papers/adaplanner adaptive planning from feedback with language models.pdf and b/papers/adaplanner adaptive planning from feedback with language models.pdf differ diff --git a/papers/adapting languageaudio models as fewshot audio learners.pdf b/papers/adapting languageaudio models as fewshot audio learners.pdf index 2204b6cda2a5bb1192c2ce8890fa93fe01ac2b2a..52965f2acbb47c1dc48ca113c3697a68b051b6e0 100644 Binary files a/papers/adapting languageaudio models as fewshot audio learners.pdf and b/papers/adapting languageaudio models as fewshot audio learners.pdf differ diff --git a/papers/adapting prompt for fewshot tabletotext generation.pdf b/papers/adapting prompt for fewshot tabletotext generation.pdf index 2928395a7782a5b1304b161308fbb1108b255f1c..4a8bac3185da7af8f764ecad15e676f672c7ea44 100644 Binary files a/papers/adapting prompt for fewshot tabletotext generation.pdf and b/papers/adapting prompt for fewshot tabletotext generation.pdf differ diff --git a/papers/adaptive machine translation with large language models.pdf b/papers/adaptive machine translation with large language models.pdf index 4a8bda50f1fdf49cb26144be8fe50b2e64c05c04..7204b045a5b8a2734eaa85ee651dd4789b9e4f92 100644 Binary files a/papers/adaptive machine translation with large language models.pdf and b/papers/adaptive machine translation with large language models.pdf differ diff --git a/papers/adaptivesolver framework for dynamic strategy selection in large language model reasoning.pdf b/papers/adaptivesolver framework for dynamic strategy selection in large language model reasoning.pdf index 1d2f5883d16f77eac2ebc1ee7b69e2bf443f04f3..cf9284760e09bc9d7d3b9331ba588420c2337ec7 100644 Binary files a/papers/adaptivesolver framework for dynamic strategy selection 
in large language model reasoning.pdf and b/papers/adaptivesolver framework for dynamic strategy selection in large language model reasoning.pdf differ diff --git a/papers/adelt transpilation between deep learning frameworks.pdf b/papers/adelt transpilation between deep learning frameworks.pdf index 3661882f6364825f8799f98e1abf5ff3509ce5e7..e4052fef4c1343796de57c3c998688ebb05f102d 100644 Binary files a/papers/adelt transpilation between deep learning frameworks.pdf and b/papers/adelt transpilation between deep learning frameworks.pdf differ diff --git a/papers/affect recognition in conversations using large language models.pdf b/papers/affect recognition in conversations using large language models.pdf index b733ae00fb36b7cfed2d7447537f8972dc2555d1..45f8e8e6cf5991b1103f7bde8c45e46cd6b615ed 100644 Binary files a/papers/affect recognition in conversations using large language models.pdf and b/papers/affect recognition in conversations using large language models.pdf differ diff --git a/papers/ai chains transparent and controllable humanai interaction by chaining large language model prompts.pdf b/papers/ai chains transparent and controllable humanai interaction by chaining large language model prompts.pdf index b4b96256e09204a13bc93e348b75952c0abfe6b9..b64a48f7f9c7a7b9bff00eb8bb61ef691d5983e7 100644 --- a/papers/ai chains transparent and controllable humanai interaction by chaining large language model prompts.pdf +++ b/papers/ai chains transparent and controllable humanai interaction by chaining large language model prompts.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2542e9bdde6a4f9d17c075c0e7a4fff3de0ee56f68d9f2437a72aa722816d9a0 +oid sha256:df562d05d100d017a6dfba9949af8435314caff5f83f6979e6b26f1099bae611 size 2673304 diff --git a/papers/ai foundation models for weather and climate applications, design, and implementation.pdf b/papers/ai foundation models for weather and climate applications, design, and implementation.pdf index e04d7718fe6eb2022e642db51bc1ad98a71ae4b8..15b354cfd65ade5ebc3775364aebb89aa2a0b9f2 100644 Binary files a/papers/ai foundation models for weather and climate applications, design, and implementation.pdf and b/papers/ai foundation models for weather and climate applications, design, and implementation.pdf differ diff --git a/papers/aicopilot for business optimisation a framework and a case study in production scheduling.pdf b/papers/aicopilot for business optimisation a framework and a case study in production scheduling.pdf index d850e62b4c7ce3043ead6ed3cea525387df84024..cab981930ad5637cd8efd159c18151d6c6bb35e5 100644 Binary files a/papers/aicopilot for business optimisation a framework and a case study in production scheduling.pdf and b/papers/aicopilot for business optimisation a framework and a case study in production scheduling.pdf differ diff --git a/papers/aisfg abundant information slot filling generator.pdf b/papers/aisfg abundant information slot filling generator.pdf new file mode 100644 index 0000000000000000000000000000000000000000..24f2cd7aaf5d7c181a66e5f590cf6c8c63d75841 --- /dev/null +++ b/papers/aisfg abundant information slot filling generator.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a79f3fa284fe8f2790af93ff1b11481eb3dde5b9363546df8feb012669da316 +size 273603 diff --git a/papers/alexander knox at semeval2023 task 5 the comparison of prompting and standard finetuning techniques for selecting the type of spoiler needed to neutralize a clickbait.pdf b/papers/alexander knox at 
semeval2023 task 5 the comparison of prompting and standard finetuning techniques for selecting the type of spoiler needed to neutralize a clickbait.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ee2f015499e9d81f5743720a5c3039e8b174e890 --- /dev/null +++ b/papers/alexander knox at semeval2023 task 5 the comparison of prompting and standard finetuning techniques for selecting the type of spoiler needed to neutralize a clickbait.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de496df744b0db1de603ee7c5e532a36d31c090560856b4d6428448912ddf716 +size 129961 diff --git a/papers/alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf b/papers/alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf index 5cc7a5fa67790e864e5e3f00ed2b67f95335f4c7..1e613826d1a632a0c55cd8616e09bf3f573a3494 100644 Binary files a/papers/alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf and b/papers/alexatm 20b fewshot learning using a largescale multilingual seq2seq model.pdf differ diff --git a/papers/algo synthesizing algorithmic programs with generated oracle verifiers.pdf b/papers/algo synthesizing algorithmic programs with generated oracle verifiers.pdf index cecf995f746361da4bdc58b5563d0f4bb6a8db83..113518b61d4ef9e5322ce402099ff8cbc383bb87 100644 --- a/papers/algo synthesizing algorithmic programs with generated oracle verifiers.pdf +++ b/papers/algo synthesizing algorithmic programs with generated oracle verifiers.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:96e43da92b59794b7629f5814229c61989644a61d89cf116cb55e5bb7790606f -size 1547665 +oid sha256:7c2766917085a47903e93287445fc5a75a48543d41e43e567c821422ad8431a7 +size 1547666 diff --git a/papers/algo synthesizing algorithmic programs with llmgenerated oracle verifiers.pdf b/papers/algo synthesizing algorithmic programs with llmgenerated oracle verifiers.pdf new file mode 100644 index 0000000000000000000000000000000000000000..113518b61d4ef9e5322ce402099ff8cbc383bb87 --- /dev/null +++ b/papers/algo synthesizing algorithmic programs with llmgenerated oracle verifiers.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c2766917085a47903e93287445fc5a75a48543d41e43e567c821422ad8431a7 +size 1547666 diff --git a/papers/algorithm of thoughts enhancing exploration of ideas in large language models.pdf b/papers/algorithm of thoughts enhancing exploration of ideas in large language models.pdf index 4b3baee9407d6f51a7772e121c2c2a18a07c0366..c13a500086b445afc06bcf7596c33b1c1d792731 100644 Binary files a/papers/algorithm of thoughts enhancing exploration of ideas in large language models.pdf and b/papers/algorithm of thoughts enhancing exploration of ideas in large language models.pdf differ diff --git a/papers/aligning language models to user opinions.pdf b/papers/aligning language models to user opinions.pdf index b7131392cff102b3efb7605e83d2a755d63473dc..dd1f6aa5d8178e2955722dd41c83f2f0f66220ce 100644 Binary files a/papers/aligning language models to user opinions.pdf and b/papers/aligning language models to user opinions.pdf differ diff --git a/papers/all in how you ask for it simple blackbox method for jailbreak attacks.pdf b/papers/all in how you ask for it simple blackbox method for jailbreak attacks.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5d29ceb8a682367e82b4b8dcd9a9e6b431d028f0 --- /dev/null +++ b/papers/all in how you ask for it simple blackbox method for 
jailbreak attacks.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1cd4b92c21bb4ab747a2af4c8913c81dcf89b7aba47fb00911a19fb35d0c86d +size 194185 diff --git a/papers/allsh active learning guided by local sensitivity and hardness.pdf b/papers/allsh active learning guided by local sensitivity and hardness.pdf index 8300dabb1551b11899effb691019d40105e38730..0b98d1852936ca340eadadfc02b40818f20a7291 100644 Binary files a/papers/allsh active learning guided by local sensitivity and hardness.pdf and b/papers/allsh active learning guided by local sensitivity and hardness.pdf differ diff --git a/papers/alpacafarm a simulation framework for methods that learn from human feedback.pdf b/papers/alpacafarm a simulation framework for methods that learn from human feedback.pdf index 0b36acde58ec09a313f74ea48dab4483742c9d9a..8593d445fe2940ecd85d5ee7146f1e6f511db91e 100644 --- a/papers/alpacafarm a simulation framework for methods that learn from human feedback.pdf +++ b/papers/alpacafarm a simulation framework for methods that learn from human feedback.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:efee7608bc4ac94af23d79729e630903618f88ecff2a634f5167761c7d0ccb62 -size 1183593 +oid sha256:7258d8f11817b04912dec72c8a57ceae6005d1d6905ffdf460db003c41c6180d +size 1379803 diff --git a/papers/alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf b/papers/alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf deleted file mode 100644 index d72f9f50952b55cd56942b5032db7c2e3fc2cbe1..0000000000000000000000000000000000000000 --- a/papers/alt towards finegrained alignment between language and ctr models for clickthrough rate prediction.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e9240c944e36f1e0f73098d5a522bfce700c8752d9e9deb15c46de8717d0d20d -size 3170927 diff --git a/papers/amal meta knowledgedriven fewshot adapter learning.pdf b/papers/amal meta knowledgedriven fewshot adapter learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9c8daa44d439c59b91e515178b3c1f59c238378b --- /dev/null +++ b/papers/amal meta knowledgedriven fewshot adapter learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8715499b8bcb1c603b3366819e419fddaf0152653dd519c5b5c89fa0313fd1c7 +size 871585 diff --git a/papers/ambiguityaware incontext learning with large language models.pdf b/papers/ambiguityaware incontext learning with large language models.pdf index eed39f9a9b03790d37fac869ebbc2982afe70124..3c616427df4ccb496cc3c9a76bf3697c3d1e751f 100644 Binary files a/papers/ambiguityaware incontext learning with large language models.pdf and b/papers/ambiguityaware incontext learning with large language models.pdf differ diff --git a/papers/amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning.pdf b/papers/amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning.pdf index 8de3fd9f8dfc2771ac8b4b66d9548b0712757027..30ff815ab34bc520aa8b5fb2811a115641ee7c72 100644 Binary files a/papers/amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning.pdf and b/papers/amp0 speciesspecific prediction of antimicrobial peptides using zero and few shot learning.pdf differ diff --git a/papers/an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems.pdf b/papers/an ai chatbot for 
explaining deep reinforcement learning decisions of serviceoriented systems.pdf index 7292bf08209c53875ca8c4e81abe2adf3a67531a..427827763582e8f59df78b02d5ee432942f8d49e 100644 Binary files a/papers/an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems.pdf and b/papers/an ai chatbot for explaining deep reinforcement learning decisions of serviceoriented systems.pdf differ diff --git a/papers/an approach to solving the abstraction and reasoning corpus (arc) challenge.pdf b/papers/an approach to solving the abstraction and reasoning corpus (arc) challenge.pdf index 0c496758c15f3b008f488edd92e6acc599a5c486..89fd53df88181c6a7b89e2b2a8c032684babb2e0 100644 Binary files a/papers/an approach to solving the abstraction and reasoning corpus (arc) challenge.pdf and b/papers/an approach to solving the abstraction and reasoning corpus (arc) challenge.pdf differ diff --git a/papers/an empirical evaluation of using large language models for automated unit test generation.pdf b/papers/an empirical evaluation of using large language models for automated unit test generation.pdf index 1ab94c59b1240c3a455111e3800f19cb6a542057..d6506c4a3b327c1d6aad08026c51b69635a81822 100644 Binary files a/papers/an empirical evaluation of using large language models for automated unit test generation.pdf and b/papers/an empirical evaluation of using large language models for automated unit test generation.pdf differ diff --git a/papers/an empirical study on fewshot knowledge probing for pretrained language models.pdf b/papers/an empirical study on fewshot knowledge probing for pretrained language models.pdf index b001eece968199ca535729bcfa6abb7f7fd3d412..d2eb2ed887286630abe3c3eb2584b956b85d5366 100644 Binary files a/papers/an empirical study on fewshot knowledge probing for pretrained language models.pdf and b/papers/an empirical study on fewshot knowledge probing for pretrained language models.pdf differ diff --git a/papers/an empirical study on using large language models to analyze software supply chain security failures.pdf b/papers/an empirical study on using large language models to analyze software supply chain security failures.pdf index c294280edaeca1aee649cc8e3df1dbc225e0f752..29f3342cdebcd25541b1a67487f970c20a168c2e 100644 Binary files a/papers/an empirical study on using large language models to analyze software supply chain security failures.pdf and b/papers/an empirical study on using large language models to analyze software supply chain security failures.pdf differ diff --git a/papers/an evaluation of gpt models for phenotype concept recognition.pdf b/papers/an evaluation of gpt models for phenotype concept recognition.pdf index 42e15901c815e911925ad9d282fc791ef508497f..1229fd83695e7ffd8b2f6a03ffd7be6810a07b3b 100644 Binary files a/papers/an evaluation of gpt models for phenotype concept recognition.pdf and b/papers/an evaluation of gpt models for phenotype concept recognition.pdf differ diff --git a/papers/an exploration of incontext learning for speech language model.pdf b/papers/an exploration of incontext learning for speech language model.pdf index 5af2c79c69e175e9f30c0676ef698aaf8b383cc6..3069e68e582f89b6d4e9b69e4054156c98055f2b 100644 Binary files a/papers/an exploration of incontext learning for speech language model.pdf and b/papers/an exploration of incontext learning for speech language model.pdf differ diff --git a/papers/an incontext schema understanding method for knowledge base question answering.pdf b/papers/an incontext schema understanding method for 
knowledge base question answering.pdf index 95a145e88cf642840dd2986395f606df2bdd3b80..6c68cbda4cfdf2655d0520e6c55be584e5c1d179 100644 Binary files a/papers/an incontext schema understanding method for knowledge base question answering.pdf and b/papers/an incontext schema understanding method for knowledge base question answering.pdf differ diff --git a/papers/an informationtheoretic approach to prompt engineering without ground truth labels.pdf b/papers/an informationtheoretic approach to prompt engineering without ground truth labels.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0554e564239ecaa41cf755ad498a1f9b56d6b107 --- /dev/null +++ b/papers/an informationtheoretic approach to prompt engineering without ground truth labels.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7614c2c52bc9604f9b0fc9c45a4d9f2394923c48cb74c48ac35c2d6a2504b498 +size 2147644 diff --git a/papers/annollm making large language models to be better crowdsourced annotators.pdf b/papers/annollm making large language models to be better crowdsourced annotators.pdf index 5f07c76515135670ca77ba0eb0e42c3aaeccba11..a1064f6ce2572282b66528ac06a58017e4fc8b7d 100644 Binary files a/papers/annollm making large language models to be better crowdsourced annotators.pdf and b/papers/annollm making large language models to be better crowdsourced annotators.pdf differ diff --git a/papers/anomalygpt detecting industrial anomalies using large visionlanguage models.pdf b/papers/anomalygpt detecting industrial anomalies using large visionlanguage models.pdf index 4886be69947bd573df1495e71007f15e17bb8654..0ffede626c143bf3f4bfe1df0116c7fd7fc70f55 100644 --- a/papers/anomalygpt detecting industrial anomalies using large visionlanguage models.pdf +++ b/papers/anomalygpt detecting industrial anomalies using large visionlanguage models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4bc8fefad498bae150a02f047d21e3c2036a68e1e581be78a34046601f454dc9 -size 14894069 +oid sha256:f05615c13f01c8bd38cb1718e2a7000530402923984b2a2137ee9fbc275e7341 +size 14894050 diff --git a/papers/answerstate recurrent relational network (asrrn) for constructed response assessment and feedback grouping.pdf b/papers/answerstate recurrent relational network (asrrn) for constructed response assessment and feedback grouping.pdf new file mode 100644 index 0000000000000000000000000000000000000000..269c22d3e7979ff5f57e73c924a70711fe01bd61 --- /dev/null +++ b/papers/answerstate recurrent relational network (asrrn) for constructed response assessment and feedback grouping.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cfe13d3bd7ebc0f2619a5ea703dc2bb78e26a2a9592a57e2f7024139a160d71 +size 1902704 diff --git a/papers/applying large language models and chainofthought for automatic scoring.pdf b/papers/applying large language models and chainofthought for automatic scoring.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0dcd61761558e4ea65bf07c8ed0edf7e48446ff3 --- /dev/null +++ b/papers/applying large language models and chainofthought for automatic scoring.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3802904cd9f84d475b9770f6f5daaa9b6788e676cd4dc5f2d824cd0b17137609 +size 3673533 diff --git "a/papers/apprentissage de sousespaces de pr\303\251fixes.pdf" "b/papers/apprentissage de sousespaces de pr\303\251fixes.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..452bf6beaff0f4a4b9bd52110fc44d2736ec356c 
--- /dev/null +++ "b/papers/apprentissage de sousespaces de pr\303\251fixes.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c75643ef5d42c881f3929476950342f599102d8ae6514fa26bca2e7e9dad38b2 +size 459502 diff --git a/papers/are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization.pdf b/papers/are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization.pdf index 6d252df33ca678e66b809e465b2511340755d2a8..4320274317d4fb3d39fb7de22dd67b88682e900e 100644 Binary files a/papers/are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization.pdf and b/papers/are chatbots ready for privacysensitive applications an investigation into input regurgitation and promptinduced sanitization.pdf differ diff --git a/papers/are humangenerated demonstrations necessary for incontext learning.pdf b/papers/are humangenerated demonstrations necessary for incontext learning.pdf index 3ac9838c684c72d0839764796b171cd824f31064..512625b06d0804b7fc801f4bc696751f07d452ad 100644 Binary files a/papers/are humangenerated demonstrations necessary for incontext learning.pdf and b/papers/are humangenerated demonstrations necessary for incontext learning.pdf differ diff --git a/papers/are large language models ready for healthcare a comparative study on clinical language understanding.pdf b/papers/are large language models ready for healthcare a comparative study on clinical language understanding.pdf index ac8155f35210d6e2b9a26fcbc7a65c68991ba987..4d170dfc028137ea4aacf9607e5a737a364ba80a 100644 Binary files a/papers/are large language models ready for healthcare a comparative study on clinical language understanding.pdf and b/papers/are large language models ready for healthcare a comparative study on clinical language understanding.pdf differ diff --git a/papers/are structural concepts universal in transformer language models towards interpretable crosslingual generalization.pdf b/papers/are structural concepts universal in transformer language models towards interpretable crosslingual generalization.pdf index 34d8ea7519022baf871b194dd3032c7ecfa451c4..00f6a4d226b06e126ce53430d8c2705f395d4481 100644 --- a/papers/are structural concepts universal in transformer language models towards interpretable crosslingual generalization.pdf +++ b/papers/are structural concepts universal in transformer language models towards interpretable crosslingual generalization.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:73554f28bf369b3c411db100ed2c75f0eff373e3126ce4b92fccd81308168d4c -size 1295339 +oid sha256:173ab9a36e52f884f99d7b334356137925dbb392712606363f619a97938bd4de +size 1278765 diff --git a/papers/arggen prompting text generation models for documentlevel eventargument aggregation.pdf b/papers/arggen prompting text generation models for documentlevel eventargument aggregation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0834ca64f4eb26826230914c5f7187acb818951b --- /dev/null +++ b/papers/arggen prompting text generation models for documentlevel eventargument aggregation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df01262caaf8d1eb796c3cd7b09028916654e7fbc2892f6056b9b4cf023f4bb8 +size 587791 diff --git a/papers/argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf b/papers/argumentative stance 
prediction an exploratory study on multimodality and fewshot learning.pdf index b1c13de99457b0f3fe5a565c36409bfa657049a6..c035e576b1aaa13421f35b9794dbf49f4c4280b2 100644 Binary files a/papers/argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf and b/papers/argumentative stance prediction an exploratory study on multimodality and fewshot learning.pdf differ diff --git a/papers/arguments to key points mapping with promptbased learning.pdf b/papers/arguments to key points mapping with promptbased learning.pdf index 48a68da17a16bcb77cd37f534f01c96b679aac72..811709bbaa7708a773682023e88853e3db9a7327 100644 Binary files a/papers/arguments to key points mapping with promptbased learning.pdf and b/papers/arguments to key points mapping with promptbased learning.pdf differ diff --git a/papers/art automatic multistep reasoning and tooluse for large language models.pdf b/papers/art automatic multistep reasoning and tooluse for large language models.pdf index 9ed22e0749cc552245005ff7e717624ca2595a77..da4e51fb04b43d9070529e1868d14060812adfb2 100644 Binary files a/papers/art automatic multistep reasoning and tooluse for large language models.pdf and b/papers/art automatic multistep reasoning and tooluse for large language models.pdf differ diff --git a/papers/arthmodel enhance arithmetic skills to large language model.pdf b/papers/arthmodel enhance arithmetic skills to large language model.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e8261e028a0bc97ca39b60dacb3c365c74f3d986 --- /dev/null +++ b/papers/arthmodel enhance arithmetic skills to large language model.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f5ab9b8e5dca8956e1d98e590fde6a8bceb437621aff1c7177841349226b73c +size 5647815 diff --git a/papers/artificial intelligence for health message generation an empirical study using a large language model (llm) and prompt engineering.pdf b/papers/artificial intelligence for health message generation an empirical study using a large language model (llm) and prompt engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b34eb03fe81114be021d4d531918154782ee2509 --- /dev/null +++ b/papers/artificial intelligence for health message generation an empirical study using a large language model (llm) and prompt engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f97bf2e295d65f6804b625222e09bf37b14dc4d1695de6dc8f3e2afc8e50f392 +size 4272713 diff --git a/papers/artificial intelligence model gpt4 narrowly fails simulated radiological protection exam.pdf b/papers/artificial intelligence model gpt4 narrowly fails simulated radiological protection exam.pdf new file mode 100644 index 0000000000000000000000000000000000000000..91a659ccf1c97f1c4c01ac85ed70bd42e7ca7751 --- /dev/null +++ b/papers/artificial intelligence model gpt4 narrowly fails simulated radiological protection exam.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c93bb5975fa44e50b0f623d78c13cfcb784d96c718b32b2e1c6b3eef86299b25 +size 316956 diff --git a/papers/askit unified programming interface for programming with large language models.pdf b/papers/askit unified programming interface for programming with large language models.pdf deleted file mode 100644 index e1461c5b8550413357cc4728e2fc49a7897254d0..0000000000000000000000000000000000000000 Binary files a/papers/askit unified programming interface for programming with large language models.pdf and /dev/null differ diff 
--git a/papers/assessing prompt injection risks in 200+ custom gpts.pdf b/papers/assessing prompt injection risks in 200+ custom gpts.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f4f06ce34dcaf2c0ef7b1d736b5315a1908ccbec --- /dev/null +++ b/papers/assessing prompt injection risks in 200+ custom gpts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:176acb87cdb40a55f379b28eedc08a4e788c375bd2aa7e2bb3ca5c4daa1ee22f +size 3105249 diff --git a/papers/atlas fewshot learning with retrieval augmented language models.pdf b/papers/atlas fewshot learning with retrieval augmented language models.pdf index 368c1f84355160eaf09c52ca26758b5ae0845eac..7d1752bc01573989c4d9f9b7339b79c772f88fec 100644 Binary files a/papers/atlas fewshot learning with retrieval augmented language models.pdf and b/papers/atlas fewshot learning with retrieval augmented language models.pdf differ diff --git a/papers/attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf b/papers/attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf index 16f85fc79e8d1f077b3b755f26daf6b502a4e066..259542cf81634279eb204381a9a590f8cb7bb8e6 100644 Binary files a/papers/attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf and b/papers/attempt parameterefficient multitask tuning via attentional mixtures of soft prompts.pdf differ diff --git a/papers/augmented embeddings for custom retrievals.pdf b/papers/augmented embeddings for custom retrievals.pdf index 990b5e5a515c6fbc2abe5d8a620b496f3ce5c8ec..31a71df3c2a06f772e0156038cc89cf1215dc6f3 100644 Binary files a/papers/augmented embeddings for custom retrievals.pdf and b/papers/augmented embeddings for custom retrievals.pdf differ diff --git a/papers/augmenting large language model translators via translation memories.pdf b/papers/augmenting large language model translators via translation memories.pdf index ae1ddb47b689ce9f85a7a2443cb07ad3de2eb12a..e8f81de276b16f6e2ae7697ecdb7e27fb41c8a47 100644 Binary files a/papers/augmenting large language model translators via translation memories.pdf and b/papers/augmenting large language model translators via translation memories.pdf differ diff --git a/papers/autoconv automatically generating informationseeking conversations with large language models.pdf b/papers/autoconv automatically generating informationseeking conversations with large language models.pdf index 10b1965b68d3287eb57d31a1a815bc94a6a3fb8d..3f5f07347774e69aa1d818b55d7902dd246d7bcc 100644 Binary files a/papers/autoconv automatically generating informationseeking conversations with large language models.pdf and b/papers/autoconv automatically generating informationseeking conversations with large language models.pdf differ diff --git a/papers/autodan generating stealthy jailbreak prompts on aligned large language models.pdf b/papers/autodan generating stealthy jailbreak prompts on aligned large language models.pdf index a04c0cb3572ae0b70ce8aa76b0ec11232ccd66dc..d881c4b8626274fa0b37257380e6fbd8f6f9c4a7 100644 Binary files a/papers/autodan generating stealthy jailbreak prompts on aligned large language models.pdf and b/papers/autodan generating stealthy jailbreak prompts on aligned large language models.pdf differ diff --git a/papers/autohint automatic prompt optimization with hint generation.pdf b/papers/autohint automatic prompt optimization with hint generation.pdf index 
94e652aeb59436f9648f771999bc056f4251fac6..b67bad8b0ca39d2bddcce004472fdd2b4a63f0b7 100644 Binary files a/papers/autohint automatic prompt optimization with hint generation.pdf and b/papers/autohint automatic prompt optimization with hint generation.pdf differ diff --git a/papers/automated devops pipeline generation for code repositories using large language models.pdf b/papers/automated devops pipeline generation for code repositories using large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5823f56af16762953c0f0b45619887781793d117 --- /dev/null +++ b/papers/automated devops pipeline generation for code repositories using large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49ccd6c5b896cfd7c2c3308c8bdd65727ded59e1d8e1f29470a1e28bdc10bc14 +size 1751081 diff --git a/papers/automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf b/papers/automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf deleted file mode 100644 index 3d06d8866acbc3ac7238f12d4d992cb08b864cbd..0000000000000000000000000000000000000000 --- a/papers/automated extraction and visualization of metabolic networks from biomedical literature using a large language model.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e25f86fe311afa5464076aac0f54827e2e877bd09fe0d1f1b5c67be6677efc63 -size 2500411 diff --git a/papers/automated fewshot classification with instructionfinetuned language models.pdf b/papers/automated fewshot classification with instructionfinetuned language models.pdf index a993cf1e8a6e6035a9aca289db46567eab438345..241ecbbee86e819b692309cd3997775b10ac5a4c 100644 Binary files a/papers/automated fewshot classification with instructionfinetuned language models.pdf and b/papers/automated fewshot classification with instructionfinetuned language models.pdf differ diff --git a/papers/automatic chain of thought prompting in large language models.pdf b/papers/automatic chain of thought prompting in large language models.pdf index 2b329326aabf0abee42a27def7b3b57ea798ebba..c0f8c5a60edd1db1293957a95473ff6e4a974843 100644 Binary files a/papers/automatic chain of thought prompting in large language models.pdf and b/papers/automatic chain of thought prompting in large language models.pdf differ diff --git a/papers/automatic data transformation using large language model an experimental study on building energy data.pdf b/papers/automatic data transformation using large language model an experimental study on building energy data.pdf index 80905be406f393e7c24b1b5a00171c7bc2cd580a..878792f77d10bd4de3f4960dcc6bdec2a97d907a 100644 Binary files a/papers/automatic data transformation using large language model an experimental study on building energy data.pdf and b/papers/automatic data transformation using large language model an experimental study on building energy data.pdf differ diff --git a/papers/automatic engineering of long prompts.pdf b/papers/automatic engineering of long prompts.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8804f2bd1eccaae64ef54c3423715d4f91ee3498 --- /dev/null +++ b/papers/automatic engineering of long prompts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f02e4be6055648f5f8575bb5edd31cb2c64f82b00e08a56a0a1214a41174745 +size 866632 diff --git a/papers/automatic label sequence generation for prompting 
sequencetosequence models.pdf b/papers/automatic label sequence generation for prompting sequencetosequence models.pdf index 6972c794bb2bc78b0fb8eafcd17bb19b3b1246a2..61f2e32bdbf7bf835bada8a66c4a870a6d871d45 100644 Binary files a/papers/automatic label sequence generation for prompting sequencetosequence models.pdf and b/papers/automatic label sequence generation for prompting sequencetosequence models.pdf differ diff --git a/papers/automatic multilabel prompting simple and interpretable fewshot classification.pdf b/papers/automatic multilabel prompting simple and interpretable fewshot classification.pdf index 1022cdf89dfb6c7db673dcc5f4ed58e591450271..1405377d39c49f96c6c4cfd3baa8d4737d98a2c1 100644 Binary files a/papers/automatic multilabel prompting simple and interpretable fewshot classification.pdf and b/papers/automatic multilabel prompting simple and interpretable fewshot classification.pdf differ diff --git "a/papers/automatic prompt optimization with \342\200\234gradient descent\342\200\235 and beam search.pdf" "b/papers/automatic prompt optimization with \342\200\234gradient descent\342\200\235 and beam search.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..b53aa39770b214b056cf19ec4d68df9ee15dc3be --- /dev/null +++ "b/papers/automatic prompt optimization with \342\200\234gradient descent\342\200\235 and beam search.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b61c488c65d190ea413a96c81e933c97b2414f03f8be692d0af14539a15ecaf +size 1120550 diff --git a/papers/automatic prompt rewriting for personalized text generation.pdf b/papers/automatic prompt rewriting for personalized text generation.pdf index c21a44249a64544f6d35113f737df3f92e7e9f49..e206460c8230d5b587a8f994d91e9e589761ff3c 100644 Binary files a/papers/automatic prompt rewriting for personalized text generation.pdf and b/papers/automatic prompt rewriting for personalized text generation.pdf differ diff --git a/papers/automatic short math answer grading via incontext metalearning.pdf b/papers/automatic short math answer grading via incontext metalearning.pdf index cc1016dfb15138d697448e979cfde6bed3f430e8..c1227e3371ab10b41b91577807f659f40661006a 100644 Binary files a/papers/automatic short math answer grading via incontext metalearning.pdf and b/papers/automatic short math answer grading via incontext metalearning.pdf differ diff --git a/papers/automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf b/papers/automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf index 980acefb6d3e506824aaa956b764b3f5facfedc6..2e930aa396f522cac4829d7004728175dfe21614 100644 Binary files a/papers/automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf and b/papers/automating governing knowledge commons and contextual integrity (gkcci) privacy policy annotations with large language models.pdf differ diff --git a/papers/autonomous treesearch ability of large language models.pdf b/papers/autonomous treesearch ability of large language models.pdf index 5d57db09cb9e2f5f04ca2c0f749712a87920384f..898feea2d9f8e3c2d106733446eaf169ad362ee8 100644 Binary files a/papers/autonomous treesearch ability of large language models.pdf and b/papers/autonomous treesearch ability of large language models.pdf differ diff --git a/papers/autoplan automatic planning of interactive decisionmaking 
tasks with large language models.pdf b/papers/autoplan automatic planning of interactive decisionmaking tasks with large language models.pdf index be85fadcf61c01069a5abf7d82249a5d28207a19..3f2201ef9a110208b29441794b64d88613516c6e 100644 Binary files a/papers/autoplan automatic planning of interactive decisionmaking tasks with large language models.pdf and b/papers/autoplan automatic planning of interactive decisionmaking tasks with large language models.pdf differ diff --git a/papers/autotrial prompting language models for clinical trial design.pdf b/papers/autotrial prompting language models for clinical trial design.pdf index c809adc110913e820233223ebfd0ed24d2b713fe..ccd4036055a67a81f1eb1c570b4e179fb520d06e 100644 Binary files a/papers/autotrial prompting language models for clinical trial design.pdf and b/papers/autotrial prompting language models for clinical trial design.pdf differ diff --git a/papers/backdoor attacks for incontext learning with language models.pdf b/papers/backdoor attacks for incontext learning with language models.pdf index d01b6291789bb64931d0afbe60b4791ecad4fd2f..3de7cbede3b4722b5a846738998aba673f2ba92b 100644 Binary files a/papers/backdoor attacks for incontext learning with language models.pdf and b/papers/backdoor attacks for incontext learning with language models.pdf differ diff --git a/papers/backdooring instructiontuned large language models with virtual prompt injection.pdf b/papers/backdooring instructiontuned large language models with virtual prompt injection.pdf index 19ad99f31e8592751ef13fd587e286ffe08b7c22..d6b83c69d96806baeda5766c6d56653829abd3f3 100644 Binary files a/papers/backdooring instructiontuned large language models with virtual prompt injection.pdf and b/papers/backdooring instructiontuned large language models with virtual prompt injection.pdf differ diff --git a/papers/baseline defenses for adversarial attacks against aligned language models.pdf b/papers/baseline defenses for adversarial attacks against aligned language models.pdf index 00fba5f258266d264f7828d7662799bf20679c81..34404fb6930f8d0b2423e88e557f8379100d2657 100644 Binary files a/papers/baseline defenses for adversarial attacks against aligned language models.pdf and b/papers/baseline defenses for adversarial attacks against aligned language models.pdf differ diff --git a/papers/batch calibration rethinking calibration for incontext learning and prompt engineering.pdf b/papers/batch calibration rethinking calibration for incontext learning and prompt engineering.pdf index 2567affe26bee2849b3a834e58e76fc0ec879bc5..7b2718a37df7a336a015dcc26d11ec6818878e3e 100644 --- a/papers/batch calibration rethinking calibration for incontext learning and prompt engineering.pdf +++ b/papers/batch calibration rethinking calibration for incontext learning and prompt engineering.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8cbc6feb9a25ecd521de7160029001a3d451e23f33838bb8801d804d66911f9f -size 7617323 +oid sha256:7ca3e985b6de9c910db085588cf297bbed4604fdc1be829684d2a9c246642abd +size 7429602 diff --git a/papers/batch prompting efficient inference with large language model apis.pdf b/papers/batch prompting efficient inference with large language model apis.pdf index 852cc6272279e863b978d249a72917631491453f..c3a4b551f3f4e72b3a96904c31d00acbce426f73 100644 Binary files a/papers/batch prompting efficient inference with large language model apis.pdf and b/papers/batch prompting efficient inference with large language model apis.pdf differ diff --git a/papers/benchmarking 
and defending against indirect prompt injection attacks on large language models.pdf b/papers/benchmarking and defending against indirect prompt injection attacks on large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..deade85dfb3d26474ff74408764e51d84ee8b634 --- /dev/null +++ b/papers/benchmarking and defending against indirect prompt injection attacks on large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28b854ed0f02bbe70593f9875cd90cff6f3c187fe644bd491edb7da33911454f +size 874822 diff --git a/papers/benchmarking arabic ai with large language models.pdf b/papers/benchmarking arabic ai with large language models.pdf index 138e0bf15a4c8524d0365b9a1e6c322a4261234e..298f6f7799cb9cdcd2cca6f870fc079b18b2c590 100644 Binary files a/papers/benchmarking arabic ai with large language models.pdf and b/papers/benchmarking arabic ai with large language models.pdf differ diff --git a/papers/benchmarking large language model capabilities for conditional generation.pdf b/papers/benchmarking large language model capabilities for conditional generation.pdf index 4307d9dfdf2237a1193397a23364096936604d70..8174ba0f7591bee2d86cbb6624d558973805beb5 100644 Binary files a/papers/benchmarking large language model capabilities for conditional generation.pdf and b/papers/benchmarking large language model capabilities for conditional generation.pdf differ diff --git a/papers/better integrating vision and semantics for improving fewshot classification.pdf b/papers/better integrating vision and semantics for improving fewshot classification.pdf index 163290069b448cde27427ec60d5fc08ad3d15a55..f22d163aab0590cfef252fbf43848e5bb637541a 100644 --- a/papers/better integrating vision and semantics for improving fewshot classification.pdf +++ b/papers/better integrating vision and semantics for improving fewshot classification.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6f1813b6a01c0da2e9006c6da2b62d275dd70cdd13a6a438c138ce198fffedd7 +oid sha256:8a724c7b98669a2efbbc8977437ac8df18b91d31404e2877b678561ee8179120 size 4093474 diff --git a/papers/better patching using llm prompting, via selfconsistency.pdf b/papers/better patching using llm prompting, via selfconsistency.pdf index 7438d928dab25f2662b421a282f38373fbb7fe35..24314b5c7dbfe5cceb17c05edc8d22854dde6557 100644 Binary files a/papers/better patching using llm prompting, via selfconsistency.pdf and b/papers/better patching using llm prompting, via selfconsistency.pdf differ diff --git a/papers/beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf b/papers/beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf index cf097a70bc1be15867a30c00fd0ceead0d02c6ed..d38e480679a17325730bc4ef10e41682ac204497 100644 Binary files a/papers/beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf and b/papers/beyond factuality a comprehensive evaluation of large language models as knowledge generators.pdf differ diff --git a/papers/beyond information is chatgpt empathetic enough.pdf b/papers/beyond information is chatgpt empathetic enough.pdf new file mode 100644 index 0000000000000000000000000000000000000000..38dbaf5ab37e1aac9c9500c1b951526f8c5dfea7 --- /dev/null +++ b/papers/beyond information is chatgpt empathetic enough.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:c6c7c7c0e5e0bc9ba67190de05103104986fef81f66890b6107719e5368a2ee6 +size 1906805 diff --git a/papers/beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning.pdf b/papers/beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning.pdf index d4b3503a37d9a7ef2c29567f334a31a02178baca..10acf2910180d68a760573876a608a8ac2102638 100644 --- a/papers/beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning.pdf +++ b/papers/beyond task performance evaluating and reducing the flaws of large multimodal models with incontext learning.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:84d80ece201f86da490e7d3d264fe6e3e247e757de3c205c952c6f17be8ed7a0 -size 7414214 +oid sha256:c9d6fe64c3852c9d852823671dcb62a1980b25b64db730b00e260b329884ecde +size 14372125 diff --git a/papers/bioinformatics in plant breeding and research on disease resistance.pdf b/papers/bioinformatics in plant breeding and research on disease resistance.pdf deleted file mode 100644 index 3d46b8773025e11d46cb58b6245085245fe4f6b1..0000000000000000000000000000000000000000 --- a/papers/bioinformatics in plant breeding and research on disease resistance.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c7f1f95f8f39c03d58d6f4e2a23d095f58dec0bfa5f146a84119f47e88ef6218 -size 1484196 diff --git a/papers/blackbox tuning of visionlanguage models with effective gradient approximation.pdf b/papers/blackbox tuning of visionlanguage models with effective gradient approximation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3c36bf34b43f39b3f55cb7b038b2f43e0274d40d --- /dev/null +++ b/papers/blackbox tuning of visionlanguage models with effective gradient approximation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88bb026bcb3369cb6843af17b60006d2a9e9ba785c8afd75a5006be9dc2b5920 +size 2081944 diff --git a/papers/blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf b/papers/blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf index 81659f317ad72933dcb4967dac85ba28ff25b056..3dfb767fd0dd70cf48af41de49626ef623778c83 100644 Binary files a/papers/blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf and b/papers/blsp bootstrapping languagespeech pretraining via behavior alignment of continuation writing.pdf differ diff --git a/papers/booookscore a systematic exploration of booklength summarization in the era of llms.pdf b/papers/booookscore a systematic exploration of booklength summarization in the era of llms.pdf index 8f9d71e70a72a5540b796621d57f02210472f883..bf4058b96d2142e10ea205308038f9b41e1925a2 100644 Binary files a/papers/booookscore a systematic exploration of booklength summarization in the era of llms.pdf and b/papers/booookscore a systematic exploration of booklength summarization in the era of llms.pdf differ diff --git a/papers/boosted prompt ensembles for large language models.pdf b/papers/boosted prompt ensembles for large language models.pdf index 6118e9117ac515066209a24adbb841086f9b1105..67b0daa473b14dff90f0586f861098c80415b981 100644 Binary files a/papers/boosted prompt ensembles for large language models.pdf and b/papers/boosted prompt ensembles for large language models.pdf differ diff --git a/papers/boosting crosslingual transferability 
in multilingual models via incontext learning.pdf b/papers/boosting crosslingual transferability in multilingual models via incontext learning.pdf index 0dd7f61be1f6db19b981d35951f271827649c649..37ba07707cd666a847305b5238e5396feea6a671 100644 Binary files a/papers/boosting crosslingual transferability in multilingual models via incontext learning.pdf and b/papers/boosting crosslingual transferability in multilingual models via incontext learning.pdf differ diff --git a/papers/boosting language models reasoning with chainofknowledge prompting.pdf b/papers/boosting language models reasoning with chainofknowledge prompting.pdf index ff751fcc82a89382cb48148d0c96b4326794c8e6..9f5665e13a9b9fda7645cfdefd3aa3f1e48b2410 100644 Binary files a/papers/boosting language models reasoning with chainofknowledge prompting.pdf and b/papers/boosting language models reasoning with chainofknowledge prompting.pdf differ diff --git a/papers/boosting logical reasoning in large language models through a new framework the graph of thought.pdf b/papers/boosting logical reasoning in large language models through a new framework the graph of thought.pdf index 9ba1f107d4e1fe8a97c24a354eb8de423c96f776..0494b0beeaa0c455933e90154dac70083adde961 100644 Binary files a/papers/boosting logical reasoning in large language models through a new framework the graph of thought.pdf and b/papers/boosting logical reasoning in large language models through a new framework the graph of thought.pdf differ diff --git a/papers/boosting transformers and language models for clinical prediction in immunotherapy.pdf b/papers/boosting transformers and language models for clinical prediction in immunotherapy.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9307eb6617426da3c2044ef9dce1bebc9b0f1118 --- /dev/null +++ b/papers/boosting transformers and language models for clinical prediction in immunotherapy.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7ab1bcb3547a8eadd9957f96fdf301ec5df7248f708f2e8ba21014fdd34e7d3 +size 571356 diff --git a/papers/breaking language barriers with a leap learning strategies for polyglot llms.pdf b/papers/breaking language barriers with a leap learning strategies for polyglot llms.pdf new file mode 100644 index 0000000000000000000000000000000000000000..813b28a7c4c70e9d0fcfc5fc89be2988353b5efc --- /dev/null +++ b/papers/breaking language barriers with a leap learning strategies for polyglot llms.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f027af4b06c64064764db5f02a7acff8ab591fecc2fced131b0aa24da909896 +size 733681 diff --git a/papers/breaking the bank with chatgpt fewshot text classification for finance.pdf b/papers/breaking the bank with chatgpt fewshot text classification for finance.pdf index e98c3e0032588f0d87191b794d3530a4a116e7e3..e8d35a90beb3dc3fe55de9ccd3ee6a8c3bc72097 100644 Binary files a/papers/breaking the bank with chatgpt fewshot text classification for finance.pdf and b/papers/breaking the bank with chatgpt fewshot text classification for finance.pdf differ diff --git a/papers/building emotional support chatbots in the era of llms.pdf b/papers/building emotional support chatbots in the era of llms.pdf index 6ddefc4c47010b512e54a64c027f7fdc5da18071..0593bad5dca33e18f44ceb1ae08a60c04365cfd6 100644 Binary files a/papers/building emotional support chatbots in the era of llms.pdf and b/papers/building emotional support chatbots in the era of llms.pdf differ diff --git a/papers/business process text sketch automation 
generation using large language model.pdf b/papers/business process text sketch automation generation using large language model.pdf index f19c89dd515e5ec24e60a8f25ea03767dafce12c..0fd285040bab5c78d82156e1efe98fb28995fa97 100644 Binary files a/papers/business process text sketch automation generation using large language model.pdf and b/papers/business process text sketch automation generation using large language model.pdf differ diff --git a/papers/bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games.pdf b/papers/bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games.pdf index 02765cd747d4e6910a4ca06832e32f1ddda9a7fb..80ee16e6997e72464d22b4652e8facade738da12 100644 Binary files a/papers/bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games.pdf and b/papers/bytesized32 a corpus and challenge task for generating taskspecific world models expressed as text games.pdf differ diff --git a/papers/calibrate before use improving fewshot performance of language models.pdf b/papers/calibrate before use improving fewshot performance of language models.pdf index 865397af33323ea3327515eaf6e4f545511c5320..a1deac84788da896a582b66bf42a6a14c5343384 100644 Binary files a/papers/calibrate before use improving fewshot performance of language models.pdf and b/papers/calibrate before use improving fewshot performance of language models.pdf differ diff --git a/papers/calibrating llmbased evaluator.pdf b/papers/calibrating llmbased evaluator.pdf index 7898e2e6f2fa849b3108d59f3558c0f503ed64e1..65c4d8194d9f4eb061edd2e7aa3968fce1345170 100644 Binary files a/papers/calibrating llmbased evaluator.pdf and b/papers/calibrating llmbased evaluator.pdf differ diff --git a/papers/camoscio an italian instructiontuned llama.pdf b/papers/camoscio an italian instructiontuned llama.pdf index 402c2131f982c83b7a56a7a605ea64c39f44d831..79bee6393b715f363d813b764a654c15a71ad29a 100644 Binary files a/papers/camoscio an italian instructiontuned llama.pdf and b/papers/camoscio an italian instructiontuned llama.pdf differ diff --git a/papers/can ai moderate online communities.pdf b/papers/can ai moderate online communities.pdf index 37ae2f6ec9b8a481cebc2f40fd2eb530e28e840d..7b9d335e7757b2969e05c8ed8a0f3753d7d26e8f 100644 Binary files a/papers/can ai moderate online communities.pdf and b/papers/can ai moderate online communities.pdf differ diff --git a/papers/can chatgpt detect intent evaluating large language models for spoken language understanding.pdf b/papers/can chatgpt detect intent evaluating large language models for spoken language understanding.pdf index f09ed7232bd8953af37e27eeae37f47f3e2cca14..50bb5f06ab177e7482de4632fdc0a6feb8360b19 100644 Binary files a/papers/can chatgpt detect intent evaluating large language models for spoken language understanding.pdf and b/papers/can chatgpt detect intent evaluating large language models for spoken language understanding.pdf differ diff --git a/papers/can chatgpt rival neural machine translation a comparative study.pdf b/papers/can chatgpt rival neural machine translation a comparative study.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dadecd895ed270e786293c875e6586a8f4cd0a22 --- /dev/null +++ b/papers/can chatgpt rival neural machine translation a comparative study.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5808812d310d7250a25ceb199bd468b20d5af0bd74ffb7e0aac8e9672c08abe3 
+size 943508 diff --git a/papers/can chatgpt understand causal language in science claims.pdf b/papers/can chatgpt understand causal language in science claims.pdf index 001dbc838f6806fcdb9b1179465dc37c83599669..bb0faf4ec78b52c4d832ef2326ae16012ba92894 100644 Binary files a/papers/can chatgpt understand causal language in science claims.pdf and b/papers/can chatgpt understand causal language in science claims.pdf differ diff --git a/papers/can generalist foundation models outcompete specialpurpose tuning case study in medicine.pdf b/papers/can generalist foundation models outcompete specialpurpose tuning case study in medicine.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fe877ba675b7ecaa2cbafb24188e5367a93ee94a --- /dev/null +++ b/papers/can generalist foundation models outcompete specialpurpose tuning case study in medicine.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c0832e1092c64aef7497e48b848a3383964bdfe799938c0f4953826524b60a9 +size 1103969 diff --git a/papers/can generative artificial intelligence write an academic journal article opportunities, challenges, and implications.pdf b/papers/can generative artificial intelligence write an academic journal article opportunities, challenges, and implications.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d700976176aece4a3f763eee2e6d7ad4ffe369ed --- /dev/null +++ b/papers/can generative artificial intelligence write an academic journal article opportunities, challenges, and implications.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9871eb66f5664c0e0b0c13fc8c8c384bb5e66541859b1eba1e927a3aa656e04b +size 210724 diff --git a/papers/can incontext learners learn a reasoning concept from demonstrations.pdf b/papers/can incontext learners learn a reasoning concept from demonstrations.pdf index 43705f7f7dacfe90362b63463dd52ccaf93fbf16..d4afe4398adec947fd684fb02d3e802d4b2fb249 100644 Binary files a/papers/can incontext learners learn a reasoning concept from demonstrations.pdf and b/papers/can incontext learners learn a reasoning concept from demonstrations.pdf differ diff --git a/papers/can language models be biomedical knowledge bases.pdf b/papers/can language models be biomedical knowledge bases.pdf index b2be8c33c82eedf5f0922152239a0d78b1714be6..7cb2d431758b45148c2fd52ec0e9ff1336115f80 100644 Binary files a/papers/can language models be biomedical knowledge bases.pdf and b/papers/can language models be biomedical knowledge bases.pdf differ diff --git a/papers/can language models learn from explanations in context.pdf b/papers/can language models learn from explanations in context.pdf index 91a934dbf17f7f6d2ab552f1f41fcda9c8e7aa0a..e526c1ed23fb740f57f36a746cb516d4ac4fb3b3 100644 Binary files a/papers/can language models learn from explanations in context.pdf and b/papers/can language models learn from explanations in context.pdf differ diff --git a/papers/can language models solve graph problems in natural language.pdf b/papers/can language models solve graph problems in natural language.pdf index 418b1d525d39871a9154f605487573affb405ef7..6deca5a78d8ea70fd491cd7b3add536e4ea900ab 100644 Binary files a/papers/can language models solve graph problems in natural language.pdf and b/papers/can language models solve graph problems in natural language.pdf differ diff --git a/papers/can language models understand physical concepts.pdf b/papers/can language models understand physical concepts.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..c6034ba549e1c5c783a688a549d0da5ab645bd0e --- /dev/null +++ b/papers/can language models understand physical concepts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bad720b695cec601ffc20c607fe6bf466b034bf2267344e27a59d214bb3298aa +size 14008710 diff --git a/papers/can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf b/papers/can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf index 344f7617f2ccb0155912a0c62798c9b204ed74a4..a4b00cd103ca5952fb3b1908272f10cc4d678fd9 100644 Binary files a/papers/can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf and b/papers/can large language models be good path planners a benchmark and investigation on spatialtemporal reasoning.pdf differ diff --git a/papers/can large language models reason about medical questions.pdf b/papers/can large language models reason about medical questions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dd687aedf03ffab6397c1d3acdf8da1555b40513 --- /dev/null +++ b/papers/can large language models reason about medical questions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd92e3239e1caf4f53912902e1a69dac93bb3b2f2763d2472ad27ae359bcca90 +size 5233955 diff --git a/papers/can large language models write good propertybased tests.pdf b/papers/can large language models write good propertybased tests.pdf index 79c9788b9d47bf74f8aa48051d2246b2c640589b..22a786be1946a9ff966e0dd0ed8dc2dd9a55d807 100644 Binary files a/papers/can large language models write good propertybased tests.pdf and b/papers/can large language models write good propertybased tests.pdf differ diff --git a/papers/can prompt learning benefit radiology report generation.pdf b/papers/can prompt learning benefit radiology report generation.pdf index 854907f8f1fdc5a23a608f4aa57dc9e4b13b17cd..6f9b9a9bd84667a6b2c07d6a9956806e674d813f 100644 Binary files a/papers/can prompt learning benefit radiology report generation.pdf and b/papers/can prompt learning benefit radiology report generation.pdf differ diff --git a/papers/can we edit factual knowledge by incontext learning.pdf b/papers/can we edit factual knowledge by incontext learning.pdf index 3cf57033c62b7fbbf9f10528d1d33fa55af7ab91..80760cf84fa12cc2c8cf852ee3c4d42c341282e8 100644 Binary files a/papers/can we edit factual knowledge by incontext learning.pdf and b/papers/can we edit factual knowledge by incontext learning.pdf differ diff --git a/papers/casteist but not racist quantifying disparities in large language model bias between india and the west.pdf b/papers/casteist but not racist quantifying disparities in large language model bias between india and the west.pdf index 1afa73a0b13f69f9b03ddfba78aca42a5e3bf11b..77971d79291af60007739de68cc054cc985db059 100644 Binary files a/papers/casteist but not racist quantifying disparities in large language model bias between india and the west.pdf and b/papers/casteist but not racist quantifying disparities in large language model bias between india and the west.pdf differ diff --git a/papers/causal interventionbased prompt debiasing for event argument extraction.pdf b/papers/causal interventionbased prompt debiasing for event argument extraction.pdf index 8e7c365c053125307d4999280d062c31d76992f6..11a5c4bfd315dda97bc73b778bb6d34f38c8f3b0 100644 Binary files a/papers/causal 
interventionbased prompt debiasing for event argument extraction.pdf and b/papers/causal interventionbased prompt debiasing for event argument extraction.pdf differ diff --git a/papers/causallm is not optimal for incontext learning.pdf b/papers/causallm is not optimal for incontext learning.pdf index af3922742d704cbb76c37b91a98d1649ed2c48e0..c45d566303b4b53ebcff18a1cb253dc5615df11a 100644 Binary files a/papers/causallm is not optimal for incontext learning.pdf and b/papers/causallm is not optimal for incontext learning.pdf differ diff --git a/papers/ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf b/papers/ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf index b79cf1bf890a266fec979d73182103523580ff77..b6bc02d7dd86de0c3793250096a79bae6586768f 100644 Binary files a/papers/ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf and b/papers/ccprompt counterfactual contrastive prompttuning for manyclass classification.pdf differ diff --git a/papers/chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation.pdf b/papers/chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation.pdf index 3cc1d6917bd78fbed28eb9417a5c9c4dfc0ca4fa..7b2c436dfddcc73df2d9b174f0f87bab69fdf7ce 100644 Binary files a/papers/chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation.pdf and b/papers/chaidt a framework for prompting conversational generative ai agents to actively participate in cocreation.pdf differ diff --git a/papers/chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf b/papers/chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf index 1dfb57b18862202ee57b97717608e6809f746a77..e2633827a51b3e8500bd13b024d2a5d6ca0321e9 100644 --- a/papers/chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf +++ b/papers/chainforge a visual toolkit for prompt engineering and llm hypothesis testing.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3f268723c5dca85b74c3c8cfbed83a5e98e0301db9ee163125c47f44ee616be3 -size 2839953 +oid sha256:07a66c7bbdcfed7fab924fc8f219c6e19d7eacc29b23fa983cdc81ba2166c37d +size 3705029 diff --git a/papers/chainofdictionary prompting elicits translation in large language models.pdf b/papers/chainofdictionary prompting elicits translation in large language models.pdf index 04f0adeb02b313843828344dbdb16b4b596468f0..1678df09d641c2cc75104c75a148335365c00ee3 100644 Binary files a/papers/chainofdictionary prompting elicits translation in large language models.pdf and b/papers/chainofdictionary prompting elicits translation in large language models.pdf differ diff --git a/papers/chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf b/papers/chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf index d9b6707a1695ee568b887abe57997a41722081ea..9f32907b0ce050f980717ab178b50bd17d649f5c 100644 Binary files a/papers/chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf and b/papers/chatgpt evaluation on sentence level relations a focus on temporal, causal, and discourse relations.pdf differ diff --git a/papers/chatgpt for conversational recommendation refining recommendations by reprompting with feedback.pdf 
b/papers/chatgpt for conversational recommendation refining recommendations by reprompting with feedback.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a08ecbf941211083ba29e3630fdbe56b3f9e826a --- /dev/null +++ b/papers/chatgpt for conversational recommendation refining recommendations by reprompting with feedback.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05deec889d24ad9ac921aec691ce5f427d239cf7c1c35fa3a27d6d4b63248cf3 +size 1157863 diff --git a/papers/chatgpt for plcdcs control logic generation.pdf b/papers/chatgpt for plcdcs control logic generation.pdf index c11d161cb864767a09feaf13a21699700c4bee22..ee384f689d66f0f74abaf16db1c3507a40942920 100644 Binary files a/papers/chatgpt for plcdcs control logic generation.pdf and b/papers/chatgpt for plcdcs control logic generation.pdf differ diff --git a/papers/chatgpt for zeroshot dialogue state tracking a solution or an opportunity.pdf b/papers/chatgpt for zeroshot dialogue state tracking a solution or an opportunity.pdf index ccbb1b9b630e24cb8a23bda5eb457c936f29dbab..2e1755ea01f6275c05b5c1a8d54cf44cd36a9c05 100644 Binary files a/papers/chatgpt for zeroshot dialogue state tracking a solution or an opportunity.pdf and b/papers/chatgpt for zeroshot dialogue state tracking a solution or an opportunity.pdf differ diff --git a/papers/chatgpt opens a new door for bioinformatics.pdf b/papers/chatgpt opens a new door for bioinformatics.pdf index 831572e6431d5b9cc39878e00ba5b5ae94bc87f7..e1a0db120f7a5009d31d89db1955c30f7b00d3d8 100644 Binary files a/papers/chatgpt opens a new door for bioinformatics.pdf and b/papers/chatgpt opens a new door for bioinformatics.pdf differ diff --git a/papers/chatgpt vs crowdsourcing vs experts annotating opendomain conversations with speech functions.pdf b/papers/chatgpt vs crowdsourcing vs experts annotating opendomain conversations with speech functions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..497a82038698c1c2a0556226d0be9e70b5eb0b9d --- /dev/null +++ b/papers/chatgpt vs crowdsourcing vs experts annotating opendomain conversations with speech functions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2328730757a51767c895fc8e748fb9ff44b44e00d7d9ccadbfa8bacd6b1c0ec +size 1811659 diff --git a/papers/chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf b/papers/chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf index cb28be845473da355b6ae6820d135e573acceb0c..8c6334309e8d0e5bc6a4d83bd867ba984b020635 100644 Binary files a/papers/chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf and b/papers/chatgpthealthprompt harnessing the power of xai in promptbased healthcare decision support using chatgpt.pdf differ diff --git a/papers/chatsos llmbased knowledge q&a system for safety engineering.pdf b/papers/chatsos llmbased knowledge q&a system for safety engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7d834617f68232ae9e9b91891227685b1463377e --- /dev/null +++ b/papers/chatsos llmbased knowledge q&a system for safety engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41ed840feb1bd2e1e03011c64a0cc82916e82297cdc84a23121350072a27a229 +size 1006575 diff --git a/papers/cheapfake detection with llm using prompt engineering.pdf b/papers/cheapfake detection with llm using prompt 
engineering.pdf index 4233e8a21d36cb63ca631db63fcb705c475a5c49..a360ce4e7277ac7c8ad8f7ededdfe8cd947f67aa 100644 Binary files a/papers/cheapfake detection with llm using prompt engineering.pdf and b/papers/cheapfake detection with llm using prompt engineering.pdf differ diff --git a/papers/check your facts and try again improving large language models with external knowledge and automated feedback.pdf b/papers/check your facts and try again improving large language models with external knowledge and automated feedback.pdf index 3824386b4d387a8491c34b78a23ad906f6dd8171..7f9f0a831ba55500505df55dde84d199b0031d6e 100644 Binary files a/papers/check your facts and try again improving large language models with external knowledge and automated feedback.pdf and b/papers/check your facts and try again improving large language models with external knowledge and automated feedback.pdf differ diff --git a/papers/chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf b/papers/chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf index e4a9299a066e761d818ae8cc9b8f59561fed16c7..abfe21048fb416e55227aded4dcff41bcc182e2e 100644 Binary files a/papers/chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf and b/papers/chemical identification and indexing in pubmed articles via bert and texttotext approaches.pdf differ diff --git a/papers/citeprompt using prompts to identify citation intent in scientific papers.pdf b/papers/citeprompt using prompts to identify citation intent in scientific papers.pdf index d53278ac99878c275f55af297547e03faaaa101d..17d90c6bec682af37bbbce53ef5aaca63c21cf51 100644 Binary files a/papers/citeprompt using prompts to identify citation intent in scientific papers.pdf and b/papers/citeprompt using prompts to identify citation intent in scientific papers.pdf differ diff --git a/papers/clara multilingual contrastive learning for audio representation acquisition.pdf b/papers/clara multilingual contrastive learning for audio representation acquisition.pdf index c49fed90a39c73b32c170256154cc715d23bab53..f16704c4e22946ba9a2b422033bb6aad11d2dce2 100644 Binary files a/papers/clara multilingual contrastive learning for audio representation acquisition.pdf and b/papers/clara multilingual contrastive learning for audio representation acquisition.pdf differ diff --git a/papers/clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf b/papers/clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf index 2868fa09d9619d13140e8c4813eb265fc4f8d668..ad4a5e7542002f192852aac454cde0cc1082a1a8 100644 Binary files a/papers/clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf and b/papers/clickprompt ctr models are strong prompt generators for adapting language models to ctr prediction.pdf differ diff --git a/papers/climate change from large language models.pdf b/papers/climate change from large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ba2589737e7580ba202651be524b75f5335c6c48 --- /dev/null +++ b/papers/climate change from large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:612c6ce6356c8c5cd4a12c217d5b240f0de5dcdde37fb83d23d386a0d06be3c1 +size 780910 diff --git a/papers/coarsetofine fewshot learning for named entity recognition.pdf b/papers/coarsetofine 
fewshot learning for named entity recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..01305eb2c0e0613a8031500c526977c34a1eec3f --- /dev/null +++ b/papers/coarsetofine fewshot learning for named entity recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1069201f3bec2d862337513a995493484f9ad1cc4b09366cd0df3edcee25e359 +size 1934202 diff --git a/papers/coaudit tools to help humans doublecheck aigenerated content.pdf b/papers/coaudit tools to help humans doublecheck aigenerated content.pdf index 3ecdfedb80a8ae35a0a728ee3ceb5943f6245f30..235343ac2f08549c67205b5e4c7a81786aa2e754 100644 Binary files a/papers/coaudit tools to help humans doublecheck aigenerated content.pdf and b/papers/coaudit tools to help humans doublecheck aigenerated content.pdf differ diff --git a/papers/cocomo computational consciousness modeling for generative and ethical ai.pdf b/papers/cocomo computational consciousness modeling for generative and ethical ai.pdf index fab29a1441826e9f3cc37ecbc3a791410147d0ff..bf2a1f863d71a9e769831f4d67749bb4e259f396 100644 Binary files a/papers/cocomo computational consciousness modeling for generative and ethical ai.pdf and b/papers/cocomo computational consciousness modeling for generative and ethical ai.pdf differ diff --git a/papers/code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf b/papers/code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf index c09e890e183ccf654137213932f9c2a9b329f6b1..55dd76401443bf3f5a4048148804a180f266d33b 100644 Binary files a/papers/code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf and b/papers/code generation tools (almost) for free a study of fewshot, pretrained language models on code.pdf differ diff --git a/papers/code generation with alphacodium from prompt engineering to flow engineering.pdf b/papers/code generation with alphacodium from prompt engineering to flow engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fb6f55e331513ae5d32ff2240b7a898e9d0fa795 --- /dev/null +++ b/papers/code generation with alphacodium from prompt engineering to flow engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7e6422424c482e7002c19a6d7ba18220b7610a16047aeb46f1c235d836d96f0 +size 475308 diff --git a/papers/codecot and beyond learning to program and test like a developer.pdf b/papers/codecot and beyond learning to program and test like a developer.pdf index 261fd19f2d78bb5da245484ed2b2d13f1398bfab..db89b4940b0345850257e30362d42c88e50b2a0a 100644 Binary files a/papers/codecot and beyond learning to program and test like a developer.pdf and b/papers/codecot and beyond learning to program and test like a developer.pdf differ diff --git a/papers/codeie large code generation models are better fewshot information extractors.pdf b/papers/codeie large code generation models are better fewshot information extractors.pdf index 7a5910c2182cad8759d48898e64c1eb23e90a502..1ecb7d14f651000ddb796af1ae138078bedba27b 100644 Binary files a/papers/codeie large code generation models are better fewshot information extractors.pdf and b/papers/codeie large code generation models are better fewshot information extractors.pdf differ diff --git a/papers/codeprompt taskagnostic prefix tuning for program and language generation.pdf b/papers/codeprompt taskagnostic prefix tuning for program and language generation.pdf 
new file mode 100644 index 0000000000000000000000000000000000000000..5e0046ff82ef69aadc40bb7dfc0f5adcc5a82cb3 --- /dev/null +++ b/papers/codeprompt taskagnostic prefix tuning for program and language generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cfd551f0d1ace600016e7ca24abbcb0a69e3b3eeb39ccf8d7bdb6917038530ec +size 400115 diff --git a/papers/codestyle incontext learning for knowledgebased question answering.pdf b/papers/codestyle incontext learning for knowledgebased question answering.pdf index 5733d6a1760bf9050f2ec746ffbfff8b8374ba45..55bcb12cf4e4bdfc26da7f54cc7d8c353ff94b20 100644 Binary files a/papers/codestyle incontext learning for knowledgebased question answering.pdf and b/papers/codestyle incontext learning for knowledgebased question answering.pdf differ diff --git a/papers/cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf b/papers/cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf index 5626dffb2ffbd926ab3dba6e01d09113126afe88..ac6100be106c7a740f5f8394e3b3c777c943d685 100644 Binary files a/papers/cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf and b/papers/cohortgpt an enhanced gpt for participant recruitment in clinical study.pdf differ diff --git a/papers/comma coarticulated multimodal learning.pdf b/papers/comma coarticulated multimodal learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..effc22a7f801de4b637de1ad953428c916a2cb5f --- /dev/null +++ b/papers/comma coarticulated multimodal learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d65a002dc8578c604ff7c5423a42ae7a8e4701d1a4eeeaf45afbf48143e18dfd +size 1619425 diff --git a/papers/comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf b/papers/comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf index bba50b5ba85918c38bbc92aac627965202b4983b..0d849f8dd51b9d6de52f9ccb0aee061684c0b390 100644 Binary files a/papers/comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf and b/papers/comparative analysis of gpt4 and human graders in evaluating human tutors giving praise to students.pdf differ diff --git a/papers/comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf b/papers/comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf index bba50b5ba85918c38bbc92aac627965202b4983b..0d849f8dd51b9d6de52f9ccb0aee061684c0b390 100644 Binary files a/papers/comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf and b/papers/comparative analysis of gpt4 and human graders in evaluating praise given to students in synthetic dialogues.pdf differ diff --git a/papers/complementary explanations for effective incontext learning.pdf b/papers/complementary explanations for effective incontext learning.pdf index 1f41b23f14d48f80255204d573f696e0316219cf..60ca47f63496a1d4d598aa69beffe9cc001e7083 100644 Binary files a/papers/complementary explanations for effective incontext learning.pdf and b/papers/complementary explanations for effective incontext learning.pdf differ diff --git a/papers/complex reasoning in natural language.pdf b/papers/complex reasoning in natural language.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..9f7d22cc42159aba3fcfcb0ef86a69a00977555e --- /dev/null +++ b/papers/complex reasoning in natural language.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3abe14b61cfde12ccfa8c477d35e42b6b86731f1314007e1f1aeb06fadfc2b9 +size 168655 diff --git a/papers/compositional exemplars for incontext learning.pdf b/papers/compositional exemplars for incontext learning.pdf index 15ff305aeacb657523e8b2b31ac57163722eaae2..219663467cbc71a9d25d7cb17dfb74476727e7cb 100644 Binary files a/papers/compositional exemplars for incontext learning.pdf and b/papers/compositional exemplars for incontext learning.pdf differ diff --git a/papers/compositional semantic parsing with large language models.pdf b/papers/compositional semantic parsing with large language models.pdf index 3244a2c2930096f540ea397276ce4365fcde2a6b..70b9a3d2d05676a42d3453527ebc77ae4f818fc1 100644 Binary files a/papers/compositional semantic parsing with large language models.pdf and b/papers/compositional semantic parsing with large language models.pdf differ diff --git a/papers/conavgpt multirobot cooperative visual semantic navigation using large language models.pdf b/papers/conavgpt multirobot cooperative visual semantic navigation using large language models.pdf index 13f51ad4ffc114231ca57839e613a78be0543108..ada2eed7e2bc86bc469eb6fe9eaec8c889973548 100644 --- a/papers/conavgpt multirobot cooperative visual semantic navigation using large language models.pdf +++ b/papers/conavgpt multirobot cooperative visual semantic navigation using large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7acfec45293b1d807206a49870414729e2af4f7a4e86cebcc5374a3b05be4f3d +oid sha256:c54029090dd3a45435a6b3f983c7051310139712f19de4ac1cba86c700b16d7e size 2186946 diff --git a/papers/concise and organized perception facilitates large language models for deductive reasoning.pdf b/papers/concise and organized perception facilitates large language models for deductive reasoning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4fca98c25955ec4d958ea1344616d6584dcbc26c --- /dev/null +++ b/papers/concise and organized perception facilitates large language models for deductive reasoning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e86de74e571b05fd71ef0d838bf54d77d966a3ae422aafade66e38d8a674ca43 +size 1199969 diff --git a/papers/conditioning on dialog acts improves empathy style transfer.pdf b/papers/conditioning on dialog acts improves empathy style transfer.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a413e86c0e2bc092f9070ea9119de032f5a2afbb --- /dev/null +++ b/papers/conditioning on dialog acts improves empathy style transfer.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a426143694feb12e342cc659854fd3ffc2433ce4238214afde583b4ac8f8f123 +size 820226 diff --git a/papers/connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf b/papers/connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf index e7835ff0dad5724cde76c4e2494a6ec149ab0dfd..ccdcb8f421b146abfdf9a0b4a3ae7adf4562e667 100644 Binary files a/papers/connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf and b/papers/connecting large language models with evolutionary algorithms yields powerful prompt optimizers.pdf differ diff --git a/papers/connprompt connectivecloze 
prompt learning for implicit discourse relation recognition.pdf b/papers/connprompt connectivecloze prompt learning for implicit discourse relation recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ec8e3562d78e5594f9f7d1cd04fe2057fadc2b94 --- /dev/null +++ b/papers/connprompt connectivecloze prompt learning for implicit discourse relation recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d60b39c6a36f3d3a9697c2701a36964720168a6e9fd7811771cd2f963c6efd0 +size 2621443 diff --git a/papers/conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf b/papers/conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf index 23179396b7fcd8cec70bf4ab2e5874c94dca7b4a..adaa986114f3d9b4ac48153f99e0ef3e15fe8196 100644 Binary files a/papers/conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf and b/papers/conqx semantic expansion of spoken queries for intent detection based on conditioned text generation.pdf differ diff --git a/papers/consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf b/papers/consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf index 64fdc6d6712d91115ba84fbcc7c7c5a8dcd115e8..6564573ea41d8363244df4ef8ac889ae16ab6126 100644 Binary files a/papers/consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf and b/papers/consprompt easily exploiting contrastive samples for fewshot prompt learning.pdf differ diff --git a/papers/contextfaithful prompting for large language models.pdf b/papers/contextfaithful prompting for large language models.pdf deleted file mode 100644 index 91ff2484fbff2d81729dd5383b47a71df801d616..0000000000000000000000000000000000000000 Binary files a/papers/contextfaithful prompting for large language models.pdf and /dev/null differ diff --git a/papers/contextual biasing of namedentities with large language models.pdf b/papers/contextual biasing of namedentities with large language models.pdf index 8f15d704a2812eb25cf0fece91cc7943bb449e1a..fb24cf17011c277ddc0e738be97161fc469c1ddb 100644 Binary files a/papers/contextual biasing of namedentities with large language models.pdf and b/papers/contextual biasing of namedentities with large language models.pdf differ diff --git a/papers/contextual stance classification using prompt engineering.pdf b/papers/contextual stance classification using prompt engineering.pdf index 8bdcea5eb6bb83af8bf9ecbf4d080920a50965d0..b00c061724b4963d6ce0a24bb5a63636e4e2d35f 100644 Binary files a/papers/contextual stance classification using prompt engineering.pdf and b/papers/contextual stance classification using prompt engineering.pdf differ diff --git a/papers/contextualized soft prompts for extraction of event arguments.pdf b/papers/contextualized soft prompts for extraction of event arguments.pdf new file mode 100644 index 0000000000000000000000000000000000000000..349cf0d4bc675d656edc805a4249cc05d1a21640 --- /dev/null +++ b/papers/contextualized soft prompts for extraction of event arguments.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b54067bffc6bfff6cc2661c061c4edd2496a4446e326ae8e6430ce2b0a877196 +size 239814 diff --git a/papers/continual training of language models for fewshot learning.pdf b/papers/continual training of language models for fewshot learning.pdf index 
b2002ec0b50116233111aa8014f25300a537c75f..d682e7f9fa77577b3cd52c4bf44000ad265bb5a9 100644 Binary files a/papers/continual training of language models for fewshot learning.pdf and b/papers/continual training of language models for fewshot learning.pdf differ diff --git a/papers/continued pretraining for better zero and fewshot promptability.pdf b/papers/continued pretraining for better zero and fewshot promptability.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6553df4d7dfb44e03912a4185f7c4ed9242d9ad7 --- /dev/null +++ b/papers/continued pretraining for better zero and fewshot promptability.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11d4de75b7b30dc8a7605d53dc294555048cccdcffd2b3512b7de443c6fce89d +size 741362 diff --git a/papers/continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf b/papers/continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf index d8b463990b7bde0e80d1abf20e7ac33002c08dba..d552d4945ef17eb48bd0d7adff25c0d151640d97 100644 Binary files a/papers/continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf and b/papers/continuous prompt tuning based textual entailment model for ecommerce entity typing.pdf differ diff --git a/papers/contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning.pdf b/papers/contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning.pdf index c5506ce07cfa72f2ef57fdd9f008823b00a775e4..bc76740edf5335d91536ac1632b3492477412d16 100644 Binary files a/papers/contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning.pdf and b/papers/contrastive distillation is a sampleefficient selfsupervised loss policy for transfer learning.pdf differ diff --git a/papers/controlled text generation with natural language instructions.pdf b/papers/controlled text generation with natural language instructions.pdf index 5f98fb4458d20a69c1577c77924128b2d814f1f0..75f1828289cc377ec4867500ff95db8bb7d6ae95 100644 Binary files a/papers/controlled text generation with natural language instructions.pdf and b/papers/controlled text generation with natural language instructions.pdf differ diff --git a/papers/controlling personality style in dialogue with zeroshot promptbased learning.pdf b/papers/controlling personality style in dialogue with zeroshot promptbased learning.pdf index 825d4003369abc4594a30ca2ef332d32c1656b0b..8ece17c7c02dc6ad5f261125d191456b0b187f28 100644 Binary files a/papers/controlling personality style in dialogue with zeroshot promptbased learning.pdf and b/papers/controlling personality style in dialogue with zeroshot promptbased learning.pdf differ diff --git a/papers/converser fewshot conversational dense retrieval with synthetic data generation.pdf b/papers/converser fewshot conversational dense retrieval with synthetic data generation.pdf index ca4f30dc01736fd2e8a6fd2edb160764a33104af..1a8a718559bdba075efde6cfb23ff71ddff86f5c 100644 Binary files a/papers/converser fewshot conversational dense retrieval with synthetic data generation.pdf and b/papers/converser fewshot conversational dense retrieval with synthetic data generation.pdf differ diff --git a/papers/conversing with copilot exploring prompt engineering for solving cs1 problems using natural language.pdf b/papers/conversing with copilot exploring prompt engineering for solving cs1 problems using natural language.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..1b0d75a147ced720d331dd14388603a0ff1462b0 --- /dev/null +++ b/papers/conversing with copilot exploring prompt engineering for solving cs1 problems using natural language.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b18118b280216df43b0b5e405dc758089089367526efb1b02aedccd18da5022 +size 663872 diff --git a/papers/convolutional bypasses are better vision transformer adapters.pdf b/papers/convolutional bypasses are better vision transformer adapters.pdf index 39910c17e6ef003d7b59d27a3619493a6800d152..065766629334be55eafb0237c04130b748134a3d 100644 Binary files a/papers/convolutional bypasses are better vision transformer adapters.pdf and b/papers/convolutional bypasses are better vision transformer adapters.pdf differ diff --git a/papers/coprompt supporting prompt sharing and referring in collaborative natural language programming.pdf b/papers/coprompt supporting prompt sharing and referring in collaborative natural language programming.pdf index d07f1e4ad7208a9a90bd9c7096f19e932347c769..a805151336c5a104d988c3668d95f1994f45ba1b 100644 --- a/papers/coprompt supporting prompt sharing and referring in collaborative natural language programming.pdf +++ b/papers/coprompt supporting prompt sharing and referring in collaborative natural language programming.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:387adbfc7697a59c6dcbea17435b03482b60d7cce5c60b15da534abe54b831fa -size 4299318 +oid sha256:30d1ba8c0fffbc1096599814dc804378a90157cf7bc455aeb5bad870c2940847 +size 5078779 diff --git a/papers/corrpus codebased structured prompting for neurosymbolic story understanding.pdf b/papers/corrpus codebased structured prompting for neurosymbolic story understanding.pdf index 4d87b7fbd597f442a2e91135d5ee2bcea2438d04..5fb64fb43749218f16fb472dd1e7dd71d3f7720a 100644 Binary files a/papers/corrpus codebased structured prompting for neurosymbolic story understanding.pdf and b/papers/corrpus codebased structured prompting for neurosymbolic story understanding.pdf differ diff --git a/papers/cosmic data efficient instructiontuning for speech incontext learning.pdf b/papers/cosmic data efficient instructiontuning for speech incontext learning.pdf index 46648aa0edcd651f73d19c120449f42416263dd0..d7b61fbf120d2d917a58a757fa4e1d5c236d3088 100644 Binary files a/papers/cosmic data efficient instructiontuning for speech incontext learning.pdf and b/papers/cosmic data efficient instructiontuning for speech incontext learning.pdf differ diff --git a/papers/cotbert enhancing unsupervised sentence representation through chainofthought.pdf b/papers/cotbert enhancing unsupervised sentence representation through chainofthought.pdf index dc78ae884501abadd5f52f6b141eac78e816a6ab..f38125b1e4cfc31f9984b0865d9ddeb88310f8b0 100644 Binary files a/papers/cotbert enhancing unsupervised sentence representation through chainofthought.pdf and b/papers/cotbert enhancing unsupervised sentence representation through chainofthought.pdf differ diff --git a/papers/coveragebased example selection for incontext learning.pdf b/papers/coveragebased example selection for incontext learning.pdf index 531dbc93ecf2eeee0a3f3a9362ecb35d69ba9fea..f04d609742d2f36d431a03c0adbd3f31cb40cbe9 100644 Binary files a/papers/coveragebased example selection for incontext learning.pdf and b/papers/coveragebased example selection for incontext learning.pdf differ diff --git a/papers/covid vaccine is against covid but oxford vaccine is made at oxford! 
semantic interpretation of proper noun compounds.pdf b/papers/covid vaccine is against covid but oxford vaccine is made at oxford! semantic interpretation of proper noun compounds.pdf index 6968ff0924d9aa0db753e6b9c197e11859342515..6a018e901a0fbfc0c01e7fde95358e8f5db32fc9 100644 Binary files a/papers/covid vaccine is against covid but oxford vaccine is made at oxford! semantic interpretation of proper noun compounds.pdf and b/papers/covid vaccine is against covid but oxford vaccine is made at oxford! semantic interpretation of proper noun compounds.pdf differ diff --git a/papers/cplnovid contextaware promptbased learning for norm violation detection in online communities.pdf b/papers/cplnovid contextaware promptbased learning for norm violation detection in online communities.pdf index 4d4423262eae63612e98467b3d5ec8c4f4dc52d9..1cfb4227ec962201ed9fba3cfcbe910c553c02e1 100644 Binary files a/papers/cplnovid contextaware promptbased learning for norm violation detection in online communities.pdf and b/papers/cplnovid contextaware promptbased learning for norm violation detection in online communities.pdf differ diff --git a/papers/crosscodebench benchmarking crosstask generalization of source code models.pdf b/papers/crosscodebench benchmarking crosstask generalization of source code models.pdf index fc42de69f9ab9854bea922ae79ad8ac9b6f1827c..0d87a449fe7d69cf6cc11c4b4177d9f3ed42ef77 100644 Binary files a/papers/crosscodebench benchmarking crosstask generalization of source code models.pdf and b/papers/crosscodebench benchmarking crosstask generalization of source code models.pdf differ diff --git a/papers/crossdomain named entity recognition via graph matching.pdf b/papers/crossdomain named entity recognition via graph matching.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9de9dc2a768400041fc2abd225af07e321a524cc --- /dev/null +++ b/papers/crossdomain named entity recognition via graph matching.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f8db1ba3b59bac5b3c9aecb6a960438519a2174087b376f6faf3f3b29873d5b +size 1643935 diff --git a/papers/crosslingual fewshot learning on unseen languages.pdf b/papers/crosslingual fewshot learning on unseen languages.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e6992f16eb2b4aa85d9bccf4cb359e5d657b094e --- /dev/null +++ b/papers/crosslingual fewshot learning on unseen languages.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe2b711ec4e4dc04d90f96b93675a08574f9270c1d30154cb8d459e27ad7a211 +size 626873 diff --git a/papers/crosslingual retrieval augmented incontext learning for bangla.pdf b/papers/crosslingual retrieval augmented incontext learning for bangla.pdf index 6e5b2dc37079d0c4bb115fa5aae2c47888bfb581..6e999afb9cde9e15b210c78a022b2793b1a87d9d 100644 Binary files a/papers/crosslingual retrieval augmented incontext learning for bangla.pdf and b/papers/crosslingual retrieval augmented incontext learning for bangla.pdf differ diff --git a/papers/crowd score a method for the evaluation of jokes using large language model ai voters as judges.pdf b/papers/crowd score a method for the evaluation of jokes using large language model ai voters as judges.pdf index 72210db31da23869dc67569d22a5417af3b99df8..e35b01973f65e865aa8dfd79f406860e64464d57 100644 Binary files a/papers/crowd score a method for the evaluation of jokes using large language model ai voters as judges.pdf and b/papers/crowd score a method for the evaluation of jokes using large language model 
ai voters as judges.pdf differ diff --git a/papers/csnlp at semeval2022 task 4 effective data augmentation methods for patronizing language detection and multilabel classification with roberta and gpt3.pdf b/papers/csnlp at semeval2022 task 4 effective data augmentation methods for patronizing language detection and multilabel classification with roberta and gpt3.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a3f38cc2ab0a84feb4079ce78f6a9a96aa637ce5 --- /dev/null +++ b/papers/csnlp at semeval2022 task 4 effective data augmentation methods for patronizing language detection and multilabel classification with roberta and gpt3.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99e2e1a5d7abe3fdddc6430ef3eb94df4a5e2dff1822c275adfb26ee9d67ff6e +size 463698 diff --git a/papers/ctqscorer combining multiple features for incontext example selection for machine translation.pdf b/papers/ctqscorer combining multiple features for incontext example selection for machine translation.pdf index 76d42856d330290758d9a4064868f27999143159..0f30779893d1803b97f65a19dbdbf92ccd96a2ed 100644 Binary files a/papers/ctqscorer combining multiple features for incontext example selection for machine translation.pdf and b/papers/ctqscorer combining multiple features for incontext example selection for machine translation.pdf differ diff --git a/papers/cuecot chainofthought prompting for responding to indepth dialogue questions with llms.pdf b/papers/cuecot chainofthought prompting for responding to indepth dialogue questions with llms.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8e837bbd37c49ceaad3baee633c9c54325a33ef4 --- /dev/null +++ b/papers/cuecot chainofthought prompting for responding to indepth dialogue questions with llms.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bb769e60059250e7bf31664123bcec47946cce47579b61eeb02190a79f7ccec +size 923330 diff --git a/papers/cup curriculum learning based prompt tuning for implicit event argument extraction.pdf b/papers/cup curriculum learning based prompt tuning for implicit event argument extraction.pdf index 69e1ad351305497f303dbfa4353ce77005e3a079..1b7d52b34a209e7c677ecbfac7a357bc34dee305 100644 Binary files a/papers/cup curriculum learning based prompt tuning for implicit event argument extraction.pdf and b/papers/cup curriculum learning based prompt tuning for implicit event argument extraction.pdf differ diff --git a/papers/cxrllava multimodal large language model for interpreting chest xray images.pdf b/papers/cxrllava multimodal large language model for interpreting chest xray images.pdf deleted file mode 100644 index 1ed6dfe4ec113d84ac38311286aef3f941cafa09..0000000000000000000000000000000000000000 --- a/papers/cxrllava multimodal large language model for interpreting chest xray images.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:023718e0979dbddc327e09ad69008fe6c9a8ba8144e561680ef49e53cd0927e3 -size 3298096 diff --git a/papers/cyber sentinel exploring conversational agents in streamlining security tasks with gpt4.pdf b/papers/cyber sentinel exploring conversational agents in streamlining security tasks with gpt4.pdf index 6091fd83ff910fcfa36d6b137ea2e10f213db514..7fb05e05aa7d50b481785971b119f1d68afaef8a 100644 Binary files a/papers/cyber sentinel exploring conversational agents in streamlining security tasks with gpt4.pdf and b/papers/cyber sentinel exploring conversational agents in streamlining security tasks with 
gpt4.pdf differ diff --git a/papers/cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf b/papers/cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf index 86921cfea04a1f79244e8ca0d0e26bb403229863..0db268b61c325f6959498cc827aed05947c00303 100644 Binary files a/papers/cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf and b/papers/cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment.pdf differ diff --git a/papers/dail data augmentation for incontext learning via selfparaphrase.pdf b/papers/dail data augmentation for incontext learning via selfparaphrase.pdf index 850d8cad0facecf8ce1afd7ef8f463c476f4608a..dd37b50c593b29b6db903ff4f75b59c72104265a 100644 Binary files a/papers/dail data augmentation for incontext learning via selfparaphrase.pdf and b/papers/dail data augmentation for incontext learning via selfparaphrase.pdf differ diff --git a/papers/datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf b/papers/datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf index 2284c4dda979ee5a2838135b54fb15e30dfd0cd1..9b56cdf3c093e960861bf90b92177e3bb5ee5502 100644 Binary files a/papers/datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf and b/papers/datadriven approach for formalitysensitive machine translation languagespecific handling and synthetic data generation.pdf differ diff --git a/papers/dataefficient goaloriented conversation with dialogue knowledge transfer networks.pdf b/papers/dataefficient goaloriented conversation with dialogue knowledge transfer networks.pdf index 090977860c440cdd1837e6bf11c47f526325dfe4..2602ae4c80294afeddcb514f9bb7995d41ef21cd 100644 Binary files a/papers/dataefficient goaloriented conversation with dialogue knowledge transfer networks.pdf and b/papers/dataefficient goaloriented conversation with dialogue knowledge transfer networks.pdf differ diff --git a/papers/dcc help generating contextaware compiler error explanations with large language models.pdf b/papers/dcc help generating contextaware compiler error explanations with large language models.pdf index 42e0563363fd1aab4a0fe58dd852f01b7ee251da..0ec25a327bb6217801f587f6c84df7316a525e2b 100644 Binary files a/papers/dcc help generating contextaware compiler error explanations with large language models.pdf and b/papers/dcc help generating contextaware compiler error explanations with large language models.pdf differ diff --git a/papers/decomposed prompting for machine translation between related languages using large language models.pdf b/papers/decomposed prompting for machine translation between related languages using large language models.pdf index 709a7980532a9742b4159fe62fe23680a815c5b4..16ebbbb7340dee0eb2e6637b354f159b6593ca5c 100644 Binary files a/papers/decomposed prompting for machine translation between related languages using large language models.pdf and b/papers/decomposed prompting for machine translation between related languages using large language models.pdf differ diff --git a/papers/decomposed twostage prompt learning for fewshot named entity recognition.pdf b/papers/decomposed twostage prompt learning for fewshot named entity recognition.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..a4c46d710b14067c1a4c16bf70e7452e67bd2f51 --- /dev/null +++ b/papers/decomposed twostage prompt learning for fewshot named entity recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2907835c1a44a5279d0bedf311ec47b49209120b7d8e5f912f6095f45f4cc90 +size 590130 diff --git a/papers/decomt decomposed prompting for machine translation between related languages using large language models.pdf b/papers/decomt decomposed prompting for machine translation between related languages using large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c7bef1176d12b8524d547f2358e979739ad2135f --- /dev/null +++ b/papers/decomt decomposed prompting for machine translation between related languages using large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42d43f6bc108bb9028da639c41430c470c66009d07753d966f8ecefe0c7ad5b3 +size 751767 diff --git a/papers/deeply coupled crossmodal prompt learning.pdf b/papers/deeply coupled crossmodal prompt learning.pdf index b6e16e90e29f07d0402990e12761b250551d2191..cce9bbde95ae67ab56b7bae9fd1ae7648f413b51 100644 --- a/papers/deeply coupled crossmodal prompt learning.pdf +++ b/papers/deeply coupled crossmodal prompt learning.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8e54c0a9b7cc31a78e88dbed84401ac8240cba592db58075367240210572c928 -size 2478099 +oid sha256:5b2c94fa9b5a65512aa9c341c916793539da753a864c5636c736b759752bf4bd +size 3362685 diff --git a/papers/deeppavlov dream platform for building generative ai assistants.pdf b/papers/deeppavlov dream platform for building generative ai assistants.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4dceb2ebf23c44d6a945e661e0293519ae82c419 --- /dev/null +++ b/papers/deeppavlov dream platform for building generative ai assistants.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2239cb801e95fab9ef2d5bbbb8de6134693d5030e9fef94fc9e96cf34c318856 +size 735533 diff --git a/papers/defending against alignmentbreaking attacks via robustly aligned llm.pdf b/papers/defending against alignmentbreaking attacks via robustly aligned llm.pdf index a09926a447baa2528ee2320d88a490ac89e71b0b..cac1a9e30d8dd3ee18f621d3c949c066a6b5d30c 100644 Binary files a/papers/defending against alignmentbreaking attacks via robustly aligned llm.pdf and b/papers/defending against alignmentbreaking attacks via robustly aligned llm.pdf differ diff --git a/papers/deidgpt zeroshot medical text deidentification by gpt4.pdf b/papers/deidgpt zeroshot medical text deidentification by gpt4.pdf index 1cba57fd32ab23569347b701bffb5d2dbbc8cf13..d35a76d464018770d0eede97f446b32f55f2ec77 100644 --- a/papers/deidgpt zeroshot medical text deidentification by gpt4.pdf +++ b/papers/deidgpt zeroshot medical text deidentification by gpt4.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:60f8166a170f90205365297ad534f043abb895aaffd9f3982a5bef9138130ac2 -size 2091307 +oid sha256:c9d10cc6825aa9a3b7d405cdba7bbc232d20a0f66e537c8c9b379c84a034ff01 +size 2208721 diff --git a/papers/deliberate then generate enhanced prompting framework for text generation.pdf b/papers/deliberate then generate enhanced prompting framework for text generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..465d46df9b3cf18151a5cc04b1dee4ab63b4d755 --- /dev/null +++ b/papers/deliberate then generate enhanced prompting framework for text 
generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61307a4236872968df5aff1c7dc0c0c4d5622d865e1130ebe3eeeca510fb162e +size 278639 diff --git a/papers/delving into multimodal prompting for finegrained visual classification.pdf b/papers/delving into multimodal prompting for finegrained visual classification.pdf index d97947de538e15ec14bdffaa93acce769d578453..d8c923bd44fde3f6bb6e9cc011108fb0b0f377dc 100644 --- a/papers/delving into multimodal prompting for finegrained visual classification.pdf +++ b/papers/delving into multimodal prompting for finegrained visual classification.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3b3a86ce7ed669c392c626592e8ef57232715fa5cc7f51fa265f531fde7c83c8 -size 1323473 +oid sha256:e40b8370aa8ee1b5d16bc1a1b0e8959b8d95cd73a771561af8001c2e602ecfa6 +size 1324121 diff --git a/papers/democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf b/papers/democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf index b1166601a40c8dbf3289328361f830e79d97a7c2..a5da7651af7c8f074e066f142db93028d9b7f99a 100644 Binary files a/papers/democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf and b/papers/democratizing llms for lowresource languages by leveraging their english dominant abilities with linguisticallydiverse prompts.pdf differ diff --git a/papers/demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf b/papers/demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf index 6e160c567cd3e9b0d6c35c49c82bf96738cc2a61..7f95f0ad8288a0e44b39ea6ff4f499fb5db1bea4 100644 Binary files a/papers/demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf and b/papers/demonstrations are all you need advancing offensive content paraphrasing using incontext learning.pdf differ diff --git a/papers/demonstrations of the potential of aibased political issue polling.pdf b/papers/demonstrations of the potential of aibased political issue polling.pdf index 3e83f89e536c990d798c35ed36f0e4ef154237fd..7e078fd20b855488397c7e0418732a4bdee64bea 100644 Binary files a/papers/demonstrations of the potential of aibased political issue polling.pdf and b/papers/demonstrations of the potential of aibased political issue polling.pdf differ diff --git a/papers/demystifying prompts in language models via perplexity estimation.pdf b/papers/demystifying prompts in language models via perplexity estimation.pdf index 99eb0250758fdefe4e8c2ba2d8cadafbc0e41d82..5c16937e7e29214990a0f0c5aba9d572f9d8af1b 100644 Binary files a/papers/demystifying prompts in language models via perplexity estimation.pdf and b/papers/demystifying prompts in language models via perplexity estimation.pdf differ diff --git a/papers/deploying generative ai to draft a roleplay simulation of difficult conversations about inclusivity.pdf b/papers/deploying generative ai to draft a roleplay simulation of difficult conversations about inclusivity.pdf new file mode 100644 index 0000000000000000000000000000000000000000..692af35c76ec96f5de8f2c6f3979a2bb55a187bd --- /dev/null +++ b/papers/deploying generative ai to draft a roleplay simulation of difficult conversations about inclusivity.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:f0fddb9e699caceb4e43815d9df1127d52319e0029d0a217d175b614ea6f6aad +size 243296 diff --git a/papers/detecting hate speech with gpt3.pdf b/papers/detecting hate speech with gpt3.pdf index 313c950fbe67dda6744eddba18b2c45d65f6c3f1..a977979b19674731c30e5e847e69930c85fb460a 100644 Binary files a/papers/detecting hate speech with gpt3.pdf and b/papers/detecting hate speech with gpt3.pdf differ diff --git a/papers/detecting natural language biases with promptbased learning.pdf b/papers/detecting natural language biases with promptbased learning.pdf index 255ab712284af887466b0257d2f361db1a54327e..962a2f7a885f40aa74a43e3e02a3926d72e4529a 100644 Binary files a/papers/detecting natural language biases with promptbased learning.pdf and b/papers/detecting natural language biases with promptbased learning.pdf differ diff --git a/papers/devbots can codesign apis.pdf b/papers/devbots can codesign apis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..79673ef310891046eee263eca0bdaa5866a4be80 --- /dev/null +++ b/papers/devbots can codesign apis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88c1b1118c837a624ac5a25e81c82110b8eb25a81c47080c9df3ff0ec12e68d5 +size 251394 diff --git a/papers/developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf b/papers/developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf index 115d9d8048d0f045d2281704f19ba04a10a129ce..420e376466fb166d3abc7461e27c570f8b0cc168 100644 Binary files a/papers/developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf and b/papers/developing a scalable benchmark for assessing large language models in knowledge graph engineering.pdf differ diff --git a/papers/developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf b/papers/developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf index 650ad1865f3bd30ff214326dd995e940cab47be5..a0ae9ea8d068570c49de91fd4a17328c61a998ed 100644 Binary files a/papers/developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf and b/papers/developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer.pdf differ diff --git a/papers/development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf b/papers/development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf index 33a2e97c184d509bdf4b8e4d21aced217641af55..d9c70f1716ce652dfb061f377d6275d41c3a35e4 100644 Binary files a/papers/development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf and b/papers/development of metaprompts for large language models to screen titles and abstracts for diagnostic test accuracy reviews.pdf differ diff --git a/papers/devgpt studying developerchatgpt conversations.pdf b/papers/devgpt studying developerchatgpt conversations.pdf index 2415cb09de5f0012649494ef3e904791b3128efc..1e33594e1bc122ee7f0627a48f64c7a201bd69bc 100644 Binary files a/papers/devgpt studying developerchatgpt conversations.pdf and 
b/papers/devgpt studying developerchatgpt conversations.pdf differ diff --git a/papers/diagnosing infeasible optimization problems using large language models.pdf b/papers/diagnosing infeasible optimization problems using large language models.pdf index c8610514ea0e13c1bde6a4fc5f0c68f655d4a47d..af97bd7594798820c6518958934d7f562d08b893 100644 Binary files a/papers/diagnosing infeasible optimization problems using large language models.pdf and b/papers/diagnosing infeasible optimization problems using large language models.pdf differ diff --git a/papers/diagnostic utility of endocan and interleukins for lateonset neonatal sepsis.pdf b/papers/diagnostic utility of endocan and interleukins for lateonset neonatal sepsis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9482d71f7469ff9ed527bea89f3a5f1eec8ec04e --- /dev/null +++ b/papers/diagnostic utility of endocan and interleukins for lateonset neonatal sepsis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6673dabb1fa4837645f81040a85f51f6d548e5d288f2404b9d04c19ffeb0a7a +size 1105112 diff --git a/papers/dialog2api taskoriented dialogue with api description and example programs.pdf b/papers/dialog2api taskoriented dialogue with api description and example programs.pdf index 70049b8beb01c0ef87af611f7e6dfc62973d02ca..96300d1385df3b06761264177af4fb48e115b569 100644 Binary files a/papers/dialog2api taskoriented dialogue with api description and example programs.pdf and b/papers/dialog2api taskoriented dialogue with api description and example programs.pdf differ diff --git a/papers/dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf b/papers/dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf index fccdd8bc2f1d434e3a2e9fd4a5906478c1dad475..619aea79f54972682e631e666852feab3833308a 100644 Binary files a/papers/dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf and b/papers/dialogstudio towards richest and most diverse unified dataset collection for conversational ai.pdf differ diff --git a/papers/dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf b/papers/dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf index 55749426b7e25a4501a1221f747d458e87107262..acc6a6cbe8dade49774d6be22396568796203ea7 100644 Binary files a/papers/dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf and b/papers/dialogue for prompting a policygradientbased discrete prompt optimization for fewshot learning.pdf differ diff --git a/papers/dictionarybased phraselevel prompting of large language models for machine translation.pdf b/papers/dictionarybased phraselevel prompting of large language models for machine translation.pdf index 8451f3297fbbfd527f724ee6f20946ab72755f68..c58629c7299b93ebb324db8de5451502bd0fc052 100644 Binary files a/papers/dictionarybased phraselevel prompting of large language models for machine translation.pdf and b/papers/dictionarybased phraselevel prompting of large language models for machine translation.pdf differ diff --git a/papers/diffender diffusionbased adversarial defense against patch attacks.pdf b/papers/diffender diffusionbased adversarial defense against patch attacks.pdf index 680f9c2738485f6c53440637d249e48300f2f071..dd10f8ab1816add37b29119cf55eab3a897717ff 100644 --- a/papers/diffender diffusionbased adversarial 
defense against patch attacks.pdf +++ b/papers/diffender diffusionbased adversarial defense against patch attacks.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6ac1c84568f8a38614fc8c5686a14282480f25de7a7e1767f5d96393729a769a -size 2609029 +oid sha256:cc4df23351388d401d0740973bdaf0132d456d9f2b2944a3a075ff8afba4ba79 +size 2746747 diff --git a/papers/differentiable entailment for parameter efficient few shot learning.pdf b/papers/differentiable entailment for parameter efficient few shot learning.pdf index 991ebdbfea8caaaa88fd4597e02c046722e1912b..feace44332841ccdabb1ba249c4ce19f507c7e5b 100644 Binary files a/papers/differentiable entailment for parameter efficient few shot learning.pdf and b/papers/differentiable entailment for parameter efficient few shot learning.pdf differ diff --git a/papers/direct multimodal fewshot learning of speech and images.pdf b/papers/direct multimodal fewshot learning of speech and images.pdf index ad78bc50d95d1a78bcfeb0ccbc469dbaa04349f0..3865632db1f6d042677df1139f135d2ee1d1831a 100644 Binary files a/papers/direct multimodal fewshot learning of speech and images.pdf and b/papers/direct multimodal fewshot learning of speech and images.pdf differ diff --git a/papers/discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf b/papers/discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf index 932da7c3744e5dd16843172e93effedc951d0448..496567918c073875c8261f87ad795984cc9ce534 100644 Binary files a/papers/discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf and b/papers/discern and answer mitigating the impact of misinformation in retrievalaugmented models with discriminators.pdf differ diff --git a/papers/discrete and soft prompting for multilingual models.pdf b/papers/discrete and soft prompting for multilingual models.pdf index 09d755d23369613867f6e81c24a21312b8f8d459..bc631475b2f45093abaffaf50b729ddae6259d77 100644 Binary files a/papers/discrete and soft prompting for multilingual models.pdf and b/papers/discrete and soft prompting for multilingual models.pdf differ diff --git a/papers/discrete prompt compression with reinforcement learning.pdf b/papers/discrete prompt compression with reinforcement learning.pdf index b4d70ee48e0eb3d12548c6d73940c2bae367f7c5..1aa55d6f573315c6173dcb6286fe2c904c6f837f 100644 Binary files a/papers/discrete prompt compression with reinforcement learning.pdf and b/papers/discrete prompt compression with reinforcement learning.pdf differ diff --git a/papers/discrete prompt optimization via constrained generation for zeroshot reranker.pdf b/papers/discrete prompt optimization via constrained generation for zeroshot reranker.pdf index d93ce07cf05a690dbd0ef052d672be98b2a51292..ed2b3a769ab3c49ffcc373e3f9c3db93c49ce1d4 100644 Binary files a/papers/discrete prompt optimization via constrained generation for zeroshot reranker.pdf and b/papers/discrete prompt optimization via constrained generation for zeroshot reranker.pdf differ diff --git a/papers/disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf b/papers/disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf index 6308027bf41d6eb78a21c4c28bd34f838c4b4801..38973a436bba17c324bdfab60fc3f0f866a897e6 100644 Binary files a/papers/disentangle 
and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf and b/papers/disentangle and remerge interventional knowledge distillation for fewshot object detection from a conditional causal perspective.pdf differ diff --git a/papers/dissecting incontext learning of translations in gpt3.pdf b/papers/dissecting incontext learning of translations in gpt3.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9dda484cfe1113fd47b53ba08a38adf9e223afd5 --- /dev/null +++ b/papers/dissecting incontext learning of translations in gpt3.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5444cac0c7affc42e22adaf8193f280f1958fb99d038b77828a5f497b46c2303 +size 205577 diff --git a/papers/dissecting incontext learning of translations in gpts.pdf b/papers/dissecting incontext learning of translations in gpts.pdf index 25c342d49cb557beee8ec573561091dbefbd9a33..ce216e81ede2f36299a9642e200adde6f6f15e3d 100644 Binary files a/papers/dissecting incontext learning of translations in gpts.pdf and b/papers/dissecting incontext learning of translations in gpts.pdf differ diff --git a/papers/distillation of encoderdecoder transformers for sequence labelling.pdf b/papers/distillation of encoderdecoder transformers for sequence labelling.pdf index 6432724c5bec1ca4b415f1c620011e833cac58b7..090cc3a9c59d4665155b495878aa2b37ca6d8863 100644 Binary files a/papers/distillation of encoderdecoder transformers for sequence labelling.pdf and b/papers/distillation of encoderdecoder transformers for sequence labelling.pdf differ diff --git a/papers/distilled feature fields enable fewshot languageguided manipulation.pdf b/papers/distilled feature fields enable fewshot languageguided manipulation.pdf index 01ee3fa081a668156bb90ddaec9fa2c5cc30badd..b5391a40b425929e58a1b0186a38d9e71c86c52e 100644 --- a/papers/distilled feature fields enable fewshot languageguided manipulation.pdf +++ b/papers/distilled feature fields enable fewshot languageguided manipulation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d1dca7845601f20fa24280341c96365ace77327bba0ef9b8bb1fdda5b45fe0fc -size 25243398 +oid sha256:797b9348678521c3057331aa4582bf82415c376371ad56bb68f323f8d1a58fe8 +size 21786550 diff --git a/papers/distractor generation for multiplechoice questions with predictive prompting and large language models.pdf b/papers/distractor generation for multiplechoice questions with predictive prompting and large language models.pdf index ecef316afbd41616503d026fe97743cd520cd8c5..69fa26f75d7c323530d327f32bb75821076fc9f0 100644 Binary files a/papers/distractor generation for multiplechoice questions with predictive prompting and large language models.pdf and b/papers/distractor generation for multiplechoice questions with predictive prompting and large language models.pdf differ diff --git a/papers/diverse demonstrations improve incontext compositional generalization.pdf b/papers/diverse demonstrations improve incontext compositional generalization.pdf index 5de6a7191d6a0379e99103f65c9f23b44e95f076..56d3050d6b18f523ed5c4fccc5d67cf8fa53c17a 100644 Binary files a/papers/diverse demonstrations improve incontext compositional generalization.pdf and b/papers/diverse demonstrations improve incontext compositional generalization.pdf differ diff --git a/papers/diverse retrievalaugmented incontext learning for dialogue state tracking.pdf b/papers/diverse retrievalaugmented incontext learning for dialogue state tracking.pdf index 
a5b3c5df5d2aec50cf3fdc86bb1186c2a5815af1..a9b00be6443ae4231a6cbfbeb7c35b23c7023f30 100644 Binary files a/papers/diverse retrievalaugmented incontext learning for dialogue state tracking.pdf and b/papers/diverse retrievalaugmented incontext learning for dialogue state tracking.pdf differ diff --git a/papers/divide and prompt chain of thought prompting for texttosql.pdf b/papers/divide and prompt chain of thought prompting for texttosql.pdf index 298de7c362b9e365f93392819b6cd5ece5f8860a..2c826fb1bc6fca507af6a2fb924b67e8c84ebb3e 100644 Binary files a/papers/divide and prompt chain of thought prompting for texttosql.pdf and b/papers/divide and prompt chain of thought prompting for texttosql.pdf differ diff --git a/papers/do emergent abilities exist in quantized large language models an empirical study.pdf b/papers/do emergent abilities exist in quantized large language models an empirical study.pdf index f9c261c4268f1f4db0afef41f7875fd4d334e148..347a9ac23f4897e774a64e54877c7dd782b480e1 100644 Binary files a/papers/do emergent abilities exist in quantized large language models an empirical study.pdf and b/papers/do emergent abilities exist in quantized large language models an empirical study.pdf differ diff --git a/papers/do gpts produce less literal translations.pdf b/papers/do gpts produce less literal translations.pdf index f9b52eb0ecff29ea3e7bf280b209f39f1400003c..a504688388c464d6ec40e78c90b1a363c4cbc5e0 100644 Binary files a/papers/do gpts produce less literal translations.pdf and b/papers/do gpts produce less literal translations.pdf differ diff --git a/papers/do language models learn about legal entity types during pretraining.pdf b/papers/do language models learn about legal entity types during pretraining.pdf index 0e4ae0459466f0cecb7f03f2ec81ccabb60e862e..18d38e55985b9a57f9358e8d72e5ff5e3481e236 100644 Binary files a/papers/do language models learn about legal entity types during pretraining.pdf and b/papers/do language models learn about legal entity types during pretraining.pdf differ diff --git a/papers/do large language models know what they don't know.pdf b/papers/do large language models know what they don't know.pdf index ffb08ff6b849c70c57ee453f3bd56b4d34ceb9dd..a1a0b64def04fbed09336da5c9ba12b3357bbeb8 100644 Binary files a/papers/do large language models know what they don't know.pdf and b/papers/do large language models know what they don't know.pdf differ diff --git "a/papers/do large language models know what they don\342\200\231t know.pdf" "b/papers/do large language models know what they don\342\200\231t know.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..c9e48675113eced2ea4454cc0c73bceb9a087727 --- /dev/null +++ "b/papers/do large language models know what they don\342\200\231t know.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cf32917140aff761cbd477cd708fdd4922aca7c63a584d439b1d6535943f019 +size 580163 diff --git a/papers/do llms possess a personality making the mbti test an amazing evaluation for large language models.pdf b/papers/do llms possess a personality making the mbti test an amazing evaluation for large language models.pdf index 36c1fe50b22c368ee285d8e41f222e313f4ce020..027e1c709ae04b5962432a31c647094c320df640 100644 Binary files a/papers/do llms possess a personality making the mbti test an amazing evaluation for large language models.pdf and b/papers/do llms possess a personality making the mbti test an amazing evaluation for large language models.pdf differ diff --git a/papers/do physicians know 
how to prompt the need for automatic prompt optimization help in clinical note generation.pdf b/papers/do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf index aa093d572bbf8e6d2d575b21666106c9dae41075..8ec082dbe84e5a22f794d25a86dbac293fa420be 100644 Binary files a/papers/do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf and b/papers/do physicians know how to prompt the need for automatic prompt optimization help in clinical note generation.pdf differ diff --git a/papers/do pretrained transformers really learn incontext by gradient descent.pdf b/papers/do pretrained transformers really learn incontext by gradient descent.pdf index 8e41337314ca4b3b11e8a8f71ab22a859f4b076e..67bbad44f611f0c29db08e5e8d0c05765855205e 100644 --- a/papers/do pretrained transformers really learn incontext by gradient descent.pdf +++ b/papers/do pretrained transformers really learn incontext by gradient descent.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:41b284c1e5a6dd936eecd69baadd810e88b821201f14b752790071b44ebfbda1 -size 1618478 +oid sha256:12c324a0427166685c987b1154b9b6e972437675baa402ccdaaf0d44e32aefa3 +size 4183739 diff --git a/papers/do prompt positions really matter.pdf b/papers/do prompt positions really matter.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8fd8af638f5f1f7833d788ab08054e130db2616a --- /dev/null +++ b/papers/do prompt positions really matter.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d7e81e4b18396683f495080e112f0faecc53c3ab3d1cb2a372756afb33926cf +size 399811 diff --git a/papers/do we still need clinical language models.pdf b/papers/do we still need clinical language models.pdf index 914f344a798445f973b8f026df8c08c7b26f556a..60842a5155192987effd245b85c61d06d8ecb7a7 100644 Binary files a/papers/do we still need clinical language models.pdf and b/papers/do we still need clinical language models.pdf differ diff --git a/papers/does correction remain a problem for large language models.pdf b/papers/does correction remain a problem for large language models.pdf index 3a8d34bf990505f1345a2488cf37583269f1d022..8a23258ae4253e9df16ab150b57d78ef1cfe869e 100644 Binary files a/papers/does correction remain a problem for large language models.pdf and b/papers/does correction remain a problem for large language models.pdf differ diff --git a/papers/domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf b/papers/domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf index 9d23f96bb3d0f04345cf7502992b031103d4db06..687fa5140461c6fe8f473670bec36fca14b1ab66 100644 Binary files a/papers/domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf and b/papers/domain knowledge distillation from large language model an empirical study in the autonomous driving domain.pdf differ diff --git a/papers/don't stop pretraining make promptbased finetuning powerful learner.pdf b/papers/don't stop pretraining make promptbased finetuning powerful learner.pdf index be3e48c29018d4da084b1a09d86e6af663abb8da..7c0d93ec43bac94eab88251e6341cbb9c5c2f102 100644 Binary files a/papers/don't stop pretraining make promptbased finetuning powerful learner.pdf and b/papers/don't stop pretraining make promptbased finetuning powerful learner.pdf differ diff --git 
"a/papers/don\342\200\231t generate, discriminate a proposal for grounding language models to realworld environments.pdf" "b/papers/don\342\200\231t generate, discriminate a proposal for grounding language models to realworld environments.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..3452cb937e31e20d17c8e89e4b60fb0590b105f3 --- /dev/null +++ "b/papers/don\342\200\231t generate, discriminate a proposal for grounding language models to realworld environments.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a89ac8bf392b40cc734509a0ef06a4b69d3323c08518ae7e0200058f4003ac8f +size 1636961 diff --git a/papers/draw your art dream diverse digital art synthesis with multimodal guided diffusion.pdf b/papers/draw your art dream diverse digital art synthesis with multimodal guided diffusion.pdf index 26e4cf333066c72971c43e479d0f366443b0e5e4..565b6fbc4924535e15433f2a3c417beb607de672 100644 --- a/papers/draw your art dream diverse digital art synthesis with multimodal guided diffusion.pdf +++ b/papers/draw your art dream diverse digital art synthesis with multimodal guided diffusion.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:60d0504ad29e1e716c621685cf1e7233eee68f1e7a5a43beec7c0c8560f82bc9 +oid sha256:aa97829b642ae10d804dfe28a5b93bc43570d48f60135ed9c76f70acd8a5a30d size 16752512 diff --git a/papers/dricl demonstrationretrieved incontext learning.pdf b/papers/dricl demonstrationretrieved incontext learning.pdf index f24b7fe9b4d06176e6a0477700bb4689a2aee7db..85bbb45c4b8b2beff031241378d20b3efd33d551 100644 Binary files a/papers/dricl demonstrationretrieved incontext learning.pdf and b/papers/dricl demonstrationretrieved incontext learning.pdf differ diff --git a/papers/dsp discriminative soft prompts for zeroshot entity and relation extraction.pdf b/papers/dsp discriminative soft prompts for zeroshot entity and relation extraction.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8ffdb67a46aa8d8e1d2051a9449d55de5038d7a0 --- /dev/null +++ b/papers/dsp discriminative soft prompts for zeroshot entity and relation extraction.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d39b0359a2a6f9342fdd89a4ebdae5a0a208456fc187edc1c441a07aa096132 +size 705745 diff --git a/papers/dspy assertions computational constraints for selfrefining language model pipelines.pdf b/papers/dspy assertions computational constraints for selfrefining language model pipelines.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a9342cf33c32b4561405db306dec4e4709f0b320 --- /dev/null +++ b/papers/dspy assertions computational constraints for selfrefining language model pipelines.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f46151194d676e4b4fbfae71ca798906cde8ffbe7e4af35a70db2c4c109e0d18 +size 207120 diff --git a/papers/dspy compiling declarative language model calls into selfimproving pipelines.pdf b/papers/dspy compiling declarative language model calls into selfimproving pipelines.pdf index df4b2e02531cc4d6699288ef4a5fc737a645925a..964fa2bc5f5651f05b5a62505f3418d605384862 100644 Binary files a/papers/dspy compiling declarative language model calls into selfimproving pipelines.pdf and b/papers/dspy compiling declarative language model calls into selfimproving pipelines.pdf differ diff --git a/papers/dual contextguided continuous prompt tuning for fewshot learning.pdf b/papers/dual contextguided continuous prompt tuning for fewshot learning.pdf new file mode 
100644 index 0000000000000000000000000000000000000000..3f6cf4b9578447fec02f64041b72dadb824475f1 --- /dev/null +++ b/papers/dual contextguided continuous prompt tuning for fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cdc91902520978dd3a1a6f61809266164870cc18b89b67d822751bbf0ffb2b4 +size 629276 diff --git a/papers/dynamar dynamic prompt with mask token representation.pdf b/papers/dynamar dynamic prompt with mask token representation.pdf index 7b98289be4b35badc5604e8e8d6566591ef84c67..d03ed9d1a946aca7dfd4d7944a63e9b189a95f8c 100644 Binary files a/papers/dynamar dynamic prompt with mask token representation.pdf and b/papers/dynamar dynamic prompt with mask token representation.pdf differ diff --git a/papers/dynamic strategy chain dynamic zeroshot cot for long mental health support generation.pdf b/papers/dynamic strategy chain dynamic zeroshot cot for long mental health support generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..db28f228819db64f58a2037ff9af787977c72102 --- /dev/null +++ b/papers/dynamic strategy chain dynamic zeroshot cot for long mental health support generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18b93ff6e6f3df43550b2f78df93aca1eefccaf5bcab21f0d28d9d3219b60a09 +size 411355 diff --git a/papers/early diagnostic markers of lateonset neonatal sepsis.pdf b/papers/early diagnostic markers of lateonset neonatal sepsis.pdf deleted file mode 100644 index 0a19ca2941e74cd4f78a1cb5629c1ecb488bec54..0000000000000000000000000000000000000000 Binary files a/papers/early diagnostic markers of lateonset neonatal sepsis.pdf and /dev/null differ diff --git a/papers/easynlp a comprehensive and easytouse toolkit for natural language processing.pdf b/papers/easynlp a comprehensive and easytouse toolkit for natural language processing.pdf index eeff22254ce6aaa8d6b6b01343fa0e2fe21e098a..bc98108a0dc3f207e45deab5a9573721cad37ee7 100644 Binary files a/papers/easynlp a comprehensive and easytouse toolkit for natural language processing.pdf and b/papers/easynlp a comprehensive and easytouse toolkit for natural language processing.pdf differ diff --git a/papers/ebhaam at semeval2023 task 1 a clipbased approach for comparing crossmodality and unimodality in visual word sense disambiguation.pdf b/papers/ebhaam at semeval2023 task 1 a clipbased approach for comparing crossmodality and unimodality in visual word sense disambiguation.pdf deleted file mode 100644 index 2a7d6e8273c27ff61be3b94597ba47689565ac5a..0000000000000000000000000000000000000000 --- a/papers/ebhaam at semeval2023 task 1 a clipbased approach for comparing crossmodality and unimodality in visual word sense disambiguation.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f71235a45cab1517ae418574b6a99af3e7b92ed78b87c97d73a5a9f400e1d9e0 -size 6610400 diff --git a/papers/echoprompt instructing the model to rephrase queries for improved incontext learning.pdf b/papers/echoprompt instructing the model to rephrase queries for improved incontext learning.pdf index 756b205adc54046512e6811faa2a5179a26dff84..756bc433ce3321ec178134f35ee689f534cfcae5 100644 Binary files a/papers/echoprompt instructing the model to rephrase queries for improved incontext learning.pdf and b/papers/echoprompt instructing the model to rephrase queries for improved incontext learning.pdf differ diff --git a/papers/ecologically valid explanations for label variation in nli.pdf b/papers/ecologically valid explanations 
for label variation in nli.pdf index dfca0d6704ef28a8e871de429219964dfaa0722f..e3daeddd664a6374dc296f30055543cf330b3b36 100644 Binary files a/papers/ecologically valid explanations for label variation in nli.pdf and b/papers/ecologically valid explanations for label variation in nli.pdf differ diff --git a/papers/editing arbitrary propositions in llms without subject labels.pdf b/papers/editing arbitrary propositions in llms without subject labels.pdf new file mode 100644 index 0000000000000000000000000000000000000000..adf02e3c32fe584bc93667f8a6d08b32c56bc085 --- /dev/null +++ b/papers/editing arbitrary propositions in llms without subject labels.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cfbc91032265b80a391bc0b1d3b6d97f5fdb78f3a6cb1284edf0e6eed6ce244 +size 585139 diff --git a/papers/effective test generation using pretrained large language models and mutation testing.pdf b/papers/effective test generation using pretrained large language models and mutation testing.pdf index 4e3338fcc960e7fcd7c07733c0a418d554c24915..fe38470898a53c274457f75537f83a1a1e21865b 100644 Binary files a/papers/effective test generation using pretrained large language models and mutation testing.pdf and b/papers/effective test generation using pretrained large language models and mutation testing.pdf differ diff --git a/papers/efficient blackbox adversarial attacks on neural text detectors.pdf b/papers/efficient blackbox adversarial attacks on neural text detectors.pdf index c5e13af98122187ba95012692b9a504e63b627a3..c89472c76896e46acb139ae38e2c246c26c55882 100644 Binary files a/papers/efficient blackbox adversarial attacks on neural text detectors.pdf and b/papers/efficient blackbox adversarial attacks on neural text detectors.pdf differ diff --git a/papers/efficient open domain multihop question answering with fewshot data synthesis.pdf b/papers/efficient open domain multihop question answering with fewshot data synthesis.pdf index cd01b8d6385d0e7043760f7eb28c00db9766f71e..48fabef173b83fcbd3d4994df865a006fadde1a7 100644 Binary files a/papers/efficient open domain multihop question answering with fewshot data synthesis.pdf and b/papers/efficient open domain multihop question answering with fewshot data synthesis.pdf differ diff --git a/papers/efficient prompting via dynamic incontext learning.pdf b/papers/efficient prompting via dynamic incontext learning.pdf index 4b47483f1ad69d883ec059cd15533b9dcad9ecba..61742a7fed646d3e49208370006fa8a0ae1586d0 100644 Binary files a/papers/efficient prompting via dynamic incontext learning.pdf and b/papers/efficient prompting via dynamic incontext learning.pdf differ diff --git a/papers/embedding democratic values into social media ais via societal objective functions.pdf b/papers/embedding democratic values into social media ais via societal objective functions.pdf index 297bf5d1307fa33fab6a0fcc7e67bd498690f68d..b40919a1bd095794943412146a3fba38775fc5d0 100644 --- a/papers/embedding democratic values into social media ais via societal objective functions.pdf +++ b/papers/embedding democratic values into social media ais via societal objective functions.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:30d05a36eb13ab5400ca4f06f45267dab53d4b0fd3e7eaf183dea69df08490f9 -size 8120076 +oid sha256:17ef43cf0e4d8fe01b75c2e8a45b139d50ab82eef2dd45ddb946a92c5ecb58a0 +size 8178619 diff --git a/papers/emerging technology in acute resuscitation monitoring.pdf b/papers/emerging technology in acute resuscitation monitoring.pdf index 
50b89a60edd3cd6ca1786f3335df7e53e1334a76..32570ec8046007ad843e9d6a6d04853482686e0e 100644 Binary files a/papers/emerging technology in acute resuscitation monitoring.pdf and b/papers/emerging technology in acute resuscitation monitoring.pdf differ diff --git a/papers/emotionconditioned text generation through automatic prompt optimization.pdf b/papers/emotionconditioned text generation through automatic prompt optimization.pdf index 5527dd76d932cf73de8a2b4dac8ae2096f5f390e..c1a7d88f4ded5bc1f787a2503311b4e6c811de46 100644 Binary files a/papers/emotionconditioned text generation through automatic prompt optimization.pdf and b/papers/emotionconditioned text generation through automatic prompt optimization.pdf differ diff --git a/papers/empower textattributed graphs learning with large language models (llms).pdf b/papers/empower textattributed graphs learning with large language models (llms).pdf index c7e616e71d3e03f9e177610f092d33d56696da1a..47e80b12ecbff0a17e2f4bfbb861c15a58adce41 100644 Binary files a/papers/empower textattributed graphs learning with large language models (llms).pdf and b/papers/empower textattributed graphs learning with large language models (llms).pdf differ diff --git a/papers/empowering conversational agents using semantic incontext learning.pdf b/papers/empowering conversational agents using semantic incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cc7e3853f03fd38bfc6d76026e22be81ab2120a8 --- /dev/null +++ b/papers/empowering conversational agents using semantic incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3126ed8df920b0fc35d0f544564383af238d1afc17c50ac6723b8770e4bacdca +size 483501 diff --git a/papers/enable language models to implicitly learn selfimprovement from data.pdf b/papers/enable language models to implicitly learn selfimprovement from data.pdf index 923dd9a3f93e9793499298bfdf99c3edf0a89a61..bf8b095242d90973735f887016479fea1bc5a356 100644 Binary files a/papers/enable language models to implicitly learn selfimprovement from data.pdf and b/papers/enable language models to implicitly learn selfimprovement from data.pdf differ diff --git a/papers/enabling conversational interaction with mobile ui using large language models.pdf b/papers/enabling conversational interaction with mobile ui using large language models.pdf index e61d8ef5f2dccd2e0f7bb7bd9c949f60cb0f3e6b..acfadaccab0e261fe31aaef58099c6f8c2e003ae 100644 --- a/papers/enabling conversational interaction with mobile ui using large language models.pdf +++ b/papers/enabling conversational interaction with mobile ui using large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:50ffb5dc4a568a38bb178d50b25a5b47eb10dba6784802c78941f9d165f54b0e +oid sha256:755a1c9a51df0965ce10281a0668de1120ad1a3c752bd63f925e14428c5f069d size 9210266 diff --git a/papers/engineering a dialogue with klara, or ethical invention with generative ai in the writing classroom.pdf b/papers/engineering a dialogue with klara, or ethical invention with generative ai in the writing classroom.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4f6f8baea064a31c5451eee9eba3aa8689585e79 --- /dev/null +++ b/papers/engineering a dialogue with klara, or ethical invention with generative ai in the writing classroom.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3e1f0c4320ddadab6a42d857a1ddf24f2d9d2bc4aa7d603f38a6a0bc22c662f +size 830028 diff --git a/papers/enhancing 
arabic content generation with prompt augmentation using integrated gpt and texttoimage models.pdf b/papers/enhancing arabic content generation with prompt augmentation using integrated gpt and texttoimage models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fbf08b384c3871cdac7629fb394ce98fb4f9d1b1 --- /dev/null +++ b/papers/enhancing arabic content generation with prompt augmentation using integrated gpt and texttoimage models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f6089c875d61806778ea876ef58fbc56cff445dda4a005f601760eb9aa71bf9 +size 1038062 diff --git a/papers/enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf b/papers/enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf index 4723febc6caaa5261b5dcf87491d2c2387221345..b219b52227c1f7cfdce6313d21ecf5875e2cad9c 100644 Binary files a/papers/enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf and b/papers/enhancing clip with gpt4 harnessing visual descriptions as prompts.pdf differ diff --git a/papers/enhancing incontext learning with answer feedback for multispan question answering.pdf b/papers/enhancing incontext learning with answer feedback for multispan question answering.pdf index 99301816cc494384180b31892a76b9e3b13e7924..22a63f1723a197c78fd395f79bfaac8fc639685c 100644 Binary files a/papers/enhancing incontext learning with answer feedback for multispan question answering.pdf and b/papers/enhancing incontext learning with answer feedback for multispan question answering.pdf differ diff --git a/papers/enhancing medical task performance in gpt4v a comprehensive study on prompt engineering strategies.pdf b/papers/enhancing medical task performance in gpt4v a comprehensive study on prompt engineering strategies.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3ea60fea0efc2a130b2a3df8a0046b37ab4607fe --- /dev/null +++ b/papers/enhancing medical task performance in gpt4v a comprehensive study on prompt engineering strategies.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc0c458edcbe5531119bc0fccbd50804fae71240ea469b38ba152008b5bfe1d8 +size 16657611 diff --git a/papers/enhancing reasoning capabilities by instruction learning and chainofthoughts for implicit discourse relation recognition.pdf b/papers/enhancing reasoning capabilities by instruction learning and chainofthoughts for implicit discourse relation recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c7c63c26075a0c8844d513ff2188825043e92966 --- /dev/null +++ b/papers/enhancing reasoning capabilities by instruction learning and chainofthoughts for implicit discourse relation recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fcb912dea20241c77258549679832a3534f9a8e1acc4656638e89abce961dac +size 1033073 diff --git a/papers/enhancing small medical learners with privacypreserving contextual prompting.pdf b/papers/enhancing small medical learners with privacypreserving contextual prompting.pdf index 836cbff526d0da48688ab007a62ae430062fbd78..147351e4909cf39e5718aae612d646eec5828d49 100644 Binary files a/papers/enhancing small medical learners with privacypreserving contextual prompting.pdf and b/papers/enhancing small medical learners with privacypreserving contextual prompting.pdf differ diff --git a/papers/enhancing texttosql capabilities of large language models a study on prompt design strategies.pdf b/papers/enhancing texttosql capabilities of 
large language models a study on prompt design strategies.pdf new file mode 100644 index 0000000000000000000000000000000000000000..da9b6df6cb31ee909dbd59627aafa60fc9fc993e --- /dev/null +++ b/papers/enhancing texttosql capabilities of large language models a study on prompt design strategies.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:afcaab181eeb1698b0a7ec2f6955c207ec35386037f39a310d771a47c64d9489 +size 1590089 diff --git a/papers/ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf b/papers/ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf index f9ac62158ab98dd30cbdd5d82dbc196450c5e35e..d2be1dfd2d458d0a399d8e7625348e0139c0ece6 100644 Binary files a/papers/ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf and b/papers/ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms.pdf differ diff --git a/papers/entity matching using large language models.pdf b/papers/entity matching using large language models.pdf index 7f21ff5e494c1558cf962b39153745e3f87c843f..ba4ed3b47fa1a09a0f15ca90c8de8c3d218012a3 100644 Binary files a/papers/entity matching using large language models.pdf and b/papers/entity matching using large language models.pdf differ diff --git a/papers/epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf b/papers/epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf index 645baad61d4d5151a6d60e377c8b128547d046f2..d13ff1de99365f4ad78487fe98689b8f13253082 100644 Binary files a/papers/epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf and b/papers/epa easy prompt augmentation on large language models via multiple sources and multiple targets.pdf differ diff --git a/papers/estimating uncertainty in multimodal foundation models using public internet data.pdf b/papers/estimating uncertainty in multimodal foundation models using public internet data.pdf index 9cadad03656363d0732c2c1f1edd15abf0f811ee..f01399e02a3e9b0b412867814e079bba279d99bf 100644 --- a/papers/estimating uncertainty in multimodal foundation models using public internet data.pdf +++ b/papers/estimating uncertainty in multimodal foundation models using public internet data.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8a2696154c698aadef2c873c54a2d435d50af020b850797c65e32f51faa7a37a -size 1496423 +oid sha256:b194abd42b3bf371b7ec21f36e939fa1dd06d04b0b2c45473fe2a3bfe83965f3 +size 1510656 diff --git a/papers/evaluating chatgpt textmining of clinical records for obesity monitoring.pdf b/papers/evaluating chatgpt textmining of clinical records for obesity monitoring.pdf index 9202c1e970bdd98981a229cc197c1cb9619d04a9..19313e2cdc974bf0d90dc072f2b183e9d32d686f 100644 Binary files a/papers/evaluating chatgpt textmining of clinical records for obesity monitoring.pdf and b/papers/evaluating chatgpt textmining of clinical records for obesity monitoring.pdf differ diff --git a/papers/evaluating large language models on graphs performance insights and comparative analysis.pdf b/papers/evaluating large language models on graphs performance insights and comparative analysis.pdf index 6e552efc0ecc6740cc931a38efa489ede8afe99d..7670d47535d7332e676e4150f7f3e4cf7d8f8044 100644 Binary files a/papers/evaluating large language models on graphs performance insights and comparative analysis.pdf and 
b/papers/evaluating large language models on graphs performance insights and comparative analysis.pdf differ diff --git a/papers/evaluating llms for privilegeescalation scenarios.pdf b/papers/evaluating llms for privilegeescalation scenarios.pdf index 435059ce76cf54b77ef1da776ddbe0990ff5b559..b59f3d15be5b183be7b1cb0682073b7556be07ba 100644 Binary files a/papers/evaluating llms for privilegeescalation scenarios.pdf and b/papers/evaluating llms for privilegeescalation scenarios.pdf differ diff --git a/papers/evaluating the instructionfollowing robustness of large language models to prompt injection.pdf b/papers/evaluating the instructionfollowing robustness of large language models to prompt injection.pdf index faba2e11d798fab869d154abe1e948c0f13d1a25..b9d897abeaeaa3c4421fa714a7c2f3d7d36ef514 100644 Binary files a/papers/evaluating the instructionfollowing robustness of large language models to prompt injection.pdf and b/papers/evaluating the instructionfollowing robustness of large language models to prompt injection.pdf differ diff --git a/papers/evaluation is all you need prompting generative large language models for annotation tasks in the social sciences a primer using open models.pdf b/papers/evaluation is all you need prompting generative large language models for annotation tasks in the social sciences a primer using open models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1a982fcf0cd30dc1b321eb62d482e06b50ff0902 --- /dev/null +++ b/papers/evaluation is all you need prompting generative large language models for annotation tasks in the social sciences a primer using open models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c342b0b00c3f1f7e38179cc66564345595158dfbc2f997ee88885a83c31ee45 +size 616520 diff --git a/papers/evaluation of chatgpt family of models for biomedical reasoning and classification.pdf b/papers/evaluation of chatgpt family of models for biomedical reasoning and classification.pdf index 8c38ef814108e452ec511ec215e6b754e969d7d4..29c454cc2ac382c789d8792b04fce70ffbc94339 100644 Binary files a/papers/evaluation of chatgpt family of models for biomedical reasoning and classification.pdf and b/papers/evaluation of chatgpt family of models for biomedical reasoning and classification.pdf differ diff --git a/papers/evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf b/papers/evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf index 56aca4dd71d3816432b4f371353a9f2d42fdc3aa..d1b8aa3aec21d8e93c4072a9e59d606e611a050d 100644 Binary files a/papers/evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf and b/papers/evaluation of gpt35 and gpt4 for supporting realworld information needs in healthcare delivery.pdf differ diff --git a/papers/events realm event reasoning of entity states via language models.pdf b/papers/events realm event reasoning of entity states via language models.pdf index fddb0d8266a4c6da0893598a8a0864ba17a95805..4926fa517f8c7f1b4693997e8246281aa464e2f4 100644 Binary files a/papers/events realm event reasoning of entity states via language models.pdf and b/papers/events realm event reasoning of entity states via language models.pdf differ diff --git a/papers/evil geniuses delving into the safety of llmbased agents.pdf b/papers/evil geniuses delving into the safety of llmbased agents.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..245c481fd43c36038a48d1a898f8c7e594e93222 --- /dev/null +++ b/papers/evil geniuses delving into the safety of llmbased agents.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c83cfa5e8bedda6fa7b2468ac0cf8f3bc38e06180d96081f59b63249a03821a +size 1895337 diff --git a/papers/evolutionary multiobjective optimization of large language model prompts for balancing sentiments.pdf b/papers/evolutionary multiobjective optimization of large language model prompts for balancing sentiments.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6637caea6505ef9c140100c922dff78cc0b455ba --- /dev/null +++ b/papers/evolutionary multiobjective optimization of large language model prompts for balancing sentiments.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:283100bc54330ce52535485dd0db5eab97c9ae6ac354aed3aeda209e0d4b6f68 +size 670826 diff --git a/papers/exnet efficient incontext learning for dataless text classification.pdf b/papers/exnet efficient incontext learning for dataless text classification.pdf index 087f14b2c7579efd49bbbd5cd6c853c9df79a064..4eba089d4fdf6b828f95b11c36dcd6b24a2e6725 100644 Binary files a/papers/exnet efficient incontext learning for dataless text classification.pdf and b/papers/exnet efficient incontext learning for dataless text classification.pdf differ diff --git a/papers/expertprompting instructing large language models to be distinguished experts.pdf b/papers/expertprompting instructing large language models to be distinguished experts.pdf index 208dff4318d73566ac7d1c69948088b3b3a7dde3..63b70e1f195156093c2d4e865fda419b1e1bb254 100644 Binary files a/papers/expertprompting instructing large language models to be distinguished experts.pdf and b/papers/expertprompting instructing large language models to be distinguished experts.pdf differ diff --git a/papers/explainable claim verification via knowledgegrounded reasoning with large language models.pdf b/papers/explainable claim verification via knowledgegrounded reasoning with large language models.pdf index 951ccf332ee3ef19285691ecbec59754f91db00c..8c0682fab2c689ab9445043a32c74fa07a5624c3 100644 Binary files a/papers/explainable claim verification via knowledgegrounded reasoning with large language models.pdf and b/papers/explainable claim verification via knowledgegrounded reasoning with large language models.pdf differ diff --git a/papers/explicit knowledge transfer for weaklysupervised code generation.pdf b/papers/explicit knowledge transfer for weaklysupervised code generation.pdf index 5660e147567332be625540f5e911ce5c4cd4cac2..2b08da773f3aef6f9f8c850adfba764ad733378f 100644 Binary files a/papers/explicit knowledge transfer for weaklysupervised code generation.pdf and b/papers/explicit knowledge transfer for weaklysupervised code generation.pdf differ diff --git a/papers/exploiting language model prompts using similarity measures a case study on the wordincontext task.pdf b/papers/exploiting language model prompts using similarity measures a case study on the wordincontext task.pdf new file mode 100644 index 0000000000000000000000000000000000000000..41e40b2dc261232f03eaf1ee99dbda0c4d1b00a4 --- /dev/null +++ b/papers/exploiting language model prompts using similarity measures a case study on the wordincontext task.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:770923a19a8bce32b377afd6127bc130d8cc0681da9e19467cfdd7b59ec88617 +size 1302789 diff --git a/papers/explore incontext learning 
for 3d point cloud understanding.pdf b/papers/explore incontext learning for 3d point cloud understanding.pdf index 969e263adbc131adf1220f10a30d3ce4a3655ad2..14839ab74c4aa3a73868343dc9125370f6978f3b 100644 --- a/papers/explore incontext learning for 3d point cloud understanding.pdf +++ b/papers/explore incontext learning for 3d point cloud understanding.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7859a3d5fee196d7cb47491029f4b26925a907558b66be7fce696d61a6fd2b47 -size 11667782 +oid sha256:06f83732fdc238499f5eb7749b474992b3bdf465e221520d818a85df0b1ccd1d +size 13800075 diff --git a/papers/exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf b/papers/exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf deleted file mode 100644 index 6e2f0145ace363c056067e1bd699d3834054f151..0000000000000000000000000000000000000000 Binary files a/papers/exploring automated distractor and feedback generation for math multiplechoice questions via incontext learning.pdf and /dev/null differ diff --git a/papers/exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf b/papers/exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf index 6e87e1ef2d0378674c4154b4befcf7d3cbd5ed10..582c8bc96ddcadcc3555869df68c65ddff707be3 100644 Binary files a/papers/exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf and b/papers/exploring automatic evaluation methods based on a decoderbased llm for text generation.pdf differ diff --git a/papers/exploring chain of thought style prompting for texttosql.pdf b/papers/exploring chain of thought style prompting for texttosql.pdf new file mode 100644 index 0000000000000000000000000000000000000000..89490a6f073efcc56ecd9788327cebef2a7e2060 --- /dev/null +++ b/papers/exploring chain of thought style prompting for texttosql.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b668e7a89689321bf61cfcfa6839cc7db3370a2896a71a765bcbbcb3aacfd9f +size 871909 diff --git a/papers/exploring chainofthought style prompting for texttosql.pdf b/papers/exploring chainofthought style prompting for texttosql.pdf index 0eeacc99aaa7b09f0c937cc224f400d07dd4891b..e826461f56cc2cd39450dcccab5aea656b985951 100644 Binary files a/papers/exploring chainofthought style prompting for texttosql.pdf and b/papers/exploring chainofthought style prompting for texttosql.pdf differ diff --git a/papers/exploring diverse incontext configurations for image captioning.pdf b/papers/exploring diverse incontext configurations for image captioning.pdf index 5f935e1ce88cfed6ddab7cdf89fd5ff077eba003..f8f12b571db3c236bb817c45be7956adfde311f7 100644 --- a/papers/exploring diverse incontext configurations for image captioning.pdf +++ b/papers/exploring diverse incontext configurations for image captioning.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:773941d879ce5ead6153b56711883825eb30bef310463301304ecfa22be43038 -size 5882391 +oid sha256:d0c981ea6e11be70b2f6427999cff2f88aac486d1d942d01647ca7e549eeee25 +size 5882359 diff --git a/papers/exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf b/papers/exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf index 
1c922472b4fe80ef6803a6f3e77fc8fa587a9d60..69620e2b90b96dacd34212eb53c846aa299ecae0 100644 Binary files a/papers/exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf and b/papers/exploring effectiveness of gpt3 in grammatical error correction a study on performance and controllability in promptbased methods.pdf differ diff --git a/papers/exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning.pdf b/papers/exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning.pdf deleted file mode 100644 index 750a79dcdb15edc4c0973df7ed2867b5f1ecb965..0000000000000000000000000000000000000000 Binary files a/papers/exploring generative ai assisted feedback writing for students' written responses to a physics conceptual question with prompt engineering and fewshot learning.pdf and /dev/null differ diff --git a/papers/exploring incontext learning for knowledge grounded dialog generation.pdf b/papers/exploring incontext learning for knowledge grounded dialog generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0f7cf13c92f0bd93bb9b6b342a1476170153acfc --- /dev/null +++ b/papers/exploring incontext learning for knowledge grounded dialog generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1b1a726d8f74a317f617271e822dc4601c4852748fd813f2652551d344e043a +size 944430 diff --git a/papers/exploring parameterefficient finetuning techniques for code generation with large language models.pdf b/papers/exploring parameterefficient finetuning techniques for code generation with large language models.pdf index f8b4dfa2caf68493e2d94f22e296577cae77d2e9..804ec472ce3934577d255378dec7019bc39e99fb 100644 Binary files a/papers/exploring parameterefficient finetuning techniques for code generation with large language models.pdf and b/papers/exploring parameterefficient finetuning techniques for code generation with large language models.pdf differ diff --git a/papers/exploring prompt engineering with gpt language models for documentlevel machine translation insights and findings.pdf b/papers/exploring prompt engineering with gpt language models for documentlevel machine translation insights and findings.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7cbdeb66e0b3321f23128cdc86be17303ec06d88 --- /dev/null +++ b/papers/exploring prompt engineering with gpt language models for documentlevel machine translation insights and findings.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6c18828746ccd0d10b2c10137f7330ee9a443a91f6b053609df0839b0d3ea76 +size 223409 diff --git a/papers/exploring promptbased fewshot learning for grounded dialog generation.pdf b/papers/exploring promptbased fewshot learning for grounded dialog generation.pdf index 8b67cc357a5446c068447ae8e6975d6b7758ea24..030f978e631c994c5159d67c22187490cd50509a 100644 Binary files a/papers/exploring promptbased fewshot learning for grounded dialog generation.pdf and b/papers/exploring promptbased fewshot learning for grounded dialog generation.pdf differ diff --git a/papers/exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf b/papers/exploring the integration of large language models into automatic speech recognition 
systems an empirical study.pdf index cadc05afd48d23e5527de06e6e57c83e010c0d96..a2cade04cd5b8a828ee1102cc351336a9a36a197 100644 Binary files a/papers/exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf and b/papers/exploring the integration of large language models into automatic speech recognition systems an empirical study.pdf differ diff --git a/papers/exploring the intersection of large language models and agentbased modeling via prompt engineering.pdf b/papers/exploring the intersection of large language models and agentbased modeling via prompt engineering.pdf index dc3fcf1d5be38e382e099a8a556d75a5a29054a7..2080f7be6b8199cac37c27d23ce42ca9ef8be1a1 100644 Binary files a/papers/exploring the intersection of large language models and agentbased modeling via prompt engineering.pdf and b/papers/exploring the intersection of large language models and agentbased modeling via prompt engineering.pdf differ diff --git a/papers/exploring the path from instructions to rewards with large language models in instancebased learning.pdf b/papers/exploring the path from instructions to rewards with large language models in instancebased learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d3a9f9c2b5f7f46ea54bfdb4daf806dd746858ac --- /dev/null +++ b/papers/exploring the path from instructions to rewards with large language models in instancebased learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc3e4374ed8f3f92bd51afe1623f4f470671c09977aa7117978aea52c98cab99 +size 326102 diff --git a/papers/exploring the relationship between model architecture and incontext learning ability.pdf b/papers/exploring the relationship between model architecture and incontext learning ability.pdf index 7cb40b449e617c894349ead5fa5c6a44cc557dfd..48f157cbc5aa21b2e8ad5f1027a3f005a44e5b31 100644 Binary files a/papers/exploring the relationship between model architecture and incontext learning ability.pdf and b/papers/exploring the relationship between model architecture and incontext learning ability.pdf differ diff --git a/papers/extensible prompts for language models.pdf b/papers/extensible prompts for language models.pdf deleted file mode 100644 index 203bdb8ac3da702d416d957a0677a980a96c3925..0000000000000000000000000000000000000000 --- a/papers/extensible prompts for language models.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:6e58d1206c9677c39d134a8bc28c43cbb4d0372311eef296e50157a3afbdd53a -size 1148615 diff --git a/papers/extracting accurate materials data from research papers with conversational language models and prompt engineering.pdf b/papers/extracting accurate materials data from research papers with conversational language models and prompt engineering.pdf index 7975f5a573ac69aa6557af45994edc800b612e82..cf9c0dea7caa8ad47255a4c857537305adaf282e 100644 Binary files a/papers/extracting accurate materials data from research papers with conversational language models and prompt engineering.pdf and b/papers/extracting accurate materials data from research papers with conversational language models and prompt engineering.pdf differ diff --git a/papers/extracting multivalued relations from language models.pdf b/papers/extracting multivalued relations from language models.pdf index de3e8db72fbabf26bdde7af77be88e13a0adc51c..451f8ad0a6f522ad941310c5e17e54cf7fd4edd9 100644 Binary files a/papers/extracting multivalued relations from language models.pdf and 
b/papers/extracting multivalued relations from language models.pdf differ diff --git a/papers/extractive summarization via chatgpt for faithful summary generation.pdf b/papers/extractive summarization via chatgpt for faithful summary generation.pdf index 4d91878917693adc791b581aa579a931a2b2e7fb..b025688be3798f6a83f95d0090850caa6523f2a5 100644 Binary files a/papers/extractive summarization via chatgpt for faithful summary generation.pdf and b/papers/extractive summarization via chatgpt for faithful summary generation.pdf differ diff --git a/papers/factchecking complex claims with programguided reasoning.pdf b/papers/factchecking complex claims with programguided reasoning.pdf index c2eb9750a4982e3ce094e7634d06a826cd1aab1f..e80bc32d69315418beb09b0eb82abad070d38076 100644 Binary files a/papers/factchecking complex claims with programguided reasoning.pdf and b/papers/factchecking complex claims with programguided reasoning.pdf differ diff --git a/papers/fake it till you make it learning(s) from a synthetic imagenet clone.pdf b/papers/fake it till you make it learning(s) from a synthetic imagenet clone.pdf new file mode 100644 index 0000000000000000000000000000000000000000..510ba1a78e50b757510caefcd5c0581b4424d0a1 --- /dev/null +++ b/papers/fake it till you make it learning(s) from a synthetic imagenet clone.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b507f4dc2d896069045271d4ae94ea0c3a073246cf26b9797402f6127cea7f2c +size 18832279 diff --git a/papers/falle a foley sound synthesis model and strategies.pdf b/papers/falle a foley sound synthesis model and strategies.pdf index b69363668d3083751010efd0a0df09f92a1255b5..cc0cdd03fbf40b2e2780fffa722e18886f62ee25 100644 Binary files a/papers/falle a foley sound synthesis model and strategies.pdf and b/papers/falle a foley sound synthesis model and strategies.pdf differ diff --git a/papers/federated large language model a position paper.pdf b/papers/federated large language model a position paper.pdf index 3198c519bbbed6283e2c0bc7d95ec09359d374b0..c27bec169cac4d37a4231aad015b7c0d5a0f3051 100644 Binary files a/papers/federated large language model a position paper.pdf and b/papers/federated large language model a position paper.pdf differ diff --git a/papers/fewclue a chinese fewshot learning evaluation benchmark.pdf b/papers/fewclue a chinese fewshot learning evaluation benchmark.pdf index 3c135ff0cfb26aedf89e911002d43844b136a821..3d7bf511bc25dc139dbba7e8ceef1ad1f7c3323d 100644 Binary files a/papers/fewclue a chinese fewshot learning evaluation benchmark.pdf and b/papers/fewclue a chinese fewshot learning evaluation benchmark.pdf differ diff --git a/papers/fewertoken neural speech codec with timeinvariant codes.pdf b/papers/fewertoken neural speech codec with timeinvariant codes.pdf index b5509898e189ad41ab2cfcd86f589395eaf0ca2a..6492169742e0034403ddd727e3198dccb3f07425 100644 Binary files a/papers/fewertoken neural speech codec with timeinvariant codes.pdf and b/papers/fewertoken neural speech codec with timeinvariant codes.pdf differ diff --git a/papers/fewfedweight fewshot federated learning framework across multiple nlp tasks.pdf b/papers/fewfedweight fewshot federated learning framework across multiple nlp tasks.pdf index 4bfedc52087073cdc11ac29dc82879aeb7e1dc10..6ed2cadb6596a4e9f0528c8dbebc58f9d0f3165e 100644 Binary files a/papers/fewfedweight fewshot federated learning framework across multiple nlp tasks.pdf and b/papers/fewfedweight fewshot federated learning framework across multiple nlp tasks.pdf differ diff --git 
a/papers/fewshot adaptation for parsing contextual utterances with llms.pdf b/papers/fewshot adaptation for parsing contextual utterances with llms.pdf index b8c4be5ec185c418360547aec7d9b3675c9eadcf..7feb99bd5814c5f17fddb5d6c61a8d5e3948a79f 100644 Binary files a/papers/fewshot adaptation for parsing contextual utterances with llms.pdf and b/papers/fewshot adaptation for parsing contextual utterances with llms.pdf differ diff --git a/papers/fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf b/papers/fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf index 253ef145688e7d3c73dc3e759530789ea0153072..da77c5bc6a6d316804097ed6d135f8ae0dcbdfe1 100644 Binary files a/papers/fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf and b/papers/fewshot anaphora resolution in scientific protocols via mixtures of incontext experts.pdf differ diff --git a/papers/fewshot and prompt training for text classification in german doctor's letters.pdf b/papers/fewshot and prompt training for text classification in german doctor's letters.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4e5ae3191ec0f602a8d2d818b66526102a7076e4 --- /dev/null +++ b/papers/fewshot and prompt training for text classification in german doctor's letters.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1516a6f7778a455a5c33d02802e710dce6f63bcdf4fbbc93ca25e815ea0f03d9 +size 159393 diff --git a/papers/fewshot instruction prompts for pretrained language models to detect social biases.pdf b/papers/fewshot instruction prompts for pretrained language models to detect social biases.pdf index 888ea15661d846f42ff3a367b511ffdb31b83631..330436b2b9254d0a2e1a6c0f75ca3fa3ccba872b 100644 Binary files a/papers/fewshot instruction prompts for pretrained language models to detect social biases.pdf and b/papers/fewshot instruction prompts for pretrained language models to detect social biases.pdf differ diff --git a/papers/fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf b/papers/fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf index 2cf4d37ff56feb7e93fab3fadd4c96279fa9869b..93d78dd9c846a804e3b2179cb03c0bc4b61a5773 100644 Binary files a/papers/fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf and b/papers/fewshot joint multimodal aspectsentiment analysis based on generative multimodal prompt.pdf differ diff --git a/papers/fewshot learning with multilingual generative language models.pdf b/papers/fewshot learning with multilingual generative language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..601690f424d3282b0fcca269938f898823f61c47 --- /dev/null +++ b/papers/fewshot learning with multilingual generative language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:234d833f0e76ae02b3a86aa856ff3aa4f81c265f181a6cf7d6e869921000ae56 +size 1492771 diff --git a/papers/fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts.pdf b/papers/fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts.pdf index f0701dbfd93c1f1421efb05a97144deaf6a7cbaa..dfaff8013bc94b1521cb1d9d3d17f6e1c19b398f 100644 --- a/papers/fewshot multimodal sentiment analysis based on multimodal probabilistic fusion prompts.pdf +++ b/papers/fewshot multimodal sentiment analysis based on 
multimodal probabilistic fusion prompts.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b1fe4036e1b94eced41d2b0678fdfd89fecc11244494e861f45768843e2598a5 -size 1126132 +oid sha256:2779857be1efa42557a48239d6d27e7295ebfb562c398299cafe6fc9b46104e9 +size 770442 diff --git a/papers/fewshot named entity recognition definition, taxonomy and research directions.pdf b/papers/fewshot named entity recognition definition, taxonomy and research directions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fae12934a53e347ae88fd71ba923aceb29820cf0 --- /dev/null +++ b/papers/fewshot named entity recognition definition, taxonomy and research directions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e27e95f0bc8e3ecb5e42b2eba99398aafab8a367f76903ca396baa2b0e1f6bf +size 2563719 diff --git a/papers/fewshot nested named entity recognition.pdf b/papers/fewshot nested named entity recognition.pdf index b8d8c5b9cdfb6cc2a3bd5cdaf964f4e0f3e994f5..cf8c57f5cdceb5748e71442f563e5fe6fe3a6bc0 100644 Binary files a/papers/fewshot nested named entity recognition.pdf and b/papers/fewshot nested named entity recognition.pdf differ diff --git a/papers/fewshot prompting towards controllable response generation.pdf b/papers/fewshot prompting towards controllable response generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..480164d033efaa12966add30e1011490c45ce413 --- /dev/null +++ b/papers/fewshot prompting towards controllable response generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37dec451610f5f92069c7b30ee19cda4fbf2bf3359b56df0914285483be80539 +size 643973 diff --git a/papers/fewshot queryfocused summarization with prefixmerging.pdf b/papers/fewshot queryfocused summarization with prefixmerging.pdf index 306d7be9fe7d78c70210fefa4021038349229004..516f140de960af8831d9c0ed18df3c6388f0fab1 100644 Binary files a/papers/fewshot queryfocused summarization with prefixmerging.pdf and b/papers/fewshot queryfocused summarization with prefixmerging.pdf differ diff --git a/papers/fewshot reranking for multihop qa via language model prompting.pdf b/papers/fewshot reranking for multihop qa via language model prompting.pdf index 17683e609e7ad3aed39fa0c55736c29fe40dd74b..6386e476a8980f33344835259a45b388a8f4676a 100644 Binary files a/papers/fewshot reranking for multihop qa via language model prompting.pdf and b/papers/fewshot reranking for multihop qa via language model prompting.pdf differ diff --git a/papers/fewshot sentiment analysis based on adaptive prompt learning and contrastive learning.pdf b/papers/fewshot sentiment analysis based on adaptive prompt learning and contrastive learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d63e8140843b430edee4c108021a98054aa436fd --- /dev/null +++ b/papers/fewshot sentiment analysis based on adaptive prompt learning and contrastive learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13b865ea285f2ed7a0ad35674cef8cf1a4489d1fe4509e46d3c46d15340e5d14 +size 2522497 diff --git a/papers/fewshot stance detection via targetaware prompt distillation.pdf b/papers/fewshot stance detection via targetaware prompt distillation.pdf index 96281c9ae7880db1b93174dbb7934f83afd80c65..9ce1862a8dd7c6e2be08dcbcdd685d626793669b 100644 Binary files a/papers/fewshot stance detection via targetaware prompt distillation.pdf and b/papers/fewshot stance detection via targetaware prompt distillation.pdf differ diff --git 
a/papers/fewshot training llms for projectspecific codesummarization.pdf b/papers/fewshot training llms for projectspecific codesummarization.pdf index 976eb49996504ef093ff5be702e02b5b9ca520f7..1dd74cff35d3cc02b96673f7df1fc9a5e5c458b0 100644 Binary files a/papers/fewshot training llms for projectspecific codesummarization.pdf and b/papers/fewshot training llms for projectspecific codesummarization.pdf differ diff --git a/papers/fiat fusing learning paradigms with instructionaccelerated tuning.pdf b/papers/fiat fusing learning paradigms with instructionaccelerated tuning.pdf index 1a776a50d9a299083bedea40d2a8fe960830a7a1..1a640f8991dfb1bdc11a2fe8bd13c054080435da 100644 Binary files a/papers/fiat fusing learning paradigms with instructionaccelerated tuning.pdf and b/papers/fiat fusing learning paradigms with instructionaccelerated tuning.pdf differ diff --git a/papers/fidicl a fusionindecoder approach for efficient incontext learning.pdf b/papers/fidicl a fusionindecoder approach for efficient incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8adbd1bb9b4e865ef05df3099d4d7efd0f5e0945 --- /dev/null +++ b/papers/fidicl a fusionindecoder approach for efficient incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cd96b2b28ed7a3daf002b8ebe6d115396d4d78200b96d4f02b97e2d99b8c46e +size 611196 diff --git a/papers/fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf b/papers/fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf index ecf75e963a1d2b581dcd33d9e5b434aaabeb63ba..60396fa001f4afc7ca69cd7cb2ad8819e76265aa 100644 Binary files a/papers/fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf and b/papers/fill in the blank exploring and enhancing llm capabilities for backward reasoning in math word problems.pdf differ diff --git a/papers/finding support examples for incontext learning.pdf b/papers/finding support examples for incontext learning.pdf index 6b6c21e772b194e69b05838c7a8c7897cad522e0..6285e2863f7b1048dafaaa6e287e411a85463a89 100644 Binary files a/papers/finding support examples for incontext learning.pdf and b/papers/finding support examples for incontext learning.pdf differ diff --git a/papers/finegrained visual prompting.pdf b/papers/finegrained visual prompting.pdf deleted file mode 100644 index 8d7ff4916ce8b556817e2aa2d8d1a6b08bba9e30..0000000000000000000000000000000000000000 --- a/papers/finegrained visual prompting.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a194a4313f3fd8ced1cac6ba5168e7a5abc4851dfb742d2cad85fad13faecda1 -size 4833002 diff --git a/papers/fineprompt unveiling the role of finetuned inductive bias on compositional reasoning in gpt4.pdf b/papers/fineprompt unveiling the role of finetuned inductive bias on compositional reasoning in gpt4.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0fab003c1df02c8fd559c0ece34f386812a3cb72 --- /dev/null +++ b/papers/fineprompt unveiling the role of finetuned inductive bias on compositional reasoning in gpt4.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b42c1b6f33902b274dee4795d71a41f780701c7d384cf55bc763cc41fb2037c5 +size 374466 diff --git a/papers/finetuning language models with just forward passes.pdf b/papers/finetuning language models with just forward passes.pdf index 
3ef2e7285f3ce5f66b7204f54caa1308471f65c6..a880cadb2db26831f9833a994d4451a6d55a0c20 100644 Binary files a/papers/finetuning language models with just forward passes.pdf and b/papers/finetuning language models with just forward passes.pdf differ diff --git a/papers/fireact toward language agent finetuning.pdf b/papers/fireact toward language agent finetuning.pdf index 11ab3072ca7320bc2682333c53458f3650a0baf0..be3275188de8c8c24421ba9ac44c67091d1811ea 100644 Binary files a/papers/fireact toward language agent finetuning.pdf and b/papers/fireact toward language agent finetuning.pdf differ diff --git a/papers/fit parameter efficient fewshot transfer learning for personalized and federated image classification.pdf b/papers/fit parameter efficient fewshot transfer learning for personalized and federated image classification.pdf index 0306c5df713bc518a70ca1aa181f9ba2373321d7..c0d1e75ab3f78135414583ad409376eec334ae34 100644 Binary files a/papers/fit parameter efficient fewshot transfer learning for personalized and federated image classification.pdf and b/papers/fit parameter efficient fewshot transfer learning for personalized and federated image classification.pdf differ diff --git a/papers/fixing hardware security bugs with large language models.pdf b/papers/fixing hardware security bugs with large language models.pdf index 55807f79e520064cbaa2358f5f8d3fb1726dcc93..ecd7b335023ada83dbb44b3acc048f60c7b728b0 100644 Binary files a/papers/fixing hardware security bugs with large language models.pdf and b/papers/fixing hardware security bugs with large language models.pdf differ diff --git a/papers/fixing rust compilation errors using llms.pdf b/papers/fixing rust compilation errors using llms.pdf index 554c5e72ccd83ec9c5b75b13966e1937f394a17b..be049e4f68d57491809f2abe077c12c4b421dd8a 100644 Binary files a/papers/fixing rust compilation errors using llms.pdf and b/papers/fixing rust compilation errors using llms.pdf differ diff --git a/papers/flex unifying evaluation for fewshot nlp.pdf b/papers/flex unifying evaluation for fewshot nlp.pdf index 3ed3b1f8b34711d86eed89d48a011374fbb9abeb..723eba4e920041debb894ed868f04cdee4f03b16 100644 Binary files a/papers/flex unifying evaluation for fewshot nlp.pdf and b/papers/flex unifying evaluation for fewshot nlp.pdf differ diff --git a/papers/flight of the pegasus comparing transformers on fewshot and zeroshot multidocument abstractive summarization.pdf b/papers/flight of the pegasus comparing transformers on fewshot and zeroshot multidocument abstractive summarization.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4d9fa699fab63288e1031497c2b183a48d85d5cd --- /dev/null +++ b/papers/flight of the pegasus comparing transformers on fewshot and zeroshot multidocument abstractive summarization.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2edc2c3677e06a444977985e1555b31fa36ae6469110bef5aa66d91e5e439f03 +size 184635 diff --git a/papers/flocks of stochastic parrots differentially private prompt learning for large language models.pdf b/papers/flocks of stochastic parrots differentially private prompt learning for large language models.pdf index 5c3ef8e208a4515e54b66381f46900942464bcc2..f5e2d137bf0358631852dc56c77989faf3900a7c 100644 Binary files a/papers/flocks of stochastic parrots differentially private prompt learning for large language models.pdf and b/papers/flocks of stochastic parrots differentially private prompt learning for large language models.pdf differ diff --git a/papers/folio natural language 
reasoning with firstorder logic.pdf b/papers/folio natural language reasoning with firstorder logic.pdf index d9d424f8a9c04f7921d4617fc18fd3be63f456e4..2e51ccfcf273afbe3068c630b5a1beff7e971e5a 100644 Binary files a/papers/folio natural language reasoning with firstorder logic.pdf and b/papers/folio natural language reasoning with firstorder logic.pdf differ diff --git a/papers/framing the newsfrom human perception to large language model inferences.pdf b/papers/framing the newsfrom human perception to large language model inferences.pdf index 99f80cbad68e0a8d791fdc43c37807e10e3ca86b..546b35f4f85ec85cf49268ad318210a3f74ea420 100644 Binary files a/papers/framing the newsfrom human perception to large language model inferences.pdf and b/papers/framing the newsfrom human perception to large language model inferences.pdf differ diff --git a/papers/freshllms refreshing large language models with search engine augmentation.pdf b/papers/freshllms refreshing large language models with search engine augmentation.pdf index f23aaaaa0882c091d631bfb90b0fc76416db6676..1e497678a371717ec3d2fedb274c8d88089032bc 100644 --- a/papers/freshllms refreshing large language models with search engine augmentation.pdf +++ b/papers/freshllms refreshing large language models with search engine augmentation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:728e8f67322a3df9ba4ffb648f60bef6e6f0e6a21e7e2943df6108ec448a4436 -size 2163456 +oid sha256:e42fe14f8e87d405b6100fbb41f86e2be30710ab5c8a72cc9a3753f9ebcbb444 +size 2314447 diff --git a/papers/from beginner to expert modeling medical knowledge into general llms.pdf b/papers/from beginner to expert modeling medical knowledge into general llms.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2db130c0f5c9a38d4c6aa37e572334b5c0cbf503 --- /dev/null +++ b/papers/from beginner to expert modeling medical knowledge into general llms.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4112a4b84d212beb690e879841b1e35cd2c9d7eccf4291ecbcd85483ce75bc0 +size 1600672 diff --git a/papers/from prompt engineering to prompt science with human in the loop.pdf b/papers/from prompt engineering to prompt science with human in the loop.pdf new file mode 100644 index 0000000000000000000000000000000000000000..85eaafe6d96ee3bb3b83cef621f934fab6b1a455 --- /dev/null +++ b/papers/from prompt engineering to prompt science with human in the loop.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f380195fea56d12c445f861e628e209b865ae3173d67cd99e99d035691d1f19b +size 754828 diff --git a/papers/from text to tables a local privacy preserving large language model for structured information retrieval from medical documents.pdf b/papers/from text to tables a local privacy preserving large language model for structured information retrieval from medical documents.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ff1bbf0ec10516f791c681574c118d600a2326c9 --- /dev/null +++ b/papers/from text to tables a local privacy preserving large language model for structured information retrieval from medical documents.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:093d7054195bc2158c0dce73837679ae1cb59fd22394547cd96f12153b41f105 +size 798617 diff --git a/papers/from web catalogs to google a retrospective study of web search engines sustainable development.pdf b/papers/from web catalogs to google a retrospective study of web search engines sustainable development.pdf index 
5a025c27a30fe2dbbf066671a9a71dbffda7d3d4..fa5cdbf802dfd1bc96bdfc7a2c5a0323f4141157 100644 Binary files a/papers/from web catalogs to google a retrospective study of web search engines sustainable development.pdf and b/papers/from web catalogs to google a retrospective study of web search engines sustainable development.pdf differ diff --git a/papers/frozen clip model is an efficient point cloud backbone.pdf b/papers/frozen clip model is an efficient point cloud backbone.pdf deleted file mode 100644 index 4a636b9dd9a6e6aae5aaacf8137b16b4be208410..0000000000000000000000000000000000000000 --- a/papers/frozen clip model is an efficient point cloud backbone.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9ffdb299a17eb701137af6c29fe50f0e2820632c09d7d3c5b3939d061a851707 -size 3749150 diff --git a/papers/game of tones faculty detection of gpt4 generated content in university assessments.pdf b/papers/game of tones faculty detection of gpt4 generated content in university assessments.pdf index 4c5f7436edd7228937fa7138c1798702f27cf338..0632f5d1d5580b74e9c8e907279877f6ab9cef2b 100644 Binary files a/papers/game of tones faculty detection of gpt4 generated content in university assessments.pdf and b/papers/game of tones faculty detection of gpt4 generated content in university assessments.pdf differ diff --git a/papers/gembamqm detecting translation quality error spans with gpt4.pdf b/papers/gembamqm detecting translation quality error spans with gpt4.pdf index 103f1d960bf3f1308d15c175f5e0e97482c4cbb4..3be01177cab543bfbf9007eea9272c5cce85b0f0 100644 Binary files a/papers/gembamqm detecting translation quality error spans with gpt4.pdf and b/papers/gembamqm detecting translation quality error spans with gpt4.pdf differ diff --git a/papers/genderspecific machine translation with large language models.pdf b/papers/genderspecific machine translation with large language models.pdf index 3d3471d3ff8eeff0462e52838be89995f4df543c..69dc77d368cf5606eb4145ea9583105b17d08bfc 100644 Binary files a/papers/genderspecific machine translation with large language models.pdf and b/papers/genderspecific machine translation with large language models.pdf differ diff --git a/papers/genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf b/papers/genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf index 19fe55b51c243c362d0f2127ed9ce25224d1f2ac..28db935f494dbe1fb8aa36346ee0b10c448e58a3 100644 Binary files a/papers/genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf and b/papers/genegpt augmenting large language models with domain tools for improved access to biomedical information.pdf differ diff --git a/papers/generate rather than retrieve large language models are strong context generators.pdf b/papers/generate rather than retrieve large language models are strong context generators.pdf index 8d8f2d845407c5620865214c619c1b443b0acdf3..c5ec129ce93373b0a9668dc3ea03cc16e2892cb1 100644 Binary files a/papers/generate rather than retrieve large language models are strong context generators.pdf and b/papers/generate rather than retrieve large language models are strong context generators.pdf differ diff --git a/papers/generating dialog responses with specified grammatical items for second language learning.pdf b/papers/generating dialog responses with specified grammatical items for second language learning.pdf new file mode 
100644 index 0000000000000000000000000000000000000000..ba5dab38de9a28dbeb2b15748159f51da77670b5 --- /dev/null +++ b/papers/generating dialog responses with specified grammatical items for second language learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14261df070ecec0b6552ae24760cd693c1fcd82cfd266d8928a84f7864d3e912 +size 241663 diff --git a/papers/generating efficient training data via llmbased attribute manipulation.pdf b/papers/generating efficient training data via llmbased attribute manipulation.pdf index 2c8eb45eab197e99edb21d176137c30f19e4129f..4ecb85049874ab46b0febc0eb27412cd697e8347 100644 Binary files a/papers/generating efficient training data via llmbased attribute manipulation.pdf and b/papers/generating efficient training data via llmbased attribute manipulation.pdf differ diff --git a/papers/generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf b/papers/generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf index 452686bab22638f7452e29ac5a7abc90237259aa..1e33049ed2eba193e1e0569cc9dbf927f877cfaf 100644 Binary files a/papers/generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf and b/papers/generating medicallyaccurate summaries of patientprovider dialogue a multistage approach using large language models.pdf differ diff --git a/papers/generating novel leads for drug discovery using llms with logical feedback.pdf b/papers/generating novel leads for drug discovery using llms with logical feedback.pdf index 1123f3de7d2c5be61f11967ccc23386749835d31..bdeb2c74556052aef30f3b77b67d344d8bacc56d 100644 --- a/papers/generating novel leads for drug discovery using llms with logical feedback.pdf +++ b/papers/generating novel leads for drug discovery using llms with logical feedback.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6377871fc65356d406fdd92d823f74d305c13c9aacb2db17ae2dbf78e1d0efc4 +oid sha256:a2c2cb15939d8dd87f63dc3d7ca3e896f91c60c16e4d85c91d1eebbd5526b255 size 1529548 diff --git a/papers/generating training data with language models towards zeroshot language understanding.pdf b/papers/generating training data with language models towards zeroshot language understanding.pdf index 1bf9ff3bfc3a1ae587e8b8a35d03aeb01b74f4ed..dd9aef6276d7c8b839d554488cd9c92ebf286817 100644 Binary files a/papers/generating training data with language models towards zeroshot language understanding.pdf and b/papers/generating training data with language models towards zeroshot language understanding.pdf differ diff --git a/papers/generative ai has lowered the barriers to computational social sciences.pdf b/papers/generative ai has lowered the barriers to computational social sciences.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d2e262e5df992cbf6a4a3e894a3c8011660d1aef --- /dev/null +++ b/papers/generative ai has lowered the barriers to computational social sciences.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb00e9532bd8dce42b6cbc8eb22770e444ec63decd53f484e48b1181720fddac +size 114093 diff --git a/papers/generative large language models are autonomous practitioners of evidencebased medicine.pdf b/papers/generative large language models are autonomous practitioners of evidencebased medicine.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..8c513eff515579da7d7a7128d1bcd0797aa4139a --- /dev/null +++ b/papers/generative large language models are autonomous practitioners of evidencebased medicine.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15b57ae70af9d82805efc75cd92b13007a846c92e3862e2470050dfa93f80d03 +size 1588307 diff --git a/papers/generative pretrained transformers for emotion detection in a codeswitching setting.pdf b/papers/generative pretrained transformers for emotion detection in a codeswitching setting.pdf new file mode 100644 index 0000000000000000000000000000000000000000..05a92b2e19ec04c00f371caffe033d99a7596709 --- /dev/null +++ b/papers/generative pretrained transformers for emotion detection in a codeswitching setting.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae5e872a07cd0873a05e8ab1c60b231380a849bfb1bb9fbeb2f29c4875c5182e +size 149825 diff --git a/papers/generative speech recognition error correction with large language models and taskactivating prompting.pdf b/papers/generative speech recognition error correction with large language models and taskactivating prompting.pdf index 822f5a24ea4fb69ed99513addaf751330e0526eb..e6b5fd6eb40a8b5d7e16b4353e41014064639309 100644 Binary files a/papers/generative speech recognition error correction with large language models and taskactivating prompting.pdf and b/papers/generative speech recognition error correction with large language models and taskactivating prompting.pdf differ diff --git a/papers/generative type inference for python.pdf b/papers/generative type inference for python.pdf index a6d31e13a6fcfc5a2025a7092fabbe6e44e77eda..d9320decc7679da27ab924ca71238bbeaae4c2bf 100644 Binary files a/papers/generative type inference for python.pdf and b/papers/generative type inference for python.pdf differ diff --git a/papers/getting more out of mixture of language model reasoning experts.pdf b/papers/getting more out of mixture of language model reasoning experts.pdf index 529a8fc853b711838b60d40fe20795dca743cb9d..33bcb7215c69ff9b994e554edcd669b60da0c0f3 100644 --- a/papers/getting more out of mixture of language model reasoning experts.pdf +++ b/papers/getting more out of mixture of language model reasoning experts.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:c63d2d46ec9b1d4dcd5d49f1d554d1a588d0c1b2b18276bf431b04e646ede171 -size 1272494 +oid sha256:e8040f172fea6c748d44e13c81a97d43aec55221ca8f0d5b7a97f1b82f5c8c3f +size 1210587 diff --git a/papers/getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf b/papers/getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf index 0620a61dfbdc3648e1131a8a5d5001df139805c9..7b9e30289f2116bc3a40ccbdd779f1eb0bd30a3a 100644 Binary files a/papers/getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf and b/papers/getting sick after seeing a doctor diagnosing and mitigating knowledge conflicts in event temporal reasoning.pdf differ diff --git a/papers/glam efficient scaling of language models with mixtureofexperts.pdf b/papers/glam efficient scaling of language models with mixtureofexperts.pdf index 90bf2f6959f36eea3184a2fd5288dec4ecc72128..b4fdea3142515101870d000826a4a4dda0e01804 100644 Binary files a/papers/glam efficient scaling of language models with mixtureofexperts.pdf and b/papers/glam efficient scaling of language models with 
mixtureofexperts.pdf differ diff --git a/papers/glitter or gold deriving structured insights from sustainability reports via large language models.pdf b/papers/glitter or gold deriving structured insights from sustainability reports via large language models.pdf index 78cef59446214542169587ccdd6cb3421e3ba51f..32b1c0a69b8873fc7f1e83d978e10b3772b13ef3 100644 --- a/papers/glitter or gold deriving structured insights from sustainability reports via large language models.pdf +++ b/papers/glitter or gold deriving structured insights from sustainability reports via large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cc5d21be205db700ecaefe7b33337e0368523fa876d1f13311c3576f9c344366 -size 1017012 +oid sha256:35acaf1f7e87ad23cee9b611104ff85ec8111a9747809e440baa2cdf53f5ada1 +size 1466247 diff --git a/papers/global constraints with prompting for zeroshot event argument classification.pdf b/papers/global constraints with prompting for zeroshot event argument classification.pdf index fb97aa2b19481adb519a10c32bd22617c402b44d..e2ca740022603a2408e3a0d09a290bf45b61c0b8 100644 Binary files a/papers/global constraints with prompting for zeroshot event argument classification.pdf and b/papers/global constraints with prompting for zeroshot event argument classification.pdf differ diff --git a/papers/globallocal modeling with promptbased knowledge enhancement for emotion inference in conversation.pdf b/papers/globallocal modeling with promptbased knowledge enhancement for emotion inference in conversation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bcfaeb9ccdabb0759bf1d4f264f2ec1dc5f50644 --- /dev/null +++ b/papers/globallocal modeling with promptbased knowledge enhancement for emotion inference in conversation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d602d1a56469a1a5231c2f6c30a0067fee665f9b32cca416e787094e70b1484 +size 472025 diff --git a/papers/good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf b/papers/good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf index fb0808d16740deadb97e6ce40847ca5074b2170c..a7e74d589cc32879bc7adf873074e066b115eb6d 100644 Binary files a/papers/good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf and b/papers/good examples make a faster learner simple demonstrationbased learning for lowresource ner.pdf differ diff --git a/papers/gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf b/papers/gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf index 1169ba04794bb9d198c6e0adf7cdd00f91b5842d..6bac644f985cbe27fc85b65f8a5bfc9f07fab82f 100644 Binary files a/papers/gpachov at checkthat! 2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf and b/papers/gpachov at checkthat! 
2023 a diverse multiapproach ensemble for subjectivity detection in news articles.pdf differ diff --git a/papers/gps genetic prompt search for efficient fewshot learning.pdf b/papers/gps genetic prompt search for efficient fewshot learning.pdf index e8a2e6eb421fe8136c8c4bf6aa266fc34b5dea33..87d30b11302c918f9e2d3d8c16cf04e48fb3cbb7 100644 Binary files a/papers/gps genetic prompt search for efficient fewshot learning.pdf and b/papers/gps genetic prompt search for efficient fewshot learning.pdf differ diff --git a/papers/gpt is becoming a turing machine here are some ways to program it.pdf b/papers/gpt is becoming a turing machine here are some ways to program it.pdf index 372d65b75892f3d73380a1cae403d80a6fd16376..1f83d2928dd3f5b9328406736adaf49cd427c02a 100644 Binary files a/papers/gpt is becoming a turing machine here are some ways to program it.pdf and b/papers/gpt is becoming a turing machine here are some ways to program it.pdf differ diff --git a/papers/gpt takes the bar exam.pdf b/papers/gpt takes the bar exam.pdf index b093d1173189f558b6c91264dd2ce0ab1e2fa85b..2ab08fb7808a9218c81c000dea33a88a5d067a35 100644 Binary files a/papers/gpt takes the bar exam.pdf and b/papers/gpt takes the bar exam.pdf differ diff --git a/papers/gpt4sgg synthesizing scene graphs from holistic and regionspecific narratives.pdf b/papers/gpt4sgg synthesizing scene graphs from holistic and regionspecific narratives.pdf new file mode 100644 index 0000000000000000000000000000000000000000..786a8dce65ef6082484c6ef5fbbcbdf3966be4c0 --- /dev/null +++ b/papers/gpt4sgg synthesizing scene graphs from holistic and regionspecific narratives.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d23baf491af282a8fb07ec506cfe077ab8beeee35048ca397156dea8cb2fa524 +size 5289571 diff --git a/papers/gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench.pdf b/papers/gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench.pdf index d5e47640cbb47feb5ee04d01331a7314b599a855..e8c71cb1ba97289324788fb3d2ceb832d15426bb 100644 Binary files a/papers/gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench.pdf and b/papers/gptclonebench a comprehensive benchmark of semantic clones and crosslanguage clones using gpt3 model and semanticclonebench.pdf differ diff --git a/papers/gptempowered personalized elearning system for programming languages.pdf b/papers/gptempowered personalized elearning system for programming languages.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d08dfbb173795cd116c941525bb0be9976221815 --- /dev/null +++ b/papers/gptempowered personalized elearning system for programming languages.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d802e71739f6ccb4359b2e979bcd17b998ee3bea9f29ae66acbf42f5f3856378 +size 9612811 diff --git a/papers/gptfinre incontext learning for financial relation extraction using large language models.pdf b/papers/gptfinre incontext learning for financial relation extraction using large language models.pdf index ff4dc601e329ea1cc50ab787494a3afa7176a1eb..016aa1ac2acff46d56fe6e9d8662c93e8d72a045 100644 Binary files a/papers/gptfinre incontext learning for financial relation extraction using large language models.pdf and b/papers/gptfinre incontext learning for financial relation extraction using large language models.pdf differ diff 
--git a/papers/gptre incontext learning for relation extraction using large language models.pdf b/papers/gptre incontext learning for relation extraction using large language models.pdf index 85c7da14eefde191d39b7dbdb12638568365c821..c83ea1e2815d045fbc91ebe3dad26c50ae91a6f7 100644 --- a/papers/gptre incontext learning for relation extraction using large language models.pdf +++ b/papers/gptre incontext learning for relation extraction using large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4c1310dc9f427c15292bc8ef8a7663889129b31adc6fca014fd1fe3034e4bf74 -size 1847994 +oid sha256:01d644529de77fceb6b750af14fd4b229ae6f84a6688ee4258c046aeba63f83d +size 1850107 diff --git a/papers/gpts at factify 2022 prompt aided factverification (short paper).pdf b/papers/gpts at factify 2022 prompt aided factverification (short paper).pdf new file mode 100644 index 0000000000000000000000000000000000000000..56565fe074baf9af8675894dd3810f4c0741ec23 --- /dev/null +++ b/papers/gpts at factify 2022 prompt aided factverification (short paper).pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:deba323bfc5cead2879aef29ab0c3af537529ac4561e9912c98b09cf101245e3 +size 698433 diff --git a/papers/gpts at factify 2022 prompt aided factverification.pdf b/papers/gpts at factify 2022 prompt aided factverification.pdf index 4cd065a5004b0a5faeb483400bce165709a4c80c..56565fe074baf9af8675894dd3810f4c0741ec23 100644 Binary files a/papers/gpts at factify 2022 prompt aided factverification.pdf and b/papers/gpts at factify 2022 prompt aided factverification.pdf differ diff --git a/papers/gptvoicetasker llmpowered virtual assistant for smartphone.pdf b/papers/gptvoicetasker llmpowered virtual assistant for smartphone.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b5ff47a6b2fae4d65dbe6c24ea5ab7ac50c7ce36 --- /dev/null +++ b/papers/gptvoicetasker llmpowered virtual assistant for smartphone.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d75eee1ddfde9e85b2c4c617985d9b22ebb08a2fea0337c2cd11ba297d27ab9 +size 2924408 diff --git a/papers/grammar prompting for domainspecific language generation with large language models.pdf b/papers/grammar prompting for domainspecific language generation with large language models.pdf index 7fd28c50f599c0fb520dff73b8aa8dc2f024041b..9a7278a3c1bf53687acba8a761bf2e92cbce98d3 100644 Binary files a/papers/grammar prompting for domainspecific language generation with large language models.pdf and b/papers/grammar prompting for domainspecific language generation with large language models.pdf differ diff --git a/papers/graphprompt biomedical entity normalization using graphbased prompt templates.pdf b/papers/graphprompt biomedical entity normalization using graphbased prompt templates.pdf index f7376e68bf879bcc2828a752a423700f1c852c75..3b71834bc4b2becc6ef9f9b2a0b4ba38b0561de9 100644 Binary files a/papers/graphprompt biomedical entity normalization using graphbased prompt templates.pdf and b/papers/graphprompt biomedical entity normalization using graphbased prompt templates.pdf differ diff --git a/papers/grips gradientfree, editbased instruction search for prompting large language models.pdf b/papers/grips gradientfree, editbased instruction search for prompting large language models.pdf index 1be1c544e48b603622c313c9fa2cb0b82f066bb1..68650b96eb13b6dd035fa38bb503f597bc8a6962 100644 Binary files a/papers/grips gradientfree, editbased instruction search for prompting large language models.pdf 
and b/papers/grips gradientfree, editbased instruction search for prompting large language models.pdf differ diff --git a/papers/harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf b/papers/harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf index ad000927f2328eca85d32d4820af0510b2e42a60..59531020f830f00e58343f931b5b10015420ed51 100644 Binary files a/papers/harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf and b/papers/harnessing explanations llmtolm interpreter for enhanced textattributed graph representation learning.pdf differ diff --git a/papers/harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf b/papers/harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf index 2df7c0c4f6ec5729ce5613799eb9909a4f9d9e5f..562df226b691815a76440da029e54afb5b12f10a 100644 Binary files a/papers/harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf and b/papers/harnessing large language models' empathetic response generation capabilities for online mental health counselling support.pdf differ diff --git a/papers/harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf b/papers/harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf index 7b9bf13a8e80bd8dcecc0f7134f2c6db984ae34a..7a1431cc64114fd59cbdb50979187dc07088ee8c 100644 Binary files a/papers/harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf and b/papers/harnessing the power of large language models for empathetic response generation empirical investigations and improvements.pdf differ diff --git a/papers/healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf b/papers/healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf index 6440c26d56e87dcb2a65d5f48beacaca9db03f0d..ee6ab8b5dac47c2fed53af633f43921c22a262ad 100644 Binary files a/papers/healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf and b/papers/healthprompt a zeroshot learning paradigm for clinical natural language processing.pdf differ diff --git a/papers/hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks.pdf b/papers/hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks.pdf index 900ac9f3f4144d5455f9b82586b501288ff8bff4..01242acced3310c34884d6cad147cd6ac86ed681 100644 --- a/papers/hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks.pdf +++ b/papers/hetgpt harnessing the power of prompt tuning in pretrained heterogeneous graph neural networks.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:597a16d86f272d54a7a3a8a6f0eb5abfbf84083cc1bb6bad8af3f333e206933f -size 1565084 +oid sha256:9c1cbb6f4ff7c76802e01fb2e8cf4e63af651c4f6dac65acbfff409a7e767645 +size 1565120 diff --git a/papers/hicl hashtagdriven incontext learning for social media natural language understanding.pdf b/papers/hicl hashtagdriven incontext learning for social media natural language 
understanding.pdf index 59865a0ed9ea434eedae447a4824564d9adfcb23..afc7777877a4a2e4260170e054503a374c745107 100644 Binary files a/papers/hicl hashtagdriven incontext learning for social media natural language understanding.pdf and b/papers/hicl hashtagdriven incontext learning for social media natural language understanding.pdf differ diff --git a/papers/hierarchical prompting assists large language model on web navigation.pdf b/papers/hierarchical prompting assists large language model on web navigation.pdf deleted file mode 100644 index 38683576a9b4f5657b9f6566b7cf0363cca5645a..0000000000000000000000000000000000000000 --- a/papers/hierarchical prompting assists large language model on web navigation.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f608f23d283b384315744629d01f14328f5f10567af14aafa6374dd5155b2afe -size 1784858 diff --git a/papers/hint hypernetwork instruction tuning for efficient zero & fewshot generalisation.pdf b/papers/hint hypernetwork instruction tuning for efficient zero & fewshot generalisation.pdf index fe064b5c56939a5c77fd0b9b868033dd0724aa28..acd605ce40a388f6ddd08b37416c6a4e7bc0b398 100644 Binary files a/papers/hint hypernetwork instruction tuning for efficient zero & fewshot generalisation.pdf and b/papers/hint hypernetwork instruction tuning for efficient zero & fewshot generalisation.pdf differ diff --git a/papers/hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf b/papers/hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf index f710f590626a6b00089863f19e2a3a8a8fbd462f..6bb05e5dadf4d560a84be8152acf88f3696580a2 100644 Binary files a/papers/hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf and b/papers/hintenhanced incontext learning wakes large language models up for knowledgeintensive tasks.pdf differ diff --git a/papers/honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf b/papers/honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf index 0cc0b65ebf34a5b0873ec7aed1a4e7761d157713..13b5c41b4fe5a686537083a220d34ad96e54b4db 100644 Binary files a/papers/honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf and b/papers/honest students from untrusted teachers learning an interpretable questionanswering pipeline from a pretrained language model.pdf differ diff --git a/papers/how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf b/papers/how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf index 2779d7d46686d2de456b99666e8e7a033ecae866..9f4b3b3bfbf1ebbcff868bbde7c91440b2ee27bf 100644 Binary files a/papers/how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf and b/papers/how does prompt engineering affect chatgpt performance on unsupervised entity resolution.pdf differ diff --git a/papers/how far are large language models from agents with theoryofmind.pdf b/papers/how far are large language models from agents with theoryofmind.pdf index d1bb93db2d56d7da5b5f24a627b9010521b75bf6..c9521e5757acbbfec6c3789fb94122300bb9e29c 100644 Binary files a/papers/how far are large language models from agents with theoryofmind.pdf and b/papers/how far are large language 
models from agents with theoryofmind.pdf differ diff --git a/papers/how good are commercial large language models on african languages.pdf b/papers/how good are commercial large language models on african languages.pdf index 1e62ce41cdf7092fbdc0ab67be128dbe3d13621e..6498d4fe79df7e4f8686b549ace76deaf574a0b7 100644 Binary files a/papers/how good are commercial large language models on african languages.pdf and b/papers/how good are commercial large language models on african languages.pdf differ diff --git a/papers/how many demonstrations do you need for incontext learning.pdf b/papers/how many demonstrations do you need for incontext learning.pdf index 7f8f2735eca2a307d88843b1e43adfbb82667449..59d2bfc69f65bee4eb845b94b5ea9f0f6202b684 100644 Binary files a/papers/how many demonstrations do you need for incontext learning.pdf and b/papers/how many demonstrations do you need for incontext learning.pdf differ diff --git a/papers/how many pretraining tasks are needed for incontext learning of linear regression.pdf b/papers/how many pretraining tasks are needed for incontext learning of linear regression.pdf index c7e9d3596d1ff90f538f7dc85274bfe643759491..c6668e65c9c95067b62a8869a356d3765ed142b0 100644 Binary files a/papers/how many pretraining tasks are needed for incontext learning of linear regression.pdf and b/papers/how many pretraining tasks are needed for incontext learning of linear regression.pdf differ diff --git a/papers/how to design translation prompts for chatgpt an empirical study.pdf b/papers/how to design translation prompts for chatgpt an empirical study.pdf index 73a86eec62cc10c9d3b39083e4f0c8101a2e2669..d25fbee3c648355b9e87a1f55690c1ff5187c888 100644 Binary files a/papers/how to design translation prompts for chatgpt an empirical study.pdf and b/papers/how to design translation prompts for chatgpt an empirical study.pdf differ diff --git a/papers/how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf b/papers/how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf index 58dbab8257e4bef6a3ddd5f5ced854dcec217eab..aaed7752f2f283c0f07c97b21976eef2872562f3 100644 Binary files a/papers/how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf and b/papers/how to prompt llms for texttosql a study in zeroshot, singledomain, and crossdomain settings.pdf differ diff --git a/papers/how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf b/papers/how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf index 2e12c8ad82b840a6b2063913434aed4bcb16e3be..4a9318fa6631d7e90d5f061ee1a0430717e099ce 100644 Binary files a/papers/how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf and b/papers/how to prompt opportunities and challenges of zero and fewshot learning for humanai interaction in creative applications of generative models.pdf differ diff --git a/papers/how to unleash the power of large language models for fewshot relation extraction.pdf b/papers/how to unleash the power of large language models for fewshot relation extraction.pdf index e77d02b6f8c0c2f4007e06b4b96faf362473354c..e8d458fceb935c001c8b22f97eabb3e4edc4798b 100644 Binary files a/papers/how to unleash the power of large language models for 
fewshot relation extraction.pdf and b/papers/how to unleash the power of large language models for fewshot relation extraction.pdf differ diff --git a/papers/how understanding large language models can inform their use in physics education.pdf b/papers/how understanding large language models can inform their use in physics education.pdf deleted file mode 100644 index 1137011ad15b02d2336573fd1e4245e503f61245..0000000000000000000000000000000000000000 Binary files a/papers/how understanding large language models can inform their use in physics education.pdf and /dev/null differ diff --git a/papers/hqp a humanannotated dataset for detecting online propaganda.pdf b/papers/hqp a humanannotated dataset for detecting online propaganda.pdf index 7af29af77822f76ba1f6e17b088fb7693f693100..348144c89875fb820e63de4ce38438f43780012d 100644 Binary files a/papers/hqp a humanannotated dataset for detecting online propaganda.pdf and b/papers/hqp a humanannotated dataset for detecting online propaganda.pdf differ diff --git a/papers/humanintheloop machine translation with large language model.pdf b/papers/humanintheloop machine translation with large language model.pdf index bf5153663f93ed5bdf46fa8e3bc8be732db93f57..8c1c5c744202cf47fc590c3edb502d50a7f4afce 100644 Binary files a/papers/humanintheloop machine translation with large language model.pdf and b/papers/humanintheloop machine translation with large language model.pdf differ diff --git a/papers/humans in humans out on gpt converging toward common sense in both success and failure.pdf b/papers/humans in humans out on gpt converging toward common sense in both success and failure.pdf index 0842016dc1d7a633564af8bfcf7d9222b26dade3..f94da1afbfdc0a82f78b0b79e403d7f19d6cfee2 100644 Binary files a/papers/humans in humans out on gpt converging toward common sense in both success and failure.pdf and b/papers/humans in humans out on gpt converging toward common sense in both success and failure.pdf differ diff --git a/papers/hyperspectral classification of frost damage stress in tomato plants based on fewshot learning.pdf b/papers/hyperspectral classification of frost damage stress in tomato plants based on fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dcd9dea530e19fb12fd8bf56b031bef06f6b8397 --- /dev/null +++ b/papers/hyperspectral classification of frost damage stress in tomato plants based on fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:becc84dadd1856a02b99e2c935319c7609fea6d2a6ee7324743ae98aeac7d2bf +size 1051514 diff --git a/papers/hypothesis search inductive reasoning with language models.pdf b/papers/hypothesis search inductive reasoning with language models.pdf index 1505d9834c5865cb4feadcc58180e199b2b374ba..9b5aabef4a3ee82fea45236bc1eac7de01f68741 100644 Binary files a/papers/hypothesis search inductive reasoning with language models.pdf and b/papers/hypothesis search inductive reasoning with language models.pdf differ diff --git a/papers/i was blind but now i see implementing visionenabled dialogue in social robots.pdf b/papers/i was blind but now i see implementing visionenabled dialogue in social robots.pdf index 1712aee97cc1cf620c9cd0feb69b9c00c75e8959..b2c065cedd33d031ecb16c1b82f762eb05ee8a17 100644 Binary files a/papers/i was blind but now i see implementing visionenabled dialogue in social robots.pdf and b/papers/i was blind but now i see implementing visionenabled dialogue in social robots.pdf differ diff --git a/papers/icbellm high quality international 
events data with open source large language models on consumer hardware.pdf b/papers/icbellm high quality international events data with open source large language models on consumer hardware.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a8b1bace8154909dc23ee7bbd21a14895a9e604d --- /dev/null +++ b/papers/icbellm high quality international events data with open source large language models on consumer hardware.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faa1a39a732091977653cf77a1a2aceaecb06d1ba45ba9edabeb7083e71a91a0 +size 2597448 diff --git a/papers/ideal influencedriven selective annotations empower incontext learners in large language models.pdf b/papers/ideal influencedriven selective annotations empower incontext learners in large language models.pdf index c2748f64ecca7904526adecdb349b0807ea4fd38..236518618f8d4613d7183aff3a7c12816a58279d 100644 Binary files a/papers/ideal influencedriven selective annotations empower incontext learners in large language models.pdf and b/papers/ideal influencedriven selective annotations empower incontext learners in large language models.pdf differ diff --git a/papers/identifying and extracting rare disease phenotypes with large language models.pdf b/papers/identifying and extracting rare disease phenotypes with large language models.pdf index 10aec30ac13ddc3a1dc21bf5df0bff562c4960ab..bec2faf7f82939a4a728f1f1cdfa5ae02c7eb7d0 100644 Binary files a/papers/identifying and extracting rare disease phenotypes with large language models.pdf and b/papers/identifying and extracting rare disease phenotypes with large language models.pdf differ diff --git a/papers/identifying and mitigating the security risks of generative ai.pdf b/papers/identifying and mitigating the security risks of generative ai.pdf index 9d5476beb77bab1dc817ce93ea4607c0411e44ba..76811c517d51f058615634512fc2c56dd4979dd0 100644 --- a/papers/identifying and mitigating the security risks of generative ai.pdf +++ b/papers/identifying and mitigating the security risks of generative ai.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0db50614a242cab3eb4f4b2bfeea9e98f1c757f75abc9c18fa54ffdeed583696 -size 3415589 +oid sha256:8728c4e16367c2c877069c345ef173aaa80f8a35bc1e1396c7d5f544a29b60ab +size 1272640 diff --git a/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global prompt hacking competition.pdf b/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global prompt hacking competition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7e9e9f40a7b4f4609db2f40edfdf3b256c8fe8e7 --- /dev/null +++ b/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global prompt hacking competition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:deaebfb272b7544e5d906ddd58435f73626f47743d83d153f41d4af6a2cc12de +size 4580639 diff --git a/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global scale prompt hacking competition.pdf b/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global scale prompt hacking competition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b017da00d15a3c578aa2dceb39433fbb2081da24 --- /dev/null +++ b/papers/ignore this title and hackaprompt exposing systemic vulnerabilities of llms through a global scale prompt hacking 
competition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:361ea111075f1c27b6bf4cafcad858065d53c47f2e654006e712eddb34f3a554 +size 5173534 diff --git a/papers/iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf b/papers/iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf index 8d666303f8d1b958b5c11e442d7ce21db0c1656a..516564237cb235b886661baf93aa0b7f2b1dc81e 100644 Binary files a/papers/iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf and b/papers/iienlpnut at semeval2020 task 4 guiding plm with prompt template reconstruction strategy for comve.pdf differ diff --git a/papers/impact of large language model assistance on patients reading clinical notes a mixedmethods study.pdf b/papers/impact of large language model assistance on patients reading clinical notes a mixedmethods study.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b0611ed02c1ab66435dd4c8c25c56a8080bba18d --- /dev/null +++ b/papers/impact of large language model assistance on patients reading clinical notes a mixedmethods study.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89077624911b221569a9e1631e79f103f5b0352e887840368b86a3808a5d1ced +size 1562571 diff --git a/papers/impact of sample selection on incontext learning for entity extraction from scientific writing.pdf b/papers/impact of sample selection on incontext learning for entity extraction from scientific writing.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3cc9c5467a5387afed734cfbdea269accd458611 --- /dev/null +++ b/papers/impact of sample selection on incontext learning for entity extraction from scientific writing.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfaab345dbc91c4e4cac2622abbbb37ddeef7b35df57d7c6a8f70796fa5638ed +size 701171 diff --git a/papers/impossible triangle what's next for pretrained language models.pdf b/papers/impossible triangle what's next for pretrained language models.pdf index 01d9148db858fc7eedc1105e6ed3863bab4af376..f2df8160f1717ef5c69d0532930bb3ac8b11f6a0 100644 Binary files a/papers/impossible triangle what's next for pretrained language models.pdf and b/papers/impossible triangle what's next for pretrained language models.pdf differ diff --git a/papers/impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf b/papers/impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf index 4082dd012abfd08f64d7b668b95349f4bda3928b..9e9fa5199ad5d99ec5bd9e47849aac647f66950e 100644 Binary files a/papers/impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf and b/papers/impressiongpt an iterative optimizing framework for radiology report summarization with chatgpt.pdf differ diff --git a/papers/improved compositional generalization by generating demonstrations for metalearning.pdf b/papers/improved compositional generalization by generating demonstrations for metalearning.pdf index 6c23682b8b8c7e8eba90fa37137a8fa226e38aa4..e2eb6a3942c3d4892bb913c62784834bcb549b3c 100644 Binary files a/papers/improved compositional generalization by generating demonstrations for metalearning.pdf and b/papers/improved compositional generalization by generating demonstrations for metalearning.pdf differ diff --git a/papers/improving automatic vqa 
evaluation using large language models.pdf b/papers/improving automatic vqa evaluation using large language models.pdf index 809da094a28f1584b8c7d341206bea494785dca8..3f14d1e5e72d4ab3172feff54b174990d0d1bc4b 100644 --- a/papers/improving automatic vqa evaluation using large language models.pdf +++ b/papers/improving automatic vqa evaluation using large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6048f085bc9dd2f26d7792fc98fb94dde0a239fe21c690630622c1a459332954 -size 3287820 +oid sha256:2133b3b8dc9a2c034be211a9e2d5b0956f654ad7b08bd5b52cc27a0c3292533f +size 3303638 diff --git a/papers/improving fewshot domain transfer for named entity disambiguation with pattern exploitation.pdf b/papers/improving fewshot domain transfer for named entity disambiguation with pattern exploitation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..49e383af20ced82a1a88b3173e77d806394a5504 --- /dev/null +++ b/papers/improving fewshot domain transfer for named entity disambiguation with pattern exploitation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bccd4c1b80413d53da0c44129446a3df56bb598064a50ea0822299a2ba374dda +size 311576 diff --git a/papers/improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf b/papers/improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf index aacfe3c442b37b5913cf48121d4981af727257ee..4001fb80f5520a3e1f9d66c81d5021c18d892420 100644 Binary files a/papers/improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf and b/papers/improving fewshot generalization of safety classifiers via data augmented parameterefficient finetuning.pdf differ diff --git a/papers/improving fewshot prompts with relevant static analysis products.pdf b/papers/improving fewshot prompts with relevant static analysis products.pdf deleted file mode 100644 index c13c3943ff4b6b7c9da9bb7497856b59f8fba453..0000000000000000000000000000000000000000 --- a/papers/improving fewshot prompts with relevant static analysis products.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:415e7557d7ef54409c9e0cbb34584ac75fdad027b357361299a7bcbc40b1158a -size 1912770 diff --git a/papers/improving formalitysensitive machine translation using datacentric approaches and prompt engineering.pdf b/papers/improving formalitysensitive machine translation using datacentric approaches and prompt engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d1a5c332535b768c5f22f581a53e58148b5fdd9a --- /dev/null +++ b/papers/improving formalitysensitive machine translation using datacentric approaches and prompt engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3d9b1cd4ac20c4da8960f31ddcd774cc893c89d5fd21af0632d04b9a1473602 +size 1282457 diff --git a/papers/improving generalization in large langue model by learning prefix subspaces.pdf b/papers/improving generalization in large langue model by learning prefix subspaces.pdf new file mode 100644 index 0000000000000000000000000000000000000000..13e44ec88e000abc61fefdb877006d0b771e0536 --- /dev/null +++ b/papers/improving generalization in large langue model by learning prefix subspaces.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b56440426c9c4daa5d142da32597442bedf9343f150a716debfe992f7c94c6a6 +size 228453 diff --git 
a/papers/improving open information extraction with large language models a study on demonstration uncertainty.pdf b/papers/improving open information extraction with large language models a study on demonstration uncertainty.pdf index e5435e334fa1918b91c8754c39ec81e8d9246103..0a33171df88066811d48151ca66f6983c18646e4 100644 Binary files a/papers/improving open information extraction with large language models a study on demonstration uncertainty.pdf and b/papers/improving open information extraction with large language models a study on demonstration uncertainty.pdf differ diff --git a/papers/improving short text classification with augmented data using gpt3.pdf b/papers/improving short text classification with augmented data using gpt3.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c72ef147cf4d4d0e34f77a183da0ec13c7f163d3 --- /dev/null +++ b/papers/improving short text classification with augmented data using gpt3.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:163a7e7a520c4683bd84e38a491a78fba085140191eef0b4ae4de9d97b8f6657 +size 925406 diff --git a/papers/improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation.pdf b/papers/improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a115746b5cd68fc32d4f469b5059c9894e6bd805 --- /dev/null +++ b/papers/improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ae9fa1ee43bc6b74fd430bb4d5c8fbcd97eb963fd0329ef2acbdceb988c0394 +size 6567057 diff --git a/papers/improving the reliability of large language models by leveraging uncertaintyaware incontext learning.pdf b/papers/improving the reliability of large language models by leveraging uncertaintyaware incontext learning.pdf index d778d8827af4bad9694d7ba8747498ada07c5c0b..a14ff5bd447b03da00f25c47d0c06dedd524a8d3 100644 Binary files a/papers/improving the reliability of large language models by leveraging uncertaintyaware incontext learning.pdf and b/papers/improving the reliability of large language models by leveraging uncertaintyaware incontext learning.pdf differ diff --git a/papers/in search of the longtail systematic generation of longtail knowledge via logical rule guided search.pdf b/papers/in search of the longtail systematic generation of longtail knowledge via logical rule guided search.pdf new file mode 100644 index 0000000000000000000000000000000000000000..39e1d75b21809ecbc6d52773e4e80fc9556bbd28 --- /dev/null +++ b/papers/in search of the longtail systematic generation of longtail knowledge via logical rule guided search.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64313d8ae0a5a38e801276c0ab2510c2be9d04bc0f19a8a6201d04d542c43ffd +size 3936467 diff --git a/papers/inboxbart get instructions into biomedical multitask learning.pdf b/papers/inboxbart get instructions into biomedical multitask learning.pdf index 3c80fb8e76a7c5a0abf354c215d4966852f112da..0039b7a736f837564da3b738f7533f01434f959c 100644 Binary files a/papers/inboxbart get instructions into biomedical multitask learning.pdf and b/papers/inboxbart get instructions into biomedical multitask learning.pdf differ diff --git a/papers/incontext convergence of transformers.pdf b/papers/incontext convergence of 
transformers.pdf index bd5932241694b1c489ef56c36da4f3640d44b4ba..0f019108e39c293720b3e00a914785b2dd3ddff5 100644 Binary files a/papers/incontext convergence of transformers.pdf and b/papers/incontext convergence of transformers.pdf differ diff --git a/papers/incontext exemplars as clues to retrieving from large associative memory.pdf b/papers/incontext exemplars as clues to retrieving from large associative memory.pdf index c26071c223acd57e4a96cc580ad40b0d6dcebfd6..9ff83d51aafae6f807e6beaae1bac04af3bfe3c4 100644 Binary files a/papers/incontext exemplars as clues to retrieving from large associative memory.pdf and b/papers/incontext exemplars as clues to retrieving from large associative memory.pdf differ diff --git a/papers/incontext fewshot relation extraction via pretrained language models.pdf b/papers/incontext fewshot relation extraction via pretrained language models.pdf index 9c5f077f29d32aa1988560945bc158443d6c2495..d7ff2b13b8c446678724a578c0559d4e13fcb576 100644 Binary files a/papers/incontext fewshot relation extraction via pretrained language models.pdf and b/papers/incontext fewshot relation extraction via pretrained language models.pdf differ diff --git a/papers/incontext impersonation reveals large language models' strengths and biases.pdf b/papers/incontext impersonation reveals large language models' strengths and biases.pdf index 531a04093257eb7ee23debc8de9eac796ba5b142..bb6867b0773f9b9689d2f5b58cadf6ccef462327 100644 --- a/papers/incontext impersonation reveals large language models' strengths and biases.pdf +++ b/papers/incontext impersonation reveals large language models' strengths and biases.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1e1b9fa8ac7a6383bc72a9576606fdd4054c11a2eb034986174638c61edcca61 -size 2176335 +oid sha256:bf4f83d6579cf96586a754a669cb763e6d8c89d0a4536be7a431d32b29575603 +size 3966488 diff --git a/papers/incontext instruction learning.pdf b/papers/incontext instruction learning.pdf deleted file mode 100644 index 15d9af6185024faf6444726c16d986e99c3574c4..0000000000000000000000000000000000000000 Binary files a/papers/incontext instruction learning.pdf and /dev/null differ diff --git a/papers/incontext interference in chatbased large language models.pdf b/papers/incontext interference in chatbased large language models.pdf index 313cda2fd67a5203b9bc27dcf7bd50d43294a0fb..b8030a38f57c944c9ce42524d0c96c6799ca7c63 100644 Binary files a/papers/incontext interference in chatbased large language models.pdf and b/papers/incontext interference in chatbased large language models.pdf differ diff --git a/papers/incontext learning as maintaining coherency a study of onthefly machine translation using large language models.pdf b/papers/incontext learning as maintaining coherency a study of onthefly machine translation using large language models.pdf index 41d23e05d720e2e592d1a0d08796a36e673e335c..3efb47a6edc85a683611d99261781f74343a3bf1 100644 Binary files a/papers/incontext learning as maintaining coherency a study of onthefly machine translation using large language models.pdf and b/papers/incontext learning as maintaining coherency a study of onthefly machine translation using large language models.pdf differ diff --git a/papers/incontext learning dynamics with random binary sequences.pdf b/papers/incontext learning dynamics with random binary sequences.pdf index dc5ef5ae1656583885e27cb86ebedb3e8e4cc943..88ab055267db85fd3a569c1467e37aa5a3e6e8f7 100644 --- a/papers/incontext learning dynamics with random binary sequences.pdf +++ 
b/papers/incontext learning dynamics with random binary sequences.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:30eca80e244ce38d2377215b22637e863d59405c22d269bfac8ddeeb583f2c1b -size 2899202 +oid sha256:c75e3f197fe852f5e883c892a1206953d55d8b198dc75aad2bfa7a0e077d88a4 +size 3413245 diff --git a/papers/incontext learning for extreme multilabel classification.pdf b/papers/incontext learning for extreme multilabel classification.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fc6ea63ffdf7f81e46c853ec0ec52ab032e4246d --- /dev/null +++ b/papers/incontext learning for extreme multilabel classification.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bb42d29a045eb55872432b16dcd64de05a356afebd30f9bb11c44716a0c5ef8 +size 740562 diff --git a/papers/incontext learning for fewshot multimodal named entity recognition.pdf b/papers/incontext learning for fewshot multimodal named entity recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3849ffc1d97f834d2da999c1fa65ee25f224cccd --- /dev/null +++ b/papers/incontext learning for fewshot multimodal named entity recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9b8c0db17fa94895ef52c3aa059647697d0c0421cc469f40060b40063c53db3 +size 1132216 diff --git a/papers/incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf b/papers/incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf index de67195b230e436b52a8b62813028ac81362414d..2f147d92e9bc1202501d460646a730894cb6214b 100644 Binary files a/papers/incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf and b/papers/incontext learning for knowledge base question answering for unmanned systems based on large language models.pdf differ diff --git a/papers/incontext learning for modelfree system identification.pdf b/papers/incontext learning for modelfree system identification.pdf deleted file mode 100644 index 619a4e6c78e3ef994690e79f0a14b947d0c1ffde..0000000000000000000000000000000000000000 --- a/papers/incontext learning for modelfree system identification.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7aae87765ee935700c80d07b95fe0472e51f3f74b41233777159f6da3aaa917b -size 2416216 diff --git a/papers/incontext learning for text classification with many labels.pdf b/papers/incontext learning for text classification with many labels.pdf index ee0192727a6f4b150e36bdb8291f0a0b996964fe..27525c7a682d7a807692634c9ca01ef9a621b621 100644 --- a/papers/incontext learning for text classification with many labels.pdf +++ b/papers/incontext learning for text classification with many labels.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:4502382752572dd3b49cf97d687c5f4bbbaef22faf8ec76c5032a2562c7e3f4d -size 1311963 +oid sha256:a62552f5777f445b1ef606bfcfded0c71a645994c76c306620b22db0942b51bf +size 1334373 diff --git a/papers/incontext learning of large language models for controlled dialogue summarization a holistic benchmark and empirical analysis.pdf b/papers/incontext learning of large language models for controlled dialogue summarization a holistic benchmark and empirical analysis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..74fb31764f8bd1d4f52aa2da5213291f7893f633 --- /dev/null +++ b/papers/incontext learning of 
large language models for controlled dialogue summarization a holistic benchmark and empirical analysis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3153bce3bf7ae9d0fa161cba839f82b041471a1552366821201053df5d7316cb +size 2638053 diff --git a/papers/incontext learning user simulators for taskoriented dialog systems.pdf b/papers/incontext learning user simulators for taskoriented dialog systems.pdf index 3ade3dbb6653fa9fa7229852f4456212453fbdbd..a15761fb4205e532fdf37b04d122ec3c45de85a2 100644 Binary files a/papers/incontext learning user simulators for taskoriented dialog systems.pdf and b/papers/incontext learning user simulators for taskoriented dialog systems.pdf differ diff --git a/papers/incontext learning with iterative demonstration selection.pdf b/papers/incontext learning with iterative demonstration selection.pdf index ec2fb36ef3ec34c4bfa61029ec0b6024a4b388c0..033054f3603ca9df7207952f41f6a5c86976fe04 100644 Binary files a/papers/incontext learning with iterative demonstration selection.pdf and b/papers/incontext learning with iterative demonstration selection.pdf differ diff --git a/papers/incontext learning with many demonstration examples.pdf b/papers/incontext learning with many demonstration examples.pdf index 6a7ca6f49cbc877145aace96dd0a15d67953dedd..1e3260c1a0a6baf1cab9aa9f07d0754315706867 100644 Binary files a/papers/incontext learning with many demonstration examples.pdf and b/papers/incontext learning with many demonstration examples.pdf differ diff --git a/papers/incontext pretraining language modeling beyond document boundaries.pdf b/papers/incontext pretraining language modeling beyond document boundaries.pdf index ae4cb462323a8a9773ef7d815b38ddcced36a13c..43cc3696e1469629953c44927ab2ff7a804ea6c5 100644 --- a/papers/incontext pretraining language modeling beyond document boundaries.pdf +++ b/papers/incontext pretraining language modeling beyond document boundaries.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f338f0bcf90900bd5538a25dc5f4600e2557de306f11d71cc9669db1f46d91c2 -size 1076564 +oid sha256:aba7317da6ff1c0a5986242b07748f30aa3188c728cbdaa0986eb2f5838eb6be +size 1086913 diff --git a/papers/inducing anxiety in large language models increases exploration and bias.pdf b/papers/inducing anxiety in large language models increases exploration and bias.pdf index 13952a8b61ccd06a9d15f51e0bb8063034524766..308ad4bce91c63795746bc8c4926e39c59f278ba 100644 Binary files a/papers/inducing anxiety in large language models increases exploration and bias.pdf and b/papers/inducing anxiety in large language models increases exploration and bias.pdf differ diff --git a/papers/inductivebias learning generating code models with large language model.pdf b/papers/inductivebias learning generating code models with large language model.pdf index ea409ca273a6784d7a056c5d9a8eea32c66a5eaa..4f74ccbf36033e5b73df7254023aa1dfce9808ec 100644 Binary files a/papers/inductivebias learning generating code models with large language model.pdf and b/papers/inductivebias learning generating code models with large language model.pdf differ diff --git a/papers/inferfix endtoend program repair with llms.pdf b/papers/inferfix endtoend program repair with llms.pdf index 29a5a5aca9eb94b0cbdfe9f79fe4fea978c28232..52857b9c9d189d96460a8f1683053bca4d852905 100644 --- a/papers/inferfix endtoend program repair with llms.pdf +++ b/papers/inferfix endtoend program repair with llms.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:1c9ba3282b063f8c7efeef0d667bec66abe29421bf06e92bcb11f7d73918ca7d -size 1025936 +oid sha256:b5878ce46bb09f2dbe0787dc6e2ba24ebfbfbd63d1d81eb28e68ff2bd6c4c13c +size 1074920 diff --git a/papers/inferring latent class statistics from text for robust visual fewshot learning.pdf b/papers/inferring latent class statistics from text for robust visual fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a550f1dcfb830c98b5959fb419d70dddb431197b --- /dev/null +++ b/papers/inferring latent class statistics from text for robust visual fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4156d93b85b99c61cf05e81e40331e77abb4fcf1e2a568a038d055e03397a0eb +size 1083418 diff --git a/papers/information extraction from documents question answering vs token classification in realworld setups.pdf b/papers/information extraction from documents question answering vs token classification in realworld setups.pdf index 8e0bc380d0351ca91b37aae23d47ddab9fc5e4ce..877e8fac6edb05c2a0d1cc496a5578d8a1344f5f 100644 Binary files a/papers/information extraction from documents question answering vs token classification in realworld setups.pdf and b/papers/information extraction from documents question answering vs token classification in realworld setups.pdf differ diff --git a/papers/information extraction from legal wills how well does gpt4 do.pdf b/papers/information extraction from legal wills how well does gpt4 do.pdf new file mode 100644 index 0000000000000000000000000000000000000000..09e0de8bd97cd91abc650c093c5e445c12ed3902 --- /dev/null +++ b/papers/information extraction from legal wills how well does gpt4 do.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0227b6300b9a3a2ba9bbf169c6fc0ea15a264c1cd05f9db1f391750cf4f4e142 +size 2090141 diff --git a/papers/injecting a structural inductive bias into a seq2seq model by simulation.pdf b/papers/injecting a structural inductive bias into a seq2seq model by simulation.pdf index 757a2ba6202d0f492230315954af8a2192058bef..ee817f6228cb64c86e3d56a3cba2b95674edb77d 100644 Binary files a/papers/injecting a structural inductive bias into a seq2seq model by simulation.pdf and b/papers/injecting a structural inductive bias into a seq2seq model by simulation.pdf differ diff --git a/papers/insertexpansions for toolenabled conversational agents.pdf b/papers/insertexpansions for toolenabled conversational agents.pdf deleted file mode 100644 index 09a3258c240dba517effa99ed31a69c16261591b..0000000000000000000000000000000000000000 --- a/papers/insertexpansions for toolenabled conversational agents.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cd42dcfa7d63675eee26953e517896f25975d8696837fa8aec9d603b35b18d97 -size 5940653 diff --git a/papers/instanceaware prompt learning for language understanding and generation.pdf b/papers/instanceaware prompt learning for language understanding and generation.pdf index a9156584c11f7c349825bcb1b8d79c9754cc6bd3..0afee460d87fdb6d06587f97b55b6e585b733061 100644 Binary files a/papers/instanceaware prompt learning for language understanding and generation.pdf and b/papers/instanceaware prompt learning for language understanding and generation.pdf differ diff --git a/papers/instructed language models with retrievers are powerful entity linkers.pdf b/papers/instructed language models with retrievers are powerful entity linkers.pdf index 
83d39fa6e566c84a378a4e210ca0e8002b778f90..134f8005892e724b9e7b6be850979464de8c072b 100644 Binary files a/papers/instructed language models with retrievers are powerful entity linkers.pdf and b/papers/instructed language models with retrievers are powerful entity linkers.pdf differ diff --git a/papers/instructeval systematic evaluation of instruction selection methods.pdf b/papers/instructeval systematic evaluation of instruction selection methods.pdf index 13d94df669aec4506d108ecd2fa49f341c7ade89..6c6ac1631f99ab87880428bd230cb1b28e9abf64 100644 Binary files a/papers/instructeval systematic evaluation of instruction selection methods.pdf and b/papers/instructeval systematic evaluation of instruction selection methods.pdf differ diff --git a/papers/instruction distillation makes large language models efficient zeroshot rankers.pdf b/papers/instruction distillation makes large language models efficient zeroshot rankers.pdf index 5a1b628659664fb4b1f8400c1407689db34fd605..01a0d017b857fdc8f7017f90c48c679fb017722b 100644 Binary files a/papers/instruction distillation makes large language models efficient zeroshot rankers.pdf and b/papers/instruction distillation makes large language models efficient zeroshot rankers.pdf differ diff --git a/papers/instruction induction from few examples to natural language task descriptions.pdf b/papers/instruction induction from few examples to natural language task descriptions.pdf index 32438fcc1109d6c2374e2f8d71a2ac3d96a2f3cf..e5865c3f772afd3335d16b75b75046ce0ba0b9ff 100644 Binary files a/papers/instruction induction from few examples to natural language task descriptions.pdf and b/papers/instruction induction from few examples to natural language task descriptions.pdf differ diff --git a/papers/instruction tuning for fewshot aspectbased sentiment analysis.pdf b/papers/instruction tuning for fewshot aspectbased sentiment analysis.pdf index c975122ab76bfb7b0677ba638794479b315f94a7..84a26cfa9ed800596f4152120dee8215b8c11911 100644 Binary files a/papers/instruction tuning for fewshot aspectbased sentiment analysis.pdf and b/papers/instruction tuning for fewshot aspectbased sentiment analysis.pdf differ diff --git a/papers/instructionner a multitask instructionbased generative framework for fewshot ner.pdf b/papers/instructionner a multitask instructionbased generative framework for fewshot ner.pdf index 0202acc2b28de5233833709efacb2769f3c575fa..c21a9751c8a379245a847393c86483f23e39fcf1 100644 Binary files a/papers/instructionner a multitask instructionbased generative framework for fewshot ner.pdf and b/papers/instructionner a multitask instructionbased generative framework for fewshot ner.pdf differ diff --git a/papers/interact exploring the potentials of chatgpt as a cooperative agent.pdf b/papers/interact exploring the potentials of chatgpt as a cooperative agent.pdf index 7676a6f18c21807ab8328f6e4c3ea316fa867aa1..89ee71273c5266baff03ad91226733e1082865be 100644 Binary files a/papers/interact exploring the potentials of chatgpt as a cooperative agent.pdf and b/papers/interact exploring the potentials of chatgpt as a cooperative agent.pdf differ diff --git a/papers/interactive symbol grounding with complex referential expressions.pdf b/papers/interactive symbol grounding with complex referential expressions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..045c11cb875141ebd879f50872c656450599c318 --- /dev/null +++ b/papers/interactive symbol grounding with complex referential expressions.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:24c5deddb01b5a8019d5cc39e0c5643eda19bf8fbc1bc32c7040cc6f926f20ce +size 467256 diff --git a/papers/internetaugmented language models through fewshot prompting for opendomain question answering.pdf b/papers/internetaugmented language models through fewshot prompting for opendomain question answering.pdf index 60d1bcf8283ce63f578076bb3cc93d10a40b63fa..5cd488f755c4f0e6d983def2b286d6fadc35e4f9 100644 Binary files a/papers/internetaugmented language models through fewshot prompting for opendomain question answering.pdf and b/papers/internetaugmented language models through fewshot prompting for opendomain question answering.pdf differ diff --git a/papers/inverse is better! fast and accurate prompt for fewshot slot tagging.pdf b/papers/inverse is better! fast and accurate prompt for fewshot slot tagging.pdf index aa26f352269102bb67583b5bbfd43e454d7e453c..da94e656232692daa043a1f243733b2f8ad49d12 100644 Binary files a/papers/inverse is better! fast and accurate prompt for fewshot slot tagging.pdf and b/papers/inverse is better! fast and accurate prompt for fewshot slot tagging.pdf differ diff --git a/papers/investigating prompt learning for chinese fewshot text classification with pretrained language models.pdf b/papers/investigating prompt learning for chinese fewshot text classification with pretrained language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..19c3575c8e59545386b1ca1ef8aa743d28c6f0cc --- /dev/null +++ b/papers/investigating prompt learning for chinese fewshot text classification with pretrained language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a272476e8264890da4096fbb9826f4edae0c5642790225be67a04482400d46f4 +size 2781851 diff --git a/papers/investigating prompting techniques for zero and fewshot visual question answering.pdf b/papers/investigating prompting techniques for zero and fewshot visual question answering.pdf index c70f9190553302c05d1a7402a09e70b00b680ecb..f6d548cd27d0a1ab2f9292a2208afdc1c3865e02 100644 --- a/papers/investigating prompting techniques for zero and fewshot visual question answering.pdf +++ b/papers/investigating prompting techniques for zero and fewshot visual question answering.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8c14242bc3b94b457e42db378d319f45dbc36ec26eeb75849f90d9f3b45b6cb1 -size 1192259 +oid sha256:cad6ba0f7c14dcecc813125c8a7965f1a143271d90f2a29996762e8a4d5e0e26 +size 5066622 diff --git a/papers/investigating the applicability of selfassessment tests for personality measurement of large language models.pdf b/papers/investigating the applicability of selfassessment tests for personality measurement of large language models.pdf deleted file mode 100644 index cde739ddc9064800310d796208e3caf7041ada98..0000000000000000000000000000000000000000 Binary files a/papers/investigating the applicability of selfassessment tests for personality measurement of large language models.pdf and /dev/null differ diff --git a/papers/investigating the fairness of large language models for predictions on tabular data.pdf b/papers/investigating the fairness of large language models for predictions on tabular data.pdf index dac7df960d699231e50200d47dfd11c22f2999b9..a36e306179ea2c3c7a1c0c7a8ceec505ea0e1946 100644 Binary files a/papers/investigating the fairness of large language models for predictions on tabular data.pdf and b/papers/investigating the fairness of large language models for predictions on tabular data.pdf 
differ diff --git a/papers/iot in the era of generative ai vision and challenges.pdf b/papers/iot in the era of generative ai vision and challenges.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ebfc178ef384e551eb3d8ee3eb03409db0113d44 --- /dev/null +++ b/papers/iot in the era of generative ai vision and challenges.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:484cbdb1c92c998e33a8622225fef1219bb80c31e421202c577f7b06ad0e5772 +size 904472 diff --git a/papers/is chatgpt a good causal reasoner a comprehensive evaluation.pdf b/papers/is chatgpt a good causal reasoner a comprehensive evaluation.pdf index 79e88d24d90e872691b7d590d074a816345b1950..bd845d371412e17a21f2a93f5810b653a99a401a 100644 Binary files a/papers/is chatgpt a good causal reasoner a comprehensive evaluation.pdf and b/papers/is chatgpt a good causal reasoner a comprehensive evaluation.pdf differ diff --git a/papers/is chatgpt a good recommender a preliminary study.pdf b/papers/is chatgpt a good recommender a preliminary study.pdf index e127986ecf131fc48969787839dc0833c3e736c1..4dcd7f2504340bb00bf28306cbd3d234959ebd6a 100644 Binary files a/papers/is chatgpt a good recommender a preliminary study.pdf and b/papers/is chatgpt a good recommender a preliminary study.pdf differ diff --git a/papers/is chatgpt the ultimate programming assistant how far is it.pdf b/papers/is chatgpt the ultimate programming assistant how far is it.pdf index add46421e8d3d4b0028ab472b7e36b6cd12acfec..ecafb4c2923702154ea1f37806ea63d8d3363e29 100644 Binary files a/papers/is chatgpt the ultimate programming assistant how far is it.pdf and b/papers/is chatgpt the ultimate programming assistant how far is it.pdf differ diff --git a/papers/is gpt4 a good trader.pdf b/papers/is gpt4 a good trader.pdf index ad5d497d4714571e41f7eb4f905d1a387d5ddb3b..bda1fb8a37c12423d2bae23e92b0516d762deb58 100644 Binary files a/papers/is gpt4 a good trader.pdf and b/papers/is gpt4 a good trader.pdf differ diff --git a/papers/jailbreak and guard aligned language models with only few incontext demonstrations.pdf b/papers/jailbreak and guard aligned language models with only few incontext demonstrations.pdf index f40e202b9ddb9ef6014a4d5e87120a340a194d0b..f58178e1e669c81a09d9b0c28a8600aac1bb981f 100644 Binary files a/papers/jailbreak and guard aligned language models with only few incontext demonstrations.pdf and b/papers/jailbreak and guard aligned language models with only few incontext demonstrations.pdf differ diff --git a/papers/jailbreaking chatgpt via prompt engineering an empirical study.pdf b/papers/jailbreaking chatgpt via prompt engineering an empirical study.pdf index 51b405d98a88008a83d2c528d83e7fdd96033be4..991c52a3a79195bc7283994574c269c72d809e60 100644 Binary files a/papers/jailbreaking chatgpt via prompt engineering an empirical study.pdf and b/papers/jailbreaking chatgpt via prompt engineering an empirical study.pdf differ diff --git a/papers/jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf b/papers/jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf index de4c40107b5636fd45e2d35fe5c005007f17e8ac..c59454ac5cc4448f6e5bcdfb12c6aab929418b16 100644 Binary files a/papers/jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf and b/papers/jailbreaking gpt4v via selfadversarial attacks with system prompts.pdf differ diff --git a/papers/jampatoisnli a jamaican patois natural language inference dataset.pdf b/papers/jampatoisnli a jamaican patois natural language 
inference dataset.pdf index 363e3888fe306ff68cfa3bc032e9b59ee86dafc4..f622ffbb7204ce5344dc3426c62d3519d70a6297 100644 Binary files a/papers/jampatoisnli a jamaican patois natural language inference dataset.pdf and b/papers/jampatoisnli a jamaican patois natural language inference dataset.pdf differ diff --git a/papers/jatmo prompt injection defense by taskspecific finetuning.pdf b/papers/jatmo prompt injection defense by taskspecific finetuning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b1cf908df6dc14c01f87113c5217392084f9a167 --- /dev/null +++ b/papers/jatmo prompt injection defense by taskspecific finetuning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22c2db807f39404d1e9ffb7b7c619da9a06b9f581e0e10029cab112ae19a857f +size 1350213 diff --git a/papers/jen1 textguided universal music generation with omnidirectional diffusion models.pdf b/papers/jen1 textguided universal music generation with omnidirectional diffusion models.pdf index c2b7e308939110305370863eb6e87b193b5b7ed9..45179310e5ccad83cea457c6958c796c8e34ef48 100644 Binary files a/papers/jen1 textguided universal music generation with omnidirectional diffusion models.pdf and b/papers/jen1 textguided universal music generation with omnidirectional diffusion models.pdf differ diff --git a/papers/joint foundation model caching and inference of generative ai services for edge intelligence.pdf b/papers/joint foundation model caching and inference of generative ai services for edge intelligence.pdf index 9ab060cbafa2e210712b508f0eebb868ca8dd768..4a0e0d455f56bb4dd94591c2999aa36e4ea3afdb 100644 Binary files a/papers/joint foundation model caching and inference of generative ai services for edge intelligence.pdf and b/papers/joint foundation model caching and inference of generative ai services for edge intelligence.pdf differ diff --git a/papers/joint prompt optimization of stacked llms using variational inference.pdf b/papers/joint prompt optimization of stacked llms using variational inference.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b517a036dfea7e680223035616d07a9275a45c66 --- /dev/null +++ b/papers/joint prompt optimization of stacked llms using variational inference.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fe82566cc1f8a2ceec716d33e19783bbac9a7f66c564c5fef51e403c51373cd +size 685974 diff --git a/papers/jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue.pdf b/papers/jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue.pdf index 49814735f8793c9d8c148708fd31b71b534eff6c..06c47dbfb82eac5c0b59cbea007f9269abe76d80 100644 Binary files a/papers/jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue.pdf and b/papers/jurassic is (almost) all you need fewshot meaningtotext generation for opendomain dialogue.pdf differ diff --git a/papers/just adjust one prompt enhancing incontext dialogue scoring via constructing the optimal subgraph of demonstrations and prompts.pdf b/papers/just adjust one prompt enhancing incontext dialogue scoring via constructing the optimal subgraph of demonstrations and prompts.pdf new file mode 100644 index 0000000000000000000000000000000000000000..92b993b6988990bf69728121ab33b6b639997c54 --- /dev/null +++ b/papers/just adjust one prompt enhancing incontext dialogue scoring via constructing the optimal subgraph of demonstrations and prompts.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a4e855528075caefa57fba7713162c33efbfbe68cbd76dd9e8a66511e9bc6d30 +size 12618265 diff --git a/papers/just tell me prompt engineering in business process management.pdf b/papers/just tell me prompt engineering in business process management.pdf index 5903140801361b74bd9aabebb81cb508527b3cdd..9e331aedd12b01bca8a1ec304750b54704de9948 100644 Binary files a/papers/just tell me prompt engineering in business process management.pdf and b/papers/just tell me prompt engineering in business process management.pdf differ diff --git a/papers/kicgpt large language model with knowledge in context for knowledge graph completion.pdf b/papers/kicgpt large language model with knowledge in context for knowledge graph completion.pdf new file mode 100644 index 0000000000000000000000000000000000000000..abd27db760cf92fd7c486f9f51ec9abe9b092990 --- /dev/null +++ b/papers/kicgpt large language model with knowledge in context for knowledge graph completion.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35e21163552a95d60901c9bff507d18f122c26843b7c33b0c25fd12db0824c4b +size 1101192 diff --git a/papers/kipt knowledgeinjected prompt tuning for event detection.pdf b/papers/kipt knowledgeinjected prompt tuning for event detection.pdf new file mode 100644 index 0000000000000000000000000000000000000000..177edb8094cfd3e1eccc237a43cd6f8ded3b1709 --- /dev/null +++ b/papers/kipt knowledgeinjected prompt tuning for event detection.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58a0f232a9890479d59b102f8046661518fb6acaccef2b74cee3138d3fd2970b +size 955246 diff --git a/papers/knnicl compositional taskoriented parsing generalization with nearest neighbor incontext learning.pdf b/papers/knnicl compositional taskoriented parsing generalization with nearest neighbor incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c5c68a04dbec47540dd838c6c5e0e349bc02a0b6 --- /dev/null +++ b/papers/knnicl compositional taskoriented parsing generalization with nearest neighbor incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4cee82680251e7f328e17374b2c3916b6563ba6a02eac48dd8879a104975162 +size 472524 diff --git a/papers/knowledge crosswords geometric reasoning over structured knowledge with large language models.pdf b/papers/knowledge crosswords geometric reasoning over structured knowledge with large language models.pdf index 53a2971d6e55d912fbc15773611449900e71bb9a..2ba6a95ec0a4711c8a7e4809cbb5342efecd4abb 100644 Binary files a/papers/knowledge crosswords geometric reasoning over structured knowledge with large language models.pdf and b/papers/knowledge crosswords geometric reasoning over structured knowledge with large language models.pdf differ diff --git a/papers/knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf b/papers/knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf index 8ddd3a90e4a9be2b7a1f6315f4a45c5b28e6840c..d31b65c486413464826c9cd7f1b311f16d171262 100644 Binary files a/papers/knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf and b/papers/knowledge graph completion models are fewshot learners an empirical study of relation labeling in ecommerce with llms.pdf differ diff --git a/papers/knowledgedriven cot exploring faithful reasoning in 
llms for knowledgeintensive question answering.pdf b/papers/knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf index 03d1446f8ac19e01125d10def9ee679b598baa0a..26a7533cc45fd85608ddbdcd0327416eec033e32 100644 Binary files a/papers/knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf and b/papers/knowledgedriven cot exploring faithful reasoning in llms for knowledgeintensive question answering.pdf differ diff --git a/papers/knowledgegrounded dialog state tracking.pdf b/papers/knowledgegrounded dialog state tracking.pdf index 36c52dfcb23a350ac249bf24f2f91480314ffaba..ef290726752dcd8631b94e7cd58556960f147de3 100644 Binary files a/papers/knowledgegrounded dialog state tracking.pdf and b/papers/knowledgegrounded dialog state tracking.pdf differ diff --git a/papers/knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf b/papers/knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf index ad61a5485452b4f21a2a17f07ed567d9d9748553..8ffaf489c9a4a631427fd7bdfa01d9acb2ed9c94 100644 Binary files a/papers/knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf and b/papers/knowledgeprompted estimator a novel approach to explainable machine translation assessment.pdf differ diff --git "a/papers/kul@smm4h\342\200\23122 template augmented adaptive pretraining for tweet classification.pdf" "b/papers/kul@smm4h\342\200\23122 template augmented adaptive pretraining for tweet classification.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..b8e69cb79e7aefbe76ed77be4a300a2cc1f69034 --- /dev/null +++ "b/papers/kul@smm4h\342\200\23122 template augmented adaptive pretraining for tweet classification.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b83f437f30aaafd1c8afb9a8969d7e41a10fd0906a0ed1e81e4983e83f51580 +size 136555 diff --git a/papers/lambada backward chaining for automated reasoning in natural language.pdf b/papers/lambada backward chaining for automated reasoning in natural language.pdf index be763d4ee7ae7694d53992b5d1fd04d3dcdc2308..b04b96c073468dc4e7b95991323084462a0610fb 100644 Binary files a/papers/lambada backward chaining for automated reasoning in natural language.pdf and b/papers/lambada backward chaining for automated reasoning in natural language.pdf differ diff --git a/papers/lamp when large language models meet personalization.pdf b/papers/lamp when large language models meet personalization.pdf deleted file mode 100644 index 262f838e2097d41db4244ef2098eec2d4d54d923..0000000000000000000000000000000000000000 --- a/papers/lamp when large language models meet personalization.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9bc0d1632b18dce1a046ed430e254072b7e248276bc54e656d7b24fddef30537 -size 1077882 diff --git a/papers/langrasp using large language models for semantic object grasping.pdf b/papers/langrasp using large language models for semantic object grasping.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1914d9117a66997ee41e8d4104268dbf44aa7b42 --- /dev/null +++ b/papers/langrasp using large language models for semantic object grasping.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:997008870801d0a9921842ffa707990732fb18f1cb600731007ce73da0a1af45 +size 1383382 diff --git a/papers/language model cascades.pdf b/papers/language model 
cascades.pdf index ea567170aa93a6a788cf1796e5bd6679c5c0833e..00dad4246a65bdb2acf708d923c8eef2fd02bc23 100644 Binary files a/papers/language model cascades.pdf and b/papers/language model cascades.pdf differ diff --git a/papers/language models are fewshot learners for prognostic prediction.pdf b/papers/language models are fewshot learners for prognostic prediction.pdf index d81d9d0a947474ada115ede122a63cae68fece5c..df9fbd62f84666c34c404071e35b43504a3caae7 100644 Binary files a/papers/language models are fewshot learners for prognostic prediction.pdf and b/papers/language models are fewshot learners for prognostic prediction.pdf differ diff --git a/papers/language models are weak learners.pdf b/papers/language models are weak learners.pdf index 0a467f3c3bf755639ce72b813cecad2f1d414f41..e708b56920c5641a0375b92832e12f5cc6463e55 100644 Binary files a/papers/language models are weak learners.pdf and b/papers/language models are weak learners.pdf differ diff --git a/papers/language models don't always say what they think unfaithful explanations in chainofthought prompting.pdf b/papers/language models don't always say what they think unfaithful explanations in chainofthought prompting.pdf index 319a4cde8a18d2e682864067271a0f6f6efac31a..93f3ac2887c584f889b07590d8bd0fa80bec9f73 100644 Binary files a/papers/language models don't always say what they think unfaithful explanations in chainofthought prompting.pdf and b/papers/language models don't always say what they think unfaithful explanations in chainofthought prompting.pdf differ diff --git a/papers/language quantized autoencoders towards unsupervised textimage alignment.pdf b/papers/language quantized autoencoders towards unsupervised textimage alignment.pdf index 5e2da05dc079989d056b944621eca9a141485048..fbadf77b4b0b81145a44fc908595147486f9691c 100644 Binary files a/papers/language quantized autoencoders towards unsupervised textimage alignment.pdf and b/papers/language quantized autoencoders towards unsupervised textimage alignment.pdf differ diff --git a/papers/large language model prompt chaining for long legal document classification.pdf b/papers/large language model prompt chaining for long legal document classification.pdf index 9cef3563b11e255917fa455338eea52cd7f6167d..aa0b520972b3ce742b558fab701660739d085ce3 100644 Binary files a/papers/large language model prompt chaining for long legal document classification.pdf and b/papers/large language model prompt chaining for long legal document classification.pdf differ diff --git a/papers/large language modelaware incontext learning for code generation.pdf b/papers/large language modelaware incontext learning for code generation.pdf index 1cffd81ca16db7a3d6c15afe9764028d8267b5a1..f8673d06935a2510b8b69fa6dfd7b50a89d7d205 100644 Binary files a/papers/large language modelaware incontext learning for code generation.pdf and b/papers/large language modelaware incontext learning for code generation.pdf differ diff --git a/papers/large language modeldriven classroom flipping empowering studentcentric peer questioning with flipped interaction.pdf b/papers/large language modeldriven classroom flipping empowering studentcentric peer questioning with flipped interaction.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07b74fae76eadf98ca27011287872bafe95d7282 --- /dev/null +++ b/papers/large language modeldriven classroom flipping empowering studentcentric peer questioning with flipped interaction.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7bb7f92ac39f565012816f9fe1f0956806d2f5fb495270feed005c2dfdd1ef33 +size 4029618 diff --git a/papers/large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf b/papers/large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf index e5627ada1ee94b402ed4e78bb7f2bc6e00351c33..4c4d1ce8ce7173f32ba720ac772915698bf12638 100644 Binary files a/papers/large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf and b/papers/large language models and prompt engineering for biomedical query focused multidocument summarisation.pdf differ diff --git a/papers/large language models are biased to overestimate profoundness.pdf b/papers/large language models are biased to overestimate profoundness.pdf index 13f3e3254475bdb1da55c6c674eb5dd2d454cd90..efb7b7f9c6b27c1f14dc1c56eb89e8fc8fc141be 100644 Binary files a/papers/large language models are biased to overestimate profoundness.pdf and b/papers/large language models are biased to overestimate profoundness.pdf differ diff --git a/papers/large language models are pretty good zeroshot video game bug detectors.pdf b/papers/large language models are pretty good zeroshot video game bug detectors.pdf index d54d86cef6b4fa83e732784d65e47f8beb92fdcd..7e2f4ac9da5698b6fc01196cc9fc4d1803fe128b 100644 Binary files a/papers/large language models are pretty good zeroshot video game bug detectors.pdf and b/papers/large language models are pretty good zeroshot video game bug detectors.pdf differ diff --git a/papers/large language models are stateoftheart evaluators of translation quality.pdf b/papers/large language models are stateoftheart evaluators of translation quality.pdf index ab82b873a8c109083e5e81e3a0a080757bd24764..7e56b2237bd2bfd103a19f55c2356d24dfc3de22 100644 Binary files a/papers/large language models are stateoftheart evaluators of translation quality.pdf and b/papers/large language models are stateoftheart evaluators of translation quality.pdf differ diff --git a/papers/large language models are zeroshot rankers for recommender systems.pdf b/papers/large language models are zeroshot rankers for recommender systems.pdf index 57ba0a9bfb7b8b27cc2e95205e7bccfa1687ce88..1a433c37ab1aefd8c0038476c750fa3173720df2 100644 Binary files a/papers/large language models are zeroshot rankers for recommender systems.pdf and b/papers/large language models are zeroshot rankers for recommender systems.pdf differ diff --git a/papers/large language models are zeroshot reasoners.pdf b/papers/large language models are zeroshot reasoners.pdf index 135a1caea89ec856cf493703d0334a786a7e474d..080e797556661171c9d2cb7c4ff828a3081ebb3b 100644 Binary files a/papers/large language models are zeroshot reasoners.pdf and b/papers/large language models are zeroshot reasoners.pdf differ diff --git a/papers/large language models as data preprocessors.pdf b/papers/large language models as data preprocessors.pdf index ac17f497067b42d9f57f0cf53a3a898753d3e4a3..d3c238c699750b25552c0715c2f3cd1f745f070a 100644 Binary files a/papers/large language models as data preprocessors.pdf and b/papers/large language models as data preprocessors.pdf differ diff --git a/papers/large language models as optimizers.pdf b/papers/large language models as optimizers.pdf index 2d35462ab97e1c8c345ea9396a01dc3acad0bc43..c23a64f1095abd6df128402eacc796e4dd945f79 100644 --- a/papers/large language models as optimizers.pdf +++ b/papers/large language models as optimizers.pdf @@ -1,3 +1,3 
@@ version https://git-lfs.github.com/spec/v1 -oid sha256:88c355954610d488a1c416eb94521e4f17ec87eca64fed7a03c58b9061897379 -size 5305063 +oid sha256:883bf4db6bb28241b28a678ec22f61058c80d6cb33ad8772d0a1f5f752074a41 +size 1592683 diff --git a/papers/large language models as sous chefs revising recipes with gpt3.pdf b/papers/large language models as sous chefs revising recipes with gpt3.pdf index 3dfb945f665f8e4ceede455facb78f19cdee1d89..83c78cc8d2f53da00bf2deba6e43746ac8a57199 100644 Binary files a/papers/large language models as sous chefs revising recipes with gpt3.pdf and b/papers/large language models as sous chefs revising recipes with gpt3.pdf differ diff --git a/papers/large language models as tax attorneys a case study in legal capabilities emergence.pdf b/papers/large language models as tax attorneys a case study in legal capabilities emergence.pdf index 9f300cf9b01975758ffbbc471265e94413c8c5cc..02645e8b3a4af787295ba16d9857a79456562e3f 100644 Binary files a/papers/large language models as tax attorneys a case study in legal capabilities emergence.pdf and b/papers/large language models as tax attorneys a case study in legal capabilities emergence.pdf differ diff --git a/papers/large language models can accomplish business process management tasks.pdf b/papers/large language models can accomplish business process management tasks.pdf index d4663f3ff684768d8537c6d91c91335897c56bc0..830c9a1813e69b3b299bcfd46d96853e7679a72c 100644 Binary files a/papers/large language models can accomplish business process management tasks.pdf and b/papers/large language models can accomplish business process management tasks.pdf differ diff --git a/papers/large language models can be lazy learners analyze shortcuts in incontext learning.pdf b/papers/large language models can be lazy learners analyze shortcuts in incontext learning.pdf index be6e62b711e9bc59eb97f8a3e3bc37049f5d22cd..cc2f80ed1dcd799e4e204a693170c8bd68fee5b8 100644 Binary files a/papers/large language models can be lazy learners analyze shortcuts in incontext learning.pdf and b/papers/large language models can be lazy learners analyze shortcuts in incontext learning.pdf differ diff --git a/papers/large language models can be used to effectively scale spear phishing campaigns.pdf b/papers/large language models can be used to effectively scale spear phishing campaigns.pdf deleted file mode 100644 index f3f0506746d391fea84d121fbe19b4a5aef1b363..0000000000000000000000000000000000000000 Binary files a/papers/large language models can be used to effectively scale spear phishing campaigns.pdf and /dev/null differ diff --git a/papers/large language models can implement policy iteration.pdf b/papers/large language models can implement policy iteration.pdf index e176628d6c4900d9e96ab120ab5e1c94ff26be40..844c333f2b0e038da4e97f5fa03ec4fb57456b98 100644 Binary files a/papers/large language models can implement policy iteration.pdf and b/papers/large language models can implement policy iteration.pdf differ diff --git a/papers/large language models for aspectbased sentiment analysis.pdf b/papers/large language models for aspectbased sentiment analysis.pdf index dab3a7e8f5c03455c6e32ac0d7b066f3a300fea2..98a9787caeb6388163247398a15351e830249118 100644 Binary files a/papers/large language models for aspectbased sentiment analysis.pdf and b/papers/large language models for aspectbased sentiment analysis.pdf differ diff --git a/papers/large language models for failure mode classification an investigation.pdf b/papers/large language models for failure mode 
classification an investigation.pdf index f737be95c9a5b4af2ac75228e3b1f211750054cf..968e1760a19c1d797cbff6401fa4a0b6c713433c 100644 Binary files a/papers/large language models for failure mode classification an investigation.pdf and b/papers/large language models for failure mode classification an investigation.pdf differ diff --git a/papers/large language models for intentdriven session recommendations.pdf b/papers/large language models for intentdriven session recommendations.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c5eb9a153e24a9cd7500c39b8621cc1a3477418f --- /dev/null +++ b/papers/large language models for intentdriven session recommendations.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa3afab83c0f6e2f2d7820da37ad867308ff39e36a35e6d3f38e10b57e989c90 +size 8223802 diff --git a/papers/large language models for networking applications, enabling techniques, and challenges.pdf b/papers/large language models for networking applications, enabling techniques, and challenges.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3800318f0fcd442f5d637af7863a87cdf593a54e --- /dev/null +++ b/papers/large language models for networking applications, enabling techniques, and challenges.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:846439ca6c08e941f29510fa5a59325e0b51cb8f5edabe47678c44e2698d6a0b +size 1479281 diff --git a/papers/large language models for propaganda detection.pdf b/papers/large language models for propaganda detection.pdf index b9faedfdcd094b250abe51059d1c1f960e48691b..657d114705fa17db6d069e088911f90536036139 100644 Binary files a/papers/large language models for propaganda detection.pdf and b/papers/large language models for propaganda detection.pdf differ diff --git a/papers/large language models for travel behavior prediction.pdf b/papers/large language models for travel behavior prediction.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dcdd23cc02f0b523163303a4b596e9b08de6d833 --- /dev/null +++ b/papers/large language models for travel behavior prediction.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5c701dabf53f92790cd7acaa8270f2da9c24477e02786bd80184676a824bb7f +size 2571020 diff --git a/papers/large language models help facilitate the automated synthesis of information on potential pest controllers.pdf b/papers/large language models help facilitate the automated synthesis of information on potential pest controllers.pdf new file mode 100644 index 0000000000000000000000000000000000000000..85b847c4b92247fd7c0e9f44132f9fb92c0b810d --- /dev/null +++ b/papers/large language models help facilitate the automated synthesis of information on potential pest controllers.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4a706c0c286dc178bb9b00a46f9b9ec6ff7e55b955c8f8d4fb9f71db5a6c32f +size 3720678 diff --git a/papers/large language models in fault localisation.pdf b/papers/large language models in fault localisation.pdf index 51cc42372f06cf0ded10530eb47dc39a0f0c7980..9e1d805e12370063380eeeac86f95651631d6287 100644 Binary files a/papers/large language models in fault localisation.pdf and b/papers/large language models in fault localisation.pdf differ diff --git a/papers/large language models in the workplace a case study on prompt engineering for job type classification.pdf b/papers/large language models in the workplace a case study on prompt engineering for job type classification.pdf index 
f355f33126ceaced8fe74bebba928c12a2cf1d4b..69d26fc7daa84654b10fca3f34c687988b512e2d 100644 Binary files a/papers/large language models in the workplace a case study on prompt engineering for job type classification.pdf and b/papers/large language models in the workplace a case study on prompt engineering for job type classification.pdf differ diff --git a/papers/large language models meet harry potter a dataset for aligning dialogue agents with characters.pdf b/papers/large language models meet harry potter a dataset for aligning dialogue agents with characters.pdf new file mode 100644 index 0000000000000000000000000000000000000000..276dc96baa3691e3b019b62bb5e14597db85ef34 --- /dev/null +++ b/papers/large language models meet harry potter a dataset for aligning dialogue agents with characters.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1ac4d4318f855380b468e796608d49e1d18bf82fb180601fd8d9ae581618b62 +size 974151 diff --git a/papers/large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf b/papers/large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf index 4f9544c4b86cb6b24fa6d9606289423b61f711bc..aac43efeabdeae25ce6af11d8e71b7ee9e6cacf2 100644 Binary files a/papers/large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf and b/papers/large language models meet openworld intent discovery and recognition an evaluation of chatgpt.pdf differ diff --git a/papers/large language models vote prompting for rare disease identification.pdf b/papers/large language models vote prompting for rare disease identification.pdf index 10c82d047d5e81ef5fce03530e3cfc66ab8f106e..f5461e02f2c9567d590fe3501fc3a96c4ffa6e97 100644 --- a/papers/large language models vote prompting for rare disease identification.pdf +++ b/papers/large language models vote prompting for rare disease identification.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:829d1c983051f1c0615c583f557ebde3e025ed81f0acd121f0990a7c47fb9388 -size 1258163 +oid sha256:6d6db579cd43e705d18da44d359788fcb275085e957b3bc5fe1277078b7b4e6f +size 1345331 diff --git a/papers/larger language models do incontext learning differently.pdf b/papers/larger language models do incontext learning differently.pdf index 298343a37b331e9aa43b9feff9816dee5576555f..e557327546a6dd319652712680373d483a8684bd 100644 Binary files a/papers/larger language models do incontext learning differently.pdf and b/papers/larger language models do incontext learning differently.pdf differ diff --git a/papers/latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf b/papers/latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf index bc3d8398a981b3db4c2a779f99a8f0abc5fa7e17..f1c4172831bde90453bcfc1905b851fe94da9d00 100644 Binary files a/papers/latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf and b/papers/latent jailbreak a benchmark for evaluating text safety and output robustness of large language models.pdf differ diff --git a/papers/learn to explore on bootstrapping interactive data exploration with metalearning.pdf b/papers/learn to explore on bootstrapping interactive data exploration with metalearning.pdf index bd4b66994be5e5368ef5c6d6779d089f29f7bfc5..ea6ded67626a9a50818acbf26d467f64b30639cc 100644 Binary files a/papers/learn to explore on 
bootstrapping interactive data exploration with metalearning.pdf and b/papers/learn to explore on bootstrapping interactive data exploration with metalearning.pdf differ diff --git a/papers/learning from taxonomy multilabel fewshot classification for everyday sound recognition.pdf b/papers/learning from taxonomy multilabel fewshot classification for everyday sound recognition.pdf index 54792579bff3ee81baa94be371f86040b8f35769..ab79fa7a4c4b59359cc8c7946ddae73e506591fc 100644 Binary files a/papers/learning from taxonomy multilabel fewshot classification for everyday sound recognition.pdf and b/papers/learning from taxonomy multilabel fewshot classification for everyday sound recognition.pdf differ diff --git a/papers/learning incontext learning for named entity recognition.pdf b/papers/learning incontext learning for named entity recognition.pdf index ffc135271bc44adface748a14a8d91138e93e2e4..23a6489105fe195e3916c20fe97b1145d794ece4 100644 Binary files a/papers/learning incontext learning for named entity recognition.pdf and b/papers/learning incontext learning for named entity recognition.pdf differ diff --git a/papers/learning interpretable queries for explainable image classification with information pursuit.pdf b/papers/learning interpretable queries for explainable image classification with information pursuit.pdf new file mode 100644 index 0000000000000000000000000000000000000000..02c9f4ee0af864d49b80057342ef5b8c448f68c9 --- /dev/null +++ b/papers/learning interpretable queries for explainable image classification with information pursuit.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7d088232d0b8cd0795ae0614ec7f9deb8e88202cfe209250af9d955c7cd005d +size 555746 diff --git a/papers/learning new tasks from a few examples with softlabel prototypes.pdf b/papers/learning new tasks from a few examples with softlabel prototypes.pdf index 91e98e5d733b9040a902265dd9e4b27128d3e2f4..f106f1b1a8c6f3a7a5610dab0bb582dd8a4c61d1 100644 Binary files a/papers/learning new tasks from a few examples with softlabel prototypes.pdf and b/papers/learning new tasks from a few examples with softlabel prototypes.pdf differ diff --git a/papers/learning performanceimproving code edits.pdf b/papers/learning performanceimproving code edits.pdf index dffc2a81442a0a0a78c7b83933960974e06802bd..2bb9401de49a46618361d24d979f0d16a44e011f 100644 Binary files a/papers/learning performanceimproving code edits.pdf and b/papers/learning performanceimproving code edits.pdf differ diff --git a/papers/learning to initialize can meta learning improve crosstask generalization in prompt tuning.pdf b/papers/learning to initialize can meta learning improve crosstask generalization in prompt tuning.pdf index d1689f53bc2abc8dd69733fe2f30827a2afb010b..680e24d50aab5f4e92a6d0cdb639ce6b0e3821ad 100644 --- a/papers/learning to initialize can meta learning improve crosstask generalization in prompt tuning.pdf +++ b/papers/learning to initialize can meta learning improve crosstask generalization in prompt tuning.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a7509728c66d746acdf2f653ae0ba75ca1c05efdc81f7075e79c58c78ed207f3 -size 1430530 +oid sha256:b9e7ae21cf0f178ef1624b40ad8161790217e91d412b9f9c1b11e68b844e6032 +size 1411578 diff --git a/papers/learning to retrieve incontext examples for large language models.pdf b/papers/learning to retrieve incontext examples for large language models.pdf index bd778b1fb7177d76f3f3585f137ffbe2caf437c3..d33a19d4522779fe1ba0003522b6d4de22b5eacc 100644 
Binary files a/papers/learning to retrieve incontext examples for large language models.pdf and b/papers/learning to retrieve incontext examples for large language models.pdf differ diff --git a/papers/legal prompt engineering for multilingual legal judgement prediction.pdf b/papers/legal prompt engineering for multilingual legal judgement prediction.pdf index 966d5dcdab6fc8c01e723788fc68633b045c3357..b42c27db7707ab20f98c25149deef4f955f2a3bd 100644 Binary files a/papers/legal prompt engineering for multilingual legal judgement prediction.pdf and b/papers/legal prompt engineering for multilingual legal judgement prediction.pdf differ diff --git a/papers/legal prompting teaching a language model to think like a lawyer.pdf b/papers/legal prompting teaching a language model to think like a lawyer.pdf index 92c7040d1f6fd2ecb96a7c4279dd0892deceeafe..c1b85ebe0d59e5c44a099d9b237f1dce678499b1 100644 Binary files a/papers/legal prompting teaching a language model to think like a lawyer.pdf and b/papers/legal prompting teaching a language model to think like a lawyer.pdf differ diff --git a/papers/legally enforceable hate speech detection for public forums.pdf b/papers/legally enforceable hate speech detection for public forums.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e1081819f5cbea42a057823fceb752481361e9d6 --- /dev/null +++ b/papers/legally enforceable hate speech detection for public forums.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa9a3f99c6f0570c6b1ce572355d96e3717f8398799b634c094e7ff981e42ef9 +size 757578 diff --git a/papers/let me check the examples enhancing demonstration learning via explicit imitation.pdf b/papers/let me check the examples enhancing demonstration learning via explicit imitation.pdf index 690106abf2475846468cc62a19f5dd97564871fe..5e0e6385ebff0896ef9e72edd439421e0f097a9d 100644 Binary files a/papers/let me check the examples enhancing demonstration learning via explicit imitation.pdf and b/papers/let me check the examples enhancing demonstration learning via explicit imitation.pdf differ diff --git a/papers/leveraging large language models for exploiting asr uncertainty.pdf b/papers/leveraging large language models for exploiting asr uncertainty.pdf index 8dbc3fc49d95872cf35266b985ae4d591be61970..e28176dc8aed4313281656a0bf3418409d4e551d 100644 Binary files a/papers/leveraging large language models for exploiting asr uncertainty.pdf and b/papers/leveraging large language models for exploiting asr uncertainty.pdf differ diff --git a/papers/leveraging large language models for mental health prediction via online text data.pdf b/papers/leveraging large language models for mental health prediction via online text data.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6d39c2623b403bfd77dea204eb4c9c00d9caf2ea --- /dev/null +++ b/papers/leveraging large language models for mental health prediction via online text data.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ee8de7c8fc95f1896983cb4706b6cdc8b570ffb8c2ed0e409a3fc746d1930b2 +size 3430180 diff --git a/papers/leveraging large language models to generate answer set programs.pdf b/papers/leveraging large language models to generate answer set programs.pdf index 469b54d15143ec8f59ce6d38b659179317d3953e..bf285b8d52c0b0f00ceb6d5ced8c2758ef58dfb0 100644 Binary files a/papers/leveraging large language models to generate answer set programs.pdf and b/papers/leveraging large language models to generate answer set 
programs.pdf differ diff --git a/papers/leveraging normalization layer in adapters with progressive learning and adaptive distillation for crossdomain fewshot learning.pdf b/papers/leveraging normalization layer in adapters with progressive learning and adaptive distillation for crossdomain fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..db956143215295df35a41914830e000628df6831 --- /dev/null +++ b/papers/leveraging normalization layer in adapters with progressive learning and adaptive distillation for crossdomain fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ea00afb2b1e2de62a91712b3ef6242785fb48cc7f7e492423f2d38394d78d1a +size 9551840 diff --git a/papers/leveraging pretrained language models for conversational information seeking from text.pdf b/papers/leveraging pretrained language models for conversational information seeking from text.pdf index 6d31d4a11fc7058b42baebe575b51ca9b9952e47..b2d4bd944cf39fe0511836f824ea58e729bb9f4c 100644 Binary files a/papers/leveraging pretrained language models for conversational information seeking from text.pdf and b/papers/leveraging pretrained language models for conversational information seeking from text.pdf differ diff --git a/papers/leveraging training data in fewshot prompting for numerical reasoning.pdf b/papers/leveraging training data in fewshot prompting for numerical reasoning.pdf index f2949e77c1abdca1b6ed44d5741a85186fcf3cee..6405b55f09e25dac1531f6a065a18d294fb4ca96 100644 Binary files a/papers/leveraging training data in fewshot prompting for numerical reasoning.pdf and b/papers/leveraging training data in fewshot prompting for numerical reasoning.pdf differ diff --git a/papers/lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf b/papers/lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf index 068a6b3027a94e3e7a43adb78074885e83b7dbcd..8dbb073aba82f99e22f6e3d071a03fd58c701edc 100644 Binary files a/papers/lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf and b/papers/lfpt5 a unified framework for lifelong fewshot language learning based on prompt tuning of t5.pdf differ diff --git a/papers/limits of an ai program for solving college math problems.pdf b/papers/limits of an ai program for solving college math problems.pdf index 14de93a23c1760ae56eb4501c1ce038776b70f2d..c2039e5102a8983ab8f371c24814803e1adac428 100644 Binary files a/papers/limits of an ai program for solving college math problems.pdf and b/papers/limits of an ai program for solving college math problems.pdf differ diff --git a/papers/linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf b/papers/linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf index abce280dcf1107e1cf5fe6877d9362bf99c4da92..5d24ec79209ea045c3af164c3d734e14e7375c7c 100644 Binary files a/papers/linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf and b/papers/linguist language model instruction tuning to generate annotated utterances for intent classification and slot tagging.pdf differ diff --git a/papers/linking microblogging sentiments to stock price movement an application of gpt4.pdf b/papers/linking microblogging sentiments to stock price movement an application of 
gpt4.pdf index a16318ddfb1c371c2c55c8bcdd75d43c90853d26..7313289f3579a2ff852cd1f4012e013db16ad7a4 100644 Binary files a/papers/linking microblogging sentiments to stock price movement an application of gpt4.pdf and b/papers/linking microblogging sentiments to stock price movement an application of gpt4.pdf differ diff --git a/papers/list lite prompted selftraining makes parameterefficient fewshot learners.pdf b/papers/list lite prompted selftraining makes parameterefficient fewshot learners.pdf index 9ea6775de130599ec2b7cd66b2e01f812b4b7717..42f198b64c3b4506a8b0bf42422d215b21e07dc7 100644 Binary files a/papers/list lite prompted selftraining makes parameterefficient fewshot learners.pdf and b/papers/list lite prompted selftraining makes parameterefficient fewshot learners.pdf differ diff --git a/papers/little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task.pdf b/papers/little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task.pdf index 4d1882a226ffd7b96f31ce80e025546bf48d5ff5..387328bfe85652135abf23167991394c301824dc 100644 Binary files a/papers/little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task.pdf and b/papers/little giants exploring the potential of small llms as evaluation metrics in summarization in the eval4nlp 2023 shared task.pdf differ diff --git a/papers/llamarec twostage recommendation using large language models for ranking.pdf b/papers/llamarec twostage recommendation using large language models for ranking.pdf index 26f30d4fdd9c3cbdfb956fb92b443c51f8a2a275..9a219dc7f7043c4b0ae6a8eac9d4e0f4f8d8439f 100644 Binary files a/papers/llamarec twostage recommendation using large language models for ranking.pdf and b/papers/llamarec twostage recommendation using large language models for ranking.pdf differ diff --git a/papers/llm powered simtoreal transfer for traffic signal control.pdf b/papers/llm powered simtoreal transfer for traffic signal control.pdf deleted file mode 100644 index 214245d17f07995f75cc0255230705248bad333b..0000000000000000000000000000000000000000 --- a/papers/llm powered simtoreal transfer for traffic signal control.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f3defff28e65a31e3014241bcd5f040ffefafe2530744e14be437ddb59b4741a -size 3277333 diff --git a/papers/llm self defense by self examination, llms know they are being tricked.pdf b/papers/llm self defense by self examination, llms know they are being tricked.pdf index 8ed14c03cda513bd6e6382e57a71f6fc21451deb..45ae67bda87c57b644530bf7af51387828a1cf2c 100644 Binary files a/papers/llm self defense by self examination, llms know they are being tricked.pdf and b/papers/llm self defense by self examination, llms know they are being tricked.pdf differ diff --git a/papers/llm4dv using large language models for hardware test stimuli generation.pdf b/papers/llm4dv using large language models for hardware test stimuli generation.pdf index 38da3b957e6e856adac20dbb61e758c88d4df31a..c74fef69d3028db54b0aebd44b58411b3aecc9e7 100644 Binary files a/papers/llm4dv using large language models for hardware test stimuli generation.pdf and b/papers/llm4dv using large language models for hardware test stimuli generation.pdf differ diff --git a/papers/llm4dyg can large language models solve problems on dynamic graphs.pdf b/papers/llm4dyg can large language models solve problems on dynamic 
graphs.pdf index 047805a0da2bba33ddfe95da811a48c6a6e361a9..d3a134e6a8bd9220d0761dea5f1d43bbc9e9d43a 100644 Binary files a/papers/llm4dyg can large language models solve problems on dynamic graphs.pdf and b/papers/llm4dyg can large language models solve problems on dynamic graphs.pdf differ diff --git a/papers/llm4plc harnessing large language models for verifiable programming of plcs in industrial control systems.pdf b/papers/llm4plc harnessing large language models for verifiable programming of plcs in industrial control systems.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b07e6c629d9da06a8df6d6f49edfa6f884912512 --- /dev/null +++ b/papers/llm4plc harnessing large language models for verifiable programming of plcs in industrial control systems.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a347841ae65d5a677b77eea0a6ac7e4e3232c526e82968d572671c533fe7ffb6 +size 1845421 diff --git a/papers/llm4sgg large language model for weakly supervised scene graph generation.pdf b/papers/llm4sgg large language model for weakly supervised scene graph generation.pdf index 1cda6f066f4db5a2a720a2825ce4f8a3c109e52e..16345ce49b2cb9a27c4dd2245e18539276ed79bf 100644 --- a/papers/llm4sgg large language model for weakly supervised scene graph generation.pdf +++ b/papers/llm4sgg large language model for weakly supervised scene graph generation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcaec437cb147f73a15d83d3f22e55f7544f9d023347beec7ae84f0ff553fba2 -size 3909992 +oid sha256:32ade71346dd479d9773380b81a383b2d80ba0f4135c8afd45293188ccbfa641 +size 3725571 diff --git a/papers/llm4vv developing llmdriven testsuite for compiler validation.pdf b/papers/llm4vv developing llmdriven testsuite for compiler validation.pdf index c1d51c7a3bbda85d5ba72202d5bfc45758b8da89..a561c9edf50760b55ed6850a71c1e369d64f2161 100644 Binary files a/papers/llm4vv developing llmdriven testsuite for compiler validation.pdf and b/papers/llm4vv developing llmdriven testsuite for compiler validation.pdf differ diff --git a/papers/llmaugmented preference learning from natural language.pdf b/papers/llmaugmented preference learning from natural language.pdf index ec7cd3007ad2ec3edefde0bbe72b5539c335b10c..63c8e7aa2255f6b274c9cfb57e3657daf201cd31 100644 Binary files a/papers/llmaugmented preference learning from natural language.pdf and b/papers/llmaugmented preference learning from natural language.pdf differ diff --git a/papers/llmebench a flexible framework for accelerating llms benchmarking.pdf b/papers/llmebench a flexible framework for accelerating llms benchmarking.pdf index 99a4e02f041ec934db09dfd81a3bb6a282732d2a..7aa9e99cb4f4e87f1cdf34d543a2d9e97b0602c5 100644 Binary files a/papers/llmebench a flexible framework for accelerating llms benchmarking.pdf and b/papers/llmebench a flexible framework for accelerating llms benchmarking.pdf differ diff --git a/papers/llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf b/papers/llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf index 9f47f5053b51f1325cb8863e1fddfd42d3253570..ae114233ce2255d9ae4b1bb8c7906924700a66e0 100644 Binary files a/papers/llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf and b/papers/llmfuncmapper function identification for interpreting complex clauses in building codes via llm.pdf differ diff --git a/papers/llmintheloop leveraging large language 
model for thematic analysis.pdf b/papers/llmintheloop leveraging large language model for thematic analysis.pdf index 26b15732f47a8f8e8f6891a482a863ece58cdf2b..ae256aac4ef5880389cd9aae012849328dad1f1d 100644 Binary files a/papers/llmintheloop leveraging large language model for thematic analysis.pdf and b/papers/llmintheloop leveraging large language model for thematic analysis.pdf differ diff --git a/papers/llmlingua compressing prompts for accelerated inference of large language models.pdf b/papers/llmlingua compressing prompts for accelerated inference of large language models.pdf index 58afc1982b5c6bf3374d1c7f3e8aac1e976a0a3d..5063ddd6444fe91b24fe41d6a245d04bfb86bdf8 100644 Binary files a/papers/llmlingua compressing prompts for accelerated inference of large language models.pdf and b/papers/llmlingua compressing prompts for accelerated inference of large language models.pdf differ diff --git a/papers/llms for robotic object disambiguation.pdf b/papers/llms for robotic object disambiguation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..17fdbef1368256356587d350a447eaf2570f2dff --- /dev/null +++ b/papers/llms for robotic object disambiguation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0e7fb0848ac87526d5f8432d6293eeaa8ad1ccf90a17c8d8dde09c53b43b593 +size 5078027 diff --git a/papers/lmcanvas objectoriented interaction to personalize large language modelpowered writing environments.pdf b/papers/lmcanvas objectoriented interaction to personalize large language modelpowered writing environments.pdf index 4007baaa219e3539635858a95f9faf192a3d51c8..1447281fa0768271f870eb1e6fa429b87e095ec0 100644 Binary files a/papers/lmcanvas objectoriented interaction to personalize large language modelpowered writing environments.pdf and b/papers/lmcanvas objectoriented interaction to personalize large language modelpowered writing environments.pdf differ diff --git a/papers/localized latent updates for finetuning visionlanguage models.pdf b/papers/localized latent updates for finetuning visionlanguage models.pdf index 08c773c3dffd28b9ca9b57151e385649bb91f980..f065541de085f8aba380aa9ac3e520d8a1b459b9 100644 Binary files a/papers/localized latent updates for finetuning visionlanguage models.pdf and b/papers/localized latent updates for finetuning visionlanguage models.pdf differ diff --git a/papers/localized symbolic knowledge distillation for visual commonsense models.pdf b/papers/localized symbolic knowledge distillation for visual commonsense models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d8740f1401a74db24f7ed4d1c9ea1bf71b6d1b10 --- /dev/null +++ b/papers/localized symbolic knowledge distillation for visual commonsense models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fa89a63eb741a738550c431cccaee4b97a495478aa81437527ba4d1f61214e9 +size 16845854 diff --git a/papers/localizing lying in llama understanding instructed dishonesty on truefalse questions through prompting, probing, and patching.pdf b/papers/localizing lying in llama understanding instructed dishonesty on truefalse questions through prompting, probing, and patching.pdf new file mode 100644 index 0000000000000000000000000000000000000000..70d5acc15f4c3e5704002bcbdfd148efdc0abd0b --- /dev/null +++ b/papers/localizing lying in llama understanding instructed dishonesty on truefalse questions through prompting, probing, and patching.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:71fd97f6a7ed0ecc34ab05645efa908929d9446cd93108bdce6e7aa29672b902 +size 3210526 diff --git a/papers/locally differentially private document generation using zero shot prompting.pdf b/papers/locally differentially private document generation using zero shot prompting.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7e8326781b7f5a03631d63b5800bf3b0300027ef --- /dev/null +++ b/papers/locally differentially private document generation using zero shot prompting.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1ddaee3bf95beac65b5520763f24b917ea19ec70fec0a30ee26ccb1f07c070a +size 1483197 diff --git a/papers/log parsing with promptbased fewshot learning.pdf b/papers/log parsing with promptbased fewshot learning.pdf index 933711dbb0667e60e4219d35ccb3061402c1c162..5203fd0bdafcb07b1d7be29a8954198b121ffa6b 100644 Binary files a/papers/log parsing with promptbased fewshot learning.pdf and b/papers/log parsing with promptbased fewshot learning.pdf differ diff --git a/papers/logicllm exploring selfsupervised logicenhanced training for large language models.pdf b/papers/logicllm exploring selfsupervised logicenhanced training for large language models.pdf index 7128641072eefc91c0ab266a9e3d9c61aeb07aad..a7a8f59a019e4a5ef5e64806619ed621a95027d4 100644 Binary files a/papers/logicllm exploring selfsupervised logicenhanced training for large language models.pdf and b/papers/logicllm exploring selfsupervised logicenhanced training for large language models.pdf differ diff --git a/papers/logprompt prompt engineering towards zeroshot and interpretable log analysis.pdf b/papers/logprompt prompt engineering towards zeroshot and interpretable log analysis.pdf deleted file mode 100644 index 623cb4db9c5ae09db4376b1b44d17e1e897527f5..0000000000000000000000000000000000000000 --- a/papers/logprompt prompt engineering towards zeroshot and interpretable log analysis.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b7b79b95997608ca0823f60c18997a8cec0e7ceef3127511ab362e788e8c92f1 -size 8662797 diff --git a/papers/loke linked open knowledge extraction for automated knowledge graph construction.pdf b/papers/loke linked open knowledge extraction for automated knowledge graph construction.pdf index f60fa59ca62c3366819750b1997bfea9ec31df84..bf9b077bb783319d6a06df553634e944fff16a72 100644 Binary files a/papers/loke linked open knowledge extraction for automated knowledge graph construction.pdf and b/papers/loke linked open knowledge extraction for automated knowledge graph construction.pdf differ diff --git a/papers/look before you leap a universal emergent decomposition of retrieval tasks in language models.pdf b/papers/look before you leap a universal emergent decomposition of retrieval tasks in language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d7a9910c395529135d3090cb28066a4c09cba6e8 --- /dev/null +++ b/papers/look before you leap a universal emergent decomposition of retrieval tasks in language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bcdcdc559152dc3cf69a6916783e1e0de0db29c4c66b31461aa7a491c3119e4 +size 5403095 diff --git a/papers/looking for a handsome carpenter! debiasing gpt3 job advertisements.pdf b/papers/looking for a handsome carpenter! debiasing gpt3 job advertisements.pdf index c4605a480558f290959e131618271600903c4d0c..34dd0c53dd7acc7bcdbccdb8c14d0c2b436e3452 100644 Binary files a/papers/looking for a handsome carpenter! 
debiasing gpt3 job advertisements.pdf and b/papers/looking for a handsome carpenter! debiasing gpt3 job advertisements.pdf differ diff --git a/papers/lorahub efficient crosstask generalization via dynamic lora composition.pdf b/papers/lorahub efficient crosstask generalization via dynamic lora composition.pdf index e689d873885b697ebd365554c910db202a8656ad..0df99e1163d1295a1b1efe66c8356d8922263b6e 100644 Binary files a/papers/lorahub efficient crosstask generalization via dynamic lora composition.pdf and b/papers/lorahub efficient crosstask generalization via dynamic lora composition.pdf differ diff --git a/papers/low resource pipeline for spoken language understanding via weak supervision.pdf b/papers/low resource pipeline for spoken language understanding via weak supervision.pdf index cd4867a41ebc46b47cba3c0fe6c6c144ed262c29..d4d015cb01848ba5c104c949b03f2d2dec8f6e10 100644 Binary files a/papers/low resource pipeline for spoken language understanding via weak supervision.pdf and b/papers/low resource pipeline for spoken language understanding via weak supervision.pdf differ diff --git a/papers/lowresource authorship style transfer can nonfamous authors be imitated.pdf b/papers/lowresource authorship style transfer can nonfamous authors be imitated.pdf index 6c60161e9780e5d202c0596662cd994db5df06f9..f1efb5f9aebc4e4d05ea1f85bd1d9d6d47e8abe6 100644 Binary files a/papers/lowresource authorship style transfer can nonfamous authors be imitated.pdf and b/papers/lowresource authorship style transfer can nonfamous authors be imitated.pdf differ diff --git a/papers/lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf b/papers/lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf index 003b2076afdc591319f743623241a1e5d2e2a31f..bdf2fee64ae9735e7dd09747cc7950b0cda50df8 100644 Binary files a/papers/lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf and b/papers/lowresource multigranularity academic function recognition based on multiple prompt knowledge.pdf differ diff --git a/papers/lpml llmprompting markup language for mathematical reasoning.pdf b/papers/lpml llmprompting markup language for mathematical reasoning.pdf index 7f6b1b4e2803ff2167c27263c764e13d7d957283..c0970245211e4f456ec4d49aad8c73143f2b6bb6 100644 Binary files a/papers/lpml llmprompting markup language for mathematical reasoning.pdf and b/papers/lpml llmprompting markup language for mathematical reasoning.pdf differ diff --git a/papers/maatphor automated variant analysis for prompt injection attacks.pdf b/papers/maatphor automated variant analysis for prompt injection attacks.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b6cef91c07ad75ada0733271002ee667923e25dc --- /dev/null +++ b/papers/maatphor automated variant analysis for prompt injection attacks.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2898dc3cf735395a1e8af623f84859fc559e41b1ea75a67550fc86a8fbfa6c2 +size 784406 diff --git a/papers/machine translation of folktales smalldatadriven and llmbased approaches.pdf b/papers/machine translation of folktales smalldatadriven and llmbased approaches.pdf new file mode 100644 index 0000000000000000000000000000000000000000..67db1550589720f6783fba83ff16b372078babde --- /dev/null +++ b/papers/machine translation of folktales smalldatadriven and llmbased approaches.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:910be4587cf01f94b7cb3e1fb32dffa307bd6e5ec0a124988aa569f724c1bf96 +size 512233 diff --git a/papers/majority rule better patching via selfconsistency.pdf b/papers/majority rule better patching via selfconsistency.pdf new file mode 100644 index 0000000000000000000000000000000000000000..24314b5c7dbfe5cceb17c05edc8d22854dde6557 --- /dev/null +++ b/papers/majority rule better patching via selfconsistency.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:152e3578cde88f5d26740f0880358d2f1d5180301abdeb82a9f59963e0662605 +size 358832 diff --git a/papers/make a choice! knowledge base question answering with incontext learning.pdf b/papers/make a choice! knowledge base question answering with incontext learning.pdf index ae75d2ad9b9eb94e82c98a12b6916f5339db94a6..c297649d67f0088ed9a53a4c73ab5b720373ce00 100644 Binary files a/papers/make a choice! knowledge base question answering with incontext learning.pdf and b/papers/make a choice! knowledge base question answering with incontext learning.pdf differ diff --git a/papers/making language models better reasoners with stepaware verifier.pdf b/papers/making language models better reasoners with stepaware verifier.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4b1c10a147701002b9435dc57dd1df2eb24d7abd --- /dev/null +++ b/papers/making language models better reasoners with stepaware verifier.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14b6f3eb9ada0dd895990f41eb2c8291ddb9f85f322df984cae9462c636474f4 +size 603142 diff --git a/papers/making language models better tool learners with execution feedback.pdf b/papers/making language models better tool learners with execution feedback.pdf index f86c12ad6231c5f4f7a3f65e1efaa98a1bbe76d1..cda2b440497d283dbc264e9fc47ef9ab717696b4 100644 Binary files a/papers/making language models better tool learners with execution feedback.pdf and b/papers/making language models better tool learners with execution feedback.pdf differ diff --git a/papers/making large language models better data creators.pdf b/papers/making large language models better data creators.pdf index 23b0855d9e02e993ada48e6baaddafaf18234b1c..112d8b2b93f48fec88c4ca0814a0c2e5bc5ff8c9 100644 Binary files a/papers/making large language models better data creators.pdf and b/papers/making large language models better data creators.pdf differ diff --git a/papers/making large language models better knowledge miners for online marketing with progressive prompting augmentation.pdf b/papers/making large language models better knowledge miners for online marketing with progressive prompting augmentation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1d2b72611ae64a78c4f4c7b52d6b4acb2e0c633f --- /dev/null +++ b/papers/making large language models better knowledge miners for online marketing with progressive prompting augmentation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dc82f3e2b0e6d2b55c191e874130576d0e5d0587a23913ba9fd0d5119bb4b9e +size 9246027 diff --git a/papers/making large language models better reasoners with stepaware verifier.pdf b/papers/making large language models better reasoners with stepaware verifier.pdf index 91f2decfdf9ab01ce0052ec01b6ce45dbc00cdca..0866fbb256f8de99c9bfeac1d958753003669441 100644 Binary files a/papers/making large language models better reasoners with stepaware verifier.pdf and b/papers/making large language models better reasoners with stepaware verifier.pdf differ diff --git 
a/papers/malla demystifying realworld large language model integrated malicious services.pdf b/papers/malla demystifying realworld large language model integrated malicious services.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3660c3a94b573d2711689639f4c130743fbd08a9 --- /dev/null +++ b/papers/malla demystifying realworld large language model integrated malicious services.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed01af7d4dc6b4cdd1c6276c5bf60c01274725a01c4b168776bce1c769566d63 +size 732036 diff --git a/papers/map lowdata regime multimodal learning with adapterbased pretraining and prompting.pdf b/papers/map lowdata regime multimodal learning with adapterbased pretraining and prompting.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c1d4a326334ee25e2e47d27cccab68ae80067e42 --- /dev/null +++ b/papers/map lowdata regime multimodal learning with adapterbased pretraining and prompting.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4855ce7664780bb5e7663938c0c2e6bde0eb340eea8fe31f62fff2b31392fd8 +size 1243822 diff --git a/papers/mapo boosting large language model performance with modeladaptive prompt optimization.pdf b/papers/mapo boosting large language model performance with modeladaptive prompt optimization.pdf new file mode 100644 index 0000000000000000000000000000000000000000..80f8a9da76ae8fc5835a03bcd0d50c0f721a3637 --- /dev/null +++ b/papers/mapo boosting large language model performance with modeladaptive prompt optimization.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a702520c3253126e900a740eb04103f323bb7d45f34975b4c1bc111b9303ac1f +size 718208 diff --git a/papers/mapping underwater aquatic vegetation using foundation models with air and spaceborne images the case of polyphytos lake.pdf b/papers/mapping underwater aquatic vegetation using foundation models with air and spaceborne images the case of polyphytos lake.pdf new file mode 100644 index 0000000000000000000000000000000000000000..20a9875abd54da71cd957f522a36897431212573 --- /dev/null +++ b/papers/mapping underwater aquatic vegetation using foundation models with air and spaceborne images the case of polyphytos lake.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50460540aae8c7db32a23431c45d0707125e4601a0f26f0dbfcec5892053c2eb +size 23861299 diff --git a/papers/marked personas using natural language prompts to measure stereotypes in language models.pdf b/papers/marked personas using natural language prompts to measure stereotypes in language models.pdf index 532a9365c11d1a652acfed031be1a209196a81b7..3d7db743933d6d068df3bae6e423ee84a83a4991 100644 Binary files a/papers/marked personas using natural language prompts to measure stereotypes in language models.pdf and b/papers/marked personas using natural language prompts to measure stereotypes in language models.pdf differ diff --git a/papers/masakhanews news topic classification for african languages.pdf b/papers/masakhanews news topic classification for african languages.pdf index 72c6aa1c32fe3f0c7334c242106faa5693139eb3..29f5ee24b58780d8bb1f68135334277568084282 100644 Binary files a/papers/masakhanews news topic classification for african languages.pdf and b/papers/masakhanews news topic classification for african languages.pdf differ diff --git a/papers/mastering the task of open information extraction with large language models and consistent reasoning environment.pdf b/papers/mastering the task of open 
information extraction with large language models and consistent reasoning environment.pdf index 72f9bcb33d7bb4a72db7ddd6874ba046f9db57f4..c36c2d559f7ed4c0e456a7a87e7cff8b0e436370 100644 Binary files a/papers/mastering the task of open information extraction with large language models and consistent reasoning environment.pdf and b/papers/mastering the task of open information extraction with large language models and consistent reasoning environment.pdf differ diff --git a/papers/masterkey automated jailbreak across multiple large language model chatbots.pdf b/papers/masterkey automated jailbreak across multiple large language model chatbots.pdf index 9a7bf46405b5b234f1e70845df294031d43fb164..884b7fb29001adee9e8ef99abf2ad582681241d7 100644 Binary files a/papers/masterkey automated jailbreak across multiple large language model chatbots.pdf and b/papers/masterkey automated jailbreak across multiple large language model chatbots.pdf differ diff --git a/papers/mathprompter mathematical reasoning using large language models.pdf b/papers/mathprompter mathematical reasoning using large language models.pdf index 1546edcff222cffd404222180177ac0077e428cc..60ba4c2406f65ff67756d4bd214cdcc56be39f90 100644 Binary files a/papers/mathprompter mathematical reasoning using large language models.pdf and b/papers/mathprompter mathematical reasoning using large language models.pdf differ diff --git a/papers/mdc at biolaysumm task 1 evaluating gpt models for biomedical lay summarization.pdf b/papers/mdc at biolaysumm task 1 evaluating gpt models for biomedical lay summarization.pdf new file mode 100644 index 0000000000000000000000000000000000000000..05d8b69cdcc131a2997e6f02e067838f7f900cd6 --- /dev/null +++ b/papers/mdc at biolaysumm task 1 evaluating gpt models for biomedical lay summarization.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf00d335cd4ac72e97c49245799dfafd4157e2bb9b3dcf4a491d801565b1d9bf +size 196344 diff --git a/papers/meal stable and active learning for fewshot prompting.pdf b/papers/meal stable and active learning for fewshot prompting.pdf index be18be4236afbddb5c3b467549ff367c92a58ecb..4b247d3053ea3a7cc9d361d92c7347d65e14eb33 100644 Binary files a/papers/meal stable and active learning for fewshot prompting.pdf and b/papers/meal stable and active learning for fewshot prompting.pdf differ diff --git a/papers/measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf b/papers/measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf index da7f9c649943466bb5d5b3023420171685723c57..99e9321b10e9d849c1c39aa48da229825a528eff 100644 Binary files a/papers/measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf and b/papers/measuring and mitigating constraint violations of incontext learning for utterancetoapi semantic parsing.pdf differ diff --git a/papers/measuring inductive biases of incontext learning with underspecified demonstrations.pdf b/papers/measuring inductive biases of incontext learning with underspecified demonstrations.pdf index 395cbc7b00cf84a3aef13455d3e4ae3810ce0fc5..1920a75fb6ec0ca8c32f7f19f1222c7b63052d5f 100644 Binary files a/papers/measuring inductive biases of incontext learning with underspecified demonstrations.pdf and b/papers/measuring inductive biases of incontext learning with underspecified demonstrations.pdf differ diff --git a/papers/measuring pointwise $mathcal{v}$usable information 
incontextly.pdf b/papers/measuring pointwise $mathcal{v}$usable information incontextly.pdf index b8cd39281724602835a7c2b5996b36a3852c96ba..bf9062ff73fb44530685e24a13f4479c1f67a882 100644 --- a/papers/measuring pointwise $mathcal{v}$usable information incontextly.pdf +++ b/papers/measuring pointwise $mathcal{v}$usable information incontextly.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7ae0925a05b8f17c06ed6e01ae1532305173e2c036dd65d4098ad4fe57b2ee1a -size 1499123 +oid sha256:f441872e3b241ee78fe66c1a2b6138a9609567a3e0cba31aed1537ea27b98d38 +size 1499127 diff --git a/papers/measuring the robustness of natural language processing models to domain shifts.pdf b/papers/measuring the robustness of natural language processing models to domain shifts.pdf deleted file mode 100644 index 2fbaf4de509ac5bfe1c407f91bd8a935ad1f1bc0..0000000000000000000000000000000000000000 --- a/papers/measuring the robustness of natural language processing models to domain shifts.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:72c4f5e820985c8c0c18a3589f909dcb19ded65006fa9b8358ea9ab1730fe30f -size 1107960 diff --git a/papers/mededit model editing for medical question answering with external knowledge bases.pdf b/papers/mededit model editing for medical question answering with external knowledge bases.pdf index e3a2bfb248598e9db72883bafd478386dcbf6bc9..b805afec1cee22898d25265dc47dcf0c88e70b26 100644 Binary files a/papers/mededit model editing for medical question answering with external knowledge bases.pdf and b/papers/mededit model editing for medical question answering with external knowledge bases.pdf differ diff --git a/papers/megatts 2 zeroshot texttospeech with arbitrary length speech prompts.pdf b/papers/megatts 2 zeroshot texttospeech with arbitrary length speech prompts.pdf index 094e0fd97cebe1fe3459788fb004866af2376553..6f4169b69f3d1d15737d0cad330cc7c4a5db9aa9 100644 Binary files a/papers/megatts 2 zeroshot texttospeech with arbitrary length speech prompts.pdf and b/papers/megatts 2 zeroshot texttospeech with arbitrary length speech prompts.pdf differ diff --git a/papers/memobert pretraining model with promptbased learning for multimodal emotion recognition.pdf b/papers/memobert pretraining model with promptbased learning for multimodal emotion recognition.pdf index f52c062bc30237e93d50dc5abb82e4cf51f0e034..78eb8e02d5cf425a4925cc5a317d5c6f4e54cdd5 100644 Binary files a/papers/memobert pretraining model with promptbased learning for multimodal emotion recognition.pdf and b/papers/memobert pretraining model with promptbased learning for multimodal emotion recognition.pdf differ diff --git a/papers/memorycompanion a smart healthcare solution to empower efficient alzheimer's care via unleashing generative ai.pdf b/papers/memorycompanion a smart healthcare solution to empower efficient alzheimer's care via unleashing generative ai.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d87989d8d50f05318fc31d8408812a4839ef0325 --- /dev/null +++ b/papers/memorycompanion a smart healthcare solution to empower efficient alzheimer's care via unleashing generative ai.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:783ad6d7e74e778dd0a8158aeb91bd40d0d1bbd8926731db3a31de8b74630835 +size 5293110 diff --git a/papers/memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf b/papers/memoryefficient finetuning of compressed large language models via sub4bit integer 
quantization.pdf index 10b63d28559c50bfce3162f448a2d9c1d99d80c4..2116fbab1464a943cda4f4c7790dc4b7df37622b 100644 Binary files a/papers/memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf and b/papers/memoryefficient finetuning of compressed large language models via sub4bit integer quantization.pdf differ diff --git a/papers/menucraft interactive menu system design with large language models.pdf b/papers/menucraft interactive menu system design with large language models.pdf index 25631d20d4d2a59d44447c9384f9700ead52093c..6892b7bf322ffeffb9d83244b77a0470e338ddac 100644 --- a/papers/menucraft interactive menu system design with large language models.pdf +++ b/papers/menucraft interactive menu system design with large language models.pdf @@ -1,2891 +1,3 @@
*[` C&Xg#փ 09c-I>FPIGƛ稼 Q - `$bab#XD26x+,[|Ģ"?|OX~/oh ei!$Td7D+-.ԳL֖Of$;+mY_YEܝY|ZR{A\?pwZ\n{t\,nnER/nzcd[3#9)_,@gvۃUFBUga˅r2BTRvhUTiJ+iJ+MT.X@,Eiy<WW],wW,Xs{4 YAau 4I~OIG:~/xUwhulݾ[޵o}w}O6 ɮҡlⲖutP\ kٲ1E'f lzƦWzZXD|יg刴#î8 f=!b!}kcrҪ]QN}&޶ -`u|tX.8hl)1ę0YcL>k`yC+5H5duΡ6y|N'ԩ:Pg*4PXJ,VbXb%+8$O[R ]n@l -%qM$ ,,"uO$PVs -:GvZ7>RzB|Qw9qf %5ʳ}6~pnKhQ0]X;ftj Еsgvz?!_2B $+Y` F9GH> 66+@&+q5،hnرf;[a09LKm8FN͆ -v!QIjN -!ygB#h#ҳ?;6JC[ -$1=q<)'yEu9DQKCءn>*f(8\Jn<ce~,yCY4rwq!2$r%?f!4bHfEbۭR$l^_S*j9;J)ٹQ%D TlIF0$j;n1Ssbodeĥ HFp[#mN!B>"r"VǐgSVn8/뤜XsI2Eq<H&ǘBMu'u&(sb"@!4vX7WS,9:[U F0ࢤ5fC9ArJHtևIFt ,SN3 qZ} *R; <1,鈪/J3: ,l+j3f|%Ot;kqp> -stream -xڭZY6~Γj-ؓ\lR‘0)ln4@Jc'*qh4>6[3qE,p,-~}f+vij,P -=׷Ͼk6|x@ݙ -@}ϝOc q$ 0N(dDY$л&///Brpd<**mp3;c\l.7p<^6⤕؂ -Xr wu{ %\q$Z BȀGĨ.16veeMy2!:R T6 ’UfY֔v{XMx*"6X8lր<"&I-QJzQQ6=jIHq@0JD TL%Ο,֜[[2+I htܺh*B8AAzgo%,&B@kW&7OZ44!Bp&;*O]g,Gb9ETʊ]R8%uA@<ח\[S VHe1) -@ 'AX0 -b=4%!c QM*Q#4 - $!`'aus||GCc*kNg<6j8!VTA ƠV 8>gea~S !T_˵q^D,#3'Ƅ@:`oA3J<sDQ_+)^Y2Iu7D=ne@ `|%,o3V 6)xҶD^l *9:X3rLǏ9bAZ{ Ό,ܗy^-PQڦ)q8ؾ. 'j&-c !}dG1 1Lt<1Lqh2{8IpBrJFgqIP|$|#Xo 'U95Lā䎳j`po~c2G"nʇMͣ~H X1Gq+<(;:EQԉ!J6F#Cvpb" @QF8DzB9#oWFrsA]50׋h\3EWy;PE?T+🺟#k._u(M퇣hqbUvޚ^e{e;`' -PβsmCeoy_4.*pn*LrL3 o6Uٮ7ݏdMAh;\DK&h-HsU{9(~gC!]`)l y(>9hKY/kc\ -䥖0I_zvkX0B}+b4lT]nܥ8V4K+-9PHw0?K!U [& -#ruM U]#z3/7O.e߿ v[/'I '[Gw\L_ѕHUS8=By*GP;cnAA\e -vC m<h -ME&T V&A+Cp!"tqJHpؼ>aa, O~m&z3]O -Cl.2,̧vymԲ@+2RSy#hi4|zmr?YeV)K0OnUY<>{!DnYe*J Dl5o{ޚmӴ\zO[?}Un<²*I.,/$@X;qnf'RH/H_tw1=X;~lZG1+?\C졟bD9L[&f`;=4Pb<n'}`3!݌_wB hw V6e-I(:RdYid2G:fC9dH^H?+bUˏc(Wߘf%H%x9A%%UUAUrmTxIW]gHu 3ӂgx8m`s%f)LW 0IvX.b_oRxj^]RØsWe۴:P=Q3a06GʧIͩZ-/C&uwiWtqV4*nc&Bh|6Eӕצ]AD|Ԡpެ#zh@A?"PlT}寀}6&sR1.,hxs jKݴuvEio]dܴx^ˬXb|2u0{Dt3 -Xr,2 G_;~kŝc4 Nc'V5Y}E 55&fJ K|zF 6ȇ%g-$.ߺ[ :Wt@^t`eM<8fĩR}uN%݁r e0鄴NL)`4ߦ0&n|SQ/NìPԂYKvp|$I2y˱褒zU5Iph~>OuK':'F$'`A=P4 -73HFt᩸KكB8t P*)t1s?в o-f8,v֥tgzVEMy[x(y68T@-F)mrrjDb8?r5fPMdtoSpM͡ǿrvHx1*S'F>x #0rx,o`l59<8vR,-3Wg!y_M=a]AHahB ns*L/x3f]}٘]m.\nuz"ф<ݛ1S,ZjE\6xL > aïIA̹Pgqo{2 qoԽ%{ se#31 AukVh!Ws eHP7/{ ->Kkk;azaFg964uѡa_tڐe^$^lcܙ*uG;۬&_B e4w9󏶰鶤& BKwZl?g}(7}(ŃP-t %@S}mH|bA-yu O<0scj 8{vendstream -endobj -169 0 obj -<< /Filter /FlateDecode /Length 4058 >> -stream -xZB/!|Ka'v"þ6@]I+cJTH竑y,E騫i 8\kgO_O.|,IIM.W$Ir9y5l#ߟmKӏnK GחyF/a4(Lu'YpLI~V7ɢm%^qs^Ae}Z.B?(PlѪg(m.媮ܮݐu(},܊1oD#ٴ*Nz%υct1bh,9>Y`c`Zß".evZڶ\oerQ0n$-+^irJ8 k+SJwVA3 |HW'+oRDZ5\h@#X>Sܼx8E~0k|i0ߔt]]W4O(7rzw; -';W]gŅ*+!J/eЃ~}[v\/*п*Yz\U_W[8+HJpNm= -g?zs~8L~?/`ucXJnדn -H!tYIq_ZwϬ]^7_ 'oSr Js}I -$+-GEcG,خ!%yץmH laj|/{[pmȉ@2%D.iQS݈-QqWV]g<2UnRүn(J -8捭ɦXI2h A3NI`ZFm0`wO'KFDEJ waC#Cpx0F$zE.lNvf8agL-|ܑUع+:C0,L |D#8H5h d"FrUro.I_29VpA}coOިeZ H/Þ3v!VX!"zW}kI)˲oEUpx+w3s1rb X[=\ǘl -}Q"Pn$$J'DW##{]Ps:s8I^CvE$0zم8B' yTu'@ _9 +~)qqK&@A^n!ǙɧxzWpm]KN͕8,0ww;d^t;\7~Ǫ YF!; 黅u2f5tlga495?GGzAt0dy4}FVlv -B'' fʈfkBCq4s51eq8|xtlkXci2/ BCc(Q2<]!21Cu ώ?{H[Ca] AV-k8++:,40|2u^r)zK@aEKm+37׳xz+Ԁ4*:P$$,k!o ž*ys04PcgY[#G!SxXUoDrb%z j뭘4" _Ŏ0u\YER( @>=̛_gv3 =ɰk'Lh)W=.6 / -cEcZG%Q{(? 1^`@f`iS>qǡ;їדI߸o։~?؋}а7bsHsw9=eFq6 H/Wj +ވ^.0zM4YNBCC{ $˽ =#( b6 YHBb }6ƌ>+VUx%5A{<F~+I9M`T@8a͂Y'G6rۑR|: -zwzYW6i1EwwȈ |p<tɍT#$r)uH%]rdHLm'ufm R}jS4oWK(~12njm*Yq2Db%Lݛ*\koTyr]﫥y.=944r=i!~\@r ;-JFz Qz.L0 ~{GgxKWzIzow=(u(i1y1K}Wx"dh6 -\'/5_M-Zo]N\TM&ET)*ҵԨ2Ri}|{<ťcRUIMLя&W2<^y=\1%Ph<GSAU3 uBh閒VPE6dq72!X;Er:Mڊ{ 4!m -B-KioZsNcKR/0MGħjjB-;:]>мi%O=PzA -gl=ٱcLF9QxT8 }PZ -M@AV ]7ؑvs%Omɧ > -stream -xڭv_衕(Aogv/Y;fs -QZ"$eGɿwPLYڬ$n`07n=̷߯nvŹ?9(T3? 
-a\V@&zs{  -jCjlF`Y7̈́ufsb?`GDJ3B|2X -)`/F\H&\ʱ^9@AwyqKbCOxoz$_Vz5լ]R 2C$&JTyW ?ą=x$WDrEyG}}sl2?tuWߛA.WrLϕ) K;'))X b@&̀WXgu~[dc"@C(L%)ʼnEVboE܂eVT2; Xگ[͒ym:8?pzA;a},z̖_Eޔ#a)/yW9y8*]cK_V|)ں:J-Ѩ 7uCf.Pv(?RP3Ws+R.XAosEUwb.tqkZ3 =u9[אBl8Tq+k۽PFDiW:oK#WfhF1Kt;7>h%PFqD`XZouY&n_<}ûڇE\[6uVD;|e&kB@E3w[)A3ιEYpZU>=:oY +orF漙&rDs&([nϷ^.9mwN)/6# -)3su P0a}-_|?(⃗Z@?` K&Tgzg,)a"ǡ?:vŽ#+$ZHwn"#|{~P |Ɩ(~"֫ӓ~u&&-j먤2O3˩2\TeW;6̲/'_R5}bnY,D zU6w^񏽕0p/6"y-FuG35Z)(@166T1rQm(&4-*;5bJZ4AX4/h%X<@%жvvzJh;@'U -bPష0=_8qw],97##tE+L2LF$,x@%Q{cd%jK͢. \LGO$Mo8j3uF~?ެ0.rk!6*``OUs<& -`l/[7Of NlUQ(±hPH3bDT{IyC&e ND,d <uJfD)-xmE7F'ʼnՀ9(0a F]×@׆.cyЀ.pJh͸Lzu{64ЩpNEpCxEqHX3-(ϭߺ޺݆43iuḖ'4y5]H= K<4H3S> qnˬ´rDN\d/T -H4Hu#d`#If?MZ!* -LMlQj!ai}>3)$(p +T@ & z: l -eebo 0uAuUtSxt0clP.zu݃b㌐eO4! TJ%fܷ*_f#+z J R6k&{Q6^6śC[:+ zhkP8G]/khh&XiUy!I, D_.Wr%TCPJ0Chxd=?ęQD> :z{i{昐p&T`G] yԞ_J7J#şxDNZW":im4#=&ZE60OYTD~ -OQ;~iP~"w-Fc/GAp֚acͰhƚ=&5 :%}WoEpiRcB6E4DqEG.ƺnmj]t%+aXk:!\?H6v}%8f7GG〇1ԡS՘7LQGY_..ufs1VxKr#b^Cv0΋ᦈc8:Ww2a9J扊[<㗇?w.>+s-T ~d g*j_KE/fMښx@1iob:Èg,\*|cv\p&aCn6!* X~w Fen90t"v-CO{X e'.%ۣJEO;p18b9O P;ONu'=D(|e\9H}Ǐ Q֡ -2 -o^#sii̛-oZQ @b3@endstream -endobj -184 0 obj -<< /Filter /FlateDecode /Length 3880 >> -stream -xڭْ6}B'b1$oة>$ -F$Pc đފTh4I76hU$_W1,QXd,ϢPelhg?ϢPTI`=xϫ٬ ^(K0j[6֬EE[߻U\3GLȿ mpδ,,s!zk"Y֮E e̲6JB=ySCϐr(aO jt lWm6V͆[38;^ּvYuN5 X^3;럺_ CUsfa,t/k1J0$tƹ>953}[~ۺa}̖<7/Z߽z_fx'OY)a- 3r6'Ti$iShGRDqlP3uV2:DԁhǏN V3:Wu_${t0ti\b5#}'w2:/GYdC11ǽm;v$c귂$jiWUo*~c^L]'Q,Fwhj BwUjLjn>)j8"16dJ #LNń`bP\W@A]Jߣ4t/4̒Ɋ' E0Rd5yN"ʳ)II Dc6r@2 D -B[ϲ>N04K֮g2MzQ.81ȟ iX#㎩ -09?Tswa+ONY?d0 0@4Tb&g:8po!\ ff_F/&a,/I7Nk0`IJVVh0⃾W=7wU/.&::ئI-ѡ#ZaGlמ8Tu r$UaZnƶ)f3? 3]= @'Gx?/g՝t(&0\1vuY߿#>f_CAyΠHLi4ϪR%xn"ȱU<EpB / m#3[ ru}t9cU:Ge -#i׫ZWrBAk {DCm:Yp& -i8ZˁPkw8Jt8F -3O8+I  IaN$PWgst1ن`#4և]P˳d M^ w60(|T(mQvwK۳&1;˽7<7e`YCP§iBkKvmM'dV̈́:/ܲ~p\w9`C(h *}$/S)GBH.\#!4\s%2տds}7E#P - -#O;ym :Xl8*jW35Dz(6@;w;A;4u" \t2D^Fl G TRANjm}gw b'S&Oj¡j8?2e%2\sb—Xf*IkƌdE)=1s }Í?n pTz)t^'6^!+'K8cO%bV.v$Ԏp>vH 2??@3+zF7lJ@R| ;^endstream -endobj -193 0 obj -<< /Filter /FlateDecode /Length 4814 >> -stream -xڥ[ۖ۶}W[›vb'q;^>3gx1Ebg -(h< q] -prt\!PW"ˤWW"N$+q'qeWyI^+wZeT߭6\ڌ -E4\1loWRG+6FsJ$9W8Td<3 }Gih -P?mm|>S` >ǫkc_*)ZillIW2.3?[|fnm3 [s;k6|چx٨R;k)2""1/Lu[P5O - ؾzjlǕN -頂Ygc')~suO:WPxĄZ+߼oy%'Hz%;f2۴H[7VىPtu$ k `71ks:iʉikh-L]`q+qTΆ~[Z/cc ;rRrˌC1ڹ*gdWB]1i0j_K^e X!?L2mRaB9Ϻm[I8~h8S -:&4a -pP;jc] |uwA¹ۏKBГ0c;J52"h-Ş_gd[,l*熡r-}2PغXU;>RѠfz)%y?}prFcvеk8SQU#1? !m[94( -MSsѸ./,<6@5asi%ʛ{Ӱ֞gaɖP"}~8PZӭSjInhxi¹">[".JYG`?]GuX} qa'98`g lE FZyҎ:Zpsgч@>wXO\,>=ul~: .i0fc/!'w[u`r3h=?L (jdn7AHﱙ/ nqtfj=W=e>l7zDb,`pKI?^! - '[9N:308#kk&SX>.cdS75gVJ|dJ%[n}ۓ^;qڊnm8 bGR޹OC"!зWҍ룞W'XCmL]@vK Ndd*\5Zb6i}ΤBE|ᶩ1. 
bN Ϳ{:9KbD@-Tt7v_,l~PW)@^,4y}?ܱ(Z.5Mt 7>,9Ϥ%N]ݡf"-[9PYcs%NaR9 TJG-7l,Q<1Uk+6?~Rƍu,řDr)'N1M3)(cvjKnux@s ^]S,wI5`tB|%Q$[viS3h6).&YԂPq6saTyb_uDh`MQhDud!}j0;Z>pm >PˣNjMy8.pG,@wx5YeΩK%ɸp 1c`琮C,ƚ4xc:yNi\h܊~h7JbeuUE"߀sg5PUqLg rkAIP4.B6n|xG.$x0Q;G o\CLyvlN /laC~ۂ7(M#QZ,(7=_nօf&Êa2!I"%E9#;~Pp.۩/3'mT|W$7pW(r a",F |8_xm Vm#G8=j+7p `@c+y8T>ݻhU'ؒe؈\(Ў) m\v\ŗAV]ËF認c"tҭ3ߡ#yB }\>dLM6Q>&ҖrK<6$;(]Lʌ_P!ط -G ƺL]|U"(,5~}0X '\3L :K-u=z~ǧN*MOA,^Pgw/Ɔ֞I_pKendstream -endobj -200 0 obj -<< /Filter /FlateDecode /Length 4749 >> -stream -xڽ[r㶖}Wq!oovd2D[LK/8_?kcRsjZ @ koIr(<)¤mɉ8ɒ87$3I}eu_fQO(W?'"ɣfb)ʅ~dE4]|?U+פO6b!`Ռ`f:*A.~We=i(ҿE0a4XKuKR7%,PD9P̣$(vG1hcX{e3ONIս#CPbL581l ,("r_;'3xߵř*ЃkY\ڻfZD@ --+s!q uڊ0>nSJ߳f۪b*q[ϽMl+ɔL&K\0j\\TC>U-|ibnio-3K"Y*1HxyXj{zu -z4&T&HckJpBp[ڎ/>ٶU P \ `ISK;] .fWo@%x GWte2k;OQlkv#}6&u`~4L@SvBem0'~Fz9tض./@Zޏx;O1;P懎z3VA՞=^AZ揤|ڪ s`xx^4.ySX(<9yѦ k9:p7bYoEmӶoWbi#JASWb$HQH[l'ѝ-=_&=}½#*˟M[j#t.XM`)da&>7%b~ʞﳥɡsSoF^'$ug:@<̎7sT˘o+]v-r2ȃnC[A׶2;,\;X/b6}?G"{,M -վM;6cvԁ1U@fF'n?ϣbeF]S a&=,:@ -gP _ "닫[z>Kmu0&*e)5&,bO 3}t]H8x/%- PHaI mv?>A߫ʋESڐyZ}1IBo$T'|K*(sp(|+|AJX֡=iQx -2G΋!@|%5Y@f6&.`.'> N摇!7N2Wb+w2[CH2;w@?Yiwp.2Sya]Uv2)It]"Z50b^7݂o}A,1W0̠Ġ<P󬅑r"Q[B1-G~6]lRlHA5yD'[35FmBΩ8f,-Ճt# ܅g&iTmY6ӑPku0BP$:1&_M:\^Ch>=qyY?T01NKP5 Tf %G8ʄ+-Pۣ -C Z/#l+[#C>@dm i^sM_(i?{Mqxq;9=˅qZM=ۇP*VATUCaQyϲXD;7^zAq$R`q_Tʊ;T΁%^Jj)욐M`IB"|^`]Ԙ Iay4ixP2d'a% k\bH.j2!HzΡ!B\p&+ Q TNBi#qMId6>3wӴ-.paqgPʀП\w; Y ds'baXt[V|oې[~-I JUd^J7|;7uMFLE,$n~^ه_絫4.-6a6nvsuAHxG_0!ғy.%}ByGTŚ5y&Um')SPS:em\H -Xp APHm >(>``|q"mYg+c|Ӥz|U4EW1u~28 aP-lh_GŶ ^,|r>Zj_4RzmΧ2vdC -a_JmN]Ri[N7 ӣg& - Ƞ2!éa AU!SjXy7vft\62H^)0P&BDR9< e9)M* eM9jX}WlFib+ 7~L ,#67ˠe|bb;S~td"s"HhxWS#b|f Rq g,𭟫n 5?Her\Hc#;=5"~o ǤrөKtύ>M0C {N Kk>tϛm>RkZۧ< 4/)'!KIt['l\Έj?pl] -8x eW>tVę{ei'>..<=ηpbv~ʿR6k-;.? -<#CV\y:chf\w]*.X2w~`6v/Hba+8"56>®~̊о+ڡv^Rs:5ʊ.b!D}{SqDR*49HOurhGPes >Bᐈ6٥]/jwUxGT\6<7DC)ZǼ KK;)_m-қqP Hwߘfr˞oh&;KF6ެIe rqgUֹ'r-3Ϩ6vq@Fڽk7gn:`x4wE;4"Gz3菩,?Q.(Q L oHm9YYv!~ LK㲄|!&~WN1&e h2l:ˮ-gC]/llU9Q?dfvM]~ ]܃r⸕AvL>;D(f(9B<`:j1P2}?ZlN#spBݵggeJw{;I,0/m4y P\.A-HeۯKnx?>D1*Gql9t1-{BfP# c-԰9&-FM!p ~^r~"Ԃk()+jpJEޫQ͕9ЯPoyKKI wmtvUB9A>fCcѼTqv=ͻHQ^ӑW/5sl4O+J-~,.4?269":+y6$pFe ;'HCC[]E>5 e )ݚK>L`7D:=O:wc2>Mw-PQ6"ܥߪ\:*h{S>@L> -stream -xڍwT6ҡ"2* !0CwwJ0 003tH7ҝ]" **H79=Yk{_}g=z cA<@1*y@>RVV} Bp@ȣ` V`8@ @b@ @n0&@ IYHODEr:CQ0c،@A1.a xyyhNBPe5RV= ۠ŸQPV@h+`T5H(7X7@<3  `' n9AJ<#nvB#`70 l* P A4Gޟa׬G8;C4`(({Gýlap۟mظ"y 0W VEo.Ğg}O$S@lm@}aP7`PP_4S"60` IG<@,@ϿNX N1c=Y%m?-('xsP@ =3c0O -E~Y/ lXBfP~|(+"%W'_v߀v9yA`& A&VU p;,A<@zZ y @94# eGco]9̯Aإgp -(ؓ;|$a^8u`{"P?-$"GX+ -؜˿6 -BH?N# u5 ܫ#i|#Ę^q˥2= % N-9{+ JvqW&O<7J+VyEE"mc׫JC;uZHVGWMi ϺVU}ڇc"R2Kt&WS,R0 -Ps \a P*0*U<-qʠV%e[r\ke~=K*6넉9K-+Xƌ/x6N93=xj:H3Abo#7Pܶ5)+\4YA},Nٳ!F|"iRųmƔ7`iU7yz{ \utȱ͸TGcg}2 -nxjF{;/T{,k;#|+ڝ-|0 i_i'nn NV^~>D;?[i@n@EPxP~/GW?ReMӭ\4m+TK!0Qͤmr{_[؟Hn]o͍☨evT2Gŗ}~;,ɛ̦WW;r }CDMF4egYn (9 -Ymv[ݣ^VNH(ʎ5I*Ȯ&a'Ⱥ5K caSb㵻I6Ĕ 萴Sg#ylGz+͞N9$/{|v>$ep2[-v2Y@e9q|RⓍ`<͢3B-GccYD['dx ɆB,7oP͞f('5jD"β#:<m߫ԟ2+v:#[YdQ9x& s3U4k//"((hԍD=svZk6qt5 GB^)K*(FuD-Dn(^2|ӶIbAc<39E2wOPGSٓ4 wyϦrf_ ~W1E:h-fcF^Dotl |ʃ'hW)*q"22 \ȸ/b-r4l 9# W!9*4a?'-F w}cGұekN_cKUQs2󖅲w=i"~Ģ %qT0ūA TD&^8z}uB] GVV+bdwp_]r -1}k[[J\}͂spƄpJaUje9瓀WEC;ul|R7M鼤 -K$0ht - >oxS!5aNg%43OӸela`A.!o`Q,KPRqX3_OPY]e:Y=]ȧn|~1&[śCʒ=693[a$"2h;W<[GO;T/+Q϶DmPPBr=%^[fHe-֖jAfd>٧7]}>Ӆ"P6w_G'lGn־n5"8DN9 1S4\{OINBA}8gP96\47SqKP1:΋2pm8}%k_B -"P{x:h?ɶ.};(Ppk*9{YF4^LF#j!JYx(jH"⥾NW|IR_=\Ĕ0UfnĽ3z,\SnKXD3HU@gOW}<(kG -m"CmnMYV - &NP|S=jL!HaF3 jɃ=yWnQ-81iCLktܨH4BS8/Zi*D\n.OF~+Ժi0m5sq~)HP@U "+MlV$"ab͢kU{E"s@?Fwo|stUn_nԤN%@\/<[ to$ o:C}ڍOetv{[ Ù;#B^;L@f)q`b-)i"P*=N~<v.{U(/ZV3'|liH2m}{{:I[ -5=ͧ UWOLM g vʉo#!}|mQT=(_IF*uf(Bs+%U6"JvJ2؞O؈ 㙱/}@ܦB8 
i{JVt}9e;HoS}RI\*]ȹ_ny$}}^TE1slf{AيS>0X^K@ݬinwMZ]Q%s 2zy>jdW*f Ծ $8R 3/ -'I˺gN&OMs\I+!>\x̍[#"wa'YHsiùKW -ZxVf8- -&:Yھud1a"cpщ .V)1$)ڹuEmQ37>[J^%R\pVk>7#[ $a܀els!m0e/ -/cpM{*$}$r6*|`JqTZ$솁>>+nl}}Y5OWh>Ov xc1cdd{\4M*: 7kOS6gA6FHUOqKdYnʣԾ5&߽ գ0Ά )FvQ/]F$yk;@J?/2ix&X,pȺV/ =z`")PZar2BI -\kZқ뇧=Lsf:$uX yS5Yw D?xA2%%n)}dQ2ד\څMbN}raq:߈0nb}bv\:1!zF2:5o4EKTƓ3s2*rv?| (-P|x6MF+4a{jyFJ:N T?Dº 5Ϗ`􍴭Bwx=X<> -stream -xڍtTTo.!J7 H%ݍ4 08 !tK " -HKwJR1{׺wy߽߽g(WN^BpDK/cAŅ%EO UWz?o'APiu`>D# A:A@ Ovg> p{e| to2 & ~LgC0o+V JsM/\ k!@ZX\SwῲH{`ܿ`H@?Ճ8A۫cw(u/vtÖ?v_*AC -|i9>:PJvA0jpG/KH$2f' aMb Ƅ1HPa$vB"1=z̩2 qL#ejÛO_)1X5#0|uNJ:9jV-CtQ>,gkZ2ʑ=Fˆ`"%5x<5v" h;N8rA٘KT+;%-!Tm3'|YrIfn ndVC:j&[@+yπ\ )Vk(lTb(5 L-ʳfmniSï$[!m?+;?SI\*DŇM]<%8=:a$7>FD4d\63H,0Zݦ^]D\\^|fa}W -t -G[Dv}yxVZ| @cLGw_N5nJ75!u@?3LUuڡc?Z񫇎{!o߮yZeڝh-xc ,yUZ*nW -weօyx99Wq̝ -ZMo{bXd%mQs'y\Mч!]c`';Eg QzyC?i?ݫ~ʏ f8x=nHـ򾛡FffQ#$U9 Pf)qܻwuOM~>Ol1 } -ㆹTeO9 WͲ6n}^<h^=VH/y7Pv -PNO3mE2㌺9t6OˑDa-1zZ+d E8LxuC9[EOh?yG& -8O-VqM`O q27^7rURݝJc?C<ߦY4Hɫl#fLL?*D*K?a7ZT;NwNJ?y7"[m4~΍O4[vaer_?[(E׈'QQ0,p]xAe<^0|߹4D^mHTPeJ16r%36rLF?x5;E98˖|6SḊ O>*bGQ3q@pE.ŧ~^urY~F57zS3@M6lNvzaE&HWK߶Bg^Xر5L0ǜLX<6<P6U+> kU[/mi8pKkrQrjU#r:&۴18ຏ OjSr*j>f*tmd:̵ÿ=ޱ׭.!c7u\D{/ϼ]+KzLtWE=lxtMw>"A1 oUE%&ƑP<0 h#e܉]`z`4SW [0 -uR炽oJXj>=΍.=LRtcEǴ.>SU~6"rq+`͗7YRY6SΟ̓9hFQ~#0`mA&]|btQӚm -:(*]!/"N%ƅ{/?)ZުjI?Y*r]F3ݯZOݻ~l^yOٖ2a]WlZqkӔ{OUުg.)V54giS2nt_sdVZfpϘz6|&b'KOISvLC1^6ΞN} H盶F|샑Gb$}EAOkC8Yc:.bۦ2+5n(}+^q k|xK5m; W~mOĊW(J Ox&~.UX2:2CzqED+S>?qԳ2 -{Hb_By:'њ M#b^qA:;SX?Zxh-+iQk꣇+?8PBllPYF͍! ,E!nziVGB}G_Ɵ, -9Χfn}~ѽ̦QJV嬇"eonztq\hYgK2íUe/@󚰷)ٍٛǵKڍ63w左T\Bi/|~v\I}3.D2])1:r0Ǯ>Gb5ܙ:p x3Uٿ҂]/԰]1lRV|xÖ(tu}vn9q ;0u9ѳO!X9i6)%v :g *yb9]e1dR7kħ鵱)!xiio=ъL2lKY9Ώ#Dd]ڪ[ -I#'L寚3 _.4q.HoܮowB+L]@lc\uefj /q;LIЋBDW q}K'5JxOU |5riWv.l=3 & cY -"w7V{JJ-Ƞ^/CgW:m[ oo28rb&W; LVBo 'S,u]"m5ZuMhN+DNiSϾқ j 7;wumqݒrV@WVD% -/ >,_[RԕpX 2tc{*xB5>QvJaW(o=x?EIU@m>v6m03qPNCzՠ`[k{oWQO yIHa -eG1>7Kxqӱוrۧ~E+$KO|-s9q%7\W&xթ uP?m,9sیr|-p_Ħ=AT3 am۠.Qt!IqRҠ6ǢtMxBJ%I{rB"Kg!?Hk\X5t'j%{-vjOKNY0W2nM0qO*mhUE}hzGH>E~G~*w1ٓ;ӴUib% -X_O:΂i4t֩QT/5l<:ncvzyvf?\ H,z!ЛPw3w_kZ{pc=[AC%*&=J=Dδ8 R dFuG $5[_؎&2ɦl vjDkTڨr9WhƧ,G,}2nd;hD!O׻YLHv:_WM&Gӯ5@ɖ'.Zs ti 4Ja%5OƎztCr/飲xIJRեzc7Ľ-Do3 0<)mS2y\F&6l&4ʜ։Dž 7^MA@4y8Oъ]dkRUr k?2oh7G%&Kq73~qIH)]홬((>OUQNr#nC0[ǻ[C1Xːl[#KK&7=/qsۮkB.ZJ!iVƦ)6FM\@>_?6Y}JyPǽw9`w^ZeEe#u;݄6>~z㹄 r5#:wpV&zK*0`xiZ/m{T8Dy,#:j 4N{Pv9řb*71yS-%8LqJRLc\"6cV|I$[ :w\5i ,enFHAb5^Y,g*HDL |MGgdipPNVfX wpM!#=qTdeL(r/^ڞۋ(^&x= >=/nFk|^z?a-/K 5Y؆S@V^ak+Z^҇2;DS/?dg -7Pa;3ͳ}l'~'8O~])Ѽ ,+(.kS<[H&񑙉VWF`hI.|"AIn_k%+{n\UWتG'"5,,S{wZ˞KL/qɃ=tQEw/3J27G8b vV X߷[vV̟Fg&"tL3u`O:v?X!!CDe\XE?MNӞe̖$x?-Cpuendstream -endobj -217 0 obj -<< /Filter /FlateDecode /Length 6994 /Length1 1383 /Length2 6042 -/Length3 0 >> -stream -xڍxTSۺ5"A5I RBH $"U&A( E@R7Ҥ"Ͻ㽑1v\BJ({: - *Ɩ EILo;) G!cp6U0E!<$! @@4@ w -P4) - wt} @RRJPw81NPW\E0FAP )xe07i///a+Z('cFP452@ -30) aaP΀CH4.utnP_`? ODp`0ru#p#G@:o t#(\< Gq߭J0n?!p7 Z GQW6!TPP$M?U;wȟuA`pn"HC Do#JIHЇ7IW7 Íà/R_4 -{@})pC0{#I83wpoG?;kPHG,ebn)g9Q_!Q)$1ZH - W}e?#>?sp̅xM@q w;_YWwGo?_v#8z`p*Eᴀo9/B€qjPB:-# V{C _nKo8jB=apQ@p""h58 r%6Qq %ŝ5n%TMf0p3`(w_ D8 !Gv;Nji+Po(tz u~tV%:@o!|C1P%CS ?fݳAﳂS5s1xjBy=;BPLCO# Y)x'.Kv_j -(qJ02=Unf]MT _Be- {9}݌t YޠOTJR9 \ø $翏wMuS Σ%>!6biM>1I)2WIo^" @T2Sw UT|j?SM(D0Ù]LFH:Eqse [|B6@EkEw[Py'VhCSo[>sn4(ޢ9cc3ፖGj3~5 L$Ֆ/.'n,(7m}Ea831<I54kҏǨ(h]vKX_͔K? 
K|4;Ep;ja۷6"$6ڹ-xa/֕-G%'s?z {vTkTq:l~W,Kڻf5o}Z^ nDY]x\]UX騞ĽVw du*sg%fVo{CeU %c)嗺RdfwiõHa<'X -BA{>ˊI}uI;mcȽn)AAb>XK|*ZvmY?dV m PtVOo7ޟoas)l+peC J3ŗt{F&Ok[ɹ-V 1i[?P9^-|9/ W9̶񒄲xQ!!,ݭ[|K/֥a+TEڥ=Zף_plD3f.5Cv)v,_cCتz1 7a93QLRq%j*7BPOg=sڪo$gU} Z.*kyQ.%UO#,CU'RF1E[n,f{bMI:k:AGDžr*=-'h۸@=X2fzL;`Yє#>n*[~VfHXO} ]5NE -G[ -In7f @he $z6b)&waa&aaxH>_h y9Pr(kd/|6|ÇmU>ɮ/)mNAxTKݯ^)ZC&U!$0~qqE_Ώ2 S@b'J/.)$IXͰmZ{zl3;z."%n ;4HҒ _jG |wzz>٘#O&n=8TI ]BYz69nL2=ji рPoHWbX 8ŕx 3\3yܷ\$oω tc?0>xelfe;\a}0.Hs`C$9QI$ԏl-"2诖7^{|.q^~zk׊ :V3&ä3 Hrfe KZeh(ppA -׫R$b - 9i9VzOyj+PCQ<NXVO7\Wܿ`ס;h:wX;1_YQa]_1Y%6smAnRZ[K;!jb %3QPI Ƹ\!-oeS=Zw;Sթ.ճ F;3C6S\T@|mзxу?؇;t/Cٴ~Iq3hſeHV~|=M۵5)~ 3B6mR᩷ơ!OVMKc] S.:YowlBFq9B2Xl~g=vG/O靾0 ]@!o+C{JUȧiQ~ WغQ~[E17>?T^ /'8UhԼ̳3 F>6y -[\f4A|$qp`N ԏv3x+ MݮNyXZ3/O;2&&Mk̸"sI|HQcBgZ"+S.q'y( -ZoFkDgʉ_ڵ2:{ ĂN fa"6H":W2(_MS/P*?ԯ8G?G5]u(=hg`%n\\[?Ȇ"Xx"*"{mL,чg7ArkvoKl_0;e%&F'c-BXF^}’oWxw)+r ('aF}njb]0Mp't)Lmb-,0N/XCTo} NtsP/+>OssDJ"%IW0o[Hw7eHbڥ2tog+7,gx|+]C ]7iJnO\wa -/*ܗFfmczc4//67q-؈iC ӋQ?3m~D El+InJDũUz%3R+jNPo334S1FYQ,^P2'&V -gЕ W{ʪԠJbEKtlfG8HX+ֳ}VR:VVN__V*.Yl~zh»;Ql/wA `  -NwJ-7]ey*agڪ17ZO582l 5 ^O~S|P@z2*#!X{+KDּ:"6*NO?!7+qk>͙{.o5WqAOacLEs,G WegYzd͂N®=|^MAGB5Uu8M3ʀ:/?-SL1DDq6 <E<% 7=~ /0Tܜ.HP {qF˦nI۶6ej-^M,xl]C<0(wJ=5M{K^yOZz\bg3OH68U3dt[|z ڟxM3.S sҙ-91aN|\8]\8lUC`yToUSeyZ~lzfV{0,LiԲ*I,M9!;03g8KFؽ:ޕ)]٢&=(vfoWHH# 3N⏩fL -nؑ 7 uw)LihO8U?ҫ'݌T!7][&;}tWY 4qrqe0L@%=QX>qOR,;jsq(@{7Kwl^La2.D9 j[ҮZ,\ Ux-Ȃ6N`QKF!E@46ʫD/kEbraA*-lț1JUy”+Nr{XtB %1J m;3#1l2o<86e?y6_>VϼM#tE-I~ ̯NƎ;٭ -^0?C)sBQ(6j=.`@qk{㵉GRs{aS -<^UZU߇2yaf}mVA6l(<˯u/gzVԉ^$#9_5?Nݰ}y*;8IϺ?u0Rڶ|/wk.ߺ"PUt+vMno,V^׸U$^ -Cl/i<~tb߁517ۛیJfO}P?q.JeTVCa}Va'ɑpRӥۜLRDLCUv}n.do۠!{#1Ƨ_:XvE܌ߕr0zA:+ExHnT5}^0|t_iu<32{2Q.=yw#/*tb^mXpj&:J-{EwC( Q&iёzP$VCg!# w}7 -C!e h7$i-kN>o -*#{ԣRD.ɭh=&Հu -r&lL9]lWx:yc:VxU3o{`. lL+d^q}(Sjs:;Bb3wC:[;b -DŽ> h *?Όi28Ӎ@9f~9>+]uis PE ;ZRzGq~%^%5P?8}/}60K:V!q my8q{}N Rd 2lu6 9\Ѵrm^kd^3M?̉f^:lv?*v [OϳGD5gQ'8ba."Ƞ~ 2;AQ^QZu'e_uDqHV6{=PΪ$դbZtI3ujWK&C2%t=Yz8nU;Ӛ܍RZ00Rxendstream -endobj -219 0 obj -<< /Filter /FlateDecode /Length 15411 /Length1 725 /Length2 14862 -/Length3 0 >> -stream -xmspe-mc6;ضmtl[ӱm[;~{[ﭯ?SO1֪z(H$A.j@Ff^333 B hbe7v4fUOwttP;а56rhڻYZ܄ݜ]\9 -\,s+[ @LQI[FA -@-Nƶ%W[+S) ;lLAfVo 7?̝j" -j 1&511 'q3?@SMGp,'`fe0ZXś Srif@]mmj1{;W@ {w'"-5-?2.!>%'ﴒ_,U.NV]fI_-[%*j ``qq}\SW'' ߤ3 ¯.ٛXUITBL-|[EmifxpH4h"s-sBjD<!_.!|U.xi03Uq\[ R`=u,@b1*BJXISg|"jPg,C LwWw) - +u -.tf9kFڊ]KZsL8^iqw}Hϐ J9%)w4Z^2UnJ;C"P5"ҦާRtqM+;tVb,ke^)lFI Z,!&rgGa|n$뷦_7O\4x{ػDi_(,beQ Kc iUa7׻y3mI0k (_9 -3Uh@S;Ců⓻O0d6*D|ĐȰA.Y2Y 'sK>Nnt$ [_fhIsze69`&wzt|7 -bӼ"ƕȮE媙f -iӯ?= ∜`nCh1y>w,v%nVɺI/g>ą ;4--C1ޡebDuyUWLO4QʩJеjt_d/LH -3ڥ$!NLrhB67g׏ ֿZj^f25B0F~߶5 - ه$?x{4LeG0G^vPهH=X{j2ޙW֭ :d9Wh% v)qm w:?;7ҼK -&WNras;y DFbqyɐv&?EVpK@z43ǧګHcb_%76~ohͥzz'=6>%X8 -duN,&S*F`+~ -?$m89弸'g{Y0-B(N$!ʰuu+q[C=Я ta/)S^r=0[#*ge`&,}4m<÷c,e.Э% q>Gj ΁tj:Z M8mQUKn@F^;L)'$s[l_wEA6DF!X*m~J5&gn悩SMhXa<^<è@b_!vO+ZKCGz9=b3R4S:ޮ@ 0DÂyCkuzbn) T9v|" Dqѕzy@GrD<1uJ$V`y^({?nΓ²]P8:`Ț3>* -:+sg t-FOw+{?5kKCI3.w#D`I raN ɗ>В) 8\{e}rϤo +$C\Ål q4&5Brc['zɞJHx]ZlU#V9=!&.RJTфioSg5{v3Ls<8/+>uFyɾ3KL_煷 -ܗ$\P4=^%d86B#\ztt g\Qqi!+y[᪮W"I4MNy9 "߲"g 8p[\e}b ڠL6HV R_lo$vbhh[Wy˜w^]&C<\}ER'Vv8U75E(`V,Q_bk0Ƚ*ٌĄWջYF.q4y& -qc`k=~Xi!VC }xMwWf|,^LV9@#JR1}2$w:~MQrOyn ZF#ydo`j6n>jMEL.%riaȸNZu 5{2d)f}#s$T`s"{.}b1Ϗ,c,:!DYqד~D;Ni.o1 XXi$R h[J-amٚs!KЇ'\ЪwcRf̘Iwz:L˒N&ѠAQxIζJޔ¬tL^A\/7%t8}FJp9FMh, -5gn7ЁSBdc;lrzNj:hƈ7CEwhgfzAB_ -c.smo@S֊hڮ؀60ϳ{:;ӌ'XKt^˅.`chtڋ^#1ZNQ`|! 
ϐd5N5;S]Y[De\I 7 ҜۤAE&FqQV+֋jWަZvzoX\1/a\:6CKNu'ycܑ9͈r<*qܦr)L#B"!JZCʼns2ŁFE)MkR[FITߑo&)8o#TDEpl?E, VbW^\mYg$&+c) ?ذF!7^aUSzrUfKJLMf$FiZqQoxkiUV+L\ER]J҇%"aW_|}i7Lb8{Kd8):|#X'wwVCKW.R*n -E'E -\| ɀ  |2R0ΒyZ72=1L¼z\u` `Yk{{TdBߪTՏZ>$6c?gεřSWN+{0A6±2yax)Q,=A>d  Io%5Ħϵe[vx]Sc}tEL9Ca['t3)!n.h@v%^|uqiC/t!Q`/i\_Àp1uI~H؃ ;9{Ѥ8s*M}}r{vêel?g&_>h܃[$\~QPzl8-a6R^aqmM!woo>8uu܎E -tBkƴk\,YlP&POx<.#O+;LԆ)Zx1lqxTǿ\V,OXQLA/ӚIp%S[WM7+YcC/(AB.%)2׬,'G7`i \(>h+Ry5cʌu=mfYnʡe cLG{˟(375 HITʡEL:lf,,+A{*a &e(6OuWjӌf0gA;]KشAC)v ^Wt\Wq[c^Lf~[d펶7#Bpd&&͔6V-7HG#K8aPNleoH_1G5,BG?5-Z-78 .C>V-;^nZj? cGjp&(49'^7\mS37Մ4votB5W.@ qunL4kVG PH' kPwiWRcX7@ӎM˧"WUA,<:/p]"k|ѵnB0&R}H7#ja|BTmPs?sY(S@8]!53W]V[(j,?C);OGWMgz0QF ABuʘ)ל'Tޱ=y{JpʖK5D&Y^IkLg2O$q59F8FJ 9+=OCWm2pveǐ{6r~JSi|؈MNJfs7 -6T.z= cͭ (l\L(gr ϒr;]?a7dfFOj:ש} dwP;uBJ9z8OQ{jCNL#nV\,y I'O6SKT}gUUfv޾*#JRfA7` 2U i;낒jAG*Ϥ:U(b)|ݫ88ʰ5u^{vx_IC4Se,.A2̃Hšt Vy)9k= G"K/μb>*E u9kO#41`9X*Jw?/g?$sR -X뒉_aH|* - -T ~u{0F܊% "FW |cY/U_6Q1P٦eʃw[*M"<ßR@F6DF1/k0IJ"󫈏 Fg7,NTl. 1vǰ[xp16<="&p7z. qҹ -aҀ>#@[ I ̒W9M?He;¬wyC= %Shm#?"ײ{Q[5EiUپoUwǬ%O\Q*-l% i> ˸@ -׏0#F|{Vy1S3LO -*QкjLVU)g5ZcaQ9EjL])q ;an4`{dWؾPRKHXe*wmћF% -n%ӫ'W1a20V]6F6SWr;=5RcjWtY5\P5Q\r:˹)mۯOj IHYR4]2:3a]hƶfc\lչ!K:N5v'1PnWσ4Us>ffv`c<Ǯ?3H>N2p1Eܵd}!Ln$2^`Tdh=BWss-04KELbn~ӽI^ r@*dO~,拵w 5cAi?Ⱥș{^:Hkw+`L8kS?Ea?1mcʸ-T0v(Ov2P/<1ֲrV'$T %EF4g(sS@A^<|b0(7@ĿKB+k(|a8}}N7Q 3b˝O%뾵-1YA“=@mUAŕB9Z#VM;%]`ҘJQk[ZpYBW=dEF.y-)lۭio#-3am2jzX@1 Qjj.oaԃtJjfI^-U+%9@I=PՄ\tIt3C ;vbXQ&9p}_qe]{rY|A Cx)B '9,+L qA\}Z^^f~ra8ͱ{i3\2ET@ bA{=MOUSC+-zXd ŅR_3t p_[HǎPxaRkmQoj*EB?MB<#"+;BÿIIK_jWL󩈥[B"|>Lw#ʹ߆N?xwv6 {83{&n)+MJCswr(!sG>X?+%j}cw ь:. -81M,K}YMf_9%Ԏ44Ad{s;nvXd+!<,h[솰~ -WwڬdԶĞ_|֢ {_T-qSeX""jJ6OJD;Bx  s:`+<-L'[#K`S#cќBAGT"S+wǦT - -5ɝT3yʕBy]*RA;ɌM׌@ѹ cЖD(Oq۰fj2@#~^HAn0 9ӫq9si=[Op&ʁ~J 9,fO+y ۖrkI?G`X5B -\[UOBhhZy3)weK"`."6[a=`H8'1 m<߼έUIJheowZ}=.})z{ ZB*ĎJ:ȵYߖn)NTq;\W -Xwd1]S$9%܏jLֺ$qrj-M;*k*ȹH9]w YoG *߉ީ Ln;kP@7@lCg E,@]&#Y}.DuS?𖌦OisXSQ]rE?Þ.ȸrH?ww2QŊTaďP}DfE`qJK:c.̐Zs@T}Uë9o~\m|g g=H%0$rTIj꤈/3=gϊn_xO.ỽjL4mFyAGB7uIѩj ҿ=9`;6c\L5{u_SwSL=U|xnPTzy ASxJ-1;3ddikއ@Vi/ot[\Wg lUŦ}kRj!V}s%x@GC!=l4Ԭq@54(H%wrLwf0*>\0Ca$+= -$zY=ˤ8[O0`{ ӵ;NI^x9E\N/gjN!EL(*6Z*z /L7%Up3^Q+cdaVL);gpy3)r %1ܣtb`:j:kuuVbGbAur,az+v>Oo5gæeP.&ڪ1Ze.-' 7 -gBZ-v׮Q~xoݲ-TQ(V]XYUo-Ya6N_GD5nl.n>L:P^ڝ_ݷ @\2MK.>$#.dz}Nyиa)Q9m’8S[8;TL_戮'+_J2,ƌEC;ӷbmuɑ6G{l+"DR4.~LT:({uxNНdYKKᾒ"L`F?Vӿ?n!Z V\H5Ȋ츬 ԫfF'oǦ\tX7 )ZU[#7`*B3["zu#]?{Gѩ̆NP7h_z1a-œU{q~l+I.Ҵ4cgq W`THo̢2Q4'nc't9m1o -kϻϤVVWmR3j&5fۯbcw;T鳜\чT{+ڎ)U5ܻ~:#f&6qdjr-[FԑFn37u1ܒ#pANoF{Iz-{L0;Y7YK7J|A\+Bo,wKa IҮsdu0YT K#zh3@U' 5F 40rM16?Xz@ zNH9\ėG=2"dPv p{iîT4U{L"*g&׵6{Qj#aDʋ 5hDrIǃn_s7ތ:4" [:+9נbwn4r{ݖ_>S/I nbĪSpa:[(Op\k5UB"oAԔv#}ˠu>~Ƃ8aYbzl9u[ef@3C(uߺ(匿βx[⯋:>hڻ)ֱl3[n H+i)_jeU6a.K( ]rTfXLg"&tK_pU~>u'p\h:.m&%ۼG fuv@PLeۮ'~БLj,s€Ip8A=;8\g<rJ++]kG>sk/B[dl,s'J!o֗{',EPuٙ!᪦i?ڴ^0xL2eiOwPG?[ҬD@= - S@mR0sGE\bgV#kv1s!8Z?E!n;IBŷ/[>m+BtIm =5㘸r8ͱ'> -stream -xڬctf].mulIxcv:m[ضӱn>3=ϹcͪO=UO9"'VT23:330q-l\le镁f.Fv8rrG34,,f...8r3JMY?-W gf04MPߩ'-Ov --bm-ohwakCWa )gÿ5+--܁&SC뿝׮fktUf虙Oֳ hk__RJj"rN7JΪRL aa;wH ;3!ۿ03tvph-+#fklgϬ8ښemWO߂A݁pKv#Nr0ph ˣRpgԙZkH Bh8<<}N/gй\%e8ߟhIzv -KN舮mIjDZq^vŶo,һɱDH:(8pxA"i}m׻Z f$[t_"tw/jx ^X6)Q߼ݓGX-A+@fwԁ71qt9@f7EӒR~u -šPsΓ+Mea ӮEYna^I"(ae]nF~' +%C"& 7 -~ s!d z}4C;(~_2v |ɗ?p {uf^TxvokId†ؗ&VyK)ΈͼR8i[Ge|XE(%% P4c4!/Ų_IHзnE~Dmn }б)H}pѸ݌%( -^bc BTPTG,R#r^t-9O}duwr !F7QUufd3AKO@ةZUSU P|.pmIY%9%bp3+2; -)A-+IZ4%vlS[E[,4#YT/hhC^1]^Y옒h)8'GY/rJKoYӸN|^Gru)qvUG Kc8jIu(@887^Ǚׄb_UFΞV*E9Y,5kn}6=Q{ aP}@HNzS8M19Βtaz!y$5 Q8>^«E.֏jƫ]C ڃc9U0.>{b8T`G;Co;K=ޙw%q5I98 }VrN&^%$%yE@zcƋo_bmaB,xC -W݈%q\" w;{ɱymIS:<XlPRorI5q[B;?&M{E56P= 6AOw_.2Zb eV  GJpOQ1 : -qT^[f(~2}VPjU1'{R}H`5p%~QfYbgmZ 
깡1s/tg''sXr ?犬)Ҟ1 0Vw}ڕXdfpLZ3ƞ'NE/.aaT{Ԝv2NK.7 IX - }VK.: tZݞWaʊrYQEl]u7Ju"3 -X]}D#KoH_N%n%Ao -uWaSDb(s!=!_«E"pv2v֠ j+3w;E2P0٢yz#g2ɹ*^mA0"-YmzB9}Lb&03/(|KUԷ:+ǩ/YRp Cf\e.//)h^Zi"Ue'N9$2ryp0#T`b`x4S`h-1)oZt$RzW<-EWbgFdD o-,H6V`KyE69x덹:;oƙ"ٛۢ 48lmyo#eLZגN[uTY u_ih{%f[$$%XզSQ3bi@v ֲ{LK~0,!β}EFx谧3XWD Z*#1a˿7Ы( -G5o$d -dسK`Hd!pьS&`og|sn0ФmĒ"5[I݃#Sa*X(a}ռ##]|Ci$"VIu!ɱ6'y"rR.+M~b,{ -1"OuۦO'כ0CKԘll2߷"caes k0P‘.ˬgCH\L ج-IkMs w)6d1ECr,g"Ksf$ D?!tN\F rcI$X ƈJV3ߨW;̜F+prVAoM!2~T|W; =YҺ2P2-;'vdQ >ʔcGi4L b~{xZ )򼦂@;YE.4aA޸[Lְ)83 E_QT)sBk>~%D [ە9# -Hǣ%͸OL|Mi@͍S~"MC;:jt)'̫[&dXiqKi7 cla|mFnGY6v+NnU%jBP |K+4ZJ{Š7T|Z(Cb 4"8ʬ'?ޅރD %| Kg/2rQI8SI]o ÆjY5# ;a|md|GCp=ͥ4+唧@uA3tyFwI,9i@ \+oCTuxWQo˜e ~r邌e2W:'VZ{,Yn2E*&M|aˬ_Pɗ:8~im X;I -J"6t쎠2 - -"T`DCNchG?1#yOY;wۜ:h(HoEVt<S$hB5]#ч ͨ}FZVԢ+'Nz䆸 )zhukJz>\*=?M\UEpʤpE=^>i('b%ah

WwDƂHmV??~܌,˫wy⹐]'xrY;XaZ{pYšIc+Qg3}gO!6F\Z~y}XiaO)rBJxKu:`us]~r;d )H|^YC1*x^PpfPХՈH%R| -9]F<.'n&L臟U'j?U'MD9nko7U63XI)s9/(Qs.?#r4V-LrgerUsX۸2XK$,အd],0 "VR.88WO?G^^Ī$%`q_>29W:s0]*8+X8%PWbX͍m5ܞ&IgWTdHW&nrࠅIri>)N>\_ԇģ}& -z`{sHGw?~q]fKSaU!QS0LwYܫYm ^YGDAiw\iZ+^X/}*nN[F RQS<;H.( )+wĘO+õ*y}?z U^]p;&q^el\95F*FڿІeA9bM޽MvA$Uo_=Sb;w N$SqEW7ߩGA3_.t5.V.y.T- 5LҵCLG:$Sc$\Z$W.yѭ@s].H q~b[@ 2>R5sVgHQqwʵݘw3ۣsf=4l -<&?G,CD~655$$\a3_mܻdV`Rz-r$ P 9]EwO4 8Ӕ!z|`I걁nq5:Oe֧&^6fdg>+#{Zf?f7 bd@PM8O/(b=I' i9ku2GGQP| *onQp -XЊ;pgŽ.EϏ˚%&2'7W @[54Ƴ"@ú 0>OBP`PlEj_#`s4 ‚Q?cj漌^tvc^ ->'?:FdPeMTe%SnStk+B50Sd1pج -mF+2 1gZƲ,)3Kv/HCF mcd8'ta48 _o='5zү-nrv~"{!yA3GەR7IUa[ݱkv:plj,4E]8^՘-FfjdF'ݯcF<mH຿tIe֛6]BNV^7j~Q%Bi$Ń0cCn18Vpe纚Qq\)QtxJAls#&@Ȯ]Q<L+< ss?ap~OQ -AS]륧KKLi%p&`{ L  eg6-uz>H-L2`duGԤN4;iL'_dfY$nT;4*ZHYjrYHtw)^^tHTʥ`jH,bG`PLRJf&M%(Wy4qC w"~ > yb:o2Ǹ/@ApQq`gcr.!Aof*~u }U*[5Ҏzș !p@'V=9f5 QVe\yHNLl1@BvwA/طp O;ܳ P^(Գ,&VٓzƍucAf2԰WbimR[㟓Ծ`+' xȍGmK'x\"SmB&Xatmt46_ߏ47 ,̶vzO~>nVԲ?u@7Ѿ#O*~=2R]z1V/.u8BP~o&(y PR^)0N#m}onA>їD~3sYE@>s6  alxtAPryG8vQ;ūG%_Nmz(0OcKx Zz3e9s knWq?Q&@P-$JCkU3 $$#|̫xjeHܠEYJܞ늓ϛ؁+zrbZ˿Kk]LB.qTd G+ORo~O+3+FqWDXf<sNb8˾Xa2%GX[n@WuC);: l+(_!9ĞB4HPv3IN..6$ H'*z$91rjs3oը|6Kw΋|G]vPȥJ4ݏ!'Ww٘<$HoC6h-xF'd $-"bo1VhTEm-FM1UIJ -xQJm f%3nbVF Grw>M-σz]u7WPW-<[ -hV>l=rF9BO_(S#;LEyD%}h`$>K;~>](9>B#sȄ> -v-;gD6 +] Eul(Rͪ~("\VY&DPO3Phz hI5&Z&ߍ=iyv:(\e5cR6 VIKQX/@q]ph5!8΅w Uk'X,{z֬Ɣ9K+7 4hi=OSgJOgיy7kW -v^aBXd' 1 - jy)ݫ^?QhBr_7~5@YK8cUضh98pƇu&%I2[lzHHT .p(hKtd6wH2w|cubdp2kLLO9jzN/eq: ĄF'!tˆ,|(i:O$B7gJϠ$c^mHhgi}I [U_87=o8yamf{ ጎ"ot,~4w/E)5@G$ I~ -s4埱T򓒢fUCn1W"io!v|/dLعa:tiWR?2/qX)aɍǝw^K.]?m f~~cf熾$}=|n)-lq (jcU\JXӤǠ j-ْ6`#YaOߗ. (M0'eIZ^#mooč3~ x< -Y{Y#5Ӷd4"AV[ݸ^0+ӵB뗫\Ml51ׄÂEb7C Kș<6?s<zEۮՔo6 r-ɞA%57wJ/DXX#_\`?Odb柣GzsnQOxLN\/KskS] -&:HHf#+"I #&.QJdRzX -]M=Vب>aˢi vW zeD5 lA4ޢ0YqG2IGjU=U/q`2vI K,PTMAAS?54ǿ_Q-ąl/ V!]M#3Þ׏H&j^YiDַԊ/@[ GE!#2]b Wi*Cjs.;9s@bOk$ƯvjA V1'iR.ax lj1@ÑOͦ"3aRA` =+s&ʆ5Ii.0AJ +uyul]LoX\hcXA -%Ysu{G5 *gEܢD I7-d~Umszf%! !~ @!Ev<0 u)4:)f?.!ۆi3N e21}$Sȃ݊ B ׹S1?vzO]%"Q7ڔSWf&WHKxrJx vxU^ = - ៧=?wx< A2^5ʣb[gCuE qTPyN-h>Iv/ *J%ɤaeo1dqB=Tŭw'őM:8cu+"LsE&JStži^r7덡ΞF{ezo 5;Kv_ qm[E -F8A'kv!)I$]Ơagl7XT+滮 O/sAj&цs˭^cU nd,E:vK&q{y-Ͱ_X!Pܟ;ߵ&L{6+>00ZHe\(0hZ3 -2l\& ? 8n y"?'.hl^@ F - К̶sZ]ʙ*y갎 3J:e~V9b^GGzC 8YlfO!8;) gu*kkztdÍHϴms]!?QoU {;b7@ ?Xx&Q*-RVv~B9)2r%kEW9k ,(w.k853q,/XM`7v8#se#8Om3661*4սEx(K yWJK8G88b/soI2Wi'Ɖv&^gb /?G7ZјT/Qz#jxl!>aWCIG -m742އ2GlQ. -hN##@~mޱ`[`y6Dh71 ;PO15=63I6`xc"dWp]e]s ={dsVzH5JQ@%tFi؆j^׃gWNK't?ʑҾ߽N_~tSEq6JBKuvƏ#D'9U)Rd>f}vʄ]a?]?2'Y壝"6[.0V@2 - ;Oa5cg\Ao -q#y%9kQ-R9Ѵx3t@^5D,dȂd>)) ?@ ꋿE.!ܪ Nց'č!BMMxxD#wO -"ט}"Sړďv`bHQQ9U!#'ʙee$6WHuCR*H'ga Bt#E)_i[C335sRw|Q0x-8$(qJQ89 ++Syt$y?_4+ǚ?*K͏YE[N"r:Ab[< ibcRk99s~.y}4XAhyJ,sN4de*֛kj̒4k~>nӪ*8pn7ζ]z"7wGE>%CGU闣$B#!dtV(M]_VCV]9ldX@16+ gkg uZ;9z]<[maÎ 8]+y{Y/:?LַJ _ן2:!?C(1eXCI~/ŒX$3eXMO*Ff]o"X?XP5 p2 QհPM IhFUU]a_kWښqΞ<ɇsN ky: -I-Uǡkyg<]7UZ;0O^mAd75!`ocKkؤ^p/)z5C"eҁoN"|KM1{N笎 !ɍΦțqC:oRm@HFf7"O9Kp>1ǟ }cJ F#s QԔ /"Jİ Osj. .oWwk=euHߒ1{lפ2l@xxa_}c۽E"Snpt&%^NNWɔ-vxWt4|S6!tǦ{TΒQƅ4R:>3[Ujŭ^B8qE.ԇR0y{G'⑵.)-Zǽ.>y|rUe?z$f%?c/xݯtԓԐV|DSu/`ϝ|遜0fa9WWW, L@|+Y3e}"b?LAseV?mhXWqx%&5ί9AF*6l'(ܠYY&ij:e>CsVVz>xC -ZlJ\`$eߟn8Ҁ`syC.f:nϰ0M2V\\5A+LJ p5-4P//V5/C㠝&0ctQҀݨG PUXo_䡹;2Tȯ]DoOt#A>Lw$8 C>%$|4>pt2ؑ&͉g 9؋W8Xs`>i?NX7ZE. 
-H½bJ?:%2*Ku[(e܇%RĴTZEi -f#?4^> -stream -xڭxUT]ۖ-n۸ @pwww ܂C8ު[z?c>9P( 9K8A L<k{W{n9zys3kBd34b3777<@ -VWѤ/_&|xX[7nv@Q2XXۙD$Ԓ -Is@ gmjt1X88:ͬ*ͅK` pq47p305wKhloqvX:A=9vf%!p;!Gg  $&/M_&e'AA Rfm(&IeWVg_4dn~񘝟jQ&?>P \CQc<L@?q::Ȭ2$5g[Tmeb]~Oҏ(Sv9tj\$kH")~VO}mA>r23B˰VֶRGZGCz%y񳰧֙+ -dDwq_@igy\i^NJ7v,LQgqyQ_md@&&ьiU`=~C_={f =myugYIΏ0`hNyK=Ӌ8?a-:;@ -If{`+|ךHzFEz Cd=-EMz -5h]-3Еkbiv~Q0;ЖWՇ_⮱*+';R킁@knZ+Nh^EZ`3!n}N_ʳe67ǃo,C޼z=C_:W'PBi"\?gR`uxG'.';fۂS!`bQpHumQFq⻆{KzUvJ6*!$kF0& ܹ5g1=PsWBh.JYݓlikȅZ{DTO\rKIE= YTavAR塺oKo?qz |n5ҙ ӭ@2%eVyM -L~!2N'ڏ?Jt#NH$..,QBq(dhA2Ef3uFRj2i ژx$ .}h!fjר) {7d~Ę%̓ɱ:-}>y>IJJ$myN/_Z0=f_GTR'?_|eK 9- M)F=Sa|Uk -ӐZ7sY؅{M&l3\Ci*3Br[NnCDw%۟ :aպ?$>:@\9LTާ⺭ }dkHyl{;L+Wa!6. J^E?U,I%*0@+smagPv'&dɣPbʌ0,&x99H~zDm ^q`YF)_%;/ABM j8Poknì"IIJ X84B 5Y ENAe,0w-r+:J -\Hڌ3B[Y62kEoVw 3ԋU -ؗ HzxѨ@vES{|/vig[ұ9w0δlzwe͹s6穚g E]S?l7i>Jmձ=6Agk/ <\u-Nl#9 e(ciN"| -/+6Ma⫰dL ->#(Jݛv*%9{Ķqw`%wt\Fx2> VE:-kMiW$\FPeuU&D[V'ѩV]p.g\fǥN.WAzX&du8ߏx sFcs\9Z-jOi%]dGRWMܘ{ p0 -m82E_h -ħU59 SYy? { I$E+6(4pc0L<^w]}!1,3EYuBGsG.2) Ŭ-uc4kܢ> c^ юc[nVQݩjǚǯ鴇<aDZE9.ܴl3*28*7~yk~E׋gq_h N'U4ȩJ6m8(T1BEZVG -L8ot?%aF~2-?}|R!l4 -N-oςlMni@=rz(a]*ሜe[*TL|#sK[>TD&КOXN0c/G@ek2YǃPY|pQ$`<mØxqu(|yydV&SY -T½HE*x̵žxzzz@$k(C֌u" vI[z ^C+Y̮Fvi%u }^琢=1n-A kWM&dgCQڥsb?\-Xp*x`Dˡo @v>WFJ DXH''ק]ŬFT;zҥi/<;n>"1#D0'C쯭|nRNa"qPo˳83!"ˤtKy7c݄L+˞`.ejWwXۉdz}odVteI)B%{YcҜ)yYT~.ZTź+>LYw{#uiAm.iehm4'F*)4E -r4JtA3BzG[#}gWAxQFOxvj==ݰxٸQ\zRgfρi5gq Dr 1t9\;XE\N!Q, 5Rg66P42X l7 :0^庥k Gճ$am8-Lw"B, -- Ms&5Ξ&/Y2t|-@m쮿M<(;ŀkxgJ=qw{pxhɅ^Bg6QȤbo -k|Ugj;ڪeD8@ -qJ ǙϚU+XYV9W9dWOCO9Y7@م4XY y -ҟ3lVg>:5qu)-]ԭ@A+} ]45lMM{fy%B8I8Xf@oi";19xok#~NMH;sk:&0qE>l,/N<.W\yuXL -(w;eW &XP":OO,nNAA\YL5rz/>g\0uU3g2 -DaVɕPfo 5)#$IS ~A\432,l{r*s& -y0`V:]<:!ut\&1cq;Gꚕ蛤dНS'j?Wy×֢bH꒧R\  w)ܔUw=4S1+ɇE^ / !mo҉L ՁoA>q@ r55Z$g's8}0I-$?PpBŃq"ȉ>j7> Ĝ4Nx&$GoyˁZm8cHwiEr,ofJOӥ|&.cfE`|5zI;H[ywyJP)1r=Bzz -?=U}*XnHhED[3ېk P"Us~ H*VI =8$ ,9t#c,3)sSÑoηoݭ80Z$B }<2 -6#a{ fDc9!RRͧ\yg3R~$u7:z h&js?qf[џR>Au8m%K[Br -~ץCλ=;[8O4=م?y %?u!k_vŷd>+NlX) hWgu',y -QOh >qGؼFBPmr`.WbiO~=0wCA?R-d4D'<%Df.")A-FK&]KM;QMF3:wHJ6^MxBR !nbUl㣮K#2NJkDN* ֎XA@ ʌJO{Nn>W0\F/cN -ëꔔ9YU 5ymTJ34fM&Jv3'P>ޠ1L5-9UKZ,\V5^w]XńJo]JL*\1()-3 RR!≎#<66U[=Oӥ'㵆 ?$*5î^HR!ͺnsea5t6]VM yRhawCGA%2~TnM*dEszӗr ]J%H Dz*J:!k;sO43P߹{ͽY,wz -%b5-r4QB -7)[IjSu -={n>QOlY=ʵQ'B`/^a$$WcɥH:Zx'J -5>6N48=1jESP魄yY`8{Rld7pri(5?iѵ.w*&'vZc<8{I~lbw঎u[opsʎgM d;`8i|ulu9@"џvUBbf3ݴZA~?˚OU78DFg2BKVe,vV׆!` 峐UwH,= n9BS1θ| ,ںrXÊJ]!PnIAWDyb;:kfrHNGXtCEmsV8xcn&eSpq۳gEގr|GճW}}oȓ.Fݷ (.0ЛzVwhe5j@DUv}JéEMkF+|\VG6M g%RQQh9`ZM3zW FV-C -p׶$ ;Ḏb٠2H/4Yc]<0f-qX}ְ+|^hC@gTF|&1|/c«b%j*wba_-⪱ @` y/U5+?GxKepZ7CK{ɃS$AyԏJC%}: ]ARrǶ&S0_tIP}V>XMJAݥ/Umť!)]BdoX`Q/x`]R0 nq`m(7TYը9w㞘 ޷8z0?o8e)N+lՃ74Q}B}]3a6Aѕ&3 p NC",3EFهSƴO,äͩt 3j38v.nب2I(Z+Ds[\SPnZ:7VM^ K.$7&o rё!^w>IBU35W?!<_IdxmJQDKQBfiMX+!e:τHWFDG"۵/s8(hA{mTb!TUn83n"drn4Z;vW!!a\O}ʷ?5/xɚ.D>QNњ(9bB| [L6\6{.x 0DA҆T{b"wg8n_8&o|:jO4# -vh4>1!%dLAGl>ߴȔJLMQQf=?cvdo5*L&']-HeOXq W؞B| - (8#vg[L {t vTecTaF9ϭk|Tͩ74Rtl1J\w&Og+o(&r;}xt<&sfszebk<>?OMLt2yVMPsԝ18"+5\:-KJov溚gQvO;y:l -cFgg -r+1D9tzʃXy|?Qqpn-] %#T(*mCk3OwHsa9@eZɚN[PuM u<^MdS3Ek%6YUUY{w(;D -41Ipv u [6ɋ$ -Q툦v!<ͻcҢr7lfﰮ0z/fj`ش26/Ivnd!T8&4"7q`a+N>Yۍ#B -ϱO_0Q=}@P=`(&s2m&/#L#37s򭡛1Y\$jBdzLQ b?{&hC*/b|3:(vO*̖`"oݖgm?&mTu6GZmq3Y4] L K9XQFs;&;?K/b^rl ؘ&+ MCHN{ [A_ RƉEbt x~c3lyuox~)\D?\x|k4|qewjx 1M̿?gL8+q 9oSJp|A! 
- DD9Kq`ժy$rw8 }L4;&6#rIa3Eğe*htWpqR9f&lPs2T|ѱ\F])0YmHVUC Wjc,L8*tCb+FOv1:DF(DPT)#@tlT%,}weWNn?[aG9ΤNQ׷Bj;~a8܊ GA=|!0 廳-i>|yAޘu2n(F[ auDe˳!X)SC㧐>E(Z%Wү>@6_j텊I1|0x;Vq7}™qDQ~fd $\ՙt}\G?}WvƇo9j8FFEJ>XPZ-:DqҤ9WFd%Qr{~WW[a՗V~$O.%F~= U}Uy1]Q[jjw3q+IfGyhS]r֣ S:qx ŬL/ uZTДH]9>zp2ⒻcQIR_4 b {(Hq&oį I/*Ehd*q㟩gNsJ r ->`7H!2w ' awBue)PI4t) -k!{%\-mnŃB)\8uq-F2~KL/'zdΖ- SI՝V/) |Ιy u"Uou,6? se2i0`r/&cVzNxQ4;<(J{ax/_5#]b旅a?'0lխmV/z> -stream -xڬct]&vvl۶m6+۶m[۶Sy>8_Ǹ5q5XdD*tBftLy+;c7e;y.Y:e3 7_9 7@ jf`f0qqqD,,]jT44%`.Vfvf!Ufs+[3RB^ afold Pt32ZٻQ>LM)ͅ/ hfb-7`ld+{[7+7wWB-):8[9FUwFv84u0qպYٻ\<]el0rq5/տps hfFΦf..abӝ?Uoh/oYgV.f0LcmaeϬHٛ;-7usDPM `jf 7$ew$? O#s?]ߡlmw 13Y?cdgefN&j-BagET`ndgٛ9Zٛ_m1127?$[efo+K׿gPWӒ_,*WU/ǿjL'@M_@Lu3ru_?jdowS/Zfm':=+õ3odJTg t$ıjqa`Co@z.Wg]} vsǯCi꣱> [T:2 - -jYR_D?C+W -․n5aWN[W/C -7v#y\c!Kn5rGK=>t/ -Ξ 4ɔAyђ*揕gMѾBX2njՉ:ݭNdO<߰b쓻 NzF ,ףH?0X0ڞIjB7x:t_ "@Wd?7!E>@@<6}ݳjI }R1Kte8nκCdo[یkNw+5KVx{l?f+M:ɒiDHTiH6ְ!-7lpQGzvT"hzDE|*x!Fd6U2W1͗_kzNT˯y=fgq1`݋۰﷦T*6Bb[/= }m;=߯[ ^oAXKחǨr83e" ؆^\8ٖ%R/S|>}K_)J5g쏞p -P,E,?FFe.O@"%۟e4䈕OΪ X3T;~lUI5jL<_qYT}.,L~ xs8Q,sl4lIkFYӑ"vG<}2zs@!/>,r['9ŭ -O YDsW8n;}Epڤ>ea{n$yJߑ#Xg\A1 *u6ȍJRԔrUN}|9D_y rv =!hZчWBf&ұުdk7?dv -ˈ oRN莛/w6!lRk/ʓy>ƄDC+$/+yFDt`'D\_P]\' $QXFDr' )'i+lو~mAE K^Cy{֐/?&:i@W%KnQr8 )x@)35 WA&ʠQ|+=XZ.1y…ga戙Omj$~3Չ#AL"X s4jV\3qK ;S(Ex9_3+s-nSUh @fHAIJf|h2p-Xsၕ谮 f!#nԠ -k2 -+XX뱪p+C֝Nj"oTtcΨBi_,vι0ʜ -ɚay2{MjzM?ȷMAb_9F,Rrc&C0|)?iwG|֊-U4_m O&E\V>hQl|?$#)jUsCe㿤)i6o:t5Hk|^E3:x^0Ïツjn!n)4z Ҕզ6eMǒ *j -awXy$#seUavP(-u=qKML=2LϪIֱ\!5<{,+:-vzwV&~oXd.f.4868z9(Nm,٩X|p .LFUpy_ AJRx;㈌!n Rj/њX??VM]그r~f05Id=Cl{1G\!Xlp ˸]q e\#_f,Y@.jzT:kScN!^?Xng/4mg}23"OS7?hY] +D /6np82@4cxk'%0]'铠[TK|4`zߛtk+( EzE t]K /-;u7C#SBkHK;ߥ?9Y![V>ozѿzxzIo^jHF֍y!: -8Aۣ tԌ1aXhoK'냧.DR0zGD]:٬ Rx𕳻/ 2]tC} nuhc&cDz/=el+V6Ij[ԝ!.{ z]0>R/qH.,]4[2ɢ -lxFO! w/6xЕyl-fWڧ(ve6\_Z@ rA͈jo1Yݬ+0sM=p*p)*H#u3FAO\ Qyx"@/k>>,"akxڟ#*֫X-]X[!֕;4a1 -c $[϶zqXE{ -}usVC6^\ib⪔<N=.!M3U߂hdǢ2MqY3xR_.^Ty?;[P^8J1=} 4DhKN_cdRyae%C/M}kL[x4vp*8Iz*6<;1_/YG9Gy,O.JsB y1~s23[Bc yX|U6*5bb+*1KA[k'sm|xKF@h!DБ&^^qQ/CݾeoT@j / apxGm$}Z:}ffra/|])kX/?ؤG5{[, ]ǍX7FǗVjvzOd%nn?ަG;KZ>R4hލ.*S_v6!T_?M|*TEXh -11cUi5]PYlLb3@9[5:4j~٠wۻ}~_nW VbE !x\_?nI|]17"S iwKqEڒW*mʙP -wJBfnDMƹXAEF;^x:T(T:J*PkSr>ϡ֥ah[kNĝbd0FiG|<%XTGR{fa&]4pQ(IACk5!y0^U׺ UO"/ۤq7eq~)mv|:xMUp{8{_*reA_O߂ӐblR{O/FWK_rLN++coScVxzckCqۇm:N~P6v2:~}n>pLOby&:0Q]W'#˙,&n#`Xʰ(3Qa̤QGXWfSlPʔ96S^)%"ԑ-F#^ e\\=[d/27Һ5&BfάXelp]jmSRxʸ6ga#~ZvGfV$a$.S1wPp8Md[՟76eQbkdbs5+5;L.{i4[A<XPȶ${ $M!"z -bx\*@[u-+(HMz<.ፘ($Q:Z|ySP 6>̗<(OrG -fuD\:P xgX6C8GzG< 1!dс,= øێ\0MT({ }h;pE\G8O SaK%`E4Ĵnr0Ldm'ai/x9t%tn-u؈.UGN9k`k&TD "vI8gBZrH0Fר|.Qٗ:0`7^T/҇yhȺa:1~~+3}JDr7'gBcK~b&^ }qYC^)'nL .k\^_JO-a74~hU|jml\YQ6;2+<qC.; -"B֋IFKԈݪW4[HJR`gJQ 3ي7!wU&cglZVK 0$*EW{oجڣD';cR}ɅطѠZ Aٳ)=HFxRKեN> ۘX2Pl@([U-vPu`$`"+MISr45PJqђw[ &^Tqc_0C'Fv~r|(Zu>;fx!AK@"XE=av|GcF$%d<0oI ƘM;HCth8A]y!dGܬL<7(G/j{ -beVHaUռgtho|댎2<耇?j\fi`rX3$bfa*i"Qyr7shh[|wD+CC/rbDZ;&4MRL˸jj -k1O\mĸZl&qۓ;2Acb v= -vaى~@`FC[i_Y -f 3^T質e h^$9o=8E#-cV,7TIU"UAf‘Mv37~ga/Vg$;;Lz Հn³xmR`mb-Z^&>7tp^E"OVP"t6Fu i3el Y^;aak/{YM(݊CZ D{y4 |z{ p ,면%w=z|@ʎ%0{nEwuKEEWiT_Ңϑ"F5,d ؎C#.Գ3H&ȡl!=7{QүE!/T(V72tQ ы]7ĪyMq57`Bkb>P߄OͥcٿqO0HI:F%q:C/|E%ppUOmg=UxSG qZ[3בH %z;ݜ Zũڍs4[lpˏwa̕V4B0/`-P>kxbJLfFM]U"&N| 16f,➗ 2+ v~4aDɏ1eb6VB7> u(Yݞ)T#+ E}[FLJp-,0'55RS`']gJb[_shDyQGMP$"dZ9nA'Je0UkYSZUidiDiH%79'!&GY vJI{39֏~c {ll n'IE4Ew4|usO:uP).fLu?_&sYҞ6*ȓ w!~{wks`\*zDwāc1Ob NBw:i@y:L Kp3aۡ7Tq\0tb ؃ 3eu,|N9 T{^W/ݰ"y(vrӜ(9pbrJܼ.zFg,C2* -ڳ9hMz\@$#.:CAdY9āNMB9'ez_<9!ք?ݪ]etۆ/l|SRt,SL4MVi|f>E,5|x.Pt#\N22pV8ߪ 8Wq^dt)gO/J"Vui>2strp C4bh ":EkGliՇz1-|616:tIkOSu(ȑ׵79D-i@[ -[U1N;~ه+RaG{D- 
s/_wL.y$7(wbu*=(avROF{]G/e%nfv&a9slH41.&3*%|cnK?OЋ~}r\U~3<EF:sY#=mߎh,ُ9ry$bp2Z*6H&thoG<'7#H1A'7,fyXq#5vwv(Hz:Y#!CYl#2}twkO:o,-ɟED,/.$emo9!Ors`i1/ϣ?8h!M3 -fɁ09pZ#reٯ$K -V8{P -1+G`!L]'WšHc'UiͶ69e3@6BtJ٣Mc $xW-kliV)ځ~Wϣ p*ii11 `o0ʗu]tD5t)KQ|[)ԚٙO>,6ѮH?ʯovbv#l٥{1V^వɁhreܺcF<`TPKޙjA󟳘|4ELӬmFtD9䞆a};Ue@f}$]JȌ&1GOW P3CNkks+2j,t#"˅HcbkB,.B ]R035$'"?qJ #(8N&v$*6PF,(OpgPo!ODt 2mδ-u_w07V׶X*憎P3$L!5zk[;C" }Eav&Fn]["ÔҠm@tUPhIӍm2Pdc@*ebZ= j!ٻWEGAE!3n r$SP텟7=BM cT}uǙNF(雎_RF@ŠEtȰsB'*5^!?+Tt@d$(õ#Nz6(.e8e?Fef,Q_#PRħOE;nt9,>Ѐ\ejI"7J~>-Qs]#+j1'ȷYqau-wjUwϥH/FJeDz 6>Æ\1ʂ4_O 2.C4xߓL [LɟJ -n3^'fN RDp= ɖΘ7ۺtOWXРmc`[ XnFjI(B!M]~mE|vgXڳ@p*9(Awmpޅ/W"tL! -O]K6 -$%'wݝNH`q9H}X5\d5n"|{K^{mȀΟؗt);(_Eh#(ʹR⏽ ,yiX흊U ,622 -NG /~F Cs&dXx-AqVm .oKL3ǎX@D~pCS].Vȵ`ج!iYհv9G\t1Əm:_пT*% -}{wiVrjQ*~+UX[:jg194>"M]iAG/m~ٝYt!pY+>Ks,*6?`wTgìn 苃*űXyK 10Z[q?v4js YӚ,lT9}vս҃Hw8#ti=N%J8o5CpLF׆q[$Qa -v,!AU-:hJm:tc.Fl o:_UfHcc{ -U '2rE|FHZʠvf`SͲaUb'2=W$N㋡ 8(scQP[i;|FI QaIga!>/QJ\(4| ,g]/Z#_8(KUnX|H" 8 us]7E@ՠ+?cUb+Qm-2Դ / fus;$?B?<*;(0sc }#k/߀~͚2fЗ<2lH+tNѶfWIEʘؠ h<] M;ѺEq"syX b8 x] 8`ӝ= a -B'Ƣ;~.8uV}I1I -|?tuȀ.}xuʫS I_j>%E<÷%s;&ar9CeA;Wu'C;Ө}3ۀh[r72i005P3,|1'Z!)cFnJ@R3uGQ3s` {b]b;E&G9/Y08}Lp!CɡZelbEX|JbjWFĚص[%cd)c*gM~}o=.kp84 RəR4w`V6iOH1h>RoقHX ςn5sS -ÑG^ !*w[BviLˠ!9F(SՏywp Ņޮń HHV; 84yb a'X8;',gC߈ _=wgUr -dmzW AώFA7qrf"o3hokIr&f~lo`H9z6h/f>5{ waɛ2Ǵ -F/D1}<ܩyMrBi?`i׀ -mVݨƳ/G ,\W;'\s+>ݭ 'U8俇<Y%(B!t};qn?q]zvbv)z'fT3]谱}9Ӥ4]N{ jhbu'xG8^+@Oaout,O2DTV嗡r=T3b^!#}q5x)F%jAϺb{E^~F@Pw[Qdz尭" " ,c"J·\'4s"YHوQ瘿Qxs"OEd VQ̸b )?-դ YNcPt̂v^"CV@)+?q5e7jQFkӻ0 -4.b.tYO#+ߨ='pzi4Ew!WUOvcg!IQU"[>U]<Ԇ6z&ۖ~{ElG.vۥ]L2TzBݑzG`-Xt7R0(ak,%Ba` D\Cb;!OAjpSW@Y\9t*98^b޽@a^IY#%͆Rߡ --&0MLY_jkub~1EjW}'SCeՁwf6^-$RWwa"nْ|Unnl.XV9+" h&YWd0;[0)y@J0Q%I Ƙo}}XsJRf*XsaQNv -92Ʋ#4+hOpZ3OltºEkB+*l0.pM ߟ*aiK$X|y~Hrf"h4vQ2̠G\(:Ype-FurZbMiÑmtB 0 [Y{Kt.8Q_ (gWB}Ozlp=^\FC5E[t?䊨 `9``rʼnv |}bcxmibs+Yϊkvߗ5m-/0Mx~O:s/y93y-iRF}j.$헴"M n(&KpP[bҤp W&8Xuf;񮫳Dv Bߨ>$6PePCMި`Z|ȓQk{[|vD[഑ o,γ̐'(-Į@)/iȰ;qnuyN=Dc0vٯ - V5x^ƜHhho [gGD ~)ȣKƙܚ.LIM5UqcO2*p)t5Z-j>, Jh(OTz 3'Z)j:SE쥃E݈kpi])>GzL|kPK9g$=K8\w='f7yTͷ0!9Do ^(~ℌN؛vX5C ]Q&U{"yzQ[S=pӱ_! -'}dt''/H B.0J1U)W%%]q)JZΌV׳{!A$T2_.iU[}vxfR'ȌwJ4ߧ;|‡_LB -~qi7unzÛ1Ť~EɹH!{Iz70 Y 8E+A$0!HJd34zdz-qW4aw=7| d:t_?AޓCOO.An(j4̛Ԇ%k8P11ڍzH%3C;rC/U%dGPRl3bƶRz/ mnF9/#y0X=9׸ȏNI7j}Mcn׵fAerF{D~,)!vC>IQF૬R>pgFyknLu*ቦJǝDHX֘UG'-"t&wd옷z^C*գQq`2 sB1?|ZLNJh}PɛWЍ>g~PQxMiGk"_owpim T;Hy}*6'X4r/)-<9r3Bnu(*;yL1bZ>֚-> -stream -xڭsxf_&fmc;yb۶͎mc[}̙|3sګUwUVmrbeza;#3=3@IFގ[^ h"lh Ñ: -l <u @ h `a0sssÑD=-̝TJԴtig ?Nf@k;{_kGe lZX?4%TI-o -.FY c`j`lgkbOiN '{_71`tpr p9:=g;? +!{G;lb윜-*I;OgsCb;YvwSҿ4QgC ['3XF@-f@p:XtNVǿ`6ecfol3 [8Eo`@?=C7 C;[k QoH }"7H"oWKz+4 gX:8WC kݭw!lkW!zfv-$,܁& -SC뿇/ W_'&[[!nQVCTBL0mYoW8x3ZC%"bgгp2#7 O1Z ?WF>Rv65zlWM_tí.Ye;e/mP)*]i^8xj`˚'xOCJWII{W~u v;WE8sDGZAhkZى"rpldxo6'78Օ =Åߖ$Nsŏ.pJG$թaռj\J/48e yjFCJ -#kƹay :a&򣈋bOd뚒]XEN{يtwSVl9 -T8pED_vx0yKPQX-8lp^/Kb<pz}ςԵ7@0v#rjH.SWڿy)Sd"`quhiρ.6KtQFFNٌ /3Hyyȹ!r-PWFkR^yԈL VygOM- rE}-Fʣ>'=n%NԄ'GS SH[ܮOYP%讖w9PŢYK3#nhZ>O8ʇG40oV׳ &r܆iN)wsAÞ06\dJP -SVd٪&s9 Ku+4vbx?B7&&Ǣ͕:-⿕rj@Ivbwfu6A'-idgiS*a`*YJ:ro.&FB#GOl_ZիHdW|4 #:W}XA;Д5s܆ZFG:ٸSM)3M_Kl 6. 
~oڏ2 &BZ:I+!4EG)M.Dg+l 2)_,7kML~8@mD!uZuHSr >HʏSeUdk8Bgtίn7i` icȺ2!VQij%5+L:'&\e) CcPgNPE5h}O̓V Rk=[V@-0R1m DZIF{r;q !%OSFceJ4HF{t$6,=xz[؇wlvhE#)f;uı\/J -0 Cһ] -Sg,+ipKLuq3;**>2(hKSFTR:T~|%Iݥl˪g4&~ rInuz'ϐ3=ádNl|ғmo-!"CGEŕ:ع8-Eʚ3d춐z?Ѧ|κyEWp[2~IGC~b*^RpϔdD/6"e"\xa3j0pTXE0t#iL]1ww){ciو V؎A[6wگ;Nb֏N1XU`XvBՒ7 L1!CuxEsZ"k -Y`tR̨eӗ ڸdҐdD0&|#2%Fz*{iۍι_vRx1j{ -t:_Ck:0>?i(GdOCV:W6YFKo+ՇmUkp_|[sE5 ^x uǮA1>) ;vVg͞lE0" -` n'#"c\_źyK`P/^QlR+NRw^ &lO|[(<*(x]aw13R!e?,7nT`2| J>m Kx9Fۗ0,q@f0܇~5$j*45#׫0R;EnPM]!Р) ~Dx,nw%ӕ"nV=OG@dK 1'1mKLqlow,D{&=*B/XA`*(;9th5BuкhZڦ`ݧ35?5%Io1f|,4elRĠJetkK2<"Ry"3\ gRf߯9g_[NRzWVL0 ܼ3 %Xy X̢JҊ@z^GmFB)]Bu]cx6커pw2VCU^av4.tr8} ³#y5c!S=zcvwr_H{:$i_TB9u8Ȅck̑>tp^n -T=T@wIqj ͔v\Y1֖44\H |;YbLJ$ .)?$FJc$Ԧ;DAˬSAҤkn/^ ޏ$#%;9_"mשk=1pa@ %,cjɵ4CVUw vC9ac -;lYɁg̴]g,gZ#eJ ,drھn5?ɋt}|[<, R1}V]\՛#дV1Fah7eqGڳU|uGKtt٤рR_Z 2vA3dԄ&9Б$9ƏX-#RVT['bC?s $#O A*iCzeGnh0ß7Z*$w"rj6(4N HR/yYaHp4/:#yNB}[*3 `L -#,Bqҭn5'FGW[.;w3ΆҫYolj8Sx1%rI4H_tCg]UQ׀&h۶sh\՝ 8ˊinmZ7x>Ħh.%?fnaq"94ou$;Hi: L(j2`5}Wn BMϟpYڔ+s@Q[Ctobك.yփcpG\g5PfƮ l=-Iv@1R PW=vQ~=z9*Gjj5yC+"L8Erp3%{+MXA%b* `uH_~K}3UTwA&u+!;BrfRk?2ؔ^"E'n4sbDaM.FG<b+ -nsD&|$o_hzm KlXZg |sLN1gZ\G)+00~ЫJҼO֍U-/d`X[;Q|m. $DYqnK+z؜6,U/\ f*$ LcXaY)\u&fGJ!Pm|tOW1p|A6H#),zVyɶ7 N?.J^Od3BoS U~T c:wv6!\%WkДEι !JjJ\]I'VFXBHIW7w܈uffP4/QK_sg/nQȽ"AV[ ]fߞsUmWr󗍟ҿ.BA:-c(N~eCF}Y#xfrf5} [t˯FiYB{}u1<1<A3-tY]قλ)"h_pj7/9˕n85:lsr;07Ә.jN󽸅a^w -n\@NOL9eF4ah[PMGJk~X&EןEzCRl76d.P [Jj"$DVS)JsĢj?l{POC-Τ^![TΑx\aF ޗg՟8xiP' 5w;]KcPc. 7g#0?ZIH[ҚZ#, eĪE6$(gF}2///hC^(tqo|Oc -? -u ÁަQx -+B&\`Gmt1]%is|$d=#ުa`ne=L2677!gب\4ȗOO063 o'ABe|crw.R{H!C6K>E0e!YƹnpÇS8K -+9*x՜W!wb?BRî~Ȥ|.  ]MUcD;0fB'i+y}1g&zxM)1:u-)QHKn p_TCLDž"jzܠZ2A\cY ̨a/7T5 -C"9W׺PR7\ q2?ϣh߹_$)FB5cwF\Q~Y?ڂ3#JK`ڃgYY{ϡT){jrmKyhTF{d?*tѶО-_ nu#:׾' 9sCIhTmiPFW'Gɐuy0xG*r}/R^/ ď_Q=g Ւ  x@=#7 E|m%=qPܠeJ~9⌿+4z46r@y4vRC66?݂|`}M@w{X Kg=4kwG r ")4[s9C"ВL R=jZhԯBQ=g).P`Y^2`4gHn|O!lʷ*N2H/; -<ոLC",C'GAoL·GVxWuɕn㌬XfmDzRW?{ac)p[AWNg9fo)="Hj2)}#R`KbRMzsAX͘GlV~nD P]FhGi_~fqww- F_ &L%i "sK=@4Vw)ǏW@=tL*(Xy~s6s=|D~}j}\/^iuQ ;MeJT`C$ 4N|JmE}|jgHN -6 $L dU0]P#U&AC/xr+Z-AŦ|A#uUd^!YcQK=ma"Sۥ ps/[͉jwb,%ޢS&z\D-lpo}'ɞ8t0:=H×i MYs9dS_+8 I# $i5t86Pht'No ~ZTvmt%JTwZf=58ݶ-d8Ne̗,rZ~vRu(J -;>U0e߼ KF!PMI1L&""m!wjw)t3mE6ykl{ԏL"m3*9]nkD3zUkI,]$V ";d k}õz]j͒x)I}CYy=9rC)d JgL{ucQ,W}g+ְKJ -{Wt;BXeJ]3|)-lYOɻ7 OKe6:#pn^E9P<ز#ܵ ◲c7FG){niUgda[gla [H1)R$ "`r3Yp8{X@OkAY4%RDvNTֱ0|*f  WvF 2]Y(2p2yX1oAIϗt2V%0쏳xP^x5W&;{\@q tET^S_G8#.^gl$ q GPT7`nBH* {ȣE["yC!+:BrT/Mwu\w=ᢉJ>bMc~2+0|j1QWL(ᬅA,6W##N:yw`1w"xp?I@rUѰ*IhyO: >.*[T!=&{y7r4k+)NoS4ހx~-[8Hs蚸}S? 
2T*Dk{ jHWZ)ӕw]C&oni~Q!+jY~OwV6UeGWz4?$hdCSjo+}xwsH?LҷӬi_Q .GI'd0\/5Hepbm)ttZkL c\P_^[3]E$[MƎﲏUbZHޱ[rKQBo> o7'p~M4~b "gH<dω` :p3 v)uJ((}B#]޽Q$Qͧl_'T50qp|4,M52#Inxcr[C` f3?귲'bNnzЩI"=.?>cwI]ޜ-7:I*WLyB-0d:$@1@Ҭժ% -(X_yrM tz]r; 'GlQӿO"!*Ҟ5xF'Ƨf~=.;8_¡NTMoMDŒe_DrY=N?rДc>~ 8ٍ ^>H _޸#B|ӃU墄m 3Omݹ.d0)6wL"җlն~ &aYIҗ ʻ5I]5VxtKL}&> Jx6'43ʙf39 pVL^fVW՚m UU=I3m;f[uNˉ.>$Ɔg@9,if^.C} nWEk̮iYʸe^LQ | ]7R5 xyxc}'S܉aXwoPI/DNnh/4{,:Gx!17p6#D!c!nlρHwU- oс\.,ΔD$9AWvqgZ< `[_LEz\=A)ي)bOT -sЌڣVwn?N-ax"~W]7S: ?bAq8呢RO=\.Yg"4/$$rc2%od7o|?zqgoQ* UAij"< !O$[ڗ'HwF.7Tw{Hَ+i4W3|: 4M}59`F[8 .+ i"L?k郒kr+ g *̓Mvᥫ(`i u,>]f!2Jm^{1NA'nK|an=s~P_Q-E"ㅢEf25#0mGx}FQ#ԆLU0y`۔ nRmI/\)?WGK~x|ILJ̇ħ_5+AFyNcD o=Auczg?[R'1%*=SoTOӁo$ȭD8_o1WU,I:6Ebv7q?/0;Wkf: İ ΞF,)"WYhcE_[u~~uOWk,myXj_X/뚟FY?NW,VGut7p10W?J-d'X\F| c+?^kJkϚY1?VvqRyﻒ*V4aNYח4ߏ%VYS-Pk5U^?#~ ݣ|5bD!wx#&]]Y&;WmuI_WPӛHYajW9e?Mc`XFO6@LFHE~ˈ8%жa<駨>Tb]6oXTR:ou޲=퍾¬MyUD1V(R0ŷoS *$`v`)0JI•G Ո`OqlOn9}WJJ_A~~92#PO=k'_m$_+Mi0}sy]G$hO!PcF=7+9϶jrόRPdezmy=m~e?+Ow -UJM:P[W1 f[NbW<|Dw@ɗ'IM+$Ô4{~Gu5Ba)/f(pϨO/*BU,@=N"=?϶9O\KΆt?Vx*M+udU :K -mAJ C@NAU)wUF^&LV/End4^+6]~=.,M;~ S;.[ -e,f \%0)@N -~%{˰:D]@Ҝ*CC&E -Lr|B*4X7 "u2k7iA'T3iknD.xeYP-"ċ;1Bjr ]}0ΩBBd2("E$uBH݇~O7" N~050?U÷endstream -endobj -229 0 obj -<< /Filter /FlateDecode /Length 696 >> -stream -xmTMo0Wx$ -! 8l[jWHL7IPV=M̼ su;Uٛ=w]yil;<[[j<=?׾+v`&ߴț<^*;~&Q>MS >_P{=s@dkx;`VY`s4JaQܡn.Uu9\Y6><ٴ.Z.4>Dӗ}~r:-d0VWk,8yLһʮӮђ[*mLr?q 5F8@=@)& 8Rx uD\j2HV0CzL] bctI g$`htы0\F0s jd< I6zg W qȐ+#k .bsrbmXK7ǵH7Gnb>&jؐu1VljOu$՟qWS/%1{\xB!K(hHTЖ枃Jρϯv=k2UKς_:~$/ ~E+7ˢ/ l(/} -+ZXukoԝE?ZKqendstream -endobj -230 0 obj -<< /Filter /FlateDecode /Length 739 >> -stream -xmUMo0WxvHUdCmU^!1H#x?gx]OTm$|͜s_Iss :L;<Sz==׾f`*_`ɫڟk3'iѴ}=M;7rfnj-eSӵOLg~8 )ok A8 $`I\3`Af<Z]! -xNky"7 _㓧q -H`nḱRONH=CpB:# =%888QA~!*zƜАT?!~> tw8y*sύ -}nFE>7*QύR>7G];~<6OIyktg>O:yұϓN|I/|yIg>O:y҅ϓ.}2 L> -stream -xmUMo:W5?$R. d9M eCkmCp;;w~>|3E_?O]5߶w]Occ]=~?}Oyh9%?۹׬B|Ɯ>);vw%g43>\ 6 -EJ78 1{~`W(-;]%=xe_,b+-O;q\L}UI--=BKE1p[! -Mߊyu>.N5K)Wb٬8i[_uʕMzQ)V(Txޢjy!Z2P="Zd0\ÃGR\).2*Шa!U,H`+j.5Nα@VK-x%3%AYӀzΚ>kP#5m0Woþj.ZT$X/)n)#Wo(oRZ $Kp4Z-b\1ܰJ P"GXQi/8k^Zq:Zs9dB )sL-7xJ`aɽ)f$1 -dъcCZC<73JgznHȰYɚTa,_-O87}KԴܗLloK+gJ.GZyVc48Wt]:P~`rZq.n1] S/Pu7Ue:?&?!d&1yHn5)yғBx#1ޞ]Go׏M?Xendstream -endobj -232 0 obj -<< /Filter /FlateDecode /Length 699 >> -stream -xmTn0CƆ@LE"h.R$Λ1iZ)Ayo7?^$ŝPIs77EW]=?:Wz==硫nMi%oR1I+ִ)Q;{W` 4vo)ZZq/7}P^kMݧ`tTshz+&TuSՑ @tvM{BM_ht>X]0}j74훺"t{wJ˥݁ѬSC]wS!ڝ}}悅K(e۞0&xYF\20/0b# !ڇ\)&q)% 1ϹN"ۂ%481`rH%Dd#C k -Ю%"l %RQ F'b=:SuX$Q:\CAfpGR~m%^!N%$h&՚R #ƿp'XϾ>AI }3Nh25gNE'bkkؿs -%|V !3?fc91ӊ9|u 6ZcWCab d1׮eF-9Ag깐3Z=I= 6-7p?)pegT> -stream -xmTn0CƆ@LE"j.RC~8iZ)Ayo7?nkNy$냛G׎ծU[7|SlfM[kwʽ5g -x=i6;RV׵_n85]֚̽u[OsE͡i P{ LՑ @4=tb/yVvL MnݞArjwf4P׏ީFT]Nrî}sBZ2pmmR?\rs<, X#.KIɌCH'hjmJIQ09da"2rG~\5hגQv]`n @v)(A'b}qHI($ux-JBJ!^I :ggM597F7FN}Y{}&Ff.pdk_ ΜN0VG9ʱwDK4X=CaCɁg2)4X(rb0/s4lƵǮb]ˌ[r> -stream -xmTn0CƆ@LE"h.RC~8iZ)Ayo7?^$ŝPIs77EW]}==硫nTشxGɛz?{k۝=` 4vN߷u8NM>(s&`ywS0jzQshz+&TuS~Hxqq`P<+ OC톦}SWUn}@`T;P3qtj}w*5UWSܰo\ze \[3. 
9ff ؤdF@!i @F\ -` H sn4ȶ` $(Ng 2R0zd9#Cb.k(@.0[Czr aà8SuX$Q:\CAfpGR~m%^!N%$h&՚R #ƿp'XϾ>AI }3Nh25gNE'bkkؿs -%|V !3?fc91ӊ9|u 6ZcWCab d1׮eF-9Ag깐3Z=I= 6-7p?)pegT> -stream -xmTn0CƆ@LE"j.RC~8M])A̼7W?^$PIsWWEW]}~{SCWmݨMi7mv9I+ڴg{ҏÄ~F )P ǦkZn;@1zz5= 7m=x Fgu P}?i]X<;k C톦}UYoO} A`TS7~wpjmS!詺]]ꂅK(ew&97\=̒5⒁yAa>:M1ȈK,x΍t,@F*&" C,zdWXPv-hakH/]d"btv"gg?|2JB^G5kdwt,uVT Jb9;kBX!00a0bw3W M";\88̿9Earʱs -ށ?c>+q p~PrL -  -hi˜c>:q-+01~k2#Ϡ3\OLqRυ>¹M \)s9O -\Y!O>\\/Au*[ӺkzT%C0tendstream -endobj -236 0 obj -<< /Filter /FlateDecode /Length 720 >> -stream -x}TMo0+J6*ħöUSEj9߯ IVcf͏睟ݛ{)^؝}]u:vzyu|CW$nmmΑmq5)M{`qjS5M2үxO%r^q &\TƦkR@YwDoYia) SZM5_$$>kxq4|;o4vhwqB؝Bf#j{p7P_?{+4}+VYu}e}n.ˍggfjj{k:lF #QhJq - -HQ/e.!Pp #]gQtVTv)#l-g!7'uӾ:[sI r.39uf *gQNxEqV11V啣Yq:54kDCZ+)]Ws8:а/9R\Qrz\8Ç]按Sp/ -d8D(B!4׳030 =;fzÞJmw&^0C~/nS0GKW皠NdzG5cC)!=E^K<3Iò8ȿ q3NOg{ACt~Qn~ɸ\ %1.: *4hH`<4̶E hSendstream -endobj -158 0 obj -<< /Type /ObjStm /Filter /FlateDecode /First 866 /Length 3231 /N 100 >> -stream -xZ[S~_ǤRSTa O%e=]ɯ?_k.ho0:/<G?5R;&4(V*WT0LjϴL:Ҹ*fڝf\ sRsY"ϣ/{^hW,Fz3) 5!i1ǍBԊ=A2D4@ OCsj)`Aҡ?8.SZ@۠P1#AI*$('ѩVD\IkK.RY=ThXhW852-e\'H@WcdZ}0Vm{0i'`OLPp,Ҙt* H fDBؠBpHz[a7FG$Z4$F5:jƌсj y3VD6NhTASt)l!s,*AAj:S0Q"\{JJ -Q)4I{:* $UJ%aoXY!<"EљB9!-J"a/DG -<#lj $ĪL 6 5O"B'Ozےh<,ݿ>伜 -P||77=ة/"@DAbἇ.{#Ɵnj?c? 擑N`6Z~3v*`owb`([8`4I֋$u,,0qp4y ` |0͒7==TsC/d<8*S<;`6cLoear44 1'rtbP3O0H3{\t&X4m\NnY8wy0ƣ+8lkpn -[Ÿn=XzVnֵ)͛4Z'Ma}b> gb]MgS<9 -SES(g~,6(ʰ -cjIYUu'H. TD'Er\%߷j(pG -Eܥn[Kdr6?9ԡ]Q֨ GCte D2{eh+ # -36(^!T!Tf}T!e.". - u6Uჹ_TctɃEt$l,ChmvE  FV2vKMXu4»PC MNߛiJ{Tiӊ+6lF^Dza- b%B5>yfi3G *?\fq?OpR\gWb8ޤ1?n,:6+` AIpf[,bqe٘2uѓD Dz -6ܴkd,6 -dV|u'YH|覃8nJ BnJJҷ$C+l!CGanN3&, 0p!$8, Puߵo*S`<y*TkLV3G%|H]J3j -BTG2{SSꧣGq2BskH1LWFFGQJg t+M):}c#Cu48:S_bSˆm9h5Hc4l˹hˏMͱ7ujo|uQΟս5v7:>Dz62CbiȪ܄IcM@*knVk^6wdknڪH6[Y]ۻb2( oӺX^`Mɗ4o(HJ_VI f.$M޵ֶ&ÒR߽mNe-j| -;ײ|dH>hNS;RH4y{L]uo-TY%*DinVrU;QUFT۶^,5bz6cH5uS$)CuhkBaIφ$L4NMDSvTY9LJc_o?)3~a9Y -XKt@\^rxy۠NUߎp;.тYyf)%O(/XM~0#2,{%1ᅥ1o7قxm{l%.nd:hƖQD#eX8`lNa$I_XD(;F 2wV-$, ƈܘIkY̦rќ9/~;{yӫ|z8y[^[%C8!:~^U)7fdfr Oއ_Co[~ď #y涜Lsޟ3~6bV&4I~.gm3Uǀs^/x>_y{UG%2j՘Pg8>)¿ize>gUe:$ ?3>%JnEOvkߎo^zi+naYdebxGk@J'7C:e_5S\\~9q1YЗ - -I%2]_UInƔtI7u\?S-COce#wObJL]"SWuQWo}ɲ^uԕ]5۫VJE좳^ⲖbY2-cXaVԫJF/l=endstream -endobj -254 0 obj -<< /Author () /CreationDate (D:20230725015639Z) -/Creator (LaTeX with hyperref) /Keywords () -/ModDate (D:20230725015639Z) -/PTEX.Fullbanner (This is pdfTeX, Version 3.141592653-2.6-1.40.25 \(TeX Live 2023\) kpathsea version 6.3.5) -/Producer (pdfTeX-1.40.25) /Subject () /Title () /Trapped /False >> -endobj -238 0 obj -<< /Type /ObjStm /Filter /FlateDecode /First 152 /Length 986 /N 18 >> -stream -xڅVK&ϯ`;6R)JvIMWUMoct!DD,EHVy|ݫM#L!dg\l:ޠb[klH<3B>pU؆ o|C+_ - ->?_x89F><^gd&|\|Ly67O@._}< - `E|}u.~<Cy}oV9=/+> Gn;endstream -endobj -255 0 obj -<< /Type /XRef /Filter /FlateDecode -/ID [ <2aef2ba042d4770516c8108b30dd0da0> -<2aef2ba042d4770516c8108b30dd0da0> ] -/Index [ 0 256 ] /Info 254 0 R /Length 631 /Root 253 0 R /Size 256 -/W [ 1 3 1 ] >> -stream -x%9OVA9 A@wE TpaqamK+11&&&&-|Mf -cgl~{{sΛRJ=%O^,'Ffyq1X0*`5C%TA5lMVMPk tm5C lZ~Ǩ +Y%&p(ԹXzVA9]a?}pv^k=] L{לY0w@'t!8 G(t1<>jAd)8 {O.C+E Xssk>700 -:LL =BϚ9 Zz,. 
"n\q͈--6(Q\q*NC*,-4% :@tbU(yUZZQ!DC(:U!ZCt -`ii> -endobj -3 0 obj -<< /Type /Page -/Annots [ 7 0 R 8 0 R 9 0 R 10 0 R 11 0 R 12 0 R 13 0 R 35 0 R 14 0 R 15 0 R -16 0 R 17 0 R 36 0 R 18 0 R 19 0 R 38 0 R 20 0 R 21 0 R 22 0 R -23 0 R 24 0 R 25 0 R 26 0 R ] -/Contents [ 28 0 R 256 0 R ] /MediaBox [ 0 0 595.276 841.89 ] -/Parent 39 0 R /Resources 27 0 R >> -endobj -7 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.giannisakis2022revisiting) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 207.855 108.156 288.712 119.906 ] >> -endobj -8 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.giannisakis2022revisiting) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 69.87 95.016 93.681 106.357 ] >> -endobj -9 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.ahlstrom2005modeling) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 332.748 587.876 376.779 599.627 ] >> -endobj -10 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.ahlstrom2005modeling) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 381.682 587.876 405.929 599.627 ] >> -endobj -11 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2016visual) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 411.143 587.876 467.418 599.627 ] >> -endobj -12 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2016visual) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 472.321 587.876 496.568 599.627 ] >> -endobj -13 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.card1980keystroke) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 501.782 587.876 525.406 599.627 ] >> -endobj -14 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.card1980keystroke) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 334.846 574.327 359.093 586.077 ] >> -endobj -15 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2013menuoptimizer) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 395.62 560.778 451.934 572.528 ] >> -endobj -16 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2013menuoptimizer) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 456.856 560.778 481.103 572.528 ] >> -endobj -17 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.dayama2021foraging) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 486.338 560.778 525.406 572.528 ] >> -endobj -18 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.dayama2021foraging) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 334.803 547.228 359.05 558.979 ] >> -endobj -19 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2014model) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 496.589 493.032 525.406 504.782 ] >> -endobj -20 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bailly2014model) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 331.937 479.482 355.9 491.233 ] >> -endobj -21 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.chen2015emergence) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 359.679 479.482 408.907 491.233 ] >> -endobj -22 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.chen2015emergence) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 412.378 479.482 436.342 491.233 ] >> -endobj -23 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.adar2014commandspace) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 336.738 246.039 387.373 257.789 ] >> -endobj -24 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.adar2014commandspace) /S /GoTo >> /Border [ 0 0 
+version https://git-lfs.github.com/spec/v1
+oid sha256:4de87ebdd47d7c5b8cd55846b6ddac16a86e279ebf32354e19179b0238adec03
+size 221329
diff --git a/papers/metaaugmented prompt tuning for better fewshot learning.pdf b/papers/metaaugmented prompt tuning for better fewshot learning.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..691157a585d514f677343f6c445856e769bebf83
--- /dev/null
+++ b/papers/metaaugmented prompt tuning for better fewshot learning.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17bd7cb6a3617d056ab7574fdfc4df1c8a9910250a9e84ae02d219497a5848f6
+size 916839
diff --git a/papers/metaicl learning to learn in context.pdf b/papers/metaicl learning to learn in context.pdf
index 3d98ccabb493ae561fc70d23c98e86ab2637f720..8001e83d2f26f7afff37acdbc36d291627272949 100644
Binary files a/papers/metaicl learning to learn in context.pdf and b/papers/metaicl learning to learn in context.pdf differ
diff --git a/papers/metaincontext learning in large language models.pdf b/papers/metaincontext learning in large language models.pdf
index a370029eb6c8173bd18ded413ba13b10918e8680..0609999fe3fb3a7d8d48f39c6fd419c128c8148d 100644
Binary files a/papers/metaincontext learning in large language models.pdf and b/papers/metaincontext learning in large language models.pdf differ
diff --git a/papers/metalearning of prompt generation for lightweight prompt engineering on languagemodelasaservice.pdf b/papers/metalearning of prompt generation for lightweight prompt engineering on languagemodelasaservice.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..547ae6c52c2a067aa233b7008c793219ed5463f3
--- /dev/null
+++ b/papers/metalearning of prompt generation for lightweight prompt engineering on languagemodelasaservice.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9d2e5a52ea6d7e3cf98223d1487b557ab5ed4b76133a300eda2037c5a928f89
+size 367875
diff --git a/papers/metareasoning semanticssymbol deconstruction for large language models.pdf b/papers/metareasoning semanticssymbol deconstruction for large language models.pdf
index f44138e24192a3a1e7feacdc3403ac45ad86b5b1..782204542604df26e6ec0c243867936826b1e48e 100644
Binary files a/papers/metareasoning semanticssymbol deconstruction for large language models.pdf and b/papers/metareasoning semanticssymbol deconstruction for large language models.pdf differ
diff --git a/papers/metricbased incontext learning a case study in text simplification.pdf b/papers/metricbased incontext learning a case study in text simplification.pdf
index 3368c40dcca455ee4ac35ab63cdd7964eed1dbb2..07960cfce97df69ad9f4427f99a976e2c9218663 100644
Binary files a/papers/metricbased incontext learning a case study in text simplification.pdf and b/papers/metricbased incontext learning a case study in text simplification.pdf differ
diff --git a/papers/mgpt fewshot learners go multilingual.pdf b/papers/mgpt fewshot learners go multilingual.pdf
index 5d4733d8e204c94da75b8cc2faba149072415d83..8bd9077443bcef5e60df17cab86721d9032293d6 100644
Binary files a/papers/mgpt fewshot learners go multilingual.pdf and b/papers/mgpt fewshot learners go multilingual.pdf differ
diff --git a/papers/mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf b/papers/mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf
index 9ed98a18e066a92357e90d4ead7a12cc99af66fe..4af9f89214bcaceee4fffb4b87378fb8044f5b54 100644
Binary files a/papers/mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf and b/papers/mind the instructions a holistic evaluation of consistency and interactions in promptbased learning.pdf differ
diff --git a/papers/mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf b/papers/mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf
deleted file mode 100644
index 0b37babd6336f05e1d2da17113895c90185694ce..0000000000000000000000000000000000000000
--- a/papers/mindwatch a smart cloudbased ai solution for suicide ideation detection leveraging large language models.pdf
+++ /dev/null
@@ -1,5683 +0,0 @@
־{^ uekDǚsX]: wz&;ZwZ?x X{kzꧨkBXZh+h$FwLsbue~~ ^: j1ti nཀ`ߠL\xE2C˲`ރy v =39x]/껯{ɣ ӡXi'n0/Fe% #qiBmk>es鄁t4?=f?ьOUa@hM>@I%1|Z]--kS`o|mmLTf;pܿW{Y|9gJZ̢z/N9aFqC[A^g RlbZVck"N{^ݿ]>9vx~Kx .(x h;A&`-g$,b:wrq~0`)` xJs'=.+:XyjT M\LQm] 㡝JI@Q\:kpxf,,hY3() -1J5ŎMn uÚvaWfKW4F -I_,9 McZ-o^%VOR煮u D&Snb݁58ij{bьCسu=s&}xc7Po"fKyBlB O~Z{jA֘[׹/TSIwL] -u̍LZD/}w#dkgz͌{ 154H<}ZgSwQu6ԗk|Lݙ`o`{rߟR+Ղ߃³V-p@ :ؘįw9t#6qlD?9z{uCXh:GñO~9rL9Z4ěZ-~71(֢!EbG|,Vw٘RF g8eAtUuL61K3_z^&~9L rA3,2lM8c!4O] "^ʤlL/6.}mk=}k9y SXw tRA.|y':aNR,.p[VJ90O{fy, yw~b>=c$tYX\Pkp_^haTnH}}&Ѯ C5x&/d9;a^JRdXDcTMGMLYKctOaS4U@Tb$i$KzOAK&V|C -1u)sPo -0U=4[˳1QAaFQ=w>%NwPhC6OA{~iKݍg@yl(OQC5 m.߇-־{sBi;Ĝywl*PS!L@V#اy'K: |IGH;J'} E4_Av![ae:<v;=9!skkwsSGzԐ?X1oU}{dk6 -4iݏ^Cy >{y>3ہ6o=f:江4龳07 < -}g1b|l XhcJ#`/%mzrD_j@<q||ƀA`f{>ǟ>:&DY is{k`3 vgcǧL 2_X@dq-Zvdܻ ^`N4-T'MuNz-kAAo@^Ϋ(ݙsj~][8Q -uZIk@N{CM=['F^]3\qoɯvŪx:"IqW_.QUg&P Br&RCb&ٽIlv Py'Ermc*-$!d~f{>S1vi"\#zƥ{\ 6b:}Gi^Dl،l_g;{~eS^O-8|Jb+%C^/9/VTE2KqLF3(>+0<32C2;*]9ڕb/GUo!XS}~w[?a7⡞C'HWW\;N9ɳ'HTRD9* RKf^?2,tyYהIqڿ~L+f֟f(Gxw>>4[M=0г\\?#/{B?Fy;o>|6pO;8defΏ-󮱃gKڎNs!]n[/YuZ [95Z|dga}aݳy֛|E}#_٩}4R{BuƄ^tk; -;e} 2~Wne_MwdUdl}2wJ73oVǩ@i`L>fK8zv{qc\B97va0n}(8m3;P~r12FS<(iAm8f:~JO6ƴ 1M2C3݋ ۃrY>h|❼c7{ FisTz:?cG=Rq){fڮ{^qgw{a`wyٗ5[SƂgdL8[o6\c;n 4G`7׬{#iW#`**o㓔AJ7>䱼e\}Rmcyne[[:  ,/=YF|8koc,"s*s 9&MsըC|W(O,ﲿ:WLOB8.eoc+רI^7&6hG8r w|63DKMĂ+irqAk9~jjiel2-lb|ȹ>7KkHSYdͻ֤g$_'hYw撣<:oxG}Xr:|ysvx0m@>ˀu:]5=QF FƼ} [D^``G:e(043}0s0PN}%`6@ PG`gqQٿK~ Vl\4sV^\b/IѶh_3~+1m~y-ہ~ <99=Q9ݻg~V( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP(p<@'Ps~6Ɲƻ09jޡW>4TQ^m$SM7-MDޜC>4?g(*{yfK'i'\1nŃ^q;XVSl'{lW)u<%la% ca) PJ8!`Vc)rWUBqyBݦDU2 >2dSr} 9ÖĜæ$2%c}DC)0&Chd,J|&ɫ*r؍ c|̤>i0˸1.H?D$ݳ1?H㙣1F c"kO?Gt,,,"uɝ:1ݘK۳ԾJ~c>ME6Q;Bmst;¦[wX^.Q۠v]iҪ-+ş^}3ځ5߁{!m6R`' 9V901xm-`:4O* DhaI{'Ԭ6WhWMԧ6!n쥚pMs&c^@ %+CIq6yJ;wymiC0]NԼŷ=tBXq¿vR*aAUUZ2+%7'Y -OF:U|Mɳ ;!h6rfʓ`n+^'i S^(a?P2XЪ:vf^ls/5|2՚ -6zff{[}q{ƹWpz{$d\";Ӈ 3KE.;Co W,(^Qf|:aܖ:Oi#t<] C%t$&uzB7Ue/nRSjS&Mf1l +%IDѤDH{xMrI4iZZ4-AcX4ixx87͍Fsc`yV>S";1z9X9S=HtO4;Esak,X=l75y__u ƶH'EO}%E_}Vȥ>PRI{e؀e;Q̄{h=6WHYtf& R"HBuj=6K_nf9~tz6TY6ru0['$t;=C)├+uJW ;E]&wԊP|'d# \ -xϊR'X&-QiL90~b#qHieJnb̻36.iҫǕ8>-~#F|B[!+,$olr/WZ-`٥ -endstream -endobj -118 0 obj -<> -endobj -695 0 obj -<> -endobj -696 0 obj -<> -endobj -122 0 obj -<> -endobj -145 0 obj -<> -endobj -146 0 obj -<> -endobj -147 0 obj -<> -endobj -148 0 obj -<> -endobj -149 0 obj -<> -endobj -150 0 obj -<> -endobj -151 0 obj -<> -endobj -152 0 obj -<> -endobj -153 0 obj -<> -endobj -154 0 obj -<> -endobj -155 0 obj -<> -endobj -156 0 obj -<> -endobj -157 0 obj -<> -endobj -158 0 obj -<> -endobj -159 0 obj -<> -endobj -160 0 obj -<> -endobj -161 0 obj -<> -endobj -162 0 obj -<> -endobj -163 0 obj -<> -endobj -164 0 obj -<> -endobj -165 0 obj -<> -endobj -166 0 obj -<> -endobj -167 0 obj -<> -endobj -168 0 obj -<> -endobj -169 0 obj -<> -endobj -170 0 obj -<> -endobj -171 0 obj -<> -endobj -172 0 obj -<> -endobj -173 0 obj -<> -endobj -174 0 obj -<> -endobj -175 0 obj -<> -endobj -176 0 obj -<> -endobj -177 0 obj -<> -endobj -178 0 obj -<> -endobj -179 0 obj -<> -endobj -180 0 obj -<> -endobj -181 0 obj -<> -endobj -182 0 obj -<> -endobj -183 0 obj -<> -endobj -184 0 obj -<> -endobj -185 0 obj -<> -endobj -186 0 obj -<> -endobj -187 0 obj -<> -endobj -188 0 obj -<> -endobj -189 0 obj -<> -endobj -190 0 obj -<> -endobj -191 0 obj -<> -endobj -192 0 obj -<> -endobj -193 0 obj -<> -endobj -194 0 obj -<> -endobj -195 0 obj -<> -endobj -196 0 obj -<> -endobj -197 0 obj -<> -endobj -198 0 obj -<> -endobj -199 0 obj -<> -endobj -200 0 obj -<> -endobj -201 0 obj -<> -endobj -202 0 obj -<> 
-endobj -203 0 obj -<> -endobj -204 0 obj -<> -endobj -205 0 obj -<> -endobj -206 0 obj -<> -endobj -207 0 obj -<> -endobj -208 0 obj -<> -endobj -209 0 obj -<> -endobj -210 0 obj -<> -endobj -211 0 obj -<> -endobj -212 0 obj -<> -endobj -213 0 obj -<> -endobj -214 0 obj -<> -endobj -215 0 obj -<> -endobj -216 0 obj -<> -endobj -217 0 obj -<> -endobj -218 0 obj -<> -endobj -219 0 obj -<> -endobj -220 0 obj -<> -endobj -221 0 obj -<> -endobj -222 0 obj -<> -endobj -223 0 obj -<> -endobj -224 0 obj -<> -endobj -225 0 obj -<> -endobj -226 0 obj -<> -endobj -227 0 obj -<> -endobj -228 0 obj -<> -endobj -229 0 obj -<> -endobj -230 0 obj -<> -endobj -231 0 obj -<> -endobj -232 0 obj -<> -endobj -233 0 obj -<> -endobj -234 0 obj -<> -endobj -235 0 obj -<> -endobj -236 0 obj -<> -endobj -237 0 obj -<> -endobj -238 0 obj -<> -endobj -239 0 obj -<> -endobj -240 0 obj -<> -endobj -241 0 obj -<> -endobj -242 0 obj -<> -endobj -243 0 obj -<> -endobj -244 0 obj -<> -endobj -245 0 obj -<> -endobj -246 0 obj -<> -endobj -247 0 obj -<> -endobj -248 0 obj -<> -endobj -249 0 obj -<> -endobj -250 0 obj -<> -endobj -251 0 obj -<> -endobj -252 0 obj -<> -endobj -253 0 obj -<> -endobj -254 0 obj -<> -endobj -255 0 obj -<> -endobj -256 0 obj -<> -endobj -257 0 obj -<> -endobj -258 0 obj -<> -endobj -259 0 obj -<> -endobj -260 0 obj -<> -endobj -261 0 obj -<> -endobj -262 0 obj -<> -endobj -263 0 obj -<> -endobj -264 0 obj -<> -endobj -265 0 obj -<> -endobj -266 0 obj -<> -endobj -267 0 obj -<> -endobj -268 0 obj -<> -endobj -269 0 obj -<> -endobj -270 0 obj -<> -endobj -271 0 obj -<> -endobj -272 0 obj -<> -endobj -273 0 obj -<> -endobj -16 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text]>>/StructParents 22/Tabs/S/Type/Page>> -endobj -697 0 obj -[699 0 R] -endobj -698 0 obj -<>stream -HWۊ$}ﯨ]dJha?,,z/VBRT3۝Y,)9'ϟO//o~?}4rNO~nqѧhմ~yN.oOϿq>ߧMu>)}zSЧO?̳qljN.<ѷJaK?<ޞn'~xz꽞'7o{KQ'5OK#iVNNpprpfVjAQM5e'-Ȝ~MRfd7%'l6駔'ʤ~_|g+v׸MeK.uä p#a -Q+Ɛ̣b[[4.ﺢ(}D` tDfu|X85>ݠ[W"zA4{ cZ0)ix*Z1j$ l,|nEn%~ -eRh(㒡+SImTͶafBt-dēy~4Tc1z-'/OzT'1HaZȈtƥH;`n -BNOV鏛;JOe|t4U:[SakF=cNaНp$ -' -}1"QDԘ X-M$a'x'TKRr8G ? ׵Ew]"9iiz7Rp'ѝLuS -9e޸if=L`՗6lE - -8/PyzJS@5YG=&i]\'R^ -#UU- Y;ʪVaUE?hM`&M"bNk/Ĵ!l많Qw#lAL((QW L9!#55NZfqZKcD>">$C>=NR]]tUZd]& /NO]&]ۊzJָ2%mpL'4fgªf$5T6ntjrPJwToL@R5Д~Ha!EB썞ϫC*2M#)3W15l,0;k Lv1EE䎐?E 'QmlJƄFݶA8=\ 5^V\|mur`HM6v0I-;zrAl{v~ -pjX|^G/. Nk"_-׌Z W!o -= Twhᒗe>V@P \VYGlbF}1nίj*QZƿ~ ʉrFaTZ9uݫv6{NU -`Y_MuM y5$5$Hl m - d 6.pb)Aq@n -7ub|!^ Bznh!Bd,W,.gא,NC!+n=|x*%0 `;K|=о.naZ=Q~":֎L)rfbEBU9u(\eFae(d{VG۝Z). E+sk_egz9ϸa#V75;光9,:]0qȖ"=ɲ i&մ32WXA}W -/ 'P [?(֛.EXFP#4TETuzϫ,IQ%Yi9ډ~t6eu("OcV$]*%reJvuKȑb1LfB0Y:GU^y`#W:? -QY-{(. 
(.PnZE"lWyp)r.+pgXw]aa YnWKRigW%R]h0 Zjf 5) { S_BKߨ"nC0R{\qԕs(t4Ʊ` ٬YmW̪R5GD]Eԅ s"ɫSApHũ{c9һUb'j@C NLݽ7y -"55^kaS3KOxy48 # udrWNdV+jOȠZvA10Q )]+)Ng=qztfst%\vwS -Ã?e7yYqygoiiYc%CKYE[YzbR.C[dl zޙF bs̕ {UXK`3yB i}ނ ]D9w_c%QNԡ"k;0J2&[ Z/GHP1#/9 W!I d>aZN- _/#T<3w(e'.an٠)h G5ߨv4 GT4F ZDe~6}cŸ*G4GK,1^Q Kkd-+,٤*XL8pp#g#q@4-a2@Sf djC%^oa]G=Md|Ar:Iv$&ɨ NԖ٨q`XEh,IwQI/ -8Ҟ'$^ǩZ[}FCCvc `i6bo2+9wlT}>aśA .Bt|}H!2lV͍NdӮh/_iMHECa3S>Of@xeoJm;vzk=aPfMc i \pbBIKu.DFh¯j䱶l2̭.FWzLC6?!4;",:,e8*sRAXG~Aj"x uD #gb- qPJ|LBEgSMtlPy"))j7eFfb%&E:ِRՄmٸnkBm$8j $Ev -7âoK:gCQI2!6hz*ھaca ^$XNd82X:\6#Ѥ:upr /8"xCd79/XzqBCnZ+VX9L JJM L˓9K`D(xDz3*ƬD*E| -V9`lgDծ4@ZCRs|J)'~%Uגœ{x0zz,FOޠg7CCqtJE?-o7Mja6Q~0Pbu3\U;IۜuV591J;0-*؇Ɣ^@y&XWth-4@DhqbYtG -{;ɵ1Ul#ى+r=,F+&jys;}388{ɠqv1153W -Ş@ӳcG?#q|H‹L Rc-NV"^;BkJ7!GPߣ)$$CN%٣;6m:'oC)],zCl n2׊WAʔiCԆMt9l;(fC9K-Ώ.|vp1x~y2~x}>=_n<]/wCZUIED$+AAPHo=?\v{|=z?>._/a -KB?/]wO7!!}q=n<|{ -~;.?wv绰ݗOR.`no:cڒy&A^L -Acn00$Vm< -ZV:b'*]J|bq2Р: 8^<Π¹Q?^k/!EiU :ǖ#n9AShs -0XN -endstream -endobj -19 0 obj -<> -endobj -23 0 obj -<> -endobj -37 0 obj -<> -endobj -36 0 obj -[278] -endobj -38 0 obj -<>stream -H|y\SW'/$".{ XUufѱU["., -*n᮸⊸^\q=:Gum L7s9wwtHӷeDCHtlRLEc']CIMw8ioI@X?qD\PQ&#DFC?|DRJ*NNp{ LC~ ~bRf P4PG$R~WwGK4MFo}ظdOD@@Q^)h*`(| z7+sb&~?O\:N mrj(^~',j$QS(rITJz Z&sm:ujаQPEX5[6{;-Zzuc^;uҵ[_a}>[~'CcyиaGOHL5:yq)'LL4/LMKӦzsΛ sKf-[b%VY.{6oٺmiy{ G;~$N>s /].)ŕe׮߸w쎻!6MlՀnJ%I}R!͕K% }m}_"ϔ3YcH375zBCBBCS(SJbW(ZG6VvTdu.QO-%dZlޖ~閥Vu@k#bmj}c Ӆ htYhZ\kEjZ6]k뵝^P;.j%m-m %F7OkcəSpE::N<ϫޯzVNq+=s'N\:n.W -tAqQ*^}Q9F-ϓʥS|!BܡM}4PTUCq+g]B~AZmFVSWSL5b,}-ņ֡7PzM1SֶX,(;gŠkά;Z pB\NϪ")V t/E}q -コ^/V3tTfjb**K+/}6;x'\y>ޯO'>ȇ>8 )>g,\"_\¥J}ޭȐIɒeelk&هU.|o-w_=yvϞs H  1{vl3^mzcl8dv܍1D!Dɝ7;9m -s֜3漹`.K油bk)27s9&D[&ٲ-omE[V)6VUm'rNb>gy_{bW=J̏6V6fl[ְ9:r9;N ;β;γw|.b.r"r8R`WFUv]imh79999)nmvaw]vcڏ>~fsyN -\+qeNTU9ӹgp&gqvtitNV1Dc숝3v c쉽7p0\kprMŵ_|sz\B\q .eWJ\q u^ZȋRZ(y7ʏ:tn9sy;瑓:'S'7 - -ש BHD(Ԃ$(v@9(P*B% )WXҜN=HwC5ȀLȂl5 G7|O ɷ6~PjAmuԇF&$x ކ043`&`6ŸϾG=uԉ:SJݨ;ԋzS~?Q:Ig"]V~ϗn]zHO%~?x;L ӂ`&4 LCh(DeN#h$FhGc4+*^ifi}wױoV6;x'%MҥdH&#ܐ,jjZV5jC\jڭ}jJ}~-4[:Sꬺ"uS݆p=y݄w˥S~E(GP - N -9p^Nn8?\. %pY4&l0|QޗdkF&˶Ӽeޖe&ƤifafY]nfD*"kkş߮pXi?rN(I#xϿ§-ԍVIUԝ"8j_HMi e;y/|.+Y( R:9ԘrYYz2ϓX4PsjRLnCPûV<d3uȕemK"_}-j --{nAY!HLh౴<ree -Ϥ&~P\.!Ru݋_KT\\5|E+n4GϢbҠdKYl4^V. -ntT FpZ~06CwC5S.هůB6XUwޔJS@~Ɵ폦O#(bhBG0<Ll'AEz~ |_Q홴}l#Gbo"Gk4R%W1l po,Q5>h.8їy\ ;8wTP;n#pn$aW0 -P+1]먶BIJo_;t>-*$oɮ(cC#^ŴAz^ iʓڙ𱤨{Y>|gYƚz_=~konProyrVXҒeRŏ/{l\OyYnݥ,/h6D 4b6 JD,6flK?,wͱ',)KcPM5Qoq9~ =o ,]oXRZ vGz  r2f}1)$9?X=NYEf}*4K) vZۚCb7\@mYg  -i,W41xxCݑ^kgR$6M>YaLip$U">3ȒHaeuhKSrQF-dzb3Ux1|o֔bDbcs&{^:ӗ N3ؒʬO ,ǀ'!1U򣯒=0/,Ջce"QO5˹xL#P潻%]Uy@rJ $oY+Wr)|(k}S>6+\]{~Fl%u_&EYؒ#OjY3ԤGLTEy4Y0ڒu_Bj66w`4䶱mV+LZ)ŲŊТ(wʶDQZ -R$D6'L?NqK9^ZeggyUZ*7uFt 8hF]q{4}-֮QW ^ ,;}Ea`Vs8ޤ~}6`.]Z5Wus&_pIЇVPA߂ö!!#Ӑ/gR?OC@7c|yEO-1`Wg l`J65x@1D܈#>{Om`]os14 x[#֏c'yḴ[COk Mծ0O5$9> -rCNj *R!!Ø>wAcE~}=' ڛ!>AcJ3oSK{b]cIb:$(}?rS؇$D9c"؟. $g>S+jkkC&j}O 1g|wr;rr~fy$IRQ{EݿqrMWqjfļ$}q27tNs%)k# nK?!j\opDޡ&jWCv Rh{V`-9tOJ%b.NKII.ٷ5k/t.)#]7K ߵIF݂}g|ݗKGqOPW -s)q>\cz=|`?ﻮ瑋;uK9!-t@]"Ǟ ˴Nz uWLoj1 -ZqZ^{" ]:qNB񚠯zS5 } d88#r1.rtZ0rc 7H'U/)B -544( 4@- /#m cKPEP )B)%"`} 6*#h%cV9~qgs{{9_tYtQ - 7,w,u -rAV^˽;*eѵic|z_rY>k}WH dPR^v;Hߞ19"X(.8? 
{;S|+W# [jɑYn_2ga<]~-h2L}ʙyq@xV1z3&MFK(rgָU<%yf\ -!T ):?Ϯ?SU] t~f:/L 蜔ce e^l#+ug^3Oʐݑ -:ds#_wz؅F+n/SyWas>/ΦGaϖt{t:ExE3cnLk;1<d;)JWwϟF0_/wz_\Jqx*u2,c\pN+h=zՙwZ*B?Iy Z'+\uBNÜBt9U6\5j;uBо -l{)騰o?Pb׉jW\ǚ|B?c9P2Fm-%W44;~:yLULto섫;,)H$qzm(wAMCNPɿc0Xﴑ9lM}hrlͦ`:2lwr|BPJ([JEd\rm=C~=1չ/\zke8}C\ ϳ}kߘ d`8AEoPr>r5pJhiv1)=U&ʝ"ךwQyRFxW7Rb#{R@7Ğ׋p&Ŋ'"w5%_Dޖ\0֋[]y9uTiSD"o zQX1{(HR\㿩GyIX3Yܷ{wvfo5!z[Pw#CPƶHWH7V@iojK<_p(J>?#S?JjPæEso~͌q$w] 'l2su!.ؐ7K{ԛRoJ,nvr)L'O fkWMW)q[f8by;rdS#Y<;/`v dwq!_L9.З_ڗ{ ;*̷;}L/+'m>'Y"rWa^ak+u)ឣכߐn8.QmƲƸ p&f8)AZӃrmt32o;3T(*be._þ[ga] .H ~}e|)I"r(s`ML| d?.ܥ:5?,ؔ'Au;2^&u06hӜ (*s]e((hW98 -_E5GC]-Уy= 7GCv흮(hՃD.}7ς? 992^JP?{ -&ч|c@3curr-[Zrm`M@RWYTr=6zI }~I޵ ֓͸Ws|xkWÀ3l!n D5/ĩt RcӤEJw.8_.9ҕ7h 7IyO~W0D9F$so'n>#]u |5*簏&W|>@~tAwD[61SOxדى2Y[ -m.xĿ;D3ϒl]%Ypkc箾,/FgE6G1XR>9?[s'&q[qY7~[neWJĿ/e{>/:Y`u!! w0s'e8VLYQ^:~s ԭGΔ7d3]u.0"C -YV(6k{O[w0w KA"K:x3o -|vRe\8|cQb0D°FeTYTn{MAKIG -IFTTm H,G1,P4B yeJD{{cO'<<ls J ]c~/C- -Wz7)vLuuuIW7 wgCMgj#rӭy4gu|F'zH;;4s>_LP-gOutnv֚쳺TڌnMұ6w~-l>ml1RϗX5hj];u.D5:yϙ4G-YK3wt;D+Wx!oŎ滻_|{y6@#z?gkq}Ll~觚8&w+dzG%OclѻCB~⛏#k[tyL煱ב6uŴ(ո;=h^,mwiQ -|^`!ONX&i`[sFC%A _̑uo9-sCnѾzaoGC]3G L~Y烥Ėg=9[۸\.TF)Ur^7Gw=vIY2mMS#6"ymOmMIGޚ6[)^6ԏzypfs?>I '6Vvݙ>?zG؊vJul8ZA~kAcw2Zb*lP-4J,AD㌷wK⒦霆E4f2e%1LmzWp9AXHޒr6_&;]=!o{nkT .*L5|1]HMY -` DhC|ۻkwvt}ظZjz=5GSz.:]۰A<=  ymceNn ƹ}Y];};`>{&a7z((;PlO-J ěeh_JDlk;P G3|WN ՇPf9x /X?mӎ_ -R< su8~_.9 -_9|XSpܥfR8&u; ƭ`?]@ɺcZM@r1/ oKuS˸\ncǪjmh} 3fW׹_BԹ:3.657mEn5hL7?H{껮NR{Le*%GCY^ɟ0 z'3a ڳI3IyƁ,1qC2e23'1FLbh\j 6ҘU{"&^}n^ ۉhV|Na"/sg#su쉔VW(_!'z|lYleΦ߱v\ =]s̋ OzDlSǏ}~/ $1P#y-N6B(s(Cc$&i)aHK`-ڛڮ]˺NcӘVu&RƤMݴ~,+I;{f5jѷ?P8n j}Y188sr417½t\fh N6S5sdmW :I:wmYjs{'wO=]U8PCc`T9x+xc!c;UT弄3U' -B.(tP ghTMZ:O0w|hHW{ _#~JZ9=&yά8kKpW{mslCwڟ97 :te -~N^8NkNhM[uFo/ڶr*1֦aok|܍ msQ)~9w7Gp_>R"m*!n2|,$`e)z -mۄuw[/z< &fc`*fc>6vi oXE9iٴѓzI<낏*Euc]C -K]:ra^>-$ҍhohߣR86jU7vq֢T-͛:}X:K;`kW7?Ѯ^RYNڠԆ2l]{VA '@?B\#b6>y8v[ (ƥ{jjWNXo 0׋}M~Vbb-uβB*Q>$zmsmR?)|^#l6Υ7Ϭ[Xrn܇ς`] *0 Ym"ZAo ص'gx/JIMgaD"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D" KjB.DZur"LTD)%+t9D[g4 ٺ -)[נ?o:ղ7`P᳔4_n -il!~m]u^g P-->2![)JUš!aQ/$ ف^ORBLH[ -B[FI[&GPnu>)17 -! 
-@@!-~zk%T!l G9G6۶ ԇXB!$i}zD_Ԅp7Rx_߷1'jފqUPxAxD"),LGk5">#x<=Z3^ -lr:%:oWfϳv3h$ƪh<hFG/0:̄b;nn5bfDSI# @46y ΘŒ --H ،}%L|}'7BP6aEF"LV4R7k|ƺP$F4 ͠A3b‰ήy!)>[yF2f#sn//\+ @3Ӣ @%Z V>1ֽsXwZMܞ#S'w*iW\*cϱSSmn٠7| 0 -5"mn!3y^@Y^'+x_ac0!z,ɣ2y[յBɺS-AjPok@C<cϰ#2x8G^7;{E#gj m9FcTh&*/5Eb|]`g> ,AR-@a#TpP+}s(Sҍ(d7_AJ}'A1H#FJmѢTp r|{4D r!=AmF&7\|P)Hw7~ɿ3%-~eM+<X9OY.$!׹qÁ5$Ԙi,rpv7j^=UگW{].hu8vıYif&N&J( SsgQ7L[OhnsR+NQn1&&.j)Ohcv=NrZOPkquqEF:nQdI|,i~؛2CٛǴs -`mh K%+ 눝L4{2 ܎00(D'" gԈG\0ĵ8q --j!c6<d hQn&EM5Cr 2wS998 ~2,Rވua̮Q?- cX;o2g/hZq:':4S/f=5Q8ErGd=6,QkHGSȉ?s=+ybNbE!#b.Ce"zD{M}"YtԳ^b[鞾h33"r5sFuݟ2wG6Ht!9]hhvܢכ]N4wGí%"(n}c/&ElRd^R{2Lnnr' cqXbt4E[Zb*:X͊DOAO]*+ * -Ǧ!1Ulipx\n aRpO*yO><@!lG*_h,6Y n[֎.ff`3PSƗ""^ 5KaJ`MC <8% @ `9 -endstream -endobj -28 0 obj -[30 0 R] -endobj -29 0 obj -<>stream -xRj0+tLCIƐ8 uN ,d࿯ 4bG3;aQnK:^VXjeaVáA$j;U]mЋqpЕ,cl=<U`[}`uu4:Ў ϙ7zk Q6/o8+s4b# -#{%Z ȸ?9vhu~R][d'y̑}“3t4^{#*F0Zᕜam~lgJ#tIKDH"/Җ00)I)̂¤FPFPmi8]&/ᴧ9.hz3I -endstream -endobj -30 0 obj -<> -endobj -31 0 obj -<> -endobj -32 0 obj -<> -endobj -33 0 obj -[0[750]3[278]11[333 333]15[278 333]19[556 556 556 556 556 556]29[333]36[722 722 722 722 667 611 778 722 278]46[722 611 833 722 778 667]53[722 667 611 722]58[944]61[611]68[556 611 556 611 556 333 611 611 278]78[556 278 889 611 611 611]85[389 556 333 611 556 778 556 556 500]] -endobj -35 0 obj -<>stream -H 9%@ )#o -endstream -endobj -34 0 obj -<>stream -H|U T{Qd0V@㎈0֨.0JF*ji4j46jܗ(7H9hcM19Vk*Θ47wwï stVĠy >9˙WZ -P`uN-~yfmBצjX - \߯Y]}B>\ӓ\8d>K1]dΘR[kmL;gL_YE'Ss ->@8 - 6m *m* :gᅱ#t4hc5LTVю FxD̫^"Xwǧq{u%B)á=G` c,R :a8)Em0K͛f{=NMP; ;!T%jkYln6ß$GF)XzpWe,|:N1 "T,9tgk'1y H-h5jO B+An5"mG4>,Hc=)l=CMeSKcբŸ܏Q} cط[q5.@6AMt[(1Zb'Gɕ^Vj:.i֖[]VU}FȹӚ =bNK_'X@ʥɬe.V>:ItwFZg9rJf pB(Rm[юkg- }ٗ-C#n86sc #!P5ەxe)콍\Ǹ6pH٢.I*:L=NI76ep򠤁^ ={t]b;ۣ"u!}mZY-npFP1a=gG?1tM1g -sNΔ)= Iqz]7:a7hGz+VzzHC7ȩNMǡ׏~3LލNP-`kŠ0#;< 72d;##s J˳O1`O5zYUcX W^긺ʪϷ&fҕ6:WCKgoeezH!+++tnLߜoEt2UWGdM,6h=xzj_=s✦>T{a]MW|wNwﬓed euc*`0F"U0cHC*MfB[B;NN*`di)$5ttt48 oOho}{ q$Sh>oáNAO1󦪝e=(j_]gZ-`{͓Qz:m{pګ_5,!RxMkJT0f6ADI#IW <ɯ07L͆I@.4>Oy(E|\e NoVEK|$ 42mt"(Ofj䭡S>`r?'ͲbiLa[3վu1OMyR^1m쥬$=dt+{ӃSlԛR RW`OmJn˴q?yI/,i^*3oϐgx'& uɤi6dM6'7'lyd_rt橀'.qj_ßhǕ@VpgC;W plL"K=~^Qu-Z -{wЭ4FNaԒ&Y(/Bْf3uj4 :!;]89ZL-H7f.iDcAϱ}$88!-'X3iRZj 2񬆑]0padۆlA^xՓXUt -UC_~bk)=CC"mH;w ɨV.4w3D0bdQpρc%WT+-f &',c(dX '/@dkܜY -i-M^j{̧5 crvS[ůWOmDUdjKH$pl(݌OW)z%Bu1S0JN^.O*7tdh oh5\Wmƴԓ] >i[}(dL|VӸ"dځߌȃQضёqX;::6-ad [y?7*v+_SG:n|?N|"g!%PCe3,dgmk$' T4% ;I!,]@5Me!e;7۷0c晵߉mjSu^|Uˏ!\vjBH )lbP\-n!F%l`-ꤧndD@A9S#Q4~]u0l. YkB* I7@x7IJ0q;f&?_a&/@q`Ͼ透Wr&v5|4UœaN0 |l6OQw6 {o\{^{gm^x5-;ВܔFFZ)@& 91J )DiiZ*$jI;fmhͮGwS;r񖃳V*wčo̾m]XxϼW WM]PqShi38~ܭ]uʚۣ% 9vNtxNxuLE xuBWLpJ+ [^>ןԽ %t!RŕjZmPnU\Nޥ/@`ك†x!u1C*tMIRɥjP>զVwnN-Q@-\IU9@(x(6/Zrs'GZ_ؼzGG{7fߕ?;sxJSq"A`,"&(]quNr) UУ9q!%(D6o9Z 8p NQ^д} -/#Jiq aDe^9q N/Ɋ¤܃NNAĔ' "gd2<֐C2J˽x,o[d(daaCs[6&l% %}<<ŶBt|8BN|G4{#ekhYٙMʶ$f;$ogθqIWTL7`M9q7XlYo0s,2Ȗ!s'&&9\Y`V¤;<3c 9Ra0މ5IJK;BS̏o3_FP9^:kͶ}m?.X9ځ3hޅy. 
BBUxQdTEev fӺ=t -R.Cd]LDPMXe(.[Qo (Ɂh2f RQ|K>_39zz䣴wLԹ> T;߽\s2Nm\-E JeG +rIPO ![Pw$8/Utdߎ L;z(𩰎VtEr}N>=9If`uU6}:(ܴڢbf"mo l.P; FwF*ꇎs_[t6# ӡ%S!#VA6c[_"F~3\!]'rqu*ȥBƺƽXǚWi">  lky<^xw#*O a5IMޣ7>?AĪo^ Ȝɦ=~V\6ϼqe>0LB+b> -.Ja+EItRI_1 \GqdA2'ffV\׍&~r2+Jr60$F](y9{fOGO/f7AOAO[md2(X5\( Q>CA91$L ;]iǬ' oڡV)v[B(J{O:lf^54MQ(YR,ɊԃE@QBBd"n,PgIे C-#%{>\^NGƞsw qgyBbvNrQY9!Z`8vvfo/>beM1z!$&:RB0&cBՆB˄OJuxd'%ir]YWAٮV`@oTij)=_rz/D3Z00ȍq`RQmA>ȕBd' i1yܔߘzl ~v7 < ţr D -hԓ`L̽zDK=Vnx߰獯Wxqu2YC=/nbt^,^hT%/N .~^.,\ESV,+hkq7}gKC=]~BY~A\#caI>ɷ-;p']W:f:7%T5m<|h4*"4YCx˩7OP3l]6,M@[ t=兮63>`8S"mvfTvT[RT9`* RrRc qOFY̫kkf"/ ];&xzJ6p%R -ن~bH~41)712e}LdFrLNT-N Av$i -3"9iDØ\*gLiN=>H:Y_?Bx7[ZlF=$7JܹL*?pRCæ9yPG> ;P -9&qT֥z֣׻h<$Br!< _;]&)/IZ&3{=Z?٧ 4`RoskR6glXz;ƋqG7ݒ;h4X (6`}/fĺ[[^P_/MxYD,`jǫd HtK:dO?N?ӯ :wͷ kj6W6JBlgK`47ٹ_/<栧GC趏t7dߕߝ_ vc% d:g(4#wyrYTz)Z|EM<^b12f)2 6ʠ?C! -) -.H\XрF89SWciVY(eYتOZz.q0iײpY-JM&4E"`URr3sx?yx㝃_]e3?xs=s'='x\ٴGV=OLZzYD?SN;J%Ny Gp,m<:m[eOY^6ٛ|džܖ'{Z-ڕ67Z?Ԭ՘d5,9Va3~l<~Yb\4ԴjE܈YE1'W؞J),L)*e.4/됃vt i*u ~hm^.P - -44mK -Bt ؐ@ 4@pOqL/ "&`CvZbQ<3fkib2)KSoZ*CuE2{6d۬CW7m -(˟>yU5[ƕϢek+ XU,t]Y쑣ϮN=̔ՎUhg!_{VT`h<M Zc;3;ݝݙ6pcƦ -XUk;Rp(EŎa uY*HBP*5U~X`wsgJT;sߢYr TE$~ۊq/އs 7ðd[ŞTO ZSq3ɯbF&&?]i[< M_^s/mAH|.z}IBJ$&m?*{WGU?_ƀ [í?SMGx@QKєkDbh ,gbV *1K`(2s[,&tJ+JGCugt+k=6ϟf>tKaPiJ!!aDF^Kk@$d}gDMo&3)ād bvȸ٩sv>MI 8 T`]jQpAgDI*JkZ0b$%f"@ JYTe vX+0enNp [pW9p{AҡL4+d!2s&oF"}npeSdf3Sөjc+җMe%oZBitW~>d\/z -5mû vJY_8IکVҞVIlVy}ַbɱŘXl&lxZHZӎLېa薙?d8#$wNN3amRw_I?'0x)7dIz_Wpk* at3Z]a:t.*W{22;[NgBK尴RWPOzJ2b֚$ΤҤ"K+h'C ȓ1: -OY3ԺuUZe*u+h$O9Xpۑk{ɺ+"w<+uu{VDKK*Ցmեu-݀֏R QԄ=G_g,5ƶ mH:ckZc ³"8 ҡ..HjE%MGqӥ,%M;k(仧y,Wgx#'2iCuD: 88)`v aejHٍG3b-{5׷n: 7(?}}8}#7AWJ:S m‡(; l`^`\YjN[ԅ"66n.ߛ)dhH9B A%hyk<[ș)R6jN0*yLC>x AMo/LXj|,jr{fBZ)w{xmI TK[N/<t"^B)E)>%1Z {3AKVd')I jKF%~s=_.Kׄ/ EU:j{OLҪxax RinboD!TrV3,TPdM,a9qZE!fb$D}H)%ö7( -yIwR(UU7ĻGLgmςadt]-@Qw|oߏ۽ۻ!\[S0ROoBL$RiJhL$HLG8m%[)ft&iM'M41ڢ06pKSݻ~ӼȾ\L£NuZ; H-N`@Dm hk8Am=2\|0L }lINn[#3.A/D[[z'~) 70#y*r\u֏=nMq.Bv\"jD,Z44!ͺ!so#HP.V p849 kVO|ͣH u$d#cO‘.!5j>Foԛ/{yB9Owt#T]%̞}jD1Znݔ3%GGGW p9Eq4P9.'ԅTu}|8ڂ*GVUX1 {E!IݩXK5g ݩMrZR¢% Lh pr\ur -^`b(oddC Aw`mlq9y%MV6$r {sP}muJX:XNNS\[6O[pKXGk4M7<\ -- ,P[iZ :~koۛպn{>;#=zދHD??_x|to+G2p"zu6".ʬBK#t ,xҪ UVIF.+$ +S< ih^q_#? i^^tË_" .ndb*CN}l l`  鎱ޢvXls,fae{YcK;v& w$?_A< oI%6<, q?ğ/y:op&)X ./A14Rf3^ Ǩ7ם8xlE"i8^Bs 7eN~ *ՇPP&ءHg .v( n ̊Lη0»{‡P ;X?_|{/S$9C/gs-y0O1}l'Pn&BE"-UT-&h'J)$DcRDjȠX .Rim':0Q;%R.&ra>#҆R "Vٍhj0?o96|]mTv!Gat5!}N9t34D$#CHWPUmNT7sPQG F0&*CC<),O3i(|!qI062b ? 
fd[1,A=(VV=3++ z.9Vf= 4+ WFf(; =P{ -5L-Bu6un,#GӍ-/Kѻts9N¿&*V'&lobF熈EK(?.ɧ٣;S=Z̩ϼV7rnT.}V<)c33;2q3HcOR&/%Ƿ-517Ex=`Cw=OQ'/7<=럚owO 1B2>c|u1Q2+eejELi2`~(.g*aeۨhC!+yZ,<]>tTҊyZVN=Qս+ޖ]PIss5rkGN8~ r'Aڱi;Q-BobjgM Kʑp4%ᄴBdYČPH2?)>c7O/)LiǏq@(Jλy1pJB&RIT jJkVB;_>{wؾٗ4ωI;\6Mϕ.hLSU imZl UtHlMƐʨƄZ*"BBHtHH`}K|K5!pWG  ȋ-|7\>愷rXNgOT5ȏ%cѕ[qJ;9׸%{(33v֨veʽZ'ԛUL^JRԨL -ѬV] %-r'^Nܩq)VNżT 1=R@joMB_SlrZ9:؈{[ -Z*@M*2#RK})?:C!-2ԑ;+;J C 2ng7C l:)WF ]:)ެ7f{⮼wj:+|=;X]'n;n^qw唫:dӓ'Xm*[@< tN<jp5 Կ3;]A:OBW:lk{R+um/MS |onڏpۯie_>xm~RUZ& l&L?{ɴW~/?|O\ehֵ'-3c'_zo=n} tUO7q5YxA_x-IҍH\YĎDM# qQUyC>g_a.ۢuĒA]SY;AxQl\ؑ~8|ZU9!o6w;_#2MQ3cYd0泄?,A |D0@_`(U}7KH |;;TG\ }=} qv\8-gosl.@?5AB?5,)ndnܠ>/fNwe -LP Q6ޞbkEF$$ "ƈZ)KS"LوisZX"aTc* \-x] F jضSz!BӨPhCEP@v %I/||ם!TC[o)'2'[; S ;:(@.UC;ݶ9*?5&WcX1w:,#I%1%G1zH^i,>Oe5==޺ CQ*QUcT%I`I6Kmj5HI%ۄmR]=NFqiTe1rI'ՓlLAnk5&4fs]r.JKʢz-m9 -R^|iDJ,gzMQ)9kRf*;DOSN(1MQr@Q$R`a k f5Y IbЎ !8qe*pXWHY5#[W#yzJ24a%kyk:jvB{c?g3*(`"^kUyKSX}4ֆmosY*+3p )PrYAx),y^WƵ%yIQhFɱ"f4=FI"=ɮ[CfTQTIcRբ v#led"ɒ 3r1hСӑy 2$)Jg~]U3H{ьIkؓWWg pSY ;8#kuhd1Ϫ_W7 VnAcE]5l\;>XyìF8JP13BE)I%YkiTXE(BHU̒U-qW}PT?[Vd PJ,J0ت#A& -ՉLu +*~&Mi -ݶ"64δk3IJW0Z*}̺&Mtg~{=9>B24dċ,)@@jи$ -)0`L>{_,au⟳a - xvyjs(B&LdυJS8g6.豸I׃-įJ= r o~vT%7f@z-e*9D{ۧUyF1e,1yCKܒuwzj6,  u]ie Z'|E= EY~WnuU7P3bhSWW3vx_63ҤFHG_bYGI$k U4KGB`>TKQܢ:_i0`603ijq؋m7 -ڊf!Us;zk Ey WX%y~}(o1*Ǫ' mr7cͬ30_'k(ߏErȲ)|([u\V}P -f?ǹ=oa93=)R_'|a >3笃 ųZk'4| -y bWan2?F =l6Rz<frk33T^xdNEɗ MgԾ+9rziR70\~aqVFߊ{iV$35GH >::-xE?鴘;X g7|{ùɽKO|/1GN5{B2#j*ǡqb-y6BwpAŜ2ю}􍪿T~Bd&oBDӢe/xӮ.͘E;\GϊF;'iմSv llUr=t}Ct>c cEPh0|kS 2WhK;=ÑS\:H3=w!>:rZkC]C/RڅXZ9~Jf2ߠZ^4]wA[҃(oM?1kXxoOW)+H)Vc W}Z z//s%k,ОBCYp,ȷ{hCx?\4?ыk3tZuM#k4U5 -Ը/S ͓hl18Jo5D8*1ףGc(C.Qj^.XQ:oUX_e]{x7.1,=ӵzj^b 'O\)%+i/Hȫ@ ,Gv00"wA 'n_4xGC~4G׍QTвӳoSR|g43ee L<+0/nrƖy:;@y@8%D|(J(]&Sq+9]&#\4Ͻ9'U1aǶcXwA4\FQ!?x 0cw{,Q>4IAwM m1 3ze v(nfhiC/E_0#ʯW=s|u>.csmE(-VFllƶncbb#b$ -x.\l›_ pǿݻ{4@A I+T|DhE &`** -EyP Z2S_tx-GiZ%wlrpQo=Ƿg3z4PiHë"Cĩ U{ %5;)Xf9]ٍoj7Qw0$0V y~{5>@l`Cyq(|وiD0S)g|ZVsom=;ɻF;]uۮd}ڽĖ8|7Mhhܪz{ՓyVS:zT;~>iyg\__qȊr Imğ{ -ɯ'Ж*6ŊL0;]ܗ~oN`wRA.ˏsZ[rڙ.n 9qGAd.>]:ݟ9y<?\JJ.Mꀎ-'uGVy%q9IHt$9ڿۼ؇9n9oX[##BpW5}@sLf6H)ey(yIOePCm{ܷϭ>dͬ- aC2̫\cȥsYyGr!8$%WdsT]bm W+SҞך"I1͝w9}\- ?4Ez< {wR8ߓLcUp1gSw/W J\f$c}N].5^dXVo%'T_j<^0Tq[Fy83mmCnfΖֿ!̪y~2f,EQ^ܑaKa C47{=O\%珓QUص])\MNwbNwI_X\Z(&0Z{`ʭ.J{ER/fsK4#ݳ OE kRk8H݃yoj][TYkc#cYCpZ0u̥^c?nnoug:Škw{tc3[VGXr}G4}x+Wrs6sٯϱwf3H,R\>\{l:&J"~GOG{$Lv=2>m$EJjm^o /E϶nޱ: ڙАsf~;"{5}&;w]; ?rs$mǏID.o~;]ޟ݄{7l{[];Zџ&ц|l“0t;y~3K6mqs6vGٹw1O淔yn>}7Dh=ۊxۍ5_`"4˞Avtd\J -OFn{:kD~rlt:}eK,TFz -ጛ2`gi>/v{ &p&:Ի9 -חͽpkf{.sѾM\ >vg#}ZVӷN[۷ ;eHO2rfWVYk8T%m>ͥ_[u;gt2ö8c4N6\(jغ8֩:]zJiU~-뾐37=t)A -1?OAPBp 7Z+G!4uV(. 
- (<xycŦedy*nm[#3.!i83پcvsGk:I1Xi?Gx=Zg[lDyh'M۪cG4[c+-|B_r)FT^9Pg"4٪?T/س0C :z.cZ֙E5N2&+c@|q8 {Y!jC"u$x㢜MGJ?{~ֻJi cyVzZR^/>vsdpoA2:,2`W!4@@X$,B K) 0 JİbmcAqR  Zh`iMٙޙ|sg+ :{xZ6˷.i7 SVqӞeQ?.R`](`w4wJ]ִ6i 8HLWf( v!FRf0b^HO't$=μc(8eUhw{srkZ;+6w$F%;mE:GK>/B_ZKzz{͑`s1=33>v(kejQӌ4xg.K& -ɷJ%M[ 8 HQ^SI|3w_$|]BAY,{*5 LlMwZvOi]̹-z7 ^ -e1B꼓Lj}5+o4_үkyFlWa-|uZ&)Mn lknzw1Oo_y`|eL&}];wb(k{pK=]vߑ): -0|p3_qj]\k'xfO4)U:KfptfOkΊo9[~Tsg `tu&aWv7ƾ'b#mqqӗ)fusp)u9)e\aZmƪJ雖{)g_-׹sQg9uNd;7~uh4f8W%f_?wt7}yL!Q׼{JY { -}!=`dC yc7~ tڮI Έ`YrG/eݚkGRnfg=@N/KVKx¬E:_6)H|{Fe+TCWKl>iAghYJzL6v -4p֋4`j؞Z2<]Yћ^2ޏd|;uMF6:yDpyp6}!xE\`Is2'7gƹX$[4m-RtwuȽ2yYkϱwBz"Tr}fY36/q6F+cx|%v(Xił72 v8Meߝ3u3v&^Oe3h_Su>߃*Zw*?E˭>6RiK똀Z z88$\I!"Ҧ9@\'M#5TEQ5 "*J?/Q${73|73o~\ -=k.^O;ęoN' < ?L]7az5{ҶqűĹ밉Ak -l냵.uoC -S琗K^F A{WB1^P_~p6>{݂|DUE U&Mrw{ 5W܋Β~b?~q&z05XU|#ě)LXMXsqw=؏&֥{i{p} -jE\,`}t.b),pU@\C>Uޡ ۈ^z C?yI<i~]Oǽ4wu'v n~]Ft-З2)rDW:蝰x]W/1r "b]7=9sxs D^ c^eWs!p;OG}l^5HzSnOc%DMo4gQ3hi$j\(.Qg:ѼyG_&ZEi;½DK.-K) -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -Ň[U=&FjDP& <9 V#RP%,˶:i|;Пut8Kುf|:G>hnmCqzt -Q|h r-9FiY t>!-Z$uZ~QV H[KK/WԋZQ#0q6~MLP z zm7oh7JY4lu8GضP@hBsyHy$oO̅NKPEL~L{e19_QaȚ!Xet__NBK. ia ˧nGtVYԈoyHgYK⌮"*c; .G1oOvϓz}0fmKR363\LdR_eFl5-׻zOHED5mPNO32.ܷuBoG}e43c[PH+Y1Rd Փ33Do2&u{DؘTϚCїf }(72zN㾈&3RYc5 5q#'z2#n䢉deҞ+ g%f26.BL4n F3[tiR:/ֽW -ZNsny ŌM8ȵGIߴS)OvՓO5~ sZ;ߠu)&]趋W[se -WK}w d< ?Di-Em^QC됅ϓ&ӝbIs9(d8rw{O$ŷ6O&ԀnAIhC6$-xי E )ةS!y"}EՋܐ>"WW!|ݓOٓ_xr9E&-&˞|a<9y")'pGyؤ'hE60g[sNF>O,,߭n~Kx<=Wڿ =<#[&bw('oY-yIʄƟȊ./wm{dE=ݢ)!%m^63on=VŇf\52y#dL?5ƍs 'fio -#cض#_~.Qd%_^20ц03,IFu7 q,Lqsܳr٦eQf8LQ!:!&6EIu4L}bBC(?ݘ1#?LoLזws\)[JU7,\K7{_EU -fr(k5wӡl殸]g1M;ӹlq#uMW; G vA\iݝҹ:WZJ9)ʕ@4YoåbT|Qoc_NJD( OӺ 3u̓c_ BvLܥXx[E9׏f}P(}?>WI{1U> -endobj -699 0 obj -<> -endobj -700 0 obj -<> -endobj -673 0 obj -<> -endobj -674 0 obj -<> -endobj -671 0 obj -<> -endobj -672 0 obj -<> -endobj -669 0 obj -<> -endobj -670 0 obj -<> -endobj -667 0 obj -<> -endobj -668 0 obj -<> -endobj -665 0 obj -<> -endobj -666 0 obj -<> -endobj -663 0 obj -<> -endobj -664 0 obj -<> -endobj -661 0 obj -<> -endobj -662 0 obj -<> -endobj -659 0 obj -<> -endobj -660 0 obj -<> -endobj -657 0 obj -<> -endobj -658 0 obj -<> -endobj -655 0 obj -<> -endobj -656 0 obj -<> -endobj -653 0 obj -<> -endobj -654 0 obj -<> -endobj -651 0 obj -<> -endobj -652 0 obj -<> -endobj -649 0 obj -<> -endobj -650 0 obj -<> -endobj -647 0 obj -<> -endobj -648 0 obj -<> -endobj -645 0 obj -<> -endobj -646 0 obj -<> -endobj -15 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text]>>/StructParents 21/Tabs/S/Type/Page>> -endobj -701 0 obj -[703 0 R] -endobj -702 0 obj -<>stream -HWj}WPOݻDLK,y!ǐ$˾UH* l$tWZky}}z9||z> jX:\J?Gk8?p~jw:A cpW_ӿOW߮v/_zj!~L?]UJO+Oe7~ ԟ3,?%G{nq定Nj qs)2~N-ߔr9P}/ü^ykƪQ;np-VNֺvև=A~yZnLpM/jJG"=Cwrz_n]dpO|eMqr{ -pƍ*n`RX7)]~хcGFSJ -#!!8s9Bϥ0{l&MJѱRwk!*[2.䌙G'{A֌STyCT ,f7L#J])maaA44<3^ p[Teh2qeGK1W@\1yAc"ujQR ӬK L:VQZư-oJPelB>haLR[&0SX;畘~R8jGyJ>P!(XTq!ff4M{FsR_ۖw^72Y(ڼ.tjԟqvtaUjZN6 dR;^%3*iQ=Q %4Y nʓ.AN3*m .0Tr8QeT+)ք+n; ^zVe/~7U> vv5m6/0Xƍ`.@+\o7Fy832=O %HAr(G>=q -TqSx`Mrl\YeXgMC7x9Z?qv!q/hKC?P3=^m5r `^^Boa8G?jrpKG$$nRQԧC-DaW [7L(-*f;AAM rͣ "R3COE=7}Pd6}ݛVy<!LZ#j=) JyDNTَ f'b_>ۚpv`7pS#7_*)JīT-:= NG5mmz!4-IT!n|2nT<7Bj/ݫP' \Er5uHzznow__~<<|=wpzӉn4fTAe~uץ ITEE]AQ0ﰴG^E#JS(Ht8S=V#0jqx4b٩IMeÑGĬ]`G?!>Mޖ.$ ފZ%FSnCh{Aξ't+;IldU)DkSuM@JV)/(@4.[u=ꇬylfuר;_l -YX !Cy -mUݷ8W9[18yIQ/DW.=!Oێy(>8xx7I_ Z1=bߩjÉAڔfryo崻yhmM<حWL`DZswj1ygpLEpYk`ð/#Ύ7ن>vMfBș x3㳠HάDYKhŢuNmִWd}]cly|M^'XfBע^۰Se% -EZAUEp}._q"Kika=p۶7bg"OŃUX3aaqUN$"*'& -@w5{C`:2μp7¾Ռc/l XDB'Q{0%Z&;v)1Vs>)⾇W/8q%\Zt:xD!^$.nl G)bB@A|/ M%"1cFܴO 9orFTcBPom}~Bځ 
Wϧî?}%# ,鷇ֻmȖۧ0vdwD.+%> -endobj -703 0 obj -<> -endobj -704 0 obj -<> -endobj -644 0 obj -<> -endobj -643 0 obj -<> -endobj -642 0 obj -<> -endobj -641 0 obj -<> -endobj -640 0 obj -<> -endobj -638 0 obj -<> -endobj -639 0 obj -<> -endobj -636 0 obj -<> -endobj -637 0 obj -<> -endobj -634 0 obj -<> -endobj -635 0 obj -<> -endobj -632 0 obj -<> -endobj -633 0 obj -<> -endobj -630 0 obj -<> -endobj -631 0 obj -<> -endobj -629 0 obj -<> -endobj -627 0 obj -<> -endobj -628 0 obj -<> -endobj -625 0 obj -<> -endobj -626 0 obj -<> -endobj -623 0 obj -<> -endobj -624 0 obj -<> -endobj -621 0 obj -<> -endobj -622 0 obj -<> -endobj -619 0 obj -<> -endobj -620 0 obj -<> -endobj -618 0 obj -<> -endobj -14 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text/ImageC]/XObject<>>>/StructParents 20/Tabs/S/Type/Page>> -endobj -705 0 obj -[707 0 R] -endobj -706 0 obj -<>stream -HWn}'G;X6>=cjd^d٘@ J@"$mUwPjRU]uԩǫyf-yzs0u=>cn5Xi+a3nv-l^3YiΩ6Ĵ-Ud}3{E^;{uWRțޛR f;RwkQ& ceLoɘOk~c3Ks6ҜQyے5\f`ԅ/K4KD/Ug_ܟꤿ΃P.`宵An}]xsBi nx m_:1Ax#7 Ezw^2 WaxwLRkGӺKZF[!6Xn0huW (Tqi4egꋾ)IvA9+ ' `&Lyx{XT8mL(*-fZড3[1xS0DkR.ԡt#(,rL&/ L@HnVT4*qtYUӹvU&À$uTapUza^D E)a$$iTu8~Å2ˈ -❔__x%]v~}"Xm-LA<ܡ&^n }X -T$:u+9Ad1lh[uUiHDh(-Cxc9ҁ`FËmu M}|BޣAy@~Xa<505JL92R}}ȾeVqbFښld$ -l?@xnPJRmlbG%}ȂB++A1ډi?$adTLQ@j5~ |VFO`+ 8Rb 0x?N.ҽ,DL. -dP<]t1TO9E@䦁B5d8E:eX,3y LEUCb2zvXSY8wh끿bp -YFR f*u7zI 8zUpnEZxɀ(>1MW }˦Iqp@I@ QRlK}`vda23@#d\,i@ |Ve(/А5Xp%eӸeMᜊjOax aΉV(>I͐6Y*%rhHuA+.QACS1_؆*7bs>m z&Dʌeӄ{WF^x!3,K6Yjr2<08)QCgj7Zx_SW1Ce8mκ9@CcNqP>cc^Ԧjj' Aۼkdvǔdk*%Il+l~uP{b&Pj _0ƒRE!p c;eRPsێC.mvѣJLXP$ t1DbDeTХҤ"hTZ NQ.bYLv=!ǖ7db-RېGozpwG` i|UC%,c"e]0bA˜)Opz%2ږU)ɥmR}`>Jڍ>iP-0KE[:eF~~ij8@!:P&I$Y](uB,IV3d}Mo+wƹgf@g3*2⾴Sh@g$w)oqWӶT dK5qIgMAxP޾~-=0rw]VPn#)͝"JzyIxܤ/^ ^#Sc7rfC?^-a{{w::憒_ǧ\67i{:a{o7O^표\<\Oß~ Ow{r>=\NÇ?LzsG8x%Xη2[% 㗐rw~}=#qKttYsBGiR\Lp&}píV۶MN㏋~Kۅ˴XKva0#oZ}cP+UӐ >>stream -xK[٢#h*bip*%TN=x3mS*CTN)3Pt 2Tg%CG—~JPâX,ѣP( -BP+.(E?o~5`Tc7 -iMF ](x*ǛBweJ_ o]gTLu;Myy]SM<[}x< cޚ,G{jO^^P怇C]0j߹3:m^zQ_RixOwd{OedA#ɃLyFq5y8ض+uwbzp߄O&;yMWsj~2zc#^kFZs)L0YQÖF@aKѶ-tdsGbv|AqR?\fwh*%fyJ1aR#70yy5K E - U[<`'/+9dۥ;Dg'iɆ,!AWS}x}[}&wێH4]}=VGN#VVIpH&w a0<2*#&:( -;XM"[.w+"Mӡtgäh,evdhӲ2ۥρOzVC }]1^`?5~r+1a~F.rt};sC\׾c[H]dյsO#h͹69R¤rYjKYʷQ"OZFߋe/Ujnl\f>Wۣ[1c>}@rBowbdhMy?W?]dSyvOg ]#o]NYZwҖ|.J=ٕWE[rT+wWv=*"P앚6aW.a(wwixNŠW:ŠY];\-]v9d:_={t7¤Ҟm/N'ہM/D=jjtM.0-Ӧǜ8=|Kr}@;o4WU4lɨ.xߪGAlSJ2tW}u-bpktZV:8~'wJ/=Ol3H0Ӧ+=AknޅI}jڥL2RwrLYü5rqTy~J\1<İ7tBrB+kYc ,WLYySu樊аź" f>-v$ǼZ%V Ai9<l;~N^b*N̚Y;zt:_Ư(/GoFEERGO< ꊜ--L2V (1C V!R>99Iq*ļ 7x~\k 'Pj?Ɂz=haWLj>ixoq]r~cetCͨV [']`p,k&&ѧV⥋J}Mq7ä2šI&0 u+W?h q7f6Y1c6<(:/]d+ y k&abE5پz31w -𤋮;m_kխpQ{FJ%б:&cNp]軋+h>Gxf*\P>,bTل\a20T.Ƒk_(ЙN0)*?bc~AH j9P'p[q,A5G,͍\J+˝2cf>ߒ"f֯vtͤK%};,*\yMT}v}6Mkg}N.TRaoOKB>fd.ڨQd?1097jsf-]g7äii¤袠.DQ'[Zsΐr+rYQ)p.]ķnSfB~DGMY59'[:ڶƥY9Hjі޹ DE@Ɋ}ƻY/XgؗxT"g(9rJLƂrmr,;2}v(L6jJV¤~՟ -'qN(:䰧N"_G 8b&sg4aR7ɏk x/a1Lk?+]75-ɻCnyբ#'|k&EHLԥY7L:jaq&hOr;׭'L4 -LsyOXٱaCQ -;Lӑ ?$0Z=eYFjq[lYRnR(pd2I8!H{w*XUr4Mn[j-TӢ wplGdOo߽{O%FhZN¤6pg*agquY}Cy2"Tx!7vpYWbK&?Lq&]TGiZsil^;?tI$던y˕x S޲!FxcIꪪm6N{drmr|VwRK&6d."E/)@#njz;m6t em`nn|[J[09n/.oC(I2/{M΄ɬ3g@YOR4wt -?vÒlL'v$L\K;+Ga2]w_wC[O|dheRoMH}\w«ͨ)>}tkYSw6^>stream -xkO҉x@x$1ČHhEFEQx=2VwqyQwYSU]]4 `4իVZU~]S)SSS311ÇGu]DppUUZXG6o---k]h\k]k]i+5ϟ_>$j…HSW'O͹S[uaMv=;;z͸YSRv,e6ܹ:xO!800PZZz+ZEV,ҥK333ͫx6m:}vvSwwHk[l͕d=貲2cɏE;dIwޝ5Ei[޼y3j_ވ!z|/^TZaQtZ۷} -RU׉?E"SZZXwPAC4 -+n# VD}}}yh5IBi9Dl\}=|uuu+$GoVJwmc5jjj*+++---}}}|ր[neZh5I>ʑn YyAFS333JZƦcnnnÇuZL$3c -QR'o}}˗/3266vi%}ktQ[[;88znE0U 0z۶m*m&ZeBMx3ghcz˽{>SQQN]qD7RMzzz߿!֩)uaaatt!reT>$ViIh3ϝ;wejB8U#YSmٲ_@]Ǜ5ٷoj -ݱck~7P1믿FnZ -ѩϦ|h]G:D #:r:$WAU;ܗ2$ 
p۝LL}&Sm؃Mbtj'GI?~5TUU>[@Q -Me&uuuh|m8Ip@|i{0˗GDɯ_NJo~O><>352OL@(hʭ)8!:r1}666Y rͺGgeoMMM"UaJQ}-fX+Z#G -9`}ısN[&rŋٳ@T"VJRH+ eIXW[DkI-X g2~H@e9<<YZݕ^> -5QZ7Z$kIu^ /.@|&lCoߺǦu.| hnv?DWjyTϴE pf=XZEf+++'h -|]vMwvae׮]=:{[Rs)9#?3}',$~6?t钽ڢ{#Ս7 *nZ&H$˴ջ۽{4t+L!ziy,ٻwzZ@]^^WwHk>ho7oF6}cVmhhP=̕+W4"+;"7ZɓZfrrҮM$)$'+v$XTٿUepmNL{Vj5nt5S-C[ˌ[V!j^Vt]Eƺ6\ǑfJծ:::Tm?.'SӦiWS6&8d=|EZF۲}r7P $+jonڵUjDgCzQcnMU61îjs;v!555ǏW=+N4G[Vw+δ튻]8!:a㧲uxDxIA}T6i'N'FWm}vŋOGK 9o0 qBrc1}?2c7~n`y`.}kZ\1F~V vhhH/5~Wry1-.i"+I - -;m-: - 9aUt~ؘ֭Vmx*aNZoy+D6~A:D{Pѽn'Li+$ꕜv+ՌڭEnt?=%IYCt FnžvbMϏ?=[W^ -LLL34U-xݒBF{&'D'o@N ёGV@VV1>B܌G 04Hkooh)E>Ƙ$dZ@ L[ g}xfhm6xo˲IBtAũSgoY8&]fiê{Qן@GiM 'O itm2{gFV)>D^GGG#IdڑH!:k6N]m -ԭ408!:Ї]YNL zws -ic-B܋ ?'?\ӹS\`Fr>@)f!..lr lf}Xq}]KD6?%D7$^y߼!:!:a!9Ya&ه ZK>pN-r_e_ۿӧOӫ4 uSCtA:md#D9rNkO\!v6_|ϻAC]-rNIz~ UVVs_y6y4\ee]p\ -=^8~d3<3 klylrƷq=!,p+I!9Ya&aV=ɓ'*ֵg5H7{zW:B>>m_>qٳg:cdćh_W2WO oIen6yO }cs=zgOcSdIN;h.yIs'IjDOO}{[$ |:1fS,̞]ogAo)sWR"hmΝ;Ɋ$H^L{OeO; +}ۯr)IM62򂲅Zg)w(rʖK%13("k!&G\!~{%MU?m/ Ԩ/ŴcCsb$i|)936kNRHVv$2رcXƟ|n ~,eLEQ7M;mË[GS eׯ}t>P!:}1{0i', UL(۷ouu`]oQXUhXW?}}!(;w,,,u]`~vppUUZfm޼ݻwKKKwՋ^y-Uߏ6ѣGWo]1{p YPTKn+ (&7nw>~_[[t=n .𹹹{*%K.477]C_200pM6^WT*Dov[ DӧOǎ[% -+ ы/^WV21 E$D@ggH|$DovEIIɁ޼yq:~Bч^|iKިM޳gb2Mz!zC_ D9? VTTi(0>>~̙|})Q*؜;vV(n˨VVVfrqmmQeFGGÛ$nTH~a[LeNMM-..ZU^W59Zlbb%\;ZKssk  F@/jT@lO>i-Vͯ^Wmr՞j%DUWwu]H/OV_:tttQPvS#hubZjh222RWWX -L_\$9c׵ル)"C>|P<ܵkJ#|T[h]ΝϿ0Dbu)Yܹ3r<&l7 vŋj"ek׮I)模>~/2-' ё~۶m5/TN}* -\555da0k7P -,\|9PgϞR>hC5o߾u /&"hyZ@ۤ {D+a'<[#Ukf߿_ߺuKEz+ݻZ6Pƍ7oe횯GΞ=U벆 u);ۘ-a!zBW\Q ڢE/_8q"ai[Txuu˗/P_?1YZT*_kѺFWs\@bj.2Kbӹׯk{uȶ-lM F|7(--Xmv}-cռ'OTj/ϫVd{Y=u -k]v -AZӻz:nt[6ȑ#?~tUmmS /m(}TޫB:͌ Q #ӃŔM֭[o5vZƏnXN-ŋ԰0eu3 -+$,DyMJ ->bUqPTh5QiZG\fqIʏ̊ Ἔ=1˯yv6p]ɦ[ú T ?Ȉ]׎|+'ZQ }S%{^9~[F;ˮ&hWՂ[KЊԱϝ;\UpQw˨ ]!<͌ ֕@ \ex7BnLMMݻw/<ʦj-uCtu;©"x&*֢Auu]迖".4GF Wƿ~[)BGGGs -@4%-okXw?_%ږ#wvv_-)T߿?33W/~ڳgWտY_&&D'/XWbF>}IqyD_ 5xvόjiiɴ҄خ&ё- [EIk.g_\QB+q5jL7^{!MZ;~"\Mxm~[n o!P{*%I!u0Qhil+ -ܰX}9h] -D:F#VtS9hA-`n-tnnLN%66V488߼yh˗/ۃ3V}ڟ$Dky˕#b*$aw*v< *-N+&]+7W/Dk]}A%i@80<<^IW}wnmucp|#r?ڕ$D -q' I=ylzzZQڽ#Ԅ@?~ܮؖ8pn(H|v9119zll,>g.c7t1ao޼oiiYݻwJk]ѣ?^ZZzZP<.\/DK撒ը•hܼyg˖-eeeΝS0ܶm[!yueeVٴiSCCßnegӧOz_.߿ -yj'ᚨmUM6_7oެbE*hsQׯ򥯯R@1W{zzFFF޿?==ё|&vd޺uRuɞS*,---w (W޾}ȩLݭLi`- -Gq;wsX px]UU j3-G[V2j@%mC:p -Ν;vObBߏ?ڪ=^߹sĄ2+W,}ĉҥ/^ڵMiWa\/S(ݻW\zU(ti]}ߵKz=Lzٳg]>2S#imsFGG߯zznzS^4֋7nZJZ^W\cOъ t_QݻJZFMxVRUuejW+*NNNE+D˸;;;-g[\Qwvv֟}ۅ>(AUǏp[[[kYa9pDGHׯ_{2ﷷ(DӿῨ^Ʊ@?WUU /-ߧbS?}А)MGnݻ9K ߪܧ<;99颫m#Dwuu%]rW!J6:C[lqRmldhh!&Dw+Q<+**269pVhþ}mԿ:+UIQ: qҴ]UuVWE{zŋ)B4|WRR׷0??oi1.cL3*Ǧ@ (r!:~gQ?U`GG={޽^Q֊~{{um]A|GyW؃*++B4?q߆=t:G|w,k#|OMMչB蚚p9z-V7U[TmU>,nxO\7Ï]]]_:N-_=}ؘ1!znnӧhg ѩz˗ժBwkkٺC[ג% ѩ;F[WCCCW+/b  @Bh"D5y7oϷu] Pϝ;wֶ~=zKKK^J9QJ\׸lcN:566Uao"X ݽ{>isQ?7bݺu+SVzZIL$B96mz////lm+oEf׻Wr{C! IϪh{; y4?ҘJܱcG+)!FjD B:A^o`ZaV^ZZ:44F> ܵl7䫮陟_\\|Cnhh[!ڶT޲eϭ***:::&&&TӧOۮ+$SHW3Pˌ>jm{eetZTU3g6mdUPፍ{괌ե6IXOWO ؼekUMRH_v5˨O>nWmmmL^,phc|r ]vM.?\+Bmk[c|!z:eeeׯ_g4YհvppЎt->M]yIP+ -+c$_ӎmH𻁎YsScǎ{BR+ em|qƍ3}ϠV[WQms񝶴ұF!?~lךmfݾ}[U/]4K,NwRUvhiiqjjI'-s_oQlՐN{U۵;?C`ccuOm:_N|j{r;uK{j ]lᮮ. 
5QvY!:hc.^'ڶZc*wA$#_Z-lVZͿ(tnUIƨE../Dr[9]mmm!6J_k$kSx"rjBV}{իW߸)m_UWW 'ۯjoҔku(U QZ]Ʒ=_g ;i݁i$r -IuniSj|$ -Ŗ(tpg 0UufuQW}j l}.h_?<+ͩ&{*Hv9iQnS}+ сT470}SB\X.//͟9==bU&$!:S۞/nw !Z#\5noo[5~\ -Wzu!ZmNHjyHj"?(O--_&8ydi:Iǜ1"N~!:u|N.fݷ"a4+nz3GVw?U\Sv+^^I҄Ƙc'Dk$Qj<*(PQP'(P);޿53klNr}^^ZY"zZXX)xwBI[Vfb >} -E~|Ӳ+-#"~:(4aH1f@cc#ab#-Z"Z\aa0LS-:"ZcPPzŊ A-nmmEXCb4~j'Y(EtYV(Q_53^~=*V>E/۷_zurrR1C'Bf a͛7e<77|pbEEՔȂ~HDSb&)M)9lii۹s'>CS#󳳳>-YD,D߯e92j3WTmN !N*:ŋmHVEyck.0?~h Et0766ak_"ZrhXقsbcJ -rVtZtrXꓡ*2y$z^ ]偲[S,)r nPr:I!B>{2h7oT믿apV27[DˊV 1pHD˖gACb#1 0$>Q"iHrIlEDv|x9X\&t*X"Zco A5wӢ޽{aFHVrZTOOǰ^E]|Y&`1相/644zѢs=&+Q$ڢLCAʏ?5O+JD봲HnJxຜT'#=uꔜɆGuUWW#ŋU5  "Zy)\WWD ȞYBs{>J9rD h9 BgOj-2_ (dMԙ&CWxZ#v o!AlHD˹Vl jEDz2+u޿Q%CWEHV0+BwFQ(CsC&P֖ۄ'xһ#[nzB M=>=F0aR봡XBҦZ!cN -Ϟ":h>EF oz|ܞj7iCC!B9yfff߾}rEլϡ# -8uƱ:|*|fO>P]ZZ:<<7^hO]]]=66y0t*2/sa&zEq*芊 &M}Νh1>B!"ZYXX8|ضl244yuΝm۶lh䚚\LђU> ->IVM &Uюo,!ݛ.i\bB!9ZYZȑ#9C}2El]Rׯ'J@m%DOD#OhB!)v166fJ]ݤDUUU[sqq۷oBuq l-]y~ᅅ ktcc#ӧO}&-U۷ɓ'(d}}c-'_˗/%) A]]];.vmmR2PeֲǺP֣Gdg=RAqJJJݐٳf$eee(F͛si݅>5SX}cǎKիB $l¦zʁ;+NW(EC%B!h7/^![Dc g^b z:6jp:"z֭ҟ>}Z(?~x 蹹9+KЃw6+1Ԟfm~~ތdzz 骂T)MUjQfpΑDo"7@#-\pAxb% ]S[5X;,w=P -BMM}K!B63Eaz> M3*4SDCAȎzZ)))T=*a\"ma.gϞ/^$>-Rorm0"A_ \yivC%" *&ʞM`usK{***P䞞IXXXhmmŷ(?11T("Z/s{hIZ^D'ODuȅ P_(#JoQj=t!B!8)"ccArPԹЫ&1JXy@Y .ZXmYhnۊDE4J bphР4B!lN-R"[  X+W -kGGGM%=X,ʆnD@ D"_z%>/>nxhh(jxhq(Yՙk,כ|ز6rTo5mGDGYUuhy]!B!X y:^"}hjhY=<&B!WDˡR+\Dˉg<:ZdW@F>}:??_[[ A-|D4lYDtss3"L"y'>vZVeݸ]v}֪2Y &%iMO-tB?Xm.?B -$֪(܁RB!lr%P ~:8]s]WWa3TDHm۶uttb"[9تz-qַ[D˞S 웚$obb! -g(_RȲ(ED[y{to_|OOOii2Ғ%Q,)#+Lbrsk'.ĀڑWA+e]ZG[[zѲ_UU%x)"rH~rȑC!B %඲&o ZD;!vn߾wTNW2Vζ -MyYY`N4hHD пQ/ZJ)s{{{PP:ѱvя=~T޽{B8r`oU[ -`q%y4XnA+[fI-ǀIӖ[bh>Ċh Z,q?!B!E6C4ajN++ Ϝ9],o]XX7C [<ΝknnN!%K]UtV&NׁtLzJttQD2&Sm>\vmvv6_PRgϞ_֭[/]JCB6k׬_ oHرc7Q5hiEѠ~rrU^|YUUlOyyy__TSLjg<]WW̌"B!u|E ׯ!8R -B!B!;vBA'Ad?.JJJ:$r3*B!B\Aݻ VHB!BGD`UUZgg#ѣY*('B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!ϛ[n--- u^6n_~˗:/$_~3::cǎ !D?apƿ{n>|:Gϟ,ynk0 }}}Z: V3~5uxjFtH;QXya؄vC/5 jh-[LMM---D?>9o袑gvBۈEXRVV5==ݻN\\TWWQIItLSSS"w9<<<22y[F7ش"z%O:jN:m0RD"Z$":fTEvuww}v'hOnG;YD+WT>f+**&&&̲ۛo@e9I - -I~TC"7PMND+,wh>D9DY;MMMlٲem3#n g&][^m*U֕~%jzs!###{]5PYo~92r~w2Lڳgo޼YlzAdVVt?6Έ ɮ_PDME:!UJ1mV"c&MDntH<,#g" >}|zYYÇ%~Tuuuwwt}}}Sn_ +ĉgϞ5{۷oGBH$ dM7;;PΝ;O"ItϘ'S۷ɓ'RI1=JJJrO -*Ϝ9311H]:m;;)tեH%"jd|||*̉iǏM@aw[dy֜x{XO[+*q{1BFmpH!PPyo7ٻKJF@.;͛7Ν,b"*ЦG :<_ܿ͜~S%&4~!liM.566֭[_z% NǏt6`(4O,mOOܾ};Zص}HB:$_N.\HCJ`uCfaa||==}#֒cǂ@P([ӊZD;~ TzصݎyDF0Œk^ӈ6ԯGDZJw5P1G)I3ONhT}ĉ*zH2,:浥C3>й9J]]][COb}pcT|N}|mn׮]B@@,x*{n'Ɓ#[%mM^XO]ݩ.@~jOWtT"Xy]A㫿.zi&Issa%ãd סuS &vCP[ܧ4!Ԧ6^"zǎXߢ.s_6.^hy,4'Q5|!h|aܝIj(Ɋ -*Qc q{B~<6/ƺzBL(Mb/NV9??o}M} ==>wiYd̬T|Uj41V#0w Tr~}vp=EtQ.tAn'aEbկ?vܡDl &dn988Z"0:7ΏgA𢤡@>@PDoh+e bIgۋ37tS/Bvvvn۶?z4/^ٳG2ybqځ/D{Q --]XN:ѣT {5 |!Z} QjˣMg]]]0VRRV?@)<۞k3p෈D-[,vky~+gnh&RA{2ʓLdRWwDy%D%vTn7nOt WțNҡR޾}++^zUNv - ǨIP6+=ԻlXCf Л%ʉ'h.Oϗ/Mf"M\+hPo޼IC%Qȑ# WA}wQ"Z|5l+l{U #֯`:h)OI" &7uZ /Le>?wi$ M!EVEe&B.Ymevn9X]K1!|#++N+=P\ak%hL4PIm|nG&p%0[4kq~($wRwz@%1q9AkGd2D:V7syRvC[Z)+嘎^E-zR:>dlikzszuww'ʉ'h.OχagϞ(Ltċ>9݂D/;GeGfW`Rs9p91-ҋZ$ic|qZTyEю>?H0B0b|$ͻOSիĻGW_&}q>v %utEtb]exx\߲DtPs8ZqOt}j0Qc(ݎDn?d-˰[zþ}fff4ǿu;fV)Eт<{ⅼ\ SάhfbvAmܯj||ܺ+k-g8m!UNwOjc"ZWd'+z0[ze%3Mzжcb\&d=>7\ԭQ-a7ykEQHH-cX%M|]NoCkCD%Ud ɓ'D(evT"WwAJVLES9nڋ&}ю>_p]@PPW}ܼ~D@%Q;T WptEvڅѝk#3@D&@":N+w8| 6"}ǐ5|1a&8%B} -\^d"z=S#ye)噽^3RhyΓTD[ 6JD -\z^e744Xڒ`ׯ_s"Qze%DoAю5]Dtsss57mj1 5юE>13Uo\-++kmm%,G%2~QP]DkzfblFuJDN[+YDtlLHEt{{{-Et -[h~wblհDt: hK=v/^Xx]etex`Ix,>n2dl$'y8e=)i&sN5`my -b|{jd1 -"[XZ]D֠> Mi; Qw]]]r|ЛB;::P9#=m7r,Sᢼ.\ )S,yؖr٣jm!cvZ>!1`޽( - ՇSZ1L'2~-'t5&T\-S̨A=XLmY·[}}}ֽ=7{Rs0B i_KojT$72(&R687t9k_OS!}j0@eEEgkTNGl"Z<jj -! 
Nd׏*yLHXhزn_8" U+)SnzEpr, uV&*O z~Ӛ; = -#{ED;"&0ZC_[f\d_a<8 YLj5]^qk|>? -m> $ꮍfe]&d8|Xpk̤]555ʑnN6! Gc>?jguRю2;-:Dr5؋b|nի^`.mE]{sDu<2w+=EuxW\}. `/:~x̩ш}r[[[t~+y!=2Bz]O):;;\7#IʩSd1E_vMƠ<{k`R^^gΛ+?z9_&.9kn%^t ϣʹRt+L9I$^';44$S$sCm߾5|/^X, /J}r;Oq~l(\av3 -蜇7cC4SYDt0 c-~KK U\+hw&BpF#JAhwٵkcN- -=ֳؾj>BEpEY/31"s7\F@1O3#w!Yb1??o=3ONN"z_@exiaY\--}jWMDcl  1h ftgFמ,"ZRsYqaWQ2*ћgp/!B&ĭ2s<ل9}or:yB>?H^tttw BLtKiu&7B! Eð}}}zǙ}m:A)D4$ȱe˖{Vk'!D0nNϦ7B! E"...޼yssG ǭ !8v%gSݛ!"z cuSPA>Wo~i ]033\_B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B/Y\\|ZB!Yx~F-"/n/w3g\Yǐu3B!BHP1۫[\{nӢDtuuZg$Uќ&B!*FLܼL -7)"B!Z|߿ye˖Y蕆":#фB!d}}}"V\e%%%MMMчUUU2PVVvΝC (ƥOnٲEÙ3gtVDD777;wNУ8===VNL Ǐo߮ĕ~$=55U]]ݍ>H#05>~~+5@TCT -6M"QF=AR.PSx;j_BrhPR+'Ȥ!!B!d(%r(>EK0KJf<0sssuuuѪ{{{MyuE9d```֭?h -2s sA~' -M CF9wA,>UFH==mղ[;8lK!BYMRG#W^_Y`/^8rC%?^ -lqjE¨!߾}[glݻ 7Z[[<4eiWWyHZvPݳgكHۑɝ;wJdWPp5F-o.lf~YH޵k`%j4?GZ/ -H`F;Ā#0~3냕32_WWH<(GںuT(<hV2I-TKVh#B!BV":JB@B@q!`˹}D4*,OVfdj(P\'weѲ6;jr\xy1?}TNr֙?ZR.Xd~:v]E%K^t8LZrnӘEY(Sddt…7o۷ÇmmmO-✚Ϝ'B!&+$eNv14Tgg#ND>lh؅́ Gh yAYYYEEųgTs#2]UU%p(7|#?ٻw5o N8,v7n Z˒֞={ ֬L&k׮Y}zHQ$fQy^! geK){o(%8}4ECRH~N!BYCRhsh\" jq֭[ֶ e -nddDC׍>˶V&r_q`/gHQ&HNEѱ}PS˗/EY ˔utl VTTɭqJm%B!BȚ;XRIuobůZZZvS!s&'']?x̙ S)[矑 ~zԦA|h}% e}_4JYJ8wl^ѡ;#_cpjkk%VFpF+}>x]{1jjjz{{痖_zӃ˒ϟQ_$LEN~%":S/*y@qƉ $R<(nkk[XXhii) -6)v!qHWDkC%ʬ5x_"oڴiffwwנF ru9IUA8_V=Qb]YJI吊)v!qHWDs877PUUUx=^"E;uyyyrrrA_ O8={nzƍ_5l":uTD+_6*ErPm"']jrJIoKYdniiy"Z&<+@V߻woyyfjjݻwۇsss?~oll| -iʓ'O2A} JF5>G:HvwAZZZp"{یɵ׮]&˰@R˗/DdΝ,/5Z'z_~իW0 dzH$`ǎQ[ -VE稑,ϟ?GPȆaq$蠺->wN477z?oDM!%5ȼy s"5!ݻw3'Hmj5 Wx7̻xɶ'N&H*2DXۋ"XI۷%;y"DroxMlY \iЀӢP02L-T\Ǹ38bW&7e{q1\q>ldWW IH bŭOb7SN RD5hhd#Fe(i7DwC>rg7p,~QW[n}e7ڤW͛oRo:tH,6>;w$7ZUv&pWf|m|7ըK.YBr"iC,hP |JGRHXd/n5XgѠXkC{|ɪU;NLLC-켶 ޽c߼ycɁ2. `}{A] ,B_XXhT.poJ$CTfx*Kl4lPϞ=! [Xaw"ocfrBD4Ȧm7l؀z%pFC_D+Ԉ|Ν;kV -QWr #C -֠Z> 4 D`m\?AItlp%RLЇ9脨xq3ğ-[wʔPDZ-ĺSҜxk0-6/y.d9DAJ)Cjbgt9s&0Uv̊W^۷_q`vMI&/2Gݹn5>;w a:3x=?0=?1N,w nEt j- B@c 3'!ب>;;iӦ\KD{%d+3D"C)/h%@(RؿȜtѵIW7"'"3"7|#no{zz֬YS[[tq'=id֭[|扖.wA߽{7 q޽'=t~7&iPdǎCDn( qʛRo|2Y*sZŋ|[]]x=yM$]ý\W&gϞEqN(X>iBڵ@mZ GL]౸+]mS^M?Gmkkc -(Re ǿqՕ ]m~6wA(2oD}! 8:6nP^6<0iȉrr擋% B}y;ED#%((|GB -\pxp ܿUۢQ 7nM:3z̛$!'ݜxEG 8R)rse9.oUsÎ󽄌1}ei^?rnm>CtrZfA-!7d/B>{T@2 -"{ ǒT,.APJ<6[e֊b -B/͖+Àhgvg~d!UxYzy2FH~ -wHDo- "o*DHIC؍E3%:o߾"6o"H)PDǡfj0KrC L1>>n2) ###A9y= s`.M+}nkG&T#׻̹9(e$2LC& -h%Xa@o.oؕ)bM ŋ(r ob*PQTDG>{ -`l*iT"c~__Dc,cI ^Lhab"U#"_2d4%燇w%dW\" zb@"cJj>\jڴaG++E -܂"K1ғ&|#\ -=YNh:֭[7==m}pi~%?4jE16T_@{ο{r(ckǔA駟L5M$]= npd 5 -u#ͽrVu?c{9e9700`Dȵ3 o2Kfq"֚5kl WLv"FCY+'-mzqqzcWD$I?&K":- o>SfѬ%SV -/ÿ!E@V1- юk^㇬0t{~aǁwW\" zb@,fY;68|/93BR]`qZ풒`"E {5Z988NG+q|TDGW !3ك .] 
0bXKݼx9-$V㪩p_!՛H{ODW[ NtZb؞Z~%XD;E{P 6/-incVp6\hlLD%cCCC|cYx"`!o#<ڵSe[" -vE1~1F-mkRN%'3780)\(h:nEtZC)/hGYwZh󸏼uȲزpᄹ#iQ"-@M^) -0fZzI"zŋYwtt4"LNtZcv8omhxͼdȫbUn7N hgΜ1w^2>|& +c9~.fwu<uaa:q4(rhD)6%.h]E4b8DL"'@] u7\xcWQb6t(`P<ҋu.)"R^p cx?x"5ȧN fI qN{΂aO ek0P6mt}|§ց" +'p rǁ;M ]z{{yKC0'?YD3>~k|RTJlْ{ ǐPߵ<rgxz믿'𥡡!=Y^@;Esپ};E򆜛6hΝmww7CDsg|r:rp-2 gϞ1EWB]O466ͮgb N'=?{:x#1 xU B`tB1T6$W ⴉO.K8C$Xa*B AszZ:a\XZ"F~đx\ Ja>eFFFFt!XEFC<4b|!a'Ec$Wl^?x5|R A05;ӆ^]9GC_Ξ8gU;1BqkdC)/eK9:.z#tV}ps؈Et $k%d7]*ʼn}#Zz$F1@~Ν݅oK4 -q C cfW|-+5߀w͸ڲyGx'Wwd7uTRT(zpkEql5Q,"DOrQQb AB F k]otZki('!5ӆ4/!+׋WG7ָe+}}}t"%Y%m3[.oǑV^Et4'#$!m0d<5~ jH)<'Q;fNwk"ܽO*B"]%>QNCCB"w!1vjjj*>r1H:`;[ވqo@V#'; \K<")v{|i[[RIc_2w^\ $ZK~ ]|͛sn[, BV0[XX0ϗ0sp746z*[A^gDrGD;B^qz[s%n1;;|?~ #љK̗'e!j in )fC'N#vXDcO:;H8v./_5pɓj>cǺ;'!5IiK0&.vSFM$94Ee *^jVJD4Z9ñ[xdN:F88Bul!;1F&$G6b7qWCO*9aHA6}=O"H0Z'-c޾2Q%*̩D[*_ gϞ Cy`[.V6[l{ćK*T(pV"Z|zDk o>.{ _~.yl_fGXQEcLTD+(撼؍҉Q +̄GME9v>Ǥ(J;GVEQ(~"Z ܘS=%@Etjjjq+6G+C3"ZQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEq׷466zr祲AFGG{zzJwܴƍZ/((R!tuuk׮-wFڇ~z۶m|ǢHבh˒EQEQEQ,Ν;%};w,F>BZZZ9dY;MMMZg+ADwwwG,3G)m۶ *xqqqyyũF - _jǏ,(((.._|]11͓6m -L!*ܖ-[ss HQ.8͛ϟoذ!X=::*r@ʼnMٳg'ZhPhVJQEQEQ۷oGFFw a7gϞ饥ӧO1_ hh4(Lvo1Kkkk-Ѱؽ{>|pqPp~Sn777lV0~tCReϟG˺u.VEQEQ2rW^ap~5k$% BTDCCsJnqq AV~e333 O444@pj!'NPqLMMA=z֭[_fW"}&cs兪>/^ЀVeXGG +h+d޲BH坛;˹pϲ) q;XՍԐf< 1ƍZ0#Ep;v:u -E7М+K06ڵk|'_1?ߓ'OM@߾};11!՚ Z:rY;wѯ 0]bp2VEQEQ/s&ݻϟ?0đKDC A k93gkVDPDCᇹ{ 1;;;,ٳg;CJ:ü͛7(HÒ$e`NnH?1ź`(X]]]n"3VNW^YX{x2n3Xի"3Zd3KZ6?8SYkPڍ\r?^ S SRqͧ^k=Į"zDEf -EQEQE)}}} 5nݺHD矦^NH .@P̘²3AoYDpӧOQO޺u CT 'ggg7nO8=x ?Odo2 AGE )\Q5.d) (2d漧+DDZ߿_S۳իWs@0>Kä0,}*[5ȹ]T)yw^Ie7@{yn6no}k95'n5"X=\QEQEQm-8jllK-Q!0)aGMwvvB$&5]]D"5DKL4%*^nt"- a.nS떭̛»t0dZ8ֶCS{YG^xb+V(R@bf`\`) ?̛낌EQEQE|b gen[DZ[[v-ͳ8׼jժ s9vWuR_#<;[ -錍+V_tq"ZLY*^]__3܊[yA{(((J%ѲFR}:WD{k[n;b<{,iN8(^tϕHE4#MkQt&x:s ˗/7oޜXv|ˍ?~l-Tt:ђ,RR.%ùD3gv 5#/p (((Tsڵ #ho"\D]]] sرgϞE_qet27NLL@["!"ǀ,}eFطy1s ?c" ?̵|T$K2[TDtW"'͹~ܢ4ǷǠ?O>ˉ(xӹW((RZ dNJo5Ң":$\KB)CD_\WWgfJ$DD,|`!.\D8p͛7f"I_qyY.Y^`_ȑ#\m-3ie֋f >n)hÇGhnQ0ls3B[}ŕ((T>ȅDGUUՉ'8('DpScٱ;>>~q(hGٳg7Z~0)mhfwaa<+ ;f:oP;+̼"444| kÚ߷o!0+QiyUWW߿bh|ո"oqZ @0UA(שSb@ϛ4V_q(((6WV*ݱ[QEQEQEQr#KcnhEQEQEQE(`_^eR2((((@Ǝ^p,7vtҊ((((((((((((((((((((((((((((((((((((((RT޼y?;/:mkNNN~Kq9djjjÆ K[ZZuR9m0-/EAb;GJ<ǎAѕ;STUUE>|0۶m{8d_.} ގPg)YmJGVKsǯ$<37Qc_"G;AP=wbAR)kxI%+4b?S"/Ǐ׈u7NLLLNNVHs"MEt -/R9`C*qLthmaaM -.n"":㳛vs(1|*>šZxSJLUU8k֬Y³B{>\p>r6ң -ыH,(@pvphtwÕ"ZE^|y *JD33z?i:*+( 5Y":P(ţ"?~|~~~iii``ŋ۫Vd[(ի?֚I߿ ERttttֽ0߱cǩS.9f.4ưȦaFHwٽ{k޽{koooouuu&"(`-nLdE٫kq۽{,/<iiiٹsʋVfRAҍ75> U<33o>3UHWWk'^K%dC?==ZaǏO~#+E8qֲ7y&6eVݻw_q͇)l0ɓ' SSSHѣn2 w|Za'K+bx.o|򷅅1`,ϟ?G?MN8([o [*H[~Q(q'8ٳg v,+/[JV9$06uRcP!a> 9z>޺uKlÆ p\ ? YnށJa'v֖Oyb+'Z;ɵq%Y)p&d{w Hҁ鴶JVC[H7%E-1*?o߲`IM!s{!6i1Ҽzj&XD^^U".@)fnEnkL+agcjllٳgf lnnVPzO7CG$7lkΜ9ƬorzF,(uF0Eo Fh,5hP1|.=tp E"N A .o>}ʕ+!vcE=%F"4+^ىlܾ}!o #T-28'i+>|Xr !vKضl:<żFFÈd0Ԯ]iIl͕`dۼb k@+%m0dV(CL!c /"1^ I*":M6v! 
EсKQR"z%;C/?zh%;hjjb?ƒMNNٳLz1eh0^f -.O{!٧Oಋ/WU*^"H$Z~S\ Fk7o')%X͜2>(/bECC5U"\t˒֭[Q/^P߻wOfpz~G>ЕK:uB>~@$*^]]](Է~n8r/oTx+%)rn=dwAmdPp -H !DDD5]`|#F`!"ΨMt7n%JLJk7N+0'ѭ[ܝA -?5:AĄ)[a'K+bxƮL@(L UvY\"\s!^< 2Bh(9 ^kUwD+ő"2Aq2q7Yi9o!fWgIxAjZw}N+8 09rC{09}̘k{۠vrA+1vř?DD !x;dc$Wzǫr?<3"3\#wMkC=pwsb[!"l\ e6j6:Î0L d˨2y/4| d-VkMȺܢ7İ*{E4J|P)bD:\2kY: Ls٭9Z߿]\iUds۴+E;pYM-Eݻw!k-I.)m{npP|F":pc$-D9)DD{M..EɏRhif3hѝ?^>d$%m:v_V[[{5@gMeUl{/uH/>Aolll9 zYcf&9O?c<bĔl5 -r,LVeK>2hκfU>ݡw-CV~`Q4~ݘ$֭[7==m~D,z1.wĠƊh -68 nkTQBN;D"Ogt!i}״s&ֳT\#l6BGmHtN.ɓ'۷o/^1$g$LZsD9)DD9YBDcnRv(v)J{9w~""a?~ܼ"i>{,vT"::KhdϞ=rꚚ ._ܞf>t,FHƸg$}KfVul(K$C-Sv..*v;-<}4"<8 # -fڍ TR"\HN.#7'x":]K%b]ވ;lN*Ð[*x љOsc{YdX=- -q4?>JODCDZ6 -La'w333V,+?;2}m"@ȊI6&eљ$KQ -E4;2|jfK,\Ds<9rJ|oT$Y3h^)1,ud̙3;YkUJGe -^+]MJ&Q-\͉t>=iYݸ/qp6ѣG̉n\o:)Sflx"(rn-ODgoD1o洦.iDs!0jT:s $PDSqtӮ1իW<=o#}rn0grr3ěj56h^Ϥvr&J [*=t!&@C;D7p*DDlS6mB +`/_̃0B`_"Ł\.E>qwFR<,n赛P__?77Aí᥼,c,љ1\o`2vx#'\ۏn͛ n%siy8܉+J{۠Sf_ÄjB|YC$ID6ZV,H~'Tc\{chnɏuH+'⥐k4pm>r;SSS@}2G3V:I𱶕uGу[ZkyyixmAv[H8rR!վ>k}J[&;} ~ud ,N;rX*:-7d_?V^E]˫ap'DvRbTN*JB"yRb"]WWge3b7u+w7be'\Ds;!~cT -YO_)VK8$%)$|.F2O -Yk{۠ްS ϭ|v>ڊFZ܈I%#k=2"}e& n["Ԡ  ];b ]LZL%)#zdc5}݇(PD[IY-|;,Crl vtt$?C8qBJ$N.3a {Z[[;44dDH:":ђߪTDt紜zW̕AnfG!T9Ij7䤭 n2)/oiEcW& b!e" ش[l7-c캅;566[L0k9ږEt&; D}Ni&G0DW`XrP˗/sN<=AWp0/{mo8*$$b{`1H0uƲz5=uTkkkD48vٹ'=;d"Etizp%$y8[ED%љueZGgϚE(J%Ƣ[#@ɷCK|'p7̧ nVݸcu)-˾uNdrP(g -:kN|װ$KGGGS?b+=Ez4CCC­Ç"Ld+'0b7΄~RXJ<(Ji(ARAE(jbG0=jvػ(/e$|R$ (e J8hVm !1X@ gb&Ɔl[l%Jxfl({WmnnI5IU=SҥKn1r:nCLJyWM/i5PU]@ h^sС4ҠzN߿{FgωG5ӣ7|ͽu@.,汱o۷u4'NYtEZ[[?|tzzwΝ%P$k||<3w5q%%γ͛M6QeI>}tPo6ok%+W'+V8p@ƍ($T)WUUVe,x婩)%333^:|e˒KK7m%ׯ_wzbrիW߿6;]_omm jr̜-[455]v6 -I]׺u'&D+I<|CCC2k֬Q$>}ṚkD+J5Qvlπq`` JOOOL6{&zrr)X۷+k^rŋD;ӹS|'O}}}6}͚5wQ?66n:W-_~UۮcOzdFFF _RiѣGׯ?]vttTI2hRieZ#(ImTUUeٳgڮ޽{2>KۺuԔ)6kkjj촻1ǒ46G'vAVi@fD*m˗/$ݻɅKM s/(;̌>88hpWv_~|||jjj֭nӧ'ON+pewލٮMv 'ю龾>mŋkXŽ9w)?mۦS MV#SXI)oIK-Wjڵk׫WR߰ڛ皓RAeʑn*D믅$رcZ`,sfffw+r_h -g 4Xjr\֩$Gߖr/Nri}~D6E,O\o)U2իW)|hmmuI s6LS_:|ɓ'Iv-`O"Dˎ;U[rh5KggO\jժ鶶ZP=/_TVX,&ӧOk]@8q$Z_LOO?~a!YW _ .<|pJ{={bŊG*1\n]d A555-he˖޽VוVGbIkoӧOW^]P#?ݻwW\Yw$ڴOOO_}&?kʷ2w{|e+ _b[ӧo޼ټyÇƔ+[illt%h]Ϟ=kii o>&7o n׮]s|R_Inl~&jW^˖-ZJjT=URkhs}R@5w>|8444:::99?;5^vRRcǎ;| -/_n (_@yUʩ[9i2LkQ2w^WѣG-}.+U,<0DYFj3-$ZuS˨ߖ ٵkW~Tܕ+WhJ3g(-d}aÆ1倧O^b%###?~myef>~xƍ v+%2Jnݪr~W-n+_禦?C lɓ'ĽLu֑#G\٣|_e*?U(sO}Zͷ޾}]xQ띚ڹsgKWU4};P?v$Z)˗\ZWWW;[ē'O+JƍJ8P׬Y3888;Rݳg&箷ٔ˦S7ӦM`amis:K$ԼOK]mnsIݻwhK]s\B+G z^zeʻmvw[M? D+)EU'%n\L2V:22m۶'I3ԹTcǎ)^$S6mw/^kjګT֭[uݹsGrI4|A oOOeO+>sLvncS[ZZr>yN wU ]vvv~7ɷ~;<<͛M6'Jݻw[sir;ʗ=zχbMMMtvII4?qKCcOF˗爯\.e2)[lq&cccrE[;VU7U[TmU>,nx|O\᧜J'''޽qaCFFFhH߿@;7.;$Zvk?Z -+>sۺS[֯%]]~Ç/uN#9_e}BЯR)9G3ri+?b-U ;IWdH*V -AEHNİ,PC)_"gmB^=Dٖjϯ3===66v̙*פ-[ =y$$ыBSN~N"Mlk׮W7H_x}b >9tȈQ[ZZ쯹'4S=Yr۫4;1 NXDԈ$z{C97${X.^z=k||<{TgC_(Ty~]GvXQ$KcEDוy-_rk䠑d2TSp^p z-?$ 6h399mQĮkA#bL%*X(#N5KGU._m%VDeڕKdXz=_NNz6h:v:;;UC'Od-D$"(DIBXl٣G>~x_^S܅ lrʶ۷oN_/^a?(-f$gffܹX&uǢ]IBȎNfnP :2H^'F -&ƍ|@`۶mn^bO?GFFEZ SSSn2Ͷ<{lfd~W8;7#H{kk_Bn-߿_kW&ϟo*f'׶ڵkV-?^l>5ѣ:hs>iZZܹs:=@>sؘj>7KF-zJ%^IVԝ\鿗/_֢MK.hJ~I],VZUUm|͛jIݻwaWW;yK - -q -t}%[ n jOy2DɬcS~µkp*60::n:AhbaNϟ?Os 6v*'ؓAm#=?V zW}?\$:? ޕvZ2٣^x)zEPO --Ӧ8ԓӊ6Xj?%g?3 cUٳgmX$:;(5cP{);ͩɿb аaaÆBttE[OǏW̓vnnm^_;A:*"SojԶDkiMS]a'cƺvb&Gk6GOP'bXMjIİ^m۶`3˸Z1|%%P3m6;JP3baG{M'cdrϏlؕ=کT<8n$Uv'+Lu/\ -qZF؍ϟ?\![!:jm~`SSTRܳg-VQ\CCCkkb-ק1Ν;ve oinݺ5;l-`O_UUDjTH[jӹƖVhwvSev RCm߾]z7o߾aNZRu)h,}jMvVk`bZ@Nj{*L9=؀:NuFV[a7Vbcrn@wz҆tvv\RE_ĭ:v sTN[:|݅SᏂHIY&ǫ}ΝAmGGGՋO? 
ifY |MaBUΝv - wE]{P]B/d}Z֑M_-Ҋ'\V bÇYM1ze16z>}ٯ*M TԻIZ?$cXgKhmu򗱽bqѬjۃ:RT}2+&Ĝ}\}"EuH]vbn~ٱSspzx1X?|Y}kKW_217ʼ/#b"ԡfn!ŊbQbP3[ENncDNϕ -6u1K[GT֢}=HGM[:::wK(aٓFpD}^ hKNjbo?KƷ+v}4Gnkt𺞠-f\}8}W!;j+ -sϟ>TSk'jxx-SV \ˍwGyI%AN.9I0;j75֟!lW٣Y>`-+&Ĝ}lϕ+W\9i]vb6us)}'oW`ZE +7 :$u1"R񒚷f Nv=-$zpE%5UdމcDNϕRy&ъBɃEGIEb>jbhlQ2M211q8wYO>Rmdw_&蜥X,X RØ<Ν;p$:8f9ORA5F -wB#dܘƷNG6~P%х៻bseIn6(M~C$A?믇݇1-ݚ]klZȒf4~^|,dL -mm1'kU1 hx>i#YLHi `W%B٣ޛqĄO{ aL؉i7zUXPYG1h$b$ {]E%Nq!/b"ԡfn!eV<jfP))cHrcW!bcDۅq*Qs&:߿^Tڿ~h<-8 /4_RdR$:y$ZӓZHv|\NvBLSKLdkl\1#$:x1ojØn*ŋo|`;6m^_d$ɧeKM?qdD6nˏ/_:6f#*{Gٙ3gժȷرýjw4EqL:dwRИayٹ߂z0%~3us,~ <JV^?53Tvk۷oO>/nfTh9pH 轵5j$Z#SNkN1~wMb:޽{Օq79yF6pouM.\f%B s/!w$C)iHzFp:ّ6nxµ{/"aޯ ,gU2,K c;e$} sNLi}o'c~"FL*QaSBfĨ~*)xBP3CE&ŎHU175vŏv4[R]g(#.3D#I^M_/ViEx_{͛7~[f ԕvm|xxg$+~/U c=gaG+V؅yvџ*6{"v j?bbhN>-Xe6޽{8Ώ?g%W6^z՞5+{|o %/2blܸQO9!b#E\s3$6 B$ڟSw%hx|ҥZkCⅈ0vM@tˍء>yV:D ,j6+/\L_}R ?{G?x`@Rd]HQN>]!Vz5?-. D$ZfmXfͳg~꯺>DUV LOOպ.+WXuccco߾ݷo_낚nPWK^/_N8P<88f͚[Kbi[|}ifT1UO5Q=UV~,9; ެDي,f. ~]~z/a9Qt*8. Yܸq~x2_ΝhNtj^7Cbڭ"ƏS$z>wKB -yy6~M={Xڪbt;whll|JDgg>\h[] -$Gg{{{9 wGɓ'u-P]$wI GhH=2vZ9"Z$QgX$Zm[7m3V?-,[%+.]RB~$_7hGQ$Z6l066eT@mez{{H"NE#&O!I!fq^42Ǐ<iͯ_ֶtvv\0w7neT.tMe߶@ЗTٳg?Ij;w05$޽{###2oݺu||ܟb/t$ڵk4ihh8sؘ --]g Uba/ӧOÇΝ;Y+ClۜkeRMXn355lJSѡZeԒAȺueOQS?~\5J#kw5)v1YWi3m jsT+mwٳGXm՝^ORA:5h׫hZQ UhxxX jm6un:__@;w+5٪l-tZTU!YuGPٍ#׮^n={7Kv=ϋ16Ǫa{:Z5Te,q$655)U~!ojVhԅ%B3s7Kjn-D18$涺؉ __Fn79Cn 6*FYWw>E[RckĚE;48/dnw[2b{0Y Ri.rRҒ-[KhI-sd NdFZk'O]nO1/ZWww[&{vNb1?8b*'xZz˗/dGz%z{=C/~+;dRui-XŊ ;10Osa'9y&1oDŽvJ/^cVݬdžN1Cb{P"($:W$Z$9Dۜ`df6\MTJUnS -&voB`}r+Ƌ/F6~ImQHvU* JPGƬu_)d:-2V -`'PV[L!\18ha ˃BԌ{ud8*D Vo=>,iɍ]1Fj 6"Ns1s -Y_'1Il 7zA._sL#z6I6E@P(MHz 8,vtt$dwU;Iϵ9ɗddnE?pň ;0rE?ͥ?NcjT⪎A#?}vQXcW#:+;r|TV})TF)nR*Vs}Çe4Hu=>Z”vnݺU;ϯ*Xb~wUݻwMKP+:rHI5Q\l4{e[ҊT!ӧU[vi\-cVzk׮e쪩Ѿ_U˨lGeyj]ZcpKGYs .ڵO_zeaNjl-jt*>|X`3Fb?wuwS4_NnnBAr󓋹mDީ~6Pe{gSѽ?e'UMLhj%&Dm$D*(h+ElT|(Xj -r{fs̞Yf\{fφ>sb@$+%0w*JDKҺ.$u`^(2A쥍QXt Y!Cd.D-9$2ݼyS?+Fr*]GsXY~=ze͒|S[[dׯ_)`^Ldg229x ۷ x@&`|XFTbFֻщJt#4,a zPlnOD#ʔ[e4U  *2e9|$|P(%8 AGF0m$v QKAyڻEP yQʀ477kehhtDxCtvQluҲaÆ9xՓƒ)u7JĴP3Gnu'i*L Yݻ LЈoy1J'Hw68jL*s,,҂^MCa"Tz-1 -QS)N;R\,zЂޣ%EDˁyLmpG\{D3Bk(0:+%b ȇL-ZTre O1Oy.`愀,ȚfbNE\b 8=ւF?*|9[)GZ{w",;IANu D#W oB?EJ߼@P<-9s\OtV9 -XN4Z0v4d. s@Ex}:Jb2"%Keq;"_xϿx5AЇolL:%@e$3W2d^IyCOO; jeۤܙ89D"!WIm--]LvKd+zɤ:z?VmQ4z~ɷhpa,Əzp&с-2)DqHѠW2d{W*]-Ü3%=&vpq҂FCg;r]T*PSaE`XD{♉K򕷿1I Xx8OuOW? -EÇrv!/_ܼysU~[TK;Nv*uY\8s3%{8DVx O4/"4DLϋZdzs5}Z峇.z7'*+O^ Js#%aѲ}KvPJI»SҙˆcSf 411{ NzO͋3--7-=ۖl[ü{ɤ:tέTVD+!ȱz'!O0u (3<*"WҞ_LG)IR/eYǢAt hrydD[q=a. 
e -qy} -EtIwR=Q~?Mz#ײ>vdwLgW*tիz SA6̅N @*,hP _uf>3CejjjDC.|r8EP66/_\,xY[|X9@㝞Zp1ć*y#B9S0B;6@\x1OdLG$Aݻ'XԀ4n Vn*"bbtlgQA==)km޼Y"*Ltmqi^) ZtѼ?҂.yYz9zn<4'DVFqIȣVJDv0:F4 /j g a9rŃxp<2`2EQ7^(e悳ZP<:p(8~RM,BbU9h2 Y2Y@"҂raΤNzsIpb4#h)*ї.]Ef&a0>:5ZbnNTpȾoVHvz hkk{)ٳrn2?-Vӹ-a"""\8xG[ydo;cPHH~% -3mu4s ^_Yy̾6/-m StbtG -حر)zHX 02bknI̙KL{D&+ݾ3<(w T51sf(m1~-n8=xSh&IVnyo!g+%3lIL]#MOOO>X?,Dե!x?D 'gM*p*[.K -,\nL"bщ.<=N* sIL-m!=k>QzLmyx߾}mtg1~D%:pؑIv8g3ѯh9SwO]5lQUw,a""\^)92fh -ؘ>D֧U6l(6< -{} ӕCa  ˔0u3eV駟7Gܚ"n9- @-Ii)К---ZT 6{ύEtK 7 kܓvS8(˻EtvvQ] oNNƬ#8 A9rYύBsREo(LAA 8p\+AD'_7Q+1@2:EfdaЂMMM<؂PDt0EʛDg;i=}T޲e 211F6>عsg`K aDWo wr9 Ÿ#Ѐ > %A՚ӹ(*7[DtQU>u/Ǵ ;把TI BY(p#G6nrɪEvtwY!da!E 92ȓ%RUȖb!"%gB1l }y!"ic 2pvA!daPw9s•(+WG?.B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!1'Oڵk,{?e!B/cBYT2ShRv.Jzt.kD.He,UVuuuMLLLOOիWpU:ϟ?ǘ2!N[[dKKK+GR ;4VJѵ/^qB1XT֭[744s!˗/-97+сgR& QDsH!aYX )#) -Ҡݢ,'͛7wYUb7Gy#BJUr"Bќ%.zʙ{ رcٲedӦM׮]tR9wpVF0HssL Eȑ#/_wIi(@PwqF5)EQ<3mʕ+g< -b-ZVξ#1>xe˖-ʉ qOOJfrmj"JmfEcYߢsLipu!q㏎655Y-_]?22V-<>ǷH`FNXL=&r]LMMٳ " χ?|~қ6m/?AG ;-΢ ;yG.yIr*#T޽{:w{i雸ݻնZͼ֭[W\]۳I4X#pN7dr5/744 Ju }=.*^H-a'p]u$@P5&5EEq7nlr$lv@^23A?"dt6]%3jLBA+-2mh WMWIa%JELߢ#L98b - ^1PL3uD@\y[MkIbϷ\( ;. :a[4bXv2  sC !,hΘ6YWqÎcǎ̆<61=?(k۷Otӷ14h7Kz~9%Dޫ̼ -GU={V2=R")b޽@y^e.Ett -E0Tybb q- G !QWn%dV)GDcb 1tlv˗p۷'NE9;;;⋚?_j_ !|߻pႛFBGG_o/Sazaqe Wtvs A4/I* -o_VeFFF?ӌik[i&\˗`7DwM/>%DLCo۶MCo#ɓ'HZ={i+DLJ`7nڤ7;э:/^ *QZ=bB&t9s JOTP+'~@6l4]]] -uAyԔ.H&{`:$sݺuI֢3{Qo! ;&D 1,DO]sY}‡02`.А|^tl=qW} ~+1B& ѰcA ᫲ -^~pZMj -_Ob"U6# Jo뤰BV`Y@!OM5BPn<,v3vltzu7 a,y; P ]EP5Doi| NffW_(sN2gۑӧeIn0 IYbga%??̆Ȁ($G_N ؠ#NeSE gv@W[?yhrra^~m ;!X`󍄝ag鄝,ÂcX>6'G2dr1y<-zJʭ1X _=|l -d4fQ`K;iχ6oo`r .g7cY,Y[nf$p#=-.:*,pf6Ee0!XivᦱlKX!GeO7%fV`6+lz;fel1"l6VR٬g7aժUgΜ믿$xfM }t4hݤB"7;D4yYfhYe6˦PwYӂVJD'AeY:a'*ÂXDnNP<-Njsz Bhưv &zz.ښ]DWBa{vIqفn+.ݧJDtE_P"::pYfͅ kd8sXlp6٘%RDW>|MdGk4KӒ/rppPelVbԫW@V!:u[o Bxatx63#A"L!wgl_{DvİOFFF"7;$G/!.>t1TPE۹=D'N$k1nKVN˰tNna}±K *Psb._yh -=Ьv9~{l`(菉sWya'h)$Wfnu>n*<O<_,nrDr(#-G `~(V1Mc19%G[[z>9{,A(`Xx"z7t8Rmm[oV9~Y[2 ?ǎCws4T=XUЅGt6+{k9 I(/!oݺU>Ӣgl D1JeJ)nܸ1Ϫ&ˣH&3޽{DLuSFK耿ّC:񾲄G9l iߢßlcC8GX?qw$o#ie8 ]q2[@yG,X`EDtiv2,İ%`7Kİ`>%K3E19XFv2Àı Dl5FXl؉b ;QPxPOJ - O]݌# ={L- +RZ<_w3ICpnSڵ !U"+z2Ҵ72nS)-c}2xTN%vXnGs~;66&aJdAܷo_BkTC'$Xc77\x ?lޫsܝ-rD-FU@2|6Si.KΫWN>l/pu}-$2Ohl'@xWiOzF`t&N'|m4שh2,%lhİ`>ؕ w벀Y_>5oϪtwg ;Q,a'Jjۺ\HNk|4E$hĞt27&www&fvtJ7y -hIhg{~mZa-m]]i|LT2 e^*["FiƷ3Dw-< &o}56$ ׯ_);::߼;UO{cw{{wDyi (;b0ۋ`"䍹:)H X ӡvISS(*Gfv---r w0%@ fW\}2G޻t7e8޹s!{G9, ֣G$' VPPMGĶ C@Sܾˁatpu%+I(>XmqZv,Q,rT2,ZĉRnݺa\c!v,` -le -*φѐXwriE%>paСCz-Ts9/ʊ}*%C7p]f`Q":)KI(UFbL":)gս%$RL,G15l%bX6l,"d㒷u:]25Fn&,))9.LiKss\ž q3K,,YwY|ag',,lA5nsP>LNNnٲeJR=vŸׯ_C6Lw)TJ9\љ]ZZZ޾}Z0 n8&/A ! E6T>~xҥ,I-ŋE{)"agXXvy5< NYX,٬<9522O@ & _Tݠyy^fʇ"CDvС,2vfe7 W B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!2twwOOO \rB!B!KGG_wAȼև̼~o??x` B!B!'O&Uy)SSSߺuls 699Rwu H-[N\+фB!ŋk֬F{ӱcǞ>}Z[[k!-7jO}[=J9E4!B!JXlYcc'Ow UlcǎGb hPHѤ:7o|X( !B!ݻ_z{/z"ZpKhvth7H7ottt,_\$Ν;Hӧ/^ܹӞ hhhDVHdСCuyժU>|ؿM6 Rvuu!ѣGt:fOOO}}˗/ŀ+Vd(X{{; ૑޳B_;66&j7)lO E=h,)͍ܐ柧Xkk׮Ը vo~ t,ÅlrTe@`@wM\?99oe9ݿ_J]ro: >WRÇ֢o߾uC!B!+ϟ?0QODC A o;Eغ  em fAW;wNv2U8qD~@z@ݻW@Av*iو"AN T_O4Vh}vTB.mL}1.x}}Wӧ5 rE`pM+믒`ڵ*EjjjJg6TgU޵k[)52hOB!BR[ꫯ!!|&򠂡!Ϝ9-ŋ虂\@ףHm}I[[詚7n@@GGG׭[' Ad{n?gdoA  ZE<|P,YNɉ! .HO]!م  -nZ-_MaׯfX PNDQM1>ZMILL۶meΞ=luM ;v.n^jQV,#sιۢH_I~hDB!BsJ2l ha#ƹ|WD4W}}-񞨅.{|˖-? 
0 obj -<> -endobj -460 0 obj -<> -endobj -422 0 obj -<> -endobj -424 0 obj -<> -endobj -426 0 obj -<> -endobj -428 0 obj -<> -endobj -429 0 obj -<> -endobj -412 0 obj -<> -endobj -413 0 obj -<> -endobj -414 0 obj -<> -endobj -415 0 obj -<> -endobj -416 0 obj -<> -endobj -417 0 obj -<> -endobj -10 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text/ImageC]/XObject<>>>/StructParents 4/Tabs/S/Type/Page>> -endobj -721 0 obj -<>stream -HWmo_D\X` -ECPF]8CrvZAKy{gv?uWWWOۻwowp}3u7~0%;ea󏟺a;=~&/ ~l9B}w99s=sn5ߛZ"zrpx0հUڂڤgLoD~p/{zssElFpÌ/K=h(RY<b|=f)dL '/Ƈ=@ /@mr, 'Q 3ҽb Rj -&;&@5Ǡ/crPZ\\Ukbxc9F(9*zDK5LP'Czj:I3țr7DߵǜI}nlxO=ZQԭfg)'Xg&b沇.4e/|2cFLybb '/5tF\M&JBBh%A\$8WC {y=r~^cɜ_$]" RG϶Ξ=i-3ֹeh*2LN싯d~k,<3&6~ؚ1&aEʦ? -/HŬT@rQ凑[Ռ[76_ y ,E6g1"`2[Ğ%!sm":DŽ&^ww86S3)|2s'@b+Dڋ -*mT^(sAexp#^Z{ī/iO,"#KrN//̉MN ȤecEyd"f·%@k"Y%6_0Cj^8Y66`2c/blG~^(\}zhD^##fx*. AWi1#V1=7Оa푻K Њ<57bJmzHEyG1z3(6zr!6A1@!|O^ =d. Db'cQnm-pwuP=V:=KEysfCOr3XfsXs -yXS'Õlgy- -\BܐH=d5'A&DQosޘ= !#)[T=83t(⃞;sO7!ZP̶8+sI[L 0E -t=FȑJ d _KGe}>L0UʄH -?Y zhQ yl9Z:7zZvȒج8_O٣;*>^ [h6Ga;CE2N[tMOΆJ>]l~}sAW,nihw,~,=ѱ-ZK)O\b2R3#ILP6 2K LUX/,(Y/ Y~PԂJOs-QtzUL fyRu ]XPP’*A@ <#%v~Z*(ۗF.[=wd㙓u%q;[O/dM=̃%2*,Ht)،ؽW4[yT3);P*!jsfݻOݛros<A'_uSgؔkl.Ă0+m,T+eSDN+`p<1EE +S$.M}$sr( FW4@ -륤,rJˑd@]*=^VM\ TsOd27*)ے}1 M;.k+:8/V8+'ʼnŠ#:W)>E -.Pm\ { -.m6lϬ_kϴl=@4́W7ۓγcݞ1l9́|mi3{[fv\0s.bV#5HgtB L2^ӄ=?L'v7 z:3E~rs2j:%ĩ3G4}4c (rưF#aqM#WT5PE A:VwZEP=.D{f T⸹ V?!v&7ti4k -F*Xgz>lKf/}}Ƌ";@`k;?yj|#~χݟxA }lgRMMc^3 0}hr.{aS$C`kDn[x1YNʼn㲖?/-I?.g>k4} c# G0 A(jɌʟv4kKM?~PO7A %3d[2XHHtb=rKU4juz(2!gC2\T3UYo\++/oK -Nt붲хcQIKY"a0UהM$ ->WMzvxT', -endstream -endobj -83 0 obj -<>stream -JFIF``ZExifMM*JQQQC  -   $.' ",#(7),01444'9=82<.342C  2!!22222222222222222222222222222222222222222222222222" - }!1AQa"q2#BR$3br -%&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz - w!1AQaq"2B #3Rbr -$4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -( -[Kk ҡg&4VM);H'U.ck'BvLEى+ifbg }g?!ӭ͘>o׾9jsKYO9?{9infyQ9?}sպ(p*nj<)Nfrs&oگ?\r?V裚]ȫo r} NGhNHۦv?'#TQ.dUNHcvILNGEomEaLI#TQ.dUӭD oIm[+|8oN? -EEKm:|qRn;wЉZi>dTmg|bhӭ%ib0;V裚]ȩoZEonv$QmopysIn9,i]5ycg;v^z=:+rwI'3^'[iw "ivnI'ܯ~.ljwN׾8*sKY?myۜcMkϵ;׾8*sKYM{two0w­G4P*n;wRZr4ۘ`~LM[iw "mk3K~DDu4^vwN?"HtQ.dTMk1ӻ}DZm7mrwI't_~n9,ݛI98(]6.sW?JEEA ϵ;׾8(Ͷg}g?oL}tQ.dTm:ٯs?JM]7^?[iw "ub4F'p?J&m[68ҭG4R}6g;(ӭYM*sKY.tkVI|06N?% Qu^H7FU(E[:rdΟ ӭyo0'tƭG4P*\kd3qƒc]:ĩjsKYgӭa)|ݱ.ٝO$ODu'?gSrEEYty.?gSr$ӭ嵎.?7&QG4Qy-ռ):bw }VӭmO)ZEES[1j|(߾9hη?{?w\TQ.dTu6I֕tukQLrs3֭QG4U4x|)g-}G[k%T{3o [iw "Zu[LNGhNHcvINNGV]ȫom GaNI#ND o­QG4VNY7 6-ӭ>o0w­G4RNg0qwRr3agq18TQ.dTmi!0;Emk;MȒ*sKY ӭx1ӻ}"th.Z?;ltßIV裚]ȩm ٺO;$ܯ=~RqPuwf|4NgrN?JEEEm;'<i_s?JEECϵ;׾ޘCix.^??JEEGmMy1;qQ.m5й;w u}GFƙ}=z m%v9ӁBGs6mqrybgutADmĖ-Kdh= -ڎ`QEP(h_={Uxګ Fة[Ky\[d6U5Ky mȘS`պ>QR0(((((((0E-.ljآ| 5n\-(Vآ*AjnjiқMqVꦧ_~H~3=D~$bQR0O<)^I`SHަbJݪM[QVׅx&ofɀu5>Y{ݎ䟺}%Jըg1$~bhڿu&{W GJ^숎?_z'.NgH.KÜ099+3AW0j(O)~GoOc]tEsOKUC\*RƬoyޫtu ]KRb.!T ܌:}vGon,KrdBWĘPd. i*-! 
i \cK2SEKHnNTHKvf:|[÷hP1'~Uf1 0zOQST[=:Y!*2H֭MN-4g!*.2Hn싨QE# -\?G?TQp?Q?5\,C?ysME s\?SQE>\?G?TQp?QK";$_EĴQE!Q@Q@Q@0v;1G 4qT?h?_h?_MYzm鶗 !kLH gAZv^A#hA#k.iO$x)fFc<)ui:}srZ(pZ@X.G=2h.e?/?Ə/?ƨ^xMвh&ߍ)mⲸ]H@'%yxK|xK||GIg+m湏˜eu8e#=)!x F#(Auܽ<%G<%GC1xTۻ$}*I*f}B1ސwh]w-} 4} 5N?i4V -#,?Z$jVȳLo ϴ/?Ə/?Ʃ4RSpQ~N\ҭ2M[.ܒWo ZA#hA#jKg(Ш4ܻ0s֥}wJha}FdUS ˆ}{Qo ZA#hA#jj)]̇xM|đFҢXʎԖ(0(((((((((."e5L;[?QER0%HHPLXFOPdI+z^Al"˺1hG/,/@9fy*7p}ط][CX9# dk0F eJ:ji8:Ou;XͧEqL~wGbM]D Mi}p=+Ȱ3 рNHV\J ŝ9>DĒv"Mb?5{c Ǟ3WNϵt:Vi[K;@X1E^IO64MvG0 -)jk winC!E,rxU9Ml3\dQV#>^^+l?zv)OsClftp~rj\O\Ejgdպ>QR0(((((((6-P U?Ǖ#KQ5@f ֪;ꦨJVyc[[IJnR;r1Hź(a_1xYּe{%仅 -@댚vS?l?o f {&-eh[OgօW&DiXuiU-E}O"($va&uxV̬їN>ڧޯUe"d_O -1{ ~կ#+=`(q,CƸzVꦧ<6tD%q<ޭ}u -(a^ +G"{x7į/܋@ωwޟ_扣ˤG]Yǥ VBX%ҝ&-ذcA$'#/4BX?InSzz>{Z)7{ďhVvsu5h 7w BG|t+ݯ.`Uaޡcێ2N}sզe]ESL6Fc*ޓRhQHe FA6`N2}* EPQ'}KZ<r7ʣ~š,U?} >_Ÿ+ SA?7Qu~(at[h'߆ -?}.tUO{Yzͅ-dhNUN4'd7-Tqr4lvEW hW^]XT*;mIg{fZ/faxh:u\ aey sJKO-N~li"3Hij?V)BJXF%n|7#Pƥ ,L} =_&F UXt0riڅY& Ox]]WPF4Qc)eFLm%-&F[,{50Kdˠ"#* 8@iZ|4Wmp(62"pRf~)_NEu C -7Vz!I(gP|ϯE`.9 1H~ hmeVhܷ8 9sRJSB'峖܈m@GOZ|["yi)5u 111߯qQ\xZ^\yjI)R-sFEO[h2eZIp=AݣeSށN}pEcjZJ'yB:Р )5Ԧ *B@ʀAr?浯5_b73 -9p J[gZ> خ$ROHzp99eqgZN*۽Lu=E_!i,7` `7*_voӸ['4 WF K2GXϪu!0|F#eQ[ӴnލٔUgmmn!Yݝس1QEvEP|W]U#HT#KБpɄ 㟭[ܾpd V#QEHŠ(((((((\y3U;m#ʑVͫM}c8pv*zTUnjq˧Jr8÷#[Viu/ 1D~$bQR05OC5e|٪L_†VV}h[O{QdK[[RؔW?z/"UZޯUe"d_O -q/Ɗ(Š( -AK%^l#ב֭MNKxtd̄cr#*WP*F|r/Wׂ|Je="|O{=4Kk6mi)`RNñS[cq$Wq "heW?:7\~џLPm#kܟOV-dѴu<Rm$)J $=+޿j9+JM7?zҿ)֔j -0>[cՅQ]'QEQEQEC/?? AMMM^1I-vPD,N95jxFԒ;n#wrIUF󎸮jVBqPy-mkBT8'۱q?-sevdY8Q9&gg gˬz2iig=ZmML - *ԍ}oVf +ڻ\?e[c4ZS6#88Z|7uii7/`DvNJ4lЫ(ۍ(zf#=*+oSҢXʣRZ(Š(((((((((?>ƚ|ڴW"bp@V#QBvA0[3Tƒ?`f(偊Mٚߕ -?4+IE-Y7#-/6Ԩk"gp {Rr?|mٚߕ -{ƱLl- T9,Iv1E{Iw-GagH G^GV*j2MO$34.w/^)u-R 7I4 +$ژorYo=E+D̮2=I}+NQ@Q@Q@j\QH :B3L4?SZdz2=EGh?ȣF&G#Tfx|>hdz2=EGh?ȣF&G#Tfx|>hdz2=EGh?ȣF&G#Tfx|>hdz2=EGh?ȣF&G#Tfx|>hdz?_W+ԼaajSYǍP@>*3iOCXu |n5?9v;*+a>7_l?_?쨮7-~k4ŰƏOc=nvKa%I68ֵ+p$%fifC.Fvڏ9焿Tܛ<%?Ə9焿Qp_hxKME <%?Ə9焿Qp_hxKME <%?Ə9焿Qp_hxKME <%?Ə9焿Qp_hxKME <%?Ə9焿QpQ/4j(TQpQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQETtNjT=.73'?JUE(Q@Q@Q@Q@Q@Q@Q@S[s H^M\0M%#38ewK=ck NYCyvZ qz訬-l^iA?dgF .Hfd\ ǡG%7/Ƶ ymRCCN[Oo5Kx+yR6$hK(N܏zSlIJ+-5Q6n—hȸ_+pF+NT2* dm[=ƪ'7V~wmȷrK*>NQ[y_Hta6K[y[!?f%T郞-6u5{Ir&r}y9Q.2W -((m)늷X! 
%g:sV܊##8#UEZZhUsn)7hŏlwデ=?+V@:Ӡ@"ެ̒*) 8#Ԏ]8܀ $xzܲA$hb հ `k&Z [;S%S0TAp{k{MUs(k8ldXc՟HA+8@Ϯ(> yl3kn4mIf3f,r#dv7y%Σ[م0v2mܤmg9+k=#u0mr88ү45v6[t9CBn-,wnKx,3%!8=jբĚԗ W oN;J[ gfjC+owgjYG[Wԏͷ"=3Ca5afjM #=gjYSkbUl?5' o:ط3RqQo!VnEe]bRK/JmG }[=}ԧm2{{Py/k[fjW>m\Oҽ=7#Nȼ&eG#nOAJI}>K[;/%C<3Uӳ+zב-~ۜB(=UGn0jZKs&[y]\ -=ҝW)m]g=EMwKdkM5tjzGÿV|!eI۽Dm+$v]kBO7R0?\_!3R˸օn/@"*d,ЇJ7Q0oU+N˓Z1ٺo8:exBٺ8+EemKcg\gfzwPm!Dm89.hRˏZi2jٺ8;[f EBm*[I<%oW<o 2kF?+?l~Ew-'yqghEQEQEQEQEQ/&dDB8nM\Ee׿疛v<MW'1EsRk_^Ytq0&L9mB894gO<ήPl\ߢkVyMfe\ :XYOD\ʧ-p9gjddGdщزF3KپF ޱ<$i?&3xyR׿疛1Eb^^V6rKĻ,=Zۥ(4Gqq 0ƥI*=++wN_'1[򥥮?%)4[[bt~Thsw o?;Jll_ʍQjc[G%)5?*6/GF o?;Jswؿ?%)4[[bt~Thsw o?;Jll_ʍQjc[G%)5?*6/GF o?;Jswؿ?%)4[[bt~Thsw o?;Jll_ʍQjc[G%)5?*6/GF o?;Jswؿ?%)4SdHw`2IZؿaTxv_Mv M-R[t-¦cw~T&KwLst}QEHŠ(xm"i>O?oW_j{'Gڠ}TQjE?T}KE_j{'Gڠ}TQjE?T}KE_j{'Gڠ}TQj1$I(UaK#lEgH^qRWTW1=Hv2 -8)] 29_X] ]Mak&Ws+jBDNW$V-> 7Cyqk ϹvrnsHocr(Yj/fW, g$x5m/4{n'@N{V_4;Zo @$4 G^}8P> /p $idi<^ϔ8&Ѷm:5T7VRd+Rv׌ڏ]"-u\N=85Յt5vH97AB2[ȋKguR>d qNtr^sjHaHܫTtr]ϧ˧ȁ$L $p8M,\5LEAe2\'E5[4Xe{ntqJ ˱|k OcZ.n PJ ӧTxG0X r);/jG;][l ˾?rc<$xPCU O%FKy prsvGvh4[z\q,e֜\BKpZd R}+^sbI *ʋ+I*U#Iq૛!6 -b ~3Z9c9o-OaT&M&d@M)%txZ^ڕ51jtt[[>*9Ks1늷OP)c=yT{5="9 :̠d=qAilQ,ɷ.P2?Kqm%A$ aUw034eqa (\hfC#@j^]9TC%f[Hy{@ֵ+ٔ -A#*ohWv; 3={P \e/) -:q*׌{{IhZ-mfF3Dq"+ʰ-QEQEQEQEQEQEQEQEQEQEQEQE1 -_{z7ߧꍨFlxr`oŌ+##/*zV=9utb*pAiZD,Lѻqc!S+Ia=ؼ`n4dHg@WrpTq隡{% gePqcכK U^Z?5mC5Bi^yC6=M2+Ju|;-|k?gqQE@QEQEQEVnaޓ,Miy碟h|h´7T[۽&Vb6zʑ6?iQEQEQEQEVf`if`]?-:(֛P^Cr2Uj!|X`6xNq5Ux"9bh㸊40{DD s2ݺ f-ۘA;An9⠽տ./8R}0s S`?g6yVbJY!Ԓsiw3||WyH<{m펵0x8`3ysaw˽2߈^?:zNhƠԜrG\z] CZV۲3wvf$?Eg^&3?/_Zuo^'3 _'1[/OڞQE!Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@aE뽿Jܬ?ȹ/wN;=RyttșAպ\Ki\\@2պd:QR0(_ST)SּMc]:?u)v6rs3ď^S5ư[4 -4љ@䐻)9=WZ>'jZ9'9w'*3(BXtO#W;4cx1(29!>'RZD*fqߜTUUȹ7&Hwxӌ8[]Y~REbڋoMtD76.Wn!Y;sڐz$V=5Y{Hcc x>(I1*G ECz!GYw 誓޽i Uy!~≭^4G2׷Zw%kU6KY 8=T!nfUl$a\\o?Jm;Oc杘߲X\Cm%W3()sO9GJo .6*ׄu GS, ߔn\Y6^(嵞!oomkFc2xg:MUNKm'?4iMC4q Z\FknOT7[XmmU$(cG!%.,I?w=sR"{kKؖ)& "a<9hwkimu5nIKu8kH%JʉHh± s ^d[h~h•|zje'&k.6~mpOVo.i;Duу+ :N(((+7TӾwM,cF6?iVf5ޓ2ʈ,>T :( -( -( -( -:"eETd,utyfw1qW$޺Z(_ iR-d[&uI#| rv/'տEf G {|VI_e'iQEC|O_Q\>N|c6gK|O_Q\b! ;1?*bzdb;Tt?i(sk"($yjR&`2@&Wu uk΋K($s(Т((((cLѩFc=~7K|P7Z+K>2,1185*7d.g%W(Y8@oc9=o$V|yfP$ӌ~~̞h ->7K|VoBfh0 ?TxNTHG)vs$t4w7~7K|Qy_¹ucM_Z?fH\$U$`n -a(;.]?.柳`lo_Xb ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab ->7K|U(Ab -g #Zt$/)4n ow+M -Ƚw?A!KB~[VWh!G^/O -ũ,^egET fxEkNERQEQEQEQEQEQEQEQEQEQEQEV\w{LTѦৡuSTqs<ݴ5nET(?oW - /\ͣxqb#;f~cMz/xO"im_e}݇U$|>WŘXO&ͭ_"קiu=fR}p*L&nfS];D^'c[`^B_ *(7ǭ[Kڅ!MVSӠxC?_i<:4cW k((*Va 䳿@i}9K|O_Q\p)&wHs؞dlg_>7P1Z3F֯H"vp7T#uk5蛔j -4((((7Zf[j\pd0ARcM֬R[?eX<%3K_;zqBoJD݂A @%l9VkQUÕv(i=e6 ivi%i 09bz -%VFEr69O<[TQv<5h1Ɵl-pz=A;dX;3'~.ÕQHaEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPUYUb_ سEU ((((+3Ue@kyL #ryR.=u?ifeuwyInRΜo#ryR.82OjӢ((((oM~ӬoM~&[tQEAA^O|-{U'P1#P3b+t~AETy뉭:֝[ Lq?VaxDS+vW+G+C{[(IRIUEyƟO(RÓWgjvj.t.? қMn$ػERQEQEdEr:=E OۡǗ ݺ -4֡-ä@x5LE8nrH+cjbg5˔Oka8^.M=(QEQESt[~V6qMƢ싔WKOOeoi,esWWOpHq\s侧Lu~] (NP?ȹ/w[%9)t' -2qBp [cMUd:QR0(_ST)S֦ďik>l+˖L{Ed![x@5kT‘A S<?N+/|30cc1Iq^o?Ŷ^jSm/|8T|u3ִ_-lj`𠶷Ayqd+gNjLxMo F@:ޛ7Ŗ~(Ѯ%)[Ҥd ʹ9Fo4;.Uheܲ- ,c9O\ z.#5 w2AkCF4 --S}p^(TK͎^e7AXku}9MčaGC@ZEQE _?Es,cKwg)#*X*y㿱5RN$}9ZOB̍&lgU -ju -.cD!7(85zF1'GBMKup9V!(; ⠣B(((( -!5]#rob$Tli|AZ'ZghyI֏)?Ѩh>ghF@S鉅Q@Q@΀O)RwOj'Zm]_;դ7iiC=*rZڎԷwL3^#Wk _JmowRiZcqV3?Ҽ7Վ]N>&xR][VYw"M!;7;GA{SU;'&k6G˼n -UדxCAw0x29_f-QR33ZΤЙ+#ݨތ?Z&{k# 92C>1~$p-Na ? 
->3xQ@0~?{-W:s|8_-s{](Pf>sҴ_zX=$>zs_]|78,,ӑ%}#{փfm#?xn((*haAgq!8INEM|O_Q\ (:U,cf# qTt+ȵװ@ 9R#VPBUXو=jw hOm?嚲@qPQEPEPEPEPvɿՊ7ZIu-GLO¼vXQ_\ۣ[dRwxQ##z>-I;S9?H5 CZyoZmޡ'$EmQA;9#&ھȬmĴČי%vmvk'}-4k_x*Y/HQ%ר]̆?ЖěXI$~S}mʻ0-+8FKL):/»j}QLAEPEPL~':]G}QLGxXNu9efʒJB+ƩI''̿HYY?V:DqAl*&],Q*/|ɿ*A^kwΫs6+bw`)&kz@-b{V}܁<4K5T9Q&ǙM&y+=A)Hl(b -(b~4Hl+_%uWq>+ȹi_ke]MeY{ -O&O|OAVIvjI48i&*1Ž皙-2ն[葶߽ Ҽ:O=JvoU4{ -O&n@FEZ}'0%dp3SvRJoE!~ZuΧKL+rtd$׹D}3ں0s4Cl4Ζ')?^[Ubk1J_ %Y*QEQQENN(J*\~/_vɨqs4b'h\5XoZ͘E63 qWqs5ŭ:"|X׉F]O?UX.]Oы|rj*\~/_,&nR6}m?Sʃ3EEV/'HȯuU)ӺfeOAC* jEhz7QU"yQ?5?"1~qz7QG<ScP(ʟZ;ps9Q?5?"1~qz7QG<ScP(ʟZ;ps9Q>03j 2=8J֤=E.sғ{$/ȟq~: \*͏'|gq+q?W'MGbKS#?~R'|50MI:WUF+ZkX%/ |5ּ ti zsY[8힁XGtݕHTyv? -_/GBtqY&-Zа `zҞix#̹IQv=Ìm_ET䒱nw I8_ݫ7`[\{Wx^ v A1^0J쯛[mfYh_cXI`gہxꇋ}j65" l\\,  ?8@+7yJI#I>4}j˯i _*wX|=` 8)~m..*=uw~"+6x|#۴^H4F>WrR:Pfm{ZXBwğMQ Tw ^QV6[QT3IaЍOR?P\O?jxjJy/}t6+7󮂹/C_FuV//+3"J Ȼgf?Ovq3Z]"G%nV\䯡DZ q5^l(t[]ELgptu -(aEP)S֦S>uMMʊY -I=<;QTu=bGK{: -:}Tc);E])F+NȽEs>5/v8_9v~֯wTp)U΍H;J- -+I((/ʢ(YՏRWTW7M c8RZOB̄,lXdIGBuiHZ/p8#b*:^Ӯ,`-%yhT\APQEPEPEPEPvɿՊ7ZIu-GLO¼?#^^GcF ?lgωiYO A68IdDcEg6{sNWFЙmzh2s OHaEPw/SZ6=p*r_]GQEQEUYBNrJZIi:uč *odRyۀOjw%%a>}+/Úֳg,Vfّ~#Jm!ԯY8r?ދH/[G٠1"I][M<L,FQ>= U6kiQ)IJ6*qE^&٠1"Abς7ǧC|=b=K—rnMcL)۽}:@`Arks:DqLzKv1|թ!Ms[V+xѤ -=M{>+E=ڬ PʮAKpɹnݒ刮Iд -]w J/PV9χN{׭7yry}3gE ܴQE!!icpsv BX<捇:8 -GyV5hF)7&ai<)nLK\]ApE\WZ"Z3$O!p+SI𶧬z(S]7L+,rWFJXƩ)VL섆O%6#Q^I+#P)k-n)"BJҸLE$2:CdktGN[N;fiKTx!GRyp92k䚔sMI ri֚|^] @MpuMM!Y|;$ ->xgQ%փq==zF- 7s3v底bފ@nvg, -ԥNj3kNXT=[5#5^.מcN#($I>|pӹbiIX+>]}w1U#zWױx6խ|-f0ΦB?9jrP).#z(>(/ʢ0_[XW -OK|O_Q\y7Pyae)?ҭov'fBlXe@9GBtl䶍AQ U -ذTt),.kt#c*6QEQEQEQEWo\X!5Q!ij?4L+,c^B5|/ku812*'|T\4!7 #4ӵ8dB"XqZ%^TQn>ȫ'ExWs[)(():}2?K);+J]^y/- !!])Y-P+UsM`¼G׏ROhڋ߱m"쟘|A`W3bGQmHUW(r?^xj}'_jƬXV@Q| -7_toW(r𯂵3vKNL) -u=n/qME잘b}{ -}$6QE1Q@ o?}$6WrAj+ay6YZs"+ mnk,B'cg+AzȊgJ<dNdsWQEW;WOCp;\ s"袓we%e`) =p*r_ǴNOk (Š(m??ST6쟏󩩽Ķ([kqW3lNE!vo4:)8'wkWwvZkMe0e_jmƛƶ}Oң96*Spy9Ty'ŋ %1vV),pGJeXo8,scXռHpvY.IsBqxO[mn&<1 S8W|7soiIxb 2K8g*A[ÒEqe3)9xCG}qzmmxilG1Qwm#ڭXxS 3,Vڠi;LPɰi+wWt1G  9z(n外P3MP3/Oq?VQEQEQEQEQEQEQEQEQEQEQEQE%9+r]"G%84u6]2O9"MOٗml3~nET(?oW - 0un{:~_Əh5%:~_Əh5%:~_Ƴ5VRh/<>2>hT_g_uT_g_uǴNOkB-)<50B(0(OO {'joq-k}la3M eG{H ߺ9@]sn$`Jܢ3΍h8vKnPzp*#:o!+#G J$g>EaʌVO9u>.|lesuu ՇPiR, -@ kN9PQE -?j?3#_'1[ZS A:u T}"_sTi`mQXZ9?_sThTV/#*F-WUůj3_g?FEb1k?ڿ bQjmQXZ9?_sThTV/#*F-WUůj3_g?FEb1k?ڿ bQjmQXZ9?_sThTV/#*F-WUůj3_g?FEb1k?ڿ bQjmV\1k?ڿ ⩭K 659]_dΤ 2 `5d掦-p/ ?xG\Ui xm~­CQE# -(!O5B7jjlH) 0@#Hc#E>@E X(((/ʢ+(-3G -I*X*hcH+>BqVг&)}*!8i}|n U6mTt%O˦kmfc6AFQ@Q@Q@Q@B1&kV*F?j%Hg}3ZM>S -(((('ExSt_'袊b -( -( -ds?ΟL~'>袊b -( -( -( -(yLOAOŠ( (?}1'OŠ( (((I? -}2OxR{ n>_ǓWU)|,f(QES$Er Onkq=?_yCjj( 2< !4yxK񩨢,|<%[?A?+x7ퟆkB.̇?_yCjj( 2< !4yxK񩨢,j0<w\ǴNOk=(Š(m??ST6쟏󩩽Ķ -(Š(((>g*>g*hL(0(((((((((((((i x-~aVꦧewr00*WP*FQEB7k>$爭5[X-4KiQf6+?oW Qx}FOW}I-nؑ&I}ƚɩɭovX37GL'i|?܋(u|P,2A| -d{C- 14Af(Sp6"Qfn⽊VOP+kY%Lcw9R=#i:څ} A-,qZ>3P+cnli+6NsZɓ]I;]\|5 uC -=F A2mK_U֌-ͽi-n@ހ6 7?ڧ(FԑqZƀΞ&yM}c=3٠((b+`K(t;)'ȩb+-%t"l}fov'fM6 p9U ,!mIGm'ExR{ n>( ((G?.>+Q1i0_IeZᤉT@OI\oQ\|֕y|}O*qq8\8'KwHmd}0"dH:+OݝD1}ݽK:5I_vr4?vhpm8{agZEYI![,9".HF9#i=Gaַ|C4;;i;Vt8eaPyQÝt5Q@8ɥ,b}{ -}1>> -((7iX> -*+-f8@MqZO.K4}rQY]\.zuI~ѰYci8dg r1s8VV0Ev2;i}NjΕ ["Gm 4Rp. 
::+'Bզ b }# 8 `2=4%PWhcĿ*v ~4a'Ld{UYUbKسEU (*bnkq=C8x[v5yI.#&e{`\HWG/Jm~/~ɸ1SpL{W@m-Ș#"x⚶6[X@WA~+qSxSoh"{y=e8,NJ}MBX5+X+qa}+EXaQФ;y֯GGQA% 33'$L_jbS=f|͡/_^k$K$VT'U*ڛ5洵M0VB00rx9zVƟ #M)^[{̡\֢_mmdRme-#9NC@pSdkߵl#Yc>rޠ}?N% .!_oO9]1up^jzuY3Ptx./aPPIREA^Tv}5ӴSsf>'Z_Y y93Vг 6 pAkm: [FJNwP#z@& p [ZxNϵ4\ -osPQEPEPEPEPvɿՊ7ZIu-GLOR[q\k.^ċ2U#$txVK{k4O7X `{G~%v_[Ҵc.Yw9W]!a \3+qscjdw#5˳`D1ºFҴKiP!K*p*AgjE !F>s..0'ԯ5kK+h|v!]w3n(LM';m?J!#m0hM2m]5eYk(0;w I$$ h>^)ibqJytHct>d@ -#Q=jNS5Q^"CDq 0oºxlka`_ He˳>3nc83^=]4өiwY -2*̝Ki kH0aI'ExT3H(((G?.>jM6JFzja5r&<C%F9 -c%~NC'i'l{8:<%76PKy_b{&?4It' -y+Xj\ǧO}sQ|לu9d)EL9OnF }pj#z7 c0Ì9k5o-l\ „PX`T6r$2ϩ,vCgt~_qG""Anƒ=2X$ڤZ8 c'끊|WIy**2Q7yҵ%+ڻIs}4# -( ~fQE?S)QEQE1'O7iŴ7voq 27B*2-J)aILU FQkI̗2\c6rBA\S{ɥze: A+g[Gփc>=q, 5܊2F{ ιk 5`©.as+rH*u%̟ jdO<_C:6flll z?4:Y k$6\::޹cUʚ UMx+s"yGsᓞ#m#(<6&1xRLŭ{KTVAMi?ʜ)tQQE!Q@Dz~?Φm??SS{lq1[Eg*f5E=~iV(WVI%ZU wtxler2n8 u+XΧRi"ƫuMN/hf0.aE{k6 -GʸEBCUM:ng;`DSʓާe%r2q'#3+ͻs4ߟ.w ik6+hL K h$#)ǦEy'|Y24r,22; {߄z+]o³1]%&vT1ST1B-QEQEQEQEQEQEQEQEQEQEQEQEQEQETLiD1:a$ ךU5;d.-D&jWP*FQEB7k#Cf WVӭ"IДV<ߧk}MZ -jOlEm -#bpǁ^++}4/^4-MÒHvWS%ش#iW73ZD2.=3W :%-2ڋy"% z[fm:lMgCwKӭ-.Ŵ$PvE )8OQЬ?YA88a*fB(((( -!5]#rob$1 _FM> -/&1}4(4/&1}4(4/&1}4(4/&1}4(4/&6}&C@)((*4`9?}jJ) g|-i`vmv,O "p;SQFUmlib㚒% B5‘*5 yɣ_FM>ZA=QLAEPEP;N3ҍyOqoҙ[p}~C:}&xEO1d`_Ƭ55f9Xk}Yŧb D8$`J-Mf0@VhP4[=]c0ʭoWj:;hY!v[Ze{fGcvSmLE$Md5C&zfVvA"/ 98'F+;<'w-׍gfsk -hvd߹"H[{x2 آЌTUT1ST1M&) (((((((((((((-qnD\njL2ĉy*WP*FQEB7k#Cׯ]WVnlḜCxֵ>uMMy/Kw!Q[Ilzaogqi {ԓHg^^ԭf 6Y4A㕈۱b81Z>,tF4tY<@YH w=룋~,?urKignM7i'YVfu8;(~!hŧZoqr+1GeګT0\23x]{_OSAE%`ݎ9<ҶÚPGSk}vGvhI$$y'%Zj&˹hVeQxPW8$>Vu c!nme$ojv,Qomy݌n Hyˏ hV2Oa}щ\"FO߻{mߌnٝXϽpOq}]hdoQF}EV#?7\C(e*~?}E`,Uq}?W -.fFdoQw.4Uo7\C(#?‹vY}G~?] [?W ->q}]hdoQF}EV#?7\C(e*~?}E`,Uq}?W -.fFdoQw.4Uo7\C(#?‹vY}G~?] [?W ->q}]hdoQF}EV#?7\C(e*~?}E`,Uq}?W -.fFdoQw.5Z ^NOeDFNG+j4'Z\Z&;/`ңJ7LLqmߒמ7a6#Ovѭ]}񖔝28Wf'WWwapF.O(~GQ\T-l_O5G{Wy\(IF2EVQ@Q@Dz~?Φm??SS{lyď -k^ l-X℣Wv{ÿ[̳AhJ*rֽ_J٢Yy}NEFYzuŒFc+rRxg$6FsFs*89ojs .c}tTAwM$,<2 8LOR -PKY~i )A{fFo I[.}:4`0N5/26qf~Nsj0i"yq!Y7Ap,qbӢ#R|lm 9s4Қ{C|U5C|T dQE!Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@5;S}\ZSnڭMRѯUuF6n}u -(aEP)Sֱ=wQ5]ZAӡ˂f/?2׭l'}MZ -rVKVK IRFrs;[DR}A1g!+1S;Kc3Hg xR(,9f6zL(ndv@fFmNKE$R^fwD@v,B0|Krx"98ԧBDd/#Fr0osM!k/'|i]yJۛ ꯮mEբ0Y+i|յ[q>m8ۜc_=7Je66Kt]$  3h:oMw2+/$0$gs3Z/;ؼ2;|ndd9.zGJN`fcgUK"Ph:*kO,EhE(1,H @Mr. 
-7#Q,qN;v]4٭ g26Xf=If${PUU[JfGc -x93RWTW1O,eGcx []Yw˜deN̢oGqWd]:67NyV" *d{qPQEPEPEPEPEPǪTm$j%G*QX1־QIf]S^^v]S^^FqX~ʶEWdQEQECi:dMM%Veia#Fh -D5X3]sa2J;P%{h6-kMHKxd1p hR^Icv) Tޡsss~23ANOG%JڌJl >@zgޕwcӦ35Dm.#8A՛k5y8UPg*k`c?ʦc?ʄ (0(((((((((((((jeŴnMAVꦩm%qo DڬN0jWP*FQEB7kC,ڮagocM*s [ S֦Ă :=>Q#$OI+\^Tt/Z->;s-^.<:gr׷&;xDeKQqn=kXx['`K)^]DcOӚ؞LEQЬd=Ȳkk,ъʯHeGBLapIcG!TyQEQEQEQEQED?Tp1R\ů!?_(C_-]X/z+V:U[K`k+7@d~We>e.PR#HU#ϭ}=jZQH,+ļƷHŀc'7]#~PM>9i\w<xK/7ܟM}C5]4g \/1`r;g]ӢӶ?We5 lJ)z HQEQ@Q@Dz~?Φm??SS{lR.%/amo#c8!Ii ~+׼E"Ϋ^r EuX΁PF%??¥_xOX6 NZuoQP>܍l~S*A`OZM:琺5C#hIBgqnr~@#Ӆ;+CJ7w7$'VTOs* QEQEQEQEQEQEQEQEQEQEQEQEQEQET--`ȘRN0jS Kٶ;jWP*FQEB7kʴB^D26 ``fU?oW ycq}w=#ʭ[v , 5yw@,~x i|t-&UmP2J(ǙxTΛX_[V.%%7yi:tvOfCBJC8-[ZԵ{ -5=ȡEk QH;xfחWkE:"nsA=+㴶D"1*zAUtݞm2Fg[ǎbdhٙ$]Y2 MEQEQE _?EsoR^b9b+df+BUOH[]YKD:@: wqokR$eP z@Z'UTT4Kw X♷u(Ѣ(((((#+b| --ȡ~GwZBAjCԨ|HZj#M*y/z=* K :=VgAG&^+KY*M \~9W?wu5ܷWiXzQ`:hf?s?td8';[~K6#-֡T֬xn^cX$Cq8cz hήQK'$r>yiba8zm,7{&O`w>a#7Fsڝy^RU)I$XmG˺N1۹4(,^5Bm3OeZج?ꁅpN8)GemBj-ſ?9OtR+J$Wre__z YNsx A:~Ts..R_<42Lmw $Z(g( ˻Yektd3;sִ㴵Hs:Rm5⤞QEIaEPEPEPEPEPEPEPQM?KQM?KEPEPEPEPEP^i/:K6kyg/ YKnʌ.;o% uYvɸ7f }0ַթJ\~Vpc?#o#Q -WwI,VqK+27e5kCK I$dثºcKu VTru }Iifk[ V90$ WP7Hu) ;m"y) оXgU|eoj[m&*6H`zs</oszÝSI׬漳x{*#zƺlԊtXB7lu#j;KUӠ @88#P+ZwSE'EW8Š((m??ST6쟏󩩽Ķ3Rs'sV>qi[p-4\ SDƕRFB9 66KczqzRR,7FགMӜZd#AwTo/ -81zS>5?#[A(E-ϠTzC\Ts* QEQEQEQEQEQEQEQEQEQEQEQEQEQET!}.+V+;&֭MR;Ia8VȺQR0(_ST)S֦Ă(0((((*M}e9)ق{*GK|O_Q\3~H岤qխ, D~RQm=_9{mcv-.9y18_T0]{MbבZƓmĸP9(Т(((((#+b| -~&i׺[1HNnZ쨭)Tt ;;:_ߺ?/{4)\if"+2|`1ϩ3;%&nG'-s>_kխ>#;ğt_ߺ/Y'Սv~"Ȭ0xßL~SEw$ XnMoFv*pIJgGoz?j*G_xwMkm.6hWFdcyl[hɱr3^y͛t@1 TNpF <jc*b"KdkEEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPQM?KQM?KEPEPEPEPEP%'XGU7tHڦԅ>F*}FzEbfo<40hIn {mрe#sc9Psb9Wc1ibߒ;iIq0wM#SZǜ?KMQ$`*SP3MP3L(3/|O{iڅK33;Hd1p?:?_0*o=Ka#޹^6_&McMLHhv(#Ҹ\?_0*o=GaӿTz?4mf].en#Ib#tZ_dZlP1M U.(`o BHO]̋*NS?UCGmv=#3 A !\<>{իmni]i bKhY;rnr"W :w -Qt7C`kSx#6xsSK6i$k[:9dKqc]O͌A]g;MǨW :w -T':Z3v#nLc??J&֏owg?I+} $z0rKs"oç੿*NS' *d{{$K3pwu#;Yi+%JѬ)8* g 4rH9g?_0*o=GaӿTze;QBxc[sv+{[a[ZF -V)'X`f$.dMt7?_0*o=XpCj1m5YK˞;W궗0H"k1 hq߱Z=*NS?UCԚ<кmi$c$w5jZmq3X1 Kdt#.Gk?_0*o=GaӿTzo#Em -m1?/OXAd]Xds.dlg;MǨW :w -U|Zvz.֠,ʤȮ 6yQ7>,lcinvO8]#Hb~$݃ZaӿTzç੿lX-SFK/#p[냑U?_0*o=GaӿTz -(o o˵u 4_(@'<ƺ<4-K[2{H8}u -(aEP ,!1#Ɨ7?݋?SQNnŒv/ME G(bTQpnŒv/ME G(bTQpnŒv/ME B#ɷ,G nmL3 - wbeHuR澲\ݘ{*Gօ/ Tt ů5,ۉ(ݓߜWb}{i֨h){ԋ\`Ihݒ8'9EPEPEPEPEP2:–8=/"NyU9<i<0y -,DK&9ݞOIc807߯N秣(]ε4; GeLڤs6m'Hk"%݃:#ݐxAZ~zz?==?NVFAд3$o~c9Ui9'$ZaciCiS+$by -<ad'"G"K秣(y -OEҏEҗOGQ秣(>J>J_==?G|D_(D_)|zz?€y -OEҏEҗOGQ秣(>J>J_==?G|D_(D_)|zz?€y -OEҏEҗOGQ秣(>J>J_==?G|D_(D_)|zz?€y -OEҏEҗOGQ秣(>J>J_==?G|D_(D_)|zz?€y -OEҙ$!Uw'|L %QH(((((uHeҚcn|T. 
-ҬR 9$WyR cPQ@Q@Q@Q@Q@Dz~?Φm??SS{lesY*G$ϒ࿧ ;/Gm~˪CX3w$ $ʱPxu]C+ FA\i+FvD8鑊ijKNL"91nbF\_^EZ9[lҲ:26qzhvPA 5x#` -y#sijE -ȕsq*mS%a 3M_jO~eX^6TPs&HH yt -,4=:mZar<7uOvдT1Cئ$QFc'֟P4J*&) \/ -zx~ Zl2[ RZGsl)꿈ޛH|5%vUA=L7t7Y,ȻCJ]r~Vf[fFuagh~2090=Go+8-ӛPėكmam*qNW[;}Y#6Y?0 -@#9`}O"=kYxv; BKonq-)E, -3\xbZR\İn$EIzY{.M%p|MIm≭+k\m#@nqrjm;x&8$oWHc\Dې (TA9ooΛʼn<m|.{r.9m8UԾKgt4ۈxHg(gTq%e͋nkk, \WzxZ~R7.Mί.bqyGAh5-n,h݌FsW>߽Ttz|=H06p7gH -( -( -( -( -("?*Zh(aIyt4~H%bpN;p*~ rƙGO -α]/W[!'6 r,mx+4m|oK G|Q ~A( -羾rOڬ rTmNd8I+oWw rʫzZ5ZOY 0cfŮRXQE(((((((((((((((((((((((uKkIʒ2sO*Af?iVnkksw=ȅ<<]Yݠ *( -( -( -( -(!dMPDz~?Φا\gq7 W3 >ƺ {@?A -峼]z8ƜVY`BpnJ?iOa5R<:U䱱WHbmZ;bO1Y^\qؗiUhtieP?G$?ZgVRXۿ +J-B+C|S"^d4=bPxX z߮C¿+ήJ!91c ۧ\UƎחWMhLy@_dNǵaٙ%'n7+A-cح=$6.l;羹9236,&c;Mv:=hp񆹔 c%_ Ho`[}\}Ͻs"UN ԑiшvX;0Sje:|,rj\ַܲB\qގYl]v:OBs}r4˟ hI[YVE$E*2qV^J1c͵(v˻;ǩ%ݴp4҅ FyiZI|9GУ)u9Dw*(>}+@xwHV6MA?8#rx'rw"6V-%_20x'{TZ-ܳMAV줣d,6M7]"0sjS[«9ʬ}6ղ#iwJ+TW8\K5H&.rC|`]5L+IGz9bEjh !le:8"'%|Ü9kfgvFPސ7+vNZhnzTk= < - G۸쑜7<f9p m2O$SE ǰQE((<4)Kk0Md<5%_ t-Fcwp;w싨QE# -( -( -( -( -( -( -( -sb8̃8*@E[0%"38e7yOm8da|ֱInHXD~SQelڊIk\+&$*7dv/.m4IFK@nz]5#>k+W~ܣEy.#HXx!P:W -.գ; jS^1<$!$5xgEoэY_yF$k{:[@0sVΗۭwJ>v-{ZXƬ/NäR¿+C+ȿ׫g*>g*QɨC0,?{?BGsl)}b z[1X-RlޤɹxZN +{y+dqVkpK[܃%3F 9wa7dui6\[[4$q*}H4T 6A<=':{yA˩ 4mߣ4?p7NH=zT-]햋7sF҆f7_7ܐ`mڭ̞c,nU)P{gi^y Q]X\Gk4J~Vt;8obvsW!]g;[:]|`UHcsQnas㷆&1m P}VFIliAH)g' A4;`*eNdL^ŭk%,I#ېvg'9CGWe ioۖ%S cim.2/ݎj졉qFw9U>(((((((SSO.[U:XFPmNj[|1]%e3ں.ET(((((((*Wb1FeT~qV\BH6N2;&YZwu6JKkΥvʂzBV'+*su{66]k.ݸrnsR3B(((((|?*Z(篒QjQuo%ϒFmGi K]OO}&'IVIమmBs1$#1F`ձ=ib wc(.F+xq;}y@["s/9]:Ɲqe%7D?ʸ2[okwevFCv ՇZr]a^I Y-ධ01`x*?> TڻN:⤮z9Š(QEr:7cU#]*{R`Sm3XĖ#˹1BbI'dóꊵ6Caobto5ٛwNy=Һ=FMG[Gk~ XºJ(slJGH=J),tm#)/8 -xIsAqs%PHH>q>G8:i%gm[^rF/<}2qTXisKvWs^MĹGG$UYOP"}CiM69k[䜸c@`O[Ks.)@Q\]¢E -R|[a]g,?T|7 .:犹=x"[IxC# ]5CWoS{)`'.O2?:ӵKq sE%A'#"jvsSv/.ͰGU ʶd0?^*1j&\?wc89{4q[_tJҷќmQ {֨[hah׶zvڙ#g+ #88?J(aj$/uc%*0vu=)5rst9۸pG5G;f(4 -( -( -( -( -( -n*Zn(Z( -( -( -( -( -( -,IX瞪Mr -ҬSOuޕ/GHG€4((((=u5Ci:Kc7_%Bj"hr7Uak&=e=ɲUX,* %eoŝ|The&$@4 Y ٰJ`tҠPc*`z6:])46D*ꠂ)Z3Hv$*܏ s,>K:nwK4iw ƣ@7\$$wt^a>p›" G6+/>N~9%?1]uWDE"Wa%rתOC|U5C|Uތ5QHfzxS`X׍=OxvZ/'vFպ6*YViH!{*s#5jFϽIDe~援y~ofqγO]Lm#j1@c喥asw%Ha9 ÂWK@W=m<6&o,%4V<) |dP7~[ô.;PxB 今gq.܈G^^$^Rk:wv-m^xwB`6AX @`m:pbvTM-w<+R}x'3(AONyVucRcc^H;G#z$[hP\fH6}jyS?fdi0 þ*ӼInj̓F3$/{Jd2R\ٹP36+݅b#ӣ?ʭ,(3Gsl)fkky"{ylj?Jư\/ -zhfwcujMc'nň޼zwYZ-R[]\I:۠b,NHZxAWӮ#eW)a[tk'{hiŬK Nd[s4_rxqV⺷\?-_ -<TP^|(k?R@y|6vI,VKKV>='ZtP^|(k?R@y|ϏKEE>?-_ -<TP^|(k?R@y|ϏKEE>?-_ -<TP^|(k?R@y|ϏKEE>?-_ -<TP^|(k?R@y|ϏKEE>?-_ -<TP^|(k?R@y|ϏKEE›$!(bwMOEQEQEQEQEQEQEi^2 E|Vf]2 U|u?iQEQEQEQEQE {'ꇉ)4 jZC2[ۼ=Dz~?Β).t3!G_PF)Ķ<t@ו f%yG[_>{"fÂGZ<Xhin9)lN?$ƹ`o{ ngb_ _>]2w*BNwxUG:eBQ@a(V~M9ukt3 ? -]W:.]9;U)z sPIk~2ׯ/--P2q=jFR>-܂Ҽ5x7$<H@~ nTSA7g!W.sǸ5qSU:=Cþ"4;84 &1oR޷?wD4ͼZQNuxwQK{-.!Eu,unT'(B -SJ+*{l{¨>g*IkFEG|UЏo5QHfzxSC\4;kO.9ިX׍=kX]6$ I~պ,V^Kiwoegfo/'VqF6r++SfGѹ Gp Э}DUźzye$=Ƒr  \I4-8VB̸o "eΣ ݓ_ \m q<`uC{k[ykY8Gvf;:l?2]طX6}ۀgrQ~ Yn-Gp͒%ߊ~{Ƅ uY, V9"`pNӠFra8ճYV-s\*F< -˓<;VL}mNSWF᎗rIFOCO ] :]ьH"x!(Q@Q@Q@Q@Q@Q@Q@T{X0 NqV̷ b)1H%8RGӜUGq2̍'lgUOiD5լsעPp>흧'Y:M>#_f$a[d8)xjFkY}swf35o@T~>v]$EȢFaI@Vf]E M'&jwrH$ -gM'jQYzv\H$Qd{9L)$~4iuw$Wz,Q(;fk7ऑ@V]޼7Z$p(;gk7U95(]CSRkyIml]5l>PwsD&j FAC(RˋPUki4Ic @7 t;A҅53IVqlD{qsۥjQY%}:҆51 e%kBϴG]P.-DKBqP;EơŪZ$ZPnjS}(R˻58u$Dr\I^-չ노B=(ԭeA"[qЖ8?iYvhM +qX]P[-Ę-51q݉4ˋJ-6 tYg|y51X‰(nceh.#?D? 
:+2[I4SEK8kAqe>=J.+]g(]K ]b#݌;=h]CR:I:,6#Fzր4ĿԛI{eK8[3qf;Z"ԟJMX- e~'iYdt7;;=5)E Wln#c'сhN̴5)n&e?vqxAhԮ,%ѥ0LPne8 -x4 Jilf872xNniuwd3oA~>Eeچt&7,E(7N?]2RyE-3G&ƀ4謽3PotIlW*qN(ӵ NH4IlQ#p=0@V]Iމ-*ٚ7 )$Qcjw ։- -7 )4EeqKmlܵl>Psچ.hj tn#`qw?JԢ595FD+PHf2vjl$hxn#*Gc;PuQg}=]҃jcV(%6ϴG;JԢPUˢJ7b0z҉ M5UDKBqPS}(R˹58DD{R\FAvz[V5 )95(CSHmInln[.zsm{V$[/<'EeZisvz$2K -s A>i4ie~2ΗǴa&4KPԭ!g|vK}>b3Rj:kDz\FY@VfZE"i8;~_V[- F#=(N̽ԭmeA![BX^jV['51qܓPuC-Ę-51q݉⋋J-2Yf|oNӬVH@kyN <uN_Is,N~{Aq)'izqb}- 0$GH%Ne8 -ۢ(((((OOZ#4Q y =ilتkRSК͛}yϻbrj*6ooQ>},&fvlآq$O2KX]т:.,mnb9DL;Xt"͛}yϻbQZ=͛}B̕6nsҋh;QE`X׍=ozxS_O{V6>T$\# Hڷ,ev|xϵ4z^=74™1d,yf䲰kyn&_)wM[eqW<3iqqvv$6>_ ڵWWvFbRPA{U.Qj@0:Oy |7'[$k"}AEqy~W2 H\w#x" *)rQQEHŠ((((((((S7+gKxlP<3p bv}u -(aEPEPEPEPEPEPEPU.f;\#RGӜUuq4W1Fgq!ۜ{r*r(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEf3\&u,ETQfxHrCO ЧnLJẵmI& -z,?{?B`a\${5)xdW\`p:EUm5D7\pI3ބ:_Hm&qrޒCG}7H{fv%/*' Tg[8]"Rѩ18iz}=BGK&:m$qUGv_%}HJ^pr+ -z]ݭܖ}KDb7tޯBk 7@i(,B7gks϶]Uw#J=<]%4k[9d_>Iw]}S#.c F .Si\+mxc ,*o޽ 6{$V3 G19鞃,:ėSI#r2WLcmǶ3/K CNH8 -* 9:KNX/$m2PNŽh`6уƮCAQYQEQEQEQEr+_Z\EƖCC<346 ͉,#-0ȺQR0(((((((72}c -()3s)#n\I - -HETw-7?JͲ$wK4jI9ⴛQ]:ZDJ28'eٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mC`H jvv< -PJX/8 yu->-f 6_pF6]iǭ\?>4U_-^`=U'Q8v&y+*(Alf=wGj.Y6eǃ'8(5- -ROf8Ia9!Ur@V q׺I=su.d E¨gMK~J=Ӌ-u%C8d}_ -v{ڶ5 I``%YH*H  sT״?R]8THm!WGeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_fYϺTmeٖ_'m?(mCGe>GeIO ->i?P@fYϺQe>R}~?Pٖ_4[Ʃ7$sE0&)U -C%sls,CǿP,e= Wy$ڑS[m1ԓԞ-zxS:ciVڤپY,/zxS4 +S5+5Ԃ,!Y pm0xutQgq5s$x輶cK1C,;J׃V,KDҥ,w*ziE[ |'>s_i71FK<Nzʊ*$4QE!Q@Q@Q@Q@Q@Q@Q@Q@e:\1y-e0+S2[K?2z -ȺQR0(((((((7M - -OuRcE"KKqUGq2GC<%##!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|<"#!I|nM oʃ}]y'O*wB#|isXմx[ϴnA[D#qԓԟSRז-խ.i[lnͻȽ{G^!m%.os5G#0XUR^G#uW5'kBH'k[!XB77UYgGElQͪN -FYAVW:ya[IJ)g} _ -k5 _ -k5^%eދCokm -"7`#n]-E5]$N{c?E5Q5VjTқLKv s+yJvV~5cjxm.{he|vu?wnXq@"k?G"k?Yzauch1;`̘Tt4˚ٕ/t]sZS]UZS]Ut4P=5Q5WCEexlYy,tXb2K` ZSSkd.K[n싨QE# -( -( -( -( -( -( -( -{vځm$]B>lW(eKB;ycuR(#bh. > -E;RPEGB9p>(KGUxnd,2 03 qVjTԠhn_xg.uB*|}p8tQu5*\jCr@r>~4mr47.$88j]v J1ܰph8-㙡a'EH}@[R[ⶎs , faP2?$;D0ܕ~1q\dU(nL7%X"qC_ƶksܕ' _j]v J1d.gw_l<[P_ms܅ hK{F܅S~E]R_%܈nBTh9.2h9mdCrUx\­E`ԩ s[0BuWQs$ ʈ22 -E]R\$ HB~ -KkFXnS`OVjT#V 2|3[6 6L29tQu5*Zv {FI@~h# e#E]RtGY`dS$`oGq;B]!\@ȧH[R B9ZYsI-B9At T1VjTPKl  Db#{l 7'U(7u8`m1Gm/u8w?{tQu5*6}Ⱥ-o1O[.X[RMB8Ź阐7 Sjt c-ݓ[R}B;{vl|̼.5X -QΣ@[RPUr9p>(#QຐaE]RZ2s&a\kt7/dyP3[RPc-T qKs~7.$(8j]v Jor47.$88h8 V儝8}@[Rk᷎c +T-qZpae~38[QKD0ܕ~1q\dPkra* \}Wn.C>Mq@f/y7;sCo돻պ(fy7;Aoq6k&*l00sj]v J~h"yF&;䴒Cr:@qVjT9CrUxXˆo &XnTG^V?@FOVjT;daA@)m9aA@OVjTde6 , 29P>.``d#E]RwErO&~k܌ eFs4 պ(-uh *3`tH}B;% -E?BF*]v Jj\4+$ !#X.~gxhU("#JuN?#&fAt7 E]R](fE?bE`Ԩu|Ι FtX7U(P;l`,H)}cKGط0]39h U(&#`h.>dvNOGopN[4p;(@[RPur > QsGk*]9a8Rn.KB;GUx.,3`w@⋫s&Ϗn.Kvn*|}p8;U2P3E`ԩs~ ˉ@r>?'Kxcq Î ƭE`ԩ5p[3CrN302?%h0ܲEH}T ƭE`ԩ%Ek1qP2({.L7%XUE[Qlɹ* \}W6?hh.Lmb3Vjd-w˥ll>|UqUJ0hkVx /1? 
-ߢ=ƒeKC06OJbx@U>US\gbjb\ھJH{C s2yM:{i&kjU0R2I:֭]v L3Kmd{m9Rc(;Hdzzv7Tg ˇKb ?t0 -訢jafhZMn$1fW,Nz -~g%E1$jѻ@H[TQu504KE+̡pEVSZ,ZaI%r:[TQu51otp-РI$|`c*m7BtP]Fs$eH$[Qu59mÓviGnXĶlH$c< -/cEIڰTqeb1zҢim$ıٔ;gsF2rySZ5Wr[[K 쬨B#+N.wwmo-t w ,ΧӠҴ]гʖNqE]S] Ji9"#ld1IY -]N%wOj8&}k~.$VZ5$h]ϗ\ wГP;n`,H)X[R]B8ntHd1Dp\ 3cyh U(.5XZ -8\L4N[P31Vjgsۘh\ .t -(ޖ jQE!Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@ -endstream -endobj -84 0 obj -<> -endobj -89 0 obj -<> -endobj -88 0 obj -[226] -endobj -90 0 obj -<>stream -x}\TW9Ncf`fgpё"ػc -RD%F &$7SM6mc0MϚ))f&nFs9&g缧S{ ?gc#+xz cʑ˯nembL)4-܋sP~q]%^2E(|Pͻbn(bL`SU38XS^܁`Asc]7=+ G]i@:yqۚc-KV-`l>Ɗ.[Ӛ;;M7wqc[ݾo¸Iֲnqc{з_dlu銶n7xG|քS[;0L}^<۰XIds_{2c>5 B2) {<-SrcΆA+|As9Su~~3~~ E6)^QgyݏӴ&LzYl/"O_'Fuqz_`ߛuvO݇+kd7}G`W=XO݇<ö}9{6ǿf-D-jQZyS̫e}ZE-jQMk?s1̨E-jQZԢE-jQ]E-jQZԢE-jQZԢxѣE-jQZԢE-jQZԢE-jQZԢE-jQZԢE-jQZԢE-jQZXQL -u b:$fz -gK1Yuk;޿aO6IXJfj/O_g&')S$+eSWwV`榳V,_ֺtS-l^09gͬN6uI'7vQU#Gʆ4lҒrdgez{\-b1 zpֿWU e׆tپѣsEWG]Gm WeBZl:dJv06, =_vSTji]E"#5 -oz+CU;*k+^\+o4gf *k9ù&! -3ŊdžԬʺ)ՕʵBQk˻Ph]v6om5ͮuԡVvtl9Pߵ0P_EeCc}>oW xO]cȲńC<:Mȗo!Ɨ!r^WG">^6f|MH9gP˜k}b*k#߫]}; ͂;|4oӫC -@]d(_WA,0:k %FR8b N֪DC>R+_Y!쨭|Swwy;"V#J*ǢdWvT74<&o;#k*졾qZ ea1rc[Zpx9 v,+:r,DJu\;HYE*vgdHܑ>Bm8'zvJV6Vq#<5Lb9G,5 '>h..oMV}5>j1617efڑ]2P*2-J9`-UKGO#}_ L[5/?&4_ 2D?sw5czm9j{:꺺wtCp.:|c:|ӪO^^+qG)st9fV3=gzuXJyȚLU2м -HxEB4 V޽+XZ3g>Aڃw(' K3JDJc9"[" S &`UbLpٍ1XDS5wo wi-MlGIk?CE y4gV2}Ha؅f!O* biу%a⛇o8 )2G,_&e7;'q,>bjtTѤ{zu58Kա?^n(7JQ:uYckp.e(2&b"-DVG7T^inP_|4Y%`V!0kWC-ؙXoZQXNzo|lv4XUQ!ui<)\8 '{kk)nF T_xLL]U:gԸCF}xD}E swt:BڹBa4c7F[Fqn7Fn͎h]Yn[KLB|Q!.sj GG|!xxU7W[:7R1"U`L(HG@fs1G^&Uljuh,'!CJr 2ԙ2N"{ 7]!ezudycDU\0oƜ/u4iIV3)HHG)>#) ŇR^bKJm)ޒM)ސu)^U)I/K/J{x^xVgOKOJ{KJKJ%EKqJSR"$=R-]R)v)R.mR*-R,/I& R\/uR\+5R\-V)J)r).R).b).B).bKqR+9RlbgK!=\^{py嵇k.=\^{py嵇k.=\^{r).?\py.?\py.?\py.?\py嵇k.o;\vpym.o;|]Y^=3{9AgRp!vJN! -ZOuD%:5>&^ZMh%Qjrr. -%ZBEN-"ZHL)VjTQ=|:ZyDsJ&E4dDADӈM!L4h"DƆc@cFcAqʰ{2#RɩP1x7?L(-7D_}vM%3D%9>#:@)}BGr~L>"~K}{]rCޢ"oR '^ 'JLыT|9g"&zO=I_Q)ѣDPDAv"ꢒS>{v'YN=DwEt'v;IDQޭDL n$Ft5v=rѵw D[ -WR -ˉ.KK..$h T,agL3 =D0槇@SuT4agTh5*DmD+T}QkYZJ-ZN!ZD5-5QF*YO4h\lY4t =d zPZN4h*єpb49(0)(pFЄpb.h<G46{CDYN 'nUOA#UD2xIvԀ ;(%* ;FՠAaLP1 ; -䀰C g3(~D}>DDYaL"̠ٛƼԊK'J#rs@}.(9lJ"r%%SUFGKd*i&g Hdz*#Jq"{yyr+l97wu?cW8nyjYpiuE’ M% ༒9g >3XSR<gLON+}JpRD' >.8dtpQ%UJ Ӽi]t`bz|d;~[!n5ޖIURx4 STp_eK~!ϓu yU,ɞMRblIWi\VAP`60 LS)d`0c1h`PT@902`8p0  - J`00(@!0(< ~@_ dY@&zt p@ -$ $ @< b+`@ `Tc >~8 |__  8| -|!{wo{;o77ׁ׀W}+K ^y9YSWc#Cn` vawwwہ;_7nn7W[++ˁˀKK -y@p.pFsq9?8sq9?8sϗ1#p8bG 1#p8bG 1#p8bG 1#p8bG 8s}q9>8gs}S៹k\ƘzƎ\z_Lf -֎Ml =fFlcO`zڑSUXc\T{mgG.23V7Vy?E{H+mZ/m'6bV0bfNa-l1[ o>R/>Vj)k6W+HJ-+j|a4|<됳VK6ӱ2g35%<Ylfv;GSUv-ǥ. 
-v5Wjw9<7hJ>Ȟd=>m.1k4#r^9l71ꣳcc눌t g*2FVhD+O0FD˵\:C -vN*MФntOGnҿ`7[iJ2ynݎ}1SV.:Y`;Yzv쐇٣4KzwWHRz= {=^`O W|:z}E,1ײl3hTdۺ^:5@މUOKf%_97sG\(2#+eDvel,$6{”k|7yq116{jjb1,3n~@|iwۿ(@bj-Z컿X5liQe~ ,W?uo pGCCbb4&|>ك,ez)h^(=T'6(|e3{c z%;,>mVְtj4z1gZ*{et;M$gxm}?Z].T2˕ohƘΒ`w$kNÛi4:1񅎁QzR -`fc`Ѳ8|+pt~ooZ@` bFv19w_M.ې -"b \-e~)+B;=OzۜZ+{Q1i9w Wiu%Egԛ,6Ö%4//ۗdqWQo[\9Gnǰtbl -I6Ldps7CeYbU;{Y-G7ȴ'xz(7iF79}inuEI+MV&u5ު7XC~gdX,)">zXB$yS)b.d}|+'x]==:G-N_nIn; iވ1Ii11iI1X\(T|(VYج{,W9%}>OmR֯9QQ Yu H'xF=9>e3$3G6+V7i/Hud>1;~&%Ӈbl[Wx drs5놭imml0o -{AQbQ˺EEֵӪg._5qȵ6.Z2hI̹O4C20//.oՆ%sg >c M{gw$|pQ#!Nc?׿@6s]Ԗ׻h`a'D892xBD>1ߘt|:}EEo}fP@ET@C EEeC edX- E*hZ'ZqޫֽuQsoB@~%$g.>1&kvgN=@'soM` 0Kh L 12$&0 L`/S&0 L`cp&0 L`&0 L`&0 L`?m0];n6c -sfaTe4n gXQT˨ʀo/6XWT֎efѲj0VRm:fj30WgTicٸYSmѸ9խն\Z.ږj[ђ XG,mMڶ %T/И@fjMڙlv&ۤ63fv&ۤ6igMڙlv&ۤɶ+~jv^#0.Z W4 -ӂlLƢЯo%#2Rb0``NzR*`bqFXHԤF*VZ8(r$8S߯x ,z16/|E!(^ Fl>Ok'I9yp'g8*BVh#IGEi#.`V[ՠ|%AVx.K@&hZDvKSSqq4E>Yk2R !Md=DOV PTE*!?gm9IlCYC:/d~:&r IbQ|Y\Iy~ȌN((;ìnA[iTgD-yNo(O yzUƠrP>"(j6fӀ^:}a5=ш= ->jӉ"WNrsɆ5:&@J";鬡\Ar(?㘌+5u!9й[Gk DHwh7%^Sgm%ոfi\b픦ux;F+ƫ=Yv۾j9*5[/Wq4V"٘ ^RQ/9\ -KJdR*K\B0AnK_nU -Oji\it% - ]]I Aϐg] QE>&3ix]dq*q>}V2g:t -WYޟxTc^T+5-$,2@AtЋ``P~S4THB(`8^oh|/RNOQ`$ a;eA_"XE^CHJ -qM z@OQуC1h34R4DGk2KA#ΤH0OG@JW'>h* Ih( $cl*I`e44YO jzZB@BDv5NIYFԚ.7bERQrIGz# -R,ќkD>ŠD"$E/)>:IIFoeG5{O<]#M\)Ο ;;ot>Pc+q dbJQ*Zd*%q,'WŅRT3\*I4%0FtȴТRWC6bʊNU - qHj8BiN\1U=.0BӁ :qx&iDBɃz "3X\ *R&r"m"Qt:KPPAAG_A"*G#Ru*NK"-x̴j\qLU>XBHCCkuR6.i I520+(R*nj2*BZ -čJodCuq 0lF W&50)|ҫ Re0BJZrX~421z(" '`*#Q(*D@du`#_Y@"jB\\Ԣ /%ѡCdhʲd:lS*[ȔxH dU) B*JN,OJd"J{As0S{QX=|: ~*y0!)h*4 KrؐIh&6:G6@ -Vl Hzpt6qUHvJhJ8r-@"V%`|HTb:Oer`Hx -O $(~$ʳp(TArS7!+6Ԑ s,JA@!m.ڰtV>ܼZ8HE 0(RȌYQ X4 W 6(0RD@92T*1AKdhu!C\.5*Jn2˨mLF -5ͅ Kd犌@Z&p~)I1B>.HIh~4~l -RAbI<4'Xu| -45#RD#c%% 4[N.V]hqaqOtqaL7/;o^3000hM71[t#t#t#?F;}A5+}ь -~)-/R|{{K/wtt;9|oQ0>\k1[̝ k0x`K3nF.rWz32#V=H0ޫa#\7pC'6.^`<3aXX,ꖁGe¢px4j |XhL*J YUU%ºd rX,= &e~\gIYR$HfxT1 V,M3f9fN)ѭ tJLkxXYӬlw 5P'E'Yzb5kDږfmukqPc-,li6tS1],4 EĆFa4%icIlz\xqyyy}{RgSgckAbGל: y.aK۲vخ]f,a@{2jђmm"5,6Q(xTFc5<6zGNjtx?:=eXō{iK3WyCi45a2hǠaeՑEc:i̪6XU6$tQkxR1"t, 9ѯmECb;\S2nCNӝ[~ \K -4&ˉu"a*C͕)st*%ׁNBDRJ^rr_ N240E'RH«-Ntv %nQ(HfCXyk'FdR׏hGQ25#)]<~`SH`'.mG -y|RQFkcl`(c`܊^(_:okٸROnYWu,Y1ɉaG:u}nV8}=z5<#u!Ɖ27ޘo_%y'y|kUȽ#7S:/4;RPRgz[.rIuO.:o]})sÉ;wvЛi=̻yWNs u!+OJ8]qyѓ -WY>ztԶoq,7tә'hw~b:le4K`3ӎْ|+S{W> 9:BLWeoȫ UC?ЍD*Df& AUl\X#o:OG; APr -їe9ƌ'zq>AgF #0) f$$:@[߹_:TYvE)Vξ3-KpaQ1^=_q1|P=*d^r|8z^nvnngWlx}d㻆X:6ܐk[p]OEv.=/ ӟkߝU.>njoʹt1~'㿍vV{f;:b7M!FzF`l!{Awzo֏?wİQ2'X4e=&*.Me=VYlߒ+6HY - ?Eɬ32 Lf]BG3>3фI[֙O_R 7ϗVLysfzwmS!m1VtՇ69{\o{`GeJaTUׇkvt/sZ|XkEaޓN6S߬2!bk*vS3cuOan{}x8,7uWĝ~/5ioo1Ӿ9Uك=i-w\{lÎK)Km}d@JE&M;n 9ނfة tul֘cC=dUB8Xϵ%LHJrC9uQFdJE!Y!!N@IX(7[ㅆd8V٩Zufb|ϧOf(Z  A ΁DJ"FÊQ -C,,t \|` vfiŌIKkx3nސR/5;o+ˌtu~фm٫.n}@O[X ֯ra9{D+KlC&/޷s]Ǻ6/sXxmm/|fnZ*ރ7%dոLo8[޾\rnse -|'q?閅Kq ysUOpVbTi)v)oWZ<9j=Ѕ՝)c|R}s<畻~TV*m릸㎿>w.m;!%Xċ};%!xSs~sfwQy -l͛>Z>^e.~6䀢ŭG׬ٶWPج>~k2dYgՐg$UYO1sƋ..~!U]?C+^b/j;6s;EOKҒ~j]x]^{MtaHLm3mu{|`5>-Kc%",V!({4?FrV%4 \7UAKC0HͶySR BW-tR<2_tE0" &ArqQ7+fu:vm{+&9z5'_#pgR[vXN4iǜxaCm/|<ǃvĝ $.M98 A뎯\{sŤw o4Mϡ3f2oFxקbԫN,6(?g,zf;dyUk1c-iH+|Ol1s b6_ٺ'0u:HnW]-oyr6S?6ۍ[*`儽! 
b?yӞ7p`YR5G0tKgSKV Wq׬RdZo25-{U-[܉#%ɢ @ņiHQAH5*qD mT31#a`kIuѝ.qMţE}h5ݰ -UGwsŒ)-CSr3;'mꭅO'ؕӭ]r)iլw=SR{)+h,mۨJKm Yil6'poY7~ܝ0pnDGu}ućVNWv[=>Ճ 8hFZq:E;qpkt@Z++g:{}7y=v>wOQ⬀GK -ܳDx)[go=#7n.tKa-J8eX(m7Oȩ_CIO""n9u㽭Oķvp鏾\`R?>;T34bbpяjw$|+}޾s| ="p]-7}nmZPcaHN -Wm~ax6AʪSz4^I.&UMY/(7'>&hQwNZ˼NKeDY8Je@鄿%ܐ_!lphtB].+2ǵkV=~q9!!䗧?Yܳu{ -sg -~R}-[?]Z_~yjmd!m ;)u.츶rݹ~'l"*c7<_Y-r5LZ) -JlxW7.۞PLpvGSG;!Ij.74D}aĆZ_V3e݉IcܟVeޜ5pmp߽ȈٰnVب K/g WJI[5nc=1n wN)m`Wޫӱkf]2{5xʧw7M2(tz}kV_q{iWls5fmN+;c*>qqԇ biW_k+JG4m}G#sJnO*!h̹o,ҠΞޣsRX=,*H7U!ޗ4pȜݒ~Bˋ]-J>ճjg>Q'K L/+,э"Տ& ;#ggkB#|5*8?ط`ׂj(LPdTD2hTz~Y :t1gVDL8 t4]ώNk.$*h&S4EbS= D1,`tEiH y:nټESu9mXu&UsȓE63j9n4'~`ם+>]lbťc{og_,<]ލn最#Z/;ɳ<=ɁbgN#='9ln͂79Uw+{[ʔͽ9b:^~Gvkok8~ݺJy vO-&^7hE? -­Y޷f .LMsW]pټ suu=X:w+mkc‹kӿP/~GҺ¬quNӖYVqͿߍU:v!mz۾W"~Yscgcg>Xf;Ρ8f}',Y\\MUXjg?߹ȵ}so{v iW+3#N,LOLYѦБS(Ҫ+7z‚aq]=,ߪ$.] -Bmqns=Q\Mш9Tɇ -bK g͵#g]ƅL.Hm\U>4ϐQa/X>zv`V?si.Mim2;/"[+W0e[ +fl }};{w{bnI\]6H_t\aisxH 8晱Ӄ*7q+=K0fag?4yzVRMnթ+mH/fpmP5L+b`V -endstream -endobj -82 0 obj -<> -endobj -86 0 obj -<>/BS<>/F 4/Rect[93.1 583.28 243.68 603.98]/StructParent 5/Subtype/Link>> -endobj -87 0 obj -<>/BS<>/F 4/Rect[103.1 562.58 216.32 583.28]/StructParent 6/Subtype/Link>> -endobj -720 0 obj -<> -endobj -722 0 obj -<> -endobj -400 0 obj -<> -endobj -401 0 obj -<> -endobj -399 0 obj -<> -endobj -397 0 obj -<> -endobj -398 0 obj -<> -endobj -396 0 obj -<> -endobj -395 0 obj -<> -endobj -394 0 obj -<> -endobj -393 0 obj -<> -endobj -392 0 obj -<> -endobj -391 0 obj -<> -endobj -390 0 obj -<> -endobj -389 0 obj -<> -endobj -388 0 obj -<> -endobj -386 0 obj -<> -endobj -387 0 obj -<> -endobj -380 0 obj -<> -endobj -382 0 obj -<> -endobj -384 0 obj -<> -endobj -385 0 obj -<> -endobj -377 0 obj -<> -endobj -378 0 obj -<> -endobj -379 0 obj -<> -endobj -376 0 obj -<> -endobj -374 0 obj -<> -endobj -375 0 obj -<> -endobj -9 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text]>>/StructParents 3/Tabs/S/Type/Page>> -endobj -723 0 obj -[725 0 R] -endobj -724 0 obj -<>stream -HWۊ}Giar~FmdV a1F? kyȌ̮eMMUf\Nsߞ=X>|ǻ/燅/z|ؖ )x-Xxs|񿛻M/B.rƅay}qrq3s7o Ʀx<:_V-?ǿ;,sc@qA"V߁,߸ZIe|An)ddy+VxtRP39f2=< -PNfhV+c-!ڕrMRa.G -nxj+'M杀 Gg: -R tTuf~PދnM$2G\V{ŒA;-*NxmF~J?P\Z|k.C!k.o"GxתMeg_%/I 1j,Uj'mXG2ecy2k@-nq8QR5@H ANR?BŬnN$ֱ@dQD4 -e 1fL"d/$[(g`("rH9|S)q-"p殒~VhT)߇=GRŰ5:( -Y0*8[=)5qUGDmpxc}9wC=-nD-Tj%l -eV27URw&tJ$/OϬ ̣Dn -"cRuh]o؀݌)1pdL[JHlNg;E?2(6%o:9zq6Jϳ 3(#cDg`` -\'xaʯ  bio" 9AH.5$^EFցX?䑎6j8S -S2לU[z^Xt;0ѵ8xn2XǙb=ͪp.;FL(3Ϟ9S;X2u} -T KkVrU{d1[qup)YE`Vr/緧oO?ߟn?a˺[]X+ ||u d$-u?Հ65\p*I4c#]um5\lUZmc'Z`(Pi𐍹b^z%-17:`QQ*+A'ΡczC!O@D drЇoC?QkliGfemL6oĹvQ)n(xzVPD۟ߌ]VT]:e\ugN[N+h$'EdZ^y!- 9ICrGqǡ5R&j[ؾh2nmzAbvNz5iY[,kmG Qk(uYBڎW:yZZג3 N'WH taJ+nŞ7v;.49 "%a#4 -5ym@d̙hvڌ5أ$gJWV?th0"X$룡P1W@cKH>Њ׭p$o]BL;v -ѮJkի8FG {FFܼOqiGPۆs1$ Na`Ry֕]HamZ$  ! 
->7*MX/} -OSq&<^m~L8ӒozU*I5Y#r&Q^NANd:Լu;Zψ_}5|눁cYvi:Zs>#!hegss _V -aM _~/23^kMjE,N粷l,1pˌFa,q$oeJzfeB\uLtHAMeά,v7W?QoMކ=OIɵD,b8!g -8FxBhrn"!,b7EHЖ\wN}4u~W0\|HEJτ;]*쒇aL Khvڸ'Lg;Т4 |?8˲.%ב79ͩmH|f\ގÖ\&B&s=Y1h|OuCv'iXgYGʕ>!'90E5fHXJ%wǒ c05Sc,:ab &S3F,E[uW@3)2CO-\KZDW _\uؔnd+-Hlt'Ϭ&Jiuh*FGz޿MLD~ou)e n8NV.Efw"3)vуh+7^񲐂C;قZEЁ(`ÙGpB}7|mD}, =0`11ɢɮ(6c%{KZi `"9wsmIZVfj j[MO| ZyGzYMhM=Trvi2VaQ@q[X%ZRa ~Ok=by/T4s֕:?!Es>`[># :&dhy҈<_ȰEfe{Wی|vPG9.oyLR!:UOǼxN4"f.#194(KL/b;KrºfLBoLiJJB'<3f03FN+CRXiJTu%Np8播L4{Kԋ ,6$Be\i1kVJ$ S8R>%JWC㞘 %Jˤ,4$:^@_2tΝͤhqFP hdyMmB&:n$*f\K@ &E핐ryHbFR۞ hπUwfl-9)( mep-gAغf\ʋYScEÁ 2 &ɸȵH3d5.%+ M\KZB'(7ja $kᾟuT\l%n`q  UlYbCQ$(~OpY{2#FUیX0Ih4i,vVYp**s!+3<`EҨz`B7ic\-+-(؞f(K>5l/W[rvs!'7O r~|ʁ_g]{B<>fTE1FΒH rTXzyID/.SD -53$viX60ciEi4#'56фDtI&,~of\QR$ѳr{1SƯ la_H1āD\`$R"2”[ٔD j JD_צԔrL8L朤rMʀy_Do)( -'s:?DaD^.'w˨hK6J]+3?4Gq|ᵏ Ikl0OĨ(DKKh{}<ixbpD1/(>UY~(acsFI֙|%CRsIe.רSCyD-Ӹ严d&N :8 -$oϵ9o{_~\%rA+0߀]t ð?[,w]oi@>h]qূךj8Mػpݔ/`0@ -endstream -endobj -77 0 obj -<> -endobj -80 0 obj -<> -endobj -79 0 obj -[285] -endobj -81 0 obj -<>stream -xwյﷺf`0a(@ˀظU#i4iFҨ43L/c0@@zSC 1p{k?^ ߵs);:Hf7av+ -h_ivzJĐM>ƲݐH(ғJnMbc7{Q1Ie{>gvuy8>0kgw߽/ls?gŐ:-:GqE8/ܕ oXC+w ]7or;~o+s1>Xz],H~yq'[ؤrb'[nشV72_V|7>_tb:[3RƲ_󭚎jW -irWF)b݇--dA^{߷Kn/mey/o+,6׺w;'y迿ȯa\6׿ f7U(0Eú;ͽo0- -BccQ.0qƷ[gV -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP| C_]dNP( -?h.Silpo;&~ -]* -?,940yrwy: wBP(߽* WӾu -|~r#?sBn>DPFu -  -Bgaw]P( -BP(Ego+u?7( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -BP( -B/P( - hD!EJ% _[t֖xeWhD4E;J{+ 毗6Wkpr6OMsfMsQ&l4T|4_iM3Yi9?b}qԭpHݟj~2ձ49Kӹ;&VNZw,Vzc̽O<Ik4xsLU~}jc&vՎſ]M+fGUi0e[V"=$쩒ػWN9PRT KfcdgW%?b]M:6;۱d=UGKWudOr*ǡ=8:Q%CEmpBwT"#e6\<;۫ΔűvLE)eʑ-r9h;&qkT]''rLrrq|}唃nST!kv4WU'_:vuDd,ީ">BΜȹ7:8oA$?tm 99 < - ?)< 2"bnp-<>"rPAꇃO^o@E`>i@x8t.x!~&2s;KzCwuH)ras rX';; iq1s D [!N`.`.] k w1Fx' y8yu5uODg:f>Spr ok.ks Yw_8 "};\xAaSE|"}t|o;\r|rK~K2d,82h9gX drl`c%&+ն -9 UvCrS˥zYgџcEeWIt Ʋd ycYy϶I%VoXU>$s_K5|}?Rc/IF 5;Af?LI7Q3Ej#5k皥JW oA~ gmR EV5Bj-*i_0H rG3oEjӮF6qkm߲ևTd?NmJjy7'1v\A*Ư[n} nm|Wd,MȗR6|J\'fR&m* i?GxuӍRwETJ] z؄Dr:V׍KvRJ]6[W%qU۩y)s.GꮳKݭ72?ZYmRWּ| br$>~MwFwϯIg/,F}O=Sciȝԏ/;H{5b|WZ7Ktw[Y GX$?н[&@~aI844hi؍VifOC=xu?؀ 5lMBeҐf0c*kn8ɻl]6\[Ҿ߈|8| h$3kWHi,Y4b;,43JBƗÇ4b'ưF_c/br}x:sw>Kl{mѿ+i_oBO\j&5|reri~?o~y2#䧐EZlVi!h`IiL{BZ~nӖCC.`.9Vrֶ!; U&-af[w-Y揠/?y\C 8O9vi9W'h!Z-i yO q{kZ6Wro-oCrH,vNirWVH Ҋ?mN3I9ݡH_Z] ʹZy+~m1HJԚa<a>9V+_1Vi==u-ΡO+ڜ7Ժŝޅy?Z.*:7f,mPmd[m򛶉i#i#9F#5JvF\nI[ :UȅvixmEOT'gn;·sFFG׶kۣgڰ7Y{\#޶/4ICtnGvܙ;sNpΦpη6'rםd'qI\wb!$< ye&q|ޖ2=:#'y[t߹tNH{t>Ne狴_A B'ĉ]:_A\;UzVm!6ڎ'۫Ӷsvb~;9M;m&i'lj 2ڵFi'~;-Nl'?lVۉ_-$i_S.kE*~]=f૴`ؾ=?cv|`. &qq&ys -m]Smup #\KF>h\]{sqw(ʢVrN'3Zƅg6q]oe.v\E-zm\Ap-='Mq[ @E7{'Os67M]&v{ܓӍ?qρ[ -+ܵ&qsN7>Moq9; -/7u;K{u6ݜM,pcS?ݼQ7o?nlM ~Wϸnj4ͬ-:|oGӐ:M~~L -7v~,ON_c4SA{XlÃzB~ Ͼ=3"<{=1уz=KB:~bQEڇؤcA::ˍ9:&hɜsI˗I1dtkuü wqYt͐wq7>/:6&x76|Jo3ƾOJg9kΉS$9w'~ڡe;F wt6Ҧ.tVu:b'5\'ej,t_'y]::oCO]y{? 
?7{w|wtI28B?,] 1k6r.r~t/xG]q~aqr.l : y -{[.]]]ه߽ ѵeh^b5־Śwн?"&~uv[{2/tE?nnnnrnbr2Xx+z':V7m7m7o;ڭЍ?ڤڧ8O#g#CO^M͛n[؛{ kK7Mt$=== Cӳ-LCNѳYzlXԃB~!%6驧݊c]?}CM3D޳ӐН7Azw:Ɖ=ݞislWoUċy"ɋD -$_RzwG7_^mރ|Kn%_V3: /us/97>yk/o݋r.﹬Ŵ^rw/yt1{7{﷈Q?IKnWGNCHn1l(>xOhs>Ї|`^&Qs -} -_E|%[}乾$4$qN_}Cw裾z;y6qVߕy-.ܣ0:I|Fǽxo>{O',cL;";MgҟcŴ:R)22F~K,&=6ܻ=ы-^ FGmջy*1|Z/4|gđ^H/}\.~|o3\?I'h?6$a~?I_r%~r}m -q6H|8H_'9e->ڿ*=|&XClSS;ea>r"}W5pF#_*},҇U!}} HLF M"dj>bKyN菅'Ѧv#7[w}w.>ꎾF61{[޴KiS+ȵ ' [UF A<%@ y[-@ )9v*tIW:RbSK.9jc_L=/C|\uo=>9Y`U[U1?8Q;$8$SG( :8}%ǜF~ԺA ?S7yAl,  5T3OArv ^u0׌Q?Ar X\,򖂏"` L rAbT 9Nb28>wO꧶ߍ69C>8Oa~P?!z9VGߌ.0GwLOOOO|oH?0iC!3K*D Ga6F*Dm"fY i:I]:\BԄ _V B#ȟCfüI.7J>|abPxysYf t{QC蹋pI 0'|"v [$C!̽wwBr=I~gYO] SJH{!L|p%Bهql,B!L<2$6;DiGO"QAD?1It2/EwE/{%: EFOŗDX9sBh/$RF(wŏD#<^G#QrZg1F-%w^l!Q(l\)J%GGryo?DQZt3}Y:))(1l4Cb1|fl=gQƨec!'1[1]ǸAZ$6}ǎ"1ibnH#OCŰ +; -I#oNc=~%ۋcWZ%ߌl5k{gba1|J3ƶ 2Pa6 ܞ699.5;}pr`Qd=@5DpEdW}da{ Opd΄9uU߂8p7x0@N80c&W[\pH>$=K85{sq2fcs?Yv/`?^~;$ϛfl?6Vw'O8re讲Jƌcql7,{610Ι9N.18g8X\8& bE&ʑ?1IN OO&vQ6M̤OvrL `#ccL 0/EƏbqVLĊk^^7Ӿ 3#{w n cs_X%eߘ#S;VJj_t䑩L"ޥ#)^ -;J51\*HG#S^H -o'o"X[K -ߟ¦RĻ9sq秮FϛI<~{16$Ez>#ߟz"7GnL>"HOZ'iUҕȉpt4AߟT"̝<=uA^\iP ="qdyiΚ^siOߧ_1*uib]_ "|yO!ϱ'6~u[H|D|9y3Y2hs Lߘ$=r|c81.C+2-M]θQez 1>Ù3L6!g2 .C!.dNa 2Ҿy1 -yY2̭܅M2̣V3yb^/Hj yO#?E;VdqvQmv -r2Re9gvEgv>%&K 6>!eY@:iH͚3 -fנ˞e^iӳ=JNO2?}x+x13}55ߐ&C!shYdhO}-2<-g8>Ƙ }}mz5%ic4C)C QO^.-̽=G>>3|->;?sfwrİ&}Wrvʑ#9jL9j~?,evˑ9\.9(ˍ"y9l0Gs'Y$wXɑS79Γ7 HF~r&G^{65C!6w=|g9S7=$y|K7Ly)ϙ{ 6JZ5?Κshσ ~&O͚(yr'642 -ݐ8CF`1p'>d,bQ|"dNe4ldQrQ'Xe4.s/i_Nzry;J:yYw'wGCF#F}FO %W'|>(>yt3}8)Ý,R /p$0&rB|߮ 9g =̡,PW9|V!j_. >l@N^Xg….1IWs-szݍ8_< -O=r){"[qHMUN.$.Ncl:m켈Hܡ7k-R$+rEYĘ"vQL0?E|pqs8O |#C?}pW:NSrY5|o|o!>'V1(~ D5Tz8y=Q^.nQ W(RK"%#zle}ee=|.WZUC`@6GOT=o)ǿ 䇏9?2!B~? 3~xB8ߊj8SP~|'g16lh~oGSa 6 Qng?'^#~]}C{a -`gIq -+J}$":Y̝ q/~@Ą5 -@ k;h0vO[hog/v)? ȼ]::1`_#`WV ؕ8061S)ajU8j4o ޫH5YY 3"ԐE_uޫ5pRѵvCjRԀkF77qoVC;5Ǎ9ٷ(RF|#XΜ:~6]e/qc8i65Y1L#_GHn):BMj4qoMgb&ܴkZ+M&b:kS+mK =l"6n/6M/6Mؘ& }y@l -=K7˚&lG 5M_RM9# 5DoO|n/~9啴''1ʛcޫjg;ƈێ2N[yvު=o?~|qj"R)#~TƧQgFû۱krPvlZ{ \(c;}=kvޥOlf7n}dKqjq~O#8J߻>,\ eHKfK_Ϛ[('tLb p=b"p$XiQ j{c W1f?A'w<RK䎷;>w_S'J|m'cR'ܽw鼍9SuNgE3'R΅kvuB\8|'oCyCtneՉ}|v ;ߣ`:q -1 -%R' Sp!:1H1tDkCCď!?K_ч\V_Ŝ:|5 BB|_ୡk'c(^W !t'to:)t1Ůhu]buJi]ش? (R]WP uxLJ]p. ^ !]$xQ~#]_|\wܳ nݍƿucM}.)1nt7\8=WwWg,ڝn{ m6o~Qu#e71xE_#3=qz"Ճ\AY#=jqLܢNc>35̤d\ )fN(kRKy#^=if'?܃==Z1aW/~?wdap^;Z-齍8;{G;}*ڵw6Mnpg`%ܮ>{~x]~;{5߇EH ViQ>ثه컞94ؘ9Ot[rk1SqFmE2O= @dS)ψP?ܬX!. 
-6Z̃kŔ+p~?;~o$f 7n+}~tNƞhw؟X'l.gh `+i74@;- k|@Gпo^p?@dx@Gx{xpl <_LHp r 1^( փGhp2Ho&}s( _AbA`%z` 3y|5/S̓cIa:o8o8>>v6 C/ My/~6g~Xց_dDL weD_ˈ^ ƃ'd& -2;2GACF⍀M<(#ECưl' ,k?2FlpxYIŒ?fSu2N{Fȗd~33dr*]o וaFKF -pLh~dM88_2Ο6˸Eƅ;e\tf\AŜsd\.;.JƘKa2:x f\7G3^ sraPY3zyasVw8609wc.}eDg$L%̸fIN]}#y>7-̛|۸Je~;NDv`>h1刌T2, SF*wzif_F:7#ә?7drL-s3Yeg#Gsx,۲x,/E8$#l0G6S\pXF.ߙ.^y|c!{*@g -yߢ2yb73G7dj>o[ p%|W)Ya,AƒD0NÌV+N -3VDV p2V5xͫ2*&d߬SZ[)5+A @ky >u ka"~d7 aF8Oz8w=+6HmbNw6'Љd\~wa_ַxSF'H0UF2كd os !c D p;ew=ͽ}u6п3;8tG@v +;x+Œش쵃߁l@fw;fnx s;8Nv'yٰsoWv+劤ϕ$\iz49JprpRY+:f}?&+Vڦ;<HPGÿѴ"Mr‰v(Eۢ9NҔq]Ae2?H߰Ghp9E-v%ls /hj-M'nPKRmJ.2NKe/O:,3XeO}(R0P'$jɧLsOC˟>T+]eZy_Rd'?FꛣfNddjU~jbxbUکWjMLGk:SkGkv&ךW5TWۥIߩ&:l(YŻw(MP翩̪:[UEkݒӖi]Uժqi;mj6DR3VjϏ!ϬȽ|)7:՝'ML:YubVص;[%ﴑӠyϖ/|bc>rdL8WoS$nO$z8l48-p,ziI ϫMCyF?Չ eTkS𘠺G=-<3U=گ1}7\}E<Ԥ;{ߴo&aa)EX$PO#4J#,F59 #`T o#3y09=kReu~'U/`@X,l \:yߛ0*Jw" -eФT2u $`wOVz O -.%ck1!b`{KSFHVq200k --2w5_}sr%jo5g^z0 TB5Jk;< >sz 8Oa%=j60^w{f7BS <xO/2 ~* ҅悫wx kDq(+~j`^3[^E- #Ya%%ݚ%6E}EzCC.+ | H0 g/ͽGǛǂW#U+'얓,3~JιB9O8S8{,eUν:9{w5Ŵ֔)o2wKS:%ܼUSơ,SGis˚zy:~^S3RMgVjj5'rM}Ӽw(9bM=U)OSM ;YfRVZ TX++~O);JyR^$0 L9hPeJ QjRϱQXkR'[)z;%J.w]&Wퟕw(~hG+)C"Wwvrb)mEi86d`}f=JҖX5J[e4}ZT߭J8P$14mWZAMѴ[U~I\Je*&;Fi猽Jgʡ雒Pfv2bg5MZ5Je`WG8uh7>^:^MьxWP3F&1ϝ:5E32YT쥚<3-egOWǯAeQ\\QK(ez/yf27?P6en}gLP3 G55_3ui 4Q&͚4kRE*Vz,լXz}fmkց?Y7hv=E?kvݨx>?J:{8G3?֜Ay\9hA;Lsi95^͕ʚhveMs_?׊#WZX+?iV7ʵ{ZٔqVޝ1F+ kV]a֪:cӪfVU6j>+BhշZ}7o$wiv&_u9ZV?iE<.WH+Jc\|*_fV4ջÊVy(jo5WYwM:y֤P5<@'s/S5˪O^Wתݪΰ/SL+I0U|SՋTdE^j W2sWX'z٧[AUwLp7YOI[:T{.ʶ.|MbϴsKFh4/ZzGIZu:xbwM?A5 Ρ)pP͙fjTsaՌ3R$Kl>8ɵ]5[_<2}dғ& $L6!@ M[Zh]6P@Yu]S@-7*\>xz! 4>LZ -_2{<>s)Eo礕7%~QXtG \WX78X`CL?aIZp#_~Ô`J e`ZK0>uPW\ ԕ P7u,*6{n(IP 6ƚi  EV8Mx6N/6z / 6I4DMnn$l0lzJ6zfFeC(~dP'\/Bx!s~PUB>/A}ME/P`E 6! F fh@:""ow*~C}IhxJX]& $x{ZR#x?4§Y%$IlހɎ&|60>'^͓$lRx+ –=Y8l 0 r,-/ ÖWBrQj!Ey@c`.q fh|44~ p3!mݰQ6 B_vhb\*h~#Д ?BSP{p^BKCC?Cy#޾4>MB?5/i;4[W4(~M!.Gc8+4p bD/,kK\dc -R 8qLb:)'H7Ia,H~__K__0|En7^l_)V r#TSW%\q5[`ԂoHF [)j[+>u^𭗦op6I;W{w\{ߏh(ؠy=4s Bh.Ҡy0 y5| vh~DȄV|Lp@_旍h>'@.ɱ :mK fm]:ضE¶߻Yv1} XFa]Z~h^%~Bj.=vs4^I`2"H^k㏮xqD: -;E,xqv;\Kqyivg)9L;̗BpK`;+Wa" v.Mv# C/ň@αZbǠEo-)2[xZCaML c3Z:7% .FzvKm+BaWTK/̃]fW/%r㰫@zv)΅Ͱ;WvNz -vϔ=ҳ'q+`| a8a8)?=4Þ\aaW292ž}qX {aϷao).2!w~aX ¤`h!y7a3`߳raNA: 2'a3'5E„<80D<@4 -V80[z4Jap`M8 \ҙ:R Z894 A\}J }B P}/UkȀ4`: ]I9-!?AQ -EJ P\*%Av>Bae:e -Op&~/A$..D|;F" - nirOyzDW -@z7l @!8eC&Z8OF)3 U Yr@Q|I|>@_t9)(з+wB2gRހOZ`R'`L6PRs p˽"ێ2@ղic݆w]{ǚիVfK/ZX`ys̞5sFʊM:eJ -Ǎ-3zT^_NޟjgFֈH,Fl_#7]6Y\n.()1no;*K.ު7Np,NQ0/(TnVȝRzMM*r kzWuuc:yXih:{wm/ae.RTg2MmZP5oBI~S,h◯m + -d~zQ*xC^R* ^3o6mzlތs0[TIǵGhήnMhN&TLJ&;j^ ׫pr[XgUmҦU 9Tg' -”ZM 3i6i:-O-Sp4 8Dnnj[Ý1 -eW.T~4 -p|DžFOZV_rd{n+UhuP€֟hiݭ={z |k%ky k211U|^@k 6UM0UWذhy* 6ķҌRY[V;gG#zp%=~W8z'΢نYJ+*~U#Q)Dn+p7).[d4%sכcjuUcFNxC]|Nۈ\שּׁ@\|icq$2tR7j?G:JܶT6xU2q<iW}Hȅ2-UftmoX|7ϝ*gT.i[(HT7$TܫYB5Ͼ:G6@Jtg{ܶ.Jtפ&E¯BmU%q!,=lSt^0۩q8LeZɿv͔R\kLnSE &YDs,RS8ܸA?=r&oFmn}B -~0xO!v$vj -^X։0鎰(zVR -D#ȐF]C+2 zkFRk0SUԸդEv iERJi!5A-NOqLAN偲qOrsH]O2ld6_Hl -' -0ŐϑӉn)voM$tj[N͒M:45J8"$~ -jN^x-!ҽSN4l -- sI20优̭|H\dZ5*G>t5d1o V\Ak2iu5q_'7mYkS1ϐ)כǛʶ`t֨(n+d\m'6y[~mj0vrI:RRO.CJ>fܤZ\OsJ^7zd=<>E(-dGz8c ye_P>.t{JD "Qx-Ƚ#";{T,9*?{^Y;NNc8m;={N N“qZyzO3OS:<Ƹp2Q?({:[ٲWSeOE2*85S{Nӧ?6Kݑȭ[=8ȭ|[[njnIR1-qZn:\8O! 
7/=dܬZnfZvJrJ[;Wa r[M&27ȹrkhm&`,+M rS۹K+N rE|>kor|ǻ}\971Chln93MrMSq#99Anxr-wkr g82k"wKv.))ǥxU)}t9)ș,SmIz8CfRD1^܋H).ihIzgl.-֛>QemjʢڨAǂ Q6j3:ؠ7Eȋ j˖h#b6JBD=Fel`롍OB)  exat5hC zj `!?aөS>(ih]aET2,gP6iry+Emu56Bo|5]\ުbv\ݮuD -TU O*4D)t պXd28lRBThy&!Y} 4H9> <x_0 w-o֖|0l쫀 P -T,P -5Z;$ 툵 p986"8$\'9WX wg#հ r˘+qRXZ@9T\{ޢqH, `/Qz]azȵ*(~ cwC;aʣy%6&c<]}KkA@E`*Ȯk 1uT -3f2SL;;?m.uk:Ԫ]ױb&WLog: ͶŶ_Cu5 t4jyXm38^Jiy4K_nW/}+d6U}84:(KkB]eF6؊,,.@ZЦ8F S룝Opz`mݖ9̣uze(lv*>+Qut:?xt̨xYu)ʎ[bF%0Ĭ+ccoݖe̕Oަ&HfɜeѢ3!&&ah-EGD]h# 5uY -Ct}RV :az&3%/ŃD" JD$D,lzc[QͶ/J$qmkV,A;Pڢ ţL@u37\^qX?3!A$/Nݞj~ֈ  CR]LA&'hT#؄xNE]V2:r0'klNU;%9l}?g/_3R>4Fc… !B{? Eؽw}AiZunE*^"΃tHp 诳:''O붩&_]MjmYNY?;"1{cyɶa,E vbgSWXfXNZ )8FwF0HnK)cu0>S }Ċyh!^=@\qΪטK:anR`1H,^NVg"C'9=z*6LFPxN I9fiu"'b,`yN; ;o[Ubt*oM@Nj*žCg^'ӭbX? !NOUޓv2׹Ԯ?HIM }F%Kz%i߿)DxwܿJZJ]Y|yczz0K,YƁb6{=h6oY(V{1aӞ8nt.@q5- -D5 2FFܢUm_}hu;ʋ7*:mx?6n\'?v n_f͊?_*k.OZuZBνS'4|Lʼ9ΠPC:9fȂdg ՞>՞/a?$`Dqo޳./U T=1m Ơԍ6.|'>N^IᷗjWf",A6zNܢ^Oݳ\5|x՜~*]vc8̜OwQ(lAǨ]Eȣȣ7GQ32IpSYR 6:)2118`r.g]{u3tNw`i63f[ u5/m-5;L;1wdU6] 7#F|v܄BGE݌owaYG e-٧rSj=fcʧRfY3ԩ!_YZW^~Fp|ۤKU_<ϾEljz'ᘫ|\tL':M}\ !u%'UD}vU:bcJRc{I S~t:`P QĦ?}$sxzK_H}ڰ/ur"528}f̂$L )o*K+ijdݨBC@lqqzΔFk~j˖+@SsϤNCh z MGI*A١Ma*U'n6-eUس뎋Յ\ye8bƄ:G.:Zޣ]VF+[ e |XC<]l:)x72&4|ۑtZ@?v9~)X"C<hF~-+rofY}V{fث=eeqDSf#8j33-lĈ}cˣުOͥ˩ KY]%IY7KV^n||񙃣^8(a'e(ٷ3$Vbe[MOmt~ݝDp;P$_W&&&&!m`:o§LcM _&7ydS/}S3<7ngGʼ:QQI={&'T#7" S {~y7=*`F`v!eI*3o8v=[W_j1ʅbY1V>ziB\@ GkL&z!';0ݜ{l|0I}+oAO_è@8:!nYfc5L)<|W.ٸ0LA_+[y`:elς)褍?UpoZYL1':$uИ[帍 Y6?%NҜnᗕ֨NtPUITJT(*Cs4Jf^۬i׵U0_cg*QTq]TgVaԨtLũVj$#~ՑS[jݴv:~i -Q-RtNyfnEFMWKIj(?/bKU?իw \B-LA.)uX( ˈs^ rh5*kܓ|-aR-=<[&yev9tʱqZ䜓r<3s+V֙QZ$GuHu9bV&Jȶr :TR'j8Db9#Q+5.F\=9-9xשF[q&vկ'X[Y(ij;&4ڋ: ,VPh[cxеn=$[jUS̈́UR}>FA{ǵ!abɧDXsOwvYwOO?~Ek #yQ3old?C'vo_m6[ =s P$ڪҳ:*q%zܛI>{k =HrDsǽ'QEpSK_p9uDܰq. -6m\Q^vnnoWN˺~YN֭um+~RG[2.+E".ߋR҆^$<<9'wz  -endstream -endobj -76 0 obj -<> -endobj -725 0 obj -<> -endobj -726 0 obj -<> -endobj -372 0 obj -<> -endobj -373 0 obj -<> -endobj -370 0 obj -<> -endobj -371 0 obj -<> -endobj -367 0 obj -<> -endobj -368 0 obj -<> -endobj -369 0 obj -<> -endobj -366 0 obj -<> -endobj -364 0 obj -<> -endobj -365 0 obj -<> -endobj -362 0 obj -<> -endobj -363 0 obj -<> -endobj -360 0 obj -<> -endobj -361 0 obj -<> -endobj -345 0 obj -<> -endobj -8 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text]>>/StructParents 2/Tabs/S/Type/Page>> -endobj -727 0 obj -[729 0 R] -endobj -728 0 obj -<>stream -HWYk\~ׯG;V/q`H -.A`ǐqH/U|S3`a-=}u_~?|z׏wOyXvύ? -Xn_nAn>x?o>>>s4ݏ]{_Ƌ%zR;)ᴚC ~=V_)AXo76~Jn/Ki?&'|#ɟ> ߆N?'o=f?Mu}g#˱ovdbBJxz-$\%Rb SV{jjXQysp)~/5B;T7R%KsFTΕŭ.tkF J0hݱCFӇV`%Xo}IhXPw%HGעȟ?v]A#;c1c.#Ar _fHF6@n)N9 ڴ]P>=Ft͠f@"MG93WXEg1wc@FPuo,хR P7ak?Hf [5vØ MU=AN|B+N*8! y%`lȄc15T][fTPdL!!I#=,Ձ~ih]3I`C?UScHi a^cHhs֫ARi4fZA 6% -aiKHԁw[~ D:(Вu(UYRzqi#'.)Ȧ Ms"tm-l!M nU/ ٓ|zWBB\ l]y.`E8FYEN/œEp =TzԿM?qBЬԲqP>;J3sE_QFOZ-LOq捊RT073 "aņMh+ީ+T0 -QX,:3R9n~-RVwhU3cJ;~P!/__>|{ǻۧ_s:n?>ם};$xa+^W/ ,q=B%lNoYI&Sq;j +FUCg\}8SFU"-hHn}WF)fSOlilawkhk\1HG62 gەC_vzc%G{Kj8p#^U׽}cVY5VVMkĘVMMaް uB "wg2evTQ0Ga-ΖDݪҠrݹVd~;UHǺN஀a4]eX,L,9_Fẖau1F74"E·zJ9J{52sæ.#ה:RوG f+oo`][JŔHN]O+?vOʚ ciČmF8OT&Ux"dK4 A3f6ŵ39 ?gAMcpK^4~RWi1#Q8#NP'͂~MVB7n]StS?|菫iTwŔ:m#LyHmHN}x K%@zDzHL=5enK-DIwQ 'IG*. 
kvs}; 9hiE*0,C0:pU}iYI] ^"]jkߦI(o -LGa7YsB4 hBe%ӴTu[LZnJz7;̈/Uj@fr#VZ̭b^|nbOM!n\%,\olz_"yeĴ,;"̔R(3(#8&x Xjn_0HvGz9ŐvE),Ѽlc1!IԄt)z6եEOGCo 9$y^ -à"z5o~uT(nWD$|.~05uv) q+ J@ASJcM B5oqUNBeŗCgҬ N*S۲DAs m"!66( 2 -r-c*`E񀊥5&]X`Fa#h@GixIrv>s V_"T\FZL Y*씐|2SZ2dwP!ҟ -{n9XRP{O@/KCq;kWt(3/ )Ѝ( -$ /(rw@w@(ޡN;g;2%?HoX ?/iR[d3HQ6yHa##O 1鋫%3nSP -RErP1Xzx ͐Šg mma֥ҸPㄬS{Y=ȈΏu>HUl̾5n:Su:Q./0G'H- |+TK+ Il(j0i% uʛ ,f1wc Dž'$2p*Q0t?_-v;Opgrnל ;ݶyءjn2wtYj}Nء9z']ݔ+dȸ.>}r)m?7M͛eZL\ײKܐɌOpݵ͂\Ew1?1 ; 0H|& -endstream -endobj -74 0 obj -<> -endobj -729 0 obj -<> -endobj -730 0 obj -<> -endobj -346 0 obj -<> -endobj -347 0 obj -<> -endobj -348 0 obj -<> -endobj -349 0 obj -<> -endobj -350 0 obj -<> -endobj -351 0 obj -<> -endobj -352 0 obj -<> -endobj -359 0 obj -<> -endobj -341 0 obj -<> -endobj -342 0 obj -<> -endobj -343 0 obj -<> -endobj -344 0 obj -<> -endobj -358 0 obj -<> -endobj -337 0 obj -<> -endobj -338 0 obj -<> -endobj -339 0 obj -<> -endobj -340 0 obj -<> -endobj -357 0 obj -<> -endobj -333 0 obj -<> -endobj -334 0 obj -<> -endobj -335 0 obj -<> -endobj -336 0 obj -<> -endobj -356 0 obj -<> -endobj -329 0 obj -<> -endobj -330 0 obj -<> -endobj -331 0 obj -<> -endobj -332 0 obj -<> -endobj -355 0 obj -<> -endobj -325 0 obj -<> -endobj -326 0 obj -<> -endobj -327 0 obj -<> -endobj -328 0 obj -<> -endobj -354 0 obj -<> -endobj -321 0 obj -<> -endobj -322 0 obj -<> -endobj -323 0 obj -<> -endobj -324 0 obj -<> -endobj -353 0 obj -<> -endobj -317 0 obj -<> -endobj -318 0 obj -<> -endobj -319 0 obj -<> -endobj -320 0 obj -<> -endobj -315 0 obj -<> -endobj -316 0 obj -<> -endobj -313 0 obj -<> -endobj -314 0 obj -<> -endobj -312 0 obj -<> -endobj -310 0 obj -<> -endobj -311 0 obj -<> -endobj -308 0 obj -<> -endobj -309 0 obj -<> -endobj -307 0 obj -<> -endobj -7 0 obj -<>/MediaBox[0 0 595.32 841.92]/Parent 2 0 R/Resources<>/Font<>/ProcSet[/PDF/Text]>>/StructParents 1/Tabs/S/Type/Page>> -endobj -731 0 obj -[733 0 R] -endobj -732 0 obj -<>stream -HWj$G}ﯨJyP]-x4,Axw6/qFjAHʊ8q헿>L>O?~Դ,u:?>ٚ)9=g3ï?Lf3};틚pI!)47uR^'(rccw*?n1>(zc-{5Vs9IG_;[l;V, DՂOl޸=8>=v_F+B?Z.8|%ޔ9Si[ +\ks/Qnz󼙗{Yq6/ueJ ]\E?uUp$}y?k82+6/rgEdC`IH?hg+Uzݤg?_ h[☜׳Ms{V}Q,3fvn)bt1C=2R}ņPWdRx|zʷo3ֶ|=sj|jrӮ@ϑx艢dZǀvQBTP٣e|*_Tu?IEÖmNqSW!(shfAU+L,d,fKVZ ABky\݃;%#biJ2\_h`=I:k{Z=&SB\qGQ4~ha./:Q'@ֶFLccwI)\jjZs  - #HJ<@쳆xGehfc6]06Ymq( -Ze :-İ'FBʎUm6x4ivS!:fKZ]VC8Æ}m(FpIiܵaQt}qsYɨ;r-"jSIk%z6\d_h6D6qPNKD4̣ĜzSrh<$ī;P4յi) uU"FXc =ԮP7ŧoGfHcTgޜxu)mĻRt@Kt)*#t\&hrK縹8эúqBLKi#{$FƂL"<{j9cJɹ\B3a(C~L>O?~tW=Ln˄N}7}\v}Ԇq~;>{*P@7MʾD߶YOBWCW07. -~ gᅏx / )? 
diff --git a/papers/minichain a small library for coding with large language models.pdf b/papers/minichain a small library for coding with large language models.pdf new file mode 100644 index
0000000000000000000000000000000000000000..af4857ea5531c59c08fb6fdac86d886775d98c5c --- /dev/null +++ b/papers/minichain a small library for coding with large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ceb1552337734ee3c809bc219213b4a7ca1f68c6c84e7a5b57f70727e734b5a1 +size 231993 diff --git a/papers/mitigating label biases for incontext learning.pdf b/papers/mitigating label biases for incontext learning.pdf index 247b39d8dbf16ea2d2e46fe7020343578b6882d5..1f90d001ec3554a874472a452f19e1399bac00dd 100644 Binary files a/papers/mitigating label biases for incontext learning.pdf and b/papers/mitigating label biases for incontext learning.pdf differ diff --git a/papers/mitigating word bias in zeroshot promptbased classifiers.pdf b/papers/mitigating word bias in zeroshot promptbased classifiers.pdf index c420331f9ec6eb477c13c4bbc3b01d755550aae8..487b59663cd85d45aca0ac8222a8cb07e1dc486d 100644 Binary files a/papers/mitigating word bias in zeroshot promptbased classifiers.pdf and b/papers/mitigating word bias in zeroshot promptbased classifiers.pdf differ diff --git a/papers/mixpro simple yet effective data augmentation for promptbased learning.pdf b/papers/mixpro simple yet effective data augmentation for promptbased learning.pdf index 47004e2452061aae2a082c4fe1810cc52cabaf46..c953cd143270a1dea0b7729b24e7a488d267c2ef 100644 Binary files a/papers/mixpro simple yet effective data augmentation for promptbased learning.pdf and b/papers/mixpro simple yet effective data augmentation for promptbased learning.pdf differ diff --git a/papers/mixture of soft prompts for controllable data generation.pdf b/papers/mixture of soft prompts for controllable data generation.pdf index 76f54da757fc7a3376a55e0916b67188e76957d9..d3f4f87687a8c7576ae47b24b9669f34d7746899 100644 Binary files a/papers/mixture of soft prompts for controllable data generation.pdf and b/papers/mixture of soft prompts for controllable data generation.pdf differ diff --git a/papers/moconvq unified physicsbased motion control via scalable discrete representations.pdf b/papers/moconvq unified physicsbased motion control via scalable discrete representations.pdf index a8b1b0940647c397976f983e6619f46c2021ff19..dc2f02a7376dad6f1995afba9a7aff3f8ad1bcb6 100644 --- a/papers/moconvq unified physicsbased motion control via scalable discrete representations.pdf +++ b/papers/moconvq unified physicsbased motion control via scalable discrete representations.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:44b8b4e7c976af75a79d6219233b06e1a4ed525e99e7090c189e014b09dd36dd +oid sha256:579f794563c8c905a7d643856cd133187d561881eb631de4fe96a35f52931663 size 14931181 diff --git a/papers/modelling latent translations for crosslingual transfer.pdf b/papers/modelling latent translations for crosslingual transfer.pdf index e715e3f6b33750c1f89488d582722e2531c23f0d..f4af13ae59ed4ee232a0cc0ab5104794e700b66e 100644 Binary files a/papers/modelling latent translations for crosslingual transfer.pdf and b/papers/modelling latent translations for crosslingual transfer.pdf differ diff --git a/papers/modeltuning via prompts makes nlp models adversarially robust.pdf b/papers/modeltuning via prompts makes nlp models adversarially robust.pdf index 310cceb769c94523fcbbd0fd44e140f9e53502aa..2ad35616d7866fc848607e4e1a322577f6ce69ee 100644 --- a/papers/modeltuning via prompts makes nlp models adversarially robust.pdf +++ b/papers/modeltuning via prompts makes nlp models adversarially robust.pdf @@ -1,3 +1,3 @@ 
version https://git-lfs.github.com/spec/v1 -oid sha256:71aa5aaf58727c87ac755dd07633fe5ded644f37b1708ed09fd8aaedadf1ad4e -size 1562494 +oid sha256:18d70a7dfb615eeb10f0d56e43fd5ece34a98ef9bb3cd5348d8afa5c9bf942c2 +size 1581294 diff --git a/papers/mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf b/papers/mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf index 3d41710ec32e9c0c178aaed6082f4b15ba0080f9..371172cbb873ba982b6df4a50c14e0be158b1ee7 100644 Binary files a/papers/mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf and b/papers/mondrian prompt abstraction attack against large language models for cheaper api pricing.pdf differ diff --git a/papers/mt2 towards a multitask machine translation model with translationspecific incontext learning.pdf b/papers/mt2 towards a multitask machine translation model with translationspecific incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b9eda4c25c581d3fb19776a8e93c1b718bd41dcc --- /dev/null +++ b/papers/mt2 towards a multitask machine translation model with translationspecific incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72a1ce2d54e90e36d03c91cc05e48cc5fca1c4977646ee351a0cc8cd77c0b225 +size 870642 diff --git a/papers/multidimensional evaluation of text summarization with incontext learning.pdf b/papers/multidimensional evaluation of text summarization with incontext learning.pdf index 3b154966e526df5faecc1cb0b316dfc00100f153..9186f3d2ff3ca0365a31103c25667d44710a45bf 100644 Binary files a/papers/multidimensional evaluation of text summarization with incontext learning.pdf and b/papers/multidimensional evaluation of text summarization with incontext learning.pdf differ diff --git a/papers/multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf b/papers/multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf index 81ea6977dd9f281faaef4e2769c65a1a52398a28..ed95028aae9d8a91541704838468273096210823 100644 Binary files a/papers/multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf and b/papers/multilevel finetuning, data augmentation, and fewshot learning for specialized cyber threat intelligence.pdf differ diff --git a/papers/multilingual llms are better crosslingual incontext learners with alignment.pdf b/papers/multilingual llms are better crosslingual incontext learners with alignment.pdf index b730d6f69d7ff48a67b5743eb6518c5ac08db973..b2fe3c6aec270d60b86103f60fc4371becc2a21a 100644 Binary files a/papers/multilingual llms are better crosslingual incontext learners with alignment.pdf and b/papers/multilingual llms are better crosslingual incontext learners with alignment.pdf differ diff --git a/papers/multilingual mathematical autoformalization.pdf b/papers/multilingual mathematical autoformalization.pdf index 6b2b2b8a06faccb32d521bb4c1eb864fcc3a79d2..dfcb739ea553671daf4b255715ea8a1c10c4d5f3 100644 Binary files a/papers/multilingual mathematical autoformalization.pdf and b/papers/multilingual mathematical autoformalization.pdf differ diff --git a/papers/multilingual social media text generation and evaluation with fewshot prompting.pdf b/papers/multilingual social media text generation and evaluation with fewshot prompting.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..9ae17dd8031c6b083c26dd5de7c37e719b8507d6 --- /dev/null +++ b/papers/multilingual social media text generation and evaluation with fewshot prompting.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d79e8a6c9f1afe9cabd150292354b14bfb9b630db777636072bd729ba9b340d +size 1350609 diff --git a/papers/multimethod selftraining improving code generation with text, and vice versa.pdf b/papers/multimethod selftraining improving code generation with text, and vice versa.pdf index b8ae76047e8161c90cf8af4ac184c54c21b09275..7a77236c6757a44de65f26e00d3bebc5e5242659 100644 Binary files a/papers/multimethod selftraining improving code generation with text, and vice versa.pdf and b/papers/multimethod selftraining improving code generation with text, and vice versa.pdf differ diff --git a/papers/multimodal prompt learning for product title generation with extremely limited labels.pdf b/papers/multimodal prompt learning for product title generation with extremely limited labels.pdf index 6c352e59c39f6d0c4b07be1f873e87250aa83cbf..e1f4dd932c906cd8cec3eb53dcebc759166f862b 100644 Binary files a/papers/multimodal prompt learning for product title generation with extremely limited labels.pdf and b/papers/multimodal prompt learning for product title generation with extremely limited labels.pdf differ diff --git a/papers/multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation.pdf b/papers/multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation.pdf index d2f80340d4793e2186bd42debf967be154033ef4..75e490ff86e08e6c6a99d23162b44de3b5da4301 100644 --- a/papers/multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation.pdf +++ b/papers/multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:7304c7d54f4924eb465283b14826bccc1352e7989989ff356869467cab5be6ef -size 1370928 +oid sha256:fddb8e2ba51138ca2dc7a888e508d98766a9bed276ef6e3921f59ced714e9019 +size 1041274 diff --git a/papers/multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf b/papers/multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf index 3173db8c28fe2ee77bf1a88b1d55b1f21d3aa823..77551e508d80f3108cfda1fd75c6045e9fa1cfdd 100644 Binary files a/papers/multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf and b/papers/multiparty goal tracking with llms comparing pretraining, finetuning, and prompt engineering.pdf differ diff --git a/papers/multiscript multimodal script learning for supporting open domain everyday tasks.pdf b/papers/multiscript multimodal script learning for supporting open domain everyday tasks.pdf index 1603b439bee5c2b5078f9e5b100b7a25568beb44..6270815ec2976bda87fafd0b60ed68df64cbf5d5 100644 --- a/papers/multiscript multimodal script learning for supporting open domain everyday tasks.pdf +++ b/papers/multiscript multimodal script learning for supporting open domain everyday tasks.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f12bb25bc16539d095d5a6373ee65a397de896557d436433e497725e2075c14b -size 2630902 +oid sha256:b6c4d9a0fd4fb157ad0169376c005f794d1616d7606110860265ea96b46af6d2 +size 2630959 diff --git a/papers/multistage collaborative knowledge distillation from large 
language models for semisupervised sequence generation.pdf b/papers/multistage collaborative knowledge distillation from large language models for semisupervised sequence generation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4b541bfeb81ddc70ecd966780d3308f49d0dfbe8 --- /dev/null +++ b/papers/multistage collaborative knowledge distillation from large language models for semisupervised sequence generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:408c3663c9af5d04e2a0579195caa60370988ceff26618c6e9793c8a7969b323 +size 628537 diff --git a/papers/multistage collaborative knowledge distillation from large language models.pdf b/papers/multistage collaborative knowledge distillation from large language models.pdf deleted file mode 100644 index d4de0ed962a58aebed929919913f008d32236821..0000000000000000000000000000000000000000 Binary files a/papers/multistage collaborative knowledge distillation from large language models.pdf and /dev/null differ diff --git a/papers/multistage large language model correction for speech recognition.pdf b/papers/multistage large language model correction for speech recognition.pdf index 90ff46b707e2177097c69e341fafaae192328059..0db05d74f1a845dd1263cc06d9813fb66fe48eb0 100644 Binary files a/papers/multistage large language model correction for speech recognition.pdf and b/papers/multistage large language model correction for speech recognition.pdf differ diff --git a/papers/mutual reinforcement effects in japanese sentence classification and named entity recognition tasks.pdf b/papers/mutual reinforcement effects in japanese sentence classification and named entity recognition tasks.pdf index 2a5b5c679817630207e10caba7fdeb3430d14ec3..91abdf5ba68249ded984b9600c3152b6bb45d5b1 100644 Binary files a/papers/mutual reinforcement effects in japanese sentence classification and named entity recognition tasks.pdf and b/papers/mutual reinforcement effects in japanese sentence classification and named entity recognition tasks.pdf differ diff --git a/papers/naisteacher a prompt and rerank approach to generating teacher utterances in educational dialogues.pdf b/papers/naisteacher a prompt and rerank approach to generating teacher utterances in educational dialogues.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f6083046919a683cdbccfe37c6dcc55a98c02b01 --- /dev/null +++ b/papers/naisteacher a prompt and rerank approach to generating teacher utterances in educational dialogues.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d653473a074b6383868fdd071b4ef4ba2b5f4fe484f63be7a3dd4e35647020c +size 637234 diff --git a/papers/narrative style and the spread of health misinformation on twitter.pdf b/papers/narrative style and the spread of health misinformation on twitter.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c0525b3961db24ada341ba50d780ef38c87956ec --- /dev/null +++ b/papers/narrative style and the spread of health misinformation on twitter.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfd3ce6cd8bca88ed8fd3fd698b55068c1c71f97fed4cd1e4bf1963569d394a8 +size 378519 diff --git a/papers/narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf b/papers/narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf index b762d642c7986c869d45db008ecaef62e05eeae7..ecb97b72a943e0a78c54dc1bae7e0dcc651b88f2 
100644 Binary files a/papers/narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf and b/papers/narrowing the gap between supervised and unsupervised sentence representation learning with large language model.pdf differ diff --git a/papers/narrowing the gap between zero and fewshot machine translation by matching styles.pdf b/papers/narrowing the gap between zero and fewshot machine translation by matching styles.pdf index 4183ae32b55999566a62c59ae74a13302b4dfcb1..4990eacdedde027829e86e2335897855ae2712f0 100644 Binary files a/papers/narrowing the gap between zero and fewshot machine translation by matching styles.pdf and b/papers/narrowing the gap between zero and fewshot machine translation by matching styles.pdf differ diff --git a/papers/natural language decomposition and interpretation of complex utterances.pdf b/papers/natural language decomposition and interpretation of complex utterances.pdf index 916f5ccc5c5ab6bf44b01f833c56f67f87575844..431a03c4763ebf40691e483e141709f8190be11a 100644 --- a/papers/natural language decomposition and interpretation of complex utterances.pdf +++ b/papers/natural language decomposition and interpretation of complex utterances.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9c78c701e0edfe541a176e1fd717673025dd2fdbcc82f50eafc94e41108449fb -size 1980223 +oid sha256:794eb2cdd03918063bf85517dcf5ca51a515863d2f9f2794dec505c93a3bd3ed +size 1535834 diff --git a/papers/naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers.pdf b/papers/naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers.pdf index e3dfe79f5e739f0e0946fa75f7bd7fdfdbec73cb..ad1f3f75c28574ec734be71b96877d8d512bc099 100644 Binary files a/papers/naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers.pdf and b/papers/naturalspeech 2 latent diffusion models are natural and zeroshot speech and singing synthesizers.pdf differ diff --git a/papers/neural finetuning search for fewshot learning.pdf b/papers/neural finetuning search for fewshot learning.pdf index 5341b5fe69dcdfc715c67cca0e600568919b8a6e..61eda71ce1a1049dbf339a777a8a1563a132ca26 100644 Binary files a/papers/neural finetuning search for fewshot learning.pdf and b/papers/neural finetuning search for fewshot learning.pdf differ diff --git a/papers/neural machine translation models can learn to be fewshot learners.pdf b/papers/neural machine translation models can learn to be fewshot learners.pdf index 55764827469450a1edd2519bce8ba7d0b6691127..1a207bc5a223fc0f15f9eaf800a660ad1af4c8de 100644 Binary files a/papers/neural machine translation models can learn to be fewshot learners.pdf and b/papers/neural machine translation models can learn to be fewshot learners.pdf differ diff --git a/papers/neurips'22 crossdomain metadl competition design and baseline results.pdf b/papers/neurips'22 crossdomain metadl competition design and baseline results.pdf index 0c37acd32d213eaf97caa364d8b7a55f16e68797..e0f26fb03643576617472bab1b3be60230cc4cbc 100644 Binary files a/papers/neurips'22 crossdomain metadl competition design and baseline results.pdf and b/papers/neurips'22 crossdomain metadl competition design and baseline results.pdf differ diff --git a/papers/neuroprompts an adaptive framework to optimize prompts for texttoimage generation.pdf b/papers/neuroprompts an adaptive framework to optimize prompts for texttoimage generation.pdf new file 
mode 100644 index 0000000000000000000000000000000000000000..9ad3885f940e3f39dc426c7baacd7b1280a4ced6 --- /dev/null +++ b/papers/neuroprompts an adaptive framework to optimize prompts for texttoimage generation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:616952ec8db4f6d8856059955f84722a8c649b27f9872645345499cbc3669cd6 +size 11757405 diff --git a/papers/nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf b/papers/nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf index c36bc56ec56e5c8381bcdf4128505919c0945ec5..36703d63e83f0cb5f3eeae41f9c8f7d991fc29d1 100644 Binary files a/papers/nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf and b/papers/nexus at araieval shared task finetuning arabic language models for propaganda and disinformation detection.pdf differ diff --git a/papers/noise2music textconditioned music generation with diffusion models.pdf b/papers/noise2music textconditioned music generation with diffusion models.pdf deleted file mode 100644 index 5902a142a7075e2213dd9b8f33c3a3c993b6390b..0000000000000000000000000000000000000000 Binary files a/papers/noise2music textconditioned music generation with diffusion models.pdf and /dev/null differ diff --git a/papers/noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf b/papers/noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf index 59c350c175f3a1e373e7810b02b587199e390616..ee1573efca68648516501a2b043b06852b756d31 100644 Binary files a/papers/noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf and b/papers/noisy exemplars make large language models more robust a domainagnostic behavioral analysis.pdf differ diff --git a/papers/not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning.pdf b/papers/not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning.pdf index 0c6d515795fa47d1e8e80e0969304b3500c1bed4..3bcabed831321abd2bced0303a25837d2767f747 100644 Binary files a/papers/not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning.pdf and b/papers/not all demonstration examples are equally beneficial reweighting demonstration examples for incontext learning.pdf differ diff --git "a/papers/nspbert a promptbased fewshot learner through an original pretraining task \342\200\224\342\200\224 next sentence prediction.pdf" "b/papers/nspbert a promptbased fewshot learner through an original pretraining task \342\200\224\342\200\224 next sentence prediction.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..8eb1aa93e315688206523e48d8256fd68de3b52b --- /dev/null +++ "b/papers/nspbert a promptbased fewshot learner through an original pretraining task \342\200\224\342\200\224 next sentence prediction.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65663c1fe09450da12f2c5f3a480f43131e01d30a3c3ac4d0ae60fc0e616c7be +size 1205893 diff --git a/papers/omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf b/papers/omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf index 
60ac66d0d91c2861fb233acdd5de3d0bd8627eb7..64d577a4b1d2e412ec7bff17c6eaa56847741276 100644 Binary files a/papers/omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf and b/papers/omniscientdb a large language modelaugmented dbms that knows what other dbmss do not know.pdf differ diff --git a/papers/on bilingual lexicon induction with large language models.pdf b/papers/on bilingual lexicon induction with large language models.pdf index bd7510529a1fd806c42b6372049566b5942c6635..b8b3a1960439db43af2b8ae990d23e4fc8637816 100644 Binary files a/papers/on bilingual lexicon induction with large language models.pdf and b/papers/on bilingual lexicon induction with large language models.pdf differ diff --git a/papers/on the planning abilities of large language models a critical investigation.pdf b/papers/on the planning abilities of large language models a critical investigation.pdf index 37087935518ff0e60f4e975b2afaa3e89c3785a2..511c63044abc00f60a6a8c650d0fc791c0232d85 100644 --- a/papers/on the planning abilities of large language models a critical investigation.pdf +++ b/papers/on the planning abilities of large language models a critical investigation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b3c14b96ae8ccb1b68c1af6d6d63ed01255372d2ddfd07d6b783b96642d89d1b +oid sha256:72f198a46489c9f5012c9be3032843613058e868d5f6b76631c942d6db11da28 size 10893711 diff --git a/papers/one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention.pdf b/papers/one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention.pdf index 3b95d34ca5b91d1ad76035d89cecb38016160bb7..69dee8de742064a7800fa53e540d32d7bfe72299 100644 Binary files a/papers/one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention.pdf and b/papers/one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention.pdf differ diff --git a/papers/oneshot labeling for automatic relevance estimation.pdf b/papers/oneshot labeling for automatic relevance estimation.pdf index 317fc0319c871f4583568892c97400c26c5b62b6..3f0a5e0288d4eb0d9c401ebc2f7097cb28dbda9d 100644 Binary files a/papers/oneshot labeling for automatic relevance estimation.pdf and b/papers/oneshot labeling for automatic relevance estimation.pdf differ diff --git a/papers/ontologically faithful generation of nonplayer character dialogues.pdf b/papers/ontologically faithful generation of nonplayer character dialogues.pdf index 2e3ca5c4ce9e1a9d3287762c58106809a6f005ef..2ab0b25fc2d0cca519511411723ac0e304f68a23 100644 --- a/papers/ontologically faithful generation of nonplayer character dialogues.pdf +++ b/papers/ontologically faithful generation of nonplayer character dialogues.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ee385701548ed8fb16d02014714d902275ce46d92c43452ed5eadde914380162 +oid sha256:baf561f824326c327c6506c06f53ccaaf0dcb3ff7272b570efef567e69fbec9e size 7114863 diff --git a/papers/open the pandora's box of llms jailbreaking llms through representation engineering.pdf b/papers/open the pandora's box of llms jailbreaking llms through representation engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3613dd6c9b108046a5d78b64142dc53d06b7df2d --- /dev/null +++ b/papers/open the pandora's box of llms jailbreaking llms through representation engineering.pdf @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:4b963d040e7dded86f19a02b683545feef4d03e9dad71c9c38ead0dd518a463f +size 418753 diff --git a/papers/openended instructable embodied agents with memoryaugmented large language models.pdf b/papers/openended instructable embodied agents with memoryaugmented large language models.pdf index c5ba40bb0cb0a2e0c62fbfc1d4b90b199ad13b4e..a5c298ec2339975a52dda834c2b7cef21618235f 100644 --- a/papers/openended instructable embodied agents with memoryaugmented large language models.pdf +++ b/papers/openended instructable embodied agents with memoryaugmented large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d500689440072f82c3de39f385fa4a3d682fb9699f54268606459de0ba039d41 -size 3930365 +oid sha256:4dadba0dd690d995009147efbefeb4ce057817e5179740a93e4cf24f50b15f0d +size 3929860 diff --git a/papers/openicl an opensource framework for incontext learning.pdf b/papers/openicl an opensource framework for incontext learning.pdf index 6b8002a509b4993bd5a56193966fca5933b59f58..6033140cff40e7e569c59eacb6e1580b7139e0d3 100644 Binary files a/papers/openicl an opensource framework for incontext learning.pdf and b/papers/openicl an opensource framework for incontext learning.pdf differ diff --git a/papers/optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf b/papers/optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf index 8c480ce08cb1a31a5b4a39d9857709a1661eb3db..ab41733917f331f9848c268f618c4dce26ef1481 100644 Binary files a/papers/optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf and b/papers/optimizing machine translation through prompt engineering an investigation into chatgpt's customizability.pdf differ diff --git "a/papers/optimizing machine translation through prompt engineering an investigation into chatgpt\342\200\231s customizability.pdf" "b/papers/optimizing machine translation through prompt engineering an investigation into chatgpt\342\200\231s customizability.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..0de1bf5fadf6d7c4b68e6a44d4afb55fee8a6a47 --- /dev/null +++ "b/papers/optimizing machine translation through prompt engineering an investigation into chatgpt\342\200\231s customizability.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b8c59b48bf7d975ab9bb4afc8abf94bff356d1101c07abf9c0cc9b667648bbb +size 407158 diff --git a/papers/optimizing prompts for texttoimage generation.pdf b/papers/optimizing prompts for texttoimage generation.pdf index 26e4025d623a0d85827d14b51e03c6f2737c1d24..4251ae2f9d90ea734d19a4019d80120a63fe139d 100644 --- a/papers/optimizing prompts for texttoimage generation.pdf +++ b/papers/optimizing prompts for texttoimage generation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9e9abf253de03024d277b087fa2620075ee29f9c90178434bf83251a6647212a -size 8314519 +oid sha256:a4e633220327e643376537230cc302bae3164ca7fc8ad40da7e5657b80f0873a +size 4195649 diff --git a/papers/optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf b/papers/optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf index b8364c38a0c0ccd1cb5cff133159af354bf8a073..7cc263b3dde59b2babf293b003d19c10b14a347a 100644 Binary files a/papers/optr exploring 
the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf and b/papers/optr exploring the role of explanations in finetuning and prompting for reasoning skills of large language models.pdf differ diff --git a/papers/ost refining text knowledge with optimal spatiotemporal descriptor for general video recognition.pdf b/papers/ost refining text knowledge with optimal spatiotemporal descriptor for general video recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e2b0aa83e8fac937836ce8062231dcbfcc46fea5 --- /dev/null +++ b/papers/ost refining text knowledge with optimal spatiotemporal descriptor for general video recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de74e6ac1e8cd8a51a85621f90e4191dfaa452fa866f1c78d06dcbfe20da3410 +size 5636595 diff --git a/papers/overprompt enhancing chatgpt capabilities through an efficient incontext learning approach.pdf b/papers/overprompt enhancing chatgpt capabilities through an efficient incontext learning approach.pdf deleted file mode 100644 index 8e6a6c316f081adf711d75cd707589d71017582d..0000000000000000000000000000000000000000 Binary files a/papers/overprompt enhancing chatgpt capabilities through an efficient incontext learning approach.pdf and /dev/null differ diff --git a/papers/overthinking the truth understanding how language models process false demonstrations.pdf b/papers/overthinking the truth understanding how language models process false demonstrations.pdf index ce4780436c409bc912426e3927643fb138acdde0..8aeb78f41c855b769aeee3a29ba53119d8788c34 100644 Binary files a/papers/overthinking the truth understanding how language models process false demonstrations.pdf and b/papers/overthinking the truth understanding how language models process false demonstrations.pdf differ diff --git a/papers/pactuning finetuning pretrained language models with pacdriven perturbed gradient descent.pdf b/papers/pactuning finetuning pretrained language models with pacdriven perturbed gradient descent.pdf new file mode 100644 index 0000000000000000000000000000000000000000..69f05f2d52c0b00976a0a3492a682a8a7c791354 --- /dev/null +++ b/papers/pactuning finetuning pretrained language models with pacdriven perturbed gradient descent.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae92a8a2c3a8d70a2c5d1136bf324c05223bba0fd98a9f96f4f28f294a77c365 +size 503855 diff --git a/papers/pair programming with large language models for sampling and estimation of copulas.pdf b/papers/pair programming with large language models for sampling and estimation of copulas.pdf index 61bfcd7b0d23f4a60419eff26d6f60d9ca40764d..8cf9f99d3391d83681f000164c62e22db6f8581c 100644 Binary files a/papers/pair programming with large language models for sampling and estimation of copulas.pdf and b/papers/pair programming with large language models for sampling and estimation of copulas.pdf differ diff --git a/papers/paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf b/papers/paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf index 8e373b5e67bdeb0d6cc99e99961b34181a1d9791..eb79bc31e63ea2052b7ab565c5ac4f9d15f20264 100644 Binary files a/papers/paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf and b/papers/paraamr a largescale syntactically diverse paraphrase dataset by amr backtranslation.pdf differ diff --git a/papers/parameterefficient crosslingual 
transfer of vision and language models via translationbased alignment.pdf b/papers/parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf index 2d98669f0e6f58be777a1188d5a8bf3653e45053..5098d7b1159d1ac81340b3753701650118dad6a5 100644 Binary files a/papers/parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf and b/papers/parameterefficient crosslingual transfer of vision and language models via translationbased alignment.pdf differ diff --git a/papers/parameterfree automatically prompting a latent pseudo label mapping model for promptbased learning.pdf b/papers/parameterfree automatically prompting a latent pseudo label mapping model for promptbased learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..41994b85aef2410dde9eedc3f68aae1ba0ffd438 --- /dev/null +++ b/papers/parameterfree automatically prompting a latent pseudo label mapping model for promptbased learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0edc1f3c2c638e0897401c68ce19a2c5b07557586b7f3af52d4ad40907473ba9 +size 1981952 diff --git a/papers/patchtoken aligned bayesian prompt learning for visionlanguage models.pdf b/papers/patchtoken aligned bayesian prompt learning for visionlanguage models.pdf index 6951ae03f5d185ce5cb03d91728d7c0a849d3ea8..775daee8b33d6e9b68833176d8086729c08b711f 100644 Binary files a/papers/patchtoken aligned bayesian prompt learning for visionlanguage models.pdf and b/papers/patchtoken aligned bayesian prompt learning for visionlanguage models.pdf differ diff --git a/papers/pcbert parent and child bert for chinese fewshot ner.pdf b/papers/pcbert parent and child bert for chinese fewshot ner.pdf new file mode 100644 index 0000000000000000000000000000000000000000..64104b1fd4f183294ffaef66873d5a8fd3787c58 --- /dev/null +++ b/papers/pcbert parent and child bert for chinese fewshot ner.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d549a60b25dc95860323ef88be4f0c80b12a095c31bb48952b939fd493adcc0 +size 4369626 diff --git a/papers/peace prompt engineering automation for clipseg enhancement in aerial robotics.pdf b/papers/peace prompt engineering automation for clipseg enhancement in aerial robotics.pdf index c6f9e61f30c12b52b0e320b4ea754a3498582827..d4bc14f751c43d02867106ea7e497c4177d2a8d6 100644 --- a/papers/peace prompt engineering automation for clipseg enhancement in aerial robotics.pdf +++ b/papers/peace prompt engineering automation for clipseg enhancement in aerial robotics.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3e1f18eb4ca270073aba7f04b9fbe37c39ef6a96bf2ad2337e172998cc75dfa3 -size 5332027 +oid sha256:6b9b6b5f7937e6cb63d23164ea8eb736a01eddffc1b7f8b9abbc80f95aafdc46 +size 5332154 diff --git a/papers/pearl prompting large language models to plan and execute actions over long documents.pdf b/papers/pearl prompting large language models to plan and execute actions over long documents.pdf index c26a684ddb7d8b3c3cceb4dea0614bfda7f6309a..de90017b16b46789f9cb46eac9fbca6ea34c22ea 100644 Binary files a/papers/pearl prompting large language models to plan and execute actions over long documents.pdf and b/papers/pearl prompting large language models to plan and execute actions over long documents.pdf differ diff --git a/papers/performance evaluation on humanmachine teaming augmented machine translation enabled by gpt4.pdf b/papers/performance evaluation on humanmachine teaming augmented 
machine translation enabled by gpt4.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ee0e0469040013e7795c3f1140a101a8d75f96cf --- /dev/null +++ b/papers/performance evaluation on humanmachine teaming augmented machine translation enabled by gpt4.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be4bdc329a266d1360d5b0897e9e4c8a149e6703084b5b97fbb71637f27bfe18 +size 851940 diff --git a/papers/plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf b/papers/plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf index 7e209566873fd96d88f2156c7670954ade9b547c..2886b37662ba32fb32722d3edcf8d492be5feeb0 100644 Binary files a/papers/plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf and b/papers/plugmed improving specificity in patientcentered medical dialogue generation using incontext learning.pdf differ diff --git a/papers/plum prompt learning using metaheuristic.pdf b/papers/plum prompt learning using metaheuristic.pdf deleted file mode 100644 index a04d0ed0cf4f108ae9767da5aa2d1701681281a3..0000000000000000000000000000000000000000 Binary files a/papers/plum prompt learning using metaheuristic.pdf and /dev/null differ diff --git a/papers/poe process of elimination for multiple choice reasoning.pdf b/papers/poe process of elimination for multiple choice reasoning.pdf index a49190f9a414fbce57b9639b2b78ff0300c17a53..1fa0f7226b425d1c49742f400206cd19c25526a6 100644 Binary files a/papers/poe process of elimination for multiple choice reasoning.pdf and b/papers/poe process of elimination for multiple choice reasoning.pdf differ diff --git a/papers/pokergpt an endtoend lightweight solver for multiplayer texas hold'em via large language model.pdf b/papers/pokergpt an endtoend lightweight solver for multiplayer texas hold'em via large language model.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f06c93c57b02530675eee973570ad5ac86c30d32 --- /dev/null +++ b/papers/pokergpt an endtoend lightweight solver for multiplayer texas hold'em via large language model.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ba44d2c1e0fc622e2c12d78b4577b51f211bf47b9a651d4eb8672597d079de8 +size 2766993 diff --git a/papers/positionbased prompting for health outcome generation.pdf b/papers/positionbased prompting for health outcome generation.pdf index aff1746d33003558238419867e80025bb6e16f92..9e760e45d353c468cc22c9c2e512b65d3b05ad9a 100644 Binary files a/papers/positionbased prompting for health outcome generation.pdf and b/papers/positionbased prompting for health outcome generation.pdf differ diff --git a/papers/posqa probe the world models of llms with size comparisons.pdf b/papers/posqa probe the world models of llms with size comparisons.pdf index 044e1f649f5ac8d6fc3d235413ca2d41d703ba08..ba966f70a13a928ebc47f71f72264053b610a978 100644 Binary files a/papers/posqa probe the world models of llms with size comparisons.pdf and b/papers/posqa probe the world models of llms with size comparisons.pdf differ diff --git a/papers/post hoc explanations of language models can improve language models.pdf b/papers/post hoc explanations of language models can improve language models.pdf index 66b5e61a0bf928421e40c62823dc9ccd8daf5ef6..7cefab0b9ac9c82fb118e91fb5aec6a7be8a5531 100644 Binary files a/papers/post hoc explanations of language models can improve language models.pdf and 
b/papers/post hoc explanations of language models can improve language models.pdf differ diff --git a/papers/prd peer rank and discussion improve large language model based evaluations.pdf b/papers/prd peer rank and discussion improve large language model based evaluations.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5849aa0e3aee03287c818fbe8134ba5dccd79860 --- /dev/null +++ b/papers/prd peer rank and discussion improve large language model based evaluations.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:086eba2c200660553b125920d331ca3bb2551b11cbcad6453ce03fc49cda14ac +size 2158895 diff --git a/papers/pre visionlanguage prompt learning with reparameterization encoder.pdf b/papers/pre visionlanguage prompt learning with reparameterization encoder.pdf index e0550ce6027efe838605ef37547bea730312f5e3..0dd41a69fc7f44cfccd478971f3db7d81b360544 100644 Binary files a/papers/pre visionlanguage prompt learning with reparameterization encoder.pdf and b/papers/pre visionlanguage prompt learning with reparameterization encoder.pdf differ diff --git a/papers/prefinetuning for fewshot emotional speech recognition.pdf b/papers/prefinetuning for fewshot emotional speech recognition.pdf index 18f792ee2566ae1b883d9e4d1290d056f74ab459..644506252516ac8f05628cc7c51df67643349294 100644 Binary files a/papers/prefinetuning for fewshot emotional speech recognition.pdf and b/papers/prefinetuning for fewshot emotional speech recognition.pdf differ diff --git a/papers/pretrained tokenreplaced detection model as fewshot learner.pdf b/papers/pretrained tokenreplaced detection model as fewshot learner.pdf index 405d523651dc841629ce301ff17e3ce17a94c6f7..b5dac55f30f054c4a3c5f13c621deffb42579078 100644 Binary files a/papers/pretrained tokenreplaced detection model as fewshot learner.pdf and b/papers/pretrained tokenreplaced detection model as fewshot learner.pdf differ diff --git a/papers/pretraining data mixtures enable narrow model selection capabilities in transformer models.pdf b/papers/pretraining data mixtures enable narrow model selection capabilities in transformer models.pdf index c2a86e466fb2b9d04448f8a7d524241f5d8bf92c..c0c4a144e15d0b243ff57f3daa682d71f7b75638 100644 Binary files a/papers/pretraining data mixtures enable narrow model selection capabilities in transformer models.pdf and b/papers/pretraining data mixtures enable narrow model selection capabilities in transformer models.pdf differ diff --git a/papers/pretraining to learn in context.pdf b/papers/pretraining to learn in context.pdf index 96b2fced1418d681efa5aee2077a78879d776280..4271d49cd52f04f01cf986b4d4874edb1cd44e41 100644 Binary files a/papers/pretraining to learn in context.pdf and b/papers/pretraining to learn in context.pdf differ diff --git a/papers/prewrite prompt rewriting with reinforcement learning.pdf b/papers/prewrite prompt rewriting with reinforcement learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4145a5a1aa8a60a3322cc8af661ad51a0f5364d1 --- /dev/null +++ b/papers/prewrite prompt rewriting with reinforcement learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:200f79355702e80b85257b395fc89ae3e9a9e15758600924064a1de9672c56cb +size 229862 diff --git a/papers/prismer a visionlanguage model with an ensemble of experts.pdf b/papers/prismer a visionlanguage model with an ensemble of experts.pdf deleted file mode 100644 index 9416c04e3df4f1881e71a223c035c30ac972efe7..0000000000000000000000000000000000000000 --- 
a/papers/prismer a visionlanguage model with an ensemble of experts.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:060c015f81a08e39cc5cef497f7582d88fe41695556874b9089b0c4988ab84ef -size 5519147 diff --git a/papers/probabilistic ensembles of zero and fewshot learning models for emotion classification.pdf b/papers/probabilistic ensembles of zero and fewshot learning models for emotion classification.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3d8e90f5bffd5af23ce00b68044e513f669514ed --- /dev/null +++ b/papers/probabilistic ensembles of zero and fewshot learning models for emotion classification.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15579c3b7a61c5a19d3ab69629a676745c9d6f6a85210b95e1c297e757f4a9a0 +size 321790 diff --git a/papers/probing llms for hate speech detection strengths and vulnerabilities.pdf b/papers/probing llms for hate speech detection strengths and vulnerabilities.pdf index 8f2583d4ddbf13bffc3a92d85d460f755177ee5d..67837e5a4f56b9f0670ed81191f4cd047dbfc4ec 100644 Binary files a/papers/probing llms for hate speech detection strengths and vulnerabilities.pdf and b/papers/probing llms for hate speech detection strengths and vulnerabilities.pdf differ diff --git a/papers/procedural text mining with large language models.pdf b/papers/procedural text mining with large language models.pdf index 16ed716dd7789ddef925fe718a9474285ca4b09e..0e5c0f42e47cd263fc20a13c5169610b79853ad3 100644 Binary files a/papers/procedural text mining with large language models.pdf and b/papers/procedural text mining with large language models.pdf differ diff --git a/papers/procot stimulating critical thinking and writing of students through engagement with large language models (llms).pdf b/papers/procot stimulating critical thinking and writing of students through engagement with large language models (llms).pdf new file mode 100644 index 0000000000000000000000000000000000000000..d11020d984dbe1c8f7b5fe7a54bc3459bdaae0cf --- /dev/null +++ b/papers/procot stimulating critical thinking and writing of students through engagement with large language models (llms).pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:924765178e1351969edcbeda9357b5daea64ceb5295b82c53a5f0261c0bf5af6 +size 678480 diff --git a/papers/prodigy enabling incontext learning over graphs.pdf b/papers/prodigy enabling incontext learning over graphs.pdf index 67f9e477c12d386495aa05e5cf971b1f3808d985..47eba5c31731c7c223763029bd644727973ca552 100644 Binary files a/papers/prodigy enabling incontext learning over graphs.pdf and b/papers/prodigy enabling incontext learning over graphs.pdf differ diff --git a/papers/product information extraction using chatgpt.pdf b/papers/product information extraction using chatgpt.pdf index 22ea45bda9767377660e7ac1ed0998be3e4ae002..f05c93d61352782252f2011a043b77e0714341a0 100644 Binary files a/papers/product information extraction using chatgpt.pdf and b/papers/product information extraction using chatgpt.pdf differ diff --git a/papers/program decomposition and translation with static analysis.pdf b/papers/program decomposition and translation with static analysis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5c65eab5b620954058144e025e17011964de87e6 --- /dev/null +++ b/papers/program decomposition and translation with static analysis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3fdd970d7c50a1247a9fa805c72473671fb4a2614987fbf4a1fa352c0d523d89 +size 428231 diff --git a/papers/promisepromptdriven 3d medical image segmentation using pretrained image foundation models.pdf b/papers/promisepromptdriven 3d medical image segmentation using pretrained image foundation models.pdf index 26b752b21c08d88268a7c85862ae34f30a9ce664..def9d73de740c3ba261588732d8403b433ba51a6 100644 Binary files a/papers/promisepromptdriven 3d medical image segmentation using pretrained image foundation models.pdf and b/papers/promisepromptdriven 3d medical image segmentation using pretrained image foundation models.pdf differ diff --git a/papers/prompt engineering and calibration for zeroshot commonsense reasoning.pdf b/papers/prompt engineering and calibration for zeroshot commonsense reasoning.pdf index 94216124d52f23df857932db8740e5386ac04026..1ff794e32a2486b98d0a04279a485a17705c4ed5 100644 Binary files a/papers/prompt engineering and calibration for zeroshot commonsense reasoning.pdf and b/papers/prompt engineering and calibration for zeroshot commonsense reasoning.pdf differ diff --git a/papers/prompt engineering for students of medicine and their teachers.pdf b/papers/prompt engineering for students of medicine and their teachers.pdf index d09832a4d690324a75e879dd95b5ebab667ec4a2..2f810a08aea40505a3b62d774e6d0d3f6bb1ad04 100644 Binary files a/papers/prompt engineering for students of medicine and their teachers.pdf and b/papers/prompt engineering for students of medicine and their teachers.pdf differ diff --git a/papers/prompt engineering guiding the way to effective large language models.pdf b/papers/prompt engineering guiding the way to effective large language models.pdf deleted file mode 100644 index cd928cdd041a7c95e0bfa20dfc90acb9cb7d9fd1..0000000000000000000000000000000000000000 Binary files a/papers/prompt engineering guiding the way to effective large language models.pdf and /dev/null differ diff --git a/papers/prompt engineering in medical education.pdf b/papers/prompt engineering in medical education.pdf index a4499feb9ef875775392d9823bc478c1db4e730e..07d929c89fd1b4a71179fc015e0e18fda7e959a7 100644 Binary files a/papers/prompt engineering in medical education.pdf and b/papers/prompt engineering in medical education.pdf differ diff --git a/papers/prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf b/papers/prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf index 7dc0ce7251e4be5777bbfd3f197e301e5b957db1..64264d8ae595306a568f10d6b668564092d2998c 100644 Binary files a/papers/prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf and b/papers/prompt engineering or fine tuning an empirical assessment of large language models in automated software engineering tasks.pdf differ diff --git a/papers/prompt engineering through the lens of optimal control.pdf b/papers/prompt engineering through the lens of optimal control.pdf index 043d919644aed4147ad6cf2a52d8fa590ba5c6db..b01fe27825719ea802fbd4f313124f74e2a00cb2 100644 Binary files a/papers/prompt engineering through the lens of optimal control.pdf and b/papers/prompt engineering through the lens of optimal control.pdf differ diff --git a/papers/prompt engineeringassisted malware dynamic analysis using gpt4.pdf b/papers/prompt engineeringassisted malware dynamic analysis using gpt4.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..811a57075d7ea747a46529fcf3f2cc4de3c58c9b --- /dev/null +++ b/papers/prompt engineeringassisted malware dynamic analysis using gpt4.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab6a201eeb0a9f1add148753de34a398a27cfa287b2e0eb51ba633802b1d2a9f +size 6138956 diff --git a/papers/prompt injection attack against llmintegrated applications.pdf b/papers/prompt injection attack against llmintegrated applications.pdf index e8ae1011de03d0d5877cd16cbe7f7b0187bb5884..0b8b7d5be94fb5e0501a6c7b73988ebfd6b40209 100644 Binary files a/papers/prompt injection attack against llmintegrated applications.pdf and b/papers/prompt injection attack against llmintegrated applications.pdf differ diff --git a/papers/prompt injection attacks and defenses in llmintegrated applications.pdf b/papers/prompt injection attacks and defenses in llmintegrated applications.pdf index 4fc201dcd8ba3443c20e073f456db36832c753cb..45d4490cdaef29a0251ceba5b4f6aca95e944fb6 100644 Binary files a/papers/prompt injection attacks and defenses in llmintegrated applications.pdf and b/papers/prompt injection attacks and defenses in llmintegrated applications.pdf differ diff --git a/papers/prompt injection parameterization of fixed inputs.pdf b/papers/prompt injection parameterization of fixed inputs.pdf index f34d03a0d705277f7e1eea98fe271e21d6a2b27d..ba00f3ac48fae7a4cae42e9959cab3367529b079 100644 Binary files a/papers/prompt injection parameterization of fixed inputs.pdf and b/papers/prompt injection parameterization of fixed inputs.pdf differ diff --git a/papers/prompt middleware mapping prompts for large language models to ui affordances.pdf b/papers/prompt middleware mapping prompts for large language models to ui affordances.pdf deleted file mode 100644 index d1bd330b8218263539a34ab713ada82a56d89be9..0000000000000000000000000000000000000000 --- a/papers/prompt middleware mapping prompts for large language models to ui affordances.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4ac2dcc44caa028cfd21b9e8cc96a962fd8ec0134556baff389c35d94b4fd7aa -size 1709162 diff --git a/papers/prompt optimization via adversarial incontext learning.pdf b/papers/prompt optimization via adversarial incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5f824e37c1bb128e9684d91cf922e51042243941 --- /dev/null +++ b/papers/prompt optimization via adversarial incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5308de9ce6cd0a6773c8c81f71b5775d7a7d4a78f25100cafc076942eb1586e1 +size 997309 diff --git a/papers/prompt position really matters in fewshot and zeroshot nlu tasks.pdf b/papers/prompt position really matters in fewshot and zeroshot nlu tasks.pdf index 542aa2d915d9fb42a331c54979a15a328f37a423..8fd8af638f5f1f7833d788ab08054e130db2616a 100644 Binary files a/papers/prompt position really matters in fewshot and zeroshot nlu tasks.pdf and b/papers/prompt position really matters in fewshot and zeroshot nlu tasks.pdf differ diff --git a/papers/prompt programming for large language models beyond the fewshot paradigm.pdf b/papers/prompt programming for large language models beyond the fewshot paradigm.pdf index 70d909d8593143e4c5597dd0c6f283402286996b..d06f17451df90adda83f6e65ced66cd286a4d8e8 100644 Binary files a/papers/prompt programming for large language models beyond the fewshot paradigm.pdf and b/papers/prompt programming for large language models beyond the fewshot 
paradigm.pdf differ diff --git a/papers/prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf b/papers/prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf index 3aa1909b411bcb177c9b4120d726139c891e298a..ce8e8666b729c4d9ec7b09f56f0ca10c14e6518a 100644 Binary files a/papers/prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf and b/papers/prompt to be consistent is better than selfconsistent fewshot and zeroshot fact verification with pretrained language models.pdf differ diff --git a/papers/prompt, condition, and generate classification of unsupported claims with incontext learning.pdf b/papers/prompt, condition, and generate classification of unsupported claims with incontext learning.pdf index 65d5bf65780393f39c50ac28cceda9df5f885b95..05fdc82f7acd3605a0304d417b27cdcac9fea8df 100644 Binary files a/papers/prompt, condition, and generate classification of unsupported claims with incontext learning.pdf and b/papers/prompt, condition, and generate classification of unsupported claims with incontext learning.pdf differ diff --git a/papers/prompt2model generating deployable models from natural language instructions.pdf b/papers/prompt2model generating deployable models from natural language instructions.pdf index b347e0777d0bc2be6e34d59f40849b172c53f77b..9f1a397b6a3675dff71588d7407b95150aad7637 100644 Binary files a/papers/prompt2model generating deployable models from natural language instructions.pdf and b/papers/prompt2model generating deployable models from natural language instructions.pdf differ diff --git a/papers/prompt2nerfpil fast nerf generation via pretrained implicit latent.pdf b/papers/prompt2nerfpil fast nerf generation via pretrained implicit latent.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6952f82707b6cd1ef380e44b4ede4f38e959c237 --- /dev/null +++ b/papers/prompt2nerfpil fast nerf generation via pretrained implicit latent.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4856a28aa01a76ae295d01dc94e068a825b7a7d428036303a4446be9a3c5668b +size 45447418 diff --git a/papers/promptagent strategic planning with language models enables expertlevel prompt optimization.pdf b/papers/promptagent strategic planning with language models enables expertlevel prompt optimization.pdf index c5c05bb25e0d82d91c9ce73bbccb5f433ce43abc..227f38bcd93e87ccde74d75ae2d341904f1df040 100644 --- a/papers/promptagent strategic planning with language models enables expertlevel prompt optimization.pdf +++ b/papers/promptagent strategic planning with language models enables expertlevel prompt optimization.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:c677ba458e81c3eb5ded91928a7bde4f3c04e75294074b6b5437039e9a598d6a -size 1910791 +oid sha256:0e386889c321a4498a92d7b074fb8a9888a4606b8434ce3307568392cf9297ad +size 1910282 diff --git a/papers/promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf b/papers/promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf index f4e6dd011caa4154bf6be7a9a743ade4d325ccba..1ea4d15a385231b982d3904dfc21712ad249ff71 100644 Binary files a/papers/promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf and 
b/papers/promptandrerank a method for zeroshot and fewshot arbitrary textual style transfer with small language models.pdf differ diff --git a/papers/promptbased approach for czech sentiment analysis.pdf b/papers/promptbased approach for czech sentiment analysis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..313eee15182032bfd19f827678d8933258aa5a8e --- /dev/null +++ b/papers/promptbased approach for czech sentiment analysis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9f56e55190ff81f550642072b929bc798d37402ca79553ad7038a3db339406d +size 556360 diff --git a/papers/promptbased distribution alignment for unsupervised domain adaptation.pdf b/papers/promptbased distribution alignment for unsupervised domain adaptation.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cd57dcc3b681aef15921dd19f917f8917751deeb --- /dev/null +++ b/papers/promptbased distribution alignment for unsupervised domain adaptation.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a92a5e3e5e9c6ab4098a1764accd7649abd2aad298330368d561def6e2f753d +size 1991571 diff --git a/papers/promptbased extraction of social determinants of health using fewshot learning.pdf b/papers/promptbased extraction of social determinants of health using fewshot learning.pdf index b4d5cc769bc175f0ebee9e039c75c82fef34f9cb..fd5f9e2166012e2fb385f477e364bfd15c27d605 100644 Binary files a/papers/promptbased extraction of social determinants of health using fewshot learning.pdf and b/papers/promptbased extraction of social determinants of health using fewshot learning.pdf differ diff --git a/papers/promptbased learning for thread structure prediction in cybersecurity forums.pdf b/papers/promptbased learning for thread structure prediction in cybersecurity forums.pdf index 041836da9789aa6c2b0ee23d880188e29ee5df9d..5d3830a75e15fbab30375d9c7e1550b2b5fede28 100644 Binary files a/papers/promptbased learning for thread structure prediction in cybersecurity forums.pdf and b/papers/promptbased learning for thread structure prediction in cybersecurity forums.pdf differ diff --git a/papers/promptbased length controlled generation with reinforcement learning.pdf b/papers/promptbased length controlled generation with reinforcement learning.pdf index 799cf8c9cc1b8a2cacbe8ec372dc7ef87acf1fb8..8319d8b1fd328fe41a9237fd355a8d1a5545e5d8 100644 Binary files a/papers/promptbased length controlled generation with reinforcement learning.pdf and b/papers/promptbased length controlled generation with reinforcement learning.pdf differ diff --git a/papers/promptbased metalearning for fewshot text classification.pdf b/papers/promptbased metalearning for fewshot text classification.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3a7f606905448f1960a84790e7ed0b8b5f535b88 --- /dev/null +++ b/papers/promptbased metalearning for fewshot text classification.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c26ffb253defdac59096bf5b879633afba2e6a7462744bcdb8e265398c1608cd +size 1199838 diff --git a/papers/promptbench a unified library for evaluation of large language models.pdf b/papers/promptbench a unified library for evaluation of large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6a5774275c781dc9b21f0e435a4318ecfaf4c4e5 --- /dev/null +++ b/papers/promptbench a unified library for evaluation of large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:05b3d1fa61d7074959ca74aff1537461bb7765c2c70a31df9400a612bae17533 +size 609318 diff --git a/papers/promptda labelguided data augmentation for promptbased fewshot learners.pdf b/papers/promptda labelguided data augmentation for promptbased fewshot learners.pdf index c9faa5003c0eaf08c12b4458757670682d6ed716..5b459ea0bd5765759533cccda7c590e7b485a950 100644 Binary files a/papers/promptda labelguided data augmentation for promptbased fewshot learners.pdf and b/papers/promptda labelguided data augmentation for promptbased fewshot learners.pdf differ diff --git a/papers/prompted llms as chatbot modules for long opendomain conversation.pdf b/papers/prompted llms as chatbot modules for long opendomain conversation.pdf index b71b59982744aa1c1a8f542537a267ab5c856865..071afc74525def5aa3452460d45de5ba2a3c623d 100644 --- a/papers/prompted llms as chatbot modules for long opendomain conversation.pdf +++ b/papers/prompted llms as chatbot modules for long opendomain conversation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1d4a1daebf8fe5edbe3334410b7a700681ba6ffdd92de20634376f3e1163d92a -size 1109170 +oid sha256:668ec08d2c12afe6122f13b1a663c70fe419336cae018ab9e8b910b652f8f751 +size 1135003 diff --git a/papers/prompted software engineering in the era of ai models.pdf b/papers/prompted software engineering in the era of ai models.pdf index c64c47f54ea0316865a2dbe32060dcfe23db7919..01b40a79cc92879fbbd4adc0b3f0a857bc6833a0 100644 Binary files a/papers/prompted software engineering in the era of ai models.pdf and b/papers/prompted software engineering in the era of ai models.pdf differ diff --git a/papers/promptengineering and transformerbased question generation and evaluation.pdf b/papers/promptengineering and transformerbased question generation and evaluation.pdf index 9aa35d56f6b067ec0a43b262cc00ae70850384fd..3536ab07c07abe446c356dc01a1e8313af51c6d3 100644 Binary files a/papers/promptengineering and transformerbased question generation and evaluation.pdf and b/papers/promptengineering and transformerbased question generation and evaluation.pdf differ diff --git a/papers/prompter utilizing large language model prompting for a data efficient embodied instruction following.pdf b/papers/prompter utilizing large language model prompting for a data efficient embodied instruction following.pdf index 0f6c3760ac5eb20002403fd3950462f3592b7bf2..5d2ece20fc0f11bc3fca6d12300b52d2deacb372 100644 Binary files a/papers/prompter utilizing large language model prompting for a data efficient embodied instruction following.pdf and b/papers/prompter utilizing large language model prompting for a data efficient embodied instruction following.pdf differ diff --git a/papers/promptfree and efficient fewshot learning with language models.pdf b/papers/promptfree and efficient fewshot learning with language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f93746e9cbb852f80f5c4691a4bfe9b48c24b56a --- /dev/null +++ b/papers/promptfree and efficient fewshot learning with language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d40b566ff5065f91c8d0b9e07230c25da32704469b98050afe27de6d7a7db98 +size 537558 diff --git a/papers/prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf b/papers/prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf index 
995ada09a5ea16169f03385965e8c03ff14f9417..099ba9a1538330d94cca91d6f7f7391e005dd86f 100644 Binary files a/papers/prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf and b/papers/prompting a large language model to generate diverse motivational messages a comparison with humanwritten messages.pdf differ diff --git a/papers/prompting ai art an investigation into the creative skill of prompt engineering.pdf b/papers/prompting ai art an investigation into the creative skill of prompt engineering.pdf index 5f8515541688e72ae8a49e7d1ff624e875289280..df530ca5bd921bd6a77888a2ffa29b79c139fcfb 100644 --- a/papers/prompting ai art an investigation into the creative skill of prompt engineering.pdf +++ b/papers/prompting ai art an investigation into the creative skill of prompt engineering.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:df9957519b93a08889aa29b101414bbbf40c3350bf835545de0bc58469f1bb94 -size 8852569 +oid sha256:23ec57649fcafe07b508340d8b5f578bb4f51c292b17fe697b97a7d464bcbac0 +size 5584298 diff --git a/papers/prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge.pdf b/papers/prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge.pdf index cb364513b72e862de04dee0dd4d56ccceeb05b4e..12bd9405c96a110c3d9fd5bf14b5f662bc9ea0b8 100644 --- a/papers/prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge.pdf +++ b/papers/prompting chatgpt in mner enhanced multimodal named entity recognition with auxiliary refined knowledge.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:3228ae43eacb69b8b06a71558260b8e81383ce151ded2f57c1cc72c427987719 -size 1480997 +oid sha256:64343ba006e850551b0259c7be0ea534e58725658a58253e24c1c2544d33ad64 +size 1374699 diff --git a/papers/prompting chatgpt to draw morphological connections for new word comprehension.pdf b/papers/prompting chatgpt to draw morphological connections for new word comprehension.pdf new file mode 100644 index 0000000000000000000000000000000000000000..08eb20b937aebac444adba001461c9517a033a13 --- /dev/null +++ b/papers/prompting chatgpt to draw morphological connections for new word comprehension.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d26914fc6998d9d670ab5da7f254280a9e4f4b837391a0da78757a20013d1a03 +size 257790 diff --git a/papers/prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation.pdf b/papers/prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation.pdf index a01721150fee25262ce5cb7f7571a6e67bdd2907..60ce20e3d71d30e66baa4b8d872fc7da177894ec 100644 --- a/papers/prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation.pdf +++ b/papers/prompting classes exploring the power of prompt class learning in weakly supervised semantic segmentation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ff323c837aeb2ff23dfd5d5bccf799a860f616949c1a5b0b8a8785cd7b199b14 -size 3581467 +oid sha256:f1be7b68d67cee8146f1cf6e7fe3e31f0b684fdf53864a9111292aecd9b8e139 +size 3581466 diff --git a/papers/prompting hard or hardly prompting prompt inversion for texttoimage diffusion models.pdf b/papers/prompting hard or hardly prompting prompt inversion for texttoimage diffusion models.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..85fd90912df79bfa109af6ede588cf10901b29de --- /dev/null +++ b/papers/prompting hard or hardly prompting prompt inversion for texttoimage diffusion models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:845b1411e52c7124db4a67c314e1c17d19bc8e3d9d4e989efd83379cba408020 +size 33547654 diff --git a/papers/prompting large language models for recommender systems a comprehensive framework and empirical analysis.pdf b/papers/prompting large language models for recommender systems a comprehensive framework and empirical analysis.pdf new file mode 100644 index 0000000000000000000000000000000000000000..344d7ce2e0888634d27f23d30e0745b59385d80a --- /dev/null +++ b/papers/prompting large language models for recommender systems a comprehensive framework and empirical analysis.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:715239e051ab7cc75847ef58bd0da63390a687e20600bd0436fcd590c16131cc +size 1775488 diff --git a/papers/prompting large language models with chainofthought for fewshot knowledge base question generation.pdf b/papers/prompting large language models with chainofthought for fewshot knowledge base question generation.pdf index 16470c8a00e165451b4fa9d95367574cd2fe35da..2b4c3a22ecd5640f7f7ed50f2a38f5169ad0e87e 100644 Binary files a/papers/prompting large language models with chainofthought for fewshot knowledge base question generation.pdf and b/papers/prompting large language models with chainofthought for fewshot knowledge base question generation.pdf differ diff --git a/papers/prompting large language models with the socratic method.pdf b/papers/prompting large language models with the socratic method.pdf index c1644cee07254e9e5f3893f0649b7a55b480d542..23cbbaf0a427ecd5efbe195e0abf20dfea1ed64f 100644 Binary files a/papers/prompting large language models with the socratic method.pdf and b/papers/prompting large language models with the socratic method.pdf differ diff --git a/papers/prompting palm for translation assessing strategies and performance.pdf b/papers/prompting palm for translation assessing strategies and performance.pdf index 937d479c11f024c61426ec05f7756193b651e5b2..0759bf44026e410d235c1811520f61545b2803d1 100644 Binary files a/papers/prompting palm for translation assessing strategies and performance.pdf and b/papers/prompting palm for translation assessing strategies and performance.pdf differ diff --git a/papers/prompting to distill boosting datafree knowledge distillation via reinforced prompt.pdf b/papers/prompting to distill boosting datafree knowledge distillation via reinforced prompt.pdf index 2089a08f84328e007b5958fb2e9cde3f4e978270..567f469f11c8c4f5e2fce57eb0bc37db99ca2c11 100644 Binary files a/papers/prompting to distill boosting datafree knowledge distillation via reinforced prompt.pdf and b/papers/prompting to distill boosting datafree knowledge distillation via reinforced prompt.pdf differ diff --git a/papers/promptner prompt locating and typing for named entity recognition.pdf b/papers/promptner prompt locating and typing for named entity recognition.pdf index 0459a18188f284f160d1e30d806dbd18796581cd..bd0e9850b3fdca0b0deb0f05876c0f2082fed55e 100644 Binary files a/papers/promptner prompt locating and typing for named entity recognition.pdf and b/papers/promptner prompt locating and typing for named entity recognition.pdf differ diff --git a/papers/prompts matter insights and strategies for prompt engineering in automated software traceability.pdf b/papers/prompts matter 
insights and strategies for prompt engineering in automated software traceability.pdf index be3558a8cf53b7cb0d1f54bf13629562f9e54fcb..a9f872cf7280d699650ad5f509c3907e22b4971c 100644 Binary files a/papers/prompts matter insights and strategies for prompt engineering in automated software traceability.pdf and b/papers/prompts matter insights and strategies for prompt engineering in automated software traceability.pdf differ diff --git a/papers/prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf b/papers/prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf index 569731a95d7b43a9b34147166a98659a1a178146..ac83dcf2fc847faf7bec560cc2ddfa1aa36f9e6d 100644 Binary files a/papers/prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf and b/papers/prompts should not be seen as secrets systematically measuring prompt extraction attack success.pdf differ diff --git a/papers/proqa structural promptbased pretraining for unified question answering.pdf b/papers/proqa structural promptbased pretraining for unified question answering.pdf index 3b13a5c786026c17525dbddb949db2b6d04824a0..b0357e6f1d12957d5e8d7f38faf61c2d2e0dd04f 100644 Binary files a/papers/proqa structural promptbased pretraining for unified question answering.pdf and b/papers/proqa structural promptbased pretraining for unified question answering.pdf differ diff --git a/papers/protect your prompts protocols for ip protection in llm applications.pdf b/papers/protect your prompts protocols for ip protection in llm applications.pdf index f37867808e8f3f1d185d8708916ac9da4333fdb3..bec0d84eedc76a7551ea7a6f74450b7ee84d65dd 100644 Binary files a/papers/protect your prompts protocols for ip protection in llm applications.pdf and b/papers/protect your prompts protocols for ip protection in llm applications.pdf differ diff --git a/papers/prototypeformer learning to explore prototype relationships for fewshot image classification.pdf b/papers/prototypeformer learning to explore prototype relationships for fewshot image classification.pdf index 0024eff1b2b7cb49bd7466f2830f78f3b5a7c54f..058764258ad2434ac3920ecc81b1b5663233bb33 100644 Binary files a/papers/prototypeformer learning to explore prototype relationships for fewshot image classification.pdf and b/papers/prototypeformer learning to explore prototype relationships for fewshot image classification.pdf differ diff --git a/papers/prototypical verbalizer for promptbased fewshot tuning.pdf b/papers/prototypical verbalizer for promptbased fewshot tuning.pdf index 2d959d38e46876860ce52356e4f4f9f314bd2a60..8c94750a2dfc7aac0bdf29f0c51eaa8a31f38c49 100644 Binary files a/papers/prototypical verbalizer for promptbased fewshot tuning.pdf and b/papers/prototypical verbalizer for promptbased fewshot tuning.pdf differ diff --git a/papers/psg promptbased sequence generation for acronym extraction.pdf b/papers/psg promptbased sequence generation for acronym extraction.pdf index de3f4c2dc9338d0dc0d2afc12765992193e844e3..76e58e4fc02f4277d0e3c6db827609b4cc16ad9e 100644 Binary files a/papers/psg promptbased sequence generation for acronym extraction.pdf and b/papers/psg promptbased sequence generation for acronym extraction.pdf differ diff --git a/papers/purr efficiently editing language model hallucinations by denoising language model corruptions.pdf b/papers/purr efficiently editing language model hallucinations by denoising language model corruptions.pdf index 
e781d633d0607441a59f8264dfe51fef356aaa11..f7c0c1b30347732b249891470287299d4577fa6b 100644 Binary files a/papers/purr efficiently editing language model hallucinations by denoising language model corruptions.pdf and b/papers/purr efficiently editing language model hallucinations by denoising language model corruptions.pdf differ diff --git a/papers/q2d turning questions into dialogs to teach models how to search.pdf b/papers/q2d turning questions into dialogs to teach models how to search.pdf index fee5a88bb6179fc89edafb4149c92f732f9bd29f..780807baf95321d2a8b4e585d0cfc29857709550 100644 --- a/papers/q2d turning questions into dialogs to teach models how to search.pdf +++ b/papers/q2d turning questions into dialogs to teach models how to search.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:c41c7ea2375ff5c68a4a370688f26ce80f9164b3d80ecbf749287386240a0353 -size 1505434 +oid sha256:9931dd45ac95b8961945de9dd38fc49aaebcea2abba11617615cfe0d291c16b2 +size 1520963 diff --git a/papers/qaclims questionanswer cross language image matching for weakly supervised semantic segmentation.pdf b/papers/qaclims questionanswer cross language image matching for weakly supervised semantic segmentation.pdf deleted file mode 100644 index 9b6fb126093f22a68c2c75036c7453c8ff18a469..0000000000000000000000000000000000000000 --- a/papers/qaclims questionanswer cross language image matching for weakly supervised semantic segmentation.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:055639dc000128965557bc5a60af6e278e28727c8b750bbf11f415348ca9ec1b -size 1931819 diff --git a/papers/qameleon multilingual qa with only 5 examples.pdf b/papers/qameleon multilingual qa with only 5 examples.pdf deleted file mode 100644 index 8cc0ac4294b9e127415c7fea6b903a234e75595a..0000000000000000000000000000000000000000 --- a/papers/qameleon multilingual qa with only 5 examples.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cd923ecd6f8504e58c5a6240c1eaa43901a04438f3dc63fea40ef157ee4abfee -size 1254273 diff --git a/papers/qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf b/papers/qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf index ca35645800cfc45221df1b3edbd183f759f4aa2f..8abd6bc958563e36e13d8ed481c113d14a03391d 100644 Binary files a/papers/qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf and b/papers/qualifying chinese medical licensing examination with knowledge enhanced generative pretraining model.pdf differ diff --git a/papers/query2doc query expansion with large language models.pdf b/papers/query2doc query expansion with large language models.pdf index c45edf0a4afc00eea70bda55aa8172bbceb04021..569cca79764e1d1821c2796b2472d85591d2a415 100644 Binary files a/papers/query2doc query expansion with large language models.pdf and b/papers/query2doc query expansion with large language models.pdf differ diff --git a/papers/raft a realworld fewshot text classification benchmark.pdf b/papers/raft a realworld fewshot text classification benchmark.pdf index 4fb9c445c9cb1508c516c63b13004675dcb91e80..939767e3fd30d73463ce0fbd11547f228a49c108 100644 Binary files a/papers/raft a realworld fewshot text classification benchmark.pdf and b/papers/raft a realworld fewshot text classification benchmark.pdf differ diff --git a/papers/rationaleaugmented ensembles in language models.pdf 
b/papers/rationaleaugmented ensembles in language models.pdf index 8e1fab6b49312422d0df35c8583b055b277a7b7c..c0ad59324bfb4570c3ba37b27c2eb186e71f0ada 100644 Binary files a/papers/rationaleaugmented ensembles in language models.pdf and b/papers/rationaleaugmented ensembles in language models.pdf differ diff --git a/papers/rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf b/papers/rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf index 45e95693e9cc7ba2101b5c6433125e06f45fffef..2186c0edc4eef5a1cb0e3b1e1b5d3cb7e974662f 100644 Binary files a/papers/rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf and b/papers/rcot detecting and rectifying factual inconsistency in reasoning by reversing chainofthought.pdf differ diff --git a/papers/reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf b/papers/reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf index 997de42b554a912b2f44335037bc4adeb2ffe635..dcfa3181dd40469e090e7ba7935b6256078075d1 100644 Binary files a/papers/reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf and b/papers/reasoning before responding integrating commonsensebased causality explanation for empathetic response generation.pdf differ diff --git a/papers/recprompt a prompt tuning framework for news recommendation using large language models.pdf b/papers/recprompt a prompt tuning framework for news recommendation using large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b3fdfe91ffd40f6fe9d17e9d39de5296ad90a8d7 --- /dev/null +++ b/papers/recprompt a prompt tuning framework for news recommendation using large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a01581e78d91bdd65ac132b2e3dbd909c20440903db8ef4cb11dd63f59a91f1 +size 1158834 diff --git a/papers/red teaming language model detectors with language models.pdf b/papers/red teaming language model detectors with language models.pdf index 51dba1a8e32530730172607cf725a38ceda50f2d..7453862e8e6c8fb9591520369c30b9b37bab7e76 100644 Binary files a/papers/red teaming language model detectors with language models.pdf and b/papers/red teaming language model detectors with language models.pdf differ diff --git a/papers/reframing instructional prompts to gptk's language.pdf b/papers/reframing instructional prompts to gptk's language.pdf index 0960f4fea9c72cc3d7117a1a931a83d62d6a73ea..efe96dc3e4f5ef88eedeabeacec57b2af699ac08 100644 Binary files a/papers/reframing instructional prompts to gptk's language.pdf and b/papers/reframing instructional prompts to gptk's language.pdf differ diff --git a/papers/reinventing international business education integrating the power of generative ai.pdf b/papers/reinventing international business education integrating the power of generative ai.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3a80d331907e5f0bc93b113258a26f4992921837 --- /dev/null +++ b/papers/reinventing international business education integrating the power of generative ai.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50f07246ba0d15cfb2e5f2825849b25a8ea6b5f16e7825b67509acf044af17e6 +size 246136 diff --git a/papers/relation extraction as openbook examination retrievalenhanced 
prompt tuning.pdf b/papers/relation extraction as openbook examination retrievalenhanced prompt tuning.pdf index 555c2d8276a6d8bb1da4ee9ec8bc8ef407c12402..46092fd92e355cf9b82b4fa66ec8f444b7724d29 100644 Binary files a/papers/relation extraction as openbook examination retrievalenhanced prompt tuning.pdf and b/papers/relation extraction as openbook examination retrievalenhanced prompt tuning.pdf differ diff --git a/papers/relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf b/papers/relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf index 9d301a9ca449c392a6fbf0cd60839d8602ec6c10..23c65b17d93a6df31d6fc34eccacdc3ec9c4189e 100644 Binary files a/papers/relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf and b/papers/relationprompt leveraging prompts to generate synthetic data for zeroshot relation triplet extraction.pdf differ diff --git a/papers/reordering examples helps during primingbased fewshot learning.pdf b/papers/reordering examples helps during primingbased fewshot learning.pdf index dd8fa08c76948db87c9e888af8083ea3abef919d..945b472ee7aa432e5b2a73da902789fab3d61283 100644 Binary files a/papers/reordering examples helps during primingbased fewshot learning.pdf and b/papers/reordering examples helps during primingbased fewshot learning.pdf differ diff --git a/papers/representative demonstration selection for incontext learning with twostage determinantal point process.pdf b/papers/representative demonstration selection for incontext learning with twostage determinantal point process.pdf new file mode 100644 index 0000000000000000000000000000000000000000..22de22a46599e9960a5e501ed4c425a607efdf44 --- /dev/null +++ b/papers/representative demonstration selection for incontext learning with twostage determinantal point process.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a9ac5b9d586bcfb3c88fd1c399bc81170a0388cca01973b5a698331b3ce3ffa +size 606312 diff --git a/papers/reranking for natural language generation from logical forms a study based on large language models.pdf b/papers/reranking for natural language generation from logical forms a study based on large language models.pdf index 641654a93425f6f0c4a915bb6003f653abb4ef24..ab9dd427d3a4b6e05727bddca40abdfb8749843f 100644 Binary files a/papers/reranking for natural language generation from logical forms a study based on large language models.pdf and b/papers/reranking for natural language generation from logical forms a study based on large language models.pdf differ diff --git a/papers/resources and fewshot learners for incontext learning in slavic languages.pdf b/papers/resources and fewshot learners for incontext learning in slavic languages.pdf index c5f4a5d690ae0357cba048ec0df99bd610f09f29..80525808e0fda3f2e4e00a8600096d3402e9b32c 100644 Binary files a/papers/resources and fewshot learners for incontext learning in slavic languages.pdf and b/papers/resources and fewshot learners for incontext learning in slavic languages.pdf differ diff --git a/papers/responsible task automation empowering large language models as responsible task automators.pdf b/papers/responsible task automation empowering large language models as responsible task automators.pdf index 5718b0a8ccd2416f7185c19885fa81d7f11354a5..ddb662500c91c185b5f042a67d63b104c8d12d43 100644 --- a/papers/responsible task automation empowering large language models as responsible task 
automators.pdf +++ b/papers/responsible task automation empowering large language models as responsible task automators.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e33be733e72555fea97b7a50446fa3b819e01e282278cde89943211c64d6812e -size 1989524 +oid sha256:ac3224d4ee87cf757fe8ad909761efae9ca685cfe525579587ddf9cefd8b37f6 +size 1988471 diff --git a/papers/rethink the effectiveness of text data augmentation an empirical analysis.pdf b/papers/rethink the effectiveness of text data augmentation an empirical analysis.pdf index cc5e8b62dfdd086c8c5f72b8b8cd215ac922ba75..89b3864b11b4d19022f36c296b54ff01cc4e82a7 100644 Binary files a/papers/rethink the effectiveness of text data augmentation an empirical analysis.pdf and b/papers/rethink the effectiveness of text data augmentation an empirical analysis.pdf differ diff --git a/papers/rethinking the event coding pipeline with prompt entailment.pdf b/papers/rethinking the event coding pipeline with prompt entailment.pdf index 6ddf9826a8e3b8ac6a19d635b6eabe7cc018ec67..8d59cda0958a9e69db1309faf3cd51c5e6cb9e18 100644 Binary files a/papers/rethinking the event coding pipeline with prompt entailment.pdf and b/papers/rethinking the event coding pipeline with prompt entailment.pdf differ diff --git a/papers/rethinking the role of demonstrations what makes incontext learning work.pdf b/papers/rethinking the role of demonstrations what makes incontext learning work.pdf index f7f378a5f103ad8cbabd22412174fb933075ead3..87c3c50a0db2a7dcc933138b27400be4ea614c93 100644 Binary files a/papers/rethinking the role of demonstrations what makes incontext learning work.pdf and b/papers/rethinking the role of demonstrations what makes incontext learning work.pdf differ diff --git a/papers/reticl sequential retrieval of incontext examples with reinforcement learning.pdf b/papers/reticl sequential retrieval of incontext examples with reinforcement learning.pdf index a268a57460f3bb134cc3ffe991eee60de11da363..4aecce58d3595007e0db65833fa48b0a1845ec70 100644 Binary files a/papers/reticl sequential retrieval of incontext examples with reinforcement learning.pdf and b/papers/reticl sequential retrieval of incontext examples with reinforcement learning.pdf differ diff --git a/papers/retrievalaugmented code generation for universal information extraction.pdf b/papers/retrievalaugmented code generation for universal information extraction.pdf index dae2fce7892549134c27155839cf48a52e852a16..08087c6610e211e8cae067b2fa20d19e901a6fca 100644 Binary files a/papers/retrievalaugmented code generation for universal information extraction.pdf and b/papers/retrievalaugmented code generation for universal information extraction.pdf differ diff --git a/papers/retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf b/papers/retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf index 165b84b84e7430f193708c6098a632393f70b7dd..9724ecd000cf9a2e94ae0b2ff8ea2f36cc29b8b8 100644 Binary files a/papers/retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf and b/papers/retrievalaugmented generation to improve math questionanswering tradeoffs between groundedness and human preference.pdf differ diff --git a/papers/retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf b/papers/retrievalaugmented gpt35based texttosql framework with 
sampleaware prompting and dynamic revision chain.pdf index 65ca7777b1260058bdfdbe3e4e05c9e2ac0827a4..bfad5ff6d9c55b0417de7b0230958f1fc806096a 100644 Binary files a/papers/retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf and b/papers/retrievalaugmented gpt35based texttosql framework with sampleaware prompting and dynamic revision chain.pdf differ diff --git a/papers/retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf b/papers/retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf index 94c3febd235e1f2717bd9ab66f8b539e5ff02336..6e8dac28f5a13452aad8c24d8e5aa0e713385e6d 100644 Binary files a/papers/retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf and b/papers/retrieverewriteanswer a kgtotext enhanced llms framework for knowledge graph question answering.pdf differ diff --git a/papers/retrieving supporting evidence for generative question answering.pdf b/papers/retrieving supporting evidence for generative question answering.pdf index abd261405cac67074dd74bb5be721cef13915e85..d0c2d9d9f258709f192008d59619e1bfdbbd5492 100644 Binary files a/papers/retrieving supporting evidence for generative question answering.pdf and b/papers/retrieving supporting evidence for generative question answering.pdf differ diff --git a/papers/retrieving texts based on abstract descriptions.pdf b/papers/retrieving texts based on abstract descriptions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..86c99911e89c57866f0df9a3a80dba546bd8a3b2 --- /dev/null +++ b/papers/retrieving texts based on abstract descriptions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7bcf77202bf544d0b14fea5106938e0fee34f5070b558176cc37c718229fbf2 +size 1425571 diff --git a/papers/review of large vision models and visual prompt engineering.pdf b/papers/review of large vision models and visual prompt engineering.pdf index 73608fb8f29b7030df0393059b651d12b3357c90..bae60e2d329734609cd469f4f6fa2c01223dc8ad 100644 Binary files a/papers/review of large vision models and visual prompt engineering.pdf and b/papers/review of large vision models and visual prompt engineering.pdf differ diff --git a/papers/revisiting automated prompting are we actually doing better.pdf b/papers/revisiting automated prompting are we actually doing better.pdf index abba34702861cae20549261e620a1eee526fec86..93bb7e3ac81a9422605bd870c990dedfafa5e3fc 100644 Binary files a/papers/revisiting automated prompting are we actually doing better.pdf and b/papers/revisiting automated prompting are we actually doing better.pdf differ diff --git a/papers/revisiting nonenglish text simplification a unified multilingual benchmark.pdf b/papers/revisiting nonenglish text simplification a unified multilingual benchmark.pdf index 323a26cf4deab515284b95dbb5ff026620469176..e486a345d52e2fc67f8f4ead5d4f18f6960a00de 100644 Binary files a/papers/revisiting nonenglish text simplification a unified multilingual benchmark.pdf and b/papers/revisiting nonenglish text simplification a unified multilingual benchmark.pdf differ diff --git a/papers/revisiting prompt engineering via declarative crowdsourcing.pdf b/papers/revisiting prompt engineering via declarative crowdsourcing.pdf index 40c281a4a0bead1befca8d52f144ca72b1b8eee8..02de37491217a820c221d7f10749ae81cc962e01 100644 Binary files a/papers/revisiting prompt engineering via declarative 
crowdsourcing.pdf and b/papers/revisiting prompt engineering via declarative crowdsourcing.pdf differ diff --git a/papers/rgl a simple yet effective relation graph augmented promptbased tuning approach for fewshot learning.pdf b/papers/rgl a simple yet effective relation graph augmented promptbased tuning approach for fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..89a9a6abd07a3900535e124e1ea17d39ce4adfdf --- /dev/null +++ b/papers/rgl a simple yet effective relation graph augmented promptbased tuning approach for fewshot learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bb93420ea9e981a13587cde8541b5538baa0e2348ce2f6fb9e5e128efb9ae88 +size 1834492 diff --git a/papers/right to be forgotten in the era of large language models implications, challenges, and solutions.pdf b/papers/right to be forgotten in the era of large language models implications, challenges, and solutions.pdf index 72d4caa286a4c27de1f5d86fab9341b94a09e539..c6e55f81f506bcb5240bfaea0f001ac11ae9af01 100644 Binary files a/papers/right to be forgotten in the era of large language models implications, challenges, and solutions.pdf and b/papers/right to be forgotten in the era of large language models implications, challenges, and solutions.pdf differ diff --git a/papers/rmprt realistic robotic manipulation simulator and benchmark with progressive reasoning tasks.pdf b/papers/rmprt realistic robotic manipulation simulator and benchmark with progressive reasoning tasks.pdf deleted file mode 100644 index eabbce0a4fade35cc1a3df0e556c81b3f1813888..0000000000000000000000000000000000000000 --- a/papers/rmprt realistic robotic manipulation simulator and benchmark with progressive reasoning tasks.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3d48353323aaad88a71a15dfcd7dded7ce59ab7634154f14460b0c313f958849 -size 1784788 diff --git a/papers/robust prompt optimization for large language models against distribution shifts.pdf b/papers/robust prompt optimization for large language models against distribution shifts.pdf index e4b1faa01e79f6343c9edf2c225b2418a27a3a49..f2f171daa356ee09e55a9569513bd92865fad3c4 100644 Binary files a/papers/robust prompt optimization for large language models against distribution shifts.pdf and b/papers/robust prompt optimization for large language models against distribution shifts.pdf differ diff --git a/papers/robust retrieval augmented generation for zeroshot slot filling.pdf b/papers/robust retrieval augmented generation for zeroshot slot filling.pdf index 227209bd0a6d906df80e81a50ae8ba4b532b18b7..2704f301d2756a68b6396622dffef22c4559f7fc 100644 Binary files a/papers/robust retrieval augmented generation for zeroshot slot filling.pdf and b/papers/robust retrieval augmented generation for zeroshot slot filling.pdf differ diff --git a/papers/robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf b/papers/robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf index efe5f5370470dc0f4cdf25a52d95a940a963d924..1c5edf0448bd71e2b3a0cd4717c252f13741109e 100644 Binary files a/papers/robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf and b/papers/robut a systematic study of table qa robustness against humanannotated adversarial perturbations.pdf differ diff --git a/papers/roco dialectic multirobot collaboration with large language models.pdf b/papers/roco 
dialectic multirobot collaboration with large language models.pdf deleted file mode 100644 index 9e041d90f7cfec31da7b70630d0ba82a0d38435a..0000000000000000000000000000000000000000 --- a/papers/roco dialectic multirobot collaboration with large language models.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0f453bf7ed13b216ab0d8a5b8e4d2fee6083fbb5b882c63ad717f5051ff263eb -size 10645128 diff --git a/papers/rtllm an opensource benchmark for design rtl generation with large language model.pdf b/papers/rtllm an opensource benchmark for design rtl generation with large language model.pdf index 863f5760d4b1c62768b2fa42dd2129a3498cb14e..f268a5b03c766ad0006910ac2930640544bd955a 100644 Binary files a/papers/rtllm an opensource benchmark for design rtl generation with large language model.pdf and b/papers/rtllm an opensource benchmark for design rtl generation with large language model.pdf differ diff --git a/papers/s$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf b/papers/s$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf index c27ed3602ffbfe513f4cfb1759243dad46c1ab3a..8e275e4a2a0d7295622a78e85e15bb4c19a10185 100644 Binary files a/papers/s$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf and b/papers/s$^3$hqa a threestage approach for multihop texttable hybrid question answering.pdf differ diff --git a/papers/s3 socialnetwork simulation system with large language modelempowered agents.pdf b/papers/s3 socialnetwork simulation system with large language modelempowered agents.pdf index 8b2d5796ea4e88a9c36f57f52d224480cf96a9e6..c40adbb35b2309db811cf687e1fdcedc6d21c1d2 100644 Binary files a/papers/s3 socialnetwork simulation system with large language modelempowered agents.pdf and b/papers/s3 socialnetwork simulation system with large language modelempowered agents.pdf differ diff --git a/papers/s3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf b/papers/s3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf index 5c3849bc08915909be14fdf3af3f4de7195f6b8f..2682c4e36ed4a7d93a151c6c289ac1fb8c96c56a 100644 Binary files a/papers/s3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf and b/papers/s3dst structured opendomain dialogue segmentation and state tracking in the era of llms.pdf differ diff --git a/papers/safety analysis in the era of large language models a case study of stpa using chatgpt.pdf b/papers/safety analysis in the era of large language models a case study of stpa using chatgpt.pdf index c7bb596ac380a02df9e6e8e6a41ca3860069d728..85eb84cbee169482a7d39701665f41370bb60458 100644 --- a/papers/safety analysis in the era of large language models a case study of stpa using chatgpt.pdf +++ b/papers/safety analysis in the era of large language models a case study of stpa using chatgpt.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:70a5c5bea7002e165cf6e3584d14ff7f714f435e2c268ff39f25cc025bd87d3f -size 1321409 +oid sha256:c3fa989cb11625bfc3b960876c51d3e1a2aa6f65e162786d8c6b6e4796e9d8a2 +size 1342712 diff --git a/papers/salmon selfalignment with principlefollowing reward models.pdf b/papers/salmon selfalignment with principlefollowing reward models.pdf index 33987e38a70776cfecf6730850edfb7bf054201f..2736d284ffc4dfb1e92c99d4649a148871453c68 100644 Binary files a/papers/salmon selfalignment with principlefollowing reward 
models.pdf and b/papers/salmon selfalignment with principlefollowing reward models.pdf differ diff --git a/papers/satisfiabilityaided language models using declarative prompting.pdf b/papers/satisfiabilityaided language models using declarative prompting.pdf index 11f627287e3e9aaef5f17d6c39138c225c7dd683..927d24081f9d223b4656723121cc011bc5e67a0f 100644 Binary files a/papers/satisfiabilityaided language models using declarative prompting.pdf and b/papers/satisfiabilityaided language models using declarative prompting.pdf differ diff --git a/papers/scalable and transferable blackbox jailbreaks for language models via persona modulation.pdf b/papers/scalable and transferable blackbox jailbreaks for language models via persona modulation.pdf index d84ef22001cb0b6c9ac137bd3c3a728d5215deca..4d6fda474273b420f22ff5610eedfb5e5dc5b651 100644 --- a/papers/scalable and transferable blackbox jailbreaks for language models via persona modulation.pdf +++ b/papers/scalable and transferable blackbox jailbreaks for language models via persona modulation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1f0ac60aadc057506a8473ed53f94f5587dcf8aae2e58d0d4c802dfe67ab2650 -size 1139838 +oid sha256:0617dcd2f5ac672a91a87abc29292dc4197b2150bd9c22f38f4dbce147b4e60c +size 1383215 diff --git a/papers/scalable approach to medical wearable postmarket surveillance.pdf b/papers/scalable approach to medical wearable postmarket surveillance.pdf index e1018f982e59b2bf29a793f47ac22dfb8ff42b06..c6f411425946ece357b7366b6ca19f29ea07100e 100644 Binary files a/papers/scalable approach to medical wearable postmarket surveillance.pdf and b/papers/scalable approach to medical wearable postmarket surveillance.pdf differ diff --git a/papers/scalable prompt generation for semisupervised learning with language models.pdf b/papers/scalable prompt generation for semisupervised learning with language models.pdf index c59581034c3fec17020b5a22c8dc529db4fb3615..f47fd4e32957b98e78a08e9c1e7854307073334c 100644 Binary files a/papers/scalable prompt generation for semisupervised learning with language models.pdf and b/papers/scalable prompt generation for semisupervised learning with language models.pdf differ diff --git a/papers/scaling asr improves zero and few shot learning.pdf b/papers/scaling asr improves zero and few shot learning.pdf index 12517ed04c77f952995c234700aff7588b81ba8b..0763a9fbcf6040db0be5599fbad1a23e1235b36f 100644 Binary files a/papers/scaling asr improves zero and few shot learning.pdf and b/papers/scaling asr improves zero and few shot learning.pdf differ diff --git a/papers/scifix outperforming gpt3 on scientific factual error correction.pdf b/papers/scifix outperforming gpt3 on scientific factual error correction.pdf index f6c81ed8c5881215ef44566c81a2bbc46104a515..7d3ef91ae0ab1dec948e82a94a78bfbfef1d088f 100644 Binary files a/papers/scifix outperforming gpt3 on scientific factual error correction.pdf and b/papers/scifix outperforming gpt3 on scientific factual error correction.pdf differ diff --git a/papers/seeavatar photorealistic textto3d avatar generation with constrained geometry and appearance.pdf b/papers/seeavatar photorealistic textto3d avatar generation with constrained geometry and appearance.pdf new file mode 100644 index 0000000000000000000000000000000000000000..40435855f929e748d26bec04bbeb8a95c91d746f --- /dev/null +++ b/papers/seeavatar photorealistic textto3d avatar generation with constrained geometry and appearance.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a7023b4480fdf725a39d5bb9d6c1bebcb24ad9eb8437add2440c64473b5c3586 +size 12487475 diff --git a/papers/seek for incantations towards accurate texttoimage diffusion synthesis through prompt engineering.pdf b/papers/seek for incantations towards accurate texttoimage diffusion synthesis through prompt engineering.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9964c92e3dc4333e52243cabf42482dca3a96ef9 --- /dev/null +++ b/papers/seek for incantations towards accurate texttoimage diffusion synthesis through prompt engineering.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a06d1f79f65fbed114d7d52a4dbacf32d07770eb43bd4c8b096747f9e3b8dee +size 5838463 diff --git a/papers/seeking clozure robust hypernym extraction from bert with anchored prompts.pdf b/papers/seeking clozure robust hypernym extraction from bert with anchored prompts.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0ab8fc181548a0a7cf687fdac262925445f3621f --- /dev/null +++ b/papers/seeking clozure robust hypernym extraction from bert with anchored prompts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e23a502ee7a4e7d0bf273e703ffb4fd295075800fceb40217a70825ffbb0a330 +size 799757 diff --git a/papers/selective annotation makes language models better fewshot learners.pdf b/papers/selective annotation makes language models better fewshot learners.pdf index db58ccb323e9c4ab8904e9e9bf078376094854d2..3939a2102f42bc1def671be30cacda3c92ade501 100644 Binary files a/papers/selective annotation makes language models better fewshot learners.pdf and b/papers/selective annotation makes language models better fewshot learners.pdf differ diff --git a/papers/selective demonstrations for crossdomain texttosql.pdf b/papers/selective demonstrations for crossdomain texttosql.pdf index 52c829a16ef88c5e782dd7460509dee3b2f97318..a88aca9e6ca04649e7915aa86136646e141f0220 100644 Binary files a/papers/selective demonstrations for crossdomain texttosql.pdf and b/papers/selective demonstrations for crossdomain texttosql.pdf differ diff --git a/papers/selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf b/papers/selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf index 644de2df0a44a63cabd7f09d52bd6076a32bc53f..b81417c61e2208071cf46e17e23797bc670f3363 100644 Binary files a/papers/selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf and b/papers/selfadaptive incontext learning an information compression perspective for incontext example selection and ordering.pdf differ diff --git a/papers/selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf b/papers/selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf index 8cef32e0e8d537ae4df97ab80958dbc7972be108..2ce858bd398a05555402aade84e6d993617ae582 100644 Binary files a/papers/selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf and b/papers/selfcheckgpt zeroresource blackbox hallucination detection for generative large language models.pdf differ diff --git a/papers/selfcritique prompting with large language models for inductive instructions.pdf b/papers/selfcritique prompting with large language models for inductive instructions.pdf deleted file mode 100644 index 
44290edb5ae1e5aacd520eaef1ebaae5ba1a7674..0000000000000000000000000000000000000000 --- a/papers/selfcritique prompting with large language models for inductive instructions.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b5e642bdaed1594da7c0928cc51ce038dccb1c684235327601058ab6df83dcf2 -size 1458395 diff --git a/papers/selfevolve a code evolution framework via large language models.pdf b/papers/selfevolve a code evolution framework via large language models.pdf index 03185367312dab9db10e7f85a0fc62e7557fa941..2acec007ed39dfbfa7453f3d1f8ca283bacde002 100644 Binary files a/papers/selfevolve a code evolution framework via large language models.pdf and b/papers/selfevolve a code evolution framework via large language models.pdf differ diff --git a/papers/selfexplanation prompting improves dialogue understanding in large language models.pdf b/papers/selfexplanation prompting improves dialogue understanding in large language models.pdf index e3b31106c68b0d6dc7918bf62eda3c37471cddcd..9d4c016361bc17da8ec18799ffb34c92cd2978b3 100644 Binary files a/papers/selfexplanation prompting improves dialogue understanding in large language models.pdf and b/papers/selfexplanation prompting improves dialogue understanding in large language models.pdf differ diff --git a/papers/selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator.pdf b/papers/selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator.pdf index 6e6950768f24297c8efc7147c8bd7aeaa95df253..8033ea5d8b65684b21f747fd74b8565286b73e69 100644 Binary files a/papers/selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator.pdf and b/papers/selfgenerated incontext learning leveraging autoregressive language models as a demonstration generator.pdf differ diff --git a/papers/selficl zeroshot incontext learning with selfgenerated demonstrations.pdf b/papers/selficl zeroshot incontext learning with selfgenerated demonstrations.pdf index 443984355e19e8b3f2c3c5b1974fb61fb89c2f95..5befbf4a71e4c3da0897553127c4cea89a7d585e 100644 Binary files a/papers/selficl zeroshot incontext learning with selfgenerated demonstrations.pdf and b/papers/selficl zeroshot incontext learning with selfgenerated demonstrations.pdf differ diff --git a/papers/selfplanning code generation with large language models.pdf b/papers/selfplanning code generation with large language models.pdf index 7cc973687d5967f04c7ae0cfcb11c19d311284ae..cbcc1cece52cc5348685f541afe986296cc454eb 100644 Binary files a/papers/selfplanning code generation with large language models.pdf and b/papers/selfplanning code generation with large language models.pdf differ diff --git a/papers/selfpolish enhance reasoning in large language models via problem refinement.pdf b/papers/selfpolish enhance reasoning in large language models via problem refinement.pdf index e4ed6d5f1ed39894e1abad05a9c10e74a0570cf5..a0c84159bdfe9043438cfda6f3fa2c1049f9674b 100644 Binary files a/papers/selfpolish enhance reasoning in large language models via problem refinement.pdf and b/papers/selfpolish enhance reasoning in large language models via problem refinement.pdf differ diff --git a/papers/selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf b/papers/selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf index 
76a07ee070a55ca334bbd747d32807766ef7df49..43018842eb01fc41c13c53f7a696f3b78a0a63ba 100644 Binary files a/papers/selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf and b/papers/selfprompted chainofthought on large language models for opendomain multihop reasoning.pdf differ diff --git a/papers/selfprompting large language models for zeroshot opendomain qa.pdf b/papers/selfprompting large language models for zeroshot opendomain qa.pdf index f5eb0cb4dd9132328a038b54678b9a5a4db7c49d..cea452c86fa1b0b382a7650af09477f4207b5724 100644 Binary files a/papers/selfprompting large language models for zeroshot opendomain qa.pdf and b/papers/selfprompting large language models for zeroshot opendomain qa.pdf differ diff --git a/papers/selfsupervision can be a good fewshot learner.pdf b/papers/selfsupervision can be a good fewshot learner.pdf index 38c5a3f9616618c95a594ae5cbc6be270bc8bf51..825790d26f4d0db6ab1af13172bab577cd3bdfd4 100644 Binary files a/papers/selfsupervision can be a good fewshot learner.pdf and b/papers/selfsupervision can be a good fewshot learner.pdf differ diff --git a/papers/semantic matching for text classification with complex class descriptions.pdf b/papers/semantic matching for text classification with complex class descriptions.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0d3846ad3559eeda16d57e26c60f542609cb41c3 --- /dev/null +++ b/papers/semantic matching for text classification with complex class descriptions.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aea46dd9a58f2dd08ce45d4f60312a238807b4504fc99965ffdada1295b437bc +size 538250 diff --git a/papers/semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf b/papers/semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf index 8fa916d4efbea6f6ba45c9d1607abfecf9b0bd02..c0e66aa71aec33cc7b9c4873d2f2494206de79a8 100644 Binary files a/papers/semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf and b/papers/semantic parsing by large language models for intricate updating strategies of zeroshot dialogue state tracking.pdf differ diff --git a/papers/semanticaware frameevent fusion based pattern recognition via large visionlanguage models.pdf b/papers/semanticaware frameevent fusion based pattern recognition via large visionlanguage models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5078c940b1836223915112891882755daa89ee9e --- /dev/null +++ b/papers/semanticaware frameevent fusion based pattern recognition via large visionlanguage models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:526ec3acafd742d597d7785544712248c577bbe55a837207bc766efbae4df41e +size 8134353 diff --git a/papers/semanticoriented unlabeled priming for largescale language models.pdf b/papers/semanticoriented unlabeled priming for largescale language models.pdf index d4210bb5cac9e21d218e82d77beb7d09fceb229e..5349746231d00c25f1a615f98061a51d6b1620b7 100644 Binary files a/papers/semanticoriented unlabeled priming for largescale language models.pdf and b/papers/semanticoriented unlabeled priming for largescale language models.pdf differ diff --git a/papers/sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf b/papers/sensitivity and robustness of large language models to prompt 
template in japanese text classification tasks.pdf index 27fc8d9b07c92070a155ebbcf220187baea12e61..688494710d5ed7463b855e62bea3fcac46bac749 100644 Binary files a/papers/sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf and b/papers/sensitivity and robustness of large language models to prompt template in japanese text classification tasks.pdf differ diff --git a/papers/sentence simplification via large language models.pdf b/papers/sentence simplification via large language models.pdf index 7ec172fe2a1390391f486d71313b2964a26f11c0..65dec4443bff62e27c8fb71cc64ea174af0da001 100644 Binary files a/papers/sentence simplification via large language models.pdf and b/papers/sentence simplification via large language models.pdf differ diff --git a/papers/sentiment analysis in the era of large language models a reality check.pdf b/papers/sentiment analysis in the era of large language models a reality check.pdf index 9624a28fd469169465d39fd47e64fe8aaff0a929..0294daa257cfbecfdc9f8269049428ffa9e689f2 100644 Binary files a/papers/sentiment analysis in the era of large language models a reality check.pdf and b/papers/sentiment analysis in the era of large language models a reality check.pdf differ diff --git a/papers/sentiment analysis through llm negotiations.pdf b/papers/sentiment analysis through llm negotiations.pdf index c05d4c7da347f19eb3fccf65b65053bf85f72d9f..7fd0ef712554e6e3698b708230fad20a2e217c95 100644 Binary files a/papers/sentiment analysis through llm negotiations.pdf and b/papers/sentiment analysis through llm negotiations.pdf differ diff --git a/papers/short answer grading using oneshot prompting and text similarity scoring model.pdf b/papers/short answer grading using oneshot prompting and text similarity scoring model.pdf index 4b70e8db7506674c3b96b651c5314c503ab10b7b..26f6277840209e934442b36e64ac6e6b1a097d53 100644 Binary files a/papers/short answer grading using oneshot prompting and text similarity scoring model.pdf and b/papers/short answer grading using oneshot prompting and text similarity scoring model.pdf differ diff --git a/papers/signedprompt a new approach to prevent prompt injection attacks against llmintegrated applications.pdf b/papers/signedprompt a new approach to prevent prompt injection attacks against llmintegrated applications.pdf new file mode 100644 index 0000000000000000000000000000000000000000..31ec3261fb28f0431cd4f818828df1139e2bb88e --- /dev/null +++ b/papers/signedprompt a new approach to prevent prompt injection attacks against llmintegrated applications.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b508cc07c4e188e0e054e3e174c25ee70ab4329d5232cde11d713cf6395400c +size 556379 diff --git a/papers/simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf b/papers/simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf index 0fcfbddf71d5282f2e25201d330f5f53487fe809..189d883598f3497f5fdb172ff83189ff268a528b 100644 Binary files a/papers/simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf and b/papers/simple llm prompting is stateoftheart for robust and multilingual dialogue evaluation.pdf differ diff --git a/papers/simple semanticaided fewshot learning.pdf b/papers/simple semanticaided fewshot learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..df5cc08b1839461c215e933ca38b5241874f67c5 --- /dev/null +++ b/papers/simple semanticaided fewshot 
learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce1a34b03fd48ff868068fb969d3ab60e50dac9acebc02fce3cbf01d7c3cf04b +size 1117189 diff --git a/papers/simulating hp lovecraft horror literature with the chatgpt large language model.pdf b/papers/simulating hp lovecraft horror literature with the chatgpt large language model.pdf index 748e1cb14e428aac7b2b2aefd311387dd2738d4a..07f4bd74b3dbe131eb15566c71c9abe5a560c957 100644 Binary files a/papers/simulating hp lovecraft horror literature with the chatgpt large language model.pdf and b/papers/simulating hp lovecraft horror literature with the chatgpt large language model.pdf differ diff --git a/papers/small language models improve giants by rewriting their outputs.pdf b/papers/small language models improve giants by rewriting their outputs.pdf index ee6f5de5ae22bdb2505ae091688a20acb8f2ac25..b410eb5107bdf77e08216f2fccb00a21724f0ca8 100644 Binary files a/papers/small language models improve giants by rewriting their outputs.pdf and b/papers/small language models improve giants by rewriting their outputs.pdf differ diff --git a/papers/small models are valuable plugins for large language models.pdf b/papers/small models are valuable plugins for large language models.pdf index 28577c6fb67b97258f8de0e964bbd034d45646de..a88e2765cce5c0a3fe3c2b82622b3d020ce5e78c 100644 Binary files a/papers/small models are valuable plugins for large language models.pdf and b/papers/small models are valuable plugins for large language models.pdf differ diff --git a/papers/small visual language models can also be openended fewshot learners.pdf b/papers/small visual language models can also be openended fewshot learners.pdf deleted file mode 100644 index d5e40954c1451479fe5639d15f57d89cd356bb4b..0000000000000000000000000000000000000000 --- a/papers/small visual language models can also be openended fewshot learners.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:64400ae488f9d5b98882f7228fafeff03f6b2cc529596bc66a9b3881d6f1d5d5 -size 3864415 diff --git a/papers/snoopy an online interface for exploring the effect of pretraining term frequencies on fewshot lm performance.pdf b/papers/snoopy an online interface for exploring the effect of pretraining term frequencies on fewshot lm performance.pdf new file mode 100644 index 0000000000000000000000000000000000000000..48651e7857422e8e1697c35c7dd46f7a5b8cb4fa --- /dev/null +++ b/papers/snoopy an online interface for exploring the effect of pretraining term frequencies on fewshot lm performance.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f68a117c099b6b641ff5e4d3e1b1e0a4343a788163cc067be436e76c1367a66 +size 2098412 diff --git a/papers/social simulacra creating populated prototypes for social computing systems.pdf b/papers/social simulacra creating populated prototypes for social computing systems.pdf index 42d6384d11d7d2d3d2a57a9e5b4ee8db7b2e2aa7..836b3435117e1bb7f31d0c72099921405b52e0e2 100644 --- a/papers/social simulacra creating populated prototypes for social computing systems.pdf +++ b/papers/social simulacra creating populated prototypes for social computing systems.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9a1ad5f5129a6b0ca9d56d04a11d23620d39d7986898310dad0d19b8e9f0ab9c +oid sha256:67c2aa1e8d5675404a29deb656fa7840cabfefeab0675f93a7ec820109601062 size 3189538 diff --git a/papers/sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf 
b/papers/sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf index 0270d8f53a830851d57343a13a6f6088e2c99c3b..50dd08f7f9d923fc96ab19c65c8109402019f0b2 100644 Binary files a/papers/sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf and b/papers/sociocultural knowledge is needed for selection of shots in hate speech detection tasks.pdf differ diff --git a/papers/sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf b/papers/sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf index 51d928e5461fd114035b4e4f095ca76b98d7f7b8..77dc28417ad8c852d4453e5055bbb875d0b8025e 100644 Binary files a/papers/sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf and b/papers/sociocultural norm similarities and differences via situational alignment and explainable textual entailment.pdf differ diff --git a/papers/software testing with large language model survey, landscape, and vision.pdf b/papers/software testing with large language model survey, landscape, and vision.pdf deleted file mode 100644 index af6c887ce533e797c5446862ecc5fd0ba291be7b..0000000000000000000000000000000000000000 --- a/papers/software testing with large language model survey, landscape, and vision.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e8c12f84c892d625d202082f1aba24fa000d173525b021a28b0961c1fdcb5297 -size 1311386 diff --git a/papers/software testing with large language models survey, landscape, and vision.pdf b/papers/software testing with large language models survey, landscape, and vision.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f2efe2b23db8e3ea149425cff8e58b215ba41e16 --- /dev/null +++ b/papers/software testing with large language models survey, landscape, and vision.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5030dac88f23bd8a4c0415ef3ea640e3cde88170fd61fc9a05b0f78ddf356fb6 +size 1318017 diff --git a/papers/solving and generating npr sunday puzzles with large language models.pdf b/papers/solving and generating npr sunday puzzles with large language models.pdf index af9fe371e2372291597c8da18e34bffe21f25040..a1e0db6846e866348abbac0f5023a502de0f461b 100644 Binary files a/papers/solving and generating npr sunday puzzles with large language models.pdf and b/papers/solving and generating npr sunday puzzles with large language models.pdf differ diff --git a/papers/speakerbox fewshot learning for speaker identification with transformers.pdf b/papers/speakerbox fewshot learning for speaker identification with transformers.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ba186fc816ba95e19948d74b0b467950f914baef --- /dev/null +++ b/papers/speakerbox fewshot learning for speaker identification with transformers.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86f139ecc3671cb4fe2b981d855d979af00618f7ce7ee9b2bdf3f992debf15a9 +size 3305171 diff --git a/papers/spear phishing with large language models.pdf b/papers/spear phishing with large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..62190bf77c7bb79eae3a84389c47d1012924be13 --- /dev/null +++ b/papers/spear phishing with large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:5dd40c789c7cae86ee37bf51ee8a6509cc578cc79d534f2303e4613c382de39a +size 242880 diff --git a/papers/spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf b/papers/spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf index a64ba0d42e92c08d89966872e1d87257ff5f77f6..1a25fcb77bb836040e26fd1d1368f2fbbe20d194 100644 Binary files a/papers/spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf and b/papers/spec a soft promptbased calibration on mitigating performance variability in clinical notes summarization.pdf differ diff --git a/papers/spring gpt4 outperforms rl algorithms by studying papers and reasoning.pdf b/papers/spring gpt4 outperforms rl algorithms by studying papers and reasoning.pdf deleted file mode 100644 index 9176eb371d6a41a44706166febf995da8ea04ec2..0000000000000000000000000000000000000000 --- a/papers/spring gpt4 outperforms rl algorithms by studying papers and reasoning.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5711d1493ab15341733624435cb02908674106a7f0963e4e322ed07862deb81d -size 2071739 diff --git a/papers/sqlpalm improved large language model adaptation for texttosql.pdf b/papers/sqlpalm improved large language model adaptation for texttosql.pdf index d8aa1ae3bbd5aba7d8830680dbe4b36f3da28301..545cd465f2f3941098a208d215ebdedc53620388 100644 Binary files a/papers/sqlpalm improved large language model adaptation for texttosql.pdf and b/papers/sqlpalm improved large language model adaptation for texttosql.pdf differ diff --git a/papers/sqlprompt incontext texttosql with minimal labeled data.pdf b/papers/sqlprompt incontext texttosql with minimal labeled data.pdf index 64d4b4986dac0947fd3e103be46820473ba4d201..8e4d166dc3c08ca4a863383d24c92634d0d042bc 100644 Binary files a/papers/sqlprompt incontext texttosql with minimal labeled data.pdf and b/papers/sqlprompt incontext texttosql with minimal labeled data.pdf differ diff --git a/papers/stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf b/papers/stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf index 49af8c15cb912ad93f272c5c40a8f81f016708f8..0fffceb2510aec4d7ac7453618fc0dde407d188d 100644 Binary files a/papers/stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf and b/papers/stabilized incontext learning with pretrained language models for few shot dialogue state tracking.pdf differ diff --git a/papers/stance detection with supervised, zeroshot, and fewshot applications.pdf b/papers/stance detection with supervised, zeroshot, and fewshot applications.pdf index b0d1ec21c44b19dc6a0121f5f038f6254c2694e0..c7570b6e290aee7ea4117b1f36761dd737e01684 100644 Binary files a/papers/stance detection with supervised, zeroshot, and fewshot applications.pdf and b/papers/stance detection with supervised, zeroshot, and fewshot applications.pdf differ diff --git a/papers/statistical depth for ranking and characterizing transformerbased text embeddings.pdf b/papers/statistical depth for ranking and characterizing transformerbased text embeddings.pdf index ebcb631b05e4880a0f5c34c9acd3c050481bfad7..abc408b435e99845ec5bc60efd38b684947aa236 100644 Binary files a/papers/statistical depth for ranking and characterizing transformerbased text embeddings.pdf and b/papers/statistical depth 
for ranking and characterizing transformerbased text embeddings.pdf differ diff --git a/papers/steering large language models for machine translation with finetuning and incontext learning.pdf b/papers/steering large language models for machine translation with finetuning and incontext learning.pdf index 896b8047d130a0910940a8fab26909898009206a..4eea7ee93198111064c99e41b080d96acfba543e 100644 Binary files a/papers/steering large language models for machine translation with finetuning and incontext learning.pdf and b/papers/steering large language models for machine translation with finetuning and incontext learning.pdf differ diff --git a/papers/stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf b/papers/stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf index baaaca43de7064fd3cae8f52551c679f00dc8fbd..d63b14c4ddd418503fef01633d21f25aee9a7fd1 100644 Binary files a/papers/stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf and b/papers/stprompt semanticguided and taskdriven prompts for effective fewshot classification.pdf differ diff --git a/papers/street a multitask structured reasoning and explanation benchmark.pdf b/papers/street a multitask structured reasoning and explanation benchmark.pdf index 678750b6988931a84a30c4ca84c916214caebc62..3648d19c9a50d4f432da63c87c7412734be33d5c 100644 Binary files a/papers/street a multitask structured reasoning and explanation benchmark.pdf and b/papers/street a multitask structured reasoning and explanation benchmark.pdf differ diff --git a/papers/stress testing chainofthought prompting for large language models.pdf b/papers/stress testing chainofthought prompting for large language models.pdf index 1fd0c3d7d0ade40bff08aa3847c8514d9038b136..de6ed2557f10b6f5980454276caeb77b82398357 100644 Binary files a/papers/stress testing chainofthought prompting for large language models.pdf and b/papers/stress testing chainofthought prompting for large language models.pdf differ diff --git a/papers/structured chainofthought prompting for code generation.pdf b/papers/structured chainofthought prompting for code generation.pdf index d87700755ba5b8ffa4503d90653d43f737428d9a..c2e2eeea4c394346cc16513a66f476eeac1cf2a2 100644 Binary files a/papers/structured chainofthought prompting for code generation.pdf and b/papers/structured chainofthought prompting for code generation.pdf differ diff --git a/papers/structured prompting scaling incontext learning to 1,000 examples.pdf b/papers/structured prompting scaling incontext learning to 1,000 examples.pdf index 044b6a9dc528753a280a83f312bb148a1a6a8774..42abf011f64f116192e2d7d03e12930e41190071 100644 Binary files a/papers/structured prompting scaling incontext learning to 1,000 examples.pdf and b/papers/structured prompting scaling incontext learning to 1,000 examples.pdf differ diff --git a/papers/stt soft template tuning for fewshot adaptation.pdf b/papers/stt soft template tuning for fewshot adaptation.pdf index 0a8605658c761ccbd71eb248862f089f2ed12ea6..4263972b7e2d4f3674bbe4ce9501c82aafad0e52 100644 Binary files a/papers/stt soft template tuning for fewshot adaptation.pdf and b/papers/stt soft template tuning for fewshot adaptation.pdf differ diff --git a/papers/studenteval a benchmark of studentwritten prompts for large language models of code.pdf b/papers/studenteval a benchmark of studentwritten prompts for large language models of code.pdf index 
c47915d141cc2b2533e82e4b1a1b25911542db80..981b6286b4107978845f7de026b91accaecb9faf 100644 Binary files a/papers/studenteval a benchmark of studentwritten prompts for large language models of code.pdf and b/papers/studenteval a benchmark of studentwritten prompts for large language models of code.pdf differ diff --git a/papers/supercharging academic writing with generative ai framework, techniques, and caveats.pdf b/papers/supercharging academic writing with generative ai framework, techniques, and caveats.pdf index d8dd3be18da2fb1764b0e572147fcc31fe889b68..d6cff239bf04b432499e6b0e65992d450a4b802b 100644 Binary files a/papers/supercharging academic writing with generative ai framework, techniques, and caveats.pdf and b/papers/supercharging academic writing with generative ai framework, techniques, and caveats.pdf differ diff --git a/papers/susceptibility to influence of large language models.pdf b/papers/susceptibility to influence of large language models.pdf deleted file mode 100644 index 944db42687574c75a0970bb9625196695f8e4d25..0000000000000000000000000000000000000000 --- a/papers/susceptibility to influence of large language models.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d0b030487c6787a2e4006e89589c6d21046e0edcb797a0a2dc19d8c33b07b20d -size 1320024 diff --git a/papers/sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation.pdf b/papers/sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation.pdf index 8c4ca12e28cbd586f08c552ca785609dbcfe883a..31da22f1fc32deb2c0753a0bfe7084638dc17e7f 100644 Binary files a/papers/sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation.pdf and b/papers/sweeping heterogeneity with smart mops mixture of prompts for llm task adaptation.pdf differ diff --git a/papers/synapse trajectoryasexemplar prompting with memory for computer control.pdf b/papers/synapse trajectoryasexemplar prompting with memory for computer control.pdf index 0512b92f3d67e5656ef0a3ea33c1bb63aced3d82..9d59afb5d46149d45d7b2c7ae27b8ca5c248151b 100644 --- a/papers/synapse trajectoryasexemplar prompting with memory for computer control.pdf +++ b/papers/synapse trajectoryasexemplar prompting with memory for computer control.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1064ed170393ef5183179b0caf4283145ea9539fa3cd1dd39c40b5c568b53fdb -size 1362688 +oid sha256:24a8e95ceb3a90f6c303442f224c109f5abc92d2a9918ee0cf7d2246de781260 +size 1527350 diff --git a/papers/synthetic prompting generating chainofthought demonstrations for large language models.pdf b/papers/synthetic prompting generating chainofthought demonstrations for large language models.pdf index 8a97e657e77d019667483fc4878f46d3a2eb1e6c..5f28acec10f2570a190fbbd56374e4e800f081cb 100644 Binary files a/papers/synthetic prompting generating chainofthought demonstrations for large language models.pdf and b/papers/synthetic prompting generating chainofthought demonstrations for large language models.pdf differ diff --git a/papers/system report for ccl23eval task 9 hust1037 explore proper prompt strategy for llm in mrc task.pdf b/papers/system report for ccl23eval task 9 hust1037 explore proper prompt strategy for llm in mrc task.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8cc97f4662b8a1f0e6c9271c315c5c11695381da --- /dev/null +++ b/papers/system report for ccl23eval task 9 hust1037 explore proper prompt strategy for llm in mrc task.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b848607aabbc9d4016b90e90c079b1bcb60d5853241824c0cefbbd810d3bf370 +size 433500 diff --git a/papers/systematic rectification of language models via deadend analysis.pdf b/papers/systematic rectification of language models via deadend analysis.pdf index b961ee5f0159f09b95b333ffbf3fc84bc463b0fe..67487d739c106a19bd0314c02ecb544bfeebdb0d 100644 Binary files a/papers/systematic rectification of language models via deadend analysis.pdf and b/papers/systematic rectification of language models via deadend analysis.pdf differ diff --git a/papers/tabllm fewshot classification of tabular data with large language models.pdf b/papers/tabllm fewshot classification of tabular data with large language models.pdf deleted file mode 100644 index af81fd1388d52a6ba1ef4128128ccc7e0aeb20dc..0000000000000000000000000000000000000000 Binary files a/papers/tabllm fewshot classification of tabular data with large language models.pdf and /dev/null differ diff --git a/papers/tabprompt graphbased pretraining and prompting for fewshot table understanding.pdf b/papers/tabprompt graphbased pretraining and prompting for fewshot table understanding.pdf new file mode 100644 index 0000000000000000000000000000000000000000..532d3ebfe66b706baf1d53d2974fc2cbae8f3a5e --- /dev/null +++ b/papers/tabprompt graphbased pretraining and prompting for fewshot table understanding.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b57b94666e16107a1e7a189ea2471ee6037b9c1838ae035f6c8521dd1a96e2b9 +size 676566 diff --git a/papers/tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf b/papers/tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf index 7fa4f0d446fe81481182bdc4a875e95e993cd4fd..9eb918fc43ece869a0761c468c09acbed5e33a61 100644 Binary files a/papers/tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf and b/papers/tallrec an effective and efficient tuning framework to align large language model with recommendation.pdf differ diff --git a/papers/tasklevel thinking steps help large language models for challenging classification task.pdf b/papers/tasklevel thinking steps help large language models for challenging classification task.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ab0c9232633fc7b525ef5236029ebe5325a9cd0c --- /dev/null +++ b/papers/tasklevel thinking steps help large language models for challenging classification task.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46917774294217cf365aedddd5ed22fe8a969526bf961ec60dbe5ca223d49106 +size 383508 diff --git a/papers/teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf b/papers/teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf index d8dfc8c07ddeb0e9dde79f60fc90ff15bd2ee79b..248d80d0ac5f293642c24ce8a7726aa1ffea0b66 100644 Binary files a/papers/teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf and b/papers/teler a general taxonomy of llm prompts for benchmarking complex tasks.pdf differ diff --git a/papers/tempera testtime prompting via reinforcement learning.pdf b/papers/tempera testtime prompting via reinforcement learning.pdf index c0f189d6fabfb5de916fe10ad52311e327e2130d..b607921ead42fdfc1fc4d7551dc499f013984704 100644 Binary files a/papers/tempera testtime prompting via reinforcement learning.pdf and b/papers/tempera testtime 
prompting via reinforcement learning.pdf differ diff --git a/papers/templatefree prompt tuning for fewshot ner.pdf b/papers/templatefree prompt tuning for fewshot ner.pdf index 423a7a287c456cefefdf7435e1c22797d518f27e..737d07939e15a0c55c41cf479bd55f0e9fe975b0 100644 Binary files a/papers/templatefree prompt tuning for fewshot ner.pdf and b/papers/templatefree prompt tuning for fewshot ner.pdf differ diff --git a/papers/temporal knowledge graph forecasting without knowledge using incontext learning.pdf b/papers/temporal knowledge graph forecasting without knowledge using incontext learning.pdf index d79483c3f94dab5a28adfd7bba8a2bc757fa1cd3..19f83f8c6d39e240696d768a0564348fee1eec3f 100644 Binary files a/papers/temporal knowledge graph forecasting without knowledge using incontext learning.pdf and b/papers/temporal knowledge graph forecasting without knowledge using incontext learning.pdf differ diff --git a/papers/ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf b/papers/ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf index f0852a23758d65908def8a59c2370a6910322d4e..e03c7b0b8c9f9b5ff49563c3346eb6b3f43f74f4 100644 Binary files a/papers/ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf and b/papers/ten quick tips for harnessing the power of chatgptgpt4 in computational biology.pdf differ diff --git a/papers/terminologyaware translation with constrained decoding and large language model prompting.pdf b/papers/terminologyaware translation with constrained decoding and large language model prompting.pdf index 95db57e4b200933fdc031880d994c891cd25bd4e..fd8a2e118689f9bbf3408c20ce2f3cd1c7595ad3 100644 Binary files a/papers/terminologyaware translation with constrained decoding and large language model prompting.pdf and b/papers/terminologyaware translation with constrained decoding and large language model prompting.pdf differ diff --git a/papers/text classification via large language models.pdf b/papers/text classification via large language models.pdf index b1f95e19e35d4d5297ca5c220e89119700f03c8e..050e500daf5e8da96d34fb92b888904d1c23702a 100644 Binary files a/papers/text classification via large language models.pdf and b/papers/text classification via large language models.pdf differ diff --git a/papers/text2cohort democratizing the nci imaging data commons with natural language cohort discovery.pdf b/papers/text2cohort democratizing the nci imaging data commons with natural language cohort discovery.pdf deleted file mode 100644 index d4d0d56a593c41a56e8163be4249383c0d7c3848..0000000000000000000000000000000000000000 Binary files a/papers/text2cohort democratizing the nci imaging data commons with natural language cohort discovery.pdf and /dev/null differ diff --git a/papers/textbased person search without parallel imagetext data.pdf b/papers/textbased person search without parallel imagetext data.pdf index a822e8b457a928691f5260e01d79a3b798471372..1792845f1b8493593b7e3b38288cd5c1af42f845 100644 --- a/papers/textbased person search without parallel imagetext data.pdf +++ b/papers/textbased person search without parallel imagetext data.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0de9265f4535099093fa4be18f370d0697b92b7ac96817ea10cc4dbded67e527 -size 11260326 +oid sha256:417f9dbc0319eae1cad162e54245459bbcc6a0b9be5fda700a02a62da0e6e5e7 +size 10612716 diff --git a/papers/textbooks are all you need ii phi15 technical report.pdf b/papers/textbooks are all you need ii phi15 
technical report.pdf index 178ab94221140ef1e8faccc0baaa47c7534fdd16..5884d182116a4b0d0c3c6e6156e80c20f4172ce6 100644 Binary files a/papers/textbooks are all you need ii phi15 technical report.pdf and b/papers/textbooks are all you need ii phi15 technical report.pdf differ diff --git a/papers/textgraphs16 natural language premise selection task zeroshot premise selection with prompting generative language models.pdf b/papers/textgraphs16 natural language premise selection task zeroshot premise selection with prompting generative language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5b94fb94388c0a885b4777db34fd4b6278f83239 --- /dev/null +++ b/papers/textgraphs16 natural language premise selection task zeroshot premise selection with prompting generative language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:393be767510d538ad5761f740b1092ff5b933de1bc9bb3d6cbbcee5bf9478fc0 +size 266701 diff --git a/papers/texttosql empowered by large language models a benchmark evaluation.pdf b/papers/texttosql empowered by large language models a benchmark evaluation.pdf index 32059ee4596d3e6cecc6db1ff104bb345d61db6f..cdc6d87d70ac61863db6d2f465ff93f4737d6fdb 100644 --- a/papers/texttosql empowered by large language models a benchmark evaluation.pdf +++ b/papers/texttosql empowered by large language models a benchmark evaluation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0d7322de02ccc7bbb9dffd3399d9eb8fdd884de8a985b8994fc1aa205cbf8589 -size 1868284 +oid sha256:d11c0b10d6eb706f93ec262b00a27b40d36fb4f9bb3f8a061d574a418fe39887 +size 2660613 diff --git a/papers/texttosticker style tailoring latent diffusion models for human expression.pdf b/papers/texttosticker style tailoring latent diffusion models for human expression.pdf new file mode 100644 index 0000000000000000000000000000000000000000..869c1f2e606e076013bd1d67f5e090fbe154ee27 --- /dev/null +++ b/papers/texttosticker style tailoring latent diffusion models for human expression.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5dfd077ddad3e000fb3cdef7d4cb1e6665fbe72073798cbf1e3923727fbd3afe +size 19321472 diff --git a/papers/the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf b/papers/the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf index f49aa5b4ab2aebdaffc05353dfecd145bc7c9707..c3c1c34df6fda21a0732380eb1cfefe7f85719cc 100644 Binary files a/papers/the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf and b/papers/the adaio system at the bea2023 shared task on generating ai teacher responses in educational dialogues.pdf differ diff --git a/papers/the art of socratic questioning recursive thinking with large language models.pdf b/papers/the art of socratic questioning recursive thinking with large language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e9d0dd4b5a4734a5ea583f234aa72851c19463ae --- /dev/null +++ b/papers/the art of socratic questioning recursive thinking with large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cec78c40892379e3196082a8b3f470f868f743bcd68aef4a45179646d57fad9f +size 4875835 diff --git a/papers/the benefits of a concise chain of thought on problemsolving in large language models.pdf b/papers/the benefits of a concise chain of thought on problemsolving in large 
language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3a1ba11a8589e90f15322c6c82f071f3209d7c09 --- /dev/null +++ b/papers/the benefits of a concise chain of thought on problemsolving in large language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e8fef0811ed4bde8d72b9d92c160c9b7d43fb7e227aa404289b6311169ac058 +size 302939 diff --git a/papers/the c4h, tat, hppr and hppd genes prompted engineering of rosmarinic acid biosynthetic pathway in salvia miltiorrhiza hairy root cultures.pdf b/papers/the c4h, tat, hppr and hppd genes prompted engineering of rosmarinic acid biosynthetic pathway in salvia miltiorrhiza hairy root cultures.pdf deleted file mode 100644 index f51dd1e8fa5e729cfe726b23ad6b0871e0f64521..0000000000000000000000000000000000000000 Binary files a/papers/the c4h, tat, hppr and hppd genes prompted engineering of rosmarinic acid biosynthetic pathway in salvia miltiorrhiza hairy root cultures.pdf and /dev/null differ diff --git a/papers/the creativity of textbased generative art.pdf b/papers/the creativity of textbased generative art.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2ca9b2eea5c2c702ed3ed7a4f7cb3a7de1551d8f --- /dev/null +++ b/papers/the creativity of textbased generative art.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8eb8516d7238383b6df7943ec0aa73101652ae5da368f825ca4d640ff14b3d08 +size 5728760 diff --git a/papers/the cultivated practices of texttoimage generation.pdf b/papers/the cultivated practices of texttoimage generation.pdf index 34fd50678c3f4b2428c6a28a63f38e80c7399cf2..495518691a1cdc8381b76a1f2455919f79d4a053 100644 Binary files a/papers/the cultivated practices of texttoimage generation.pdf and b/papers/the cultivated practices of texttoimage generation.pdf differ diff --git a/papers/the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf b/papers/the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf index 81646bb0ce6c6dc3d5f80b72c06a10e9dabbca49..d8e22b6fb843e262c357ff9e11958ebd9b994018 100644 Binary files a/papers/the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf and b/papers/the devil is in the errors leveraging large language models for finegrained machine translation evaluation.pdf differ diff --git a/papers/the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf b/papers/the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf index ade6ae3e3cc7c4074d92ab57b828b452917fbf56..a0c2f6dbab0448fa64e154f5c311e32f69e62ff3 100644 Binary files a/papers/the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf and b/papers/the end of the policy analyst testing the capability of artificial intelligence to generate plausible, persuasive, and useful policy analysis.pdf differ diff --git a/papers/the ethics of interaction mitigating security threats in llms.pdf b/papers/the ethics of interaction mitigating security threats in llms.pdf new file mode 100644 index 0000000000000000000000000000000000000000..51320b38e3684fc1eba2e7ec227d7edb563f3ad2 --- /dev/null +++ b/papers/the ethics of interaction mitigating 
security threats in llms.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aeae2321df1bc14a9ecf763a6571ac17a131cd2fc391a5473b232bbe900bb90b +size 243755 diff --git a/papers/the formai dataset generative ai in software security through the lens of formal verification.pdf b/papers/the formai dataset generative ai in software security through the lens of formal verification.pdf index c2340710aface6e53d6cce900f2ab1df7f32dd55..a81e14f50ffe471a411fc5ed4d398b676015584e 100644 Binary files a/papers/the formai dataset generative ai in software security through the lens of formal verification.pdf and b/papers/the formai dataset generative ai in software security through the lens of formal verification.pdf differ diff --git a/papers/the impact of symbolic representations on incontext learning for fewshot reasoning.pdf b/papers/the impact of symbolic representations on incontext learning for fewshot reasoning.pdf index 708f731c5d5e2e8fbb007be333a7b74ed214ea6b..c7bf213d5ae28e1d94f1d9a1387164698485a17d 100644 Binary files a/papers/the impact of symbolic representations on incontext learning for fewshot reasoning.pdf and b/papers/the impact of symbolic representations on incontext learning for fewshot reasoning.pdf differ diff --git a/papers/the inductive bias of incontext learning rethinking pretraining example design.pdf b/papers/the inductive bias of incontext learning rethinking pretraining example design.pdf index ce2cf0514aeda78e6788e40a3650a21e83a66815..d746ec4470cdb21f143e585accd3399d90bdf025 100644 Binary files a/papers/the inductive bias of incontext learning rethinking pretraining example design.pdf and b/papers/the inductive bias of incontext learning rethinking pretraining example design.pdf differ diff --git a/papers/the learnability of incontext learning.pdf b/papers/the learnability of incontext learning.pdf index af08fb6427b6a543cd2cb27abaf8ee7695bdd691..e9672732040873fbcc2071379d67dbb9a18e3e2d 100644 Binary files a/papers/the learnability of incontext learning.pdf and b/papers/the learnability of incontext learning.pdf differ diff --git a/papers/the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf b/papers/the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf index 82d0a130c07b2b18ee3d749cecf5c4728388c9b4..f0486b4c744a49d8257a0a7c7e8fa73439c1a6f1 100644 Binary files a/papers/the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf and b/papers/the limits of chatgpt in extracting aspectcategoryopinionsentiment quadruples a comparative analysis.pdf differ diff --git a/papers/the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf b/papers/the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf index d4678f94395a4a76230a82177808d0b104463f49..a68957285362190f2d0b4f2b0a05ea04d71408be 100644 Binary files a/papers/the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf and b/papers/the mystery and fascination of llms a comprehensive survey on the interpretation and analysis of emergent abilities.pdf differ diff --git a/papers/the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf b/papers/the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a 
clinical assistant.pdf index 51803b6e903167dac4aa602c22d35bce083aac8d..b6bf98a43763bdde8c280975d41532d9c3dd180b 100644 Binary files a/papers/the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf and b/papers/the potential and pitfalls of using a large language model such as chatgpt or gpt4 as a clinical assistant.pdf differ diff --git a/papers/the scope of incontext learning for the extraction of medical temporal constraints.pdf b/papers/the scope of incontext learning for the extraction of medical temporal constraints.pdf index 8b1d2fa8a7d020d81c2c9fa6de95145486f38476..bf664f0279f050ce138444c1ced5441e7710a485 100644 Binary files a/papers/the scope of incontext learning for the extraction of medical temporal constraints.pdf and b/papers/the scope of incontext learning for the extraction of medical temporal constraints.pdf differ diff --git a/papers/the student becomes the master matching gpt3 on scientific factual error correction.pdf b/papers/the student becomes the master matching gpt3 on scientific factual error correction.pdf index f6c81ed8c5881215ef44566c81a2bbc46104a515..7d3ef91ae0ab1dec948e82a94a78bfbfef1d088f 100644 Binary files a/papers/the student becomes the master matching gpt3 on scientific factual error correction.pdf and b/papers/the student becomes the master matching gpt3 on scientific factual error correction.pdf differ diff --git a/papers/the student becomes the master outperforming gpt3 on scientific factual error correction.pdf b/papers/the student becomes the master outperforming gpt3 on scientific factual error correction.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ec4ae5355bfb6c20fb579e74fe886fd17de9c559 --- /dev/null +++ b/papers/the student becomes the master outperforming gpt3 on scientific factual error correction.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ce63e6703aa3d03df39eeb3e2e49fafe50722bd807acd2f381be29609360c52 +size 415829 diff --git a/papers/the transformative influence of large language models on software development.pdf b/papers/the transformative influence of large language models on software development.pdf new file mode 100644 index 0000000000000000000000000000000000000000..057c7c6599a7ef06c68f35f859a6cfa5d8adeeb4 --- /dev/null +++ b/papers/the transformative influence of large language models on software development.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24229a5b2edfcffd918a6949271c946ab2cd65227c23768af19c2065bb6e195f +size 257497 diff --git a/papers/the unreasonable effectiveness of fewshot learning for machine translation.pdf b/papers/the unreasonable effectiveness of fewshot learning for machine translation.pdf index 2024420a593a8436912642976832375d633ef159..f41ca5450800f95a6a186791c2f5c8aef407844d 100644 Binary files a/papers/the unreasonable effectiveness of fewshot learning for machine translation.pdf and b/papers/the unreasonable effectiveness of fewshot learning for machine translation.pdf differ diff --git a/papers/the unreliability of explanations in fewshot incontext learning.pdf b/papers/the unreliability of explanations in fewshot incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0eb4e118b2cb29df6dba67574a295b3e9be0fd30 --- /dev/null +++ b/papers/the unreliability of explanations in fewshot incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:d6086d576bc7713a95f32c2f7e016d1f218bc73dcf34a904ad466adff14141d0 +size 500620 diff --git a/papers/the unreliability of explanations in fewshot prompting for textual reasoning.pdf b/papers/the unreliability of explanations in fewshot prompting for textual reasoning.pdf index 8f0543b454924f2d71f6473f5f1c1f8567cfe7cf..0eb4e118b2cb29df6dba67574a295b3e9be0fd30 100644 Binary files a/papers/the unreliability of explanations in fewshot prompting for textual reasoning.pdf and b/papers/the unreliability of explanations in fewshot prompting for textual reasoning.pdf differ diff --git a/papers/the utility of large language models and generative ai for education research.pdf b/papers/the utility of large language models and generative ai for education research.pdf index f393d6be9600f6d8d999eb638c434ba46f9b38f0..84700500c47adfeb6afad69949ab81d36ea5fb06 100644 Binary files a/papers/the utility of large language models and generative ai for education research.pdf and b/papers/the utility of large language models and generative ai for education research.pdf differ diff --git a/papers/think before you speak cultivating communication skills of large language models via inner monologue.pdf b/papers/think before you speak cultivating communication skills of large language models via inner monologue.pdf index f8022f578a02f266ecc5f3d5ad6c1d1ea3dba944..18f0d92ed7c977f011f00cb64a51289db436739f 100644 Binary files a/papers/think before you speak cultivating communication skills of large language models via inner monologue.pdf and b/papers/think before you speak cultivating communication skills of large language models via inner monologue.pdf differ diff --git a/papers/tiam a metric for evaluating alignment in texttoimage generation.pdf b/papers/tiam a metric for evaluating alignment in texttoimage generation.pdf index a8a3954e640fa02a9ce11faff8809602e6eab16d..a785486c80518a8adff5e8ffaec55925fc7ce6b0 100644 --- a/papers/tiam a metric for evaluating alignment in texttoimage generation.pdf +++ b/papers/tiam a metric for evaluating alignment in texttoimage generation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2eae723cc1bcd3b5601d0cbff3aede4f69d9e2d57777c60c631ae21b17505031 -size 9374976 +oid sha256:e758cbde6557643743051e687b128ab1be4f6daa06d440cb19a9423f06ce8788 +size 10959350 diff --git a/papers/time travel in llms tracing data contamination in large language models.pdf b/papers/time travel in llms tracing data contamination in large language models.pdf index 868c6e63455bd2d88d3f934060f630533f2bde42..8f81213a0271dbe2d38d19063bc9b6a743d1b306 100644 Binary files a/papers/time travel in llms tracing data contamination in large language models.pdf and b/papers/time travel in llms tracing data contamination in large language models.pdf differ diff --git a/papers/toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf b/papers/toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf index 60f8c75108c3175a3b787117279e5b82e91c5203..b9af0e77c70e16af9f21789a18bf60782f6440c3 100644 Binary files a/papers/toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf and b/papers/toolkengpt augmenting frozen language models with massive tools via tool embeddings.pdf differ diff --git a/papers/topologies of reasoning demystifying chains, trees, and graphs of thoughts.pdf b/papers/topologies of reasoning demystifying chains, trees, and graphs of thoughts.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..18f3a48fa27f9c68c9a70c00d85ff6ade881d03c --- /dev/null +++ b/papers/topologies of reasoning demystifying chains, trees, and graphs of thoughts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:689e91beb7f63449fac177b836362895f389326b5dec8effde6c1b50dee7d886 +size 3261455 diff --git a/papers/toward reproducing network research results using large language models.pdf b/papers/toward reproducing network research results using large language models.pdf index 837597640a3d4dfca92a5492d4845579a5d0a8cb..00d5712e178422d1005a4923e2752b18b4b01717 100644 Binary files a/papers/toward reproducing network research results using large language models.pdf and b/papers/toward reproducing network research results using large language models.pdf differ diff --git a/papers/toward unified controllable text generation via regular expression instruction.pdf b/papers/toward unified controllable text generation via regular expression instruction.pdf index eed1fdade54efa190e61f726ebf2b0383c418bf2..0cd66ce00efe3bd590ad0e183480bf69c7c64012 100644 Binary files a/papers/toward unified controllable text generation via regular expression instruction.pdf and b/papers/toward unified controllable text generation via regular expression instruction.pdf differ diff --git a/papers/towards agile text classifiers for everyone.pdf b/papers/towards agile text classifiers for everyone.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d3655eaeb3aa7662bbaa6724eb04b354ba3128ee --- /dev/null +++ b/papers/towards agile text classifiers for everyone.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb30ed1cba9a0870993d620f0c2cc02d3757a3a489a8c514dfe66143f1413a76 +size 573266 diff --git a/papers/towards answering openended ethical quandary questions.pdf b/papers/towards answering openended ethical quandary questions.pdf index bc52146437825a9d156085687a0696cd1a206ea5..9aad737371f005305988c09e689379a1047c114c 100644 Binary files a/papers/towards answering openended ethical quandary questions.pdf and b/papers/towards answering openended ethical quandary questions.pdf differ diff --git a/papers/towards effective disambiguation for machine translation with large language models.pdf b/papers/towards effective disambiguation for machine translation with large language models.pdf index 0f75d852d09f1202ee2afc2de811ccf31dc2a1b6..8b7f9ce56edef3f8b93e8dae9728e69256214366 100644 Binary files a/papers/towards effective disambiguation for machine translation with large language models.pdf and b/papers/towards effective disambiguation for machine translation with large language models.pdf differ diff --git a/papers/towards explainable conversational recommender systems.pdf b/papers/towards explainable conversational recommender systems.pdf index 6424ed09ced9d0a82af7d6e6c42703a797c4c6b1..547774c4ae0809b03a829f60884380f8daace109 100644 Binary files a/papers/towards explainable conversational recommender systems.pdf and b/papers/towards explainable conversational recommender systems.pdf differ diff --git a/papers/towards fewshot identification of morality frames using incontext learning.pdf b/papers/towards fewshot identification of morality frames using incontext learning.pdf index 1a7439a43a2110d0fef510e1b825d28bc08d9572..4b0815d8179d37b6cc5299754ccdb2994f6ca3af 100644 Binary files a/papers/towards fewshot identification of morality frames using incontext learning.pdf and b/papers/towards fewshot identification of morality frames using 
incontext learning.pdf differ diff --git a/papers/towards goaloriented large language model prompting a survey.pdf b/papers/towards goaloriented large language model prompting a survey.pdf new file mode 100644 index 0000000000000000000000000000000000000000..32948d2c2e7d5b634d1c77741a14c85b057aa0a6 --- /dev/null +++ b/papers/towards goaloriented large language model prompting a survey.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5efa450be3c0571c4d4680cb19194b2fb044b229347e0d6c7c834e46cec79dc6 +size 320733 diff --git a/papers/towards informative fewshot prompt with maximum information gain for incontext learning.pdf b/papers/towards informative fewshot prompt with maximum information gain for incontext learning.pdf index f44160fb17c0abccb9bb1562b6274a7311a7e641..5974b7a3a1a91e3511917dd8536883fb19ecc2ac 100644 Binary files a/papers/towards informative fewshot prompt with maximum information gain for incontext learning.pdf and b/papers/towards informative fewshot prompt with maximum information gain for incontext learning.pdf differ diff --git a/papers/towards interpretable mental health analysis with large language models.pdf b/papers/towards interpretable mental health analysis with large language models.pdf index fbe66e9219de8d48259d3a8e431043ff964b2d88..e5563c8f3d82732630a3749cdadf0622c4ba8f2e 100644 --- a/papers/towards interpretable mental health analysis with large language models.pdf +++ b/papers/towards interpretable mental health analysis with large language models.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f10bcf2bb010840698d952b7faf25eb361bf4b3dbda1c2b2343ee25859588347 -size 1108894 +oid sha256:4b1166d0e39652e35449a0530b487165b7ee97ce0c1ff67b3194d4fc84983121 +size 1024332 diff --git a/papers/towards legally enforceable hate speech detection for public forums.pdf b/papers/towards legally enforceable hate speech detection for public forums.pdf index f78c47922af30a5c214aff19c22d188327dac788..ee57a0c74dff878773500bba441e19531f906175 100644 Binary files a/papers/towards legally enforceable hate speech detection for public forums.pdf and b/papers/towards legally enforceable hate speech detection for public forums.pdf differ diff --git a/papers/towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf b/papers/towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf index 1e4495d65e2a11b358b91726d00d5460ff64bb79..a1f0911b229cf6cb30938ba0e679b88f12f63a50 100644 Binary files a/papers/towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf and b/papers/towards llmbased fact verification on news claims with a hierarchical stepbystep prompting method.pdf differ diff --git a/papers/towards making the most of chatgpt for machine translation.pdf b/papers/towards making the most of chatgpt for machine translation.pdf index 8c9f9238b6428bf683dae34c385972b9ccc2beb1..02632f649bf487a215905dd1d06301424b8c32af 100644 Binary files a/papers/towards making the most of chatgpt for machine translation.pdf and b/papers/towards making the most of chatgpt for machine translation.pdf differ diff --git a/papers/towards realistic zeroshot classification via self structural semantic alignment.pdf b/papers/towards realistic zeroshot classification via self structural semantic alignment.pdf deleted file mode 100644 index 2940df5b6b666ae848eafcb638d112e92cbeeecb..0000000000000000000000000000000000000000 --- a/papers/towards 
realistic zeroshot classification via self structural semantic alignment.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7eaddfe3bf8bd8b246cbf9cc3e6ad1665ae1e00c9faabc507d1541d20c6f84c4 -size 32381449 diff --git a/papers/towards understanding incontext learning with contrastive demonstrations and saliency maps.pdf b/papers/towards understanding incontext learning with contrastive demonstrations and saliency maps.pdf index e5dcb6cf43540c86a02bb615dffb5c6d946a9c2c..69d0164b0e1ea446483586755679b8d0ea77b355 100644 --- a/papers/towards understanding incontext learning with contrastive demonstrations and saliency maps.pdf +++ b/papers/towards understanding incontext learning with contrastive demonstrations and saliency maps.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2487c7e933804594fae7cdc8fa698ed58a3ab50c76ec7227d79f595f4ea82797 -size 1917432 +oid sha256:f6eae04fb957d666865d6396cf55b948cdb13736d2815f21c842a77347b69d3f +size 1917436 diff --git a/papers/towards unified prompt tuning for fewshot text classification.pdf b/papers/towards unified prompt tuning for fewshot text classification.pdf index 0d850e52f5606c95e806500aa4b14e1d50d9b81f..5b4b8edcc8ecb4217e4f3ca5db570a1cdefa93c7 100644 Binary files a/papers/towards unified prompt tuning for fewshot text classification.pdf and b/papers/towards unified prompt tuning for fewshot text classification.pdf differ diff --git a/papers/towards using fewshot prompt learning for automating model completion.pdf b/papers/towards using fewshot prompt learning for automating model completion.pdf index 543a1041909b61ac03673cae5ed00ca1d257e087..85d70e8374196ab04fd5cd80b7827a9d4074624a 100644 Binary files a/papers/towards using fewshot prompt learning for automating model completion.pdf and b/papers/towards using fewshot prompt learning for automating model completion.pdf differ diff --git a/papers/towards zerolabel language learning.pdf b/papers/towards zerolabel language learning.pdf index 80f7ad33bf89f564f0f00b252a1ea93059ad3cd4..8e98ad9626ed938900aea271e8bc8c7d9052f494 100644 Binary files a/papers/towards zerolabel language learning.pdf and b/papers/towards zerolabel language learning.pdf differ diff --git a/papers/towards zeroshot and fewshot table question answering using gpt3.pdf b/papers/towards zeroshot and fewshot table question answering using gpt3.pdf index 8d71052235168f9fc0bc0e2284ef71e3fa6970d9..422838c26ac076fddc8d4d52ee43a5d85d44b164 100644 Binary files a/papers/towards zeroshot and fewshot table question answering using gpt3.pdf and b/papers/towards zeroshot and fewshot table question answering using gpt3.pdf differ diff --git a/papers/towards zeroshot persona dialogue generation with incontext learning.pdf b/papers/towards zeroshot persona dialogue generation with incontext learning.pdf new file mode 100644 index 0000000000000000000000000000000000000000..22a77aab2e47bfa92098b66264363e02e1a3e24c --- /dev/null +++ b/papers/towards zeroshot persona dialogue generation with incontext learning.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e15b1e1a385e852c3b04e2f7400351cfefc19c047f5e894d3f7d35ce0e19b6d2 +size 809966 diff --git a/papers/toxicity detection with generative promptbased inference.pdf b/papers/toxicity detection with generative promptbased inference.pdf index 731536b0ef67f1147b30a594d4c04e66fbab8955..a53d2686cf57b9ad834f54ac4c6faeb4b444a862 100644 Binary files a/papers/toxicity detection with generative promptbased inference.pdf and 
b/papers/toxicity detection with generative promptbased inference.pdf differ diff --git a/papers/trained transformers learn linear models incontext.pdf b/papers/trained transformers learn linear models incontext.pdf index 0941e75ed20f7d59368eb9bc0df89b4baba53970..c0e9ce1ea0e0bbdd43bb5398c0f86b77669c3cc2 100644 Binary files a/papers/trained transformers learn linear models incontext.pdf and b/papers/trained transformers learn linear models incontext.pdf differ diff --git a/papers/tram benchmarking temporal reasoning for large language models.pdf b/papers/tram benchmarking temporal reasoning for large language models.pdf index 15d169f0e421b23975a132336cd9c902f603d7a6..4768ca62912f76f5b20c434db5844fc281c8102b 100644 Binary files a/papers/tram benchmarking temporal reasoning for large language models.pdf and b/papers/tram benchmarking temporal reasoning for large language models.pdf differ diff --git a/papers/transfer learning for power outage detection task with limited training data.pdf b/papers/transfer learning for power outage detection task with limited training data.pdf index d4bfb031758ab4431cc6beed2d3d4ad0dee8d8f8..3aac26ca1d7ae82d8d706665590d7eb10eb6f4dd 100644 Binary files a/papers/transfer learning for power outage detection task with limited training data.pdf and b/papers/transfer learning for power outage detection task with limited training data.pdf differ diff --git a/papers/transferring procedural knowledge across commonsense tasks.pdf b/papers/transferring procedural knowledge across commonsense tasks.pdf index dc7ab1b51815edf3a3f96140a041baaf4f63e9b9..5b6c1d04ce59fb73214736ecb3c9eee3d22dc523 100644 Binary files a/papers/transferring procedural knowledge across commonsense tasks.pdf and b/papers/transferring procedural knowledge across commonsense tasks.pdf differ diff --git a/papers/transformers are efficient incontext estimators for wireless communication.pdf b/papers/transformers are efficient incontext estimators for wireless communication.pdf index 89b6e1c835fd96d521b55325ee0ad0b7a80cbf53..b80fb7740b742d2bf68e4e36e6f74f358e1abb00 100644 Binary files a/papers/transformers are efficient incontext estimators for wireless communication.pdf and b/papers/transformers are efficient incontext estimators for wireless communication.pdf differ diff --git a/papers/transformers generalize differently from information stored in context vs in weights.pdf b/papers/transformers generalize differently from information stored in context vs in weights.pdf index c77c54c4431b3d551b9bd6c28e1b431d863dbb42..7bc745e74717e8015cc0b7591e9553bc2fe68d7e 100644 Binary files a/papers/transformers generalize differently from information stored in context vs in weights.pdf and b/papers/transformers generalize differently from information stored in context vs in weights.pdf differ diff --git a/papers/tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf b/papers/tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf index f3e1e5f022a6c657205b5699ea1c0a008563bbe3..c996d7ce3d26467a9baff3a82823ad65c0fe7034 100644 Binary files a/papers/tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf and b/papers/tree of clarifications answering ambiguous questions with retrievalaugmented large language models.pdf differ diff --git a/papers/trojansql sql injection against natural language interface to database.pdf b/papers/trojansql sql injection against natural language 
interface to database.pdf new file mode 100644 index 0000000000000000000000000000000000000000..eddefa39949016a07a062b4e2d023d70bb5d0d36 --- /dev/null +++ b/papers/trojansql sql injection against natural language interface to database.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a18b700604881d4e5a11ac58b2251fa08977451c4abbc9ac1eb3181057502adc +size 844988 diff --git a/papers/true fewshot learning with language models.pdf b/papers/true fewshot learning with language models.pdf index a0ee5199ac8b9da398bc52304dd24ca9070ba8d1..6fad72ab244b36a89cb535c837344786373ed4aa 100644 Binary files a/papers/true fewshot learning with language models.pdf and b/papers/true fewshot learning with language models.pdf differ diff --git a/papers/true fewshot learning with prompts a realworld perspective.pdf b/papers/true fewshot learning with prompts a realworld perspective.pdf index 71d3d285fbaf54bbe8b5d4868138491948930b19..7f18c070ed77040d30c3b72b5f1f982ed82d23ce 100644 Binary files a/papers/true fewshot learning with prompts a realworld perspective.pdf and b/papers/true fewshot learning with prompts a realworld perspective.pdf differ diff --git "a/papers/true fewshot learning with prompts\342\200\224a realworld perspective.pdf" "b/papers/true fewshot learning with prompts\342\200\224a realworld perspective.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..9d39a7240aa0d0f1c101d0959b684d2c83bc4651 --- /dev/null +++ "b/papers/true fewshot learning with prompts\342\200\224a realworld perspective.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcf5bebc28481a1c3d24d969800c4becc28e7bfdfaa1fb716e90106d98c21ffa +size 509845 diff --git a/papers/two timin' repairing smart contracts with a twolayered approach.pdf b/papers/two timin' repairing smart contracts with a twolayered approach.pdf index 27417d8ed56fc6ca3586ce4dc552d4f31ed378db..e7ab3e54e92614942e827a4593e34c47a9961629 100644 Binary files a/papers/two timin' repairing smart contracts with a twolayered approach.pdf and b/papers/two timin' repairing smart contracts with a twolayered approach.pdf differ diff --git a/papers/typefly flying drones with large language model.pdf b/papers/typefly flying drones with large language model.pdf new file mode 100644 index 0000000000000000000000000000000000000000..886eddfcb52b4e09f2e52caae891803ca33eb930 --- /dev/null +++ b/papers/typefly flying drones with large language model.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5d2c72e37bab0a638e4fa96356e71f80479fa02956607a841bec9db88bf8366 +size 11945477 diff --git a/papers/udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf b/papers/udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf index fbd643b1881c6fe069a2ccb449492dafac110b58..766acc0ef702faa7081fe2b381a84f857561d23b 100644 Binary files a/papers/udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf and b/papers/udapdr unsupervised domain adaptation via llm prompting and distillation of rerankers.pdf differ diff --git a/papers/ul2 unifying language learning paradigms.pdf b/papers/ul2 unifying language learning paradigms.pdf index 8320f58c7cc74aada32875a29d58fff8105cf0b7..48b5a1ed89fd3d7b25602fee6a42ad34b7b95261 100644 Binary files a/papers/ul2 unifying language learning paradigms.pdf and b/papers/ul2 unifying language learning paradigms.pdf differ diff --git a/papers/understanding demonstrationbased learning 
from a causal perspective.pdf b/papers/understanding demonstrationbased learning from a causal perspective.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ebb4f47177f70dcef56996dc57693ff6d15d6837 --- /dev/null +++ b/papers/understanding demonstrationbased learning from a causal perspective.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d71b4e173805868d4ef3226008a772353939fbfe7fbbfa1a1e7ffac4b907f75 +size 537696 diff --git a/papers/understanding how model size affects fewshot instruction prompting.pdf b/papers/understanding how model size affects fewshot instruction prompting.pdf index 186fdfe835adb54a046e5ee6afcf33d6bc504642..22dbc7743569dde1689bfaf19fab2db4b5a07df9 100644 Binary files a/papers/understanding how model size affects fewshot instruction prompting.pdf and b/papers/understanding how model size affects fewshot instruction prompting.pdf differ diff --git a/papers/understanding incontext learning via supportive pretraining data.pdf b/papers/understanding incontext learning via supportive pretraining data.pdf index 912a023f9413756bf68eb0198ffe64dcec9caabc..617c4a49d335a2a3d153f794c3c05d50e85bc8af 100644 Binary files a/papers/understanding incontext learning via supportive pretraining data.pdf and b/papers/understanding incontext learning via supportive pretraining data.pdf differ diff --git a/papers/understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf b/papers/understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf index 1c48a6448d996140e124e47cff17983941ec389b..e991f2e3e874c80e4d33cabc13b90b41218f6f9a 100644 Binary files a/papers/understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf and b/papers/understanding stereotypes in language models towards robust measurement and zeroshot debiasing.pdf differ diff --git a/papers/understanding the effectiveness of very large language models on dialog evaluation.pdf b/papers/understanding the effectiveness of very large language models on dialog evaluation.pdf index a2f2b1d4e184c3947c49a952c884413fa6619009..5107010893af8ccaf6cb33f7284458ec22bd092c 100644 Binary files a/papers/understanding the effectiveness of very large language models on dialog evaluation.pdf and b/papers/understanding the effectiveness of very large language models on dialog evaluation.pdf differ diff --git a/papers/understanding users' dissatisfaction with chatgpt responses types, resolving tactics, and the effect of knowledge level.pdf b/papers/understanding users' dissatisfaction with chatgpt responses types, resolving tactics, and the effect of knowledge level.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5ba3bc1ff298e91b0a2e28770db7364636568efe --- /dev/null +++ b/papers/understanding users' dissatisfaction with chatgpt responses types, resolving tactics, and the effect of knowledge level.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffed039d852b334b697a18ebf9a79d5d46ddd5edfeaa22523d7ac6b100eb0764 +size 2356225 diff --git a/papers/unidcp unifying multiple medical visionlanguage tasks via dynamic crossmodal learnable prompts.pdf b/papers/unidcp unifying multiple medical visionlanguage tasks via dynamic crossmodal learnable prompts.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6dce3575c9a14940ec36d180e22c30deec9e1423 --- /dev/null +++ b/papers/unidcp unifying multiple medical visionlanguage tasks via 
dynamic crossmodal learnable prompts.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ffb9f6eb8fc7216c4b27cd173f5da6347965ebb3a8d6080ee2e2c4cdbc3b1a4 +size 2490206 diff --git a/papers/unified demonstration retriever for incontext learning.pdf b/papers/unified demonstration retriever for incontext learning.pdf index 1d03a2c1cb69e1489df18338055e7f78fbfe6f9c..a43d0c327a78e78a514342ec0a7c27751bb22f3b 100644 Binary files a/papers/unified demonstration retriever for incontext learning.pdf and b/papers/unified demonstration retriever for incontext learning.pdf differ diff --git a/papers/unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf b/papers/unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf index ad1d8cd0559206f521f805295683732d28f079de..2f3db4d5a237dd9b0fd19998fe7d4352d1d85b3b 100644 Binary files a/papers/unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf and b/papers/unified lowresource sequence labeling by sampleaware dynamic sparse finetuning.pdf differ diff --git a/papers/unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf b/papers/unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf index fcc7bb64d212731f4ef8991efcac6b1cbda4870d..08b326700f1b5a3e15cdf08825ff3e6706b50b2f 100644 Binary files a/papers/unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf and b/papers/unihd at tsar2022 shared task is compute all we need for lexical simplification.pdf differ diff --git a/papers/unipredict large language models are universal tabular predictors.pdf b/papers/unipredict large language models are universal tabular predictors.pdf deleted file mode 100644 index 3de9b40e92695ca9cbc5aff0dd3851d10e5299c0..0000000000000000000000000000000000000000 --- a/papers/unipredict large language models are universal tabular predictors.pdf +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e8db3dad89d4dfcd99680166f9205de0d2101a4cfdc5141470acb01a76e702fa -size 1676792 diff --git a/papers/unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf b/papers/unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf index 7c9b541d32317fc61d386c6ca945d13909cd3a95..f414ae921f3c7f2efa1e1383aabac69e471c4ea4 100644 Binary files a/papers/unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf and b/papers/unleashing the creative mind language model as hierarchical policy for improved exploration on challenging problem solving.pdf differ diff --git a/papers/unleashing the potential of prompt engineering in large language models a comprehensive review.pdf b/papers/unleashing the potential of prompt engineering in large language models a comprehensive review.pdf index 58a76be99c081b7e0e2c5ff793f3c979fd7695c9..30997df05786c9cb6b3b3a682526255372ae2bfc 100644 Binary files a/papers/unleashing the potential of prompt engineering in large language models a comprehensive review.pdf and b/papers/unleashing the potential of prompt engineering in large language models a comprehensive review.pdf differ diff --git a/papers/unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf 
b/papers/unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf index 384ff6d0b4fa7f7f9af34e3c25e6e532406a31f8..00284e3434badc952861d03140e15076fe99228e 100644 Binary files a/papers/unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf and b/papers/unlocking the potential of chatgpt a comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing.pdf differ diff --git a/papers/unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf b/papers/unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf index 70f42a887890803398da79aaebb26b3adbc7151d..4a548f611c7aa172ec7da2e3622efc2b6114702d 100644 Binary files a/papers/unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf and b/papers/unraveling chatgpt a critical analysis of aigenerated goaloriented dialogues and annotations.pdf differ diff --git a/papers/unsupervised contrastconsistent ranking with language models.pdf b/papers/unsupervised contrastconsistent ranking with language models.pdf index 00cf2129908af7fbb4f9d42c3b8bceea2858c7e0..4cdc260c62fdf526f5374dc7736395c625f0c538 100644 Binary files a/papers/unsupervised contrastconsistent ranking with language models.pdf and b/papers/unsupervised contrastconsistent ranking with language models.pdf differ diff --git a/papers/unsupervised human activity recognition through twostage prompting with chatgpt.pdf b/papers/unsupervised human activity recognition through twostage prompting with chatgpt.pdf index 9faa74de6ca14b3f09ac6e56ad369316db957c8c..d8efd2eb0a34050f58fb3968368e2fc3542916b5 100644 Binary files a/papers/unsupervised human activity recognition through twostage prompting with chatgpt.pdf and b/papers/unsupervised human activity recognition through twostage prompting with chatgpt.pdf differ diff --git a/papers/unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf b/papers/unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf index ef757e69682f50a467b35ee827ff603d7bb017af..8638aafee471576cdfce71554686489329625063 100644 Binary files a/papers/unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf and b/papers/unveiling the potential of large language models in generating semantic and crosslanguage clones.pdf differ diff --git a/papers/upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf b/papers/upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf index c00a588b4972d1335031152192d3cefeba0eb9d9..9348ac84000e6fcdb2d94fe656753655fb380caa 100644 Binary files a/papers/upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf and b/papers/upar a kantianinspired prompting framework for enhancing large language model capabilities.pdf differ diff --git a/papers/uprise universal prompt retrieval for improving zeroshot evaluation.pdf b/papers/uprise universal prompt retrieval for improving zeroshot evaluation.pdf index 00bd70407ebb769de0e8ba395fd0d76fedd50982..5426a8aa78a7b569137cbef51b2366454931d415 100644 --- a/papers/uprise universal prompt 
retrieval for improving zeroshot evaluation.pdf +++ b/papers/uprise universal prompt retrieval for improving zeroshot evaluation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e85e16d9cf51922ca38a4e0b9cf4223d366e4c633b0656390bfaf54f4fe76996 -size 1384767 +oid sha256:f8dbac766b4ed40ddf4aade9d17fe44206eeae6a7a6b295007342af87dc9f3e2 +size 1418611 diff --git a/papers/usb a unified summarization benchmark across tasks and domains.pdf b/papers/usb a unified summarization benchmark across tasks and domains.pdf index 91530580a1189ac304c3ec83c8d9ca94e706b666..d14ae26d1743ff8f8f7ebfa4d51e6fc2f23c9e90 100644 --- a/papers/usb a unified summarization benchmark across tasks and domains.pdf +++ b/papers/usb a unified summarization benchmark across tasks and domains.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:c5605d409dde2d6f0f72024ddb19adf5ff07faf7e65d30390ea1ae6b880aac76 -size 1756720 +oid sha256:4a93cd0fdf210c66c2cccd23f11f75fd51ed59bb1151162194970f687fe77a4e +size 2318389 diff --git a/papers/user simulation with large language models for evaluating taskoriented dialogue.pdf b/papers/user simulation with large language models for evaluating taskoriented dialogue.pdf index b648fdafdb1c90cde408530ed1c664b8d2a0e95e..a3567f0cf02358609ac4bd06ce8b16327866bcce 100644 Binary files a/papers/user simulation with large language models for evaluating taskoriented dialogue.pdf and b/papers/user simulation with large language models for evaluating taskoriented dialogue.pdf differ diff --git a/papers/using chatgpt for entity matching.pdf b/papers/using chatgpt for entity matching.pdf index dd5218967662ce05068c9e0e0c39fb7e2de7ed0a..32a7616b4b6fb5a7716ee5bd325abb2de20d52ea 100644 Binary files a/papers/using chatgpt for entity matching.pdf and b/papers/using chatgpt for entity matching.pdf differ diff --git a/papers/using incontext learning to improve dialogue safety.pdf b/papers/using incontext learning to improve dialogue safety.pdf index 0bc8e1601c27fe860c8db1143196784ebc010fc0..32dc99666571dcca3075441e54d62d6c82ecbe35 100644 Binary files a/papers/using incontext learning to improve dialogue safety.pdf and b/papers/using incontext learning to improve dialogue safety.pdf differ diff --git a/papers/using large language models for cybersecurity capturetheflag challenges and certification questions.pdf b/papers/using large language models for cybersecurity capturetheflag challenges and certification questions.pdf index 53f392e326871a3031507145d5c8166813e2841c..f99d4130a8074311da60f67949f2531122965fa2 100644 Binary files a/papers/using large language models for cybersecurity capturetheflag challenges and certification questions.pdf and b/papers/using large language models for cybersecurity capturetheflag challenges and certification questions.pdf differ diff --git a/papers/using large language models to generate engaging captions for data visualizations.pdf b/papers/using large language models to generate engaging captions for data visualizations.pdf index 106163632dde18e19466fe18a168c1fb2ce269ce..771d0b441cbcf750007c2b1e36b2766404f88c42 100644 Binary files a/papers/using large language models to generate engaging captions for data visualizations.pdf and b/papers/using large language models to generate engaging captions for data visualizations.pdf differ diff --git a/papers/using natural language explanations to improve robustness of incontext learning for natural language inference.pdf b/papers/using natural language explanations to improve robustness of 
incontext learning for natural language inference.pdf deleted file mode 100644 index 4b596e6107c06ae0189fc48f566cc8834d4dad56..0000000000000000000000000000000000000000 Binary files a/papers/using natural language explanations to improve robustness of incontext learning for natural language inference.pdf and /dev/null differ diff --git a/papers/utilizing language models for energy load forecasting.pdf b/papers/utilizing language models for energy load forecasting.pdf index 87cd13f22a3b18f06907e5abdb904a18201bdcc2..2d109c079772e617ac031b2a8553bebe80d98ec2 100644 Binary files a/papers/utilizing language models for energy load forecasting.pdf and b/papers/utilizing language models for energy load forecasting.pdf differ diff --git a/papers/utilizing language models to expand visionbased commonsense knowledge graphs.pdf b/papers/utilizing language models to expand visionbased commonsense knowledge graphs.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0794b8c5ff3e811d031900a0235f21c195d04fee --- /dev/null +++ b/papers/utilizing language models to expand visionbased commonsense knowledge graphs.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db9b220aa3c45df4d17784b22d36f7b8b0ee3387627d0184532746f178fdbbd6 +size 456078 diff --git a/papers/vist5 an adaptive, retrievalaugmented language model for visualizationoriented dialog.pdf b/papers/vist5 an adaptive, retrievalaugmented language model for visualizationoriented dialog.pdf new file mode 100644 index 0000000000000000000000000000000000000000..248c75e170b3258f6c64d4e14d302b2a8126e6d9 --- /dev/null +++ b/papers/vist5 an adaptive, retrievalaugmented language model for visualizationoriented dialog.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eebd3545d489e8b301e05d8af7432b03a4e9225dd367755661ca69741da6f4d1 +size 1851427 diff --git a/papers/visual prompt tuning for fewshot text classification.pdf b/papers/visual prompt tuning for fewshot text classification.pdf new file mode 100644 index 0000000000000000000000000000000000000000..939ebc1dd256c9fcaa319de796e54b9b7f2a3221 --- /dev/null +++ b/papers/visual prompt tuning for fewshot text classification.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b075b6078a0ed0e3ccb653c7aa6ce77dba1668d2d7aebb01fff3974074e06c2e +size 4403751 diff --git a/papers/visualizing linguistic diversity of text datasets synthesized by large language models.pdf b/papers/visualizing linguistic diversity of text datasets synthesized by large language models.pdf index cff875531605034378fef7245da3a02643a24713..84c25cd1516f1ee319e7f5eee571691cbe582287 100644 --- a/papers/visualizing linguistic diversity of text datasets synthesized by large language models.pdf +++ b/papers/visualizing linguistic diversity of text datasets synthesized by large language models.pdf @@ -1,9923 +1,3 @@ [9,923 lines of raw embedded PDF stream data removed; the file is now stored as a 3-line Git LFS pointer]
16.000046 m -150.642761 1.454590 l -148.881393 1.454590 l -148.881393 14.153454 l -148.796158 14.153454 l -145.245026 11.795500 l -145.245026 13.585272 l -148.881393 16.000046 l -150.642761 16.000046 l -h -153.320312 1.454590 m -h -158.433945 8.727318 m -158.433945 6.937545 158.199570 5.289817 157.730820 3.784136 c -157.266815 2.283188 156.601562 0.900612 155.735092 -0.363592 c -154.257812 -0.363592 l -154.598724 0.105158 154.915955 0.682810 155.209518 1.369362 c -155.507812 2.051180 155.768234 2.799286 155.990768 3.613680 c -156.213303 4.432810 156.386124 5.277980 156.509232 6.149193 c -156.637070 7.025140 156.700989 7.884515 156.700989 8.727318 c -156.700989 9.854212 156.592087 10.997678 156.374283 12.157715 c -156.156494 13.317753 155.862930 14.394932 155.493607 15.389250 c -155.124298 16.383568 154.712357 17.193226 154.257812 17.818228 c -155.735092 17.818228 l -156.601562 16.554022 157.266815 15.169079 157.730820 13.663397 c -158.199570 12.162451 158.433945 10.517091 158.433945 8.727318 c -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 41.000000 155.545410 cm -BT -20.000000 0.000000 0.000000 20.000000 0.000000 1.454590 Tm -/F1 1.000000 Tf -[ (\002) (\t) (\t) (\010) (\006) (\t) 15.003586 (\n) (\003) (\001) (\000) (\013) (\t) (\006) (\005) (\004) (\007) ] TJ -ET -Q -q -600.000000 755.000000 m -333.000000 755.000000 l -333.000000 5.000000 l -600.000000 5.000000 l -600.000000 755.000000 l -h -W* -n -q --1.000000 -0.000000 -0.000000 1.000000 600.000000 5.000000 cm -0.366667 0.366667 0.366667 scn -0.000000 750.000000 m -0.000000 751.500000 l --1.500000 751.500000 l --1.500000 750.000000 l -0.000000 750.000000 l -h -267.000000 750.000000 m -268.500000 750.000000 l -268.500000 751.500000 l -267.000000 751.500000 l -267.000000 750.000000 l -h -267.000000 0.000000 m -267.000000 -1.500000 l -268.500000 -1.500000 l -268.500000 0.000000 l -267.000000 0.000000 l -h -0.000000 0.000000 m --1.500000 0.000000 l --1.500000 -1.500000 l -0.000000 -1.500000 l -0.000000 0.000000 l -h --1.500000 750.000000 m --1.500000 740.000000 l -1.500000 740.000000 l -1.500000 750.000000 l --1.500000 750.000000 l -h --1.500000 730.000000 m --1.500000 710.000000 l -1.500000 710.000000 l -1.500000 730.000000 l --1.500000 730.000000 l -h --1.500000 700.000000 m --1.500000 680.000000 l -1.500000 680.000000 l -1.500000 700.000000 l --1.500000 700.000000 l -h --1.500000 670.000000 m --1.500000 650.000000 l -1.500000 650.000000 l -1.500000 670.000000 l --1.500000 670.000000 l -h --1.500000 640.000000 m --1.500000 620.000000 l -1.500000 620.000000 l -1.500000 640.000000 l --1.500000 640.000000 l -h --1.500000 610.000000 m --1.500000 590.000000 l -1.500000 590.000000 l -1.500000 610.000000 l --1.500000 610.000000 l -h --1.500000 580.000000 m --1.500000 560.000000 l -1.500000 560.000000 l -1.500000 580.000000 l --1.500000 580.000000 l -h --1.500000 550.000000 m --1.500000 530.000000 l -1.500000 530.000000 l -1.500000 550.000000 l --1.500000 550.000000 l -h --1.500000 520.000000 m --1.500000 500.000000 l -1.500000 500.000000 l -1.500000 520.000000 l --1.500000 520.000000 l -h --1.500000 490.000000 m --1.500000 470.000000 l -1.500000 470.000000 l -1.500000 490.000000 l --1.500000 490.000000 l -h --1.500000 460.000000 m --1.500000 440.000000 l -1.500000 440.000000 l -1.500000 460.000000 l --1.500000 460.000000 l -h --1.500000 430.000000 m --1.500000 410.000000 l -1.500000 410.000000 l -1.500000 430.000000 l --1.500000 430.000000 l -h --1.500000 400.000000 m --1.500000 380.000000 l -1.500000 380.000000 l -1.500000 400.000000 l 
--1.500000 400.000000 l -h --1.500000 370.000000 m --1.500000 349.999969 l -1.500000 349.999969 l -1.500000 370.000000 l --1.500000 370.000000 l -h --1.500000 340.000000 m --1.500000 320.000000 l -1.500000 320.000000 l -1.500000 340.000000 l --1.500000 340.000000 l -h --1.500000 310.000031 m --1.500000 290.000000 l -1.500000 290.000000 l -1.500000 310.000031 l --1.500000 310.000031 l -h --1.500000 280.000000 m --1.500000 260.000031 l -1.500000 260.000031 l -1.500000 280.000000 l --1.500000 280.000000 l -h --1.500000 250.000000 m --1.500000 230.000000 l -1.500000 230.000000 l -1.500000 250.000000 l --1.500000 250.000000 l -h --1.500000 220.000000 m --1.500000 200.000000 l -1.500000 200.000000 l -1.500000 220.000000 l --1.500000 220.000000 l -h --1.500000 190.000000 m --1.500000 170.000000 l -1.500000 170.000000 l -1.500000 190.000000 l --1.500000 190.000000 l -h --1.500000 160.000000 m --1.500000 140.000000 l -1.500000 140.000000 l -1.500000 160.000000 l --1.500000 160.000000 l -h --1.500000 130.000000 m --1.500000 110.000000 l -1.500000 110.000000 l -1.500000 130.000000 l --1.500000 130.000000 l -h --1.500000 100.000000 m --1.500000 80.000000 l -1.500000 80.000000 l -1.500000 100.000000 l --1.500000 100.000000 l -h --1.500000 70.000000 m --1.500000 50.000000 l -1.500000 50.000000 l -1.500000 70.000000 l --1.500000 70.000000 l -h --1.500000 40.000000 m --1.500000 20.000000 l -1.500000 20.000000 l -1.500000 40.000000 l --1.500000 40.000000 l -h --1.500000 10.000000 m --1.500000 0.000000 l -1.500000 0.000000 l -1.500000 10.000000 l --1.500000 10.000000 l -h -0.000000 -1.500000 m -9.888889 -1.500000 l -9.888889 1.500000 l -0.000000 1.500000 l -0.000000 -1.500000 l -h -19.777779 -1.500000 m -39.555557 -1.500000 l -39.555557 1.500000 l -19.777779 1.500000 l -19.777779 -1.500000 l -h -49.444447 -1.500000 m -69.222229 -1.500000 l -69.222229 1.500000 l -49.444447 1.500000 l -49.444447 -1.500000 l -h -79.111115 -1.500000 m -98.888893 -1.500000 l -98.888893 1.500000 l -79.111115 1.500000 l -79.111115 -1.500000 l -h -108.777786 -1.500000 m -128.555573 -1.500000 l -128.555573 1.500000 l -108.777786 1.500000 l -108.777786 -1.500000 l -h -138.444458 -1.500000 m -158.222229 -1.500000 l -158.222229 1.500000 l -138.444458 1.500000 l -138.444458 -1.500000 l -h -168.111130 -1.500000 m -187.888885 -1.500000 l -187.888885 1.500000 l -168.111130 1.500000 l -168.111130 -1.500000 l -h -197.777771 -1.500000 m -217.555542 -1.500000 l -217.555542 1.500000 l -197.777771 1.500000 l -197.777771 -1.500000 l -h -227.444427 -1.500000 m -247.222198 -1.500000 l -247.222198 1.500000 l -227.444427 1.500000 l -227.444427 -1.500000 l -h -257.111084 -1.500000 m -267.000000 -1.500000 l -267.000000 1.500000 l -257.111084 1.500000 l -257.111084 -1.500000 l -h -268.500000 0.000000 m -268.500000 10.000000 l -265.500000 10.000000 l -265.500000 0.000000 l -268.500000 0.000000 l -h -268.500000 20.000000 m -268.500000 40.000000 l -265.500000 40.000000 l -265.500000 20.000000 l -268.500000 20.000000 l -h -268.500000 50.000000 m -268.500000 70.000000 l -265.500000 70.000000 l -265.500000 50.000000 l -268.500000 50.000000 l -h -268.500000 80.000000 m -268.500000 100.000000 l -265.500000 100.000000 l -265.500000 80.000000 l -268.500000 80.000000 l -h -268.500000 110.000000 m -268.500000 130.000000 l -265.500000 130.000000 l -265.500000 110.000000 l -268.500000 110.000000 l -h -268.500000 140.000000 m -268.500000 160.000000 l -265.500000 160.000000 l -265.500000 140.000000 l -268.500000 140.000000 l -h -268.500000 170.000000 m -268.500000 
190.000000 l -265.500000 190.000000 l -265.500000 170.000000 l -268.500000 170.000000 l -h -268.500000 200.000000 m -268.500000 220.000000 l -265.500000 220.000000 l -265.500000 200.000000 l -268.500000 200.000000 l -h -268.500000 230.000000 m -268.500000 250.000000 l -265.500000 250.000000 l -265.500000 230.000000 l -268.500000 230.000000 l -h -268.500000 260.000000 m -268.500000 280.000000 l -265.500000 280.000000 l -265.500000 260.000000 l -268.500000 260.000000 l -h -268.500000 290.000000 m -268.500000 310.000000 l -265.500000 310.000000 l -265.500000 290.000000 l -268.500000 290.000000 l -h -268.500000 320.000000 m -268.500000 340.000000 l -265.500000 340.000000 l -265.500000 320.000000 l -268.500000 320.000000 l -h -268.500000 350.000000 m -268.500000 370.000000 l -265.500000 370.000000 l -265.500000 350.000000 l -268.500000 350.000000 l -h -268.500000 380.000000 m -268.500000 400.000031 l -265.500000 400.000031 l -265.500000 380.000000 l -268.500000 380.000000 l -h -268.500000 410.000000 m -268.500000 430.000000 l -265.500000 430.000000 l -265.500000 410.000000 l -268.500000 410.000000 l -h -268.500000 439.999969 m -268.500000 460.000000 l -265.500000 460.000000 l -265.500000 439.999969 l -268.500000 439.999969 l -h -268.500000 470.000000 m -268.500000 489.999969 l -265.500000 489.999969 l -265.500000 470.000000 l -268.500000 470.000000 l -h -268.500000 500.000000 m -268.500000 520.000000 l -265.500000 520.000000 l -265.500000 500.000000 l -268.500000 500.000000 l -h -268.500000 530.000000 m -268.500000 550.000000 l -265.500000 550.000000 l -265.500000 530.000000 l -268.500000 530.000000 l -h -268.500000 560.000000 m -268.500000 580.000000 l -265.500000 580.000000 l -265.500000 560.000000 l -268.500000 560.000000 l -h -268.500000 590.000000 m -268.500000 610.000000 l -265.500000 610.000000 l -265.500000 590.000000 l -268.500000 590.000000 l -h -268.500000 620.000000 m -268.500000 640.000000 l -265.500000 640.000000 l -265.500000 620.000000 l -268.500000 620.000000 l -h -268.500000 650.000000 m -268.500000 670.000000 l -265.500000 670.000000 l -265.500000 650.000000 l -268.500000 650.000000 l -h -268.500000 680.000000 m -268.500000 700.000000 l -265.500000 700.000000 l -265.500000 680.000000 l -268.500000 680.000000 l -h -268.500000 710.000000 m -268.500000 730.000000 l -265.500000 730.000000 l -265.500000 710.000000 l -268.500000 710.000000 l -h -268.500000 740.000000 m -268.500000 750.000000 l -265.500000 750.000000 l -265.500000 740.000000 l -268.500000 740.000000 l -h -267.000000 751.500000 m -257.111115 751.500000 l -257.111115 748.500000 l -267.000000 748.500000 l -267.000000 751.500000 l -h -247.222229 751.500000 m -227.444443 751.500000 l -227.444443 748.500000 l -247.222229 748.500000 l -247.222229 751.500000 l -h -217.555557 751.500000 m -197.777771 751.500000 l -197.777771 748.500000 l -217.555557 748.500000 l -217.555557 751.500000 l -h -187.888885 751.500000 m -168.111099 751.500000 l -168.111099 748.500000 l -187.888885 748.500000 l -187.888885 751.500000 l -h -158.222214 751.500000 m -138.444427 751.500000 l -138.444427 748.500000 l -158.222214 748.500000 l -158.222214 751.500000 l -h -128.555542 751.500000 m -108.777779 751.500000 l -108.777779 748.500000 l -128.555542 748.500000 l -128.555542 751.500000 l -h -98.888878 751.500000 m -79.111115 751.500000 l -79.111115 748.500000 l -98.888878 748.500000 l -98.888878 751.500000 l -h -69.222229 751.500000 m -49.444462 751.500000 l -49.444462 748.500000 l -69.222229 748.500000 l -69.222229 751.500000 l -h -39.555580 
751.500000 m -19.777798 751.500000 l -19.777798 748.500000 l -39.555580 748.500000 l -39.555580 751.500000 l -h -9.888915 751.500000 m -0.000000 751.500000 l -0.000000 748.500000 l -9.888915 748.500000 l -9.888915 751.500000 l -h -0.000000 750.000000 m -0.000000 753.000000 l --3.000000 753.000000 l --3.000000 750.000000 l -0.000000 750.000000 l -h -267.000000 750.000000 m -270.000000 750.000000 l -270.000000 753.000000 l -267.000000 753.000000 l -267.000000 750.000000 l -h -267.000000 0.000000 m -267.000000 -3.000000 l -270.000000 -3.000000 l -270.000000 0.000000 l -267.000000 0.000000 l -h -0.000000 0.000000 m --3.000000 0.000000 l --3.000000 -3.000000 l -0.000000 -3.000000 l -0.000000 0.000000 l -h --3.000000 750.000000 m --3.000000 740.000000 l -3.000000 740.000000 l -3.000000 750.000000 l --3.000000 750.000000 l -h --3.000000 730.000000 m --3.000000 710.000000 l -3.000000 710.000000 l -3.000000 730.000000 l --3.000000 730.000000 l -h --3.000000 700.000000 m --3.000000 680.000000 l -3.000000 680.000000 l -3.000000 700.000000 l --3.000000 700.000000 l -h --3.000000 670.000000 m --3.000000 650.000000 l -3.000000 650.000000 l -3.000000 670.000000 l --3.000000 670.000000 l -h --3.000000 640.000000 m --3.000000 620.000000 l -3.000000 620.000000 l -3.000000 640.000000 l --3.000000 640.000000 l -h --3.000000 610.000000 m --3.000000 590.000000 l -3.000000 590.000000 l -3.000000 610.000000 l --3.000000 610.000000 l -h --3.000000 580.000000 m --3.000000 560.000000 l -3.000000 560.000000 l -3.000000 580.000000 l --3.000000 580.000000 l -h --3.000000 550.000000 m --3.000000 530.000000 l -3.000000 530.000000 l -3.000000 550.000000 l --3.000000 550.000000 l -h --3.000000 520.000000 m --3.000000 500.000000 l -3.000000 500.000000 l -3.000000 520.000000 l --3.000000 520.000000 l -h --3.000000 490.000000 m --3.000000 470.000000 l -3.000000 470.000000 l -3.000000 490.000000 l --3.000000 490.000000 l -h --3.000000 460.000000 m --3.000000 440.000000 l -3.000000 440.000000 l -3.000000 460.000000 l --3.000000 460.000000 l -h --3.000000 430.000000 m --3.000000 410.000000 l -3.000000 410.000000 l -3.000000 430.000000 l --3.000000 430.000000 l -h --3.000000 400.000000 m --3.000000 380.000000 l -3.000000 380.000000 l -3.000000 400.000000 l --3.000000 400.000000 l -h --3.000000 370.000000 m --3.000000 349.999969 l -3.000000 349.999969 l -3.000000 370.000000 l --3.000000 370.000000 l -h --3.000000 340.000000 m --3.000000 320.000000 l -3.000000 320.000000 l -3.000000 340.000000 l --3.000000 340.000000 l -h --3.000000 310.000031 m --3.000000 290.000000 l -3.000000 290.000000 l -3.000000 310.000031 l --3.000000 310.000031 l -h --3.000000 280.000000 m --3.000000 260.000031 l -3.000000 260.000031 l -3.000000 280.000000 l --3.000000 280.000000 l -h --3.000000 250.000000 m --3.000000 230.000000 l -3.000000 230.000000 l -3.000000 250.000000 l --3.000000 250.000000 l -h --3.000000 220.000000 m --3.000000 200.000000 l -3.000000 200.000000 l -3.000000 220.000000 l --3.000000 220.000000 l -h --3.000000 190.000000 m --3.000000 170.000000 l -3.000000 170.000000 l -3.000000 190.000000 l --3.000000 190.000000 l -h --3.000000 160.000000 m --3.000000 140.000000 l -3.000000 140.000000 l -3.000000 160.000000 l --3.000000 160.000000 l -h --3.000000 130.000000 m --3.000000 110.000000 l -3.000000 110.000000 l -3.000000 130.000000 l --3.000000 130.000000 l -h --3.000000 100.000000 m --3.000000 80.000000 l -3.000000 80.000000 l -3.000000 100.000000 l --3.000000 100.000000 l -h --3.000000 70.000000 m --3.000000 50.000000 l -3.000000 
50.000000 l -3.000000 70.000000 l --3.000000 70.000000 l -h --3.000000 40.000000 m --3.000000 20.000000 l -3.000000 20.000000 l -3.000000 40.000000 l --3.000000 40.000000 l -h --3.000000 10.000000 m --3.000000 0.000000 l -3.000000 0.000000 l -3.000000 10.000000 l --3.000000 10.000000 l -h -0.000000 -3.000000 m -9.888889 -3.000000 l -9.888889 3.000000 l -0.000000 3.000000 l -0.000000 -3.000000 l -h -19.777779 -3.000000 m -39.555557 -3.000000 l -39.555557 3.000000 l -19.777779 3.000000 l -19.777779 -3.000000 l -h -49.444447 -3.000000 m -69.222229 -3.000000 l -69.222229 3.000000 l -49.444447 3.000000 l -49.444447 -3.000000 l -h -79.111115 -3.000000 m -98.888893 -3.000000 l -98.888893 3.000000 l -79.111115 3.000000 l -79.111115 -3.000000 l -h -108.777786 -3.000000 m -128.555573 -3.000000 l -128.555573 3.000000 l -108.777786 3.000000 l -108.777786 -3.000000 l -h -138.444458 -3.000000 m -158.222229 -3.000000 l -158.222229 3.000000 l -138.444458 3.000000 l -138.444458 -3.000000 l -h -168.111130 -3.000000 m -187.888885 -3.000000 l -187.888885 3.000000 l -168.111130 3.000000 l -168.111130 -3.000000 l -h -197.777771 -3.000000 m -217.555542 -3.000000 l -217.555542 3.000000 l -197.777771 3.000000 l -197.777771 -3.000000 l -h -227.444427 -3.000000 m -247.222198 -3.000000 l -247.222198 3.000000 l -227.444427 3.000000 l -227.444427 -3.000000 l -h -257.111084 -3.000000 m -267.000000 -3.000000 l -267.000000 3.000000 l -257.111084 3.000000 l -257.111084 -3.000000 l -h -270.000000 0.000000 m -270.000000 10.000000 l -264.000000 10.000000 l -264.000000 0.000000 l -270.000000 0.000000 l -h -270.000000 20.000000 m -270.000000 40.000000 l -264.000000 40.000000 l -264.000000 20.000000 l -270.000000 20.000000 l -h -270.000000 50.000000 m -270.000000 70.000000 l -264.000000 70.000000 l -264.000000 50.000000 l -270.000000 50.000000 l -h -270.000000 80.000000 m -270.000000 100.000000 l -264.000000 100.000000 l -264.000000 80.000000 l -270.000000 80.000000 l -h -270.000000 110.000000 m -270.000000 130.000000 l -264.000000 130.000000 l -264.000000 110.000000 l -270.000000 110.000000 l -h -270.000000 140.000000 m -270.000000 160.000000 l -264.000000 160.000000 l -264.000000 140.000000 l -270.000000 140.000000 l -h -270.000000 170.000000 m -270.000000 190.000000 l -264.000000 190.000000 l -264.000000 170.000000 l -270.000000 170.000000 l -h -270.000000 200.000000 m -270.000000 220.000000 l -264.000000 220.000000 l -264.000000 200.000000 l -270.000000 200.000000 l -h -270.000000 230.000000 m -270.000000 250.000000 l -264.000000 250.000000 l -264.000000 230.000000 l -270.000000 230.000000 l -h -270.000000 260.000000 m -270.000000 280.000000 l -264.000000 280.000000 l -264.000000 260.000000 l -270.000000 260.000000 l -h -270.000000 290.000000 m -270.000000 310.000000 l -264.000000 310.000000 l -264.000000 290.000000 l -270.000000 290.000000 l -h -270.000000 320.000000 m -270.000000 340.000000 l -264.000000 340.000000 l -264.000000 320.000000 l -270.000000 320.000000 l -h -270.000000 350.000000 m -270.000000 370.000000 l -264.000000 370.000000 l -264.000000 350.000000 l -270.000000 350.000000 l -h -270.000000 380.000000 m -270.000000 400.000031 l -264.000000 400.000031 l -264.000000 380.000000 l -270.000000 380.000000 l -h -270.000000 410.000000 m -270.000000 430.000000 l -264.000000 430.000000 l -264.000000 410.000000 l -270.000000 410.000000 l -h -270.000000 439.999969 m -270.000000 460.000000 l -264.000000 460.000000 l -264.000000 439.999969 l -270.000000 439.999969 l -h -270.000000 470.000000 m -270.000000 489.999969 l 
-264.000000 489.999969 l -264.000000 470.000000 l -270.000000 470.000000 l -h -270.000000 500.000000 m -270.000000 520.000000 l -264.000000 520.000000 l -264.000000 500.000000 l -270.000000 500.000000 l -h -270.000000 530.000000 m -270.000000 550.000000 l -264.000000 550.000000 l -264.000000 530.000000 l -270.000000 530.000000 l -h -270.000000 560.000000 m -270.000000 580.000000 l -264.000000 580.000000 l -264.000000 560.000000 l -270.000000 560.000000 l -h -270.000000 590.000000 m -270.000000 610.000000 l -264.000000 610.000000 l -264.000000 590.000000 l -270.000000 590.000000 l -h -270.000000 620.000000 m -270.000000 640.000000 l -264.000000 640.000000 l -264.000000 620.000000 l -270.000000 620.000000 l -h -270.000000 650.000000 m -270.000000 670.000000 l -264.000000 670.000000 l -264.000000 650.000000 l -270.000000 650.000000 l -h -270.000000 680.000000 m -270.000000 700.000000 l -264.000000 700.000000 l -264.000000 680.000000 l -270.000000 680.000000 l -h -270.000000 710.000000 m -270.000000 730.000000 l -264.000000 730.000000 l -264.000000 710.000000 l -270.000000 710.000000 l -h -270.000000 740.000000 m -270.000000 750.000000 l -264.000000 750.000000 l -264.000000 740.000000 l -270.000000 740.000000 l -h -267.000000 753.000000 m -257.111115 753.000000 l -257.111115 747.000000 l -267.000000 747.000000 l -267.000000 753.000000 l -h -247.222229 753.000000 m -227.444443 753.000000 l -227.444443 747.000000 l -247.222229 747.000000 l -247.222229 753.000000 l -h -217.555557 753.000000 m -197.777771 753.000000 l -197.777771 747.000000 l -217.555557 747.000000 l -217.555557 753.000000 l -h -187.888885 753.000000 m -168.111099 753.000000 l -168.111099 747.000000 l -187.888885 747.000000 l -187.888885 753.000000 l -h -158.222214 753.000000 m -138.444427 753.000000 l -138.444427 747.000000 l -158.222214 747.000000 l -158.222214 753.000000 l -h -128.555542 753.000000 m -108.777779 753.000000 l -108.777779 747.000000 l -128.555542 747.000000 l -128.555542 753.000000 l -h -98.888878 753.000000 m -79.111115 753.000000 l -79.111115 747.000000 l -98.888878 747.000000 l -98.888878 753.000000 l -h -69.222229 753.000000 m -49.444462 753.000000 l -49.444462 747.000000 l -69.222229 747.000000 l -69.222229 753.000000 l -h -39.555580 753.000000 m -19.777798 753.000000 l -19.777798 747.000000 l -39.555580 747.000000 l -39.555580 753.000000 l -h -9.888915 753.000000 m -0.000000 753.000000 l -0.000000 747.000000 l -9.888915 747.000000 l -9.888915 753.000000 l -h -f -n -Q -Q -q -/E2 gs -1.000000 0.000000 -0.000000 1.000000 345.000000 734.000000 cm -1.000000 1.000000 1.000000 scn -0.000000 22.000000 m -193.000000 22.000000 l -193.000000 0.000000 l -0.000000 0.000000 l -0.000000 22.000000 l -h -f -n -Q -q -/E3 gs -1.000000 0.000000 -0.000000 1.000000 530.000000 645.000000 cm -1.000000 1.000000 1.000000 scn -0.000000 22.000000 m -250.000000 22.000000 l -250.000000 0.000000 l -0.000000 0.000000 l -0.000000 22.000000 l -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 532.000000 649.681824 cm -0.233333 0.233333 0.233333 scn -0.000000 0.318176 m -h -6.392045 11.227267 m -6.392045 9.806813 l -0.511364 9.806813 l -0.511364 11.227267 l -6.392045 11.227267 l -h -2.272727 0.318176 m -2.272727 12.732948 l -2.272727 13.357950 2.419508 13.878784 2.713068 14.295450 c -3.006629 14.712116 3.387784 15.024616 3.856534 15.232950 c -4.325284 15.441283 4.820076 15.545450 5.340909 15.545450 c -5.752841 15.545450 6.089015 15.512306 6.349432 15.446017 c -6.609848 15.379730 6.803977 15.318177 6.931818 15.261358 c -6.448864 13.812495 l 
-6.363636 13.840904 6.245265 13.876415 6.093750 13.919029 c -5.946970 13.961643 5.752841 13.982950 5.511364 13.982950 c -4.957386 13.982950 4.557292 13.843271 4.311080 13.563915 c -4.069602 13.284559 3.948864 12.874994 3.948864 12.335222 c -3.948864 0.318176 l -2.272727 0.318176 l -h -7.207031 0.318176 m -h -8.741122 0.318176 m -8.741122 11.227267 l -10.360440 11.227267 l -10.360440 9.579540 l -10.474077 9.579540 l -10.672940 10.119312 11.032789 10.557286 11.553622 10.893461 c -12.074455 11.229635 12.661577 11.397722 13.314986 11.397722 c -13.438092 11.397722 13.591975 11.395355 13.776634 11.390620 c -13.961293 11.385885 14.100971 11.378782 14.195668 11.369312 c -14.195668 9.664767 l -14.138849 9.678972 14.008641 9.700279 13.805043 9.728688 c -13.606179 9.761832 13.395478 9.778404 13.172940 9.778404 c -12.642637 9.778404 12.169153 9.667135 11.752486 9.444597 c -11.340554 9.226794 11.013849 8.923763 10.772372 8.535506 c -10.535630 8.151983 10.417259 7.714010 10.417259 7.221585 c -10.417259 0.318176 l -8.741122 0.318176 l -h -14.316406 0.318176 m -h -20.424362 0.090904 m -19.373224 0.090904 18.466501 0.322912 17.704191 0.786926 c -16.946615 1.255676 16.361862 1.909086 15.949929 2.747154 c -15.542732 3.589957 15.339133 4.570070 15.339133 5.687494 c -15.339133 6.804919 15.542732 7.789767 15.949929 8.642040 c -16.361862 9.499047 16.934778 10.166661 17.668678 10.644881 c -18.407316 11.127835 19.269058 11.369312 20.253906 11.369312 c -20.822088 11.369312 21.383169 11.274615 21.937145 11.085222 c -22.491123 10.895828 22.995384 10.588063 23.449930 10.161926 c -23.904476 9.740525 24.266691 9.181813 24.536577 8.485790 c -24.806463 7.789767 24.941406 6.932760 24.941406 5.914767 c -24.941406 5.204540 l -16.532316 5.204540 l -16.532316 6.653404 l -23.236862 6.653404 l -23.236862 7.268934 23.113756 7.818176 22.867542 8.301131 c -22.626066 8.784086 22.280422 9.165241 21.830612 9.444597 c -21.385536 9.723953 20.859968 9.863631 20.253906 9.863631 c -19.586292 9.863631 19.008640 9.697911 18.520952 9.366472 c -18.037998 9.039767 17.666311 8.613631 17.405895 8.088063 c -17.145479 7.562494 17.015270 6.999047 17.015270 6.397722 c -17.015270 5.431812 l -17.015270 4.607949 17.157316 3.909559 17.441406 3.336642 c -17.730232 2.768459 18.130327 2.335222 18.641691 2.036926 c -19.153055 1.743366 19.747278 1.596586 20.424362 1.596586 c -20.864702 1.596586 21.262430 1.658138 21.617542 1.781244 c -21.977392 1.909084 22.287525 2.098478 22.547941 2.349426 c -22.808357 2.605108 23.009588 2.922342 23.151634 3.301130 c -24.770952 2.846586 l -24.600498 2.297344 24.314039 1.814388 23.911577 1.397722 c -23.509115 0.985790 23.011955 0.663820 22.420099 0.431812 c -21.828243 0.204540 21.162998 0.090904 20.424362 0.090904 c -h -25.957031 0.318176 m -h -34.934303 -3.772734 m -34.934303 1.994312 l -34.792259 1.994312 l -34.669155 1.795448 34.493965 1.544502 34.266689 1.241472 c -34.039417 0.943176 33.715080 0.675657 33.293678 0.438915 c -32.872276 0.206907 32.311195 0.090904 31.610441 0.090904 c -30.701349 0.090904 29.898792 0.320543 29.202770 0.779823 c -28.506746 1.243839 27.962238 1.897249 27.569246 2.740051 c -27.176254 3.587589 26.979759 4.589009 26.979759 5.744312 c -26.979759 6.890146 27.176254 7.884464 27.569246 8.727267 c -27.962238 9.570070 28.509113 10.221112 29.209871 10.680392 c -29.910629 11.139672 30.720289 11.369312 31.638849 11.369312 c -32.349075 11.369312 32.910156 11.250941 33.322086 11.014198 c -33.738754 10.782191 34.055988 10.517040 34.273792 10.218745 c -34.496330 9.925184 34.669155 9.683707 34.792259 9.494313 c 
-34.991123 9.494313 l -34.991123 11.227267 l -36.610443 11.227267 l -36.610443 -3.772734 l -34.934303 -3.772734 l -h -31.837713 1.596586 m -32.510063 1.596586 33.078243 1.771776 33.542259 2.122154 c -34.006275 2.477266 34.359020 2.967323 34.600498 3.592324 c -34.841972 4.222059 34.962711 4.948858 34.962711 5.772722 c -34.962711 6.587116 34.844341 7.299711 34.607601 7.910506 c -34.370857 8.526036 34.020477 9.004256 33.556461 9.345165 c -33.092445 9.690809 32.519531 9.863631 31.837713 9.863631 c -31.127485 9.863631 30.535629 9.681339 30.062145 9.316756 c -29.593395 8.956907 29.240648 8.466850 29.003906 7.846585 c -28.771898 7.231055 28.655895 6.539767 28.655895 5.772722 c -28.655895 4.996207 28.774267 4.290714 29.011009 3.656244 c -29.252485 3.026508 29.607599 2.524614 30.076349 2.150562 c -30.549835 1.781244 31.136955 1.596586 31.837713 1.596586 c -h -38.144531 0.318176 m -h -46.553623 4.778404 m -46.553623 11.227267 l -48.229759 11.227267 l -48.229759 0.318176 l -46.553623 0.318176 l -46.553623 2.164766 l -46.439987 2.164766 l -46.184303 1.610790 45.786575 1.139673 45.246803 0.751415 c -44.707031 0.367891 44.025215 0.176130 43.201351 0.176130 c -42.519531 0.176130 41.913471 0.325277 41.383167 0.623573 c -40.852863 0.926603 40.436195 1.381149 40.133167 1.987211 c -39.830139 2.598007 39.678623 3.367419 39.678623 4.295449 c -39.678623 11.227267 l -41.354759 11.227267 l -41.354759 4.409085 l -41.354759 3.613631 41.577297 2.979162 42.022373 2.505676 c -42.472183 2.032190 43.045097 1.795448 43.741123 1.795448 c -44.157791 1.795448 44.581558 1.901983 45.012428 2.115051 c -45.448032 2.328119 45.812618 2.654825 46.106178 3.095165 c -46.404476 3.535506 46.553623 4.096585 46.553623 4.778404 c -h -49.765625 0.318176 m -h -55.873581 0.090904 m -54.822445 0.090904 53.915718 0.322912 53.153408 0.786926 c -52.395832 1.255676 51.811077 1.909086 51.399147 2.747154 c -50.991951 3.589957 50.788353 4.570070 50.788353 5.687494 c -50.788353 6.804919 50.991951 7.789767 51.399147 8.642040 c -51.811077 9.499047 52.383995 10.166661 53.117897 10.644881 c -53.856533 11.127835 54.718277 11.369312 55.703125 11.369312 c -56.271305 11.369312 56.832386 11.274615 57.386364 11.085222 c -57.940342 10.895828 58.444603 10.588063 58.899147 10.161926 c -59.353695 9.740525 59.715912 9.181813 59.985798 8.485790 c -60.255684 7.789767 60.390625 6.932760 60.390625 5.914767 c -60.390625 5.204540 l -51.981533 5.204540 l -51.981533 6.653404 l -58.686081 6.653404 l -58.686081 7.268934 58.562973 7.818176 58.316761 8.301131 c -58.075283 8.784086 57.729641 9.165241 57.279831 9.444597 c -56.834755 9.723953 56.309185 9.863631 55.703125 9.863631 c -55.035511 9.863631 54.457859 9.697911 53.970169 9.366472 c -53.487217 9.039767 53.115532 8.613631 52.855114 8.088063 c -52.594696 7.562494 52.464489 6.999047 52.464489 6.397722 c -52.464489 5.431812 l -52.464489 4.607949 52.606533 3.909559 52.890625 3.336642 c -53.179451 2.768459 53.579544 2.335222 54.090908 2.036926 c -54.602272 1.743366 55.196499 1.596586 55.873581 1.596586 c -56.313923 1.596586 56.711647 1.658138 57.066761 1.781244 c -57.426609 1.909084 57.736740 2.098478 57.997158 2.349426 c -58.257576 2.605108 58.458805 2.922342 58.600853 3.301130 c -60.220173 2.846586 l -60.049717 2.297344 59.763256 1.814388 59.360794 1.397722 c -58.958332 0.985790 58.461174 0.663820 57.869320 0.431812 c -57.277462 0.204540 56.612217 0.090904 55.873581 0.090904 c -h -61.406250 0.318176 m -h -64.616478 6.880676 m -64.616478 0.318176 l -62.940342 0.318176 l -62.940342 11.227267 l -64.559662 11.227267 l -64.559662 
9.522722 l -64.701706 9.522722 l -64.957390 10.076699 65.345642 10.521774 65.866478 10.857948 c -66.387314 11.198857 67.059662 11.369312 67.883522 11.369312 c -68.622162 11.369312 69.268463 11.217797 69.822441 10.914767 c -70.376419 10.616471 70.807297 10.161926 71.115059 9.551131 c -71.422821 8.945070 71.576706 8.178025 71.576706 7.249994 c -71.576706 0.318176 l -69.900566 0.318176 l -69.900566 7.136358 l -69.900566 7.993365 69.678032 8.660979 69.232956 9.139199 c -68.787880 9.622154 68.177078 9.863631 67.400566 9.863631 c -66.865532 9.863631 66.387314 9.747627 65.965912 9.515620 c -65.549248 9.283612 65.220169 8.945070 64.978691 8.499995 c -64.737213 8.054919 64.616478 7.515146 64.616478 6.880676 c -h -73.105469 0.318176 m -h -79.383881 11.227267 m -79.383881 9.806813 l -73.730469 9.806813 l -73.730469 11.227267 l -79.383881 11.227267 l -h -75.378197 13.840904 m -77.054329 13.840904 l -77.054329 3.443176 l -77.054329 2.969690 77.122986 2.614578 77.260300 2.377836 c -77.402344 2.145828 77.582268 1.989578 77.800072 1.909086 c -78.022614 1.833328 78.256989 1.795448 78.503197 1.795448 c -78.687851 1.795448 78.839371 1.804918 78.957741 1.823858 c -79.076111 1.847532 79.170807 1.866472 79.241829 1.880676 c -79.582741 0.374994 l -79.469101 0.332380 79.310486 0.289766 79.106888 0.247154 c -78.903290 0.199804 78.645241 0.176130 78.332741 0.176130 c -77.859253 0.176130 77.395241 0.277929 76.940697 0.481529 c -76.490891 0.685127 76.116837 0.995260 75.818535 1.411926 c -75.524979 1.828592 75.378197 2.354160 75.378197 2.988630 c -75.378197 13.840904 l -h -85.996094 0.318176 m -h -87.530182 -3.772734 m -87.530182 11.227267 l -89.149506 11.227267 l -89.149506 9.494313 l -89.348366 9.494313 l -89.471474 9.683707 89.641930 9.925184 89.859734 10.218745 c -90.082268 10.517040 90.399498 10.782191 90.811432 11.014198 c -91.228096 11.250941 91.791550 11.369312 92.501778 11.369312 c -93.420334 11.369312 94.229996 11.139672 94.930756 10.680392 c -95.631508 10.221112 96.178383 9.570070 96.571381 8.727267 c -96.964371 7.884464 97.160866 6.890146 97.160866 5.744312 c -97.160866 4.589009 96.964371 3.587589 96.571381 2.740051 c -96.178383 1.897249 95.633881 1.243839 94.937859 0.779823 c -94.241837 0.320543 93.439278 0.090904 92.530182 0.090904 c -91.829430 0.090904 91.268349 0.206907 90.846947 0.438915 c -90.425545 0.675657 90.101204 0.943176 89.873932 1.241472 c -89.646660 1.544502 89.471474 1.795448 89.348366 1.994312 c -89.206322 1.994312 l -89.206322 -3.772734 l -87.530182 -3.772734 l -h -89.177910 5.772722 m -89.177910 4.948858 89.298653 4.222059 89.540131 3.592324 c -89.781609 2.967323 90.134354 2.477266 90.598366 2.122154 c -91.062378 1.771776 91.630562 1.596586 92.302910 1.596586 c -93.003670 1.596586 93.588425 1.781244 94.057175 2.150562 c -94.530663 2.524614 94.885773 3.026508 95.122513 3.656244 c -95.363991 4.290714 95.484734 4.996207 95.484734 5.772722 c -95.484734 6.539767 95.366364 7.231055 95.129616 7.846585 c -94.897614 8.466850 94.544868 8.956907 94.071381 9.316756 c -93.602631 9.681339 93.013138 9.863631 92.302910 9.863631 c -91.621094 9.863631 91.048172 9.690809 90.584160 9.345165 c -90.120148 9.004256 89.769768 8.526036 89.533028 7.910506 c -89.296280 7.299711 89.177910 6.587116 89.177910 5.772722 c -h -98.183594 0.318176 m -h -102.927910 0.062494 m -102.236626 0.062494 101.609261 0.192703 101.045807 0.453119 c -100.482361 0.718271 100.034920 1.099426 99.703484 1.596586 c -99.372040 2.098480 99.206322 2.704540 99.206322 3.414767 c -99.206322 4.039767 99.329430 4.546396 99.575638 4.934654 c 
-99.821846 5.327646 100.150925 5.635411 100.562859 5.857949 c -100.974792 6.080487 101.429337 6.246207 101.926491 6.355108 c -102.428383 6.468744 102.932648 6.558706 103.439278 6.624994 c -104.102158 6.710221 104.639557 6.774142 105.051491 6.816756 c -105.468155 6.864105 105.771187 6.942230 105.960579 7.051131 c -106.154709 7.160032 106.251778 7.349426 106.251778 7.619313 c -106.251778 7.676131 l -106.251778 8.376888 106.060013 8.921396 105.676491 9.309654 c -105.297699 9.697911 104.722420 9.892040 103.950638 9.892040 c -103.150452 9.892040 102.523087 9.716851 102.068535 9.366472 c -101.613991 9.016093 101.294388 8.642040 101.109734 8.244313 c -99.518822 8.812495 l -99.802910 9.475374 100.181694 9.991472 100.655182 10.360790 c -101.133400 10.734843 101.654236 10.995260 102.217682 11.142040 c -102.785866 11.293555 103.344574 11.369312 103.893822 11.369312 c -104.244202 11.369312 104.646660 11.326698 105.101204 11.241472 c -105.560486 11.160979 106.003197 10.992892 106.429329 10.737211 c -106.860199 10.481528 107.217682 10.095638 107.501778 9.579540 c -107.785866 9.063441 107.927910 8.372153 107.927910 7.505676 c -107.927910 0.318176 l -106.251778 0.318176 l -106.251778 1.795448 l -106.166550 1.795448 l -106.052910 1.558706 105.863518 1.305393 105.598366 1.035505 c -105.333214 0.765619 104.980469 0.535980 104.540131 0.346586 c -104.099785 0.157192 103.562378 0.062494 102.927910 0.062494 c -h -103.183594 1.568176 m -103.846474 1.568176 104.405182 1.698385 104.859734 1.958801 c -105.319016 2.219217 105.664658 2.555391 105.896660 2.967323 c -106.133408 3.379256 106.251778 3.812495 106.251778 4.267040 c -106.251778 5.801131 l -106.180756 5.715904 106.024506 5.637779 105.783028 5.566756 c -105.546280 5.500468 105.271660 5.441282 104.959160 5.389199 c -104.651398 5.341850 104.350731 5.299237 104.057175 5.261358 c -103.768349 5.228214 103.533974 5.199805 103.354050 5.176131 c -102.918442 5.119313 102.511246 5.026983 102.132454 4.899142 c -101.758408 4.776036 101.455376 4.589010 101.223366 4.338063 c -100.996094 4.091850 100.882454 3.755676 100.882454 3.329540 c -100.882454 2.747154 101.097893 2.306812 101.528763 2.008516 c -101.964371 1.714956 102.515984 1.568176 103.183594 1.568176 c -h -109.453125 0.318176 m -h -115.731537 11.227267 m -115.731537 9.806813 l -110.078125 9.806813 l -110.078125 11.227267 l -115.731537 11.227267 l -h -111.725853 13.840904 m -113.401985 13.840904 l -113.401985 3.443176 l -113.401985 2.969690 113.470642 2.614578 113.607956 2.377836 c -113.750000 2.145828 113.929924 1.989578 114.147728 1.909086 c -114.370270 1.833328 114.604645 1.795448 114.850853 1.795448 c -115.035507 1.795448 115.187027 1.804918 115.305397 1.823858 c -115.423767 1.847532 115.518463 1.866472 115.589485 1.880676 c -115.930397 0.374994 l -115.816757 0.332380 115.658142 0.289766 115.454544 0.247154 c -115.250946 0.199804 114.992897 0.176130 114.680397 0.176130 c -114.206909 0.176130 113.742897 0.277929 113.288353 0.481529 c -112.838547 0.685127 112.464493 0.995260 112.166191 1.411926 c -111.872635 1.828592 111.725853 2.354160 111.725853 2.988630 c -111.725853 13.840904 l -h -116.718750 0.318176 m -h -122.997162 11.227267 m -122.997162 9.806813 l -117.343750 9.806813 l -117.343750 11.227267 l -122.997162 11.227267 l -h -118.991478 13.840904 m -120.667610 13.840904 l -120.667610 3.443176 l -120.667610 2.969690 120.736267 2.614578 120.873581 2.377836 c -121.015625 2.145828 121.195549 1.989578 121.413353 1.909086 c -121.635895 1.833328 121.870270 1.795448 122.116478 1.795448 c -122.301132 1.795448 122.452652 
1.804918 122.571022 1.823858 c -122.689392 1.847532 122.784088 1.866472 122.855110 1.880676 c -123.196022 0.374994 l -123.082382 0.332380 122.923767 0.289766 122.720169 0.247154 c -122.516571 0.199804 122.258522 0.176130 121.946022 0.176130 c -121.472534 0.176130 121.008522 0.277929 120.553978 0.481529 c -120.104172 0.685127 119.730118 0.995260 119.431816 1.411926 c -119.138260 1.828592 118.991478 2.354160 118.991478 2.988630 c -118.991478 13.840904 l -h -123.867188 0.318176 m -h -129.975143 0.090904 m -128.924011 0.090904 128.017288 0.322912 127.254974 0.786926 c -126.497398 1.255676 125.912643 1.909086 125.500710 2.747154 c -125.093513 3.589957 124.889915 4.570070 124.889915 5.687494 c -124.889915 6.804919 125.093513 7.789767 125.500710 8.642040 c -125.912643 9.499047 126.485558 10.166661 127.219460 10.644881 c -127.958092 11.127835 128.819839 11.369312 129.804688 11.369312 c -130.372879 11.369312 130.933960 11.274615 131.487930 11.085222 c -132.041901 10.895828 132.546173 10.588063 133.000717 10.161926 c -133.455261 9.740525 133.817474 9.181813 134.087357 8.485790 c -134.357239 7.789767 134.492188 6.932760 134.492188 5.914767 c -134.492188 5.204540 l -126.083099 5.204540 l -126.083099 6.653404 l -132.787643 6.653404 l -132.787643 7.268934 132.664536 7.818176 132.418320 8.301131 c -132.176849 8.784086 131.831207 9.165241 131.381393 9.444597 c -130.936310 9.723953 130.410751 9.863631 129.804688 9.863631 c -129.137070 9.863631 128.559418 9.697911 128.071732 9.366472 c -127.588776 9.039767 127.217087 8.613631 126.956673 8.088063 c -126.696259 7.562494 126.566048 6.999047 126.566048 6.397722 c -126.566048 5.431812 l -126.566048 4.607949 126.708092 3.909559 126.992188 3.336642 c -127.281013 2.768459 127.681107 2.335222 128.192474 2.036926 c -128.703842 1.743366 129.298065 1.596586 129.975143 1.596586 c -130.415482 1.596586 130.813202 1.658138 131.168320 1.781244 c -131.528168 1.909084 131.838303 2.098478 132.098724 2.349426 c -132.359146 2.605108 132.560364 2.922342 132.702408 3.301130 c -134.321732 2.846586 l -134.151276 2.297344 133.864822 1.814388 133.462357 1.397722 c -133.059891 0.985790 132.562729 0.663820 131.970886 0.431812 c -131.379028 0.204540 130.713776 0.090904 129.975143 0.090904 c -h -135.507812 0.318176 m -h -137.041901 0.318176 m -137.041901 11.227267 l -138.661224 11.227267 l -138.661224 9.579540 l -138.774857 9.579540 l -138.973724 10.119312 139.333572 10.557286 139.854401 10.893461 c -140.375229 11.229635 140.962357 11.397722 141.615768 11.397722 c -141.738876 11.397722 141.892761 11.395355 142.077408 11.390620 c -142.262070 11.385885 142.401749 11.378782 142.496445 11.369312 c -142.496445 9.664767 l -142.439636 9.678972 142.309418 9.700279 142.105820 9.728688 c -141.906952 9.761832 141.696259 9.778404 141.473724 9.778404 c -140.943420 9.778404 140.469940 9.667135 140.053268 9.444597 c -139.641327 9.226794 139.314621 8.923763 139.073151 8.535506 c -138.836411 8.151983 138.718033 7.714010 138.718033 7.221585 c -138.718033 0.318176 l -137.041901 0.318176 l -h -142.949219 0.318176 m -h -146.159439 6.880676 m -146.159439 0.318176 l -144.483307 0.318176 l -144.483307 11.227267 l -146.102631 11.227267 l -146.102631 9.522722 l -146.244675 9.522722 l -146.500351 10.076699 146.888611 10.521774 147.409439 10.857948 c -147.930267 11.198857 148.602631 11.369312 149.426498 11.369312 c -150.165131 11.369312 150.811432 11.217797 151.365417 10.914767 c -151.919388 10.616471 152.350266 10.161926 152.658020 9.551131 c -152.965790 8.945070 153.119675 8.178025 153.119675 7.249994 c 
-153.119675 0.318176 l -151.443542 0.318176 l -151.443542 7.136358 l -151.443542 7.993365 151.221008 8.660979 150.775925 9.139199 c -150.330841 9.622154 149.720047 9.863631 148.943542 9.863631 c -148.408493 9.863631 147.930283 9.747627 147.508881 9.515620 c -147.092209 9.283612 146.763138 8.945070 146.521667 8.499995 c -146.280182 8.054919 146.159439 7.515146 146.159439 6.880676 c -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 532.000000 649.681824 cm -BT -20.000000 0.000000 0.000000 20.000000 0.000000 0.318176 Tm -/F1 1.000000 Tf -[ (\025) (\023) 16.690350 (\t) (\020) (\021) (\t) (\016) (\024) (\006) (\000) (\003) (\024) (\024) 6.214523 (\t) (\023) (\016) ] TJ -ET -Q -q -1.000000 0.000000 -0.000000 1.000000 351.000000 737.681824 cm -0.233333 0.233333 0.233333 scn -0.000000 0.318176 m -h -5.965909 0.090904 m -4.943182 0.090904 4.062500 0.332382 3.323864 0.815336 c -2.585227 1.298290 2.017045 1.963535 1.619318 2.811073 c -1.221591 3.658612 1.022727 4.626889 1.022727 5.715904 c -1.022727 6.823858 1.226326 7.801604 1.633523 8.649142 c -2.045455 9.501415 2.618371 10.166661 3.352273 10.644881 c -4.090909 11.127835 4.952652 11.369312 5.937500 11.369312 c -6.704545 11.369312 7.395833 11.227267 8.011364 10.943176 c -8.626894 10.659085 9.131155 10.261358 9.524148 9.749995 c -9.917140 9.238631 10.160985 8.642040 10.255682 7.960222 c -8.579546 7.960222 l -8.451705 8.457381 8.167614 8.897722 7.727273 9.281245 c -7.291667 9.669502 6.704546 9.863631 5.965909 9.863631 c -5.312500 9.863631 4.739583 9.693176 4.247159 9.352267 c -3.759470 9.016093 3.378314 8.540241 3.103693 7.924710 c -2.833807 7.313915 2.698864 6.596585 2.698864 5.772722 c -2.698864 4.929919 2.831439 4.196017 3.096591 3.571017 c -3.366477 2.946016 3.745265 2.460695 4.232955 2.115051 c -4.725379 1.769407 5.303030 1.596586 5.965909 1.596586 c -6.401515 1.596586 6.796875 1.672344 7.151989 1.823858 c -7.507102 1.975372 7.807765 2.193176 8.053978 2.477266 c -8.300190 2.761358 8.475379 3.102268 8.579546 3.499994 c -10.255682 3.499994 l -10.160985 2.856056 9.926610 2.276037 9.552557 1.759937 c -9.183239 1.248573 8.693182 0.841377 8.082387 0.538347 c -7.476326 0.240051 6.770833 0.090904 5.965909 0.090904 c -h -11.171875 0.318176 m -h -14.382103 14.863631 m -14.382103 0.318176 l -12.705966 0.318176 l -12.705966 14.863631 l -14.382103 14.863631 l -h -15.917969 0.318176 m -h -24.327061 4.778404 m -24.327061 11.227267 l -26.003197 11.227267 l -26.003197 0.318176 l -24.327061 0.318176 l -24.327061 2.164766 l -24.213425 2.164766 l -23.957743 1.610790 23.560015 1.139673 23.020243 0.751415 c -22.480469 0.367891 21.798651 0.176130 20.974787 0.176130 c -20.292969 0.176130 19.686907 0.325277 19.156605 0.623573 c -18.626303 0.926603 18.209635 1.381149 17.906605 1.987211 c -17.603575 2.598007 17.452059 3.367419 17.452059 4.295449 c -17.452059 11.227267 l -19.128197 11.227267 l -19.128197 4.409085 l -19.128197 3.613631 19.350735 2.979162 19.795809 2.505676 c -20.245621 2.032190 20.818537 1.795448 21.514561 1.795448 c -21.931227 1.795448 22.354996 1.901983 22.785868 2.115051 c -23.221474 2.328119 23.586056 2.654825 23.879618 3.095165 c -24.177914 3.535506 24.327061 4.096585 24.327061 4.778404 c -h -27.539062 0.318176 m -h -36.800426 8.784085 m -35.294743 8.357949 l -35.200047 8.608896 35.060368 8.852741 34.875710 9.089483 c -34.695786 9.330960 34.449574 9.529824 34.137074 9.686074 c -33.824574 9.842324 33.424480 9.920449 32.936790 9.920449 c -32.269176 9.920449 31.712831 9.766566 31.267756 9.458801 c -30.827415 9.155771 30.607244 8.769881 30.607244 
8.301131 c -30.607244 7.884464 30.758760 7.555392 31.061790 7.313915 c -31.364820 7.072438 31.838305 6.871207 32.482243 6.710222 c -34.101562 6.312494 l -35.076942 6.075752 35.803741 5.713536 36.281960 5.225847 c -36.760181 4.742892 36.999290 4.120260 36.999290 3.357949 c -36.999290 2.732948 36.819366 2.174236 36.459518 1.681812 c -36.104404 1.189388 35.607243 0.801130 34.968040 0.517040 c -34.328835 0.232950 33.585464 0.090904 32.737926 0.090904 c -31.625237 0.090904 30.704308 0.332382 29.975142 0.815336 c -29.245975 1.298290 28.784327 2.003782 28.590199 2.931812 c -30.181108 3.329540 l -30.332623 2.742418 30.619081 2.302078 31.040483 2.008516 c -31.466619 1.714956 32.022964 1.568176 32.709518 1.568176 c -33.490768 1.568176 34.111034 1.733896 34.570312 2.065336 c -35.034328 2.401510 35.266335 2.803972 35.266335 3.272722 c -35.266335 3.651510 35.133759 3.968745 34.868607 4.224426 c -34.603458 4.484843 34.196262 4.678972 33.647018 4.806813 c -31.828835 5.232949 l -30.829782 5.469691 30.095881 5.836642 29.627131 6.333801 c -29.163116 6.835695 28.931108 7.463063 28.931108 8.215904 c -28.931108 8.831434 29.103930 9.375941 29.449574 9.849426 c -29.799952 10.322911 30.275805 10.694597 30.877131 10.964483 c -31.483192 11.234369 32.169746 11.369312 32.936790 11.369312 c -34.016335 11.369312 34.863873 11.132570 35.479404 10.659086 c -36.099670 10.185601 36.540009 9.560600 36.800426 8.784085 c -h -37.988281 0.318176 m -h -44.266689 11.227267 m -44.266689 9.806813 l -38.613281 9.806813 l -38.613281 11.227267 l -44.266689 11.227267 l -h -40.261009 13.840904 m -41.937145 13.840904 l -41.937145 3.443176 l -41.937145 2.969690 42.005802 2.614578 42.143112 2.377836 c -42.285156 2.145828 42.465080 1.989578 42.682884 1.909086 c -42.905422 1.833328 43.139797 1.795448 43.386009 1.795448 c -43.570667 1.795448 43.722183 1.804918 43.840553 1.823858 c -43.958927 1.847532 44.053623 1.866472 44.124645 1.880676 c -44.465553 0.374994 l -44.351917 0.332380 44.193302 0.289766 43.989700 0.247154 c -43.786102 0.199804 43.528053 0.176130 43.215553 0.176130 c -42.742069 0.176130 42.278053 0.277929 41.823509 0.481529 c -41.373699 0.685127 40.999645 0.995260 40.701351 1.411926 c -40.407791 1.828592 40.261009 2.354160 40.261009 2.988630 c -40.261009 13.840904 l -h -45.136719 0.318176 m -h -51.244675 0.090904 m -50.193539 0.090904 49.286812 0.322912 48.524502 0.786926 c -47.766926 1.255676 47.182171 1.909086 46.770241 2.747154 c -46.363045 3.589957 46.159447 4.570070 46.159447 5.687494 c -46.159447 6.804919 46.363045 7.789767 46.770241 8.642040 c -47.182171 9.499047 47.755089 10.166661 48.488991 10.644881 c -49.227627 11.127835 50.089371 11.369312 51.074219 11.369312 c -51.642399 11.369312 52.203480 11.274615 52.757458 11.085222 c -53.311436 10.895828 53.815697 10.588063 54.270241 10.161926 c -54.724789 9.740525 55.087006 9.181813 55.356892 8.485790 c -55.626778 7.789767 55.761719 6.932760 55.761719 5.914767 c -55.761719 5.204540 l -47.352627 5.204540 l -47.352627 6.653404 l -54.057175 6.653404 l -54.057175 7.268934 53.934067 7.818176 53.687855 8.301131 c -53.446377 8.784086 53.100735 9.165241 52.650925 9.444597 c -52.205849 9.723953 51.680279 9.863631 51.074219 9.863631 c -50.406605 9.863631 49.828953 9.697911 49.341263 9.366472 c -48.858311 9.039767 48.486626 8.613631 48.226208 8.088063 c -47.965790 7.562494 47.835583 6.999047 47.835583 6.397722 c -47.835583 5.431812 l -47.835583 4.607949 47.977627 3.909559 48.261719 3.336642 c -48.550545 2.768459 48.950638 2.335222 49.462002 2.036926 c -49.973366 1.743366 50.567593 1.596586 
0.000000 0.000000 6.715729 0.000000 15.000000 c -0.000000 23.284271 6.715729 30.000000 15.000000 30.000000 c -23.284271 30.000000 30.000000 23.284271 30.000000 15.000000 c -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 326.000000 742.454529 cm -1.000000 1.000000 1.000000 scn -0.000000 -4.454529 m -h -3.771307 -4.454529 m -0.475852 -4.454529 l -5.497159 10.090926 l -9.460228 10.090926 l -14.474432 -4.454529 l -11.178978 -4.454529 l -7.535512 6.767062 l -7.421875 6.767062 l -3.771307 -4.454529 l -h -3.565341 1.262801 m -11.349432 1.262801 l -11.349432 -1.137767 l -3.565341 -1.137767 l -3.565341 1.262801 l -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 326.000000 742.454529 cm -BT -20.000000 0.000000 0.000000 20.000000 0.000000 -4.454529 Tm -/F1 1.000000 Tf -[ (B) ] TJ -ET -Q -q -1.000000 0.000000 -0.000000 1.000000 557.000000 434.000000 cm -0.366667 0.366667 0.366667 scn -30.000000 15.000000 m -30.000000 6.715729 23.284271 0.000000 15.000000 0.000000 c -6.715729 0.000000 0.000000 6.715729 0.000000 15.000000 c -0.000000 23.284271 6.715729 30.000000 15.000000 30.000000 c -23.284271 30.000000 30.000000 23.284271 30.000000 15.000000 c -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 565.000000 446.454590 cm -1.000000 1.000000 1.000000 scn -0.000000 -4.454590 m -h -1.264205 -4.454590 m -1.264205 10.090865 l -7.088068 10.090865 l -8.158144 10.090865 9.050663 9.932247 9.765625 9.615013 c -10.480588 9.297777 11.017993 8.857437 11.377841 8.293990 c -11.737690 7.735278 11.917614 7.091338 11.917614 6.362171 c -11.917614 5.793990 11.803978 5.294463 11.576705 4.863592 c -11.349432 4.437456 11.036932 4.087077 10.639205 3.812456 c -10.246212 3.542569 9.796402 3.350808 9.289773 3.237172 c -9.289773 3.095126 l -9.843750 3.071452 10.362216 2.915202 10.845171 2.626376 c -11.332860 2.337551 11.728220 1.932721 12.031250 1.411888 c -12.334281 0.895789 12.485796 0.280259 12.485796 -0.434703 c -12.485796 -1.206484 12.294034 -1.895405 11.910511 -2.501465 c -11.531724 -3.102791 10.970644 -3.578644 10.227272 -3.929022 c -9.483901 -4.279400 8.567708 -4.454590 7.478693 -4.454590 c -1.264205 -4.454590 l -h -4.339489 -1.940386 m -6.846591 -1.940386 l -7.703599 -1.940386 8.328599 -1.777033 8.721591 -1.450328 c -9.114584 -1.118889 9.311080 -0.678548 9.311080 -0.129306 c -9.311080 0.273157 9.214016 0.628270 9.019887 0.936035 c -8.825758 1.243800 8.548770 1.485278 8.188921 1.660467 c -7.833807 1.835656 7.410038 1.923251 6.917614 1.923251 c -4.339489 1.923251 l -4.339489 -1.940386 l -h -4.339489 4.004217 m -6.619318 4.004217 l -7.040720 4.004217 7.414773 4.077607 7.741477 4.224388 c -8.072917 4.375903 8.333334 4.588971 8.522728 4.863592 c -8.716857 5.138213 8.813921 5.467285 8.813921 5.850807 c -8.813921 6.376376 8.626894 6.800145 8.252841 7.122115 c -7.883523 7.444085 7.357955 7.605070 6.676137 7.605070 c -4.339489 7.605070 l -4.339489 4.004217 l -h -f -n -Q -q -1.000000 0.000000 -0.000000 1.000000 565.000000 446.454590 cm -BT -20.000000 0.000000 0.000000 20.000000 0.000000 -4.454590 Tm -/F1 1.000000 Tf -[ (C) ] TJ -ET -Q -q -1.000000 0.000000 -0.000000 1.000000 537.000000 167.022461 cm -0.364706 0.364706 0.364706 scn -13.500000 456.477539 m -14.000000 456.477539 l -14.000000 456.529999 l -13.989107 456.581329 l -13.500000 456.477539 l -h -13.500000 288.477539 m -13.000000 288.477539 l -13.000000 288.270447 l -13.146446 288.123993 l -13.500000 288.477539 l -h -20.000000 281.977539 m -20.353554 281.623993 l -20.707108 281.977539 l -20.353554 282.331085 l -20.000000 281.977539 l -h -13.500000 275.477539 m 
-13.146446 275.831085 l -13.000000 275.684631 l -13.000000 275.477539 l -13.500000 275.477539 l -h -13.500000 12.477539 m -13.996432 12.417908 l -14.000000 12.447632 l -14.000000 12.477539 l -13.500000 12.477539 l -h --0.130093 469.494751 m -4.729420 468.185303 7.601236 466.699371 9.460844 464.685699 c -11.319435 462.673187 12.226254 460.071045 13.010893 456.373749 c -13.989107 456.581329 l -13.196995 460.313873 12.246413 463.143372 10.195491 465.364166 c -8.145586 467.583862 5.055779 469.133026 0.130093 470.460327 c --0.130093 469.494751 l -h -13.000000 456.477539 m -13.000000 288.477539 l -14.000000 288.477539 l -14.000000 456.477539 l -13.000000 456.477539 l -h -13.146446 288.123993 m -19.646446 281.623993 l -20.353554 282.331085 l -13.853554 288.831085 l -13.146446 288.123993 l -h -19.646446 282.331085 m -13.146446 275.831085 l -13.853554 275.123993 l -20.353554 281.623993 l -19.646446 282.331085 l -h -13.000000 275.477539 m -13.000000 12.477539 l -14.000000 12.477539 l -14.000000 275.477539 l -13.000000 275.477539 l -h -13.003568 12.537170 m -12.526070 8.561279 11.627912 6.116180 9.978045 4.508850 c -8.322450 2.895935 5.825550 2.042480 1.927628 1.472260 c -2.072372 0.482819 l -6.009235 1.058716 8.780881 1.946442 10.675861 3.792572 c -12.576571 5.644287 13.511021 8.376160 13.996432 12.417908 c -13.003568 12.537170 l -h -f -n -Q -q --1.000000 -0.000000 0.000000 -1.000000 500.000000 657.000000 cm -0.364706 0.364706 0.364706 scn -0.000000 0.500000 m -70.000000 0.500000 l -70.000000 1.500000 l -0.000000 1.500000 l -0.000000 0.500000 l -h -f -n -Q -q -0.000000 1.000000 -1.000000 0.000000 450.548828 655.000000 cm -0.364706 0.364706 0.364706 scn -20.353554 20.195274 m -20.548815 20.390537 20.548815 20.707119 20.353554 20.902382 c -17.171574 24.084362 l -16.976311 24.279625 16.659729 24.279625 16.464466 24.084362 c -16.269203 23.889099 16.269203 23.572517 16.464466 23.377254 c -19.292892 20.548828 l -16.464466 17.720402 l -16.269203 17.525139 16.269203 17.208557 16.464466 17.013294 c -16.659729 16.818031 16.976311 16.818031 17.171574 17.013294 c -20.353554 20.195274 l -h -0.000000 20.048828 m -20.000000 20.048828 l -20.000000 21.048828 l -0.000000 21.048828 l -0.000000 20.048828 l -h -f -n -Q -endstream -endobj -99 0 obj -<< /Type /XObject /Subtype /Image /BitsPerComponent 8 -/ColorSpace /DeviceRGB /Filter [ /FlateDecode ] /Height 954 -/Length 170 0 R /SMask 171 0 R /Width 1530 >> -stream -xuYu /!",ԑ8Vd5 MY3x $2r+ 86SG61v@8T9t{b]-3N)3P/X5PY{g{9<~?Zzgdzk_k%D"H$D"H$"_B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!؁O{MR_qi^ ) <ڍxy5<"$B=UBgh?Re޼v+ر [jB&-PzrJ:i,2ﻼp:)atFL׬[''i@h1RՆD⠂B*&M/ v#F'!(W֟IlԵ5_^꯳78ڢ`F$: f5dh"CxKZ MGG 07u+Btdi -!B!U字 -ˉHxyeTqo/FUk.wiVAC@h=%fu95&}a&xƵ\v}8j.]%&Xj%U\bBA2dS}Q7Z:$hx5{\%{H"k™d Zi M΁tWߥFi%}KBslKݐMXs׭44ߥ6km~ͅ(&rCv{:)B!{? jj¼1{TPuR3^mO%("Ufşh_BO߬Qx0/78 -!3i0eן1D AVa0hT Flx$ф vcUh(Zv hnO>=?#>P@ه|b:%( -פirCR U'~!Dl aJjEpĮ0o޴P'5c&poa*/vqa郠-S7t h"d@R5.'1fKVa. 
4[//?h 5 -_d>T?kQ7:Kt,UR Uf'k[D>ݐM\o64] -iM@9'ZUg 'p`”2Bq`">p8CmbW7foZGu - ݧ?*RvJo -j"-F6 vy5Y2̐cXO./'iz0yu:cQEaWԄ+OXmq,5cskC&'6tqZСy񽰱%hUs4-jPUICjTdЍjP 3ƔZ*ar) ñq(ifn.\C-PaK$ $nIB$0 C l aJX!Bp86+7-Iǣ"\D.MNuT«*/U+@t1.bB?kfxų:R|ɤz#6a|tAX5׃>ĄْDao 7Xҩ3su,TIzU`Ҧh\ɩVh5J׫پڄ1t/>}]>co -$nIB$0 C l aJ%yWo_47=oASLɬ2C4Ů0{zM -M uRBqtؒ% $0 173-&)eb01&p)qz׻n [tD_3|2EMNp86+7-IB!aK▴ C/|]y[/~H-/GI;^tPcI`bngfͣw?,mq7mQ\g99L)$qwx%/{;蠉8fdV苝`">p8CmbW7foZBG+&teؒ8]|'v~>QcI`3Y{'Ɖ뷓߰WI`ā0vNlHC4sٛEG3nS)ebΐ/?V{Dˋ;xK_Los+߸q:h@z~yQ1Кʨy ,V9`&Ofu;蠉~ϗJ3j>˕\4C=_3X×󩙖`ޘ|7GP'5 !; d8Ҍ:١|^-C걋j -vb LØp{6Z3N᜙|c~nd @<3mz=Ds^ȜTm_XSOo2KB gH{oB=rt5c|ҕeهtݨu|^2Q&v;蠉~0jPpe=ddHn٫3Ɲd>&Í]a3=֔r,]^'q0izۿ"_ tyzf@!`ǡt~;Lm$i'/zGs.h:֙7m"bsM-TC ;:ۚ%]sUn. -krϏW -Jy ^كD 2///I$0ּw܇Cyf:Bhxf]-xaj¯$/Xv +Vc1 -9/54V+U/-ImSO9W&n4ϱ}6X^Lknmv^մ:zfdV# +̯M2IcM;꡶{{nWep;__8){^N04ycFp=/{1<?`QPo\AsB._pQPocexGAsVOP?w9~EAmtlMoSaUDj#Mf$eH  - [>IHAs4'`NQP"9c !SHAs4g`NQP 9EP"9c9 SVJىmݼv{ڠ2|qvfm!Fvi76jf wJt4\l]͡T(B=pZ)8沱$合STNOzғd uf0l;5]blgxȤH[;5f$rX9mǭAjlZ)5V#ev0tk*`OZA뗟xO@t9Ida&ؕPP+2(D/0#OjX2~l[ɉb끺Y+U`gXW[7Fxad:xHC|4jo䧮Se Ŷ悙d>&96A%kX֐ݿs4kϛ˯C Mjv8CmiICy=Ho%TRm6s-lڻjkɺaB1_vKgO4'`NQP"9c !SHAs4g`NQP 9EP"9c9 SVCnU6Gkj=`bO=$iptp !(kp0HuiJ! iE[ B¸+HUXCtWh[U6E!Q֏'[3;e$0%0 73[7ClܞIl  C+N&zϗU0vqZ~V!b>Nӭ![i ̟}x}If5fY]۫=mf΃Um< WKU"u)!'5+U`gX'['桙7h+I4beWgPU~Uau6dfdV@{t) 3;fObKǺBkݿBx|0i,hC|??-|/yP * wzc?Yt1n}qKW~MYل;{HޫnM4jü1{#PgO4'`NQP"9c !SHAs4g`NQP 9EP"9c9 SVenj/9ǀ+ 7[%qHHʀpCtKQVQ{!fwV* ihµhcsKBsn\A-i\[0MtC -xhpxR5ˀI`ѵO3|3gi `|sh4~`>37e6<3mў`f -fYd_VNu$|~a& {S^5[fi^ W%:.SѮ5*0AMslɌ'3m Xthͭ=-^t:33$ɬn4oaG|?df'4 -M q^¤P 曭;_}w>7*UO0{T?PϿ([}6F{h7aޘiNj^Drs@$g9!sڄ#hN6Dr@$g9+sL)#hl3J{uR`am}uUV-OC$i=]!i|p^ 7+ JغF\X}+J]Ch7*&݄\_NF:N O -<>|_LX 3ֹΌܛu{,g`ɖ -fgwd[U*N`u?IMin` <2NG0d bx|M piŒY݌--}6ffl0IdD]ejhքǚ->|e -|&-l@l\1y65Q!gnU1XYSf 3|2AMۄt;fTi|Fl6zg -^^'QcUkZTUDĊ.t]20ilC=%NHFͻ~dNl>[ڧ>qՍz=6>ZJ5fFh7`ޘiNj^Drs@$g9!sڄ#hN6Dr@$g9+sL)#h*e{{vj+.c+" $ip Ͱ$d]HJ4+Բ<7Z3TcVlxLͅիKZ3k.rcj0BnBgZB0 cFq{KD~if΍ު -*װ"9A\5:qU #7Cyfc+L0dYZn{B/unR&vIᴤ_f~5e&ʑTĿJӋ3d#.*;z׻AM{}vcsU38@HElѨq6}vi:Y5Ԋ-/+ 3|2AM(W"6h"_gު|VUVP.áj'"'`)UU'V϶T[MiN{ -M$7foZ1#hN6!DrB0(12#h -6s6Drs@0(-lIPE$V áj¼1{BԼ4HAsB !3FМ9EAm4HAsV S9 3FМ9EAlaK▴zBPGc2=֫;$0 q$Fgf[LS.Ƀ7J`vAM$Rs[tS|2Euh(1{BԼ4HAsB !3FМ9EAm4HAsV S9 3FМ9EAlaKtRT㱸=W{0~^_0̶B -E{D ٛ@$g9DrB0(M1) -js@$g DrB0("1) -jg [+#h;^4Z;!yfń0LB\!XPٛ@$g9DrB0(M1) -js@$g DrB0("1) -jg [$!!yfń0LB\!XPٛ@$g9DrB0(M1) -js@$g DrB0("1) -jg [$!!yfń0LB\!XPٛf!b + $20bBR!Zv86+7-IB!B!8O^W{dĮ0os8F,B!B͛s@=&vycé6f!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!B!]B!B!B!B!B$P &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#'\5!jB6X&jBl2BZqEP &kթjBˊ&z&#ŁxS/..sS_~2GJ~/5W1f7dq|p"(7^􎯙+UwZ}:/X9& j V_: uh`u*/sE8GH "~3Nl@ 6!-|OLf筤1I+[m~l%y;A$ T<;3|N#T8b}+caΟ~ jb`ýwXtRzcq:DzBT5Im~#-#邗AmŰik*{0cuy0k? 
`'v=zAʕ1;AGtیck;f&V6vCJXևjی]VA>'*T5baja ^VFz&#[L#yYaMTHW`~WS)qY]&Os&3ze]E`EP;=wdZ1lhfekpDMU4ߺzU:\N9!M& 4 &Vcfm{Qk~ZqL~V4=)֛L_~F}%5+WVX&ZkkrlhZ xnVZ=j4U2ةW -FR{:VQw.:2 U?'׳yrUxmtOX%ip^fVd kѱWC)TMĕk۶tьO ^ ->eSP X)o<XVIӚ}:"+hXs $X[J`u*0M cR揱kG ^ mE(^7W;$6 UӡlVmEVа.}5dBմ87ki*Siym+Go$X{EcH[i~AV½Y+`5əVCds=͚m9iC=ujںeMڪutbհ:AMdX]̜%1򓫢Odu$8"#TDfP[Nd''LQ&#cEP6lSAMٱbNEPk/+mMFH3Fўw.X:_5!fGU#NEPk/+mMFH3Fўw.X:_5!fGU#NEPk/+mMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH !NܹjB!Ԅbm:AM1xYԄXd&AM!SԄAMMFH|'\e# ܹjyDO@>lrx⍇F_51!xJxpd# :Ae1M.,4*/+Z3Lw&p#hzB' 6!}`Q z-,qfEPKJx(TGM!AML^3g&1NEP;"xzxp9dC %ˊDӝI69\: MFHXBz FsԶҷ35 -xexl.`EPDI6BSԎ^?sBbI"E@+:tgM1/,4z`Ш²g\D茇ṕB%|da9" Ą+:}f3T#GǜC69ШXj>Jx0ݙdEx 0d9E-,4᮷lę;Am;,}+:!\sPYX69lF_51!xJxpd# :Ae1M.,4*/+Z3Lw&p#hzB' 6!}`Q z-,qfEPKJx(TGM!AML^3g&1NEP;"xzxp9dC %ˊDӝI69\: MFHXBz f,;OGyKI9Pă,,6Cd3GҗĮ&&/^3lƱ^ N(XvDs&vU^pVtˊDӝI69\:,jd9Eޗr_GbQw=_[n|mHWg-Om[2~~VpOY/yN<%Td3 -B=1țCD\mp;Am;,}s>+ Bgw0.Ix-yJ;Q1-mB 0S9 -6!}`QNz;%k! z{b}g _{J}+hmO"B"S.-SC=_ *Y]/yKўۡ^^NX/|sԶ7Rۅxxy]?z˳^)TGlqI{}oYSU.6lr-"1V6c0u^l7u/3o+"C.f'w kbl1" ;[Yqp?vm/=N!6䛙Z~<qI=xΖU'&41ן%NEP;"x\.k>R1n&v+<#]Tm' ^ˬ)ګ}6[3~%#k0A-GZx0]/S"Rwvӛ;mWu(6G'*kmxn؎ss8RD` |+e~WvdoIb[j&&/6%–[ -`^oǶ֊V=k`; 3ܧswv?4gZk.El3 }XUCaz~08j,vVw`[:ń:Ae w谜+: FVĜEx1M&v+ 0Xa5NDP|%<ֽV:Ae w|{ՖI7'1;g{n1o^m݂iN -'zN˩pfY#A 6!F=z)rakEXF,a -W=ݳ V^VȄO!rH&gr ļ#llrH֌IB¼Fmcdr1D!0!qX|1EͰ0)ӁMFHX\+ 6G'}{ W6lJTs7¦]uXMGWE#nmk?Tk+AML^lR-aIDXR,rHC]\aH#\D\f%yAU-tƃDq:A;Ӷ^lŌy_*lGKo6|jI0MemnESԎ^VɠN+^H#XL&8t?KۻNmWN69[%!Lɡ+(Y%ϐo9avIރyB+C\"z/P)X%-\.ˊ!$B G\f%yAU-tƃzögG9MZ/ѝ2 g7Z˚As .Bhdߙl2B5J&wZNh] p tw M3NsӴW@5gs 6Jl8=jw.vX6t ,AVr,N ,=Bg3W39:9U7ևdSԇ[({Oi=`9fMGWS>ӷcIO[u%%0" M%"-)K%Bp+ i'JF-8^3'kk=VA?nU2u$"%4g/kQ1[pGp7Ð #Ahtz\#8KE;ӁMFHX*X~_3Co3Qw='v+ ؟VSnZ,S]WC:Ȧ-O.,b]Ěd^\0r3Ю%%«XZ~4yEPK_DX{"l(Tt-UBg'[ ^Y"xmWs4<p"m/JPl k,j! m9ϳ4۰jLĪʩ1%=+0ᚖ?ԪHa>,ԃPRySZ,S'>W76as$2+4B+҆e3wHRp^VXk"-niVFͼ5$Wķ /ʡ^ jbBb[.l5u ,)Kw|oJCibǹ9 %\~g1}L ~;r3mC} 4BG6%z,6sIoS_* T>_K"Q iGݱN1!NEP;"xY%}-oTF'U xpL/2koMVbìVTꗨCf3df,+EE̋k{,sg'Q\ |N$.PaeEȱD, jWӷΣA-21pg q|k~ZOCi jbC ߱L׋Y~NI69\+q%Mbӌ=I5o`DH:sMht)z2kL -6!}`Q z-,_ltm9Pă,,6Cd3pl.]<3d jbB>3ɦin`u*#ǃc!]Xh_pVrˊDӝI69\:boCFZ'>F=(,XGO]n\D茇ṕB%|da9"q,,]Ą+:}fN%y86NEP;"xzxp9dC /89eEPV"tƃ$.cM^XhtAl\2tMFHXBz f,\VN -uvaĝ9Pă,,6Cd3{wuPĄ+:}f8vޫgSԎ^?sB# ΊxY"|`3&AC]EM>F=83p"mo%BglrjbB>3F`u*#ǃc!]XhT, ^Vh%BgO=c?я2<>'?|Wo}[9mo_OllGe/}K7AZ578e#kG>'g? AM\q3߼yOl䳿&\p fo3k;8K+aZ,. .O(8/w@L^ZjCH;={fx$E5Okl2BZs#슩 ̏UhQ `f!L,˶~0EP;A _΋ 3;Õ7q\EP+G]wR51U(b`\Z1'[a -j뇑Gwg#71~Y@|"AaЙL5?N|;ˮ0]EPᆅMk0] #Ǒqb%/s0ݸw|~ľ@feAwe#F&:KB .MnC x!͖J%E:ZOpH]efyDg7F*1 VBsXy%%9$M'1ZLFrܥW=1V 唕m/5 -U7|`EP%;cova DgX0c^xg)BA2:*A 5]83? 6#okہc f5%Q -ø' D5 T5fldj?re|nǤôAM-. |o§x+l| f$^XpA LZxFix%v+ښ!G'#ܳp!# P$fI+CӀ0+}#FSS0BU|)\!B.YN d(/;=9HPnmj]L`]C0i 33,pO/q/? 8v%I52sG]؀؅yfF= ؼpg:Og{dr&zi9>GL^MdNaaS,;E@G2yu0EP+IC=FZ!`b=-鑫 -[[R̙q"NS~\Yj@|`ފZ846Ƙя`EdNXD٠i)&HRW|5c2"VJ"X< jCpZ3ܭXBA y!\R3A4AC6P̌e ʏ0)rE`h"b:ϜD0 V~* .8mE0E"-c:sC8A7l[82+.$křl@ ^5х2]NO+]0fbx7,W(v0"ÚvpȜ-Zj-Of{(9Kt@od99\{1i":oʎ>LVuE=w0xr&- -#$9+N&*G#=' %2ɡ**GH6u7yfAoΒ>: fOR9MgT\U;j!?#ށk!-= ,vC XKC9a{W8$+ƍRm\t]OW\5Űf&2c$Ml|I`,lT$QCq߻1` 2e3K%fqMq|4Nst~U0" vBX%#Hv" gB8S-v;?-||'AB!l(\-!Qҍ10v,Vf`c/@gnӑɫY刾%p/| 3]L"x.:ylDS~`1?YhL~-+e*0Z< -0"#Jy>\%ȜZaEudNI\ Wu]'s>Mni?I" 'mr[e[c}2ǀf <F;Sa z8;$c!gr-+'+rNX! 
)~;#}\TT_)9-@Ch{+?#Sh1!N9C }b<\JI:m]9G #ɫU ͑$"%-!: ~Kfcd6j&Xb -rNG7;'eGg?7$© P@-BA2OV5q0X1I6~ ڀ09@J^3W@Yta~*XjB nق)6nHYp1@ ?YUgorNX!Ui JNp^MfsԆS7z+HSvi"@"co -X%BlኇI0 e/ sI/,:JF#- z7nq΁( =rGgGW,#y0"?l&mfZ9-AL 6C(K ( -mQc9EPC"@qrN U5<ܞ {`+Bڭ8P-*!gŁJF%9L1Jx"v.&g*X.bܩ䒇omi2xvwHp%a~疆wݦQ jCp+_ \ 9*\2ՀTQ?w -$>^#2:$jy!H-,،(Gs.8okmbHteb+a ;#BN~*B)bɏ%Cj#$aEP;g#<>EL`fѶHa"?t೼Ba E Ł9QĒp%a fc9k ǃ1"<4c|ىԷlB[LN/0'EP[$&i!vA팶)7JqIwn, :C6HSV]'$~OpG3!B x<jHcDk!}>,/iMJZNv(}NeF -L-&*hSmeX YB?ĎI>혩0"' >`ș`ѶHnI2p^m_1447XUr)X|fa 2o@?-9E͚"S8 -_aD8V3+.h X y,a,=~+!0EP;:_8X -pv%gӚ=$uR3C;aҩl I=ښ -/NAmÝNBTpk#g?ֆ'x8i 5u A#&ɴ0rr+6L1Ia0톷C*->q(#op4Ed[d(:]- ,ݦt8 # dia`\EP;7ґ *ș Vh zkbb7 -I***@&;cFNΚ!C?>sr", e):qrH zTxܕ1EE3(qBD7's%[]N89cy 9qvJR[\ JHg5AmnàG89[OH~`$ ='$vm 5>yX= +I-W-ЌWN`IDts,X0lT#KHpJ0"Z'gV/@X -H~䳘:S2BT 4AΕQAM -OڛNr-kw8.\uHgrXdhec:?!4ɁBή0rԐ$@<A|IqLqCa~X񉽉B?WrZ0spʥQ-a䤳Gwvj*IRi$Al7řH;ã'm*ŁEG$HyZ]s9++"Nsa EPp {xgO_|7[p<ː7k3$M$k@!v iK`dZ7,ܼ?F!`eHl+ A:Laک9I^ jgBq˜?q6> ;GY2}ɡZ*&f-I.b\-8sͽbYIb?e\b9s<!5&",9?7" tpc];OpqnXh G' L TuVTa/UTpDEԆ`*Ev -g%$ʣ 8s$gopmq&p(܍V"x:+nGH*7X~#? s,X1{zِ&8O&ύ5/p{|=YCI!M T^ jG.i9fJ& 49CTN4D+ -C(;"f6<.LCK&Nraڅ7*EIV~?-ghYXëaIO8i k2J/+6@ηurNƧmŹbO {MHFk &:WU_493`-wI ZY4R/|aU"I`* t&L/`BF8>IG%y\<]ƍv=s -"lyD ffC0q]HC 6+o -fjG_u8wiSaCHx iDͼ_fZyd\:PX!,Qt|#)y{`EPH $-}6 -Adz C WsüVQi} X9B%!vYz6j:IO7tJ<ғoD82?+|N5/ C(i%[3, xd^Dq=0,54ACYwrs |&bE -W>ni*HpLv'L2նЎ8è<|GMzBe-mA&RwxX\h˴ K *.y"JlǓKae xu6t{t(rTp.-4 G!b^ `?(,ݦ., -NFH\1. Y dଈ$ I[! 12)_ʤ|vܓ{Q"0#WmEx|.܁{!"TK4 ]BJ@o&dˮ!aN\uT]Q'H믈=Q-- M{Z9j C,.߂ǣ@H }BߖAmRaEPg/LS -a: [:Wz[!/ Q?Z+vby(Rtn\/B`]CSuA17M:0ĄyݿT9dg!S3 t -љ' W:_S%]3voe…CT>]BJ@o&h939N D9G|$\{aCg21W^4rl&y_|Wҙnv+prR9#] н!2й" |aD%rx\ Q4  -;FF!ݦ^n޼qk>I<ȍ҇ :ǿύ//>|탟Bg|Mu|%y_}}Q~O~<"wtE_|/{אS䗛p-wCC?atcR[6^6w_kCz':#[L;Amnȟ|o -^E>0i/۽=||/;>𖇚: 'x7ο+_y:Y#N֥cɗ^G;L=!|@?ޏܸx<דxp|I6~g4/]ۏ9C|qƾE~{"4q:AmXU cG׿~?䳟}c{f󧤑~茖NKDJ<Dzys)=t)àl6}SzT#a!>nt%x" |$~9\~yۇ?H@EU-/U=HGށΐ|k&"_'O{?2$lF|3_';7P俾!t&l^D{^Sɯy=|'7oW{9$>?>hϟo$>͗˟^^9_~;ȵ9W%gGh7Q -v|㎟aixiE{pUI?sChcI^wiT09Lzh"- 7XKD7ɢ=/03ˊ6pӧ{V'X,Ҋؖ1B|ޞ` g$Xd63\hvQgQp9(ړ@4P+DўT09LzDў(,)x33jCp+ 7}*QgaH!50og]\\<宐-;F{/PHO&ۢ=}qݟXz[[|9DўÙ1ڳ#'?y*Q'i0&g?r.?Ox>һBў?^\_zwўn>Tzrf=מ{qܷU==/r* ?^ S0E^{ў^tH/*k6}Km?I#2'F{lt/|iȉr2ў7~ffnto.CKў! Ylu$nl^W+sqo@~ў7ϸF_E{^=b/ދwtGD{^?:ȶhϿ~4AblGўùBў?xaH+;c!=Z9>c=Q,$>A矽_Sl)>h]i $ON2uǎmv~70U$^ S0E}(ɤxlIӇa{3&|qd׫ܚau*x(NhF޶6_xgnfbՋ^>%"$٫7 ff> G{RGa9I앛/S{} -1&W"lQ"=AL-LNc9^Zi/i?#X~Wffnt/q G'mOv(O&hV:}ME>XjM45z)u3BJFFށJ3kl2Bz =a&Ä0E6YN36Y͐`pۊG{X̧0y>o!G¤NQ'|c[v~q_KW=P[_OByA=?tL,sDOo{B'eab'zRDu|=QS'7t5EnݕHW9 Nmky$DN XO:C>ў2IјwO@,3eHx kDhO۬ډe6M6n#&cW 3iEKK_{al>ai\溕F*YOBc_~N+/]=a}5[+%L-xIH^0W[E{yQ]]E{7CZǮU)fPWɋE{ع}ў̱LgcJhOiYI4O>c=6{UM1Kwe׫ܚau*x(Nj(m鵜8&fèY>!'y ^tNv9YCp4}HKDΨ%F/=3Ӓhu>6 y͋[Te0UO|}چP넞C\ٲE&UUt&D\dXo_Ysx" |$^0Ls=Q4C@6ht=?3lx4GS=q(_|nڃJO&JR% lԶf;ZN>*+ b:ku~[3kl2Bz =a6?aHJW}-&{D{$W g=-[=ϾD{k;c~nў_N NJib (}ўJoCbׄ"cCWt -hh|hO}: 4 tC1&/v\"94:M*pȎNm:JGGў㓢=I{WNqX\܂Ģ=MN{Emb8ygZ `',`$"S KWm tsz/SIL몝zk$ykG(ehO;rў8QwǿeV`qş6z[3NEP ƙ6JIĜ 3ﹽK#7A6lK'ImG>%":MڃJ3I6Lei6K`jx rXz޼j6~6,^5bM5x" |$^Hā9Af镼 ҞTҵ61 Wmd>HOdže 3S'l~c4ZeAT|Jsܕ*- 56! 
6iMf-yA$/+F{LѨMAB'йIZ~ҟL?IcR<SIGsh]4<+L+&mQp ў܃F{UPL7㉑N Dڎ۩mY+s|R--U=c] t53#9nZwJ:Q7эtsɢ==VdkI^Ф>ўIv[ yh4fU#9IvʭV"ayNVJ":$vs& hOe"aMjötO !-Il:2S k-q[b NN&ў -yȇhOxDjDV2oU:Iwf+V[)[503ˊ6pӧ{ARmrtI"'&ɬ%/e{ehO'Ϋ\= y1L~a0SiM|8Ar )q^i+>.` g$Xd8<ړf5~F?{sOr?n,KXh%hFm{HyZBL09n>ʹ.CmvU\/UOddF^%O콾Zz2"2ry~Ud\ٞ-$ g$e{0Ll' ơ#K:z Lǟ,=E#=cئsdgOꌲ=,Pv923gnDlO -M"e$aKArG0ԖKO#j䐚2gH -DI6k<ٞt:n9Iv좐 -y]dlOR!ٞoLeƒ&yRQn͈NU1Ey, By\H2ٞdo~y>9q¤Pٟ=ҏ"o:}2ʛw7$)gٞюez9C=5Cu2iȸ>HF>O;|ǚ}3s8 -]85R_>B~q\H=lOrfQNUlSy$;=ܟǫٖI]hʩY#Ja*9Ljy<>Z#S:1G]h+:ºv&3klO'd?P/ZMIMmdOg2N O;J6Ԛ=6CgrLr\9Ö?Iض>KzjxʞlO،>'guJ -h0,7IIr$IǓEd{ٞr'8498T#[=MJ4?3TkuHR=YmINij4lIZ>|pGl8mpפh5H%{$e]6o6*'i]N65#N92ۓ -&ǐݧI} -/"T4{Rl]`'m<:"=v~;sڝLeWSY)y+ʱG5#:Uv8& 5y3 -V˭v]swd{<žm.[Y:N"꽠fZ&O1QdML.OvKRRٞf|FRFOdhIGݪնIw?{ͤV^v"/BmNox9QȓVnD;CF-䦊6Frdg4+lQlO:S?Hp![=G~-< M.5,x9#Lu`+I!u6uuMfaٞ<&Y<=m%i׀t+wTԷ'gZɥ~;,tK)H Iqd[2I.R$-P$Cʖ/:t*i+{In5Hd{ٞlO'-}gP9Ng{rJAGO$MNk[26li>}'w/ۓ7-6{PjG$| -]kR7#;%tNW˰= "pˠT|*f{ޓLFMjd\h̒:*4=LfLvݍ#BeO'vZ Lɛ$66 LtjeWnb009֌TS1xV,4vjiˍS)ɐ$ɼ;Z7\ 啇Je'u氧ˎl`j#ٙ8`hL7S}Z~6' eJe32eeU©@ 2t)lNzl42]lO o;8ٚLZ$MWO# &J돦:7;mʳI_X؀9ֵ 6Q^lZdoLrd";=lϜ's6l=kٞdWg ?sD=!lGd{%{=3ʮlrQn͈NU1E]ٞJ?Kٞeg=޶3s8 -]8Jf{]ٞ=۳Bgvgd^@X.dFy DgDgDgwBmDٞ =ٞDgNU1Edl9eU©@ oUٞY[A'/΁]ɌlVl~l\ -l%=jAd{6%=5lzP;c#ٞs -/03˪PۅS8ު$=s_rmM6uuMf@d{Jd{Kd{p*VHd{.()P"۳)Id{֋TSlOd{BZfpxYjp*?'}[D΁]ɌlVl~l\ -l%=jAd{6%=5lzP;c#ٞs -/03˪PۅS8ު$=g6uuMf@d{Jd{Kd{p*VHd{.()P"۳)Id{֋TSlOSx9^V.IV%9s kl2"۳U"۳_"3;WB"sAlOuٞMlOM"۳^D*䘢Hd{D *vT~ N*lϙ xa]`5 7>61+_y"2|]y'a狏= =|61l =pmb`Rw5PĥMA_=""c"#ݗ]\Sjcr&F]XEx."l]Z4)fpxYjp*?'KM ,obQE[=s =z΁]ɌrWwB-n_ZP `9xYjp*%`]` \Z - !:Ur*vTKv&3A\ܹ - AjAkCtB-eU©<uMf sjAp0*Ԃ ֆTZ˪PۅSy,(App*ԂaUA ѩ - U Xֵ 6Q -U P XSjA,/BmNAkl2P WA6D*Ԃ X^V.ʃ` X.dF9+;W7B-`mNUA -]8]ɌrWwB-n_ZP `9xYjp*%`]` \Z - !:Ur*vTKv&3A\ܹ - AjAkCtB-eU©<uMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_ZP `9xYjAdF9+;W7B-`mNUA - Xl2P WA6D*Ԃ X^VZ6Q -U P XSjA,/B-(App*ԂaUA ѩ - UMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_ZP `9xYjAdF9+;W7B-`mNUA - Xl2P WA6D*Ԃ X^VZ6Q -U P XSjA,/B-(App*ԂaUA ѩ - UMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_ZP `9xYjAdF9+;W7B-`mNUA - Xl2P WA6D*Ԃ X^VZ6Q -U P XSjA,/B-(G+K_xᅯ_׾owSeL/L3\Zpwx=Eq@4w#2P 5Bl9712WUnpGķrċYXװy0O^SO1{$X\Zpg ~$L5= -+ƯN"ee_8T:?_җ|=PF#53VSf 8ۼ]!qpk,7vslhiB-8B/|ԝ -ś‡B-&;Ar 1 ֝AJ`<3Ԗ㭣ڃP)]|Vs[Zp/oڜ Z΃iBʰ껒)[e.gv>352F+n v\=X~%'G!)b IJ؄I`~T`,3r^)&dΏOPZr8?MiqXtlجq\U84X-ټ7ĊيX]inCr#7ʷtmkiFxjCYS)xj.쾬hf)xݭ]P\^flt78 -` 12Ow41#<gE60~*s;x*ԂMeOn}b]s% z7TV 9ቆ: [㴜Q!b%zkC&͆Ѿf5ۊiϠOF}p2 -`xtlxE{Hհx#)w\eP,B^}~8 X!KƂsU5 Ҡf5ѝJ#ƟQYIȰp{"Գ!Yqlݺ,VIhtX̘|5f&ݝ%T)Pz6ojSQaiݝI] a"@+kVˢ5w@:_ig_4Q/I5KKGaT@1lV=J%aRMaj.EyYH0 ;v|A_#*_^ցBb-`[<(as:5M-k?b$Y"S]w|F=qߑO1$aYε2yB`!:όק3AɤeP -C;=T?;xÎl;Qgaϵ S`:Fv -.UxZU+ay_K_[P#3#>kzy@_*Ob,֠AfF-#:G NVl0oS9$LX,}CK6C\\~2[=zW6wyؼZھZrP=͹tb[^(&;Tyfl `l\0c6Xm[+[̉j6٢ S'Ф1jYZnX,G5bHe cD`*dj'S'[Y*ݢ_ޮ -` X_XD>a`lkA8婹 IQh՜L}M[noa!^WZI1?F~ ;yʓ5l, 5*#6~esRs -He8c92ky9ʘ n+h(µ:RNX \*Lpш'iǎx,d0z0m;Ϙ+ܺ;ĻsJU`-f#3M.f|&sg?J)op~rQ=jUgjΌeO֨ "vTk >Ϋ#-"mPw<i 6![4̀F`x*,0R~آb[w?-hs`&o9pO'L(eقKfYV[2+* -g^fF0n렚 Vv0 g.΋Ӣ^XO,&͞=NyEY4)5WB-V@둯UU^EYNXңJ2 -chy=s;*Ԃ3'ĐbխwBMpwLfj aذZH̏c17jw |*f0:z։fـugԜ{*&aRaVJFșKqHF˥޵,ͥ&S&Dw_ -gG16Y1o>UJvU,?6~& 5Q3N5>=ˌݭcă4:FU7hF Ql8V[G씅Ү!8rGy޷|D[>/y >8UZ-F>8]adGz$y`2PG0Q>b SͼX1"urO3f5,/XŬVjva0D`?rA /|rxM:e rRWSڧjPjtlاK5*</UVduTj.Vm65r_~bBwpk -.SZp~Uc4 د與<~r L ?٫mcZ+-CJgWܣfd?88 -،EDM˼/a͞Vm0Zs3Y}a)LEXrNx=kR(U"f?d` >x~-|d`fuexކ͔GlʒQ]ŲìlGW0j'ʑc1^_ 0u*JdToQWj7_>fQ[2Tq\٘ཨaF4ͺKjc&'5+KEĥXw(/\k^`wTv!4}]CIh?z90WU ,l,A8n!* -Bɨ? XHJ D>b`穂կ (_%B͏O`"ury8=O|~\&e%nJ Q WaK9 wxwd ZAS]wHjٰ$iSW~ -.waqҘIi%yZlROuY -zeP5 cP VKY6G-]X"l)`諹8%"ͲYL`jb yG?lШg`Tw. 
~=RF]*/4?՜@JiOlݭWvdl:RYq?\^#(r{d,IӼ{x*n-3Uwl?:' Z֋!-( j -.5ޢ - <%XM콂6 -(kwrg3f$BBưe'?ӣs2{3ĶS8BH]KaU]+#!,F@G[w"hctfG߅!ѹmjIºIc^'Wa+.kdtN7pN`*PA|;XgM[a9ֿpȈb`f -uA \Eؿ"=f/K@g6wqdQ3|%]M*׾>P?v~̧D \cR?򑏧GLk/>G?O/>=ڥW^}+//>'>.y'O7ʧ>~|=?]Ĩ.cBmpgb?tUmOO~Dxyi|g!YO9M qW>KSt6d0*ԮAR_|X8%Υd83_}#?;QuȦr䥗%<"'?LO9FZI7.yɯym8P>O:U=W_.u|1廋up}ϭ<'}_g[$Ңs_=y>K>St&l #xYjc.s_{Ũ:2Or3/}ZOon%{t.2^|c]򉧿LgbvpFR>Gg|o?>7F=y~y3PG}g?K.5Ϳ}#0r2ooQ] -\ -w+8O;?(*̷bwާ)G^9_d(h`OBjGK!c73_EOγ1>uʟy<+O~f{EDWT=IKO̰_z:>8ä`oO<6` -˳,Rg֨:ܘo'_;S/XW>O>_H|e](_=}TRSpt\N"ۓ$='1cg/ٞY$=jGK!cY(3UsO_w&=k@tB{J9F Fd{ -ّ0]=w(_>ٟ5o|8]j -ΛNɅ=c_(,(? -%='`'Ŗ`S>XV9!])./~E57<Ԭ]~ ?O-|)_#Ȼ$xԃ?H|?]|Hߙf{R޾[Bt}NU#}$Y~-h板Ԝ,clϟuD4Uaf˪Pyӕ$}%[<懚l}M(ٓyAˏ鿤V?sʺ6Qd{)lOڔorkſMySvg{X"۳.R\:L͑9 N]"3ٞ랞Ls'Mx_=]_%Sj'}k)IVw7ۓfÕN~qlO/r|V {/BmvLW! ̙IOTe~&I:46(7I_w$R>Ť }SD$lO׿; t)vBK`'dkg w?f(OS2ɖ̇c^[ -w~>\^Y%)58R+5ۓ&͟3ӵ3eH'%c]lm/C?N).Q"\HEm"D`bT5>~7o3BӖ{Jn!g{~̶3CL=KhySM}ٞ3<-y6zi9JD*NW# ְٱRU` =gT)5(l$MQ 4'`|N=^ZSn擦ogKN#ю 0^V$i;Ib$L?3ѷo >V -yjֻ~ܪ]6oKg);z{/}He+c~{ȞlO?cٹפnIڙ,s_Y&3ʗb)Ὑ4iʭ@͒tvj+~>o[e{'K2fzy6!YvGN'r"=GQnrfӥJIΧr> .GȶlO۞TOG_%|TOί¤G ydT/9a i\>pvTS|qsj&2~ -==ۓeSFNi -W6zn_#ۓMߌdS6Դ%5ۓ%me7'UEY\ٞuR&EJ̒B8 \FKҤh PR]/[K#ɁtN ->wJ6Ӄ/;=MH!mF)$f_Fy̒bמGUsOUiMH=1ɶwD<ݹCΌ=I;jYvg{JMmÆf[􈯐 -GiC}ʄPG8^4rZ_9c`Z*'àٔFF6Mg{w<,7P1Jl>M?F_H,M)4ٞI>kvH -E(dI}IGEpSNٞuR#) Z̹N6jDaeFVjI1oIia}䐚3l3V6(4ĥ6 - <5I6kQu<+nΘ%S -L3˶MX3ӴCΌ=C:fԧnV2\nѩ -{HqX+䯬ka<$omuAR= -LΦeG0)u誤`L=;t0y.irH_%m%=#`RHyjxk3peU͎S_?i!Cّieۢ#J/> -6_NedW'm&f&-UL.o.[WɌؗ^@424 oQk̙ICzM[Ny-#a#NSg+NlRG؍#9*ۓ矽珛r#]Vgz?7SFŐVm|H:dLJTN(<]=$ŇdV923J&}YtR?THz[,TvƁi'3ٞn9NFu9Dg]Hp3Ts$8I"Qllw̖!egLQu<IGK~%T!7Bd{y\'shSth'_„2i<ۃ U":Uvp)n{ -++F4HrؖdV92۳Q%ۥpT 4Dl|il*;*'m%MfIm>Ǘ̾6#s(+xG)6㯟tg7]f:Us -6^-lP|~+bvdFRl4* 8d_SnlOeCJ2VTNIw=KMyөid{&csn/Ge{6:ۓ`6>Mӝ RS]yT/9;f{pq  \#=(}Yk) ?g50V̑žv92-ǩdC'="ťFRkvI -( f [_+N1y )s8RSIAiR?~0f1Jb 7?$?N~75r,rٲ):$S&ƾ~ i'IjY\nѩ -{HSZ!e\ؖ֨z?HjChn*O`Գ=ӑj~N_e|t1of_E؀G*fGt%9_WKmi\<>Y=%I:T*Lۜ'ۓˢM;(IH?u1;l2|)vd{ 0IAJ.+5Օw^e{& ,4NI?5OHCSgqRSptle=Mz[SКٝINI?ighJr\'&#C4K#ǻV"sd{ʭWQ"F/i!7U$56=ٞ^;š>o_3ȑf}䕭p}kkk.)ٞ|I5NRL'CٓiI0-BY )k:ISIY[ʉr@TS:395#N92ۓ -&Ǔ{&)^jS1J~oioI=xVƚɅ>n ۓ63.(';=i IP6tF5Et}NU#}FWnib?z ŖzI>xr'ZMIMmlع-c7lOyW^߽QSUtͳF3%UvMm "i7 {/BmvLW]xGR} !I-{gPҭٲ=M}Y҃Mny&J.Ԥ´ٖٞ͝طH=ۓfu2eSޜ@.fMf/Ei&0dicalIrd=do'OKk$OٞlW-o^$͖Aǘ.}=:KMyөtb9f{ڳj0Hlxwg{_ttQeɒAT9.2?_tiR}Hs&* ӟ=ٞ8 3=kCA=m%Ik٘t+wTԷ/۳ T}䕭46?8r[ޙ4dJSB_ɾlO^[#rslϺHq"ۘ>XO#NW˰q|vR3h6=UʱٞD' -M*bT5>H]9׏Sn#sM`7޵;yHߙA'yZ~ɟf{D?EIFq5sn _g JD*NW# O[jc=dR]y l1!eG{//QNж8@R$emaf˪Pyӕ$@`ڇ_Aj˦~2|'裏>c_җ}_q܍~jyWנB6”0ż ը7Afp/P Cq;N{> t.Gr*ԮE>9'3~R[NĎ#7(_5g+7.c#bK,9.PcfvY~YHǨhȍ?UGa טe\⢝U+ |жg*pIıx9Qq,߶GmhҷcU,d71s@$b91USw X#JK/DgUYމP#J#hgY>盪an"N8&_$O3:KGr%CMhy4⍼B-8GJ*<~'p[^&-6f .PZ*,@<˧߬%ˉelXIlzdj+1 ;gɖΖ،ϭҶk g=^P*7w{( |#FssƇPV= .f~$ֆCfsM;`Zp:6Wr#»*aa՟F~##}Qӥ(vћvaMg WB- -c(T5 j+<_$GP} -.Uu *Ԃrl?pMA)˷ݩʹ?rQ%ʌxj9"B`9p/mkB-;v|_ cI< `8bPep& -{TтZ9POAHKS -6/!>;qkvOQoM\fΰώag׌gƴ/1B(EV S2*Onz<0Wa0;zf< sMKάz*8& -` -A~^wM8漈e[]7޴ -Xb8o^p,p<ȃpkW'Y w20[3ƣ_s5\-g28Ej4 -)*O>Y ;|5wG *bc92;zM*Vc<*D{^Qy8~< Ff9$?f -~.o1;23nNϳ/gW Y>bgkP<ײ'<#~u1%KFy0YҘ*Bk u1A[g` B-2.LFTٙ,[#П"iv -CQȵP .uʇӪ`ʫ V7MeYm m)Sv%X"oZiHwk_/Vy!P pa5\j))Z.ɱQH 00/&8 -SWښd' )*źV9{f5tY C=P?#0\B']ggYtq_((cx|5q撍 £[3~_%Ce0!r L #KV&%I}M0)_r0;& -\i_58pp^ӯJ?(hBwAxMC\tB#oWZp)x_6E+arcX7jke7?^GC8ysrK/Cj P NƁF/Imfocbr;sF/{x*֌CA¦!Ti0{ffFXi fMT5Lw?fl}Q 56Lr0U fl48b^lؿ0yņu~{\քYa9TC̳/b`Llj07!C =/L j5]SPLͱǴ jސ_}O$P;jJ{@rpfL~jipx(M˓@pZ 5&8Vj4B8,a8' ->>f^R ?=GbƱ=@Xjax`'/jkV7mV]Bم>#ec uƊ0K~Őہ-Qnr8ekm{x4mN姮\ -#uw/%_]:ᘺ*ngfo,g/T˧ˇ)@4p!#Oi)N}~N-|Y|R~$+$LGxS<~vyBPr,/"ׅ6V/XR^0QU&/ᴗ.$yxhI nAtj4GjNM]jSZp6Vʲhl sM&GZk -.H#ß B-8Kӥy㕖<5K>z1Jl\tRA2.*V].-k~"[R\ʅlrD #bl 
b|bÀs=:w:v[e<<߉<2s5XJ"f=<7L@'`~PAJDp^Ρ2nU2 -Rú Qf}i -uyx*ԂQRHw(2:v(` e 6jPF"^٨{ x\0x`E1USNvc{D:$].J1eTz?q4*Me5QlǶ C>! t5G1p0:5[y^hd]$( {85=OH7F!qDY>fC|>V6RڭQ+9Y}=v<D;T^Jvߞ8YДq/RZpXrll,lzvse5Ï+]%5#4i\WsFe<єr*Ԃ7c8ױE^]˥;sde5.Rj'].Mm=Xd>4ጬf9he5[v˻UF"rgj&Prҡ0D C]s2h.!BC*I0 V@ FnsZrgC|vZ8f -zx$H_Я;2 ?uk*r.5 ]x*ԂPF`wy,'u9-,+ŝ\n>(HY< fZhPe& -lIaW ;:=[e.@i.]*Vxu߸<˯ohuYneij>fQu9^3bY8_dqփQeϏsĞT\_!#ʤA:.OUF -1i[`' t\.)BnJᰶfۀ |M 3|M*nrcg,S_.gl?yrF4Y[T6lqEY&¡K2l{L%.Cɼ),E `P ^t)q@Dr9g8t -cslTINI1M ƆxE)r9#h? )|fH/ -~ܝ(iX˓Hf}*bc3Ny&&3p2<W,B-bk\^ -c'rۮSYٰGu- b~P[`j 2b,ض[T(V3l6uA,x2g@JWp4{UFTEaF bip,lx@`WsKg&3Vl=/^-`,DҰ7a2aLP WjhjfGׅSBjצ;.n‰,Vj2wUve%` LojACse{.G"&W8ENqɩB(ve0_:obe8}aV4etw\F/yi߅dIH [-FF駟~q^lEyaV,p+&mв^B.hP; ү[3tF:,:A[s#S`?odWwV [nŒHaP/OϮ &FΆwBC|M a_Bb;F38RXbeH\Z0/\"෺ފ[SQ.= %VPrʌQ:|&YvY3څ_Lә;*؊i3 . =ot8#^ -kő WeY.ZJ4]{V̈حйvg/t吭̸)ݿKg=J3YZ e`E5r]c - w(_b^ЙOs]ɧ?Fg"k/OD<)?X_>J41O;_o}|/laU^~?zwɷ?9\tO;?c6$_.Fա+W^o׿J4P&_9U3.g?a[]_% -0oHQU(V16\gG?o|DK`oUisֵ]'G^rOq]?9MddFp|⣧ɟ|Q8AٞݓW>lOQٞ}#0r2 |<\ -ww,K{;0W%gٞp7_ZP_~_:pU8 >cg =^p9l!NU]%dU<)\㕫 +c]_% -0oW'~[^y9 B2xq*1aùK.ٞ%=W"s)l$=WMd{Id{NTUHV%F+W%Y -^ViU:\-f{^y[X#Ӱ-{/^S^Jo_k~C|5G?a`j*OGe]%=30^|lF-$=jAsd{~z9WO^?(|l4'y45 ^Vimp]ͼ󷛚%D-d{>}|dy[SHUȌ7 o?( s#?X4aY]Oڔy7})o*=r7\Np̖9iٞ&=d_o#S۔N=[;[+ -ѩ -JsWѝr@k[=YR|;y*eUy[JN|--%ye_z>0ӳ=;%W)%pٞG%_)RzsTn?AiˇQ (ζl9dٞlϿO -#lO#g{.D'MlUsٞeO$'#ED*ԮF*q45^OٞeUy[=ҢߖKd{~M&pٞ7LR MS$ .bT'&3ʇg{Rk@v$`G;$ݐ򦓦ْIy)WٞaIѦRgJ{Ȯl~4=V-?lO:H烈9,)3_Ud['%L57dI锎?T3=ٞGZL'l58HG6*I^xTǩʟ33luo yRqό,}- *I9Ab[}ƖZ?G:ٚ矿ٞ=Ɛ癒o}?tg_5e2 8Ɛ=I2J:^f{ҖrJ û?P3\5l/!?_mꛌMǏE5ȸ=Ɨݿ'z ;xsKȰ=v"g{n$I/Lӟ5(7Δvnghcv-[~;+ۓ&vH;.Qw黤l!NU]%+Seg&,8GtOA\㕭r]|:l$ٕI>R$uTίQ P$=XFXFH)ctwW*w]G]la%iba^sH5},s)ld`#Gg{$'۞T'ZM>;='|~vI7۳=)ualdIpkHF>jٞ_}O8L*MJҔ7PiR=?3*NȰ<6lOMMAqn$> % e`#) -{l4>莼fPM2Mw[=YVgc*Kg-=ۓ&<\n<4ԥol!NU]%]Λ]5V1#l KsW67om9{q.s OGrm@4ըMƢ<=}K`ƚ)+-ӥ뫄Uv歑4WH&-Fi󄧚~Ӈld`#)&{d'kp$gH3dɍrL -V跥d{%FA^GZJ `7huixGsw8 7dF-oi!M1a9$35vҿD$hlOD#cdgSNl9Xٙz׿ʜٞiIɚ6fcGg`T췓-ecHA275OQ[$t"iQrٞ&XD rM#FUA%Gf{'۳il徾i 홮:j)LNQߦwtٛQFGrR/~-4r+=) REBS<,) d2~0ɾN'۲=l8HǷ.)ٞn\ٚIO,T ,s}V"sSjW #$Y6tq=$a};`pk8 UܐtuMԃ.n:6V$}}mrLg5S8o[d.˭"JxYja)ߨ| l -K*v^;hd}e1]vlOa~r_\N=2ۓ'Gӛ[A]yɰɌIo1di҆ES#̖͡ICz=$Z@֙IQ^"e-R~Cc*۳=v2}n \^r# <ҖTC2~0x04?L&suD)hk,s$EdV92ۓggx_'ɎlI٘ ]DlϸiĜMvJa"Gf{6VnN$jٞ2)frY eu0 <͟t -MbBfPA2yv٨:6elt#ٚi&<fO#|S Lf%*L&3= :Uv0lqbfʍ-ZKPsWfIٞt{}+鰜IxOrD'S6yReJ)kEW /B4[#iO`+kI6m-}e'٭D~hHDƚI" U2~0x0ǔ)i%=M!G\$tb4Œ$PN923uMٞNROѐ=D'+OltЉlU)iV$6#HIJr+f{:I`&IQis߆;-S#ۓ76Ncٝ)hLTe2*L>Dd{AtB*a$[$[,^SNR}8HgBW9àQu|JV|hMi'NHJ>,!I_d69"֩KzyhX i+.U˪P; HF[(dU5L]+>Pʝ$iNaY&e`CeŒpoو~[I}6`omS%m'<}eIߢ$IMX1aég{ J6"ۭK mVٙIoiʭIO_v>ZTkrlO:mMY3f{S?$lO]yVR$4RʢTˎlO~md(myT yvuI -$,y d{o_E&fmw44ʭ6 K/)"j#ZMIMmdO?Μe[l$Eg# R@IrK]4wvaxIj[l?J"6EOm ,m%t+wTԷ/ۓO *=ۓ=?2 /]ڧD(Rysp4ȾlOvzXGd{JfP2<͟ti,GTO#M"᝿w |~:z$B̈́{ -Nqvti'on;=llxgٚ(I3KÚ9bwQyTUHZȐKMՒNHݎg0hTmtmyW u+~snțJb:ٖi|sDIO]4["޸v뫄Uv歑4%[n_|Γ0oF0d[ rfK||'v:j<I ,~YldGg4E '|2[YM߫Y]lŨNMfl/sّ9]f{}ٞYew8 flN4Y.$%Eg{NٞIt ٟ9^"s =v?Ij)۳==ٞ۔nf{fmٞ/n_DhN"$= :Uv0f{aШ:>xd[缢뫄UvmU;oFgNّYɰɌl6,9X =@GJ#agٞKue{5D@-g{_ =_c\gNlO'ݦboE'l#sSjW #[8 Ug\Dg)xYjaV%sD4dFpJ'G ٞ%=GHzi.TlϥJ=?7b#lOZP?RٞNM=ozTUHh[%&F+W%Y -^ViU:\-9 6Q>Aٞ%=W"s)laٞ(s>lO'ݦ"ٞ$= :Uv0UàQu|UId{UvmU;oKd{NMf'=lVlٞKId{ j"۳M"sDD*ԮF*q46^*lR*NüJamliɌD*]"s)l$=WMd{Id{NTUHV%F+W%Y -^ViU:\-9 6Q> =[%=Kd{.%D窉l6lٞCPJɪaШ:>x$=K˪P; *Y%=&3ʇ7>yf{^ gd{>g(lS/> =_|dq2U>61l? 
=mb`!kWTxW>:d{xdI)}ےdId{AtB*a$kxd}g{ "뫄UvmmbTRbUZTtk:6Q -U P XSjA,/B-(App*ԂaUA ѩ - UMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_ZP `9xYjAdF9+;W7B-`mNUA - Xl2P WA6D*Ԃ X^VZ6Q -U P XSjA,/B-(App*ԂaUA ѩ - UMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_ZP `9xYjAdF9+;W7B-`mNUA - Xl2P WA6D*Ԃ X^VZ6Q -U P XSjA,/B-(App*ԂaUA ѩ - UMf sjAp0*Ԃ ֆTZ˪P &3A\ܹ - AjAkCtB-eUz` \Z - !:Ur*Ԃ`=ɌrWwB-n_Zweg? d!/F~DbL˓ +(y=!DC@CZcmݴq}N/>}u}vڵkWU]Oku}uy~muB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|` .\Z\! - auB-eU|`w}뭷^}{//ꫯz޽{zkn)-G7|0t-fsjXX1X2_W^ fB-.KEdm:Uoe8qGx7y3G#557O](*ԂSřMf/{ԛ~O>۷uy wyꩧ޽HXrh -^=쳏>l[:4ʷhXϟyU=O<3p*VbHP VJ_xᅧ~MXj$|A1[sT=zJGp<0sS>m6V *Ԃb="p?3K7)-8 f - w܏V`qr{L%Q`*Ԃ=D<ݱ=j kΞ|:S `jسpLa]6:$̠yԵ.Vrm ˣC`PҢ'!<(ӫ'@{XQ4U14{5Up PtO#'thvhI`x/  5>*̂aH) qVO|"ME)P -QH3e9 -N c_Z0ɜqCk㠺c:VOaiC-dY,D#cHY[Tv[]l׬sTZp#*eW ~iO}])t -92*Us+hcLQ32zgM2u^{I0Tni6tv "ƨ(Z^ZP9uρAj?|kW0#68Ƽ -Umg&b;bzZvբ(sR,6eK2.^퍑)-SyB-~h2 ~U)^)"RԨ"A5Cpe%d(:1\U/.\TzL8{6\-[ ƄM_/ -o7/zQ:OUgdny_[byDƣ|2+_ܻSNSȴln1nsAP 4ې?M,Q9n&vjp3;ܑ-i:L_c2 MHATZDoi!FYX --GJm3RZP8E8":(*'k&(VȸZ~Xt\9w*ֽMkؘk-`ws*#IQ.ckگ{LJgWNY`G8)K.{`189]s檵-='F'2nx@l9^_P.ò 3.,*ԂM Q Sm(jɰaN*?\Zp|֦,$kuhvt,vCb]EN[e(Mb4h -X:vo)P Ӿ䘎*Ԃ8jc7- -XapXjUevװ`  -EebC9 mZƵTF^/tp.>})jM(<;4>#}vX,qu4|DHa,aO]lR*VogcnÂSoCʧC4ȟ3ÃafÖ'QPL1SUb$f:?5D]j=oGhPx޶$8}B-82l8!MljN|˻r6{^ =-sR,exR.f0& -` (;去OA'OkQpTb - ണdi&Rj=k9툚ta#.xrV^b7e4_9Ґ?VKg 3Zf`%yd̸5`(PrR2ۓ:pZtu -9`[uV{j;bUf B-"S &˜չk-qnh\ZpLacuWU9|%4IVQs8<彻#9A'R B-Kġ,;rêcB(Br@SP`|P;#_Ua<'R23cT&\eH ¶hdG29#}.0m=[t Ã$]+6]rR;T/y~q UswnNtv[F~&hdjAaX2#,0KU\Zb}a ->3BgP QʰR#!HkÒs#u>s V@59a9߻'â#]9i$8&F -`vO߂`UoY+ؚE)Vtuu=BNB\`Xrn\ eV4ݧib3x -Oz`/n%i5948#}0b2(8@ 6q*%& )0{GO-,SiHg.U K Ύ&d"5ӡUr[4XtX+ء0ӒƫκQWyU8򐝅*lD^a3 - Lpj9{m}B h!YȇEOPW0Ug!gj8~v?v^BsԆHqI$Ljg)Z:'B`7[sJp9D P 2"~.}q9U]VJ(LO4?ڝ]**XM+:; -plx9f(p+]`>㈔ɩ>`s>26 g C`̫P z0`)8zyZ}s+1Ϝ. *THq[`W.bL] ~rN7[n -~Jk16#=s@I9>X /Ӂ+I~_|Rz!FGʇ~4 .b*xS6_ Ѽ*WBn4bQtp[f.XlBKX,5d{cjvTi+:Gt -`jy19gN9cU!DuI\&i/.nUrp:eBr1\M_eex;H'˚mU]9PBBuLK]rQF]>ܣzWZ05^C8uu{*g] -/g -Nx<cůL3n$*cP -,}'8RjE*(Μr[a cLP37+wOv)](P{rʁn$9ڙ'-~Nh2zx# -5E"~B!z1,ʒ+%/6COC -Sce(5ș!99[(^ؐ5ޮ#|G&а*ԮQVi9):?;] -2~e -)VZ0)65 :YmJSZ&Шa]LJEJI9 ` 2q>)(DW*ԂL{UΩp xNwSQ~`^0 Uv -[*K:9|b9"W1^rv{~…ѫ9by) -QIm"*춴؛D~Ǣ8̌Ø_ټpw::Z!Msj/"̍("Ӑ*wDmvh`f} Bj[L zL@]o؇6rHE֮ -%EתP ]]"匋k±e/P裝B]6[s_35H]&E&8WdؤxZʡ8S:-+F -Z[qa -8ӄmEθoqK5rzQ&e^-H 'm`S0mzˡXŦr-ȄK_>1fb^eQhSfJ|fXa\h8+ -š`?)ަ}^`29LYL6UtWjtd4SP{a1Qz+rs#[]9SS&֖LUQ)msGP 睝]Mfr&`I^8c;Wxrp**3ػ`,pjW%zaoʺ$뱋1_Ng(0މ.gqBmXTvUs~巻7WWFF=V0qVT _˙#V)oE=gT0{VYGX=CaQ;;S44`5%硆2 -FVv8y!&zQZ0 -B6C*q`j͈=g߱D{ul P. u?<{*P2{T(*Ԯv|i9ĝ4;~Yh0O iނGg'oBmn8i)3-@ ws{.W8r-^ۜrq`ɁY{2'ǰh I.ja{;M{ZWqPBCj0*~7wvL_<\-IP{j׃_a8NZ-=ax%] *vDȎ*GDP o0י {0앶Nx7;ΤcbܧF1C=!~cG#Ia -b?Kg -H~U G\&M -+֦{r\g}ڨ#\}r ё*ԂqgR`e9eOyx 'Նu85צ3:na6G1U]'"<;9㷁9Wɦ1S1k@P61q0FxEajET149-Z>!Ny 1,rl*GgMh⨙v{ ,ѱdL[UWv T,D#pU&%錂+71aI=c*7_wyvQEFz?gR&OB'ܹ}>Y%_o[:,#ɟ}E [o=_$/wt ` Uy7"W?$Wlo{-w<Ga ;Y~oyGyYw9olGM--?3~7t|z䁧EyZXP;{TF^z>I#n~A.婧Տ?Mi\͗77>Kh،IeU>c=E0mr2,;yΝy&={Vimwj/"t*2zl c'l@MfS}u,W?7*sWO_ԫr-[|]zPSo~e"_WE#?||Q3҄N$_yvD;W6s X22*'#zұ] ~?,gB-;|Wn%CEcȽ{^8csϑc:F{}:h.XP;.&ߏ>m#Iw$vT)^V裞%l ww#d=h"yU;} ;6t/e.^g\\%('35sDg>U{ÝP9aVH*3*'s`go.,B-pўQdha8cD皰:Uv*\LLuX`[TfN/BpQOg%! 
'ThɌ~hTܺu_h;KR&b*˓⃷O6DjϸRXuO?[ɏNdnpl5ݤ7OݺS)=WSiB'#E{uG?)}ѪU0 7r2E{~÷n}w%6IwahOjA|Wh]*ў'>3Jm/~>L"3Nў/w} G{}Cg ;nEgNU -SJg;ё~7>\6Nڕf&goQxJUg>pS6iF.|dalF$*Gt*IzcѰ5ٕdv|wךdCgl`v8l2#3eK,2̬ʟ[%AQ3T.V9VўOVt?=AQ3҄N$9a*9xa?kһr23<{ID_ZP!=-hO:N)8c=̝dsgp:פH SjTA#i| ?QxJUHF8M8 -G=JfC4lCAPAGHJ2QSYd03Y!c[z՞qr:DLMR|&=uE{\1FUFg!gB-pўd['aϏ}&=,s)y.SjTFRg-./F`IAO|UbQxJUN7KfN/BpQOg%! k ўaИF> |4+jҫ athY~OҽJB:}>;x3?3]i9ED(Y_U=Jb%ݿ2k۳|u?SD t_?I?3b8 Iq>U+=_g9o{ҟ7M:gWOgt[ߖ<}WEԫ40lXE2t$I_LiMq)ў+ ruTؒ}FI/XV?O,?,2Rjosj3 4F~n Kq v䧿$+,=[>Kݟ}V"]ea?}lH$ZJkotR=)gy{3WTF{~}~OG>#_[~$,CўKAJ|FUx}Z[.d"7$ίM>I,*z_T~WcI!>y&sgC{9O2&ў4n˕3頨Qh.XP;.&gXB+,d~OY+oh?(*U -3WEuZRX`EoCא_&) -oQǡ3Z+/ ]&I} g[uwT; -yDZf|fN/BpQOI2 Tz1Ig;1 d!! $Γ2w 2 -2%٭ -yў/Y~MNf%ў3-:ICH'eTS*^6a&31$Sl:޴&sʍO9ˁJA-RCIꇍʥLTyRHg)2EMxga __. &˟ z՞qrJF{sd53G{<ͽ6oɟˆhOGR ghnRb@Ro{%_HfӚPcuŧiuuʫ¿ZiL.Y**|XIY*]KY:$Z7ܹ -cILwO9&QˣJ-C%3zΦhO$rdFZ|!xGYcr|ns&ICӯ0O‹32ړ%Ug")yeqꦛP>9/esgb-MCi6IDDg7NU -SJrO@2m<,u)jN:ސGZVڤWtV$6-"b<8/cKZ_9ړȫğsN|TՍvQv- E|CQ"HREؤΛ4cR٥" S=YҶտh޷PO[)֛Fr1!OYmTAg4;gИF)6Q"B"43VgPV?$i'^D]LedMd0ݨNu{^g\\I9;ں(A953E{.yhϦў%xl̠o8Nz,nhOUF_)pYrn)y՗ V wBmFply%rN#G\X:n$vўUQNf)ɓ6MgH&tJ -|j(Ot=,C:gB-ЏN<_ DMўᥠĢWoҝW}Y0zΚ F{ZI7[$y[$d&IjɡJ~: 77>~{D{O$;j G{F,OGWyӺfe""ڳV*Nl%Fg=6n$m+9rxwc<52&I:W嫥th+,$}VU^'-…/)n.dy Y*/N=殕DZewH~󻪙F-,,EeQ"]8 -G=m rf$8KF:Ig6x-x1"7Fy*ŋ.dv,/)nmc-ўx#SY?ρZ+CcY,eC֍uU'&ɡ(H'ړ}jԣ=yI 4Snթ -Sbb*[Y〯%i I:ndO: {Kg$زN_,]2m,?Y6oM^'-…/Yix YTՍFvxttF,a_̔ΟOV5SؔߧբV%]f<،IeU>i#i0A.C'Y"4$f Y~;(! YzNW}y@=);G{d}R~nC3_[FuQfJoTcv8l2#ȞX$RߵW?$is{)u1ERfie53Fk0D%IϺz=Q3T.V]qknp).鎶V t3,$}F3( I6K\ǜuL+cF{$~d=g@josj3 4R؀DdH#-,7L2Yu!d:ў?b;EnZ+HhEhZH ²C\B  -.ў62WO~I}$)#F{>~d=gMFlr_2lj!gPvhD,xq"\8R6N{WޢRU7Ҷv]AI㳪RE3)y~W5tY-jh$،IeU>i#eI-#ˊ$g雁rQERQkj+"a j M|HyUPA/_ޓt=~H.ўda馻O[j:RE,4pdFz?4ԝks/2?( Β6Ҕ^ -V2QSٕtiig?5nӮTm6E{mo痴EL皙67daG{Ɉڒ6s)˯>U{TFRj -#7|*mFi*3}ұDc' -M2U{ÝP9<ԅㆭ|}Ya'!t=ȌOI)VYn~1KwaG{eydngB-S0G~i? 
-9E{.Wb>Y\iWm9(vISъ!Cu`'t9ae8ڳ\xo-z!in /'b)SjTұtaſ{H&tMnYA~ ,3KE&%J"b<8i.}yEx YTՍjh,Y햶׫ChK3\ rQȤbJڒE-%dh(،IeU>i#idA.C' O~Ѱ5Y&vA# Ӌ ]S7$myK[a#=+pV6 Y+IF$4pdFz?4fVҽz՞qr:D5sDyg|Q3҄6;HZͶўV wBmY #Ѫ(nJ]X֣={P *t=Pg.-sV%J.-G#I'T{I~.nIҙy9[alF$*GtV"a j r"y@hϘ!3av8l2#3+^D]LEjϸRX"JPNfk&G:IDjϘJHD{nwBmY #Ѫ(nLD{ B-ў#SjTKRI+^'HޢRU%Pݍ3܄˪P;}Yx-x1ȉd"s8l2#3+^D]LEjϸRX"JPNfk&C=DjϘJHژ"ڳ#ܹ -cf%D2r29& -BD{Z"3DgNU -SyDI'' #I'T{J'?g(^V裞JC4lCAN$ ўaИYIR&b*/U{ƕQr2\37ɦh1EjϘJ:DgY #Ѫ(nLD{ B-ў#SjT^6N:(ENalF$*GtV"a j r"y@HJ2QSy|Q3T.V嚹I"3ѪU0+a$Z h1aU"ўq$=`uBT#I'T)^V裞JC4lCAN$ ўaИYIR&b*/U{ƕQr2\37ID{#Z7ܹ -cf%D2r29& -BD{Z"3DgNU -Sy"b<8D:˪P;}Yx-x1ȉd"s8l2#3+^D]LEjϸRX"JPNfk&h|D;W6s ¬hUf7TN&=DŽWThOKD{Ƒթ -Sbb*S^'HޢRU_06#pxYjz:+Ѱ59'|g⋯k;?|?~Ν۷oKW/oa|밐*Ԃw믿޽&Yc=J*Ce~{cP ﳛD+l>9kn;Ѷ$LCC짴^zɖM-8ZZppsg cr0.e5hs*Ԃ`3|>ۉq˷%]a*]Lw낕j - v:kvވ۷ojf_Qg׋a)VHP02U7"ń9 +tT#Im -{87PY|v0f7Pb[}aB-8_`.~sd*XD}]GW[<=ԁFg3җ;h_q`N,>9-BmڣU8YHk4mZuS3J-v(E]u;C?/#";#h*Ԃ{~7hx2$dXz]G>Âs*ԂR7з~[S5i;:kG^Xmjbza#WZ%7U#V*3؅q"kpqu1UG&ƼW "m}>mҋ/Z̋D3Y֩Q_0lY>կώLƀ)j;-fw;vkth'snSh\ق112(l\3ơ -`;1U|A̞Yu1?_Zp\ܢ cI9IF+z-hF{QYOФrxqB-ɉx{p->.cޱ(m:cXx'TUaP0,nXLŜZG(῜_h߷x_6#$Nл`͵dءA̵EZ1Iu4W*o.`J+ؿ'vuv cdwҡ[1U[p5`S~pvU c\$5^YF#}ڛz]Y`ʁu^X0F4;P =0se nG2B-(L^`XOPRTWS$: oA{kQW2>]jpLSz (5 #}ovy7aN2Dv*~-%'؄!B-XG\dn O2$Kv /A*njq -3"2uۏ-l^4IE2nPdHmI1K^qp^hvF1S&@,5rG`X11J+&XYn=*c*0Ki@\ݒdNtWR 줽ŊY+f0ԢlMtS+$%W ^QCתP Cwv~ pL=Mi8$CHx9:kˊgA T(;w797'>9x)=ʙS@(CN0F -sSgBrn%׷`e_M*9cYw/QG=#9#}v00erF$*er3T;##(ZS…NN!9÷*!=z N:%0&EUnXV%3Q% ,"Y#"*MQϡ^|i(>saWj)Vl3q tyN2ndˆN2Z%sHx9"Q`SZʼnװ@@uӂA1x1XˈIqWA ÐVvvɲմS;?p%gy7ȭd8nqj(kH@ی@3gV&X0xGGF՞w-өˇO1f˩o^0T頹(R*Ԯb,Ser'MW"CS;gKSQm١SU'ǎYAz9DrsaQ>`o␓Li9 +L)|4F -`};[(JoʅmP[p,W#,G1Ũej9߉ʁ=n0l#;dĞdp]NÔFYB N < ʳiZw1uLH `Hߣ&{p܇],AoT6@x@EU]3VmLD5*>'< J(삺|U'>9%bУ*Ԃbb@t7P 2o -9= G$cĜ|(3 廐n^[iOaB-Þ13QWRb.gG>Ϊu*ŗ+\b0P;/J4Eば;˹\̷/MR2lF$9P& LѼ)ԌXe4=h -8v+gw^^a89>, cv*Fk0_XWnՅ']zsUtv/y -L6嬜CPZʹ ș\N\wdž+bz W]ʹ t -t7P*gpQ ԱA8Z|%gt4)WIIOmU9!ap$63ݰ)Ƥ - S_q}*.@=Ota_S;$<b$P;#܆;nU:W N'gGY6SB Jθ:W+Oi9×uǷȽ#W3G;3Stf˙rt#u>`rߔiHt\,zТP·~Bȫ4@VFl8M~Uv0N#t@Y@XǛb-x%Hӫ-w]^s @wP N 7`>UEmrv{q3"fev&包Tl&g`$aYc@P p7PłgylѝwD,5-^ -sG_*؉]M4Fȹ)ʝklFf㤈ޫ*]װs9LJ1ggsw l~^WPQ_ x o@۬0rNBV?w3BPBa5C*)rD&r$b,0r.EA/}B-8 e뱁 9WZߓʩRZ%Um)yj6q_i%[UlE9L1U' {Q Uo׀]4FKr{c P;J$p -Q2w#7 -_C4~eBܜy5[42#}0˦[,r&ELebE$$sM#]u(}?)N4Tl?B(ǗvO˾Zep씳7|J9PrbS])`uG@(㹣#Ug&t7Pn+g N24q}p\;v/\s8 -[ -qtl -WŊozjvCj `?ajgA9;ylsz62oONSz)PVngfςxn7)O'UF+$8i\'l~:JU{Uݳ~ &᧎˹ -xęGΤ0luAg) - OX^AX)PPF/P LN}8) M:6Hut)^)'wſ`s*ԮcV^M| UX uo:{\}S-Z-_9({v)-@lI_%h^FzIMǣ/;z\G{(2Z9'5V{+M~>UAGh2}s*T)-Н*Ԯ#*@0T?8J== Ux(VPEP m89NפGX}y姴acJ짴P2[vǢmkVk Tv0/pbTzi)-2`9+m{U*fNp=#l -r -^EW3uw2mU23 S5<3ca bl.jz)C0t -Ma(9gTv Y=Ρ;+R\&SʣrRUQU"i9g.TG/_r -~w)NIˁ#VR, (8FoyaPZJ],ft= :)0%' -SNtDZn \U&z!q)eǥ3lvӡa",|Dى69* -rhX-Q.LW{S" Smޚkϵ=U],hql~2f -0ԥFV$ -tJs HcRsU*m-8W^yŘ{uFhj8eOarfWva[#t=:swUuuaK^MKihNc'txGGP FdžCmnL.S GNx!Ii]G  #׃^Wv:N*\EG;{X.u5Ulv5Ҏh ƌkuvu䒓^҃ :y{r4&#=+ۦFzߝ((Kwn)Fqm|]&+2ZțuF{9j_$ B0w롞DXU%@NVh?R:3N=)\Z0.#p" -p> Itr=|Zd%l6•U]']g'.S:<]('u= F -Rr|^r8(`a0yZz~+-HwMAL:_۱ jʜd=?47' -wLdn Bt[)/1ɶ`w?BI|?>B--֤2tP7'Nrl'R:K~AFgoM9S -+d= ArNǖ;/t*] DNtFu (q|1[/:E32A P6F$&+m|Wwt+d`nB(A&w}Wᛘo7kGaT&\!Z[Z0"Fu'CѽoUB6qAW!B j*ԮF}pJx4tv:ԤnB,N CT<kA]$o-+_yo_J^tn"-A⍿c3z8Y7ɿ}(w3:V*͝o{_|5I{9N_' /7w24iR?g_AgRU6:m;H.Zo1Fyͯp"wO~C"&z{e+";&;;DQaIC_=|ᯩ=##g~.hϻ˽/ -Z_HE睏HD{~/8|s?Uwt=eVۇ~0_ڹw3#c5F{~r3=H7?otў9/1f }i-SG.X̲n/ -1nFo>?}S Æ;D{l%I7ڣaIhXў$=[U͐P"SvDg$hOMWw&=GTc/nvjo`'2#3sxYjc܌|$=SYVɌ F{>?$͏|i ʱ=?uOSrhϗuI&ۣ=a_l;>ўO޺RJ&=O|nhcG>#F-0=ۘ0ړ[BYf?}4=HD{ -ΕyE{_?;M9Vgϓ^ўgoُ=y?VӼ_ݙC=iqvW>5?=sM,;$.2C2#3sxYjc܌$|d`.hϟߧ/ysޱSsv֯ў5춏hVMaI$y>Y]sVl=̚o65EP糿~֯Zb]"ڳ\ 9hϿ?kһXўV~7u-%ڳI'9!Mў汕A\՝hѰ:Uv阣esS{Eu(Zf?soMJKҺE̲n/ 
-1nFoI>2Znўd#yя pɲ;T5ͲGMf'e=S$\]sVXў=eў=dh?#Ag|h&%ɒvI'9L-S ˠwfDhXP;_tȲ٩Ay/3&eFg*FǸhOÞ~Kb$i#=&3ғs'm EҷngRz!??Տ~3kSj~Ф[XT6|Rd(ړB:}~K}K?V,~YNKJ^ҫll"鎹} F{?~f6p8OUў/F~!O\i?q?'H$[4LQ -Y0phўwd|u>dWM^-jɯvB?Y`giK'LHmOz>j-TZfɚxV列6=ǾDzEo拟ӑ^W?X啰جciY.nKҨe,-?_-HL2~'>%b|sg]󷫴Hi巒:, G{,889 D{t|Dst]]Ж^~ugzўBE{Li$ oLV$V^5ў9cכ22xUZI&M N EZ\F{VӲXX 1YOm.eϤriFO-#fY7^V7.38!/(3driz2ڳ军=gI|wv%ў4X%h KR^XGG̬|,}tdFzR:ў4}ڵ7j4&CMjIJNJٕў4kɤ6০Rkҩm4~HmhO靔rf 3UZQ~_|0yN=1ZNFllҝS F{-s͒2[o9hOZڕje1$ijmK|E.9+e?'=66S۔lI/땪Iz_b[I}i "=89 D{W˅s:`&i<]{KnGWwf%ڳ(-sk1i򐒏#Ͳ$ uWg1Di@ST)Ggθ阣F!QvS~$)&IVD1qS{&}YIn2;Pe'Kz8 E,!u=)'F!9Th9'R:՛4SH#,˪Pf iMfa9)g1eyҩ|-eFldSi)S<(N[h=8AZ;ў,7i/FrKB6D{1o}d@ֆ7/e9A&ڢҰ/1j6NYF8__yE-F'=҆h?%J:a)Yf&{^9J3i1ڳ ==}zl $Zӫ'C*a,'IÝ% -=hў5+Dvp‘?Ig$7 {]gKgS钯5,q5Z4%_BnI;u7ўSЉHcg)kw_w5-nc^ -(ByYW4*MstLmܤF_,6Zrds'-8iihvhr;+tZyΠ~dP'dstGheI쐙J:{#;cs'꧲x\Vj]Y 257sHl/VZe?8'[ N ̛$?C2w O$I_$ʂpD#McoYcOJjjU,˪Pf: `kl!9Ť*W3(4[l8`N~N -1LѲu=qOR^7o=4+PHze0{e,}tdFzRvYӰ:\EƋNf+޷2^'1!{.rhIΖEfI;Ju#;eQBJ7ZTѦW.!|e귃rhOc?ZS$[=x_@MRghO+9agm;l܍Dt=ie뱶,cJiJ%*cj[SD"-}rܒdCVP xL'IasB~~g,%oH.9Ȼd^NWwf5ڳ({pEa_i5̳ZO}aqDt=iڼu/։Pwe3( <*0+є5|"IK,K}+9C2jg!iY]&IWSA.9?'8ͫ,z9d̾7r3h֣nў΀E{9KMWh7su Rm㴟$VXEZ}9~vƳ! }o[]7;7H0Fُ4.]^Hl;kbD3)HObuI~7,a /BmtLqxr_ZG{V6,oIў+a4ePև4-M~m;AT&3ғ[{mcu+,8 cn]<6rhO..cLm-mf<]Rɟʒ**t9ړ ZLV=ڵr9ņhOs7\aq=LW%W=YsȦhOs/\OҭaA2z$%_byY,jI7hvt}#M^ٺWfګ4%WB'mўg-˰O'!7UOl#9{6E|whO_ƅw>ie['-efB=$=!R^J\5y%K#ZayZ'鮤Un-DSORCҗY>-,Il -$铖/QEoWY,CrhOg1XLiZf&H%*K_p t^s*8m#Gn{t=+pgu4-?_" r$ݡyӮniY"3gtQ#A֧/I,2B 9C2ע= - <>K)eDfRYRbIqFKUb[*ղdyg/Bmt˫\3$֧3+6~)9:8ړaAWhʦpSv!M%Ͱ̆ئϘj~6 "#fY&3ғ9 G{f#ۢ=G ўe03lLU{ in""3C6D{f!ۣ=7^f_ĮY=hQ~ugV=[e8s\hϜq1G7;7H0Hkў)%q,˪Pf#ۣ=cKўQesHb>:l2#=)'ew/J_I/%=[h %=jJD{F~ug"s4NU/n:f }i-3=3U6:G"3eHOJD{%==Cў-Dўs%=#ID{jr39V*7st|qS{>ў˪Pf#B̲GMf'%=ўdў$=[h %=jAўD&;ўauB|q1G7;7H0HkG_̲n/ -1nFo>ў),}tdFzR"3(O"ڳ\ h %=jJD{F~ug"s4NU/n:f }i-3=3U6:G"3eHOJD{%=ID{P!9D@\hHўܯLD{թ -M_ O"eFg*FǸHD{aIhϠDg?hsj3$='+I"S՝hѰ:Uv阣㋛$Ï^V77hb>:l2#=)=}3=g%hϽg_=<#o=Pўg^{r:QEFW\$=[sʈ,s [2GjʻZdBF,MV=|$" t=zqhϜ1A'7;7HHk=t٢ /BmtD[tҍID&3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3A\ܹ - BjAsTZ˪P &3WŻo/ݓ^h[o//w]-Q)/>;tU^3 -`XciUc=2`f̘}RZ07lj|4Wy/p؋^j' - k(b.P `:xYj #jW~$0ƑG]pMc T=նCxgKHEhET=Mgd;՚L0U3v`('bz饗s 8H8('82 - ULA C3җo3 ~]rzKZNO/((T}SS>΁:wt~ )!)fr{-Ft~CobVYXC`W޹zge]0qz;fCdr)_:^Z L:`z7֕m2; ͯmF,g]7fCf*ԂcDv+3mk8hܭw2ԺAl{Tma!_B3lV!Qa•W3_kk;j] s] - Uv1XN{jGC~G6;]S5iD2ĹF&L=s0>媨.r.p#^8)MtԤ(gt>_]y8GVB/"M++AP eƪcq4 -;€1c*}BpL|jih&wl]aIG|Qhh׏+1Dqy 27sjt*Ԃ ^Vڹckpԗv4 py=#Ǒ`;/Xԙ ugVt1_ejMUiLCbċb z*e%i~^22o6L«GP 6!s:3KpQ\;c^Zp*zޗGݰ-<~zSUSnc8; 38SuëL:B-n]Ԍ)MJނVi`R ujg,.M[pZ7(dד9.vmPxog'9|I2 #AaSˀ=klC0#qx= -0#/׎َp0x)/쇠aY똕DEUk>7,85|[~ gfēX+ 2 r|r`j1t<Ȝb\ϣQ(ySIẕQfFX4bs0A̠h^e/&ǐVL*W2)F]֖%;0A`@ z`"s P3xՃp$QϠ1mZ[6odċxb֍b^Udώ3232wdfDf2 ?رb_ڗ"89+M`& rjw ndduc>o)٥XpI桔Pɮ9p(jP)$ -듻eS0tfJT)LT0iNłM5DVPM-GUBcNwB-7,i9Tds;;c eC_ԥ-o'+ -PwUg`r9T~d252Gb)+ѐs"S4:xNW=K=X1J([G`gjB۝0߀k7%('#|huL@;ՂI0U!J` ~Cm=2O4GoV>ǘkfۧ(}$`:НPX I9(-p^# ԫ 78읾w8.ڪPGBO+CP FƢ#d΁*2WZlg<4x #P&UlP(P \敡kUg 8XAӱ8שMHI%<<jYWX&{&(S,n>ɨOXf -N`VLN5vGX͊[ʷJҔ)s2VecxP++ 6f'sY0W2h.ې]ձVGC'ī+@_2wr:QqfX@q,, Z%߆Rb>9+ -GJ7P,ǤsNtDjʼt -Ǥ13Q&;9h[' ( 
+p*Roq}+D5_Ush.s!tʯlv^qB* -㷜#PHY4vtD p5SrZ?'^'b$P Łf(QxʷeA }q֕LBN ġ`lG(gW(6vO% %~B׼HFNLСl 3GN}Ȓ1e<3UN`뿇4#¥Շ}QGM|6,YfΏUvpo s&Z0ݕ"-Xۑp9W3B!s5TjXo6G㎩etRx -) -r²V><K ,te#`XZ0!1yD\x -JxSZ[?_N0! -㴩J ;* -GXݪ]^&JB)PG'S"#}wa4x HԢ.pwp9A.9T>mxO -Ne|(ߣa)(eoӢ;U,,_Х0fA -.GF:dR@P k5r(p7F1#Ԟգ>܅m~|QH,kz!8J,{@_A= -1US?-K9r 8!~i9T*&,r-tl -Hl4}%cCs-'p'潺BFNtf.OXcguQ__pp[H7pԔsNlɹKj -f,)g>SO~'gUګ8aj$ղ,x4->xǣf@9"'CZڒz}nLC&mʱ(rNAGx/~N'^]HQJ@y()˲189wȩI 792QUh y8ws:%QZ)#qBXHxEj" ,96 So'S9 eNcWzH~yq0bڌS`})K7SUp.~d״dE cTe 9b KdѤP`-sf48TAWSu;Bۡ:W^?3fn -KFSP>%(#Ϭy;K,S-}+^5^d^[g]m -GUǑ!069%bmwҰz<\RBUwZv~K!O΀W8S`P @8z>m{;g]aP[lfyf+r#Z>zuA |oπ_8ߡq ּ8yٶY3j ؀x;Z`Cp -pmLeN7*Ԯ awn{ Bl;zv=v>,=Wm7`9 ]z,&8gw]nQ(u uRq#ztS*Ԃp5&Cؒ>r{;ǕĠU(^;oNt@ o`KR8*R6Hg),H/W*-ْa]l%3t3 :mBs4< "J 0- 20΁VvMo=֓R k*2Q|maB-8ĒN.R|-/>2~D }8pU7@` -\֌|<·ЊCh9U99&xE voS(=NdoX0#X /\\8Mkwq©L8'bWҙF]0H:E P&6o{ab,d -9{DB-8 THbԅ7%4_gγC 2y1DUG`2z ljʼn]W :HXj[?U`{Fmиs˲ЙA3V tfgGo'Ǿ]򹗾N8Ѱ^GTy9NvɃ:ξ"*?N湯Y -v}9:ljVeՆCָK|t6?j;G(3W~g<.n&oqxb|OgKB:M~ٿ6O眢^|O͓_~駟zu_|g>{ ɷy:5iǀ=P|i_5y|/ә\:0U28y]zy#{o?h%~S'9o_3|)+/G-;Ro=BgeUΣ2ڧ˓zgykO `䥗Z&=a_&^cR--v%n%?x:.oӊk>O%3m^l@wNMfgEԢ}r&?yWZ߼p~y/CZ9^{䫯~IG?XDg%_|+Uo~Eu=b*?rXPei}~!b/joxظџ9ZjTeUJϝYó_Ѱ59V~?{(1NVG_NڛFH?ڣW l@wNMfgv=]었dh"sAhOuLJN(wD -:UvˈE؋67)cў{qdo/~mNQrd?{ؤɎh;~_|GI~%ID{0c#`H?&=Zyiixׯ59KD{ -Ԯ1ў$+jG?\3*ٿ+ړN/~ޓH%ܻA~~?/wLy;+w59UcvBD{n ->C^q5dY':F.??oR[FtV-˪P;7}>I?T\y49G=Ug_d՘=ў)F{w6"GE{>gED{>~j {| p*{_Y/SE{Z:O?{hh/C1%UڿhO:-)rq$ў'pNސԪݳўB,¤O,P1{7I -%?iφ!sS"ڳ.ړ|gweOi}$ʝӫ%-+KڲO:8&dOGeGO⇚KHR2ڦ G{RfǯaisUl`'y6Y]Eo!=g#ڳG[6h`s50+gZeMn%/kߐhg~?hA^Ҿ߮0$?ՠ 6;Iã=L[zo~9c7H7+c=3p*hhQ`vF3#-ֲ7&W[>}ўB,¤7X]riY*̌-yJ#_؋6hϧ~U1-meo.5쮃)}x׺"W$1V-˪P;7}H2%cFϞ)&-/zguhg{ZCʛHl%mcD'UG;e+ٿy6;&3ҳiXbQy;ʨ!m! g5ͫvz&V3(S y%Ԛ2JYhOٴ&><Qy͢N"syў]+R$nU.- fZUn]X>ٖTFOvɭhOs|MkE9`'Zs*IA;DiTE!>v T?| %?~;[\l[u'mhM!aIl-=#I9u &S5 {Q{H7hwɾ;xb 79xQ+PPz[I-o]HlJ򯒓teUo$J%MTnOȚU p'6ZH4cR6ٷy G{R77hݿ7V$q'yxnL44%]>%#֍3(&q`أe0n3zU![>H?G{dT#@Q@#=4t%w&zҚܥ6KVvifhM!aI9rKn۴Zi>y;3[35{Q{V8]0'fN,7jF\Jxˆl^t]$-1jexYjo)S,gP"4қ4ΠHO2ie\{U#M*)^\Yk8ʀwƊGf&?>;4t]:ҫ]E؀HJ?ڳb#JjlehI-e;Y0IBvȔўgږ!)ehOthu֥4IRfD{NhI۲dC2(W'ɞhO#`끚hoxz=9^ӓ=i^77\UgrTd;0-/Vk ɭ0tL/ƷrXgӳ`*n+okўݧ٣=RUK:)iUޮšD0鍬[`'[R|$dKކu؋6öy;A+Viͨ]~OvVh -̒**45,7ړc;-m'l~iC:ۼdG'Li馹,2i'kv}IOFӓUNY\E&=1"ww+d^"h3ɚ54%WB/}ў?h{r`4QBYLM*WK6ўrIOZGЖrFgnQE$={`Kʈ0K 9]B:ܬ~F#/J֣=Æ>R(gD{ -D_Sj龈azjTED /BtD:XbW)h^96 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6 -U P XV*Ԃ ^VZ,6w p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9?3o|^~em~{9]Б~Iѩt[7M)]Y?M0zjL(_p F -7|8#ɋ+2&~9 }{4/z7(=HVxWK/wtSgu~ÇsujE`_gV<k1gs=[%(GiTբh~bK%bvt(+ -(ma5;-P:AV]Zp 0va $sjL؝mIDsm(o(v";WPfy0f_ h6 8^. +fhLam _ -+* ǀP[Zk6q}Awh莞'J0]2p zZu#`ttp@m8v%0$ۂG&te^ (}q?5uckG%!wٯK4gytѡyї*Ԯ;`iez`X#Pkfe*R.8&aCͣL{>ڀETn2+?! b\L/.EH ,5 Ƃ`Qjf}y1gMcJ3*UWqB-독2m2˜l+49m,m,UF;E}́B-oS6M{z7x .\,\+FO3c0w $ Z;!Bj%,֫Spvڰy5c 8ajP|eziǠU]1tƬ3~8:|(5ls)tD{gEjXد<ۻ^Tv}0kٸ2rTLnrF̆@WWiMuxS .j*U{[3bI!02S1*s(g? rTj$We -0UDu .%ӵPO2J$QS~D33B-[mxWG`wPpյ1X=.e6S2 b=P}Dim[j*t)Y}^][L]SfFzUvX-:m s+[Q,sUx}JfF{+h[{Yy/TJ^^E -kԳO*XDф0iFOWreV 0KNЙ'q)'$Jn/tcD SkY{ozj"Y0Jbg -UkfP(umoAW2o#PZ0 "hH P!YfƪbՄ(+V VvMXus. 
-G`<S:ċmIл -1%:eX^|pa]SW.+ i90LgZ]AP2]pV^M]lnM_^ۀfpX'C}\+8KK`cxF˫P\Nq2!'0'J'(IlL2 wW(g=_ve*F@Suo V-nS}^~M=F@ FEtQ Z>,MEPI3 ^Zp:r|ljA_`V:v;vN -f8e[8Nդ cTPXfz+Cs&lǼO];ۖ? Zn[ %oҡyQFz ΊwЙ/g<9P@819sw,ΪP!ӫQK#uRc` 3S)t٣̙,t9NUvM8q̺|ͱD+⯊*Y''⌧4蔴.X9& *ӫA8/5ts4>WZLsU]'eY{$dUN.s3 X8>*S+4C >ɨW_?!b/neBo0c++PskxjXnjdS[d`2O( -֨J沦Y= &VvexD)]8Ql٬m]T(\˕:Ɓ]G_2c..=c5ST80gSZδVʇ*)e҆ 6Av"glO2πP# 8=Bj`BfPz]dNe_`09SjW><\2b)}1Y)8|ge\Ǵa!hpjW%N_NJp W`xC6raQ򩍴VF5(b2?+A%*8nq|Q8م&R6`z'3JYTÂGN+0eMm}.U%O* 4F|T ^)9UGS6, ٖs(b>0u9'bV>*(2kthW @(7܃*SrFS\0 -w]Hoɘ}I8%圈+0|(8!J3% AWƛ>OΉ,-Y,p :R] LC\4˚*2r0] -|Y)rN="/P9vS~^_\8YڝyX'%C[d>4\!රDE mLGSF,Ӈ>rGϖgT)P8YZ6C!*t)z-4I4O!`͖s4PAP fad9+$T=te[?g;ʄANp"F -pPCm~Ư(؝Y̩;[Lq*ۻg`ӔqVTfh'h Qqe mΖ'3#|EPo)pUyWT 't>8 -;DX'g&TF[]*ۊYBt>fӣ#o&@9(LhZd.BF `3^XX ->Wm <4 dڃ|ćp`>EiL-5^Q%F2ax -hLx!5ernB-8s %E98(flDŞ#Yڝ\Nop&?m@6Ckp0rC3cpX&N/`.0̍{tã9r0 *q6#(GJ̡%>ڗՋS&tV8t -S.U'*SZNWN>Æ$ëݟq:X -\8<Tv4CJ+gU܇G;[\&ZXiꂟ&E{aչe KôR|(əmV^? b6bNz/ -s42Sim̏ңQ5Dɏ*uPʄ.A4oB-8{^)s"q  =NK: '8X6gr89_Q(3T R8T$g +8PҼ[hyFz į{tyRy(Pp.s~R:}j/-oGQ>ZVT9)-ǠUQ، ș G>U|9#q˶ -|㟷m,G2IHS84h؄>尊Q-2Uv1T1\3u(Ȅ%AoY#ؒԂkaU `BF=‰OjlX0,PH 9|LThf+#xxijAX9yLq|9 ,L1NQq -yWa1U-fcIu|ne8[_&"gZGBqyn12.<9'8ְw6HI~7+Mgx9礴Q9Yp _]=h<]> W\<kN1$ w 9sfMo `M'LLqBJuLж*"a g~D?଱a -| +O P۔[{<2FFg 4XvX4&ͨKEfۙP2 VXTG׺lW> UQ(Fk&FOP l[c2㱡;(qw:4('(k8 `jwgm)%z7)ǡ% -y<.O WwmH,1S&rP,ljR!~5w%_ҼE,d;T߅th{9' I0>A8 -;Kr0'|9UJ=)сH'NKq^@^XX(&/t4 -;r06`Kl@8JSīlD'9a[m>9,ME2]%{Ws`.̈yQKwRs1a4llNue؏aНh/B-Iz U`8\+i9gC1tU aͩR8HdkLڕG'@ףJ+=wHd9/K'4۰e^UgSZ0r*(U¡3UsrόJU /r. WvW`3 q㐉V&T\OGM`i~ հ\VqA+[9tPИ* w -}܈G^܋9 e6G2ˊ=ʜ{.\>r=f`<$l@n ;2y@E7΄NJW׊Vh`8W-6N-"^9CWqHTXH=NK9 /9EޡʙU(Zq1|45#=W΅PND` ނx v̀r2 - PROĽFEc*/wD1= VlҦT8PY[D5AOP^͍MVuzNMђ*ŴmqB\~]\(Ulx -55ê3]|x+hFjw`Mx *c j^MCK2oȇGR.JvO 9) -Ͱ_@&}=8Kcm?ϹB-"b< Ge?Xz<3JUps,Q> _( O&QwG#)shgFz _­QLm_{M3%{;?mV Xh Q]i0a6X圌yS%W^QH)jsZJ.Eͷ8tYEa $ -b z3:^s8G큄jCaDu7M~եm(XHmΆU݄/K9|SH2΋;eM\۫`? -%cw4.q&Ԣ|jQ؛LnvC嫅sѮvmjr;H:G3t45#=[E0wWwmcir:9pOtWUNA΁2^P;3dscz5JSvǝ{( Vv'p8ZH59u39zA {Kg3JU&i15fԛ'ݢ`}_́Y^^Vey Nj~8+`s̥j}Uڒq?Hm~ijنy;!6Dې Le>nyvih[Fz U5Je*), -6I@aZ693S0Fl?iqlR8p ZUb\Z ~%'bn %2V* -3ǀGcP8aXj3`fͯJ3m4.GR,c@6G/U{Ȏl:op(nEpXo0!Rjů X=·P/S6nac0n,4m&A2cp;3V$':-DnR?T)e)~QҙU7[3ǠIU-wXp33 uAgrAKJ0{*΃P[>r6L(6Jk(E Huh` -Yx59V i:d!s4:RZ`0.?fsY U0DU-w-hpT nBRQW=Veyf̔zEgilgɼ{^Q -)s~9]2љ]9p T2qa<<]XLA\U蜓GEe2U|i8~g-V2Ku/ EgVT]hs=e[Bƽ 翴1&ڰHj7(}6DT nE眼㟦f\a0DU-焯6aITqhrs:~uw d>OwpU #Kg9hUFz gv?I_{OI(U7~+_}K}ag|glr2=B'zc|~qwۿV}GoHE./jˣGo?NgK=_0nɟ=Wz+OS_{OD%?ѧW|?o41{孯o=k:{DO!zJ_^/(yv̳ǟ?9,t?}x|ϖK~QEF}ZoSjWϏz'3_O3_>BwFɛo%,}Wi3Z>̟<&!_ĵqbj4eU6y+s)?wH3ȝyyϼJd-O k߱ǀ]@g|UUdnβKGtFTɌ\9o>ܧhpFZ$M4hϏ態3=;Qў'|7 AP(و0S r2=lx?<]}??wj8Y=_0tG˟|鳿OK!xЌџy|R[L+:"ڣq 0,G[oJhOژfUdZ߯@XPz 9|VZ nxYj">`/W{hIPAFV>=zYa1r'=SID{ BmZ"sYhHjg]DT*Wl13I?o<q 0,*rNNU]=B -ў nxYj">`/!jh4b6>+l2#=Wy{"£i9g=}{Ĕ@_޽{?89*{ɿ+;=%UDg| g?ҤGy=?m &"2s'-\??nў0tKwKbA3kIW˯~əAZPAf#v?DO"SMf9L%K.G{.ـ;W6-F{'CklXG8 RUwI -%?i$wg~ǻ8ZO=(Iz$wՇ `'y6j^TeӐ̖5n9گ/? 
6:v\Nm_%b{.=馹ɏnxjgu;viY*\&IAV\ӽUw4:D{RHgvJ~w?gma;h΃ְ+SGZfyџV=4ð0I#0NRG[BzK͙ҲUlj۳zPPEFX5֛fI4bt7 [7ੴƳ$chd&̠'rNNU]=B -=UaeKI\^%#d8c+K_+V-=n+r I}J?r!Yug=gCȠF@g'Ms+?32[:9bj4eU͊$2;2xU8KbtSzkC1*ړPfikkٗ'T)ў7?^hG̴,M9FF{LdJ~ڀщvHS H%5Vb6>+l2#=Wtў,eX:ioZeB '}l穮nx7"£i9I2}|*J-lN9m]$ZNla#^쐡hOٲޔY;ʯD?EtlnH:0 3kўV6/9k3ڃ6{\ єWŖd@D{J&wQV"<ϥnB:m'Y"3%WQz=%_H֞,<>)geɿ_$gp'KlmO)>G{RO⇚t/~WJ'IIўDOm'Ă+jdW4ziVaߑdc$"NARHŦthOnYRf{%1 ˣ3P;fcvR~$) ~vR֤xVJ3!xЌ6IA7)殝Д3Is?)Iǿtٌ3 D{YL OڭnϞh譍jF<#dmR!W 4zB=vʞhO.CMyɔ_>/b6>+l2#=Wԣ=ii*Izl6YZ;hk>nii˘IߖJS)6%f4gAY0ɪh*3zg}If߹xUgG{6)h1x%=ͯNwQOiҫOIޤ#n_R~z=;$_h|M"DidhueC[+vaўZ O?xhG覄qAwt$ITZEdWgDKo"H5W<F'.IӑwIVFzƟ^>Y9NXtՠ nxYj">`;IN+)3X$f{f.i[޶ư%!j=[oMGIyjԟPAfd' WƷ{=XeN+j$=flH+fS&3cpO=ړ:Ap5=Tj M㼮OzDG3rteBRb؍2SO3V?$tўGIz1(ΌYU;Yp'_SKo7e4Fk2;Y;(E{mlI!MN9==IEYMаÒa_ؼ<6 tt_Z*ўP'tM^{+\7U֢ߥ9)o`&I~鲬>*0SjWdx,!v3%!s]5[ZўVGa%xnHR;n*YwF>L=S$ν[-zJYە nxYj">`;YNv>);Y֫MyB=)oGG{cJhhcFUbC̦gMf9z'6U/Ƥ`tw/P͘8If]&Y5uLi̾xUgFG{:IͷKϫ~z$R*zўƿ>.#}dC+S=+Ip {2GG"ޜ5W<3nC<8;3WE~M%oT]D_Ήթ -GHwcmKўfSW?)0INy5wZW#ҫ4WYugўHZk:)6cR9Ci6+F/>=4}K;6榈ٴ~!O sB')dT' Hv4tidU~v$tJXTɌ\9dzYV318d}<&0v:9JDx48#-'IWf4%rݫ2{U$UхTBSt+M Ie=)3Ug 0|տɁўߝ`ٵHN)~_EOn.S^EeGR "-5Iž5;޶49a{:ͤ/ ~Cd7iEjԤ[)EI7E{#{k$mݞ]^ Uq~#׼L*ҝu&P Uwiґs$iܺ+hu˪P$@c t&3(S+l2#=WG{dsJaE+[Ggm;zI!ijJzW߮g$,D[l:3j^$BI% Kdw'[uGIq7z)v@k:Quikt']fdhOl QlK_;V b*\oxWryP6$[)9zbr? Q2[IuzO_A]|ўd5YzӑKQy+I7VEeV^ 0*Άiڒ4N2 -$.~=vh$Kt'fDKJ:5)YhOvJhopE7)铖ے޶DC: K#ͨ4]u87ki-|3һn*=͞U}k ݟSE"J5 ;?vfޫVM}/7p9:Uv)NvMimL⚏w2tmȮhOvҼB&9e0cGv$}YugFD{HzQ2sdƛZFî^VڬNf/i;֧,Uo;1x;ەo{ -2ў|۔ӑ=9$m7#ڳnAHvuNm=4Yl}VdFz ٌ,XDx48#-gQ/s.Qu9Z=3Ȏh2:=KўeGg)29%Bl;4}~PvD{"aX."-kўC%EH#*2zCu{IҜWͅSjW?U>L/ڳ15v*fE|h_CޱVPAf+3L,i_Bjb6>+l2#=WDg*hўAMKD{.+ Bl;4}~hOq 0,S=_~=_UdZѵhOSD_Ήթ -GH_"SSa /BmVED'uH)/6B=DVI }9'V*Ԯ!~hOUL]7 -Y0wD&?wvM -Hs<J"sDgjўJD{FP;"h_"ړDC: EhOUTk}їsbuBR0痈Thu˪P}~Q{GD{Ya1r'=SID{ BmZ"sYhHjg]DTKD{{hXaDPїsbuBR0痈Thu˪P}~Q{GD{Ya1r'=SID{ BmZ"sYhHjg]DTKD{{hXaDPїsbuBR0痈Thu˪P}~Q{GD{Ya1r'=SID{ BmZ"sYhHjg]DTKD{{hXaDPїsbuBR0痈Thu˪P}~Q{GD{Ya1r> H/MўEDў?'J?'~{KwBmZD{&yOd>K~G/"jS'ˣ3P;"h_DZ}ᬢ/M=4Ø\DZD{}/zW rNNU]=B -"isiўѰ놗U6+F"8gUd"b6>+l2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-H3P WA4NUA0 - Xl2#=W .wB-n_ZҰ:U|*Ԃ`9Ɍ\9 p ܹ - AjAKTZ˪P &3cp "p1p*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-HApp*ԂaUA, SjA/B-Ho^y|;k&G?ǣG~~eCA!8?ܹ -[1b>d2Z,Dy(S!{0,o[W,\y~ W61`<ƭʸs%,ԧZ~ -`&sjWruكm_jlkHm݅}(8Õjܞt1zwwvSUU/i)80H҂CZRK,) 8QXmԫ`  -`#Nc1"88>-ZŲVh!VhQހzKi腶ivn`tNt}˺ˆ€Co|LBj`xk8)cLAb#vLAgCKvrFK8Hw(˷ȼ + -[^+SBpjYUB|ׄU~ Ì06~ ,W f>tsPpaT?,mX1ǿ:b1 -{>`|m00q> 1VU'ba`Xv&'XK(aVT"4Cc4I4O#hNQPBp -F -Ͱ({egU}6؛]m?M 2rև_w5%zDxcYQ?hqVB{/hX4 -Jtá<B0! 
u$Iu..Eu$Iu.ԅH.W]/ p EKr˚/̯1+_%VW?,XL5^~a s7^hI1 u.EuYuj3.׋3e/kꓱ;;Wq3vWcpmm-%%Ed O۷?~W 4xJajjjggDB]u!ԅPRu!ԅPB]B ._N]F{,ܜrъ'6)QQQj!`0F(IIIˇt1Kll,[#)))LW6C뒚* ,\8|׉V$ԅPB]u!PB]N&ԅ.e4jFt&LLL 3yOOOG!22wqqqzz5WWW#b\ZZ:66-  o TT.o޼p8%%%GD1B .B .BPB]uR1^,^(ݜP0"044$A0LT}{{Sxqmm GqA - KrrÇ/ۂE#\{rk*40D7B .B .BPB]uʅGyB}tkkmryyy0Κlmme0ʊ4-vttTVVHjkk޿ޒ&L.ԁdxGB]~L D0aXrLfB]udUm-҃ 29+S?uy|e3s4B]H\Z2kn3Qɍ=ll//ڸﻢ"o EMBex5 wZ] B ޽a/̽W^=ѣGP12n{zzHDB]ua/Oz)ky9mc -0((M" Y#F6О hM5]aY`Yppsֳ0CJVK\]q_J6ܬؚқau}S/TxN8|`" ua/MxKVTQ6nmomdcMWQR{?oq.!X777K1(e4 B6=x`~~f#b;44DX}&uMMM'# . OB*%! u#JIMttf/+K]>/z$Y< /g-lu%MT+K]^\(pk5Ԃֳq5s,JӢg. k3`_7 Tؘ;o',=gݝg_*j xrׄ.APs}[Q0?z%]08,(({ αZlmm#[`pIpxx/_Mq0a(ppפYrs'.=KĖZPAKf'N.26=P̡Ne%\ME.Aԅ9#ˊ$P] W!S8ӌ -sG^ G{1ksti ?f&I9Hû`~+_5WjuPˢZJoX%$$`Jwrķ{zzRVV=뛅R؈]u p 6>|B{{{|^|=x:CPp7qXcx 0X#v -JG .cJ?:g$/N|_渀l )9 Uu%eY\7.5AQ%ܗw u%ݭgnK+;*S Douu%UU -EEEPN__4~^'ši -|wԅ>79Z X̶a6ƘKL57U #j0z0* +Vƶ Y V3C¬Nk[S'|Y S9=Ώ`KP75(`4 -E>^4uKlww3iɽ{S ux}Ah<^ׯ'''lruzJJJgg4PoK]oFnkԠ´"ˊEgd4EEPkH]{-ΧYpq:ٳ'ϗduIZFec(Yh߄loo Mٳimuu{_zssSEĄZ }AL)@|b.ի!,B&gz<*,DWyB]#JXWZ(//@Fd*8z?1eyy=$ uѨK;OTAYIQE9[ S} \ydeb3+%xtCiJ7.w=ݐWDmKU+)!-^ɪW?/ӛj+cbcn~ u9:WS-,>PQ@FcC5{.=<" - -7"C F)3WESCءe|C)yu&-H -4K7Ԟ 9edsM&tTYpY9YsW'괝[%$c@,p N8M]^~][[NNNJ k[0AܑQwuuIicomm%7dX) ^ht,"ۿ{edd= 022CYZ\\@tttuuPoL]h4*0v?-SNb -rG0\B_O]<cлj:>) %+#w? f%^*%o&C~mf<Sp@dLq666{x=|0++ /A/IL^zO0xMMM7sP" ?FGG1!u6#0'x̞[~A{)= -ỲGL߿ YAA+bR\\L$.? u'WRk-Sŗ.K] Mf΋|FB.}W~!ԅPF]\u_F]8+S("X4!#{ -q ygPT0u(]m䔑Xr ){>)0]ć\]O7["* G7F`vx/Zh∑{\dRkR{[nF;P MQ1>;>&X[$'/_E,7oޜy.%@dJO]={pr; Zd\ukv \|f u£.A~A]EOe\m8W_H]61qܾ>|qYA1_,+++8nUYYE LXLCnQquqݘ͂B6DOgJ/㜕ilF1R+B/QT4.$R%,_zg|tE\+e˃{[]`Bt"|g?gOq}QUVrRbBlbfAt^tlHތܾƫܺl^Jj?iaM܉S"Y]^i_5|" #n6aJ@ (l`ZZW^EW..)>|xZSje]GйWf0޼yJvÀS>%Dt2ZZ<~}`Qk S0Sr͜4 A+()Q);:M]v*qZ-uB2/6w|}prc_܋BRiա.8 <ҤLh\ -8tCxAk K"I\5{\msH0N͘pFW\87 -v8r 22KcrO|AT8v Lk>bCO|C8^ϠԞ 3eeD7v#2-"X@_0QLJ[D75)w_Z15 - Nᆴj`듒 4[_/XPݣm(kS6ּgd]/:xms>R<*NʾતH9V >!91YqYMYhix;L%|W_ _Y3_O]R:6Ҫ,]|TehHKW_eu-c1In[.p Mp40WW10upw fu -tJhAgkS*(](Vn~YV:Q/Fu%ca`( }[q10(n)mF{'#JxiQ2ui/4^;z:FdUa>5W|!a.=gzu]crg4ԅ:!*y&ĈT쥉;\C'%3yEFXGxf~[겵Mt0`ll -Kbaa0S&:VQee!9䋄nhhr%+$t=775 ...! bh, `0BLFw+//`S)a$``Kf/0 ?L&%tx- ۊW(К}j=7!0vdR03^>2(xwF0; [JB|#װ;h,"8ua^PP_.ᨼrR QuLIIe>}T8>/Аgff`;;;񍐕3h*++bN?!qN$,K?~gaoe܊PB/.)]Mx㏱ 1u$F]@h<ap~,~~,,,H͈(d."KXCO_$Z(Ei-cfbpIU[K+/%2:nƜ(Yyy./ڋk rr7]$S3BFWU{a漙{J=qXPt5}q%l͟AI嶾)$cP,B$4UPRQR8i`'&6; +kvuiY -.̦%j﷓^!vJB71G]bJ5aTkDPNEY_#uID?0Σ*lmmlBF{jQ I9kRwwlvOcgP@ޯ LIũ]+/'j{,~Q+=_Hxmad&=u G~΅%KE/Q3ɋd/1#v>Q.qv#o΍tFDr;/ X'bfbhԴOg.u: S޻!wӪQ)/FR"Kz' ͏Ɍor)uEuAVFnݹ2wRTR)j*j"gؼĆ9<.3GO~ç.W͍򨯧.} uż)VͿtVyUkf3ܷ;/3Ә죮~<-,6ֹ0.%co܈ -?f,-`A!\ʚT/.eLzh'uJIIAXuu56/Q`a4?zL[[ -SVEUFppHvmScLҤ~ -<І#i :Z\ji&C6˓,x bFV[GTfLK<emATBWP~Xyq}._9au-qa~;4;FuLh~jJhj ryxOJABlU{;Trj.)na)趚:G¶lP70[}8Zz"w1>4q֖jDhk";6vESɁ - JH^@"JMj0C.$YTBYgoH]`iX~/.`XP^6,qdXA<~wޥƇw7oHv>vS`P㹣oiiف^Hx080\ er\uppZL&ނ:e)=䧦&:B]uU= Z9䃊%\;.>t?C&ՕkCbpt)ŝ̴ TnƄ'ւ&/j[&Ԧc@DP$tEA\󸉢hSC Ϝb&%O]EVlDTy1hJͨ0U+*g$xqfhhh:e8 |1Cן ]j~~$/{ppԄ/Aw@!OEPA Sl"999xçѯ_^ZZ@+*O S&srrٳgHa?CmP,ABG0ccc0y#aͽx4ZNq.PՀ\B#n΃E?MhjDp qϛ<7 :+!a#R+`] _xɝ!=+%u]ÃN&y0x~(^IN@ .8.h&S} HRrɾqaUHpõmy ǯZVX"/u~*qpn%48uNe_V{tِ$ Z!2W / -'5).+ *I gQ>L]9utsII]@ߏ]PO{zp:lnllD7.%%*΅p -ކނ"RQI -z|)vׁ 9\2u2udZ_͆"+?e‰rb.)k!3އV6561B^4;l@9 \ Ay bxJA.4~;X5 _f\aX -zR$X}Eg -7HNXE_]]FZ4773H#'Rsз ,! 
R𷪪J`8/JFx\R]ÿ<;LtwѷPuisP@.^MW,^وBxoD#u3XIx{#:,(:9{4yp4 c|uǟqS[)]Meb,̎E˫jk[#\Jr -guq<ZczZZh*)iy*я7攧ve3G(ik?"85Q=jY'ZKjZd6/tdZЦi`ӯ%ٴw E;( iOF-9 vvQuG뷢250rs z32# :-?vQ ҤVr9"RnV0/"Kc}^ݿ}.z%::j6|K)쓜V$uu8閬3C;| .Y[tG\B[Q~&eA]#Ω) -5ʝf_ Gq'&&B T'yX`F/R^^y;11un(4w ܋ 8r)m/MlR/2 -.L]rzz|'Gw 1zH$h* f'YG0j˨1ysH (4'R)"m0M.24=muš0Rr0*``{Gk#'AdblYVYFew4.;;;"с@Mqay,@ccc_yzz:???..N - #;;䤀nY\\D{R -h`U 0h`Ё6]½cu`xxaa!V0al0W8%zp8CgAG C䍀N1144t*$0K8 :eC]O?*)*~ԅl.<YPMF2[[kZx(/'O%x7HN*K-%oq4EIEQYUM[7 9xT CYCPNgʡ'δ19D]5Ɖj\ #@]tlUXUOТ-# q%8Y6i'ӨojjYF]d(>H>1ᡤtGU,@5qtsԅ75IAedjI]@ xfL߆ab~B f{h4 xMVV'\r=$))ixxXd(&$~D]RSSߘ}a`m#^PaӃ=,=`8VXO/k@ZpB] uEl MJL,eE``NU1z(qs΍{3.A`d&6W j98w&.H0PoQ mF#2!-<ڌΏԄ0 -'Z$ X2ij2&('̓CoD3x7tTjӧO&FX@lll effN+--4::z_.daamoo] rzz"cݼyfyyyBjA 0@tzGf•0WP SrAwuu QVWW -..B!XVV-tvvCwfKju!E$uP.E v>^rRȀ kmc#V{pP. -k˓PLz @<:EUIeqP‡'*eEBk8_ww:zũKJzp^u.2r42/ eu+$ZdW' b/j3EQJ%~N]8l¨YKrۚor))((`;(. -ʍrr?}$K.B_r?i -.2qZ:Pqrg6?2OS%u5X.`uS!hYVѣG(UJKPAXY)Bn---?uIKK0ԅEzeÇmK<,h3//*2.L]. z s7' -f&i_`ٜGFpD? # Tvf%߾s]6/2NI?w͛7X`0u|K?3-ԁ7~|j~ꕔ-hx'^DB]`+h)()QD%]9GZkl{ ֛NrerKYJ~֩`K\ t477>Ea -P$EcTYpgfftEIIIRZCϞ=f`}MTm^.&R#"P|6 e&ԅPs.FrrA~"5gtUdR0;3Y!Mk7^Ϩ`隙"qYy3Q%{+ZAɝRDh2-9YT{8x4B -O,%ui+u˧.mkIfrBL\Dlhw 38/'JM$mXQ>NN#r_QRp aJ;k (b?]U+4u)Cԫܝ[1zA%c]8ꈋO\Kӄ]YDV%yX1YZ '$XP:Σ$cKCPX~}`XՔz{RF~FF?\q!Hٚ;;Wؚ݅RDfi48]#( ]švd)E< -Ayz{HI]J7o$G0y7.8;r G! -aAQVV&677Ѿ&).Rܿގ~H, r3ކw0 >>\]]W7bttTIF"ԅPB]RdUl5DP54;22BPK.> \EjNτ  ~= 3cK>ˏݘ#ubK -O!k 8AbJ$*Mho9Hrgʆd]uyqRʃe*{, @]`R͕P⨋OGSדX+ -?yt)6G :k'F؍3ک!JU4uË:68n2?~( .P,]|i(tPvCrۚHw ]Yuq -W5v+(_pX-ˌ AG %/7Z_#x|!--ɉ\Lx_LnyXr3QP1\=KsUP 1if2Z&m~/PS@\.o~hť W8\~11\1<|O|.`OԜ_!-ܓ%E3@ptCx"^N3y.s.ɭ.Lڭ F͔j~)ʼz&,'R3auCE1" -Y0[|%(Oo5*9Ii_+Kɕr~aB]u\ݘKlM}?,ѫ'0v5oe[S -ퟑvzQW~;m4MFCE4.o&.9 -yQ`Ahm؀p3C}m]{+.о ohj`@S1fFG&zXXhal:ܸTt~"9Y981'x')a>[{[rbSP\cgmjqNД.tU]Mȭ5/Pzz)33LLsfogD-UJ]UQ4_VwBw r4KaыQz{xmlu#-,g Ad-KB,.>glejbwSMP@Md`iow3EA/ZFh!/+'ot'"$oh~u;vV0 8;A`SY][CIZ]XAIUFTV-# yh;$Mfm=zDr&uv)*`w``'61Jme/OKCEeOs 7126oUUAlxB]NNWPRSUVQ- -6ɬMWZ=PQ+v -m2-\Q t -Jte%^>YRTVvIj,yQ#Jj*tJ -^*U |FZ礆"e!'*dD%6ZF_*9=PP,L8\F]2~ʖ"]w+>4yy%.v? \V".N6p'Q0Fm.ws`PUCECWY][,+{5 -I؈v44-*tUu03:n)"&'6+DM48@ȝvo'ѹ7Tdd(f*y-)Pv$P~AEeՐ;xj˜&;{, eޝ:0H]IIMSESWIMH_7KWhMLE]޾i+VVVPo6$U4.PQaBsi7͛6?!cY |ɋB -MNgz1)ʉ+G{sW&38h-/ ,v= 3Phv\m8b#P R}rg -"}6mbLpqv@[!'(gOs.(rӡL+s&jw<39;SͬN݁~RWU -φԔ' -tsppkSHPvkfVơ# O u皆D:~tqN٬t=Ybl۱U5u%FZ؞ܶ*2PR|H~O`*sߝѨٲZi`x\Yu=#԰.GMI+$ V!ЩFt"hJˇ.!(@l]v";8UĚ#&/iu;rSW[NyZ&G`sQOKsTtYYV&l&86Ë EZZ]:6?Wןr:?{ZmaG&IS2_}ԍQ;ۍ 66rhŹ=y\utM\KbKpjITdQZ9hn1(TцM@W|#.c6.<$]#>*N[9k~T5ҭix!UzݻȿҜUQQA]׋X 8CCCPM`Fr9 ߯gggITIMMmiiYYY@x̋$jD|x*FPmOFGGAI666DnPFFFWW&pILLhluuUJ# -:%%E\_B]u!ԅ"\FixԒZ9 -wm,Փi}mȵ2݌3'&S{ZbAyC+ g/O qG^q!ԴGvEwxRPIR;9 n0D]޽{W]]\/⤌/_> HXg,'6%$5.ZW$l4;v:iL"<fbB,{& 3l^ړ`aU,%6M' tMo4ݿAWw{w?;.#sTes'xJ,} -ha,or wb&wm$ޜC<4?充8.tAdbbb! Xq+۷o/s=|wh4&a%^_8YF›E"ԅPB]!Bˏ@]HÖ.\B9ˠ.?L8겳SVV]$, KFF#r%xB .D!ԅPB]u!ԅPB]uAlZZBe\ |$[u!ԅP"DB .B . 
#oHcc#JSZZ3b,...FபjooPB]u!B.B .B .ǏO9NDDD\\I7IDzݻwQ,zP"D!B .B )B ^8ollp8 B]H/߿_YYɁg%ԅ"DB .B .P#?ӧO^|I1"u <&4.D!BPB]u!ԅPB]u!G.xM]|'Gʚ!BB]u!ԅPB]u!ԅP!rB "D!ԅPB]u!ԅPB]u!ԅ޽5u^ٽoM3әĩKDmryg]{x.$B] u.P"Q"I.ԅPC] u..ED] u!*ԅPB]$"I.Pb u.u$ԥlm0[7?nݼy7ޖ쫕]s%fc7{ƣ |cnefk$Q"I..$Iu$u$IRԨ$)ߨ$IF]$IF]$I5"I7"IQIRQI$EHH$)jEoE$IQ.|.$Iu$u$IRԨ$)ߨ$IF]$IF]$I5"I7"IQIRQI$EHH$)jEoE$IQ.|.$Iu$u$IRԨ$)ߨ$IF]$IF]$I5"I7"IQIRQI$EHH$)jEoE$IQ.|.$Iu$u$IRԨ$)ߨ$IF]$IF]$I5"I7"IQIRQI$EHH$)jEoE$IQ.|.$Iu$u$IRԨ$)ߨ$IF]$IF]$ISIũKeE$I!E|١.$I.$QI$Qrf?hUH]zcgKKHTde.ΝK.N]WE$IԅP"Q"I$B] uD]$I u.E.E$IԅP"Q"I$B] uD]$I u.E.E$IԅP"Q"I$B] u@]J˻$IfeB] u$INڨKH.$Iu._]2"ʫxl~j^YcO٣5 B] uGŵ.=4 u36~-|?سSOf4rsm>؟ks?}Ax~Oofjvxkݧ~7}v_uwQ.EB]$QB] u.ԅPPB]$B] u.ԅPB]D]|ȷRPB] u.ԅPQ◣$B] u.ԅP"QB]D] u.ԅPB] u.E.ԅPB] u.EԅP"PB] u.ԅHԅP"QB] u.ԅP"B] u u.ԅPB] uu.ԅH.ԅPB] u..ԅP"QB] u.ԅP"B] u u.ԅPB] uu.ԅH.ԅPB] u..ԅP"QR'}#۩U[,ق+S&Gڌ=i]=;*2B] u.ԅjjfD] u.uyuWU*B] u.ԅH.H]ԅH.ԅPB] uD] u.u.ԅPB] uO6]"QB] u.ԅPP.Eu.ԅPB]$B] u u.ԅPB]D] u.u.ԅPB] u u.Eu.ԅPB]$B] u u.ԅPB]D] u.u.ԅPB] u u.Eu.ԅPB]$B] u u.ԅPB]D] u.u.ԅPB] u u.Eu.ԅPB]$B] u u.ԅPB]D] u.u.ԅPB] u u.Eu.ԅPB]D] u.E.ԅPB] u.u.ԅHԅPB] u.E.ԅPIԅPB] u.u.ԅPz`0.ԅPB] u u.E.ԅPB] u.u.ԅH.ԅPB] u u.ԅHԅPB] u.E.ԅPPB] u.ԅHԅP"PB] u.E.ԅP"QB] u.ԅPPB]$B] u.ԅP"QB]$B] u.ԅPP=%PIԅPB] u. u.E.UYQB] uri<Ի˿U+uY]ԅH:u.ԅP.N3_lNj<-~+u.ԅHԅPB] u.E.%ַRIԅPB] u.uFE.ԅPB] u.u.ԅHԅPB] u.E.ԅPIԅPB] u. u.E.ԅPB] u.u.ԅHԅPB] u.E.ԅPIԅPB] u. u.E.ԅPB] u.u.ԅHԅPB] u.E.ԅPIԅPB] u. u.E.ԅPB] u.u.ԅHԅPB] u.E.ԅPIZ4vf\*=],ق6cOwc4.'b˩׿8? <7.\ u.E.Y]ׂok4H݌,aقɇ0azfgK&m?*7S7 }{N/S)_ s+u.ԅHԅPB] u.E.ԅPIԅPB] u.u.ԅx u.ԅPB]$QB]D] u.ԅPB]$B] uD] u.ԅP"QB]$B] u.ԅPIԅP"QB] u.ԅPPB]$B] u.ԅP"QB]$B] u.ԅPIԅP"QB] u.ԅP\uuPIԅPB] u.EKq]-S"QB] u.ԅPPB]$B] u.ԅP"QB]$B] uRZ)XxxjZjr_~{&jj{Y<$QB]D] u.͵`母ޒD] u.u.ԅPB]$PB]$QB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$r٦]]$[]wn<.+sIu.ԅP"I嚫яkc˝O/n,GV+s-Iu.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP"IԅP"PB] u$QwRŴ uD] u.ԅHO>oR7 "Tde.|:BШ u.ԅPI.ԅPIԅPB]$PB]$B] u.ԅHu.ԅHԅPB] u$B] uD] u.ԅH u.E.ԅPB]$QB]D] u.ԅPI.ԅPIԅPB]$PB]$B] u.ԅHu.ԅHԅPB] u$B] uD] u.ԅH u.E.ԅPB]$QB]D]%vkq[]f vëAjI!  +ggk{7{Ӟcq'a~hfPB]$B]fVn[]f v/U򸰗Z4\ .V3~ouڽP݋}g6\ |1<_ͷ$ju.ԅHԅPB] u.E.ԅPPB] u.ԅHԅ\uPIԅPB] u. uqօHԅPB] u.E.Y"QB] u.ԅPPg]$B] u.ԅPIԅ8B]$B] u.ԅP"QB]D] u.ԅPB]$B]|ˆH.ԅPB] uD].E.ԅPB] u.u.κPPB] u.ԅHԅ\/uɜuD] u.ԅP"PB]$B] u.ԅP"QB]D]jK~/h .sjjZxuG~tuRk}TKkեMjnP݋=F'CwyI] uD]Kk4H݌,{قɇ0,ԓB2[2}%)QڽiWjZN[]n?O>:}6 .z=B] u u.ԅPB] uu.~ÈHԅPB] u.ԅPB]$QB] u.ԅPQB] u u.ԅPB] uu.ԅHԅPB] u.ԅPB]$QB] u.ԅPQB] u u.ԅPB] uu.ԅHԅPB] u.ԅPB]$QB] u.ԅPQB] u u.ԅPB] uu.ԅHԅPB] u.ԅPB]$QB] u.ԅPQB] u u.ԅPB]D] u.u.ԅPB] u..ԅPIԅPB] u.EԅPB.ԅPB] u.u.ԅHԅPB] u.ԅPB]$QB] u.ԅPQB]$B] u.ԅP"QB]D] u.ԅPB] u.E.ԅPB] u.EԅP"PB] u.ԅHԅP"QB] u.ԅP"B] u u.ԅPB] uu.ԅH.ԅPB] u.u.ԅHԅPB] u.ԅP+.YH.ԅPB] u.u.ԅHԅPB] u.ԅPB]$B] u.ԅPB]D] u. u.ԅ\OusrvZx<{k|ŸN՚ u.E.ԅPB]RIu.~ÈHԅPB] u$B] u u.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP"IԅP"PB] u$QB]D] u.ԅPI.ԅPPB] u.D] u. u.ԅPIu.ԅHԅPB] u$B] u u.ԅP˦.a~)tFC/ْ u.E.ԅPB]>;`Oqߞj%[u.yKH.ԅPB]$QB]D] u.ԅP"P0.u.ԅPB]$B]u. u.ԅP"Iԅ8B]$B] u.ԅPIԅ8B]$B] u.ԅPIԅ8B]$QB] u.E u.E.ԅPB] uD]OQPB] u.Eu.κPIԅPB] u$B]U]2g]$QB] u.E u.E.ԅPB] uD] u.u.ԅPB]$B] uD]6FB-7SonV־=ӤvR#R zww6cOk8-PB]$B]rW\ɗ$iLɷޅ;3=zޖ$Q7D] u.ԅH u.E.ԅPB]$IԅP"PB]$IԅP"QB] u.$B] u u.ԅPIu.ԅH.ԅP"Iu.ԅHԅPB]$PB]$B] u.ED] u. u.ԅHD] u.u.ԅP"I.ԅPPB] u$QB]$B] u.$QB]D] u.ԅH u.E.ԅPB]$IԅP"PB]$IԅP"QB] u.$B] u u.ԅPIu.ԅH.ԅP"Iu.ԅHԅPB]$PB]$B] u.ED] u. 
u.ԅHD] u.u.ԅP"I.ԅPPB] u$QB]$B] u.$B] u.u.ԅP"I.咩KH.ԅP"Iu._]2"PB]$IԅP"QB] u.$B] u u.ԅPIu.ԅH.ԅ\wuyVÇg:Mg[X |x:/fIu.κPPB] u.D].Eu.ԅP"IԅP"PB] u$QFE.ԅPB]$Qr%sEu.ԅP"I.Y"QB] u.E u.E.ԅPB]$QB]$B] u.ED]OQPB] u.D].E.ԅPB]$Q7$B] u.ED] u.u.ԅPB]$PB]$B] u.ԅHu.ԅH.QizQfwǣ3LfIu.ԅHԅPԥ5}6'.I u.E.ԅP"Iu.ԅHԅPB]$I.ԅPIԅP"I u.E.ԅP"Iu.ԅHԅPB]$I.ԅPIԅP"I u.E.ԅP"Iu.ԅHԅPB]$I.ԅPIԅP"I u.E.ԅP"Iu.ԅHԅPB]$I.ԅPIԅP"I u.E.ԅP"Iu.W@] -uD] uƧ[ͣ =qID].E.ԅ|`u$I.彪. u.$Iԅ8B]$B] u$IԅP"QB]$I.'$B]$Iu.κPPB]$Iu.κPPB]$Iu.κPIԅPI$B]u.u.ԅH$B] u u.E$Qr%. u.$IԅP"QB]$I.ԅPPB]$Iu.ԅH.ԅH$QtD] u.$P˯.Eu.E$PB]$B] u$IԅP"QB]$I.ԅPIԅPI$B] u u.E$QB]D] u.$PB]$QB]$I u.E.ԅPID] u.u.ԅH$B] uD] u$I.ԅPPB]$Iu.ԅHԅP"I u.Eu.E$QB] u u.E$QB]D]ix3[0Nφ3==Km?FO 0,I.ԅP\Bu)yٜkiy;=ɗ$ءoϤK?W->44͵ػO_J u.E.ԅPB]$QB]D] u.ԅPI.ԅPIԅPB]$PB]$B] u.ԅHu.ԅHԅPB] u$B] uD] u.ԅH u.E.ԅPB]$QB]D] u.ԅPI.ZKH.ԅPB]$Iԅuɨ$B] u.ED] u.u.ԅPB]$PB]$B] u.ԅHu.ԅH.ԅPB]$Iԅ6]"QB] u.E uE.ԅPre09=SYD] u.u.ԅP.$QB]D] u.ԅH u.E.ԅPB]$IԅP"PB]$IԅP"QB] u.$B] u u.ԅPIu.ԅH.ԅP"Iu.ԅHԅPB]$PB]$B] u.ED] u. u.ԅHD] u.u.ԅP"I.ԅPPB] u$QB]$B] u.$B];KH.ԅP"Iu.κP.խh}9Lǣ > -stream -x -ڱendstream -endobj -350 0 obj -<< /Filter /FlateDecode /Length 6048 >> -stream -xڥ[[ƒ~ۡ#d ̉ڱיl Dix, -/j6%`1bwu]*F*Z}"U?Z0ϳUjҰab0lCFAB*z}272ƛw/N*4MVw -#bU$aaiK]}jIכ8ʂWjDZ~t**vTX<}W>V #{ֿc1"b50Dn|_RڏkUspù~7K@ejLgy:>D: 7J mqiJ# +% β7i=5Z'cCuC=aCIn֛Mƺ̗x,}vy[f -qJ$/Oq8pZG8PPQêeCLJ}OHuׇgi4,QX91%Z`mWU 嶢C}"HsWNULCЕy$EeLb]&ލv]eKL. uGUu>gB|r2tێX)9˙KrfvUxhXTXo?pOKٕ͡AN@1=cڴiʙ5mBjfmY~ȕUE<gd=S˱PIأ4ѡy]u`{ĸxu- X<pZ(-P 8eI4ţ?/S;*6b&6 tWhiBՄqQD(;~3[KNjޯ վi $O/UP%H\Ȑ& cCu xE`Il@FT4Vn9J +{gKCuEiYJ@`%*x9(J8K$tPW]Djc}Ib{y*<741>ƾPS;,l:9._} вMk젲1Dܧ(t!%H6:.w?ngURΊRzxA"+]|vUdq4 MY:ojVԑ Ud(̀57᮳}N0Y!V=?L}F1KnJ8O._˃wtвR!dDw2ƮޕÒI3LSF`50Pp+RހB_#CC\*Q=E` -pR䔂<<!˙+`v\$KK%UY٢Z]rJcPYd.ϵ;GrXlxxسߏdצd޾I8 "(@g [(t-Y{*gxgfxW9wb.My6~TA}gt21Xcغʀ:6 o>] ~s كA%"A*^ cs=|"B d ϥt-^ʱތ4r2Sٷ"@ݐL7Xϧb'w0uoclw["+K,V-hּGjjPV#״A /?XOc l֧Ͳ0 -ʥ|=UKkKU)6pd<C}PVjº_/H 3Ԋ&8s܂hv*Ʋ_¤oXLӇ_ ,h|;̾&8vZMn%nAlϲ-6 <kdZFʁ1Tk|FhajyA4=@=Z#[w?[b>Y*gdPDGvb[qc+8h?cMjwDDdYX Mff>V`:Z1cAe0#SE% _@-\P)}AQ+tE0KW]&Ɔr_G+)GҨY\ɪjp3QױF9۞9j1T`b-THXNSWZ$#Uֹd7gNT;T&uvq*jxrg -C? HYo| -dʼnbӲyT"#5Tϡk3 Ū -M Pլ{AO/3^ D( -UB\7B?W&!?5뎒2pٟATO D~eX:rP JwA7쳀[k x֍'D_LWw.' ,ް0fe@E$$lu2<T'Nd㤃%(}*󉀢H2b?s -zz.4L^vp -- o9G# ,f7n%`R pEF^:f+g?L@8.,[L@ưAYHHU UB Ub" :%޷.'i]-Xq+#a$JC ޠ!jOܓ}#odƌiuTڗXIv:IfsFoO$b`ޏGI$1Bԗb8y94x |K%72TI&6NnKjR;yVj3'<3c90t9B[w%Pŕ]OrXi -ɥCՂp2[$8Պr+= [ՂUUzѫ`7s˛sjsCI&4q&ɥ]Wy* 岉V\ŖޏG*܆QDF:z-r] b;ѢQ'U3KWCM 0‰C䭁n9N7d!BiL"k: 'S9jW -3A=[}e i\y[R,VSv悂v>.47W+*,"'|TBѬf\`l*h7m10jP!N8fM0ćj۞NU#<9]sU>"ҥsg#SL'AVsӯSPᒜ01orFo#i4 CC 'Q 0b[BlaP1_GEYW`"FqW*o K&g,)Hgsr~)<֡5g&>_QZƸx$[rbɣ(ZlGN(ҏzVK[E0)_u0^uA5A/H}2ILZ>bj½Y:NZm]ܫ孙fMt2s[ˉR q6+J4yUrPyb)+ -j@Lََ/AG`=!*VuB7F\#80)nQ廨sTנ9ۨ!.kA,Nh?2KJܺH/Gu!zLc>'Фv հ Q9%ZX1>'+VBQ!VliWU6C|JrfXOvXOKhXƥqчryzrBpiŖN'-wrJ" V/S(e-/t 66M-R`7N!" B`5 MŶ#A8 =JcaꥶPG4y*@:rrDcOXhWr g%XFдR㲨iGd Ds(@:='7@=^Ki^,TAE _^N@;j7>$&1æڬ-]>MJqw,ӇDXsXt7o˷ -! X 3u$0u -KIDW]x:^9%dʙ3> -stream -xwGս{wCN̶l l,Yd13,,F$K)gى8:sZ=_WWO?VuuD"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$/H$}nL$>^|)D"a0V$ c"H$D"a0V$ c"H$D"a0V$ c6}q˛?s -vSwoݾH@忸Յ?(xqg<}zM{1a쟜7'nhQR͹ʹק'=m Ąrƾt7[b̧;Hmiys_؎K6UZubXa0V+ -cXaH+ -cXa0V+ cXa_BO}*3i032I$'tԅ]~yuyu M#4?8;0CQ#cG'8U#c K,L--…o[58cU՛47RZUzV{F{krNR$Uj5~׳?#`uؼzÇϞ={޽Ǐí[Ν;7===ֱcǠ&s֔P Ul^xwN(#]>c_p٣[N5 7 ݇ퟻ5Б]dfޖY~ؘȑ\u֗yVԖ I>^\V13_㧇ۂ:9ܥlC{Ueh&30e{ǎ44ۄ['$8d~b,=8>Lj lPUPgh7n> S3 lBT1R(SJjջثJT>1V9IjIQQU yT;jR1Oy8#>)bƂ2HO8"=]WWW]]0<<|)A޼y… .]|J7n|vBWjm0{oTewQtkp7H@gsv7$#¡ȴ+,cv˳`2@W˘0"l4Maо";RNVaxG0퍽rݻe> -HJ!l4rL=-B\r)&;Q]&^.9&a-NXL;ܜbzUIwUolnJrR#/w,?=‰nڽ݉ʤi^]28:f;<\܊ъ ֎?9fffAlooheee|l{x{wvu &Scb~uB$]]o1F)?5S}O&Eo޽pR? 
JE64#o=G[*CjSG[Pʵz6n$?N)ɫ6-90 )G>jueع9o x!T clPҒwtڨKLj]vݻ8JxKK `O<3?izz"#G䐕!\b -?Wd)ߺu'&̙3|E4XV>:777;v#X c7gF'O?8yapvwnM_sK<LvۛZZ ͫJ.:A@=;m߼έ46T쯎`BB];+|#Ig!#a ?U8R<$U+j6 Q\A>dSQ܄<|=x(vj v8#X8ָCCqnSvʉYe bػFo+Ma ۧp_IÏ36yQ򼼼vnڱc^DMOO`ݻǏSbbv޶a÷[\\ v'';WV?_{^.<pjJ#xAؗSi:L6 G u(y:I{a5q&G]HʆL}=zzTO&Mt]Ӏo }F_P5;=LRɜȑ.jueł^ !o ;yxzۺآM - td{;} ~^ʩGqZ`LTK}j!NXx9 j%VP5" *g)%vt>{Q f&ÃfFۥXu{U\Вٽ(g-xorʪm -?g`mb-6b#;;sR1nY˟WYS>>jrSZK|ϺЋK޽k{36~>Ϻ|:QZ@oZ~9ll--C*v>-W Q2J}v}8U4dVᑅ))mpl|j1dZw+?cm[!UlS%|k޴Xކy@A#o=/|l-++Ќh 闖B).s`AYvv6KNNZcqLl0 ▁E:wq& ,$%%[ l!_XXP3{[d_i2hsWW# 8d… TX cSa9m^tdKFi[f -clt7n)Ϟ"Y]{VRQ3'| X6 qn[vg -UYS~MK[wc5db䚷קī*U쐞=pjG:MjmVcWo݂qWоXC/wled -fmSb}+W۷+[Z"0ee~ǎ;wpx8 :;Z[돍c߾myСD'kkڐl ӧ +`,n5U kXoi00IVתSVfsSz.ք*."yP7܁tQRK䘵z4T?g_4#8V1 3Tb|l xK;a ܃ -X%c!^ZlmXkI#б <ҠeO&o߾ -Ti>]Fph -5;qX c_tspz|xGuO_ZZ::[[#?/_6ݻWSZg vj*ƍk,߮ITS~ *͉WI8-WrYjcA;UL}`,6bS5dɠz0=+ -"Ʈa`&;LBd,zVy -Ɖlh72n߰|j0K04:X>0 ߫ l.<^90 -bVzPf `j3Vy0`llɓ$b;ggga, -8[`N Ў&DU~<66~X qKGBXO+&a>'紿R 'BwRV֥o7O-d[ eqY;h[A^6.~ɚObZ}]j@;jCz9 -cٰW2Zz۾p_yY SE|"*$z\}ι~:\j1vnݩ8Чn=8X}i\.[+ܬzS~h`k -$_ju2-U]tnMMNeyXI[,Чz9LNO7e`RkaFOWMZ=pJ向007MT%[@{FjT4rvr^tm-U'Mu,ΟC *eeho3MƄ夬5fb ?rN*=k0K戈h΀2  ddd$cXPL!) c72(*O.jDoS5g26`j -۪go/4zдwMuHOdsbkoI=r$Dx|<͊ r>N]a{cJEkz񪹯GƔZUcMvGӵOh2:Lᅩ Ѐ~6LWjw@K5gJa, ˨PS_ P=2hk -#xq^72>247LWnc&/88^|-O ,=Sxŏ2=_OXH#r1p2,, 9hZ2@A룘g3V|jPGL0el~djQLƥm(6m?p<>}i\3T<]wo{_难+a,. $#c3R fX_ }X0k+&o$͌u,vyceBK:#U,yL{R؜&S#ҟs܌ݍKK`!;UW;6vw<{m -?XP[딝bnQuylblzyx C`{u\- -~pciÅ -Qf) `BQXb}A'FX7dvx?72 YXvxxX3`<7eTu!40'dAZ' odgff'Z)9dez2L9s tv.ػ0v93tH@Ubnm pS6XtcWw74njaח_yt;3xuNf6dhpnxr_8(@>:Ys^S\ܹu<#/OXݖ\d] 2ԇ4aD6)+}[wcƸ6[K6NnE5ct,kuٝ]jAfr);>uv}}g;vgI&'1ɴ]x`ɟ̶]hTDGG'%%N(|&p !^^^N13i#9GG׳:2!X̉FȈv ȼe2rL_~]&ڹeB_|wD]jvz{˦6lm+uG\~0vl ofֵ:)h6)q}e3iNmvVt/OM*ybÕw]A0vWe!51+p 6&_47U̕%8$5*ŁaLLWU]+u|<%*@_:SSWW ]߾m}%p뵓YYsglPxao}9=bV*vXGR;U7-v?zZ)Xi#*ԯ,pHO9;X{VG[h:a 0ZjcA4b^pP8[CqeJP;P_ښdz0~ˀH*na#Çx^@6QnfQZI!|5O:,2d`_D&ǎ*/O5!RbDbp~CiKCE}6uб㇪{5X(}:y̵.cSs.'['X3]wm _!fXiHIcM#W<? e$ X`{Uj[aSUyT$))~ KzxAp=L We[a&5[KVM+VG 4WötiHgi5gfۑճ gi_lĒ޺U939=W;8w횊gl(#R姁nb5ZW\nNAy-Ḯ:=%9UΩg?Vh"fB I+'%i~ETQb@AZ.e>չs甁JC.^Q^} j]l"+c@y,)Ңc?hؖ9N>MiY'*={o/^̕G 20o)Fo@fe~єkӸ[yo'̈́[vo)&B*Ob?Rla ;?;w&Nt]&[~Vߊ]pf>_e?^x ޸r``o, kOpe~D+>vxxxllٳ& -c tՃ׮]U- -cw.znK"hvXaD'a0V+ c8ƾ{9] 07=~G}C?oA{쭫*~c c3*=gT$ƊD"a0V$XH$ƊD"a0V$XHЫW"H$>V+XH$ƊD"0V+XH$d^H$ƊD"6(D[D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$DKz+D3ƾyH$9$D"a0V$ c"H$D"a0V$ c"H$D"a0V$ c"H$D"a0V$ cN}铧ٓϝ~h1}H-{΍Ki-:ysۙϛ^:_~Ϙ^~%NK=p cv>؁G;5sE;>ozvvi忺|1rQ=}y_tW;zeI* -cXa0V+ -cEXa0V+ -cXaHySTI>Xa=ɉXaTDhQa0VH?s%~Rc]>%am4c޽kU6\dP!Ȣlhdۙk_yskf/˃n,<4rk؛\pLQiڝc #IsڽS&IފON+‡B_ݻw/^p˗oݺѣELcCXw5%ۍ7FO-aZMJ233sqg%T0/XGR(c~Y޾m]C8*ӧyӲIIϟ7zfWؗ.??t":ޓ[_l%y~gƪG"Lxܹq(xpa*af$j;=%v_D68U[ TU$ GapLkq W@HlRn={v||wnnNvw?~ddl---SSS`+C= "ԩS555 7o쌥N8j*L0s%*?ϊ?̙R+WJ&5d ![ggt{{TWWxٳ/_@ -`;:Z[#"7 ?T8:Q9[7pxp̐xm)'ν>|e,X kSg/ͷ E7Wf٧'8d$Ęu9v)I1ɱi{C{-LotHOKsj)qܛdV>T隗A眴m&ǰ26foŌ1^rLӧOk ơCZ[[lCCCmmm{{;,tDW3g,ǎjΝÝc?UUUbgg,uuÇS9arHRi?ukw?sI W\`kwUa'ϢhE[c--AAV,ݻWpd^P[o3d9s - -BKJ|--u_}{6swo ni ƟX:s i1%yu7N t\]J_*θ7rŁK;ն0dP}t+Zz6ȑ˘-6mXyIwx)cj-m4Φ; F{ x'*⌛{}UoW[F-ꘙV9i_|vu֯,@Ѣ pOLlYYYVVVjj*  -]+**:::X &G 1fxll JS<ӟ!2LNN*Z3VD.Ϳt4ŋjJia, -lxp!XE -Đ eO])y<+wG6cu6QVqm5J_Xwu1`cSwGqmԐ֯] -UY؈NBxP0锝ūrmŋ%N__3G{^Tpv60"$2?{d$? 
+#-+~ℱOWTmrw7j}U3Ƃcg6 W%zZ;ƴgWNM>QY+d):z938>+$:)W(|:yjߑd0eVk?rjC-m62^MƳ<'IbwgCO7zdÎrj8M}_Ng,( \owuԳ5caZ*n0uzv縃Aux]{W2*& <߿Uc<~Piii\\VP1>>oY O@ 'Y^S  -XV T3:q'Nc !իW5wwwq MA;;;<+``ill$(`4RRQezqpsc!aHܾx&S;6$EY넨5 ѳ zH8X I!3dl0ځɐ'~zD3Zl̄ ˍx0Do`Y]k tu;5y 8eg_Qe -f]U^h^>t{lrspP'IIN/?yRUs`yժ/=^n ~7vdWyo73uI .J+J)i3t˪P-ZamdƂcEBι -zg76?TXwyoilֱQ`̵+ݜ6om[_I*IK)N<2fŔeCEV,1  Hq+V✿^v~?hἮf0ݻ@f߾}yyyz6 coܸA<;;P-pʩ)LrJ'<a8р-LkiiogϞPǘE .p ʷdL|5OHH䓰LIIZDČ " %%%# -$9phϱPE{؉H6k Π\鱂)؆l\l0ڹFONKnz@đrA^6̎Y)Q݉s4[ņMb}ikg|N! qJ\)A>\nY8T`k*8e#}O}'[`K\>垗^tp؎/))nUCKzqaa_q][>/wNΞoDmϪ -ٽ5=((:4Zڶ;L<+iѝ9U{gs=y6ƻ'޼ڔk͠:)r̋?^q&^`.5җyت" nZAap;[?Ɔt7`iq -:.ŝb>+P ` @xW 0A! L05x1VpQ0kZNo(쌶էMϮ2,e 0\)n-Iʀ;}8Le<;=S4.13/yoPqpqqq^65GVd'G%h{~"&_ -] Dp6y8v˩OMsU݄ڗ0֫O_ ak5 jऻg*u6m q#oFSXP(}p*z*''UPcXԂc02jFB=Ej^JO:)<{Tٱ9++ o 'Y9| QY;N4@X#G@N -d ʆI9ׯ c?ظCCutn=yKcCO=ey\ 9RJH70V!:e( iOӵZrYńqq-rKF #\DP9caw'@:eN׵[A}z -^l:thoQifǡCEE>֑--삏0oժ/XXNк}WWòs5}Iy)*yI}7{]!=8{~q_xrtcT?lnK(pqLFaT:X^H۲Oc'*61;ͮUlX:)hIv#j'inBs7`b咛fl_ewZǬoW %6?rkLKK 60??*L` 2 - 8R| 1:ȃg_z -GʯX%NgggDD{DžzfggOc'%%O:x!3ll R^`KL/ ᫥w:1&0WQociYY1iɯ[_5B_9066%/[lvF5AlCow7pr5i]ւw.Ųnwsڹ"28? 'w5yѣ 11))bt?gffꮫfTW0sD5:wka(5 2=Fg[)ũ:;C'+:$ D=q忾 9sVcUMP/?y;8H_|m՘ӓz߆ ;X֕߃Y?}+~r&^cwqSҬ#۲72U?ұ$ .dL ˁ5>saau8] Xdu6_uޗ~=c =";hDZPVFNLL -hll - -G8[J.;w('Ƃ8QU"+|BNY c322`&48[>TاONd퓓l=:>SS%Hܴx;Yfkdg~揾&taרK=քfʷbE%H@;/~#cұր/yU#6@&88caN< ̀6ꓞlEy TpQdYiiixN)0y<r8ž>}hY) c\: {eaGPf0u;aV|<"Ճ^t)kLl{Q6j][=+ -"V)oodyHX| ^UdL \є }v},nǻ=ڥƩfcң\&c`xȑ;/] W[GDzRS]܌##׮l1vm99z - bƞ<ؐVٜQ:\ґUQWueY,hqL_~vnzpmr~{MOnM%&z;f'+3J5>YcUꉯzKNշ7R*΍$:h !3(wYyB6~-8lKM?҆6v6d+~NnY\İxy:9995RRR8T? &!4<6ça a}-VqMtAQ~W@oXDbAIIpc LXE8gV{lc6?KlͮFQ;3bU|Qzl̏3mxTפ$ z@¾} #[xUڻם5Idճg Q)).NJLư7+V/فƬ'0+(:/J,Jf&6_ꙁY!~9U&M]x0vdؽ MO-k7r+a1钛}}-g* -I?'N҅MQ=۵DžyX оf(%ꨣlI9ܥ0^kW]aI-CqCLXTˆ.z& D%0߫V^D0<<| R:Ș:+Y{k luGGF~.he[ &d e d+/g=zLZ8G94 20DGK?6 ׯrAcWV3#ɮK^ -c ~Ν;˟, -cnra"Xa/ H P+qd9'0OcMo|뫫յ3s?}w_t7^8~}0Oث}Nܽy◱?˫^] ;?xϘ(>೧K3n?xX%3V$XH$ƊD"0V$XH$ƊD"0V$XH$ƊD"0V+XH$+{H$XH$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"H$D"HTD"H7o<Dy cE"H+DXaH$ cE"H+DXaH$ cE"H+DXaH$ cE"H+DXaH${ik/:Oo~qJ =zkQ.zZϛN^`O>pl9i/*߼9^@ kS{S3'\|==?p\/^XT*8s2MC'/*أ'O_[N*=wփEҏ?Dzvٳ+a0V+ƊXa0V$ -cXa0V+ƊXa0V$/䅩ٱ鑄Ѥ a0V?xÇO<xd<|-1lkT0O؅֣a-z4u֟We[xҤJv]c޽kXѣz)}5_~]#͛7oܸ&+F fSN={˿e(P{`13"2>=!F'a, i n?Bh湐êM?& m:B67Y&9aF;; -k_1c,t}ׯ?c!$s) l®]6CaZv9}SS_/3'OFиr -lnnnP Ҏ_zUcؠ'ؼѣ+1~S촮XMt̙X03}- lI^:M6i5S9m&1YjmpswZu70llzt=,CYn_~n&#u&kutvǗm74'Η߮eۍ;wZs>>}0?]_i{@xW -91ڌwEEEYYY===̌y$(,,$grrW&&pMs?;c^j-&=8Ylk -qI%YE&ǸK k96IUހji.9;=PjmnMbTpgCxTDab PEÒ'B{٥Ɠ&8KsqcV;+וlzn{n# emFw:=N3۞WϠhfI=|mo ,LX9C B]l'0V=V7ZezŘxFM@Ri>q$Nɕ>%J9vE8FB3$Ղ3|K_}cLݿr%A{'*쵉3Q"$24;xOk/EMcUoSHp McRCgqfku 1T8rG{Ƙ,?~NՑ#Y55vvž{IǏ޸QYםnj -/+./^X˗/ڒEp]t MNN:uӘ4:*V{YJH?0 0}lllAA#GXî -ŋwgV/n߾ġTɷ]]] |B1^1|&'1 0H__'i­[;cG;w|e,r+ֱ4[k~U~fsSsl^jGɾF #oU7B66޵pKsRwmv۶Fo7ד\2GaWn%ޤஆ=8ZÅ8w k t,vYP}91v[+sU1!G:vV[v5+ݝTl?quq>MWnw9fϮ:_?n?ky-onm28!>|CΩ5 n5ݨlbg-nrhꭖVGd6uLP l.W;c27fzFC+t$Nl*7O,6jSH]Yc[8%WlP=բkT}q!MGh/5nx][w;o6Q M4UP嗠]3ݻ'ZZ"}EE>p{vk(,qv645b`lLɄ?ж2zeeeGՌ.RPVSSE6C02޸q68pxxxJJ -VQ0zRvxYYY\srr""谰0uvvvBBeVfdacDɣ]t%1SSS -.L - ÁA)_.L%L`&uXAQ?uVk dY|)ÑFt5:8I;f!n9x]v XRjȆ^nvqmڌ -lfjmkpFC.ZCN]vj(yv:g7q٪MRyq6%fhYQr)1[@ql##W./gEo*1?f=wH[Sux&_r^>cBQͣsmm tՌsWucwYGÝyc 1!M!3EZa~hPkk0){^[kÎaGTFV.ua;e/:²p6FDfP 1ۘvϼnʿr?ޛ«׵a}|)LvN䛌@1Ӹ8'jZMc;nކZyz$$8c[[#JK ^VǙ3׭0A ->3IffZZ&S{1V 0*nxaze5++ }ϋ#vr8FD[ʊETXS_ (L&NFFҨgM>:nm9;-mp̽(*.\ٞo]Y~olNot魇|n-^W_c$[k{յ_M,430 Cz@PzDP)6AhM~ϙc^ $f g޳>'3%Ku㒏"SC=p[.ȊjYĞvUfCHm8C`V8)b-W~3pjTn Iz$%q& 3P#*oNkuxoS -ڽé(Ǫ찒1r0vjgll2y+&'fj4;[)c;'+{I[}3*4zgmҚvH\_BI9z#- .$+MZXA;GU!4*OV>xm˜_XhF -Oޞxcǎ.dHOt\ZZ -v__r -T 1À54֗//tONNJ 
]p^z!\5vHWz6_(2k'&} -$52đmXV%ϛ8 -xI2.ȴ5vsf}uXYmU)T9{uj,AWzuc#kn1eU7>Bs[`,d;|ƾ KcMoB3~ūœ>mu.}q/^hoLKb6[`}!*V`\(3\<o';3aKX^.{%FϤSk6Z?Pl4"Wn4o1 ;V^U6|ĸ=i8mjcd 0|= -׬\iy2ՖtnrFw 5 lՑ9r1sn7u<+1;iuvyttKMMtd9OS=UbbT)(z=ubb{f_jwq䮂8*i@iׅt"/-5m,q(J+T䣅}#_J2VAŗVUU FPL^&&&H.G)9T W5(&;|0-0LU+g`lPo@IwNOnz⾶vSW' ĢA.Jc}2Rvʛ:~+nNu F"@[\dL{<[eʌ䥌Eo07D`99i h^{G۷؄o{A'CMvsNS՗هe\?efPtosJ!s{/%֔2{iꤴ ?iϐՄJ-D;'h[{3执Ϊ^=* { lpq{4])cw . -QW짇 TJ^if,܉-4Uw(qNGטv#[#[W+6[0nZ~ʤ"MN X4L rcv bԗ?=[)X2OĉeeCEEtjMiihzoVU0pQff&س~(g4 c#*((صk}`$ZoVWWUnpo 9X0 [l-%^~$ hvv֭[oYZ /_W5sXѬ?/cǴP'K m qdxˌ~ϔՍٸfmU~ڀcxgkר2/ʒ5ֻDFg[px9}07xW /Bmb&7%QUvPc*}X:ׇvV;f%ぇ{<c lOc_0;5Zc3&׺`0  TMx^߈>~ [IϭZP\cSbU!*H)<;C@kDF\(-~#p3EmD-"VgK[ґ,UglU] ƅP+elCC\XP_x莍Xֹ& -sA9 uz -px91{uF򌎎޼ysCcI} :b)I0v۶m07\2$CYP*\իWÃ-,,9wdddUP3CrnI {7clDs>ܷy`hm]f!cR5E!5b:!glG[|zSX -XX]ksa_YC1 >j0ǡ'{T}6&Řns^#֔_kƐ2ҫ?= -yr#쐒QEEu]f]?>\ b W1V`D7d Jc@ fTF7 o"2HI9[ojM"1G+X:3+>d}0++;ƙ8xp+ߋ]X<>< xwDUp2NMUUEݛT?Aׯ휞.okKJNV7oy=rDcGܕ+5`K *^dyD7|%.MNN½۷oqW(75RX]]`k׮qɓ<}4urӠsӽ[*~+HSUG;U)mmfXq^MВ0!q_@鬱C`0)\ˈՐr3;{wv*6 M؟r`Vf$9Gz$oMH=7sJܷV}œкTB`rp* \P{{n'x}=)ĎB"C{/S4U:yo3>Nz8Ƅfύw_{j]x366F+I"z -胇z8 QE`#C%?GR׋~P \]0N U/3ahO@n%Ϧ&%^gsN $G7Ǜ|ZZ럗 ~Ÿ\RmJe0Վ`bIcG=p|D T ,;2_@ glR !&)hao`hlnlq ' X3)F`fXxᘵR>'d =x;w\cqQMVJlgDuږ]A n0yV`e֘i~9rn"XCk'گ(=P@&ZU46|bJ>mhT*//+ٹ'OvAZlJ%Vvs3MHPb}իݚ&P쀔EZ2tNyO0EB t%;޲e 4&H9.VvvvS[QQQqq1{xcЊrOh1''Hֆ$OSSSsss[ZZ`,nVVVR!B6Y\ Qs'4DK++=Jp6BŘ];)!եq];=7'x$՗7TxƓΧ)J]O҈kh N_5ApSx01S0 ` 'fGʍ$/XwFZ JjkL8cGWK}[,ã -)jjAUR̦p^45A;+rƷ!7"6DfQoajAd 4g [gk p O`Q:H?Z -HxꜴIx` v`w Sj=t&I*|>^&cZҮDKS˶wŷ?'WJC`siϹ55&Zv`a|:f-2%,/[BV}#ҥ8O蚕+?n22jta[˗U0V^3².zFsUgϞ_ -L.//̃`paL{{;y}֊&qJKy2>u]zY (ta``UMä hz?cBxCyᐱ.y2Hm#M[+Z ɮ)KZzf#ygRPύGO[H(Grn|8oBZ'+…f81ҟ>ܗ9z0g+s|xхE~}|q+o?/{|ӎflNܯ~_BPET/őKcV<13zq/&h%@~V*őz߻w9彼a#}Y Y ܹ_tOy3-qX@iQ kmMGg8jF jm7|>;{Hbw} qVQo'=>Ɠs=8~=߾u7HViABk+$+ d]UݹsGPŸғY)y5w -%Ae`oXB>)c?(՗~=An׷៪}R7-Cbl癛c)#Wo~ߣ~=.+Rx;d~xWO?Uy̳.7ٱof>mY;>yǖS|řg^ߣxqUJ0vE0{NZendstream -endobj -355 0 obj -<< /Type /XObject /Subtype /Image /BitsPerComponent 8 -/ColorSpace /DeviceGray /Filter /FlateDecode /Height 259 -/Length 138 /Width 460 >> -stream -x  o/endstream -endobj -341 0 obj -<< /Type /XObject /Subtype /Image /BitsPerComponent 8 -/ColorSpace /DeviceRGB /Filter /FlateDecode /Height 899 -/Length 117860 /Width 1439 >> -stream -xy\W{w&s'dɼܹ3dl1.Q]qG\@dDQQ\QYDqW\QWpEq eQAc\f2y|X.O}4էNU<w##W p"p\G|@##W pWܺukڵ pm۶{n -?~\ʡ2Çոs玁zj``g,Z{~饗7n[W`{{n=zmo{{fW\i0emm}Lyoٲ âZɒ%K^zFMM p4Dk/QPP $/KظqQYJKK9œ6mg׮];EoQ\vcP)@ - ҥKlp333pBNil~^^|(**4M4IC3g0 1:&$$֢^|9%%S3gW_r@v@NU_/Y2vXm믿NwG>iӦc -0hW\{E~߰lݺ<~ܺuKG 56?pp>lF5j4{ァ]9Yڶm˟SRRiQ Fk}駏=2䀞퀀QFQ2J#c.`w:vCċ/A7HQQUĭ+0hrB`;fK?9r$% ++0999Ʌdō lϛMԩS۷oc9rP p֭[kM6MLL4V܀{{&kzL޽[[XtldÇjrE%SSZZvڃ)nXodQk׮Qp֭d(?*,[:'O>ɄENNX%zj0ӻwҿ4I___yZZyw+W -*FPz -*(0HIIY`'_ =zTtjC?SbqLܹs>/ ^htΖٳgy*>T-!zSfddPvٓ^p'yIqq%e\?SN=gϞ۷o;sssΚ5KL<9 ,g.YDE,Z$sQq3MF9KMh{QQQb`+Kυ ɼUVYؿ?Kv۶mGիP_͒xe1>"pOuرc2͝;wA}U WWS"&{ՙ~mmr0/^$Ku5Vbڵɤ #㧰`"ϦMnݺu h=3p_V\UR^~͛>̥BAeJdQUOVrB%Bە$Kl`^n,=~Qdd4׊W*% _ WWT{ ҽij>J>??dT;h KlszCmE}|lR|D-4[ YÓ;ZН瀄![,j>_)S6l2KwЕ -2ʱfLllkݺuA=B9'KܹsYu52ѣG9s苾8Cdݯ@ -qy^+QZՅ 8BQ[[ - - -q&qWr|rGz PI۶muj-0K.U~[܈VFUQ~-Z 0 **|.c"p j=<֌ -+dtI&%&&N0-鎾9"J> Yg#}ķ~۲eK*zV&DP~ו5r9sp֭[s[XC4֢r+f uΧ!Y: UzWYqv~ (%?VQ}YO}@?MQ+bv&v1>8 !n ٳ'kf{tp -Ok.j7 h ZT###Ğ={8eRRJP-ddI,7iiiq - -:~T+|2Νxu}#W+\||@ ->#W#W+\\Gi7vn~l9`熲p~b?+ 7.-aW37k[7NvdWmn n@ p  q7 n n@܀qq7 7 n7Pq n`qq7 nBqq7`7 n@܀q@܀q7 7{7Pq67{7q3{nAAkp+qCcz㆝ q Fҧ큸<|EEE7 nxOvt-?o"6d |n&n\Y>9iyedt^q4DJJ Jѹ0z-xQSSU[[k?Y8 ti{rxرWZ>;;[QQaf~]vi(!nxfn6GMv2"bkT?~|ʒ\Qp)Jo>iiӦ]~СC֋/_ 盼8pA#?kB1q#xq͚-)A I?hx[O&Df,;u?FHؼ0i\(s q ָ8WTpuNJJZnѣGfoSRR233/^'G)ׯ7yٳg]ڶm`@ u0eB_N|uHčC;~;#s 
EE?!ҴA_L$LسM9lF]Y!K/n ٨I#<^ܠ#66<ݻwUߕު}/b/^))ٹs'\,Whܹs5͞=,Х%&&'Ŏ$9v\RpU?~jj -[PP_%$$PH7 ٳgLOfUttv4,-BkxKMr`>F;scFk >ࠠ ?4eI}9VfDbmm&ʧ=nR谼ӧgwߐ+@iK{ 1Bcδg@(:x@}x_QKO4A~ je+b"ȷ~Kސm,11u"rU4;EH:Ȋ+<ܹsǒIkō5m4e[:of_lȋFndϻeS&nuU{vy9֍o𹓗$vݻ>5`LˠimƱ#r<\xH -cgNs7?/SRD-Pz֬YCoɒ%RpajTTTPAqÑƩ=ݻ7!!d&l˟8tĈE>ckcǸ2>|P5;v)߽{#a 2#/X.]j!p -JG㐑}I6\08΅HF͹-ŒYY6ОLj=#8t}عÆp1%j چ~oDŋ+N.snIQHdݺu/_IԷ<&I~ڃY>&?Z;ϞA{~=';,Jda|թp;s#vyXC8"-'(q]e;6YwdJ677Sq7(nd3 -Dڦ`PdB!E_nݢ<(**U Z(#G^i-//CPJ -<(ȡ['֞w3]x>bwwc?ݙ.)+ -93 D*-:%I)6}ƚQ-Z4}.竫G5vS +l(ϟ/YQQ%!qÇjSHL@5ZT|wǏ/..fa:tՐW_}E Ty5{ݹsYg޼yѣGdTq~x4=%p4:@W1@6wy#,=qPT!:R F||ŋٰ1K(+6)8qDnne }/+=rff+YKY͛7K񇈈a-t'"eӦM AeRL17 MZYhviޣ[Ku}Z]U3NsjH7}ƶ}z@V|Ο7ϼ i{aj4񕨉;~)XHo(YnCc&]m6_HKۉ)c .F^ -KxK[UT˿3sʮ 3F -֑OJŖ?*N.IwvB9!ͅrf-HN,fUܠ=cUƊCKXѓ,)6l|ZF@Z3|Aqؽ\y^^(:7;8dh$ PG&6%K(u`̝;W5|4Bqjҝ=:߻6Um?Csf\ۅaνx({,LY4Hc<|S?0Mt.x6=e.ԜCUWW3g^UUѣG"s%jHk>}6/B)k/Tʉa5R}ojh(߄RC,jְx<:ot:aKT JH|ApUߐQlM!^vR9~89TvӵISNwd39T #E;2W\rsn-[3 n4n7fjsdxPIR,fQSqHyEz>qc3G&^6oӴIHNC۷#|۽!KǸ:ES:SѓX0ٯGp5XtZp[׋?"Iϓs)=:+/B;`4I޳!}{v{dZqH'$Tz%E+-۶ O[2q#pΪ{حuJZaTŬ;ڨI3^NeDQ|’Bܸ>uY^[ -iaE>%kec 6~r $,\({U\`87Ϡ111'W^ Ҷm&mH_w͹"k@,!MO BuIh]{1c;o"ûsw"1#ϛ7VmQ6bbOQv+{GISr>meֲ4Ɨ܋eyO` R߇lxY6QF zd:1ܝֱC-!kFN>OGGb[;ztLU\M&."7o?_l7})6P<5hx - ػwFxHoɛs -FDÍvR$&PAq~jHЩDQk׮))Ծ}4.↲xa)ѷbm/C)oߔGvRbRq7ry/4+g 5%-۹sg)NNNVG8wb}S񗗗PEFʲeˤjo꩖b\*x[X6(Z4"Zw@CMM 9~%MQܠϽGս3#J)n)8T.)-nJI7Q`ezrbծT7e}!K.YtL(jVWWKGp! -G(##nEnn.Y---sO/kW\ _2N߶M~}ZW|!9$Sxsm58D鴠`gOq,ğ'X5r4R1A8t(Ad.čGY3?aӜIKKSbl7}]*:K _x~6'~ҷ4tDQs qCjmqC%,X@-JЉ챁:&FDZ|!F YSv6 u5![ݻ{4((@n|EnpLpbû)g`G6#ōulm<kg]TTd!ޜBm6ثCXuL(ڠAGjDJfOU[w7ڥR-5('O^jM]h 1ƂC92q'H7R5șqF7ҊKK`YoVzybTSTӤ,999b$7DT6A,;D$k8ϟOϟ7d\!zsQpB'JMMɾDE %6nҗfIC*wl\9ӨwV&nCo^E_ .@19}lP޵kKJJL+n)j|u7]v+' F#AƐq>;`C&nDҮAN?^ 9ØmW֏)<.v -Kw -T=&( )n7D:;%nȠ IG6V U#7tx=m) AR4X7Iox?Ǎ9H( ھM^+G߃gm3 QV?t:H]ӻm+3n?Q7'8YPv&!nh3ueO0A*n=Bl=↘guP}٥6jlꊽn!nPH\[M!>JGr[Ι#n\z7&*pKi?7oZPTh p2==MǕѬg,B@φRFٛGZFN7zuA{Ν\T@P]=dgg?^tQv6!ֹyyy VmDjW[q"K0OeL4!f3K7JƋ&"ݲV爡)Ȕ0PܸUj#LP(0KPl RN1={Q"[^H=aq#99]n5(tɓ'Ox&"KdS| -q$m<[uK׻IW+M~D>F Q\4q#"XE)nTMacغ <&Df s;j?7O!6}riↆY5FF)pbnlFIꦰ2,嬬?o\!n0/usvx>L_{dbBdKDuu4(q;JZnȔbn+; P >$; 7l(qc˖-|X@SVjP#+??СCj,I]jj* [[7}YxZ6V."MqF ^ٜ\Lؐ>L.8,Ņ$nرC:u vHUkZjPؿōsΉ^ߪU[vjjjA?~999B,--n:ٺl 0Qc$//!+b"eGY.;y0(iʔ|i7We5?/[dq6~q1G} ZYY81#k֬"%1& ꓮ]uƌǏK={c} KR=*z*87ASIy{[}=pJC̱7h[lu۔F[U Ү.]gX)<%M%qCL,[EjS+ɢEdJ0z҉Fea)kō%';)7,;'-,Iuv2(=Z~1UCm(n],qZr>g3Bn K޳9<ͦ̊K:PIAJQQtJ -쥥;wSo7<]+V3QsL9b?WE= ⌐y @?ޙʑqʯxYSz~,PEe 97Ģ'NuhR7~LIFIWqZlJRlj{P2.a׭e :]5P*cCxT>j@yd#nX)gOsŰSEb dtyLqcHt4od넲=/n|`4Xr~0! - ayH,cu %eC>Ȫ(qo~> J5oĜBٵk4"ںu8#+7,mW 5eu2B,oYq%oD.zJK?C[<zq/fi?1\Kitؽ{7}F%b1TvuJR+))%Y--QmyΝG*((^mbX -/955U*zȚE Ѭi!PymܶMSN`>q !MҟsRH;b@@˯V>9J9r@лq/|Ψs9 Olta 7N_& 0JZcx!jXBV b81]|bN,)0tFa ϑ-O19p7h?e5x`a'"<3HOO's$f<&ƴ?=xL -~؜1g篢û˜U-n fҐAuBbA^D|?\y]<2V'bGǷ]i:K[6A:o)2)<*ѾySycm[yeǗuMRܰm[/wo$uOC8`x1ֲ]k043oGd9zJU;ōC4jҌ75Ud>Q_:v=rb(`w/n<~KZ370<+WhrJN\1pJDAEHC_ N"_aATsP|ߩW.gHH.n"gKi֯_/kttZ)Xƞ={"U['PEliqvntXڜQ5|X,VLXH|ޅ Y^6 >t"3D!lGV1 Fī݇ʧgcs`8O&/I{eU%NI^$ x -[/+aKxcxx}0'^O:\jY\uLy"*MF -S< m%&o>N6ª\|ɓ'".rDII)Xl@. 
GƤxqx_&F:Y@ܻ̉6tj!FXMdk=XږֲŞi^ -7, Yla);~)*ڨvD]qʁ0Dߩ!bZѫdBe =,E]|t)UDrYFӖ"#.~VD\.:֯v`nqS_ѨI`ꤘf < %p*VKYzZuǏ = -}¯]&j+߾}[o|vRʒ(Z"//ONjb[ke2Դ׸eeeB=۷oxiRТ:č=s_K!,$c|OQ&>;^t - "&[;DFC邹jU>)YYUDM{.*RKH՗Bq -1/%xY… "t/jn5MذahXرcKX -V&l"bi`xrRRQ|ʷ%'ʙv])g 8?7fȌ=DWU+U/ah{8[=&j kT , Ν<9\ QQo޼Iߞ![DŋUI 8oA_rŞW'xaϖWUX[U\ș yTtV;+tZБc'ܙ9ŇL xW3â 'QbyDm);6F_HTn*96s+rwR9;ÓwQDcϡvЇ9;[oabѮ~+b[3lrdj 9 -.]aMrrlOOCߞL=YfVl:- v̓# 9c1ۃPĘF/7D+"hjTcd -p4<&0 7".fq(@x?=a7 n@0Oy }Mixql*{5NP];ō_FFFb9+0FDDXZ& @lqbccQO^^kV:61ɆCc!n@܀afqݻ> pW p"p|\G|@ ->> pW+W+++GG#@ ->#W++||W=|x:p#W+\\G|\G\>> pW p"p|\G\G##W p"p+\||@ ->#@ -> pE -+\\GG#@ -> pE -> pW pNϹ#A+ eKnkFD -2_p?%TA;!~93VVVܾ}ߢR@_C FO0lЪFńt)lG - hyC7)ApB# p J&Dqj?\Gǜx,$7u^z:v~Q#WNw:(//ދA3s!GW"L弤 {ʲxS]NY7:se|@5RdGq\< pEdg͝}3JOv.ѠMnϮ,ގ(P#P,}74#ة`#1aZTV>hkj4Ա$VFVo߾]SSSYYillTMvCf1S[ۃ;PGxP[[[UUÿ;+x -a:,q|)hGְ_v+xʦ Zm{9znjz{z V$~tJ} qhZ: K@H ˹]| -rG7umJ8s9TӰ)A;;뵥l7M)|R]]gEVojݾ}7:tDn/&|gK#| Ե88Íd ۿtijѫA6u6}ӌ):0\=>%ηWqP cC&l4製/G ܞ{hYAP>e+h♶ҜOfǦW] p$MÜ^}6E >t0o/ /U΍l㩽Ua6%W1PXohTYyd<+1a}`]5j鄛fg pukYl.paʡmS1v˹fvj>TU&y!jzonQa2`7vNyb-՘' - puӍ]88}cq5 ZY1a"oYC̣lXr&gF6lZSob(j 1DžV0P29CJ3aXK]rTe C\nȼyb|6,]fqb>Sc+|:9'_5.FVKTQɅS[ bR[w44OX` c1wU(aOv$)]u^(qGʿ8n7+snnamcMx_Tڽjy[&\_Ho-*彪ip3*@ -Yá Tr8*t(NXJޭ o8#\r ob+ Myz3 OF +|j?>K?1hތUa\aUUUx9OӍ:Z*\#k])tØ*lU6fww}{$[fwQs)[' .Wν9zwʧjuxK31@<^k 3"Y Ojĥ.O n pƋ<;ᖺd,Q^O[*q#I0T+8#Q{` -@jt5̬lأ 9]C]c 34ɕ}+GJl87j1\wdw̫OqNF0$4ros\) VN6Jl^@m A'Nc -+K%Ó-w<~pK>3hSCXo+{F5@_,ٞ%nݢhtz<$z߻Kp+ʀ {N[M(yK67<[3$?Nî>7ؕݚKf=,8w/eCY]&Y@@oaku6j 3v `nlhOjkk>8 nxCI=R6V7\TQ7ݮ瀧v.79k;0_=1vG<D&sqyCXѽ*Y7 Mk\18 nxF3ltPP*+ၛxpGY{@sDZW!:Y)l3%˻عI {^jnx1;$nZy{ѠaM0MIlQ߭ߊ =Ӧ z;i~P6dL.uN{_cSEИqٞjֹŀ3a8 nx@kʇks +) EqaKqK?ͧ"d>Ps@ʆOs7swfR[U5.Klj,@sqN_Vr(wh↙4ok1.U;2Mk0r%PοIh. nx[m䊴VHqgwh{aZ!.|Muዡ޺wnZɡ\J@@$띒W;\WLӾ^oR{j↧Z" ;)ax%rleڙE?߿3[[[ʆ,WM -ovy5XZ1^OSIW!l+뻣3 ww2CY[ꄪkKY>=wBn -^^s*9{s_gK%V3ljXWWW;G >Z?:5FZvMXU|BkI|kpQ힦M$o;S֓Bs\ oƵrv́F颦g+D( QqJ׮`ha.F8!nxv;4j'e6\u N;QF7vt -f5C )n4nQeZEF9#B+Uo!nēzduks,, \do=$9!nxvm 1TW2eօwɨٯ㙼[ys 3]Yy8x֭d޵tu$E -qkmf9-n7(Ylv^X8dv R|L8?[o6`w7qxOW]ā7 n8l; !Mo57ѣGvV84%=^6MC杣1:gsv.^xⵌkN;m3rN][:2oػobyӄs7f,XF|n{WO7٬%v6{F&α78r#:K67 n@qUMA n@܀q != 7\[!n@܀ n@܀sGq 7 n qh<{HXb7 n@܀qõ[a_Ơף'¹{a176Lĥ!iI7 n@pje6;Keu_Gx;4`{ǿrAq/h:O/җ!nj _nM:ku'L(n$8·U^m? 
Lq^|h>1[7tLOtRx7^E.)ms8w/ \bV6v#пvwZKH.<qs 6 nZNjOta},n$g\IF&Ef0h}cEx ōЕ^pBD_ō?Qz6Ev,w-QEčKgC:?]٨UkdEI;V̘ qҖwبf \bvIRұ<;w>Qↇ7DbGgo?'+ڬ 7΋޼څMV*|1>HvxʤO%n=lJH.]ֿʏ5{w~☿eQo8v7^{g7Ň} / -e֌Isƭ_ɫ8S-ʢ{~IƀIZeIGcZqck~@ nlXG>~u`0Uo:Ahot>_m>QͿuz?{Gijѵo↥^MNY^ ]5%n.WU8B$qdͷ]W[gȾZ=gCu#ל(3Dܘ︰ZiJ;cX4Ҍ }| 7*noTU5{5Dx/?{&~J~[!N7x0`φN7vϤ{~IFtQj127A(=ԻۻkW}*^7jf7zwy9z}Xw×q֨c;E#gFNv1Fz7-suUiߍf5}ܡ:!̬t~Pe= 7JΧ~9zd sF% 7hNߐ#k=ҿk,W*r`u9~m~{Σ[+K)u3ezkƤV qh3!~C-<2frc&ѿq!78PoњK -W }OX0qi"teZxdooMhsy?\G(* o֮Yw-2"zy箋Y_0uU٫&n#㒵Y5~qctEC.KA{BSvӕ*m>9iVJ9,*)xN +O7>am즺Y5I?`-O*ݿr)Vg)j|jT.?}X\Su(s%rye'ѿg n@pɜg^t\ۙw~S>U w&Pa,.O?'n\:Owe[4 &X;'9͓Gɾ?ibUl5rB7K%fV( &ܣ0O#'3CZ5; -4y)cONY4ksaɋv6`C'`=)f [GV48uɼܬt)gE!6 \2OZ+n8-`bl -d7A_=37VȊ$}+sLB,9&?F* !&%&naM*u+gl]'K qsnpv!8mP.9)i]>|'6{XxӄFՊPUOV=P861R"&uЕvRUFwJÜ)n|p+(vPnݚ)zhfA,T7& RV-Ղ%s"B7ޢl-Sg,tqEoCT٫-V>~Yg{R k֯U{Hc#zwL֎ -iӌ>6[; -۬(*5ķ}&M~{8OonϤ̀o3ltp[࿿&ͲTGǦR)m7ҷ/>?bTP}/Ȅ)bY0`nYe#;;nCJ^ɫ3qh4 ҉5ⷞ _;o_wz*633O2@OG ?4U=FI3LY"C  -'OOSܠWX?LP0bT}o)&fe#R n@pqD16ε)ΠMy[Sihפ/`Ԫ~oYl]ni1r|.w_RfΫdRu"K탷/Swc;?;ɡKlrr.3B>^^xmVX/^X_+ -u['|4G@GbA!"!F͚.")MC6ӢG鯂-?͉<51//hwZXVmܸ7}]ļ]RqC|KOEGtjo>@oLNY$=Osն8͛t!Iہ}Ra~n4=aqFsٶx5۱pm4Vt۰YWue~%ڌ0">ZDy[4uV[C)Ki6"=,fjlY4kz()X;aaٷZ6]Z}>7w -ѴLE&4WO4gm[3@<ٞiQyd7_E -(!#&pWɇ@%=xyҰqǿp*o/E"ӃhE -nttӏEr`7NIh<{ שH/1- E֋2?$ԴkG1j6n#?jRvjhIxɥbXFum1¥'=aj2js2r -č6n֞ f踁v ߧǗ?~ܧ%7*4M;&*iF$k_԰?LV~!3t -PJa iEVghӵ\{ U|PD0dSNQV΋goCB[)/d" .7ا{>׋&Ύ|upgr#RUhZ_&Sq;ߤs7FGk?gؐN4j}..!o҉5D[O:M㲓t{߁>ՊZasCLVlA?bjQ׳sNӦ|cݕi4׾Z iRw'MeAśKƲyu%WY|{ZTnn -_T[9s .P⨱_:go*√YUirl_w+}P?y@i_>?Vlz 8lM+ ډ~:ZE}/m`z˭o6.=yPLnX#I%JCZVv?Z98}%jаFziÃ-mJJUdE07mfȌCb{nHyG?~Gm:ہm7(q{u4f}M;]kܴ"l<.NQ5+Kdz36cSEߏNC'a__) 2hSq km7ޜ/mpp'<|њr&9Z -DYvA޶pƶ|L~N΋!7 nD82G]3E H|0k8# 3͛c΍긐3!~фC,  r ߣň*_ ҽҶr`_!RxΟŒA#X9^|{CȭE.Ձ(>$n 3s,t}mimy|ި-k%}9Gw7+ lФ<Zս]ki +%7s3׈+nNI> -v2F.o톼Xii9߶JP@nSؤf֜=.J6hq~.wMo7r)K9JطCh\uսnկnt1Dm -w -MJc36(:Aզ-:flM.7O'2n'jmJ4جXBb=:ח -1mWn<̞_QHyVg:Go^A\T7jRK?.{)xۤݺ,Z3 Nj$%q\p'ޥbۼK>y+.s+Nu!sn:T"]X&nP!P'DVW^*K;t?:[ޫ]}l P,OQ;(l_o5fk&sB H"$$#@%&D9gI6rX߆޽~g8CQtόzzRGQOO:ԔXs-[=5<+ -qCG6ש9=(qKlZ' t6lF$ Ek-ミ^^mh uXx8 # -1v\s]΍͇ט_LSժڟAv:9ۯmbo:;G3 -CRQSdђs#07#n(a!MKCK҇ݛ7ES7XB?* -îbTYzRRxIp -7'L#ִƫo 4lzo)YnSrroWMNjJKgKec .ظwQ}`fY RM`uJrA<6i^ .7&Jnf|jP.qǕ>{N%/ Mιa8NѦO:P-Ǧeō[ ^qF ntb:"n!Z*m|9i/wעY>}!n@p[>~)P]fZk_Z(Xgmrč597Ers`r> 5QZa4ӱZ Ts8ExeSJm2a)re2u4hϘE YP'3u<(RKB(a GLF*g6^N0(Z78F/MRM-nDj"Æy7% E|!}7>(93~8J849EJX;ClHassnGMvq 1cI>u({5ݰaō٦ޭ:sJ-S ŜA/Իܽtk%⥒sō읛D6Q9gb+r黝 -p!xeNm͏HW*T(s݆uFX6j9rōܒ[sOG δANc4IP!d9,yh¼ӧ7^<7bD uJnxQ s( -q]ƦBӣ4}oO.ѿMFS7?pe_<6wхpD>L΃5Ir{(Q2WS"x*⥃͡(< Ϸqc~@/k+J26- g@KCG1M8a,1呤zn&a+D{/DruFkapht RO Ê6n5E;k7Pe填wn< ]Ijjk3k6č;O:QܰN)j֯GrbIg;WxXuxԳpm9U]F?V3=Q^.o5%M=+\dqdiL>lmNT1]~cȥO6fu ƫ_1*OXo-7 K㇚Mϐh7*M^tdNb9?ō#7̹}n&nl=fUdֲ銯}ygQ6f]F"tLY~c͜p6l_N[:iwKP(Z2=yI^Q8#g,(n%sω)UsGS<:(n\c2f%,=OK^ݙ(Y&^|>j<+ --_4Kе]-ڙ,9.q#m9~A31[ʌ9w1q@Xvۮ納gxI<Eq#<{7 Zo[r5+h٫w dzз.7M3Orpnt˒ qq;D1_Iit "9GLv/b d?wTƀ↖jWr~Zqč1旼'jRa]PA<}o\//[7(3ɶ~;{GYҋV+nP49Z{`ǘ5>Nܠ=k֯g4)wöi&:$SO hJJ QmS<:oxk֊*NѫVyhNڡU, hὓS$]2~@($)F'=?m?jJw(rSDPgzկ/eB) BݘMžPY/~bm -Ą%e)처MBf.E&1*dX/bo:G!3=V\^\\A"6Q`E =7R{y3Qdm>#bFeJ;z!ߩr[ efi7fT?{ )n7]}Sko{i%17 nh7z5F?0WжճL -G<pD fqSZϚYR=bo׶*pQVO TUl]OǍ!c*r9.H+B;Rxr1i|"uViTIRqZ+n<ڔΪ ٲ?K<"PףYNbe=fXZA|Ň\EY-1%naphw9%t m['ZЁzF̹@!EgA |(G2\SaӇq9P5*$#UIkWpt2[R<]a]Igo>!m3RYT[WܐJQ-U4NJ Pڬ7_TD1L_ٹ4nvm'ONnP*7[tIft'\k"_yF~FB[\\Mj, i;b mvٴyrXOAWL.$>L"CKB[{;3^i6c+ѳ iW6"(K3dh{P=5lӒ1Qk&_'3KS,')6Ω+ >ʡ -30~2댜8_l)T>Ȩ$~z SO}rP^QiGئS˒[(Rę.+m7ǡI䂒cSEq'6߼),x^tXJ^eE^XzY;k-ؚ]="s6;)'®MJ.nPu`ԉ}BeqK |SVF`o'/ Pd9+k&\qFqڸ~^{5Aܡq -#6$5gWOUv|8#Gޛ*~<$!븡kLNT4y8ƅ͇8mp4SA=tuS1}sɺkw%lTV\̃SKQQ1, . 
-` 7Z-KuD?O)9wr7:M>a)nn'ÏߤRz%A JȠ&׏YčӧרQG]ԥR\QEܦכ4*3͉QTlCNʹȽ#Ƈ}|QQWk;r-ez4u.aqsnpΨy9lcaYb`z^dخvZۅƸ9(}> G Q}wm·?\:{Wr4vo]7舵&N9aV@a.;qtՊՒ J"sV쳽ew?sHί(InD{ -W6Yvd֎ﶱe#Y7[Envy_TC?l)7,{tQ(dd-JZ<І!Or5%lHERwY~m>ˈ!OX(P˸S{E(L-]v[!nȶܖ:rE'ܑi:Z: 9 *tie uz2y8ʏn_)>F14XN,1螜=,.+Ye+tgo~|1Mw<[2_ -~gչ%zax_6q\qÉų⊖uENq#>ҔͿvY1Z0r1- -C)F67% Ζ7gl(L+ZJNjm7 n@܀qð/w3N ;g-[y3{4o%g:`Ýn.p!nB܀v 7X? q 7#nDinԫ -q÷;gP 7 n@܀q#[.FPTq 7nF -3-$Y42 p3wܹw +*OWMx..eGa`dMCuz-' BP7"V8rTx+ͱ^n]i&a7l%'\gaxyJ)#:uEu>A߬$bQ1tאRNn~WֲGzp\#9nBmhuhGXFrCER)J)+6@F`jyq+\wx?jͨWp3ݷJKK=zݣ+n1N@j[{-'#fC -[Lxeeedu~.6^FUa@\"E+ -%~ds涀v첽-*`="kpxW0{-*-+]gqP-Z` @֡$jqҳX:l]/,tn'wĻ<76׺?'qӦDv(wM}F wW/me -$J;:+Eao8tZ&XϽѷa*Ô`4 kf׫#>ߓ5 8jؚ,[Kqo4A^iQ6:qGlNv7u" za| z[n =rƱXxT\\Հ hla{ᠬaFynڵk>;hqoH}HuF[mp=+qXUYY]zٕPT-PklwฺG)3ع_;ZZWʆ;cvo7*aX&P߀a01k+F\cݱ旅WZ^+mK$lwp wKv)6DlhɚUtqjZ4j@lwPGn~BܨhGS6lW.7c~x6瀯.7|>tus\^圴18Mޮ>SQaH:nzЕ>o% 8a÷ݡQǭUr0 kO`u/ <]Ĺtd_s! Nz3ʆ/ld?8=#sM#K薰 q@ab">\7HԘVI)X;~k3x}U\un!I8̄ҥ:vTv~SlSkѱ;n.{7 h4q Y#oS%tյ;2U"w↏y뤖'Qt4A P6 qJ\Ǽƹ#1 TnZA:)r+}o(R߮.y:dlww'S^ NLF⓱ m.;uC$|~s7 k7\r#( t\u A({olWT{Pǣ8hδ3N:]?tgF 9Ug._('?ƴ/Zp\QiExc5C:uCQ\AǰʝR13ss\%e#muJfinruϣ.7|qU_M U].--hq 5k|%޾Rjk;qÇ=O}:h9·8f>|x(-/ynU8׆hYGk\mV=q{WooQўSތ|+ox|!nTS{n3N*m8!rlE >Y<Ͳ2gt7KN'ƯGn{Yly$n 7GߖCltH%g܃A߶;֚2\hgdMT,:֏ZA_̘J~ZgANK}*Cn6ahqÝ5lhifQʹmJbmsmuW9>fw7ZO=qx6cFIE O]%'>nۑ+o2-]Mgt}:uBy>樻 -5T)!n>Ru-!cC`/2;oT -xkvA܀bHN*2^%g=nۑ;?4cgUܮּ>Nqn}V0߀aci¼}ȳYU.sQ+wtÀ7_%/`?|EO+FPG97c[Q߀hxO^z=*cϞ; x?#\%`v^]NӘgQw Z7\:P|{& <9w<󆡺m8~ݿ_!NϨ\}^Bc?-n+;gsas!nX[^]*穻7رNz3` q↷TJUq (h'Syǎq 7 np;锨 7r n@܀q -Iu!&B܀v 7 n8ixY7 nx\Ix?xxtS 7}Y7 n@pUK|q5xmЏߤCCG 7l;oզ0Q"s46}FEfAuatvFr 2x\faMe G]wilk.S q@pCNڶ1Cپvs{|Ƌ_ -uoƸYӸU2zd7~oqč^ 8*&$1r=aSKT~i—JN(_kZ\->t[ qcrё=OAc46(akGܸɜwP܈Z`XR+#7:c+N.>|Nu n7OY_\KrcIA8=UOޮl2[5ڍ!}g%SܨZ5mtӞ[rϨu> n,xbAaXq7KNsh\+uՇ{RwUkwoYډFԼ̭k>yGvWv҆#A -}6V}3(?௥S%n\45%6jZ 57_s_÷^~UA6 m.@co/ȋ^;8>ʗ *Zw}UF$>k:ݝ~FAaLq+"@x.7{Fp78—N7.ٰژr 7*u"}~_Z< mNЎN7lQXh׮î7*Wzfm;}[/[hB qx˹?,.qK-F7/l)977q=%ǯ↾u-  h(viA xa7^Z|ōGxwCV̼.x~Ϲ1u7|)i&{@?(7)nn{woЮ 燮 k mhy#8W"s1sVO(:|K9EJ+Sϯzqf/eI OSc.Sycp{ˣo;tm -ڽ3l^~g>;qm˻>l N3zJ~ڊ}wtPۍ_vh熹+NϜ%n|T|7+1Dڃ aF>ŔX>'Zmc;( -alqō›eᩃZeO?ZY_Fdظ\|ДMEQrRV qrn|q-Asxw&:>^yᅵ(l/čO&G)7rl|AG 4NFܞ'f'eĔ6#FfOƎ]XT`臼3RKI\Iq$RÙ)t HNMօ9v%,]hrҒ[{Qܠv6bQԜvښ˯F2f:E3KXho]Pظ:QjT׭ G\_m&,*,M=wg'95{#]缣{&YN"}*a}tvq.8<鍦[O7"n:EYo (np6LK 7rGڇYӊZfvҢ -!{}1ѝ_~)\Q.ڹHܠ_o>(nƒ7kn[ݚSă_ҵuGEG8ʬ sxDeYg6.nDZ#?V-%7*Qx߲m qV>(D|~g#Ctv\9_ڷx&>dƏkLqW;7iߦs^\j&X$ןg2R_7f1@f崜";E#qqmVQx};r1[mCԬ%~"V"ʫWXNoo`T~Fk>YnKUi+[Esh׻+Ni7NZT!ڶz]y&uq~VYιb)!<2EW3UhBH a)b6MZgę_ZUr`cB҇EgI6~}3qqč'"ZبKG }#ijfcZ19cpw7 ;Ԩ^NI]fZr׷_2wѯ[bۋf 2]<2BB.܇U]x)KjC9>tŖ]E7 59g%^h6Dr򖳟JX.qc^#GxsG.[%-Nd$-tI΍V¾Ǫƶ}jSfKU+yb`ݥS&n,͜Gn+TPҋܧo7oO\%7L{ty,) )ڙ2m~'{-n#8fN/\nԜ𚜑Hh֋Zn/W?qơUB+TugppAnȐsRg%- po`O~oi IEل0^8*SOnf*\DX{"F^$snt2І!)˯" d~ܬilqQ|s\El*dƯŠ z9[`$O_1w&odDiߦg\Md~ *'QZ9,&huy?VB>N涮oE5)c{~ # eBvr>g,vٲ5qZ~ݲue(BX,lST&*vrzzܢpz}{N5l>qqoKflVBQb*+j'F]a)q= ^/c[li̝3£hQ'.C98}BuK'GNH+NBq}eT$ĥggݟp58$#rNY!<,읛jU=xwڷaqF7klz+}p'K .;6ǴYyX}7(J2 -F&_okrkE}**Ep.DprO#8"kwsciY-?9FW7↽_sόlsʕ_6ܸ{nMۍ(̑ǶKLMW UϢҩCC,[aW 1_,\XK=z))$|8G av&OǍWkZU^8g@O$951J3,8s qî,4^M1ZQ몱d^rq7G:su&̟hNƆBx N1<*/ky ?mV! 
$.r"'ˈEG䮓wD>Ō]q{ -q# : -S6/LQu$GL+ :ZRЧ,wT_ȘCVQbƋmҵU(7<.nXm;Säܞް)BX2eXק )2<|n¢RgZ -&kUj)hnqE97D˵b~ګW[\:řa}0+\|H^8tfΓRب̈:qOhPDܤCo7z٠9 |""u(:q7}*(1qe7{OL;[ō_^Hc2vrsGF(=+n|{֨-VN2: 6T;BSwgĢͯ*}hN)GewoO>UE.7hšp_=-E(l_޺00:Xr7 %n*g*{*~Y֔Ӊ90M8ۆ,wNXqVP`;3STf;<4 oՍsA4´wE[c2 -.spN{rdoSI͒%n,8_BO"(6.&Q: -q~択< -gfJ[,doA; nx\ܸUrgp ӛ73%:K 9z W7rW9O^xz7 _P{J}haō٦Cw^)pE qC1,ō[% DVIc&̟8#6$BKvlrY; -6kbo:qS(_qrTFnm<7bMQՉL.78>I>l]Fů8FuI3=">7R:n37:to1\,ּ2^ƒb&^srώ.7z^&\vsl+n%fwM( q↡čLNLxku0sݛLHu?46jqPW:>u{X7I,_W^£璻W ]X#ozjqC8 ckji'WpќmuNs%^S34`+'Ǔ1}HY{.ǶNwکeRRN{ōeO"t]%$Qjqî'ny3n)mXBRIɫ-9{D]en]+C\9#n!iA7<.n|[˛ZݹYX5TM7Sr'7.i3ߪor =m' +ndgނX7̿^wT^_!nyAq -L\>Po1&jY6znPSr(č[ո>qoLͰ&N^9&Fs( -OwGܘ - -.ƋG嶓t2y=BōWpș}CԼj0C2Rytb 8>[ln7Pešwn< Ŕ.i&F̧O"8'(nX.7x"j-*Ft|sō[W,]^-ntlF-bxvݚU3u=zFa. -0mxk R^PQsPO³ILč̥|ĉ977[kвlb 2Ϣ2l`y٦9QTrA^S$_kZE̥b1uBZ@ʹH1aÙ)1bC!neq){PaNUM(VG ;fDԈ7-]䕛snj[AGRhΡY=qq#1<޿VWɅqAqNv8GfT+qHōd=4Dqgz"+ 3[$󇶣6h~`"qױ؇MN,ϊ"YIbL$CԣgLȤ+tZ4S"ӝby._ʂ rQ '(j:H -.wU)sTE@ܮm;qCR,Ypo9Fqî7]:ɝ7xymn7:3?R|9F'B(tΓ΢!"8T|878i3լ4t\u&uum49!aP7 QH~s暜|1ѦI S5ߝm \My763]v -%y~oBӞWnTY88cynq#gUL'\1i+մ+nPO*YbkZ3O6y`sBoe S&Ԭ[ |E}nøg1cɳ,H"Wpӎ}Z)ϗ2NcӐAv˦]*]`à~ܠ29?9y{Y_7}&xXx#-R*lpWֽLf%[UѬrDpl)+eJlf.ߒQx!y"(L+F ^a LYq PksDxv߬077時ܩك]VSȚV[qQ VP4~PkC4VjqJR׶vH,Fp~F;օEO#8{ zȯ%A՘j;.7ČKrJm=J)뒿jvqq#3͔:$SO hJJ QmS:7o{ 5kE5PNU|hXЪbw&N~(ϛ^wynMez2JK?ЦH6wXMLl].qc#?JG\-awUbș^.u'y~_cZA^Bk(Yvsi7Gynگ5:e^0~/t|5j&Vԫ}ӖYUծd"vdKō_5iyf^F#q̓ݤ/K Quެ)U{unj9tVKB6jq)Bs 'W:GǣYib~Xrλ&(<}yj ڗ\(5< S,! ԉ Q̊]KlVF4Q,$M1Ze+uv6rwb?$ G e9]Fr -ڶz]|E -G<pD,߹e 5{qzX۵- -hiZ> r" ٧C,g.t0;)4z,VPv x1i|"uF |իœ:.y.'|0ʚ-L]]RW"-?f,!$] ~S{Hc|mSn1}92±Q:P`yZtج7FAL><-AljboG><PӾYGd|ܓ.KhيB_[pͭ8觱o {pUq#8ZxW=~*/$Fej{ ~D@SttΣ~FUKel?}gҰ>=V\l'ONnP*7[tIft'\k\DWׯճ: Z(ؿbX8W i; Ε#Gju'K.ۤ83s&cxqa{\u6O>0z&.T80)j0ԭ~պ[g=:3q1ãļƜ-DYx[_G 3O_0Y騴qkөe-ǛhwkxʵgK S;#zgo7TʟjꀽR[Kn~YzьOJWQ*JrģYiO4S|N^qFq>qUʠS)6;-ڐ "f)_8wTaPO%O:sVY<{1y -6|hՊiE%@&u;'ret䗟K n@pDhnu_1^F:wh0=~S_ڣJ_k>£KB81O&G+/~r(*Zqz4ݪ,mU8rs#2MM*7xBW jh"Ƌ^NP{{em &(Fg'\b[H&7Ssn:wWYLNa^Rc`^EF =`dv~oph-e"'Fd'Mq⢧H^ͼӾ B+QBt}AV6-'~<$9/nԴtW#dmdI~$He6 KW_"#܊#n|eKeLW@?*:5k]^?ڱdq)SdRh)導a)zgp8$~“:ls~泋Ф1Kqs;p~&l/) *OE5nXMVBF5ٿ-~'n>5F<*եWXGh,|z}?֝sI* !ҋr1*}:Ӭ,\L^s4j֚]9ۯS4ke{)B)tHeXo%g\po=Cv(=;熢L*}9WbEw?sHί(Nj7>7򴉲pQ}'(@ x̨;l(槓clc%Qrgl2Rk胍!'v־;;P_lWS"FY +Ujmg1#CEe.P4jVl4[o *iE)p|sBp<;^q5}x:-F>q*-=w}-ю[n'0qrۧHGۅ,wgl^y9tfcaE!Vquǎ#M43^qɹslSwrCGifj^tgo~|1Mw<[\7_; WV$*.7^<{\Qu+Zu ٬H!-{DlhWST}5mwF ͑4 6%,CLa}hT9]tn]vM[Lf86Ȏ["-y?mͲjڍqhpa#|RwA\"c\9-llG1&ņ!KnpnKKK]8sV9uiiÇ+0Nj)CRQ -T~_FTuuWFFsSno%kr۝v %L8ӯ(!,Ap5w:}ӷהZְK(7ެcʴw(--.ACB A-9lG -B밫K 8W0f{bwK]-'|KF*.lW?b -_ncsMz˸#8ʃ5$+TШi}]3 兠T-Qo+7xWTGiu+bi`:,S\ -}ޠI1 H︖?֮ih!dv8 q^ kqQXYc ,"XdMD$@ PP@ @D$D4Yds mL ^{wPCtϴzz{g7P]}Sn8lʡ֌)qGESaf(e^ś3GTXXǏ+ U P<XprA%2jUVW8}3j܂K:)qVy € j鹌)qa.h4tj)|:xpg*PMUi(_ϙ ^t~5G%^0iqaLO9cewUҺ8)i4l`< X~=ߛ6Տ># dZpyV%S)1!&bM/\a rFZ.Vq57aeCd3NjHlԻ6;7#f}>½MNA#m -C':Ǐž# pq3r;hRCS2|= C]SS!gbG>{OTEW  Wd GܻHɈ=;A3p_ -bXCnkzަlj0qR)* -_<&UwNhu('7@\ {UH! 
REٰ|rRGr4U-P&х L̢P) -zRYG𪾳5 8@\zAV_JB\Í >KGxjɞY (lC>nSt08?p>пʆ=|Ѝ LtIw5YD>U|e {mŘt|Pz}TW6[.Ϧ!ӒWZ*4 -]D6J[fR {}H%m37U'O5l HAֿTKx `kaUjI>՘Y2LwG2jI94㌊ۉL}I:Ϥmj -^u*Dqohds@3RGx[KCωzkշCDwws -1nUG q;77c\KRƒJ+نNӰ Ql's|[K,ҭ/dMܧxdY)(:iͯ袗f:[Ww~Jc#7okNrx=z#-0t25}L O7џzKϚ q -U^ o n@0Cm̓ -qV| 7 nJ8VA܀s)O\2NF htYԼ읛@7`'- x=O'xw.l+Wi7Ttۇ=2#^zJ_G*dtB -i[Æd2~y?NoAVSo @0+.+]||ܲ=Bu94$-q}u+ e *n7Y 7Dlﰑ'- qCEक़''nS,/?idqCkC'T }FJ7~?KSlo>Nt﫿 cPHΉkU>>[q~ōʯTe۾; %rת"z7z,Ӻnג' s?+;ōwRUUqCOYĒs-?d_2oN^qV]Vzk)]:o{Fy7AbXqct:Qף+ V~LK\[]ׇ]%n\4)1_uQ7?_,s۸Kq)E3 -ߨƬe2Wf-`\ōAPN{gmrPFyS.t m/ 'US)nMJ/q!țp=|𫣥cWA܀vnɟ|>=BO/zm-lIqJ6]+RuDBaōmƯXs%z&7#8`kE_-6UFFoБqcR:[}& +u}8o zJD$dG_cKč6ڷo>*qc]:mM*Vc[).̹ѸugÊ+.k֧> i$ƨ7dzY,;g|SgnpH_憸@x>aJ9=xy2j``#nW-+.+yV bf':/n?h>=6pK%G/q>3_0%4^ ;pVYy!=3}7c_SFFJ;qc-tV} EO(:|/Uڥ]"n|xc%w\?'o<,"FG*F^%g^1s[yܒcM[jqo/\{x={H eWv[еq#c˩fVϜ-oL(,};{ik2ci=w[9NTQK͊jڣNvCW޹a%='X4=BNMVܙ7|!~r ܙ>o65_Έ=}Ho _qty[׍S?{vr=h ˹qJ,7 -?MSc;|dV§I[CuVZXjd[J,;-LXdޡݫ*\r0yqCd;!^.QM -U"n=l#z^8-K -WɺCY"o_~Sy+OocWx/߽e6\S7 -Ycl2YN,7!鍣7t6Lv mʈYɫKjd4Cg//$0Mō?TZYoG -jopq#v۱4)PF-)7;LR=>Fֱ*nm' wOlBWW]6odځp$^f/Ϻ)↏_NAy`TZڢZzYfC_%EZtQN1I*7 !:ٷ3U]6F_N"/CS!ߝ߮#hf F_ ]!弄vGgN=ϳv}|Nm]KxCٜþrX9{4C5My)$Ka8߂}+%HaZ5瞣F4iךkLИӯ ,> n\DmCY 82_0W5zU[ n@lqyjbZm\b0Վm[U M9>:퐸+o5!Cg%Z}x>H Vby79$-?*,ܓf̆Ѧon:,IQǧ O9n=]TlF8'اBܠ~݋icNMd^DY6>2ia4&,k>iR(tZhOKP!Rm_$nLY2^ ō-,=G|oHKMǺlu dO|{]^%;v!uՌX]X9g"Bm܀?D[P?I$9tlͣA:Bz|,z4?ޝD#ڽY+*&vkO'vX2ckw{AM}jkTQl˸-Wϴٍ+2r>p2nѧ_C[Ώ0jz -ۭ[]&XNwr:NΉ+ 3il޴m^Tz|;`~y7 nxqjC4b\}zX(wBX+gtD@fz+9IPov:ȽO7`uҼε=a{*Lhزׇ; ԏXn(ĝL(DT0*88zݦ=v>Bѱ&3k> -qtT'aҨ_wOIWbRwx(}xZ J ͫ7$-Q^h@y1ne'?%>(=e}B#ٷ,Æ0(76k%3 H =$_abf>ʼtXč˿k=GxлCY-zD;*YQڟtk%ẁf_~zRgթ0JPxt.Y-GEUX<.naōE۬mҲy7fn4xRTܘ[,>܄36>;9nGsnp-ϠwOҖ3wk[aP{ݓY1XZTF3/3]zzFfn+DY;*NMBgm>'(aIed3aՉDh/'F^`'rDyB$+_ș:?ʹCu`&`5^‡?N8-gb'^̍a [ծUxV2[bŧcm'jm8~f@]'6Ny ,8wZNOS`f2݌\rY#ܹE32֠E3mʚ储CkeW Jg`3!ēȼlq%vD؅̓2|ՑMWY֌D98 ҂|l5pA2S,>׿mm97`b8L:7mcaY0C%)X3y٥5_:5JbqWB̹AH%!Q!zNMDQqcՇXflaK.E-f1,)vh -+̋HN7k5(Bs"oHt1WۧBe,s&Az^(n%X9?%vXp&nTTTn`d|yڜLaDǵ'؞<ëw5̎vgVhݎFnCKKk!i sv %gS`DL̖6^8du]^~E©(tjA戀w#F7n_?^Lܿ1}e˚/׿ɶf>Rom59QWk1yFet"ir n~:qR嗅EdI:E΍ʯT -8$:\O\;2jpBjmo[5[1h|:Bun3 -ʒqG?=Ek6ogSr&_OH%#]1(у&ܭy7Xt=>]oDND}Ed&mS_ 2١܌Z֨.㐸18f"3 .ǒMdmыH0!Vc+ܓmhIɈN :L:t<*nUQGWi,%xaECtEh-C6|#R<ʹ+EtJ9Ҳg7a7Pء4! >,_<0LH7GvU;)@] &?tQJdml!A=8V6ō; :qiz&n\NuRA;ϿمRš+Kvߺp{ԙL(DH[W$El 2g"q؀NQ ^]뿳~|¥`].naqg -.nHa1<$čVo]7,ܘl: M'͹0)o {ʢ:N!7o.n`yfn<4CΔtLiXuR۔e_E437Nō̍glN`F灖?cR'nwk+xc`Vn`O3$_x"Ϗ!7fDE^ּc*ō'S횼1d8qhb(=ӧ~z9}]$oW%еp76:K[gdפ65QKh3 Az7'Dz{Kioqb}"VV|(8E8S_d7FgVxA nxcERhCj]t{M5o~ңe1m>0VkqVڅVZJ-[hudCjpUθtϠZ,ʔE@L)`kp*7![b;wPL_)#n 6O_Uweւ~HТYRqC,K!7nbb[I6>l"1##XLj|)@p #~":ÊYը`FkqcV i6KQa.<9K&rXu&n -ꮳb6e TDZV^EDkkL/9n;1s%n[MXfVqidi6S ]wg%_N..O%ycaD,>y~PO{;ˊgP<|Z6 G'孩D;Sa bʭ}ۤQRcˌ*tw h/BSqM!}i D֕g];eljgA^u7Xų~.Yc=-9s'6wؚ}ܬie ꔭ|Br=6UEY.L[zu&߽67Z{>:pqƵ'\(n3b"20ff\%n,_Zq-+]7xz+L~BeDB(_C:eL i{qč9+,=[)nn+tT?8&E.hZp6uHܘ>1Q*b:l2)mЕF g9C|E ~N}EiD>;C]Q}ac](n$">R^.C_ۤUlA1yRQMs}0 W,khJLf8ZԔ~ŖSX؆=+mX+\Pno@ga1jմ.D;v;<47ؼfi0UPQq9ߎE˳MDd6cWxaUEcEeIV³Ɉ,(]ªoO^LR!"$aXqHG e6Ux?y˄,;[ayɔ5B<#+8Is^8Fm؇J&iyb6Bq`OG-WagwUds[>w5|*Mqm03ѿOܺp5u -T)QqLw`GX-Yډ|Y> %VlfHZ"!$F3t΋ɖ _N?/|rd׫UjNܘ>ƻ\]?Om԰ڝkqTExy\O.6[QNyًP3Rek.,=6N\5Dt:qꃳeM6Uu~"K7ͷG_T.nP O):+UeP8?:DuKG`pe+׍e6T‡EWS.n|y??׬ܤ,G 渉2tnÿzMh8s&5/MuMR14FFo7C5~3 $R]ط7LƉo?rS/@x#zɳz5?_S -i3YZ',䕍: ϨTgdvA:VȔo6]q60B쑳ŧ<Xs\/]ƒ,j4q#WCDk #vZɿ"Zr7gG-1G#7`uҢKlB"r+ylvp#,2<9V~*:Uȋ27Ǻ=XV!3QAIR\`]݆Dp3Py^3  -VtX֍ʳm"fuXKF=йx ^ϫ "AlY6wz(xGL`xqӿ\nY çC^Sx%yK3Z/,e*G *?X{G%+uR?';g&Fk[IE̳k[)dd?Et ȧtݳnOůE>KS%5Z۷..7H׫Uu9Qq3*Fx(zR;yF;z S=J_zlm˘$O%ਸӧI_ |9:]|8LIE#v&S 1us! 
?1y 8jw2b^ -b"N;a-l3fk,.[Hٞt&>Aoi.0۾$c**?pQDwѸmFk6gE~.SEMfl]'kLh"'O-SP9UXT׮~čMk?ߛZo_-'#-Rqb5any!m+ VD酎uc -@ դy\'C͖VQ7HQ8Ά{D#hϨ*VKYu<밈d -6Ig6!Xo>K15P_rS7za=Mڵ~TpGhN\??j#jkyq~s~xo\>ò|/d=`8xQWaN+v.Ywd7"Aelr -uC*0Gp{TۧwsД(yutF*'`i<΍ҫ ȧUe^Ϋ"qՓQ QÚԉNίUvRM()mP|R޺x5Gfn>dZƣ/n/ósn8Zu4wC-ʕ?sy2/ӓ:i^QQ *G{^#94-Ey!FJ%ϊ_2@3 S_N9ZQ6<=˃sqc_gSNythDroP!Jt/*bA.)ӎϵl+JYAyrqT{I[WDe(.J{?|B0:k={ٓZb־mGS^JЮpNKv& -ˣӟ^qɹ!zArlUb͜YLz=EyD k\BXۣ_L_O2O({\0 KhaQ(JFQ`A\=R܈`IG=>'l{9 CsR\hj n@܀q?}3{~`?O nJC܀׃q 7 n@m̓j -qcB<6V+ q^ 7 n@܀q4)čBG7 BY6 Xiz7 n@܀q ͠y@\M!n~JC܀׃q 7 n6 7P n@܀׃Cq (͠y 7 n@܀q f<7 n@܀q 7 n@܀ q -4 x=7 n@܀q4qq| :7 n@@m̓q48? x=7`J7>>q#W>q##W>@\@\ -G+GG | -G + |+ -G | -G+ | - - -G+GG |+++ | - -G | -G + |++|| -G++ -G | -G+`p)~zo_M;)'8N!=hп 6)=z$<ȃ=ݻwm}h$@c@\A\eBUxRԡK I豺=+i67|LO9 :2uuYwՙwi 4mZ4*$@WWv"?qîD0ox=\ JSJ Z{dxC4Q7K6 :=n\1`jMGpGV]<' ?)2i 5@\ofjL-pfeVq+/×yFgV粙B_?]WպÇY2Oj ttǏ;ʯ8Qܿk n1R@ SWuܱA eƒԙ Vqfa9\pܤehfy4 -s1VUtHw:|^C =BwswTOSp=+)3ݓqy[BV`G(p/TQaC[2 5e$UKkaoG}|̀Z?roh"6NBaN5@\@5貥ˆ\GJW 4>;啬ga^6$ܙ:t-|L5 6=g; -Ͳ 9]ѣG6=C iWhFה8ZXC77Ug08-B뵝Y\EZ窮x"tZ1*(H4SWZ8Qtu~ -Ǐ+{oNNW8Pw]:ihbr4UJh%ʕp̆P heBaޗ -]|ʭ(㨽Ca(nr d6? ͦ\7r3] }JD*l2KwNT0BS`6 {\dVr)J&Y(\D+t(^d2fT(ql<}* Wϐ5Uj[@%V΀\]W*_%PPl8vI_I>]l'[ѢZVTZ :̽#M1+-\a"(O`* PܥO-sToț/D f4rvZWW7WߓynWlCgBvp+.ۃinՏ># 0|=e In%s+Sa6,cyb۟S˰}%;g4dn@\!k鴶3HF7LKb3CpTДHf>8tԵ&8=)7SZwٍNYCJv$_SZx?w*d5 -C8>zr &5L-Ǎ9U7E| wYLZ^})dC5v q3րSQ~]Mkmpˌ<~X)n*LD %qEƵZǤJݒ͙S1r'?tKLF-#=2|'p|*'S<7»zv6yaX:j3^O^:{۪fxWbT0NKA0swaZ". g$45mG^.-֓1|5{MkS uȆj1\ )s37ET dϵ:7c{-Af2TЦ3H+d =$7:ԘFw9iծM~ w a -:jl #?~/P+Z s37/e׎2 -A D#箑Q*7\.6 7ֿ=jL5H齧pGxjkw{Zct^8Ҍ'VpɬOġ_#[>O -SThSfU6\>X 3$o95D,44ہjd=rgYK~_9Pk)e賠©(ްx[Y PzQk B@\XK+'m2"1[ pF޻Y\0|7Tˉ:uFR]8<1/0n樾ZKI}r(*MGl;)8< F>ipeC#n gqc 7>wѕs8 B8WV橯=ጾ *%[M#MG45n#y uڝ])|h&,ݗM€t&* y-GًW qP!aU;&w /Q61>OeLۄ=tԡ w$5"eɓ'`Lj %f >&vɳzgm6oN MZQ+ #~CN:zd)e >ۤ lxCW>mBi;D&\K&m`r[?:8 ,P?.֧)߀aEӁ -AʤU"W*=hwT7RYV6!nlDϨ%60l_hiKB%qÛ;νz 3c:j;gWAm"9{>4`^Ǘ X򛷩~|n5_7:YaVVtņ ↩fZADJ?_>c*7 HW\>SXTqq-I?eSx_z1 -lwL!m!%L):"M;q#9'NWvM}.~}"0΋_nз}wO#|yܱ:*5/7cKr8nW./MJ ׯmݺoTr? n@A yJ,;{_2gi:MZ՗^DFoKdЎi\fk -Qmt.+nM!_͈?jOn_6^O2fKӅՇQ.i7<^ܸsS{W_}q*=gvo;rXra^:^(Rf;VtWn«|]9Eۗ+ܘØFJс747e }gﺮ .2&}Ƭet|e\Ŏq^H)y` 3M6$ycC۷( +u}8o4q/!776n _WGK:ӧ>;ݠf?w]@۸k_.(Ohtà .nˢj7L٤MWZ#7%}V. kˎCެ}nK< 7;a f_m[VUI?g٦jU_HܨY :oD05uI |^Vȉ>Bvr^*.ڟ:cB2;v|0#hܺ,nغN׭Og}2>uI΍Q n757jqc-[P+v^wNsƱ`I4 ZĒ'ƺu^]|Y֖?wRC"g'܆wˢB4w}kXz{Կ4 B!n]C#Yա۰wn`_}^V>(dpLphעͽWmQRe:λD055{B,qkt^*.UK\2{Ж.ڼͿ=ѹҍ#Ζvt'_n:Ǚ -Z+18İ'ndm>7m1V$ܘu'7>Og߳m/(ؐ_veh .x@O\NuJnNΖ qN_D{&J+W 9]vzpD?X2jvY3WM]{Iqoq;7]qdg>sqkdO7YqGnĆo5M9wIWo c)Q1;c:.83K.~.d9yveO$D9*K[2bVfS-n*U^-ؿ;Ǡ7 nh-n0}sF|cjlG+BޓOcדGۅ!qItd*Ő;>tу/ďZ':+wOC?Dr/g8{b:M~Ȏ('ovdPyB̕4ōG3V @UA樸QxYŧFg,*/-ߓ_2ҝW\>mdvl^4'0܁3L^ λ.Q (n$ v{S5oZEn->KN <;I_MZsB?p$|A -!NN܈]Av,I%Q j6 Wi]!su܍NՇՉM[.`O({G`eJµy&t߆S$2{yoN7|:t --äCj=jo6~ۡ_m^r\+'E$ځ4=qϗ !:37>NDLSk3m ֨~hbvdc. xbkglj JHZU8:#8˘{EǯX$W.CZ5MNM 5lWNb3ҥCrnm>m$ qCqh -ok?W cڱu*liF t.!q#KhkַLJHrt~0ߛ5ߎolWdDCw>J}zg*e@Vjp~6aAMɡϼ U #o>BYޢMXdpưĨMgQe{;OP,ܙ?>m+)W4{=ͲG"͋L^Okq6?GGWZ4[úoNɶCԊqT JBIW.ߙ|Nl6-yip ZXz^tN~6xj[SyYy6EM~ub^nY ZFgV:aQD*M "I m\ͥϼX7\u 7Rqqp@X:h#ۈvQլBuc&.ղl~?c;6޺6Ejd? 
-oB8aK!7 -KQk9Jv@Wz48*nLLU;U˭:=籋" +{j}VMA< ZrI|ãx7$αmlܴ}z"C3S?^~s:W[qd7y^:qԔ0)u -GPզ@PS=FVC1֮;~%P pHXz|C"6.7ՠ>DE;nI{ -H3Mt$D?1}۝_-GEUX7 č'< O mYaBEGNxV"ӡVrzD ֩f -w꠩[4wyo|3rfYr%:AG槣Lo)1EGѻ3l -q(2?8a$1YK8IjSc,{#GGLcĆ;XS=G D$m{Y=#,TvƕG#789}%$-aN.{L{ޡ2TA RkLE?6^8dU;wZ3:۪ - -LZӰ9_''ۚ‘귏B܀q2tdpig zDؖ|cX=7\x̔Uxx91buEyB$+_lP941t~POa<_R(&38mc -=ۢ^0jcXJƐ,*NDŽǍj^GJݨʄg^ʲvi=;=rp Lצ|oOôu+_>lO+U[T.n옡ӒE,zP?e߂G;Oݸj(_WQg&aa lpD։Y mS-nt^>dlSPWZ#1NpjZTfq&r}֨nOU;.Sj ƈ$ukqF^ Kix8}O_SW,B?7Tb*h sWX΋!w\)踺ۓ'rxNnj -m?V 5I"F7n_?^,^tp-k:T^=4I0 ^aiF46>= 6y: f*57EYD_=IqR嗅EdI:E΍ʯT -8$:,KwGSSmO]md|w-4>ol= :QxF!*n97즃ٔ_TVME KcPFBTօ'ƀIV%&+z4?_X|^# XesO*W; NJIں"/ؿt"G/c \4GtإV-{&uK^*X ->[<_YBV+^2>ht(jE?dC-"á|HF҈5,<0LHB7AE3=ƚi& -tYǰh%i-sk:2I?\.n(zam^p7v,Yb:WT9E=aDƒis^<.ȸ}ʱD,\}b^6, u$74ʹ^*.sszo|)v>,loِP#qE ˕6ЯBܠ:ߔEf7Zun<~ssLhܺpcҲ6+4y4äu/bet0upЇLrqcM<}Q2k~ gY aN,i 7ެ4#nPW'=۸l./+7FKbqуc&7'[$P?GgT~4u3O(,@Yfm/xr0Sѡ^܀num%a> 7JOݢ{8ϒ'1<Ъ;2B*n\&8<fn<4熵k?%↦tԽ&Ѱ[׷p#ioXbZԴ[[-č̍glNv`%P!nf^u7n<{1 RimTפx*F-; c :&7%-Bpp{Tf.W.n<m!EgFb7 k oB(X07j^ 6g;[Æl V^I/&}4ō-%uޤu>w@C5; - &h)nXg%#,Pݛ,j@GWKϗ -6Nqhb(qA$9}]rەmaVvik4C:| 쨸1y\W-{Zqԉ"ecXR\٠6OݸO?o?(z>\Sqdg;5͹ЃiXX -|tha". iޘֳO5лč; 7n|B0cnoyWd-L(ʱB/J%7DKBqz)&Vd&sb<ze  ōM2~b{f kQKS,|[x_86\O!pF`cZ7>7Y:YC!kWYɗK4S9l.[G`C6˽,ၟM;cx*$MEnZܠÊ򂮼rF4El%[6 -ר壖hM "O҇ՇG-;xRFaZ<9r&AiV4mH4ɺፕG nh*nPaSEXŭ˴WnlB%3ҲM*čɡ^O*sR8yq.7LJGk]%n,_Zq-T\-$ɗieP%ebN/ខ}Q7-[Wzʈ<976.4Fu (nYmY} =ōm6#4y6(2?2AwϥZu6uHܘ>1Q*bō2f&Geڸq,zMæPgB$JV%M<[o;a,HU |Bnd\k",DO*čmM鱱m?@܀tNf#[NbV0;.gw3X}ipv@<"/t@(Ff/,!pUag(7~[UQB)K &-Nƨ)|mEZ /:e4n(dHe|fK١?"f-#UPK;q6򮰉%#bXGI4zTCav5PaS"7/.qCSq#>l5lWn]ưܖ]eU*Uޔ_䨸aj:^輸l#ԠO'GyZبv wQq,Ut/Lٰ.R7;N**^"+dI$sr %KYr% e`@ݻ{s>>]ԩyίNvJ67_~:ky˖~43G9~ݹT↸%UZKe]#JmLSy "BVlmmFNMzTzW^uf8zg 0yѸč}%o@Gʞ9u - QVbȕlFQIfUuQB]ѹV>9JVҖƟ^U#??chZↈXiB1U47FWE:xzt|7eZV'U, -HtP&?|% -q,Eo"We=d\\_zN |NlUD+<&bW=MXTK,Rڹ#7{yU77\7ZPL -Z)2pG4\h)(y>V~zVD &ET}hTR,u -& {>puS: ;gZ}Yq ݘAѾ^ -_+D5R%/^ ڞZTSZMƷ YnъutОW"׋Rɏx6r-mj+#y/)YBT8R)/&5MO,|5êuUXkgv/0JA8zյIWWտ"-k0:o ֞(%D!_sNmm_0↣꧱ofԠ]m߅ ;oP7 -b΅F9φ.r:r4L[(BW܈p4&Q'HѲ32(nO.P9hG͜HōXAōbErsٞPڸ*(*{u9뇴ieĠk7S} 9Se;dj>׊9$7o63V;[QofY7j6|}*˓[9##qk|7'}Ι)Vkv5uȔw.% -QCjuèu(W/n.Vh~"ұU;Vꩃ;أ{ Wܐ}%5W@. ݔa=_]QDd|uNS+SlҼRIl-W}eF5"QQuoImf#E7!"/uB[:P%Hyn:u.Ŀ2I -/7-~"60u: TbQUfaKCeGv芅ZKEl`ʹʙm[j3|Ԥ]Ή}V# g/;<*os|9{$tHj)Z=Jܸ-/Ѿ"UI+Ǥf}yY_TKr꺀bNyו6Xl=ITN;,eYfmժE. GōsIjq 6Q_m9tcVqٿp+W\-/3CBP|}kv-i^&w}~u8[20AD(rq!]`vҡu[čX♎뇴ORN{&9v܍?8Xa KJYw ϛ -+MzqMsLI6+R\HCWv^w697M;c(.Y,Ҥa3MGzsY^W]W7hXH=rǟ^ !2qK;] Q+\|4Ͼ *C$~KQf9?[ӑLX&*:g򞭋/>]ID"N]}t7E6ѐ j#n nxϓ4{tD6,zm(ayDsބ!VI]ceCrJ=^K7ϡna6?F;ثŇYT%n- 1=:jÊq[.|$QD%㫑"fY 뵲m#OCi9lﳿgKCzĪ,|Hǡlg ҃ 7/n;v|n9bw+O9q4snu8[hK"_y'I(nciؘh;<"n2h|%FX)I]8f7x -7Њta= 1a|GY FyyDJp4&1t čpō\f⯫M2fIZqq7J?.'t޻90֬I|:q#čOܵ)WW5l"΀G$ID@@@q'mqc77q<\hO -͆wFAX{칱[!U˶yU.nY_-ꊸaےڷ,ҮciqgQ:yJ˦o֫MF̹#7P 7˓âS7770$,* qqqqqqqqPq77h!n n n n n n n n n n n#nHqqqqA AB@@@@@@@@@  qqqqqqqqPq77777777773p|#@ -WG>\ \ p|+#\G+#\WG#WW|>\@ -WG++> p|+# p|+#@ -J\@ - p%pG# p|#@ -WG>\WG>\ \ p|+#WG>@J -G+#\WG#LG ~ | 'ecэ [/,WpA+t!`R!&zϺN|aE޽z\DpGwhSU< -f 7q+=vť/\y\#eX4:!7<!|#WVbXpv! pMﳋ[n!kSWݩ^+WHjFypV|~?ʽ{<!wo!#0dw۫5aUP!@GD{Gk_wk^íJZ\pT2 -~8JEy -G[ۛQd;56MʹZ߇د[RD,cvpF+1[d-삂*Z"8(& pk 1wލuxgyi.|Q%~X6*M,o};1MQNqWwdh7FOr}ubXWZQ e-o Vc%Bs4Hx -|8zB;^Чޝ.T+6|HTqs珕wޡ7*qU#oNAsI]EG~GS5>2Q-,2Cd>O7L'W刻2ӛaVjlzzzlk:ҍxy@{=]JqԻ. 
/DeviceRGB /I true /S /Transparency >> -endobj -308 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (figure.caption.17) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 116.311 279.667 140.62 289.716 ] >> -endobj -309 0 obj -<< /Type /Annot /Subtype /Link /A << /D (subsection.20) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 1 0 0 ] /H /I -/Rect [ 155.231 219.633 189.888 229.682 ] >> -endobj -310 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (figure.caption.19) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 127.306 159.857 150.74 169.906 ] >> -endobj -312 0 obj -<< /Type /Annot /Subtype /Link /A << /D (Hfootnote.22) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 1 0 0 ] /H /I -/Rect [ 343.035 329.094 349.012 341.039 ] >> -endobj -313 0 obj -<< /Type /Annot /Subtype /Link /A << /D (cite.spacy2) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 419.494 68.164 429.456 79.966 ] >> -endobj -314 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.bertucci2022dendromap) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 548.668 189.404 555.233 199.453 ] >> -endobj -316 0 obj -<< /Type /Annot /Subtype /Link /A << /D (cite.tableLens) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 0 1 0 ] /H /I -/Rect [ 100.545 599.361 111.683 609.41 ] >> -endobj -317 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (figure.caption.23) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 172.67 599.361 197.166 609.41 ] >> -endobj -318 0 obj -<< /Font << /F134 95 0 R /F161 87 0 R /F164 88 0 R /F167 89 0 R /F178 91 0 R -/F185 92 0 R /F284 328 0 R /F286 327 0 R >> -/ProcSet [ /PDF /Text /ImageC ] -/XObject << /Im2 306 0 R /Im3 311 0 R /Im4 315 0 R >> >> -endobj -319 0 obj -<< /Type /Page -/Annots [ 301 0 R 302 0 R 304 0 R 308 0 R 309 0 R 310 0 R 303 0 R 305 0 R -312 0 R 314 0 R 313 0 R ] -/Contents 320 0 R /Group 307 0 R /MediaBox [ 0 0 612 792 ] -/Parent 97 0 R /Resources 318 0 R >> -endobj -321 0 obj -<< /D [ 319 0 R /XYZ 53 771.4 null ] >> -endobj -322 0 obj -<< /D [ 319 0 R /XYZ 54 742.981 null ] >> -endobj -323 0 obj -<< /D [ 319 0 R /XYZ 54 668.598 null ] >> -endobj -324 0 obj -<< /D [ 319 0 R /XYZ 68.346 81.809 null ] >> -endobj -325 0 obj -<< /D [ 319 0 R /XYZ 68.346 72 null ] >> -endobj -326 0 obj -<< /D [ 319 0 R /XYZ 317.955 742.981 null ] >> -endobj -327 0 obj -<< /Type /Font /Subtype /Type1 /BaseFont /JHYTSG+CMR10 /FirstChar 61 -/FontDescriptor 518 0 R /LastChar 61 /ToUnicode 535 0 R -/Widths 506 0 R >> -endobj -328 0 obj -<< /Type /Font /Subtype /Type1 /BaseFont /XYLMEX+CMMI10 /FirstChar 58 -/FontDescriptor 516 0 R /LastChar 59 /ToUnicode 534 0 R -/Widths 505 0 R >> -endobj -329 0 obj -<< /D [ 319 0 R /XYZ 332.301 72 null ] >> -endobj -333 0 obj -<< /D [ 443 0 R /XYZ 54 478.862 null ] >> -endobj -334 0 obj -<< /D [ 443 0 R /XYZ 317.955 660.662 null ] >> -endobj -336 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (figure.caption.25) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 213.905 469.356 237.597 479.405 ] >> -endobj -337 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.han2001prefixspan) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 280.14 429.506 291.278 439.554 ] >> -endobj -338 0 obj -<< /Type /Annot /Subtype /Link /A << /D (section.4) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 1 0 0 ] /H /I -/Rect [ 53.004 199.068 78.728 209.116 ] >> -endobj -339 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (cite.anil2023palm) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 0 1 0 ] /H /I /Rect [ 129.552 199.068 135.938 209.116 ] >> -endobj -340 0 obj -<< /Type /Annot /Subtype /Link -/A 
<< /D (figure.caption.29) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 117.375 69.063 140.742 79.111 ] >> -endobj -342 0 obj -<< /Type /Annot /Subtype /Link /A << /D (section.4) /S /GoTo >> -/Border [ 0 0 0 ] /C [ 1 0 0 ] /H /I -/Rect [ 531.722 401.392 558.306 409.625 ] >> -endobj -343 0 obj -<< /Type /Annot /Subtype /Link -/A << /D (figure.caption.19) /S /GoTo >> /Border [ 0 0 0 ] -/C [ 1 0 0 ] /H /I /Rect [ 512.265 268.912 536.798 278.961 ] >> -endobj -344 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 703.4 295.041 712.554 ] >> -endobj -345 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 97.402 514.11 179.467 523.264 ] >> -endobj -346 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 183.514 514.11 294.005 523.264 ] >> -endobj -347 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 506.26 89.196 513.799 ] >> -endobj -348 0 obj -<< /Font << /F134 95 0 R /F161 87 0 R /F164 88 0 R /F167 89 0 R /F178 91 0 R -/F185 92 0 R /F203 96 0 R >> -/ProcSet [ /PDF /Text /ImageC ] -/XObject << /Im5 335 0 R /Im6 341 0 R >> >> -endobj -349 0 obj -<< /Type /Page -/Annots [ 316 0 R 317 0 R 336 0 R 337 0 R 338 0 R 339 0 R 340 0 R 342 0 R -343 0 R ] -/Contents 350 0 R /Group 307 0 R /MediaBox [ 0 0 612 792 ] -/Parent 97 0 R /Resources 348 0 R >> -endobj -351 0 obj -<< /D [ 349 0 R /XYZ 53 771.4 null ] >> -endobj -352 0 obj -<< /D [ 349 0 R /XYZ 54 742.981 null ] >> -endobj -353 0 obj -<< /D [ 349 0 R /XYZ 317.955 742.981 null ] >> -endobj -354 0 obj -<< /D [ 349 0 R /XYZ 317.955 106.157 null ] >> -endobj -356 0 obj -<< /D [ 443 0 R /XYZ 317.955 584.344 null ] >> -endobj -357 0 obj -<< /D [ 443 0 R /XYZ 54 100.394 null ] >> -endobj -359 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2301.04518) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 495.181 188.128 504.335 ] >> -endobj -360 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2301.04518) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 189.964 495.181 295.041 504.335 ] >> -endobj -361 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2301.04518) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 186.404 485.717 294.047 494.87 ] >> -endobj -362 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2301.04518) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 477.866 89.196 485.406 ] >> -endobj -363 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 466.788 295.041 475.941 ] >> -endobj -364 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 150.649 457.323 295.041 466.477 ] >> -endobj -365 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 234.033 447.859 295.041 
457.012 ] >> -endobj -366 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 208.783 438.474 259.097 447.568 ] >> -endobj -367 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 261.181 438.474 279.433 447.568 ] >> -endobj -368 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 281.507 438.474 295.041 447.568 ] >> -endobj -369 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2210.06280) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 419.465 294.403 428.619 ] >> -endobj -370 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2210.06280) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 410.001 244.3 419.154 ] >> -endobj -371 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2210.06280) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 247.171 410.001 295.041 419.154 ] >> -endobj -372 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2210.06280) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 133.47 401.138 151.402 409.71 ] >> -endobj -373 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 296.737 252.642 305.58 ] >> -endobj -374 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 257.247 296.737 295.041 305.58 ] >> -endobj -375 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 223.579 286.962 295.041 296.115 ] >> -endobj -376 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 193.011 277.577 249.241 286.671 ] >> -endobj -377 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 251.143 277.577 268.757 286.671 ] >> -endobj -378 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 270.646 277.577 295.041 286.671 ] >> -endobj -379 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 230.175 205.79 239.328 ] >> -endobj -380 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 208.631 230.175 295.041 239.328 ] >> -endobj -381 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 275.889 220.71 295.041 229.864 ] >> -endobj -382 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I 
-/Rect [ 250.421 211.326 295.041 220.419 ] >> -endobj -383 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 91.188 202.383 109.121 210.935 ] >> -endobj -384 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 111.114 202.383 211.166 210.935 ] >> -endobj -385 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 126.065 294.403 135.219 ] >> -endobj -386 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 116.601 241.872 125.774 ] >> -endobj -387 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 246.666 116.601 295.041 125.774 ] >> -endobj -388 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 148.099 107.216 158.046 116.31 ] >> -endobj -389 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 160.052 107.216 187.914 116.31 ] >> -endobj -390 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 189.917 107.216 295.041 116.31 ] >> -endobj -391 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 715.851 536.558 725.005 ] >> -endobj -392 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 542.97 715.851 558.996 725.005 ] >> -endobj -393 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 511.827 706.387 558.996 715.54 ] >> -endobj -394 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 397.425 697.524 415.358 706.096 ] >> -endobj -395 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 687.458 501.284 696.611 ] >> -endobj -396 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 505.085 687.458 558.996 696.611 ] >> -endobj -397 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 539.654 677.993 558.996 687.147 ] >> -endobj -398 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 518.007 668.608 558.996 677.702 ] >> -endobj -399 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI 
(https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 355.144 659.666 373.076 668.218 ] >> -endobj -400 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 375.069 659.666 491.062 668.218 ] >> -endobj -401 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 573.884 415.501 583.037 ] >> -endobj -402 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 422.109 573.884 558.996 583.037 ] >> -endobj -403 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 418.126 555.034 558.996 564.128 ] >> -endobj -404 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 468.561 545.49 510.938 554.663 ] >> -endobj -405 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 512.963 545.49 531.119 554.663 ] >> -endobj -406 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 533.143 545.49 558.996 554.663 ] >> -endobj -407 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 526.561 558.996 535.714 ] >> -endobj -408 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 378.35 517.096 543.311 526.27 ] >> -endobj -409 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 546.588 517.096 558.996 526.27 ] >> -endobj -410 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 497.448 507.712 507.411 516.805 ] >> -endobj -411 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 509.404 507.712 527.336 516.805 ] >> -endobj -412 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 460.309 441.445 469.463 ] >> -endobj -413 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 447.982 460.309 558.996 469.463 ] >> -endobj -414 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI 
(https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 472.907 450.845 557.962 460.018 ] >> -endobj -415 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.019 441.38 380.401 450.534 ] >> -endobj -416 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 382.393 441.38 400.326 450.534 ] >> -endobj -417 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 402.319 441.38 516.94 450.534 ] >> -endobj -418 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 432.398 510.746 441.069 ] >> -endobj -419 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 513.988 432.398 558.996 441.069 ] >> -endobj -420 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 368.058 413.067 558.04 422.16 ] >> -endobj -421 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 403.833 384.585 412.676 ] >> -endobj -422 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 386.578 403.833 404.51 412.676 ] >> -endobj -423 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 406.503 403.833 522.496 412.676 ] >> -endobj -424 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2303.04360) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 394.058 456.413 403.211 ] >> -endobj -425 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2303.04360) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 459.296 394.058 558.996 403.211 ] >> -endobj -426 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2303.04360) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 447.816 384.593 557.96 393.767 ] >> -endobj -427 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2303.04360) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 376.743 353.151 384.282 ] >> -endobj -428 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 365.664 558.996 374.818 ] >> -endobj -429 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 370.394 346.735 558.996 355.889 ] >> -endobj -430 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 509.057 337.271 558.996 346.424 ] >> -endobj -431 0 
obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 397.425 328.408 415.358 336.96 ] >> -endobj -432 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 308.877 444.655 318.071 ] >> -endobj -433 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 450.068 308.877 558.996 318.071 ] >> -endobj -434 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 395.252 299.492 558.996 308.586 ] >> -endobj -435 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 368.462 290.028 425.798 299.122 ] >> -endobj -436 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 427.791 290.028 445.724 299.122 ] >> -endobj -437 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 447.716 290.028 547.769 299.122 ] >> -endobj -438 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 280.484 558.996 289.637 ] >> -endobj -439 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 396.605 271.019 558.996 280.173 ] >> -endobj -440 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 415.883 261.634 523.622 270.728 ] >> -endobj -441 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 525.615 261.634 543.547 270.728 ] >> -endobj -442 0 obj -<< /Font << /F161 87 0 R /F167 89 0 R /F185 92 0 R >> -/ProcSet [ /PDF /Text ] >> -endobj -443 0 obj -<< /Type /Page -/Annots [ 344 0 R 447 0 R 448 0 R 449 0 R 450 0 R 451 0 R 452 0 R 453 0 R -454 0 R 455 0 R 456 0 R 457 0 R 458 0 R 459 0 R 460 0 R 461 0 R -462 0 R 463 0 R 464 0 R 465 0 R 466 0 R 345 0 R 346 0 R 347 0 R -359 0 R 360 0 R 467 0 R 361 0 R 362 0 R 363 0 R 468 0 R 364 0 R -469 0 R 365 0 R 470 0 R 366 0 R 367 0 R 368 0 R 471 0 R 369 0 R -370 0 R 371 0 R 472 0 R 372 0 R 373 0 R 374 0 R 473 0 R 375 0 R -474 0 R 376 0 R 377 0 R 378 0 R 475 0 R 379 0 R 380 0 R 476 0 R -381 0 R 477 0 R 382 0 R 478 0 R 383 0 R 384 0 R 385 0 R 386 0 R -387 0 R 479 0 R 388 0 R 389 0 R 390 0 R 480 0 R 391 0 R 392 0 R -481 0 R 393 0 R 482 0 R 394 0 R 395 0 R 396 0 R 483 0 R 397 0 R -484 0 R 398 0 R 485 0 R 399 0 R 400 0 R 401 0 R 402 0 R 486 0 R -487 0 R 403 0 R 488 0 R 404 0 R 405 0 R 406 0 R 489 0 R 407 0 R -490 0 R 408 0 R 409 0 R 491 0 R 410 0 R 411 0 R 412 0 R 413 0 R -492 0 R 414 0 R 415 0 R 416 0 R 417 0 R 418 0 R 419 0 R 493 0 R -494 0 R 420 0 R 421 0 R 422 0 R 423 0 R 424 0 R 425 0 R 495 0 R -426 0 R 427 0 R 428 0 R 496 0 R 497 0 R 429 0 R 498 0 R 430 0 R -499 0 R 431 0 R 432 0 R 433 0 R 500 0 R 434 0 R 501 0 R 435 0 R -436 0 R 
437 0 R 438 0 R 502 0 R 439 0 R 503 0 R 440 0 R 441 0 R ] -/Contents 444 0 R /MediaBox [ 0 0 612 792 ] /Parent 97 0 R -/Resources 442 0 R >> -endobj -445 0 obj -<< /D [ 443 0 R /XYZ 53 771.4 null ] >> -endobj -446 0 obj -<< /D [ 443 0 R /XYZ 54 738 null ] >> -endobj -447 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 693.936 295.041 703.089 ] >> -endobj -448 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 684.471 295.041 693.625 ] >> -endobj -449 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 675.007 295.041 684.16 ] >> -endobj -450 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 665.542 295.041 674.696 ] >> -endobj -451 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 656.078 295.041 665.231 ] >> -endobj -452 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 646.613 295.041 655.767 ] >> -endobj -453 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 637.149 295.041 646.342 ] >> -endobj -454 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 627.684 295.041 636.838 ] >> -endobj -455 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 618.822 295.041 627.373 ] >> -endobj -456 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 608.755 295.041 617.909 ] >> -endobj -457 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 599.291 295.041 608.444 ] >> -endobj -458 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 589.826 295.041 598.98 ] >> -endobj -459 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 580.362 295.041 589.515 ] >> -endobj -460 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 570.897 295.041 580.051 ] >> -endobj -461 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 561.433 295.041 570.586 ] >> -endobj -462 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 551.968 295.041 561.122 ] >> 
-endobj -463 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 542.504 295.041 551.657 ] >> -endobj -464 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 533.039 295.041 542.193 ] >> -endobj -465 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 523.575 295.041 532.728 ] >> -endobj -466 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2305.10403) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 514.11 93.356 523.264 ] >> -endobj -467 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2301.04518) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 485.717 183.526 494.87 ] >> -endobj -468 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 457.323 148.764 466.477 ] >> -endobj -469 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 447.859 231.488 457.012 ] >> -endobj -470 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 438.474 206.698 447.568 ] >> -endobj -471 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2022.3209425) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 430.48 171.786 438.083 ] >> -endobj -472 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2210.06280) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 401.138 131.477 409.71 ] >> -endobj -473 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 286.962 220.75 296.115 ] >> -endobj -474 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 277.577 191.109 286.671 ] >> -endobj -475 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2010.154) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 269.583 148.374 277.186 ] >> -endobj -476 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 220.71 273.053 229.864 ] >> -endobj -477 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 211.326 248.392 220.419 ] >> -endobj -478 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2013.212) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 202.383 89.196 210.935 ] >> -endobj -479 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 107.216 
146.093 116.31 ] >> -endobj -480 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1186/s12874-020-00977-1) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 71.263 99.397 78.572 106.825 ] >> -endobj -481 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 706.387 508.984 715.54 ] >> -endobj -482 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2212.09864) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 697.524 395.432 706.096 ] >> -endobj -483 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 677.993 536.793 687.147 ] >> -endobj -484 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 668.608 515.567 677.702 ] >> -endobj -485 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2018.2843369) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 659.666 353.151 668.218 ] >> -endobj -486 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 564.419 558.996 573.573 ] >> -endobj -487 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 555.034 412.916 564.128 ] >> -endobj -488 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 545.49 466.535 554.663 ] >> -endobj -489 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1145/191666.191776) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 537.576 405.08 545.179 ] >> -endobj -490 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 517.096 375.073 526.27 ] >> -endobj -491 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 507.712 495.456 516.805 ] >> -endobj -492 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1057/palgrave.ivs.9500180) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 450.845 470.039 460.018 ] >> -endobj -493 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 422.451 558.996 431.605 ] >> -endobj -494 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI -/URI (https://doi.org/10.1109/TVCG.2017.2744158) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 413.067 365.765 422.16 ] >> -endobj -495 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2303.04360) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 384.593 445.516 393.767 ] 
>> -endobj -496 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 356.2 558.996 365.353 ] >> -endobj -497 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 346.735 364.481 355.889 ] >> -endobj -498 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 337.271 501.229 346.424 ] >> -endobj -499 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2008.05122) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 328.408 395.432 336.96 ] >> -endobj -500 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 299.492 392.372 308.586 ] >> -endobj -501 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://doi.org/10.1109/TVCG.2008.172) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 290.028 366.469 299.122 ] >> -endobj -502 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 271.019 393.699 280.173 ] >> -endobj -503 0 obj -<< /Type /Annot /Subtype /Link -/A << /Type /Action /S /URI /URI (https://arxiv.org/abs/2111.06467) >> -/Border [ 0 0 0 ] /C [ 0 1 1 ] /H /I -/Rect [ 335.218 261.634 413.014 270.728 ] >> -endobj -505 0 obj -[ 277.8 277.8 ] -endobj -506 0 obj -[ 777.8 ] -endobj -507 0 obj -[ 277.8 ] -endobj -508 0 obj -[ 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 -525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 -525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 -525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 -525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 525 -525 525 525 525 525 525 525 ] -endobj -509 0 obj -<< /Type /Encoding -/Differences [ 2 /fi /fl 17 /dotlessi 39 /quoteright /parenleft /parenright -/asterisk /plus /comma /hyphen /period /slash /zero /one /two -/three /four /five /six /seven /eight /nine /colon /semicolon 63 -/question /at /A /B /C /D /E /F /G /H /I /J /K /L /M /N /O /P /Q -/R /S /T /U /V /W /X /Y /Z /bracketleft 93 /bracketright 96 -/quoteleft /a /b /c /d /e /f /g /h /i /j /k /l /m /n /o /p /q /r -/s /t /u /v /w /x /y /z 150 /endash /emdash 168 /dieresis 180 -/acute 184 /cedilla 223 /germandbls ] >> -endobj -510 0 obj -[ 500 500 167 333 556 278 333 333 0 333 675 0 556 389 333 278 0 0 0 -0 0 0 0 0 0 0 0 0 333 214 250 333 420 500 500 833 778 333 333 333 -500 675 250 333 250 278 500 500 500 500 500 500 500 500 500 500 -333 333 675 675 675 500 920 611 611 667 722 611 611 722 722 333 -444 667 556 833 667 722 611 722 611 500 556 722 611 833 611 556 -556 389 278 389 422 500 333 500 500 444 500 444 278 500 500 278 -278 444 278 722 500 500 500 500 389 389 278 500 444 667 444 444 -389 ] -endobj -511 0 obj -[ 333 333 500 570 250 333 250 278 500 500 500 500 500 500 500 500 -500 500 333 333 570 570 570 500 930 722 667 722 722 667 611 778 -778 389 500 778 667 944 722 778 611 778 722 556 667 722 722 1000 -722 722 667 333 278 333 581 500 333 500 556 444 556 444 333 500 -556 278 333 556 278 833 556 500 556 556 444 389 333 556 500 722 -500 500 ] -endobj -512 0 obj -[ 556 
556 167 333 611 278 333 333 0 333 564 0 611 444 333 278 0 0 0 -0 0 0 0 0 0 0 0 0 333 180 250 333 408 500 500 833 778 333 333 333 -500 564 250 333 250 278 500 500 500 500 500 500 500 500 500 500 -278 278 564 564 564 444 921 722 667 667 722 611 556 722 722 333 -389 722 611 889 722 722 556 722 667 556 611 722 722 944 722 722 -611 333 278 333 469 500 333 444 500 444 500 444 333 500 500 278 -278 500 278 778 500 500 500 500 333 389 278 500 500 722 500 500 -444 480 200 480 541 0 0 0 333 500 444 1000 500 500 333 1000 556 -333 889 0 0 0 0 0 0 444 444 350 500 1000 333 980 389 333 722 0 0 -722 0 333 500 500 500 500 200 500 333 760 276 500 564 333 760 333 -400 564 300 300 333 500 453 250 333 300 310 500 750 750 750 444 -722 722 722 722 722 722 889 667 611 611 611 611 333 333 333 333 -722 722 722 722 722 722 722 564 722 722 722 722 722 722 556 500 ] -endobj -513 0 obj -[ 278 278 556 556 556 556 556 556 556 556 556 556 278 278 584 584 -584 556 1015 667 667 722 722 667 611 778 722 278 500 667 556 833 -722 778 667 778 722 667 611 722 667 944 667 667 611 278 278 278 -469 556 222 556 556 500 556 556 278 556 556 222 222 500 222 833 -556 556 556 556 333 500 278 556 500 722 500 500 ] -endobj -514 0 obj -[ 278 278 556 556 556 556 556 556 556 556 556 556 333 333 584 584 -584 611 975 722 722 722 722 667 611 778 722 278 556 722 611 833 -722 778 667 778 722 667 611 722 667 944 667 667 611 333 278 333 -584 556 278 556 611 556 611 556 333 611 611 278 278 556 278 889 -611 611 611 611 389 556 333 611 556 778 556 556 500 ] -endobj -516 0 obj -<< /Type /FontDescriptor /Ascent 694 /CapHeight 683 -/CharSet (/comma/period) /Descent -194 /Flags 4 -/FontBBox [ -32 -250 1048 750 ] /FontFile 515 0 R -/FontName /XYLMEX+CMMI10 /ItalicAngle -14 /StemV 72 /XHeight 431 >> -endobj -518 0 obj -<< /Type /FontDescriptor /Ascent 694 /CapHeight 683 /CharSet (/equal) -/Descent -194 /Flags 4 /FontBBox [ -40 -250 1009 750 ] -/FontFile 517 0 R /FontName /JHYTSG+CMR10 /ItalicAngle 0 /StemV 69 -/XHeight 431 >> -endobj -520 0 obj -<< /Type /FontDescriptor /Ascent 750 /CapHeight 683 /CharSet (/bar) -/Descent -194 /Flags 4 /FontBBox [ -29 -960 1116 775 ] -/FontFile 519 0 R /FontName /XDJWFC+CMSY10 /ItalicAngle -14 -/StemV 40 /XHeight 431 >> -endobj -522 0 obj -<< /Type /FontDescriptor /Ascent 668 /CapHeight 622 -/CharSet (/A/B/C/D/E/F/H/I/J/K/L/M/N/O/P/Q/R/S/T/U/V/W/a/b/braceleft/braceright/bracketleft/bracketright/c/colon/comma/d/e/f/five/g/greater/h/hyphen/i/j/k/l/less/m/n/nine/o/p/parenleft/parenright/period/quoteleft/quoteright/r/s/seven/slash/t/three/two/u/v/w/y/z/zero) -/Descent -167 /Flags 4 /FontBBox [ -5 -183 542 746 ] -/FontFile 521 0 R /FontName /MCSIDT+txtt /ItalicAngle 0 /StemV 85 -/XHeight 461 >> -endobj -524 0 obj -<< /Type /FontDescriptor /Ascent 722 /CapHeight 722 -/CharSet (/A/B/C/D/E/F/G/H/I/K/L/M/N/O/R/S/T/U/V/W/Y/Z/a/b/c/colon/d/e/f/five/four/g/h/i/l/m/n/o/one/p/period/r/s/seven/six/t/three/two/u/v/x/y/z) -/Descent -217 /Flags 4 /FontBBox [ -173 -307 1003 949 ] -/FontFile 523 0 R /FontName /TTDTKU+NimbusSanL-Bold /ItalicAngle 0 -/StemV 141 /XHeight 532 >> -endobj -527 0 obj -<< /Type /FontDescriptor /Ascent 712 /CapHeight 712 -/CharSet (/A/C/D/E/F/G/K/L/M/O/P/R/S/T/a/b/c/colon/d/e/f/five/g/h/i/k/l/m/n/o/one/p/period/q/r/s/t/three/two/u/v/w/x/y) -/Descent -213 /Flags 4 /FontBBox [ -174 -285 1001 953 ] -/FontFile 526 0 R /FontName /VQPDKD+NimbusSanL-Regu /ItalicAngle 0 -/StemV 85 /XHeight 523 >> -endobj -529 0 obj -<< /Type /FontDescriptor /Ascent 690 /CapHeight 690 -/CharSet 
(/A/B/C/D/E/F/H/I/O/Q/R/S/T/U/a/b/c/colon/d/e/f/four/g/h/hyphen/i/k/l/m/n/o/one/p/parenleft/parenright/period/r/s/t/three/two/u/v/w/x/y) -/Descent -209 /Flags 4 /FontBBox [ -168 -341 1000 960 ] -/FontFile 528 0 R /FontName /VNXIWS+NimbusRomNo9L-Medi -/ItalicAngle 0 /StemV 140 /XHeight 461 >> -endobj -531 0 obj -<< /Type /FontDescriptor /Ascent 678 /CapHeight 651 -/CharSet (/A/B/C/D/E/F/G/H/I/J/K/L/M/N/O/P/Q/R/S/T/U/V/W/X/Y/Z/a/acute/asterisk/at/b/bracketleft/bracketright/c/cedilla/colon/comma/d/dieresis/dotlessi/e/eight/emdash/endash/f/fi/five/fl/four/g/germandbls/h/hyphen/i/j/k/l/m/n/nine/o/one/p/parenleft/parenright/period/plus/q/question/quoteleft/quoteright/r/s/semicolon/seven/six/slash/t/three/two/u/v/w/x/y/z/zero) -/Descent -216 /Flags 4 /FontBBox [ -168 -281 1000 924 ] -/FontFile 530 0 R /FontName /SAVEMP+NimbusRomNo9L-Regu -/ItalicAngle 0 /StemV 85 /XHeight 450 >> -endobj -533 0 obj -<< /Type /FontDescriptor /Ascent 668 /CapHeight 668 -/CharSet (/A/B/C/D/E/G/H/I/M/N/P/R/S/T/V/X/a/b/c/colon/d/e/eight/f/fi/five/four/g/h/hyphen/i/k/l/m/n/nine/o/one/p/parenleft/parenright/period/r/s/seven/six/t/three/two/u/v/w/x/y/z/zero) -/Descent -193 /Flags 4 /FontBBox [ -169 -270 1010 924 ] -/FontFile 532 0 R /FontName /VRRQGM+NimbusRomNo9L-ReguItal -/ItalicAngle -15 /StemV 78 /XHeight 441 >> -endobj -543 0 obj -<< /Type /Outlines /Count 17 /First 4 0 R /Last 68 0 R >> -endobj -544 0 obj -<< /Limits [ (Doc-Start) (Item.11) ] -/Names [ (Doc-Start) 93 0 R (Hfootnote.15) 324 0 R (Hfootnote.16) 325 0 R -(Hfootnote.22) 329 0 R (Item.10) 282 0 R (Item.11) 283 0 R ] >> -endobj -545 0 obj -<< /Limits [ (Item.12) (cite.DBLP:journals/corr/abs-2111-06467) ] -/Names [ (Item.12) 284 0 R (Item.9) 281 0 R (cite.4658133) 289 0 R -(cite.5613456) 290 0 R (cite.DBLP:journals/corr/abs-2005-14165) -244 0 R (cite.DBLP:journals/corr/abs-2111-06467) 249 0 R ] >> -endobj -546 0 obj -<< /Limits [ (cite.anil2023palm) (cite.chatzimparmpas2020survey) ] -/Names [ (cite.anil2023palm) 243 0 R (cite.assogba2023large) 286 0 R -(cite.bertucci2022dendromap) 333 0 R (cite.borisov2022language) -250 0 R (cite.brath2023role) 299 0 R -(cite.chatzimparmpas2020survey) 293 0 R ] >> -endobj -547 0 obj -<< /Limits [ (cite.choo2013utopian) (cite.han2001prefixspan) ] -/Names [ (cite.choo2013utopian) 291 0 R (cite.collins2022visual) 300 0 R -(cite.corpus) 287 0 R (cite.doi:10.1057/palgrave.ivs.9500180) -288 0 R (cite.fryer2022flexible) 252 0 R (cite.han2001prefixspan) -357 0 R ] >> -endobj -548 0 obj -<< /Limits [ (cite.he2022synthetic) (cite.spacy2) ] -/Names [ (cite.he2022synthetic) 251 0 R (cite.hohman2018visual) 294 0 R -(cite.kucher2015text) 292 0 R (cite.lara2022evaluation) 285 0 R -(cite.reif2019visualizing) 296 0 R (cite.spacy2) 334 0 R ] >> -endobj -549 0 obj -<< /Limits [ (cite.strobelt2017lstmvis) (cite.vijayakumar2022evaluating) ] -/Names [ (cite.strobelt2017lstmvis) 297 0 R (cite.synthetic_patient_data) -246 0 R (cite.tableLens) 356 0 R (cite.tang2023does) 247 0 R -(cite.tenney2020language) 298 0 R (cite.vijayakumar2022evaluating) -248 0 R ] >> -endobj -550 0 obj -<< /Limits [ (cite.workshop2023bloom) (figure.caption.23) ] -/Names [ (cite.workshop2023bloom) 245 0 R (cite.yuan2021survey) 295 0 R -(figure.1) 90 0 R (figure.caption.17) 322 0 R (figure.caption.19) -323 0 R (figure.caption.23) 326 0 R ] >> -endobj -551 0 obj -<< /Limits [ (figure.caption.25) (page.4) ] -/Names [ (figure.caption.25) 352 0 R (figure.caption.29) 353 0 R (page.1) -86 0 R (page.2) 280 0 R (page.3) 321 0 R (page.4) 351 0 R ] >> -endobj -552 0 obj -<< /Limits [ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:72986b15444f9c9832af564b7180983c50b7986b7b1980a31a98a288264ffd01 +size 873654 diff --git a/papers/vitaclip video and text adaptive clip via multimodal prompting.pdf b/papers/vitaclip video and text adaptive clip via multimodal prompting.pdf index 77eaaf00327f7ae5b33fd50a7078eebe34681097..47af60638e332891692e426fe70509bfda09a589 100644 Binary files a/papers/vitaclip video and text adaptive clip via multimodal prompting.pdf and b/papers/vitaclip video and text adaptive clip via multimodal prompting.pdf differ diff --git a/papers/voice visual oracle for interaction, conversation, and explanation.pdf b/papers/voice visual oracle for interaction, conversation, and explanation.pdf index 84d4a7af8b4552c9933ae21d6bd6bf3481635c8e..9907b7f60825d05e029f5862f3dd10e19dc969e5 100644 --- a/papers/voice visual oracle for interaction, conversation, and explanation.pdf +++ b/papers/voice visual oracle for interaction, conversation, and explanation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f064bd394b9cbcdbf1ba939e741fa75c609760f81ed2637f623d3a948a27fe2b -size 25302933 +oid sha256:f0c5794f22b51f28bbce585fbbfd03b9fce47e9345b82a2e7ad61a16fdac61cc +size 36728620 diff --git a/papers/vpn variation on
prompt tuning for namedentity recognition.pdf b/papers/vpn variation on prompt tuning for namedentity recognition.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5fd71d2d6cb91ea6fbec31885cb6b85995344e01 --- /dev/null +++ b/papers/vpn variation on prompt tuning for namedentity recognition.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:005a951ff252e3dddae51726889d98e45d6e883e85b2b0172808a177f4c196e3 +size 1382719 diff --git a/papers/wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf b/papers/wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf index d7cb5d5202d17458b223c19df0c55e735b011097..8113780567c2ca2bd73017ef5a084d8950631213 100644 Binary files a/papers/wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf and b/papers/wanglab at mediqachat 2023 clinical note generation from doctorpatient conversations using large language models.pdf differ diff --git a/papers/what can large language models do in chemistry a comprehensive benchmark on eight tasks.pdf b/papers/what can large language models do in chemistry a comprehensive benchmark on eight tasks.pdf index c92226ef60611fc610ee3b99052dda577d436bd5..5652f36d47a2294a398bc01b0069e656af01e012 100644 --- a/papers/what can large language models do in chemistry a comprehensive benchmark on eight tasks.pdf +++ b/papers/what can large language models do in chemistry a comprehensive benchmark on eight tasks.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ac0039013a2d29a9eb71a3ebdb72aea8510fe7f0e8e3af7c2d22a6ac6084bb7d -size 3441157 +oid sha256:9623bba95a935d5d3272e26cbadfbcf64348a28aa88ec1cc7f933c69447977f8 +size 3437726 diff --git a/papers/what do llms know about financial markets a case study on reddit market sentiment analysis.pdf b/papers/what do llms know about financial markets a case study on reddit market sentiment analysis.pdf index d135d85c7c1edb3d159203198e91afa2ef43b081..3320ca39ac8ff546460506f0dca2c0f546472047 100644 Binary files a/papers/what do llms know about financial markets a case study on reddit market sentiment analysis.pdf and b/papers/what do llms know about financial markets a case study on reddit market sentiment analysis.pdf differ diff --git a/papers/what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf b/papers/what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf index 0c52bd97da6bc3dbdcce9058c760cd822ef08ae0..39958ea1c0dfd76c7acfe24966cf25c0ef1ee7c4 100644 Binary files a/papers/what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf and b/papers/what does the failure to reason with respectively in zerofewshot settings tell us about language models.pdf differ diff --git "a/papers/what does the failure to reason with \342\200\234respectively\342\200\235 in zerofewshot settings tell us about language models.pdf" "b/papers/what does the failure to reason with \342\200\234respectively\342\200\235 in zerofewshot settings tell us about language models.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..50b8e36e0dc1cc47bbfbe72fa9c1fadcd0c67b45 --- /dev/null +++ "b/papers/what does the failure to reason with \342\200\234respectively\342\200\235 in zerofewshot settings tell us 
about language models.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48b4f314b88f7b86bf9cdc9b4d893398f21f2ce6c119e7b7d05c6b2920cd938e +size 318022 diff --git a/papers/what incontext learning learns incontext disentangling task recognition and task learning.pdf b/papers/what incontext learning learns incontext disentangling task recognition and task learning.pdf index 300bbd2ff4541120740ac5f71b8e567f906cc11c..05573f0e40d02e303e44adddd01002481b5022b1 100644 Binary files a/papers/what incontext learning learns incontext disentangling task recognition and task learning.pdf and b/papers/what incontext learning learns incontext disentangling task recognition and task learning.pdf differ diff --git "a/papers/what incontext learning \342\200\234learns\342\200\235 incontext disentangling task recognition and task learning.pdf" "b/papers/what incontext learning \342\200\234learns\342\200\235 incontext disentangling task recognition and task learning.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..a1a59ead77992d15889a5e4e3f5b4fc4b7a1da8c --- /dev/null +++ "b/papers/what incontext learning \342\200\234learns\342\200\235 incontext disentangling task recognition and task learning.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:441a84b4a4782a07c0a71c183632269b9157dcdb504ffed9766f10d2bc29cc6c +size 522225 diff --git a/papers/what makes chainofthought prompting effective a counterfactual study.pdf b/papers/what makes chainofthought prompting effective a counterfactual study.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d106009d403f93af250707eb0dc07775b7773c2e --- /dev/null +++ b/papers/what makes chainofthought prompting effective a counterfactual study.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dd9220f52be793b3607d39ac533bf7451e8a618e376d5d753afd3e4546357e7 +size 4154768 diff --git a/papers/what makes datatotext generation hard for pretrained language models.pdf b/papers/what makes datatotext generation hard for pretrained language models.pdf index 4000c4d71c6944e6116c89d338aee5b27b54113e..8dbf4af500e2d216c594f274cc2b0c0f490593c7 100644 Binary files a/papers/what makes datatotext generation hard for pretrained language models.pdf and b/papers/what makes datatotext generation hard for pretrained language models.pdf differ diff --git a/papers/what makes good incontext demonstrations for code intelligence tasks with llms.pdf b/papers/what makes good incontext demonstrations for code intelligence tasks with llms.pdf index 146834ca4991d5aa51db8456561de82314dd5d7a..39f97b7b3f8ea18d1474a60faba25d90beb01b9c 100644 Binary files a/papers/what makes good incontext demonstrations for code intelligence tasks with llms.pdf and b/papers/what makes good incontext demonstrations for code intelligence tasks with llms.pdf differ diff --git a/papers/what makes good incontext examples for gpt$3$.pdf b/papers/what makes good incontext examples for gpt$3$.pdf index cb13cc45978f23ce9e82cc0a5beca4b6c571573c..54f13adb7e43df4f49484d4a293c4a04f1950fd7 100644 Binary files a/papers/what makes good incontext examples for gpt$3$.pdf and b/papers/what makes good incontext examples for gpt$3$.pdf differ diff --git a/papers/what makes good incontext examples for gpt3.pdf b/papers/what makes good incontext examples for gpt3.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3777d60b0315c0df9a4404c1a12787b604fb7ea4 --- /dev/null +++ b/papers/what makes good incontext examples 
for gpt3.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:257a0527b152dc970a21d20a1f9e85e2eb9834ec5cabdc1e662b68a24d04351f +size 382540 diff --git a/papers/what makes pretrained language models better zeroshot learners.pdf b/papers/what makes pretrained language models better zeroshot learners.pdf index 4195c421ddf28c90e8105c9b3861d9cec3c94a7d..6ed7c2c9ee356d794bcc6b93093f008434fe6003 100644 Binary files a/papers/what makes pretrained language models better zeroshot learners.pdf and b/papers/what makes pretrained language models better zeroshot learners.pdf differ diff --git a/papers/what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf b/papers/what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf index 6f5b878ab56cfc32f122f8a6dc11ea0a837da143..ea715249c546dfdbef24d5dc4d074a627e35cf2c 100644 Binary files a/papers/what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf and b/papers/what's in a measurement using gpt3 on semeval 2021 task 8 measeval.pdf differ diff --git a/papers/what's the magic word a control theory of llm prompting.pdf b/papers/what's the magic word a control theory of llm prompting.pdf index 31c87549a6d37fd0c3d14e17d9be61cb34081845..68a85ff79469ab6a60c555ab2bb4393847575b71 100644 Binary files a/papers/what's the magic word a control theory of llm prompting.pdf and b/papers/what's the magic word a control theory of llm prompting.pdf differ diff --git a/papers/when do programofthoughts work for reasoning.pdf b/papers/when do programofthoughts work for reasoning.pdf index b34aef88a402e2fd1d7226347f021b9e7b68fa90..b07ed34084fc9bd6ca528cdd39449c70f268b421 100644 --- a/papers/when do programofthoughts work for reasoning.pdf +++ b/papers/when do programofthoughts work for reasoning.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:699f88ea1b5a678bbccfb9943527859c74df41415c84625e6ba95377d052242c -size 1044745 +oid sha256:0ce78d563134f1f194bcff9a2df1527d3544ed1251978056be6b228c5d822860 +size 1046050 diff --git a/papers/will it blend mixing training paradigms & prompting for argument quality prediction.pdf b/papers/will it blend mixing training paradigms & prompting for argument quality prediction.pdf index 6fadaf3d622ee5f1b81e74f044ee41f0a718704c..ea331734d53296369329fc712d008b703b040e52 100644 Binary files a/papers/will it blend mixing training paradigms & prompting for argument quality prediction.pdf and b/papers/will it blend mixing training paradigms & prompting for argument quality prediction.pdf differ diff --git a/papers/winodict probing language models for incontext word acquisition.pdf b/papers/winodict probing language models for incontext word acquisition.pdf index d16665ca9126a62ec9605f7f814b38f00d059cd3..8b7e53e817b3eb437599d5a5fb90340be9150142 100644 Binary files a/papers/winodict probing language models for incontext word acquisition.pdf and b/papers/winodict probing language models for incontext word acquisition.pdf differ diff --git a/papers/xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf b/papers/xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf index bc5b1c4f53fd24362e287696e887a20dc0888589..195a7056166dd2dcec97e26b01217e7f7cf4a3fc 100644 Binary files a/papers/xricl crosslingual retrievalaugmented incontext learning for crosslingual texttosql semantic parsing.pdf and b/papers/xricl crosslingual retrievalaugmented incontext learning for crosslingual 
texttosql semantic parsing.pdf differ diff --git a/papers/you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation.pdf b/papers/you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation.pdf index 1f43624ed688a87d6d577c7723ee34648bcf72b7..60906b5f7d1362fd77c90cb231de39b07d68e6cb 100644 --- a/papers/you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation.pdf +++ b/papers/you are an expert linguistic annotator limits of llms as analyzers of abstract meaning representation.pdf @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d90c000a70afbc31adb0521ccf1a5dcfdb1a22ac983456a621844590c86fbb3c -size 1080417 +oid sha256:20286755bd4b6afdf7ca093714ec6da5f0d21b1d4fffb5a00a6d757d3ed75e63 +size 1088842 diff --git a/papers/zara improving fewshot selfrationalization for small language models.pdf b/papers/zara improving fewshot selfrationalization for small language models.pdf index a3d8fa4b839cc158b15443d83d94b6350f1d8e52..03121ee97cc775b7deea38476832d4b76f34a136 100644 Binary files a/papers/zara improving fewshot selfrationalization for small language models.pdf and b/papers/zara improving fewshot selfrationalization for small language models.pdf differ diff --git a/papers/zero and fewshot nlp with pretrained language models.pdf b/papers/zero and fewshot nlp with pretrained language models.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8cf4395c3b8fc5c170321f708b87abdb05f6c5c2 --- /dev/null +++ b/papers/zero and fewshot nlp with pretrained language models.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c48663160e1f2ea8748009570136ecaf29e1f97b20aebb224e77d7eec1a1644 +size 105519 diff --git a/papers/zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf b/papers/zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf index 5832e15db157de440f23036c4efa157fbc013a06..9a16baf0d266e065d6c57d05ae351b5765e6d79e 100644 Binary files a/papers/zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf and b/papers/zero and fewshot prompting with llms a comparative study with finetuned models for bangla sentiment analysis.pdf differ diff --git a/papers/zeroprompt scaling promptbased pretraining to 1,000 tasks improves zeroshot generalization.pdf b/papers/zeroprompt scaling promptbased pretraining to 1,000 tasks improves zeroshot generalization.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fd0d92f65e11e92aaafcc06134016e9ec1a79680 --- /dev/null +++ b/papers/zeroprompt scaling promptbased pretraining to 1,000 tasks improves zeroshot generalization.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88f5a39dff958bf8b023cd2d7f84dae614e87f7dd6ca04a785655294a7f04b36 +size 787120 diff --git a/papers/zeroshot approach to overcome perturbation sensitivity of prompts.pdf b/papers/zeroshot approach to overcome perturbation sensitivity of prompts.pdf index f61db1d086b2ec57bc4b6e1eee3f460a88fdf87e..82435f25a3386e26944280234c79b3675ffef98d 100644 Binary files a/papers/zeroshot approach to overcome perturbation sensitivity of prompts.pdf and b/papers/zeroshot approach to overcome perturbation sensitivity of prompts.pdf differ diff --git a/papers/zeroshot domain adaptation for neural machine translation with 
retrieved phraselevel prompts.pdf b/papers/zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts.pdf index 58c1142b04cdf210d2863cd0380c453042871f54..cadbc28b7a57305a7346b06747248b640f90f913 100644 Binary files a/papers/zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts.pdf and b/papers/zeroshot domain adaptation for neural machine translation with retrieved phraselevel prompts.pdf differ diff --git a/papers/zeroshot information extraction from radiological reports using chatgpt.pdf b/papers/zeroshot information extraction from radiological reports using chatgpt.pdf index d299c07321192c29740be4c618b6833c67165196..f85c29da29635af6a73b699c6f1bac5f40798e34 100644 Binary files a/papers/zeroshot information extraction from radiological reports using chatgpt.pdf and b/papers/zeroshot information extraction from radiological reports using chatgpt.pdf differ diff --git a/papers/zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf b/papers/zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf index 635b6fe30c67ab98ac7136ee80e14d34e0b2f5f8..62ea18bc51a0feb1cee1ee5f7fb24ed036baf15a 100644 Binary files a/papers/zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf and b/papers/zeroshot learning with minimum instruction to extract social determinants and family history from clinical notes using gpt model.pdf differ diff --git a/papers/zeroshot temporal relation extraction with chatgpt.pdf b/papers/zeroshot temporal relation extraction with chatgpt.pdf index 2450db34e035fd2f4b829a3aeee13c5fa930244f..6680c9e541bd99f9992cc77e6e7cd534680624fd 100644 Binary files a/papers/zeroshot temporal relation extraction with chatgpt.pdf and b/papers/zeroshot temporal relation extraction with chatgpt.pdf differ diff --git a/papers/zerotop zeroshot taskoriented semantic parsing using large language models.pdf b/papers/zerotop zeroshot taskoriented semantic parsing using large language models.pdf index eeff81eabda881373202ba4594f5be7b38a96ba9..438c488de926449602c710975b5969d0c60f66f2 100644 Binary files a/papers/zerotop zeroshot taskoriented semantic parsing using large language models.pdf and b/papers/zerotop zeroshot taskoriented semantic parsing using large language models.pdf differ diff --git a/papers/zicl zeroshot incontext learning with pseudodemonstrations.pdf b/papers/zicl zeroshot incontext learning with pseudodemonstrations.pdf index b275ac58c98bf0ae266cc7277f9ef0e67138e86d..0773271a99358decd351726cc7eb427b53ac7a14 100644 Binary files a/papers/zicl zeroshot incontext learning with pseudodemonstrations.pdf and b/papers/zicl zeroshot incontext learning with pseudodemonstrations.pdf differ diff --git "a/papers/\342\200\234covid vaccine is against covid but oxford vaccine is made at oxford!\342\200\235 semantic interpretation of proper noun compounds.pdf" "b/papers/\342\200\234covid vaccine is against covid but oxford vaccine is made at oxford!\342\200\235 semantic interpretation of proper noun compounds.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..6a018e901a0fbfc0c01e7fde95358e8f5db32fc9 --- /dev/null +++ "b/papers/\342\200\234covid vaccine is against covid but oxford vaccine is made at oxford!\342\200\235 semantic interpretation of proper noun compounds.pdf" @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2f2fc05302e09bee8937c89a35c3438a30b5326bba83c5524147f287d9cd2aa +size 562946